Re: [openstack-dev] [all] The future of the integrated release

2014-08-27 Thread Chris Dent

On Wed, 27 Aug 2014, Angus Salkeld wrote:


I believe developers working on OpenStack work for companies that
really want this to happen. The developers also want their projects to
be well regarded. But the way the problem is framed, a bit like you did
above, is very daunting for any one person to solve. If we can quantify
the problem, break the work into doable items (bugs) and prioritize
them, it will be solved a lot faster.


Yes.

It's very easy when encountering organizational scaling issues to
start catastrophizing and then throwing all the extant problems under
the same umbrella. This thread (and the czar one) has grown to include
a huge number of problems. We could easily change the subject to just
"The Future".

I think two things need to happen:

* Be rational about the fact that at least in some areas we are trying
  to do too much with too little.

  Strategically that means we need:

  * to prioritize and decompose issues (of all sorts) better
  * to get more resources (human and otherwise)

  That first is on us. The second I guess gets bumped up to the people
  with the money; one aspect of being rational is utilizing the fact
  that though OpenStack is open source, it is to a very large extent
  corporate open source. If the corps need to step up, we need to tell
  them.

* Do pretty much exactly what Angus says:

  10 identify bugs (not just in code)
  20 find groups who care about those bugs
  30 fix em
  40 GOTO 10 # FOR THE REST OF TIME

  We all know this, but I get the impression it can be hard to get
  traction. I think a lot of the slipping comes from too much emphasis
  on the different projects. It would be better to think "I work on
  OpenStack" rather than "I work on Ceilometer" (or whatever).

I'm not opposed to process and bureaucracy; they can be a very
important part of the puzzle of getting lots of different groups to
work together. However, an increase in both can be a bad smell,
indicating an effort to hack around problems that are perceived to be
insurmountable (e.g. getting more nodes for CI, having more
documenters, etc.).
--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [all] The future of the integrated release

2014-08-27 Thread Doug Hellmann

On Aug 26, 2014, at 2:01 PM, Joe Gordon joe.gord...@gmail.com wrote:

 
 
 
 On Wed, Aug 20, 2014 at 2:25 AM, Eoghan Glynn egl...@redhat.com wrote:
 
 
Additional cross-project resources can be ponied up by the large
contributor companies, and existing cross-project resources are not
necessarily divertable on command.
  
   Sure additional cross-project resources can and need to be ponied up,
   but I am doubtful that will be enough.
 
  OK, so what exactly do you suspect wouldn't be enough, for what
  exactly?
 
 
  I am not sure what would be enough to get OpenStack back in a position where
  more developers/users are happier with the current state of affairs. Which
  is why I think we may want to try several things.
 
 
 
   Is it the likely number of such new resources, or the level of
   domain-expertise that they can realistically be expected to bring to
   the table, or the period of time to on-board them, or something else?
 
 
  Yes, all of the above.
 
 Hi Joe,
 
 In coming to that conclusion, have you thought about and explicitly
 rejected all of the approaches that have been mooted to mitigate
 those concerns? 
 
 Is there a strong reason why the following non-exhaustive list
 would all be doomed to failure:
 
  * encouraging projects to follow the successful Sahara model,
where one core contributor also made a large contribution to
a cross-project effort (in this case infra, but could be QA
or docs or release management or stable-maint ... etc)
 
[this could be seen as essentially offsetting the cost of
 that additional project drawing from the cross-project well]
 
  * assigning liaisons from each project to *each* of the cross-
project efforts
 
[this could be augmented/accelerated with one of the standard
 on-boarding approaches, such as a designated mentor for the
 liaison or even an immersive period of secondment]
 
  * applying back-pressure via the board representation to make
it more likely that the appropriate number of net-new
cross-project resources are forthcoming
 
[c.f. Stef's we're not amateurs or volunteers mail earlier
 on this thread]
 
 All of these are good ideas and I think we should try them. I am just afraid 
 this won't be enough.
 
  Imagine for a second that the gate is always stable, and none of the 
  existing cross-project efforts are short staffed. OpenStack would still have a 
  pretty poor user experience and return errors in production. Our 'official' 
  CLIs are poor, our logs are cryptic, we have scaling issues (by number of 
  nodes), people are concerned about operational readiness [0], upgrades are 
  very painful, etc. Solving the issue of scaling cross-project efforts is not 
  enough; we still have to solve a whole slew of usability issues. 

These are indeed problems, and AFAICT, we don’t really have a structure in 
place to solve some of them directly. There’s the unified CLI project, which is 
making good progress. The SDK project started this cycle as well. I don’t have 
the impression, though, that either of those has quite the traction we need to 
fully replace the in-project versions, yet. Sean has some notes for making 
logging better, but I don’t think there’s a team working on those changes yet 
either.

The challenge with most of these cross-project initiatives is that they need 
every project to contribute resources, at least in the form of reviews if not 
code, but every project also has its own priorities. Would the situation be 
improved if we had a more formal way for the TC to say to projects, “this cycle 
we need you to dedicate resources to work on X with this cross-project team, 
even if that means deprioritizing something else”, similar to what we’ve done 
recently with the gap analysis?

Doug

 
 [0] http://robhirschfeld.com/2014/08/04/oscon-report/
 
  
 
 I really think we need to do better than dismissing out-of-hand
 the idea of beefing up the cross-project efforts. If it won't
 work for specific reasons, let's get those reasons out onto
 the table and make a data-driven decision on this.
 
  And which cross-project concern do you think is most strained by the
  current set of projects in the integrated release? Is it:
 
  * QA
  * infra
  * release management
  * oslo
  * documentation
  * stable-maint
 
  or something else?
 
 
  Good question.
 
  IMHO QA, Infra and release management are probably the most strained.
 
 OK, well let's brain-storm on how some of those efforts could
 potentially be made more scalable.
 
  Should we for example start to look at release management as a
  program unto itself, with a PTL *and* a group of cores to divide
  and conquer the load?
 
 (the hands-on rel mgmt for the juno-2 milestone, for example, was
  delegated - is there a good reason why such delegation wouldn't
  work as a matter of course?)
 
 Should QA programs such as grenade be actively seeking new cores to
 spread the workload?
 
 (until recently, this had the 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-27 Thread Doug Hellmann

On Aug 27, 2014, at 11:17 AM, Chris Dent chd...@redhat.com wrote:

 On Wed, 27 Aug 2014, Doug Hellmann wrote:
 
 For example, Matt helped me with an issue yesterday, and afterwards
 I asked him to write up a few details about how he reached his
 conclusion because he was moving fast enough that I wasn’t
 actually learning anything from what he was saying to me on IRC.
 Having an example with some logs and then even stream of
 consciousness notes like “I noticed the out of memory error, and
 then I found the first instance of that and looked at the oom-killer
 report in syslog to see which process was killed and it was X which
 might mean Y” would help.
 
 +many
 
 I'd _love_ to be more capable at gate debugging.
 
 That said, it does get easier just by doing it. The first many times
 is like beating my head against the wall, especially the constant
 sense of where am I and where do I need to go.

I definitely know the feeling. I don’t expect to become an expert, but given my 
focus on turning out libraries for Oslo it’s hard to find time to “practice” 
enough to get past the frustrated phase. If I had even some hints to look at, 
that would help me, and I’m sure others.

I have found it immensely helpful, for example, to have a written set of the 
steps involved in creating a new library, from importing the git repo all the 
way through to making it available to other projects. Without those 
instructions, it would have been much harder to split up the work. The team 
would have had to train each other by word of mouth, and we would have had 
constant issues with inconsistent approaches triggering different failures. The 
time we spent building and verifying the instructions has paid off to the 
extent that we even had one developer not on the core team handle a graduation 
for us.

Doug

 
 -- 
 Chris Dent tw:@anticdent freenode:cdent
 https://tank.peermore.com/tanks/cdent


Re: [openstack-dev] [all] The future of the integrated release

2014-08-27 Thread Chris Dent

On Wed, 27 Aug 2014, Doug Hellmann wrote:


I have found it immensely helpful, for example, to have a written set
of the steps involved in creating a new library, from importing the
git repo all the way through to making it available to other projects.
Without those instructions, it would have been much harder to split up
the work. The team would have had to train each other by word of
mouth, and we would have had constant issues with inconsistent
approaches triggering different failures. The time we spent building
and verifying the instructions has paid off to the extent that we even
had one developer not on the core team handle a graduation for us.


+many more for the relatively simple act of just writing stuff down

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] [all] The future of the integrated release

2014-08-27 Thread Doug Hellmann

On Aug 27, 2014, at 1:30 PM, Chris Dent chd...@redhat.com wrote:

 On Wed, 27 Aug 2014, Doug Hellmann wrote:
 
 I have found it immensely helpful, for example, to have a written set
 of the steps involved in creating a new library, from importing the
 git repo all the way through to making it available to other projects.
 Without those instructions, it would have been much harder to split up
 the work. The team would have had to train each other by word of
 mouth, and we would have had constant issues with inconsistent
 approaches triggering different failures. The time we spent building
 and verifying the instructions has paid off to the extent that we even
 had one developer not on the core team handle a graduation for us.
 
 +many more for the relatively simple act of just writing stuff down

"Write it down." is my theme for Kilo.

Doug




Re: [openstack-dev] [all] The future of the integrated release

2014-08-26 Thread Jay Pipes

On 08/25/2014 03:50 PM, Adam Lawson wrote:

I recognize I'm joining the discussion late but I've been following the
dialog fairly closely and want to offer my perspective FWIW. I have a
lot going through my head, not sure how to get it all out there so I'll
do a brain dump, get some feedback and apologize in advance.

One of the things I like most about Openstack is its incredible flexibility
- a modular architecture where certain programs/capabilities can be
leveraged for a specific install - or not - and ideally the rest of the
feature suite remains functional irrespective of a program's status. When
it comes to a program being approved as part of Openstack Proper (pardon
my stepping over that discussion), I think a LOT of what is being
discussed here touches on what Openstack will ultimately be about and
what it won't.

With products like Cloudstack floating around consuming market share,
all I see is Citrix: a product billed as open source but so closely
aligned with one vendor that it almost doesn't matter. They have a mature
decision structure, UI polish and organized support, but they don't have
community. Not like us, anyway. With Openstack we have the moral authority
to call ourselves the champions of open cloud, and with that we have
competing interests that make our products better. We don't have a
single vendor (yet) that dictates whether something will happen or not.
The maturity of the Openstack products themselves is driven by a
community of consumers whose needs are accommodated rather than sold.

A positive that comes with such a transparent design pipeline is the
increased capability for design agility and accommodating changes when a
change is needed. But I'm becoming increasingly disappointed at the
amount of attention being given to whether one product is blessed by
Openstack or not. In a modular design, these programs should be
interchangeable with only a couple of exceptions. Does being blessed really
matter? The consensus I've garnered in this thread is the desperate need
for the consuming community's continued involvement. What I
/haven't/ heard much about is how Openstack can standardize how these
programs - blessed or not - can interact with the rest of the suite, to
the extent that they adhere to the correct inputs/outputs that make them
functional. Program status is irrelevant.

I guess when it comes right down to it, I love what Openstack is and
where we're going, and I especially appreciate these discussions. But I'm
disappointed at the number of concerns I've been reading about things
that ultimately don't matter (like being blessed, or who has the
power), and I worry that we lose sight of what this is all about, to
the point that the vision for Openstack gets clouded.

We have a good thing and no project can accommodate every request so a
decision must be made as to what is 'included' and what is 'supported'.
But with modularity, it really doesn't matter one iota if a program is
blessed in the Openstack integrated release cycle or not.


Couldn't agree with you more, Adam. I believe that if OpenStack is to succeed 
in the future, our community and our governance structure need to 
embrace the tremendous growth in scope that OpenStack's success to date 
has generated. The last thing we should do, IMO, is reverse course and 
act like a single-vendor product in order to tame the wildlings.


Best,
-jay




Re: [openstack-dev] [all] The future of the integrated release

2014-08-26 Thread Anne Gentle
On Mon, Aug 25, 2014 at 8:36 AM, Sean Dague s...@dague.net wrote:

 On 08/20/2014 12:37 PM, Zane Bitter wrote:
  On 11/08/14 05:24, Thierry Carrez wrote:
  So the idea that being (and remaining) in the integrated release should
  also be judged on technical merit is a slightly different effort. It's
  always been a factor in our choices, but like Devananda says, it's more
  difficult than just checking a number of QA/integration checkboxes. In
  some cases, blessing one project in a problem space stifles competition,
  innovation and alternate approaches. In some other cases, we reinvent
  domain-specific solutions rather than standing on the shoulders of
  domain-specific giants in neighboring open source projects.
 
  I totally agree that these are the things we need to be vigilant about.
 
  Stifling competition is a big worry, but it appears to me that a lot of
  the stifling is happening even before incubation. Everyone's time is
  limited, so if you happen to notice a new project on the incubation
  trajectory doing things in what you think is the Wrong Way, you're most
  likely to either leave some drive-by feedback or to just ignore it and
  carry on with your life. What you're most likely *not* to do is to start
  a competing project to prove them wrong, or to jump in full time to the
  existing project and show them the light. It's really hard to argue
  against the domain experts too - when you're acutely aware of how
  shallow your knowledge is in a particular area it's very hard to know
  how hard to push. (Perhaps ironically, since becoming a PTL I feel I
  have to be much more cautious in what I say too, because people are
  inclined to read too much into my opinion - I wonder if TC members feel
  the same pressure.) I speak from first-hand instances of guilt here -
  for example, I gave some feedback to the Mistral folks just before the
  last design summit[1], but I haven't had time to follow it up at all. I
  wouldn't be a bit surprised if they showed up with an incubation
  request, a largely-unchanged user interface and an expectation that I
  would support it.
 
  The result is that projects often don't hear the feedback they need
  until far too late - often when they get to the incubation review (maybe
  not even their first incubation review). In the particularly unfortunate
  case of Marconi, it wasn't until the graduation review. (More about that
  in a second.) My best advice to new projects here is that you must be
  like a ferret up the pant-leg of any negative feedback. Grab hold of any
  criticism and don't let go until you have either converted the person
  giving it into your biggest supporter, been converted by them, or
  provoked them to start a competing project. (Any of those is a win as
  far as the community is concerned.)
 
  Perhaps we could consider a space like a separate mailing list
  (openstack-future?) reserved just for announcements of Related projects,
  their architectural principles, and discussions of the same?  They
  certainly tend to get drowned out amidst the noise of openstack-dev.
  (Project management, meeting announcements, and internal project
  discussion would all be out of scope for this list.)
 
  As for reinventing domain-specific solutions, I'm not sure that happens
  as often as is being made out. IMO the defining feature of IaaS that
  makes the cloud the cloud is on-demand (i.e. real-time) self-service.
  Everything else more or less falls out of that requirement, but the very
  first thing to fall out is multi-tenancy and there just aren't that many
  multi-tenant services floating around out there. There are a couple of
  obvious strategies to deal with that: one is to run existing software
  within a tenant-local resource provisioned by OpenStack (Trove and
  Sahara are examples of this), and the other is to wrap a multi-tenancy
  framework around an existing piece of software (Nova and Cinder are
  examples of this). (BTW the former is usually inherently less
  satisfying, because it scales at a much coarser granularity.) The answer
  to a question of the form:
 
  Why do we need OpenStack project $X, when open source project $Y
  already exists?
 
  is almost always:
 
  Because $Y is not multi-tenant aware; we need to wrap it with a
  multi-tenancy layer with OpenStack-native authentication, metering and
  quota management. That even allows us to set up an abstraction layer so
  that you can substitute $Z as the back end too.
 
  This is completely uncontroversial when you substitute X, Y, Z = Nova,
  libvirt, Xen. However, when you instead substitute X, Y, Z =
  Zaqar/Marconi, Qpid, MongoDB it suddenly becomes *highly* controversial.
  I'm all in favour of a healthy scepticism, but I think we've passed that
  point now. (How would *you* make an AMQP bus multi-tenant?)
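  One hedged sketch of what that wrapping pattern involves, with every name
  below hypothetical rather than taken from Zaqar or any real project: the
  layer puts quota management and per-tenant namespacing in front of a
  single-tenant backend ($Y).

```python
# Illustrative sketch only: wrap a single-tenant backend ($Y) in a
# multi-tenancy layer ($X) that adds quota management and per-tenant
# namespacing. All class and method names here are hypothetical.

class QuotaExceeded(Exception):
    pass

class InMemoryBackend:
    """Stand-in for a single-tenant service such as a message broker."""
    def __init__(self):
        self.queues = {}

    def create(self, name):
        self.queues[name] = []

    def post(self, name, message):
        self.queues[name].append(message)

class TenantScopedQueues:
    """The multi-tenancy wrapper: $X in the X/Y/Z substitution above."""
    def __init__(self, backend, max_queues_per_tenant=10):
        self._backend = backend
        self._max = max_queues_per_tenant
        self._owned = {}  # tenant_id -> set of queue names

    def create_queue(self, tenant_id, name):
        owned = self._owned.setdefault(tenant_id, set())
        if len(owned) >= self._max:  # quota management
            raise QuotaExceeded(tenant_id)
        # Namespacing keeps tenants from colliding with, or seeing,
        # each other's resources.
        self._backend.create("%s/%s" % (tenant_id, name))
        owned.add(name)

    def post(self, tenant_id, name, message):
        if name not in self._owned.get(tenant_id, set()):
            raise KeyError("no such queue for this tenant")  # isolation
        self._backend.post("%s/%s" % (tenant_id, name), message)
```

  Swapping InMemoryBackend for another implementation is the "substitute $Z
  as the back end" abstraction; OpenStack-native authentication and metering
  would hook in at the same layer.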
 
  To be clear, Marconi did make a mistake. The Marconi API presented
  semantics to the user that excluded many otherwise-obvious choices of
  back-end plugin 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-26 Thread Joe Gordon
On Wed, Aug 20, 2014 at 2:25 AM, Eoghan Glynn egl...@redhat.com wrote:



Additional cross-project resources can be ponied up by the large
contributor companies, and existing cross-project resources are not
necessarily divertable on command.
  
   Sure additional cross-project resources can and need to be ponied up,
 but I
   am doubtful that will be enough.
 
  OK, so what exactly do you suspect wouldn't be enough, for what
  exactly?
 
 
  I am not sure what would be enough to get OpenStack back in a position
 where
  more developers/users are happier with the current state of affairs.
 Which
  is why I think we may want to try several things.
 
 
 
  Is it the likely number of such new resources, or the level of domain-
  expertise that they can be realistically be expected bring to the
  table, or the period of time to on-board them, or something else?
 
 
  Yes, all of the above.

 Hi Joe,

 In coming to that conclusion, have you thought about and explicitly
 rejected all of the approaches that have been mooted to mitigate
 those concerns?


 Is there a strong reason why the following non-exhaustive list
 would all be doomed to failure:

  * encouraging projects to follow the successful Sahara model,
where one core contributor also made a large contribution to
a cross-project effort (in this case infra, but could be QA
or docs or release management or stable-maint ... etc)

[this could be seen as essentially offsetting the cost of
 that additional project drawing from the cross-project well]

  * assigning liaisons from each project to *each* of the cross-
project efforts

[this could be augmented/accelerated with one of the standard
 on-boarding approaches, such as a designated mentor for the
 liaison or even an immersive period of secondment]

  * applying back-pressure via the board representation to make
it more likely that the appropriate number of net-new
cross-project resources are forthcoming

    [c.f. Stef's "we're not amateurs or volunteers" mail earlier
 on this thread]


All of these are good ideas and I think we should try them. I am just
afraid this won't be enough.

Imagine for a second that the gate is always stable, and none of the
existing cross project efforts are short staffed. OpenStack would still have
a pretty poor user experience and return errors in production. Our
'official' CLIs are poor, our logs are cryptic, we have scaling issues (by
number of nodes), people are concerned about operational readiness [0],
upgrades are very painful, etc. Solving the issue of scaling cross project
efforts is not enough, we still have to solve a whole slew of usability
issues.

[0] http://robhirschfeld.com/2014/08/04/oscon-report/




 I really think we need to do better than dismissing out-of-hand
 the idea of beefing up the cross-project efforts. If it won't
 work for specific reasons, let's get those reasons out onto
 the table and make a data-driven decision on this.

  And which cross-project concern do you think is most strained by the
  current set of projects in the integrated release? Is it:
 
  * QA
  * infra
  * release management
  * oslo
  * documentation
  * stable-maint
 
  or something else?
 
 
  Good question.
 
  IMHO QA, Infra and release management are probably the most strained.

 OK, well let's brain-storm on how some of those efforts could
 potentially be made more scalable.

 Should we for example start to look at release management as a
 program unto itself, with a PTL *and* a group of cores to divide
 and conquer the load?

 (the hands-on rel mgmt for the juno-2 milestone, for example, was
  delegated - is there a good reason why such delegation wouldn't
  work as a matter of course?)

 Should QA programs such as grenade be actively seeking new cores to
 spread the workload?

 (until recently, this had the effective minimum of 2 cores, despite
  now being a requirement for integrated projects)

 Could the infra group potentially delegate some of the workload onto
 the distro folks?

 (given that it's strongly in their interest to have their distro
  represented in the CI gate.)

 None of the above ideas may make sense, but it doesn't feel like
 every avenue has been explored here. I for one don't feel entirely
 satisfied that every potential solution to cross-project strain was
 fully thought-out in advance of the de-integration being presented
 as the solution.

 Just my $0.02 ...

 Cheers,
 Eoghan

 [on vacation with limited connectivity]

  But I also think there is something missing from this list. Many of the
 projects
  are hitting similar issues and end up solving them in different ways,
 which
  just leads to more confusion for the end user. Today we have a decent
 model
  for rolling out cross-project libraries (Oslo) but we don't have a good
 way
  of having broader cross project discussions such as: API standards (such
 as
  discoverability of features), logging standards, aligning on concepts
 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-26 Thread Angus Salkeld
On Wed, Aug 27, 2014 at 4:01 AM, Joe Gordon joe.gord...@gmail.com wrote:




 On Wed, Aug 20, 2014 at 2:25 AM, Eoghan Glynn egl...@redhat.com wrote:



Additional cross-project resources can be ponied up by the large
contributor companies, and existing cross-project resources are not
necessarily divertable on command.
  
   Sure additional cross-project resources can and need to be ponied up,
 but I
   am doubtful that will be enough.
 
  OK, so what exactly do you suspect wouldn't be enough, for what
  exactly?
 
 
  I am not sure what would be enough to get OpenStack back in a position
 where
  more developers/users are happier with the current state of affairs.
 Which
  is why I think we may want to try several things.
 
 
 
  Is it the likely number of such new resources, or the level of domain-
   expertise that they can realistically be expected to bring to the
  table, or the period of time to on-board them, or something else?
 
 
  Yes, all of the above.

 Hi Joe,

 In coming to that conclusion, have you thought about and explicitly
 rejected all of the approaches that have been mooted to mitigate
 those concerns?


 Is there a strong reason why the following non-exhaustive list
 would all be doomed to failure:

  * encouraging projects to follow the successful Sahara model,
where one core contributor also made a large contribution to
a cross-project effort (in this case infra, but could be QA
or docs or release management or stable-maint ... etc)

[this could be seen as essentially offsetting the cost of
 that additional project drawing from the cross-project well]

  * assigning liaisons from each project to *each* of the cross-
project efforts

[this could be augmented/accelerated with one of the standard
 on-boarding approaches, such as a designated mentor for the
 liaison or even an immersive period of secondment]

  * applying back-pressure via the board representation to make
it more likely that the appropriate number of net-new
cross-project resources are forthcoming

    [c.f. Stef's "we're not amateurs or volunteers" mail earlier
 on this thread]


 All of these are good ideas and I think we should try them. I am just
 afraid this won't be enough.

 Imagine for a second that the gate is always stable, and none of the
 existing cross project efforts are short staffed. OpenStack would still have
 a pretty poor user experience and return errors in production. Our
 'official' CLIs are poor, our logs are cryptic, we have scaling issues (by
 number of nodes), people are concerned about operational readiness [0],
 upgrades are very painful, etc. Solving the issue of scaling cross project
 efforts is not enough, we still have to solve a whole slew of usability
 issues.


I believe the developers working on OpenStack work for companies that
really want this to happen, and the developers also want their projects to
be well regarded. But the way the problem is currently framed (a bit like
you did above) is very daunting for any one person to solve. If we can
quantify the problem, break the work into doable items of work (bugs) and
prioritize them, it will be solved a lot faster.


-Angus


 [0] http://robhirschfeld.com/2014/08/04/oscon-report/




 I really think we need to do better than dismissing out-of-hand
 the idea of beefing up the cross-project efforts. If it won't
 work for specific reasons, let's get those reasons out onto
 the table and make a data-driven decision on this.

  And which cross-project concern do you think is most strained by the
  current set of projects in the integrated release? Is it:
 
  * QA
  * infra
  * release management
  * oslo
  * documentation
  * stable-maint
 
  or something else?
 
 
  Good question.
 
  IMHO QA, Infra and release management are probably the most strained.

 OK, well let's brain-storm on how some of those efforts could
 potentially be made more scalable.

 Should we for example start to look at release management as a
 program unto itself, with a PTL *and* a group of cores to divide
 and conquer the load?

 (the hands-on rel mgmt for the juno-2 milestone, for example, was
  delegated - is there a good reason why such delegation wouldn't
  work as a matter of course?)

 Should QA programs such as grenade be actively seeking new cores to
 spread the workload?

 (until recently, this had the effective minimum of 2 cores, despite
  now being a requirement for integrated projects)

 Could the infra group potentially delegate some of the workload onto
 the distro folks?

 (given that it's strongly in their interest to have their distro
  represented in the CI gate.)

 None of the above ideas may make sense, but it doesn't feel like
 every avenue has been explored here. I for one don't feel entirely
 satisfied that every potential solution to cross-project strain was
 fully thought-out in advance of the de-integration being presented
 as the solution.

 Just my $0.02 ...

 Cheers,
 Eoghan

 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-26 Thread Rochelle.RochelleGrober


On August 26, 2014, Anne Gentle wrote:
On Mon, Aug 25, 2014 at 8:36 AM, Sean Dague 
s...@dague.netmailto:s...@dague.net wrote:
On 08/20/2014 12:37 PM, Zane Bitter wrote:
 On 11/08/14 05:24, Thierry Carrez wrote:
 So the idea that being (and remaining) in the integrated release should
 also be judged on technical merit is a slightly different effort. It's
 always been a factor in our choices, but like Devananda says, it's more
 difficult than just checking a number of QA/integration checkboxes. In
 some cases, blessing one project in a problem space stifles competition,
 innovation and alternate approaches. In some other cases, we reinvent
 domain-specific solutions rather than standing on the shoulders of
 domain-specific giants in neighboring open source projects.

 I totally agree that these are the things we need to be vigilant about.

 Stifling competition is a big worry, but it appears to me that a lot of
 the stifling is happening even before incubation. Everyone's time is
 limited, so if you happen to notice a new project on the incubation
 trajectory doing things in what you think is the Wrong Way, you're most
 likely to either leave some drive-by feedback or to just ignore it and
 carry on with your life. What you're most likely *not* to do is to start
 a competing project to prove them wrong, or to jump in full time to the
 existing project and show them the light. It's really hard to argue
 against the domain experts too - when you're acutely aware of how
 shallow your knowledge is in a particular area it's very hard to know
 how hard to push. (Perhaps ironically, since becoming a PTL I feel I
 have to be much more cautious in what I say too, because people are
 inclined to read too much into my opinion - I wonder if TC members feel
 the same pressure.) I speak from first-hand instances of guilt here -
 for example, I gave some feedback to the Mistral folks just before the
 last design summit[1], but I haven't had time to follow it up at all. I
 wouldn't be a bit surprised if they showed up with an incubation
 request, a largely-unchanged user interface and an expectation that I
 would support it.

 The result is that projects often don't hear the feedback they need
 until far too late - often when they get to the incubation review (maybe
 not even their first incubation review). In the particularly unfortunate
 case of Marconi, it wasn't until the graduation review. (More about that
 in a second.) My best advice to new projects here is that you must be
 like a ferret up the pant-leg of any negative feedback. Grab hold of any
 criticism and don't let go until you have either converted the person
 giving it into your biggest supporter, been converted by them, or
 provoked them to start a competing project. (Any of those is a win as
 far as the community is concerned.)

 Perhaps we could consider a space like a separate mailing list
 (openstack-future?) reserved just for announcements of Related projects,
 their architectural principles, and discussions of the same?  They
 certainly tend to get drowned out amidst the noise of openstack-dev.
 (Project management, meeting announcements, and internal project
 discussion would all be out of scope for this list.)

 As for reinventing domain-specific solutions, I'm not sure that happens
 as often as is being made out. IMO the defining feature of IaaS that
 makes the cloud the cloud is on-demand (i.e. real-time) self-service.
 Everything else more or less falls out of that requirement, but the very
 first thing to fall out is multi-tenancy and there just aren't that many
 multi-tenant services floating around out there. There are a couple of
 obvious strategies to deal with that: one is to run existing software
 within a tenant-local resource provisioned by OpenStack (Trove and
 Sahara are examples of this), and the other is to wrap a multi-tenancy
 framework around an existing piece of software (Nova and Cinder are
 examples of this). (BTW the former is usually inherently less
 satisfying, because it scales at a much coarser granularity.) The answer
 to a question of the form:

 Why do we need OpenStack project $X, when open source project $Y
 already exists?

 is almost always:

 Because $Y is not multi-tenant aware; we need to wrap it with a
 multi-tenancy layer with OpenStack-native authentication, metering and
 quota management. That even allows us to set up an abstraction layer so
 that you can substitute $Z as the back end too.

 This is completely uncontroversial when you substitute X, Y, Z = Nova,
 libvirt, Xen. However, when you instead substitute X, Y, Z =
 Zaqar/Marconi, Qpid, MongoDB it suddenly becomes *highly* controversial.
 I'm all in favour of a healthy scepticism, but I think we've passed that
 point now. (How would *you* make an AMQP bus multi-tenant?)
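 The wrapping pattern described above — an OpenStack-native multi-tenancy
 layer over a single-tenant back end, with the back end kept substitutable —
 can be sketched roughly as follows. This is an illustrative sketch only;
 the class and method names and the quota model are invented for the
 example and are not Marconi's actual API:

```python
class QuotaExceeded(Exception):
    """Raised when a tenant tries to exceed its queue quota."""
    pass


class TenantQueueService:
    """Hypothetical multi-tenant facade over a single-tenant queue backend.

    The backend only needs two primitives (push/pop on a named queue);
    tenancy, quota enforcement and backend substitution ($Z for $Y)
    live entirely in this wrapping layer.
    """

    def __init__(self, backend, max_queues_per_tenant=10):
        self.backend = backend      # any object with push(name, msg)/pop(name)
        self.max_queues = max_queues_per_tenant
        self.queues = {}            # tenant_id -> set of that tenant's queues

    def _scoped(self, tenant_id, queue):
        # Namespace queues per tenant so one tenant can never address
        # another tenant's data, regardless of the backend in use.
        return "%s/%s" % (tenant_id, queue)

    def push(self, tenant_id, queue, message):
        names = self.queues.setdefault(tenant_id, set())
        if queue not in names and len(names) >= self.max_queues:
            raise QuotaExceeded(tenant_id)
        names.add(queue)
        self.backend.push(self._scoped(tenant_id, queue), message)

    def pop(self, tenant_id, queue):
        return self.backend.pop(self._scoped(tenant_id, queue))


class DictBackend:
    """Trivial in-memory backend; any $Z matching push/pop can replace it."""

    def __init__(self):
        self.data = {}

    def push(self, name, msg):
        self.data.setdefault(name, []).append(msg)

    def pop(self, name):
        items = self.data.get(name)
        return items.pop(0) if items else None
```

 The point of the sketch is only that authentication, metering and quotas
 belong to the wrapper, which is exactly the layer that constrains which
 back ends remain pluggable.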

  To be clear, Marconi did make a mistake. The Marconi API presented
 semantics to the user that excluded many otherwise-obvious choices of
 back-end plugin (i.e. Qpid/RabbitMQ). It 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-25 Thread Sean Dague
On 08/20/2014 12:37 PM, Zane Bitter wrote:
 On 11/08/14 05:24, Thierry Carrez wrote:
 So the idea that being (and remaining) in the integrated release should
 also be judged on technical merit is a slightly different effort. It's
 always been a factor in our choices, but like Devananda says, it's more
 difficult than just checking a number of QA/integration checkboxes. In
 some cases, blessing one project in a problem space stifles competition,
 innovation and alternate approaches. In some other cases, we reinvent
 domain-specific solutions rather than standing on the shoulders of
 domain-specific giants in neighboring open source projects.
 
 I totally agree that these are the things we need to be vigilant about.
 
 Stifling competition is a big worry, but it appears to me that a lot of
 the stifling is happening even before incubation. Everyone's time is
 limited, so if you happen to notice a new project on the incubation
 trajectory doing things in what you think is the Wrong Way, you're most
 likely to either leave some drive-by feedback or to just ignore it and
 carry on with your life. What you're most likely *not* to do is to start
 a competing project to prove them wrong, or to jump in full time to the
 existing project and show them the light. It's really hard to argue
 against the domain experts too - when you're acutely aware of how
 shallow your knowledge is in a particular area it's very hard to know
 how hard to push. (Perhaps ironically, since becoming a PTL I feel I
 have to be much more cautious in what I say too, because people are
 inclined to read too much into my opinion - I wonder if TC members feel
 the same pressure.) I speak from first-hand instances of guilt here -
 for example, I gave some feedback to the Mistral folks just before the
 last design summit[1], but I haven't had time to follow it up at all. I
 wouldn't be a bit surprised if they showed up with an incubation
 request, a largely-unchanged user interface and an expectation that I
 would support it.
 
 The result is that projects often don't hear the feedback they need
 until far too late - often when they get to the incubation review (maybe
 not even their first incubation review). In the particularly unfortunate
 case of Marconi, it wasn't until the graduation review. (More about that
 in a second.) My best advice to new projects here is that you must be
 like a ferret up the pant-leg of any negative feedback. Grab hold of any
 criticism and don't let go until you have either converted the person
 giving it into your biggest supporter, been converted by them, or
 provoked them to start a competing project. (Any of those is a win as
 far as the community is concerned.)
 
 Perhaps we could consider a space like a separate mailing list
 (openstack-future?) reserved just for announcements of Related projects,
 their architectural principles, and discussions of the same?  They
 certainly tend to get drowned out amidst the noise of openstack-dev.
 (Project management, meeting announcements, and internal project
 discussion would all be out of scope for this list.)
 
 As for reinventing domain-specific solutions, I'm not sure that happens
 as often as is being made out. IMO the defining feature of IaaS that
 makes the cloud the cloud is on-demand (i.e. real-time) self-service.
 Everything else more or less falls out of that requirement, but the very
 first thing to fall out is multi-tenancy and there just aren't that many
 multi-tenant services floating around out there. There are a couple of
 obvious strategies to deal with that: one is to run existing software
 within a tenant-local resource provisioned by OpenStack (Trove and
 Sahara are examples of this), and the other is to wrap a multi-tenancy
 framework around an existing piece of software (Nova and Cinder are
 examples of this). (BTW the former is usually inherently less
 satisfying, because it scales at a much coarser granularity.) The answer
 to a question of the form:
 
 Why do we need OpenStack project $X, when open source project $Y
 already exists?
 
 is almost always:
 
 Because $Y is not multi-tenant aware; we need to wrap it with a
 multi-tenancy layer with OpenStack-native authentication, metering and
 quota management. That even allows us to set up an abstraction layer so
 that you can substitute $Z as the back end too.
 
 This is completely uncontroversial when you substitute X, Y, Z = Nova,
 libvirt, Xen. However, when you instead substitute X, Y, Z =
 Zaqar/Marconi, Qpid, MongoDB it suddenly becomes *highly* controversial.
 I'm all in favour of a healthy scepticism, but I think we've passed that
 point now. (How would *you* make an AMQP bus multi-tenant?)
 
  To be clear, Marconi did make a mistake. The Marconi API presented
 semantics to the user that excluded many otherwise-obvious choices of
 back-end plugin (i.e. Qpid/RabbitMQ). It seems to be a common thing (see
 also: Mistral) to want to design for every feature an existing
 Enterprisey 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-25 Thread Adam Lawson
I recognize I'm joining the discussion late, but I've been following the
dialog fairly closely and want to offer my perspective FWIW. I have a lot
going through my head and I'm not sure how to get it all out there, so I'll
do a brain dump, get some feedback, and apologize in advance.

One of the things I like most about Openstack is its incredible flexibility -
a modular architecture where certain programs/capabilities can be leveraged
for a specific install - or not, and ideally the rest of the feature suite
remains functional irrespective of a program's status. When it comes to a
program being approved as part of Openstack Proper (pardon my stepping over
that discussion), I think a LOT of what is being discussed here touches on
what Openstack will ultimately be about and what it won't.

With products like Cloudstack floating around consuming market share, all I
see is Citrix. A product billed as open source but so closely aligned with
one vendor that it almost doesn't matter. They have a mature decision
structure, UI polish and organized support, but they don't have community.
Not like us, anyway. With Openstack we have the moral authority to call
ourselves the champions of open cloud, and with that we have competing
interests that make our products better. We don't have a single vendor
(yet) that dictates whether something will happen or not. The maturity of
the Openstack products themselves is driven by a community of consumers
where needs are accommodated rather than sold.

A positive that comes with such a transparent design pipeline is the
increased capability for design agility and accommodating changes when a
change is needed. But I'm becoming increasingly disappointed at the amount
of attention being given to whether one product is blessed by Openstack or
not. In a modular design, these programs should be interchangeable with
only a couple exceptions. Does being blessed really matter? The consensus
I've garnered in this thread is the desperate need for the consuming
community's continued involvement. What I *haven't* heard much about is how
Openstack can standardize how these programs - blessed or not - can
interact with the rest of the suite to the extent they adhere to the
correct inputs/outputs which makes them functional. Program status is
irrelevant.

I guess when it comes right down to it, I love what Openstack is and where
we're going and I especially appreciate these discussions. But I'm
disappointed at the number of concerns I've been reading about things that
ultimately don't matter (like being blessed, who has the power, etc.), and
I worry we'll lose sight of what this is all about, to the point that the
vision for Openstack gets clouded.

We have a good thing and no project can accommodate every request so a
decision must be made as to what is 'included' and what is 'supported'. But
with modularity, it really doesn't matter one iota if a program is blessed
in the Openstack integrated release cycle or not.

But my goodness we have some brilliant minds on our team don't we!

Mahalo,
Adam


*Adam Lawson*
AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072



On Mon, Aug 25, 2014 at 6:36 AM, Sean Dague s...@dague.net wrote:

 On 08/20/2014 12:37 PM, Zane Bitter wrote:
  On 11/08/14 05:24, Thierry Carrez wrote:
  So the idea that being (and remaining) in the integrated release should
  also be judged on technical merit is a slightly different effort. It's
  always been a factor in our choices, but like Devananda says, it's more
  difficult than just checking a number of QA/integration checkboxes. In
  some cases, blessing one project in a problem space stifles competition,
  innovation and alternate approaches. In some other cases, we reinvent
  domain-specific solutions rather than standing on the shoulders of
  domain-specific giants in neighboring open source projects.
 
  I totally agree that these are the things we need to be vigilant about.
 
  Stifling competition is a big worry, but it appears to me that a lot of
  the stifling is happening even before incubation. Everyone's time is
  limited, so if you happen to notice a new project on the incubation
  trajectory doing things in what you think is the Wrong Way, you're most
  likely to either leave some drive-by feedback or to just ignore it and
  carry on with your life. What you're most likely *not* to do is to start
  a competing project to prove them wrong, or to jump in full time to the
  existing project and show them the light. It's really hard to argue
  against the domain experts too - when you're acutely aware of how
  shallow your knowledge is in a particular area it's very hard to know
  how hard to push. (Perhaps ironically, since becoming a PTL I feel I
  have to be much more cautious in what I say too, because people are
  inclined to read too much into my opinion - I wonder if TC members feel
  the same 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-22 Thread Michael Chapman
On Fri, Aug 22, 2014 at 2:57 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/19/2014 11:28 PM, Robert Collins wrote:

 On 20 August 2014 02:37, Jay Pipes jaypi...@gmail.com wrote:
 ...

  I'd like to see more unification of implementations in TripleO - but I
 still believe our basic principle of using OpenStack technologies that
 already exist in preference to third party ones is still sound, and
 offers substantial dogfood and virtuous circle benefits.



 No doubt Triple-O serves a valuable dogfood and virtuous cycle purpose.
 However, I would move that the Deployment Program should welcome the many
 projects currently in the stackforge/ code namespace that do deployment
 of
 OpenStack using traditional configuration management tools like Chef,
 Puppet, and Ansible. It cannot be argued that these configuration
 management
 systems are the de-facto way that OpenStack is deployed outside of HP,
 and
 they belong in the Deployment Program, IMO.


 I think you mean it 'can be argued'... ;).


 No, I definitely mean cannot be argued :) HP is the only company I know
 of that is deploying OpenStack using Triple-O. The vast majority of
 deployers I know of are deploying OpenStack using configuration management
 platforms and various systems or glue code for baremetal provisioning.

 Note that I am not saying that Triple-O is bad in any way! I'm only saying
 that it does not represent the way that the majority of real-world
 deployments are done.


  And I'd be happy if folk in

 those communities want to join in the deployment program and have code
 repositories in openstack/. To date, none have asked.


 My point in this thread has been and continues to be that by having the TC
  bless a certain project as "The OpenStack Way of X", we implicitly are
  saying to other valid alternatives: "Sorry, no need to apply here."


  As a TC member, I would welcome someone from the Chef community proposing
 the Chef cookbooks for inclusion in the Deployment program, to live under
 the openstack/ code namespace. Same for the Puppet modules.


 While you may personally welcome the Chef community to propose joining the
 deployment Program and living under the openstack/ code namespace, I'm just
 saying that the impression our governance model and policies create is one
 of exclusion, not inclusion. Hope that clarifies better what I've been
 getting at.



(As one of the core reviewers for the Puppet modules)

Without a standardised package build process it's quite difficult to test
trunk Puppet modules vs trunk official projects. This means we cut release
branches some time after the projects themselves to give people a chance to
test. Until this changes and the modules can be released with the same
cadence as the integrated release I believe they should remain on
Stackforge.

In addition and perhaps as a consequence, there isn't any public
integration testing at this time for the modules, although I know some
parties have developed and maintain their own.

The Chef modules may be in a different state, but it's hard for me to
recommend the Puppet modules become part of an official program at this
stage.



 All the best,
 -jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [all] The future of the integrated release

2014-08-22 Thread Clint Byrum
Excerpts from Michael Chapman's message of 2014-08-21 23:30:44 -0700:
 On Fri, Aug 22, 2014 at 2:57 AM, Jay Pipes jaypi...@gmail.com wrote:
 
  On 08/19/2014 11:28 PM, Robert Collins wrote:
 
  On 20 August 2014 02:37, Jay Pipes jaypi...@gmail.com wrote:
  ...
 
   I'd like to see more unification of implementations in TripleO - but I
  still believe our basic principle of using OpenStack technologies that
  already exist in preference to third party ones is still sound, and
  offers substantial dogfood and virtuous circle benefits.
 
 
 
  No doubt Triple-O serves a valuable dogfood and virtuous cycle purpose.
  However, I would move that the Deployment Program should welcome the many
  projects currently in the stackforge/ code namespace that do deployment
  of
  OpenStack using traditional configuration management tools like Chef,
  Puppet, and Ansible. It cannot be argued that these configuration
  management
  systems are the de-facto way that OpenStack is deployed outside of HP,
  and
  they belong in the Deployment Program, IMO.
 
 
  I think you mean it 'can be argued'... ;).
 
 
  No, I definitely mean cannot be argued :) HP is the only company I know
  of that is deploying OpenStack using Triple-O. The vast majority of
  deployers I know of are deploying OpenStack using configuration management
  platforms and various systems or glue code for baremetal provisioning.
 
  Note that I am not saying that Triple-O is bad in any way! I'm only saying
  that it does not represent the way that the majority of real-world
  deployments are done.
 
 
   And I'd be happy if folk in
 
  those communities want to join in the deployment program and have code
  repositories in openstack/. To date, none have asked.
 
 
  My point in this thread has been and continues to be that by having the TC
   bless a certain project as "The OpenStack Way of X", we implicitly are
   saying to other valid alternatives: "Sorry, no need to apply here."
 
 
   As a TC member, I would welcome someone from the Chef community proposing
  the Chef cookbooks for inclusion in the Deployment program, to live under
  the openstack/ code namespace. Same for the Puppet modules.
 
 
  While you may personally welcome the Chef community to propose joining the
  deployment Program and living under the openstack/ code namespace, I'm just
  saying that the impression our governance model and policies create is one
  of exclusion, not inclusion. Hope that clarifies better what I've been
  getting at.
 
 
 
 (As one of the core reviewers for the Puppet modules)
 
 Without a standardised package build process it's quite difficult to test
 trunk Puppet modules vs trunk official projects. This means we cut release
 branches some time after the projects themselves to give people a chance to
 test. Until this changes and the modules can be released with the same
 cadence as the integrated release I believe they should remain on
 Stackforge.
 

Seems like the distros that build the packages are all doing lots of
daily-build type stuff that could somehow be leveraged to get over that.



Re: [openstack-dev] [all] The future of the integrated release

2014-08-22 Thread Sean Dague
On 08/22/2014 01:30 AM, Michael Chapman wrote:
 
 
 
 On Fri, Aug 22, 2014 at 2:57 AM, Jay Pipes jaypi...@gmail.com
 mailto:jaypi...@gmail.com wrote:
 
 On 08/19/2014 11:28 PM, Robert Collins wrote:
 
 On 20 August 2014 02:37, Jay Pipes jaypi...@gmail.com
 mailto:jaypi...@gmail.com wrote:
 ...
 
 I'd like to see more unification of implementations in
 TripleO - but I
 still believe our basic principle of using OpenStack
 technologies that
 already exist in preference to third party ones is still
 sound, and
 offers substantial dogfood and virtuous circle benefits.
 
 
 
 No doubt Triple-O serves a valuable dogfood and virtuous
 cycle purpose.
 However, I would move that the Deployment Program should
 welcome the many
 projects currently in the stackforge/ code namespace that do
 deployment of
 OpenStack using traditional configuration management tools
 like Chef,
 Puppet, and Ansible. It cannot be argued that these
 configuration management
 systems are the de-facto way that OpenStack is deployed
 outside of HP, and
 they belong in the Deployment Program, IMO.
 
 
 I think you mean it 'can be argued'... ;).
 
 
 No, I definitely mean cannot be argued :) HP is the only company I
 know of that is deploying OpenStack using Triple-O. The vast
 majority of deployers I know of are deploying OpenStack using
 configuration management platforms and various systems or glue code
 for baremetal provisioning.
 
 Note that I am not saying that Triple-O is bad in any way! I'm only
 saying that it does not represent the way that the majority of
 real-world deployments are done.
 
 
  And I'd be happy if folk in
 
 those communities want to join in the deployment program and
 have code
 repositories in openstack/. To date, none have asked.
 
 
 My point in this thread has been and continues to be that by having
 the TC bless a certain project as "The OpenStack Way of X", we are
 implicitly saying to other valid alternatives "Sorry, no need to
 apply here."
 
 
 As a TC member, I would welcome someone from the Chef
 community proposing
 the Chef cookbooks for inclusion in the Deployment program,
 to live under
 the openstack/ code namespace. Same for the Puppet modules.
 
 
 While you may personally welcome the Chef community to propose
 joining the deployment Program and living under the openstack/ code
 namespace, I'm just saying that the impression our governance model
 and policies create is one of exclusion, not inclusion. Hope that
 clarifies better what I've been getting at.
 
 
 
 (As one of the core reviewers for the Puppet modules)
 
 Without a standardised package build process it's quite difficult to
 test trunk Puppet modules vs trunk official projects. This means we cut
 release branches some time after the projects themselves to give people
 a chance to test. Until this changes and the modules can be released
 with the same cadence as the integrated release I believe they should
 remain on Stackforge.
 
 In addition and perhaps as a consequence, there isn't any public
 integration testing at this time for the modules, although I know some
 parties have developed and maintain their own.
 
 The Chef modules may be in a different state, but it's hard for me to
 recommend the Puppet modules become part of an official program at this
 stage.

Is the focus of the Puppet modules only stable releases with packages?
Puppet + git based deploys would be honestly a really handy thing
(especially as lots of people end up having custom fixes for their
site). The lack of CM tools for git based deploys is I think one of the
reasons we've seen people using DevStack as a generic installer.
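A git-based deploy of a single service is conceptually just a clone plus an editable install. A minimal sketch of that step, with the caveat that the service name, branch, and repo URL below are illustrative assumptions, and the commands are printed rather than executed so the sketch is safe to run anywhere:

```shell
# Minimal sketch of a git-based deploy step for one service.
# SERVICE, BRANCH and REPO are illustrative assumptions, not a
# recommendation; commands are echoed instead of executed.
set -eu
SERVICE=nova
BRANCH=master
REPO="https://git.openstack.org/openstack/${SERVICE}"
for cmd in \
  "git clone -b ${BRANCH} ${REPO}" \
  "pip install -e ./${SERVICE}"
do
  # swap this echo for: eval "${cmd}"  to actually perform the step
  echo "${cmd}"
done
```

The appeal of the editable (`pip install -e`) form is exactly the custom-fixes case above: local patches to the checkout take effect without rebuilding a package.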

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [all] The future of the integrated release

2014-08-22 Thread Duncan Thomas
On 21 August 2014 19:39, gordon chung g...@live.ca wrote:
 from the pov of a project that seems to be brought up constantly and maybe
 it's my naivety, i don't really understand the fascination with branding and
 the stigma people have placed on non-'openstack'/stackforge projects. it
 can't be a legal thing because i've gone through that potential mess. also,
 it's just as easy to contribute to 'non-openstack' projects as 'openstack'
 projects (even easier if we're honest).

It may be easier for you, but it certainly isn't inside big companies,
e.g. HP have pretty broad approvals for contributing to (official)
openstack projects, whereas individual approval may be needed to
contribute to non-openstack projects.



Re: [openstack-dev] [all] The future of the integrated release

2014-08-22 Thread Mooney, Sean K
I would have to agree with Thomas.
Many organizations have already worked out strategies and have processes in
place to cover contributing to OpenStack which cover all official projects.
Contributing to additional non-OpenStack projects may introduce additional
barriers in large organizations which require IP plan/legal approval on a
per-project basis.

Regards
sean 
-Original Message-
From: Duncan Thomas [mailto:duncan.tho...@gmail.com] 
Sent: Friday, August 22, 2014 4:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] The future of the integrated release

On 21 August 2014 19:39, gordon chung g...@live.ca wrote:
 from the pov of a project that seems to be brought up constantly and 
 maybe it's my naivety, i don't really understand the fascination with 
 branding and the stigma people have placed on 
 non-'openstack'/stackforge projects. it can't be a legal thing because 
 i've gone through that potential mess. also, it's just as easy to contribute 
 to 'non-openstack' projects as 'openstack'
 projects (even easier if we're honest).

It may be easier for you, but it certainly isn't inside big companies, e.g. HP 
have pretty broad approvals for contributing to (official) openstack projects, 
where as individual approval may be needed to contribute to none-openstack 
projects.

--
Intel Shannon Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263
Business address: Dromore House, East Park, Shannon, Co. Clare

This e-mail and any attachments may contain confidential material for the sole 
use of the intended recipient(s). Any review or distribution by others is 
strictly prohibited. If you are not the intended recipient, please contact the 
sender and delete all copies.





Re: [openstack-dev] [all] The future of the integrated release

2014-08-22 Thread Michael Chapman
On Fri, Aug 22, 2014 at 9:51 PM, Sean Dague s...@dague.net wrote:

 On 08/22/2014 01:30 AM, Michael Chapman wrote:
 
 
 
  [earlier quoted thread snipped]

 Is the focus of the Puppet modules only stable releases with packages?



We try to target puppet module master at upstream OpenStack master, but
without CI/CD we fall behind. The missing piece is building packages and
creating a local repo before doing the puppet run, which I'm working on
slowly as I want a single system for both deb and rpm that doesn't make my
eyes bleed. fpm and pleaserun are the two key tools here.
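The "one build system for both deb and rpm" idea can be sketched in a few lines with fpm. Everything below (package name, version, source path) is a made-up example, and the fpm invocations are only printed, so the sketch runs even without fpm installed:

```shell
# Hypothetical sketch of a single pipeline emitting both deb and rpm
# via fpm. NAME, VERSION and SRC are made-up examples; the fpm
# commands are printed rather than executed, so fpm need not be
# installed to run this.
set -eu
NAME=openstack-nova
VERSION=2014.2
SRC=./nova/setup.py
for target in deb rpm
do
  echo "fpm -s python -t ${target} --name ${NAME} --version ${VERSION} ${SRC}"
done
```

The point of the loop is that only the `-t` target changes between the two package formats; everything upstream of fpm (the checkout, the version string) is shared.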


 Puppet + git based deploys would be honestly a really handy thing
 (especially as lots of people end up having custom fixes for their
 site). The lack of CM tools for git based deploys is I think one of the
 reasons we seen people using DevStack as a generic installer.


It's possible but it's also straight up a poor thing to do in my opinion.
If you're going to install nova from source, maybe you also want libvirt
from source to test a new feature, then you want some of libvirt's deps and
so on. Puppet isn't equipped to deal with this effectively. It runs "yum
install x", and that brings in the dependencies.

It's much better to automate the package building process and 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-22 Thread Joshua Harlow
Comment inline.

On Aug 22, 2014, at 10:13 AM, Michael Chapman wop...@gmail.com wrote:

 
 
 
 On Fri, Aug 22, 2014 at 9:51 PM, Sean Dague s...@dague.net wrote:
 On 08/22/2014 01:30 AM, Michael Chapman wrote:
 
 
 
  [earlier quoted thread snipped]
 
 Is the focus of the Puppet modules only stable releases with packages?
 
 
 We try to target puppet module master at upstream OpenStack master, but 
 without CI/CD we fall behind. The missing piece is building packages and 
 creating a local repo before doing the puppet run, which I'm working on 
 slowly as I want a single system for both deb and rpm that doesn't make my 
 eyes bleed. fpm and pleaserun are the two key tools here.
  
 Puppet + git based deploys would be honestly a really handy thing
 (especially as lots of people end up having custom fixes for their
 site). The lack of CM tools for git based deploys is I think one of the
 reasons we seen people using DevStack as a generic installer.
 
 
 It's possible but it's also straight up a poor thing to do in my opinion. If 
 you're going to install nova from source, maybe you also want libvirt from 
 source to test a new feature, then you want some of libvirt's deps and so on. 
 Puppet isn't equipped to deal with this effectively. It runs 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-22 Thread gordon chung
 It may be easier for you, but it certainly isn't inside big companies,
 e.g. HP have pretty broad approvals for contributing to (official)
 openstack projects, where as individual approval may be needed to
 contribute to none-openstack projects.
i was referring to a company bigger than hp... maybe the legal team is nicer 
there. :)  couldn't hurt to ask them anyways... plenty of good projects that 
exist in stackforge domain.
cheers,
gord


Re: [openstack-dev] [all] The future of the integrated release

2014-08-22 Thread gustavo panizzo (gfa)


On 08/22/2014 02:13 PM, Michael Chapman wrote:
 
 We try to target puppet module master at upstream OpenStack master, but
 without CI/CD we fall behind. The missing piece is building packages and
 creating a local repo before doing the puppet run, which I'm working on
 slowly as I want a single system for both deb and rpm that doesn't make
 my eyes bleed. fpm and pleaserun are the two key tools here.

i have used fpm to package python apps, i would be happy to help if you can
provide pointers on where to start


-- 
1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333




Re: [openstack-dev] [all] The future of the integrated release

2014-08-21 Thread Chris Friesen

On 08/20/2014 09:54 PM, Clint Byrum wrote:

Excerpts from Jay Pipes's message of 2014-08-20 14:53:22 -0700:

On 08/20/2014 05:06 PM, Chris Friesen wrote:

On 08/20/2014 07:21 AM, Jay Pipes wrote:

Hi Thierry, thanks for the reply. Comments inline. :)

On 08/20/2014 06:32 AM, Thierry Carrez wrote:

If we want to follow your model, we probably would have to dissolve
programs as they stand right now, and have blessed categories on one
side, and teams on the other (with projects from some teams being
blessed as the current solution).


Why do we have to have blessed categories at all? I'd like to think of
a day when the TC isn't picking winners or losers at all. Level the
playing field and let the quality of the projects themselves determine
the winner in the space. Stop the incubation and graduation madness and
change the role of the TC to instead play an advisory role to upcoming
(and existing!) projects on the best ways to integrate with other
OpenStack projects, if integration is something that is natural for the
project to work towards.


It seems to me that at some point you need to have a recommended way of
doing things, otherwise it's going to be *really hard* for someone to
bring up an OpenStack installation.


Why can't there be multiple recommended ways of setting up an OpenStack
installation? Matter of fact, in reality, there already are multiple
recommended ways of setting up an OpenStack installation, aren't there?

There's multiple distributions of OpenStack, multiple ways of doing
bare-metal deployment, multiple ways of deploying different message
queues and DBs, multiple ways of establishing networking, multiple open
and proprietary monitoring systems to choose from, etc. And I don't
really see anything wrong with that.



This is an argument for loosely coupling things, rather than tightly
integrating things. You will almost always win my vote with that sort of
movement, and you have here. +1.


I mostly agree, but I think we should distinguish between things that 
are "possible", and things that are "supported". Arguably, anything 
that is supported should be tested as part of the core infrastructure 
and documented in the core OpenStack documentation.



We already run into issues with something as basic as competing SQL
databases.


If the TC suddenly said "Only MySQL will be supported", that would not
mean that the greater OpenStack community would be served better. It
would just unnecessarily take options away from deployers.


On the other hand, if the community says explicitly "we only test with 
sqlite and MySQL" then that sends a signal that anyone wanting to use 
something else should plan on doing additional integration testing.


I've stumbled over some of these issues, and it's no fun. (There's still 
an open bug around the fact that sqlite behaves differently than MySQL 
with respect to regex.)



IMO, OpenStack should be about choice. Choice of hypervisor, choice of
DB and MQ infrastructure, choice of operating systems, choice of storage
vendors, choice of networking vendors.



Err, uh. I think OpenStack should be about users. If having 400 choices
means users are just confused, then OpenStack becomes nothing and
everything all at once. Choices should be part of the whole not when 1%
of the market wants a choice, but when 20%+ of the market _requires_
a choice.


I agree.

If there are too many choices without enough documentation as to why 
someone would choose one over the other, or insufficient testing such 
that some choices are theoretically valid but broken in practice, then 
it's less useful for the end users.


Chris



Re: [openstack-dev] [all] The future of the integrated release

2014-08-21 Thread Thierry Carrez
Zane Bitter wrote:
 On 11/08/14 05:24, Thierry Carrez wrote:
 This all has created a world where you need to be *in* OpenStack to
 matter, or to justify the investment. This has created a world where
 everything and everyone wants to be in the OpenStack integrated
 release. This has created more pressure to add new projects, and less
 pressure to fix and make the existing projects perfect. 4 years in, we
 might want to inflect that trajectory and take steps to fix this world.
 
 We should certainly consider this possibility, that we've set up
 perverse incentives leading to failure. But what if it's just because we
 haven't yet come even close to satisfying all of our users' needs? I
 mean, AWS has more than 30 services that could be considered equivalent
 in scope to an OpenStack project... if anything our scope is increasing
 more _slowly_ than the industry at large. I'm slightly shocked that
 nobody in this thread appears to have even entertained the idea that
 *this is what success looks like*.
 
 The world is not going to stop because we want to get off, take a
 breather, do a consolidation cycle.

That's an excellent counterpoint, thank you for voicing it so eloquently.

Our challenge is to improve our structures so that we can follow the
rhythm the world imposes on us. It's a complex challenge, especially in
an open collaboration experiment where you can't rely that much on past
experiences or traditional methods. So it's always tempting to slow
things down, to rate-limit our success to make that challenge easier.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [all] The future of the integrated release

2014-08-21 Thread Thierry Carrez
Jay Pipes wrote:
 I don't believe the Programs are needed, as they are currently
 structured. I don't really believe they serve any good purposes, and
 actually serve to solidify positions of power, slanted towards existing
 power centers, which is antithetical to a meritocratic community.

Let me translate that, considering programs are just teams of people...
You're still OK with the concept of teams of people working toward a
common goal, but you don't think blessing some teams serves any good
purpose. Is that right? (if yes, see below for more on what that
actually means).

 [...]
 If we want to follow your model, we probably would have to dissolve
 programs as they stand right now, and have blessed categories on one
 side, and teams on the other (with projects from some teams being
 blessed as the current solution).
 
 Why do we have to have blessed categories at all? I'd like to think of
 a day when the TC isn't picking winners or losers at all. Level the
 playing field and let the quality of the projects themselves determine
 the winner in the space. Stop the incubation and graduation madness and
 change the role of the TC to instead play an advisory role to upcoming
 (and existing!) projects on the best ways to integrate with other
 OpenStack projects, if integration is something that is natural for the
 project to work towards.

I'm still trying to wrap my head around what you actually propose here.
Do you just want to get rid of incubation ? Or do you want to get rid of
the whole integrated release concept ? The idea that we collectively
apply effort around a limited set of projects to make sure they are
delivered in an acceptable fashion (on a predictable schedule, following
roughly the same rules, with some amount of integrated feature, some
amount of test coverage, some amount of documentation...)

Because I still think there is a whole lot of value in that. I don't
think our mission is to be the sourceforge of cloud projects. Our
mission is to *produce* the ubiquitous Open Source Cloud Computing
platform. There must be some amount of opinionated choices there.

Everything else in our structure derives from that. If we have an
integrated release, we need to bless a set of projects that will be part
of it (graduation). We need to single out promising projects so that we
mentor them on the common rules they will have to follow there (incubation).

Now there are bad side-effects we need to solve, like the idea that
incubation and integration are steps on an "OpenStack ecosystem holy
ladder" that every project should aspire to climb.

 That would leave the horizontal programs like Docs, QA or Infra,
 where the team and the category are the same thing, as outliers again
 (like they were before we did programs).
 
 What is the purpose of having these programs, though? If it's just to
 have a PTL, then I think we need to reconsider the whole concept of
 Programs. [...]

The main purpose of programs (or official teams) is that being part of
one of them gives you the right to participate in electing the Technical
Committee, and as a result places you under its authority. Both parties
have to agree to be placed under that contract, which is why teams have
to apply (we can't force them), and the TC has to accept (they can't
force us).

Programs have *nothing* to do with PTLs, which are just a convenient way
to solve potential decision deadlocks in teams (insert your favorite
dysfunctional free software project example here). We could get rid of
the PTL concept (to replace them for example with a set of designated
liaisons) and we would still have programs (teams) and projects (the
code repos that team is working on).

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [all] The future of the integrated release

2014-08-21 Thread Sean Dague
On 08/20/2014 02:37 PM, Jay Pipes wrote:
 On 08/20/2014 11:41 AM, Zane Bitter wrote:
 On 19/08/14 10:37, Jay Pipes wrote:

 By graduating an incubated project into the integrated release, the
 Technical Committee is blessing the project as "the OpenStack way" to do
 some Thing. If there are projects that are developed *in the OpenStack
 ecosystem* that are actively being developed to serve the purpose that
 an integrated project serves, then I think it is the responsibility of
 the Technical Committee to take another look at the integrated project
 and answer the following questions definitively:

   a) Is the Thing that the project addresses something that the
 Technical Committee believes the OpenStack ecosystem benefits from by
 the TC making a judgement on what is the OpenStack way of addressing
 that Thing.

 and IFF the decision of the TC on a) is YES, then:

   b) Is the Vision and Implementation of the currently integrated
 project the one that the Technical Committee wishes to continue to
 bless as "the OpenStack way" of addressing the Thing the project
 does.

 I disagree with part (b); projects are not code - projects, like Soylent
 Green, are people.
 
 Hey! Don't steal my slide content! :P
 
 http://bit.ly/navigating-openstack-community (slide 3)
 
 So it's not critical that the implementation is the
 one the TC wants to bless, what's critical is that the right people are
 involved to get to an implementation that the TC would be comfortable
 blessing over time. For example, everyone agrees that Ceilometer has
 room for improvement, but any implication that the Ceilometer is not
 interested in or driving towards those improvements (because of NIH or
 whatever) is, as has been pointed out, grossly unfair to the Ceilometer
 team.
 
 I certainly have not made such an implication about Ceilometer. What I
 see in the Ceilometer space, though, is that there are clearly a number
 of *active* communities of OpenStack engineers developing code that
 crosses similar problem spaces. I think the TC blessing one of those
 communities before the market has had a chance to do a bit more
 natural filtering of quality is a barrier to innovation. I think having
 all of those separate teams able to contribute code to an openstack/
 code namespace and naturally work to resolve differences and merge
 innovation is a better fit for a meritocracy.

I think the other thing that's been discovered in the metering space is
it's not just an engineering problem with the bulk of the hard stuff
already figured out. This problem actually is really hard to get right,
especially when performance and overhead are key.

By blessing one team what we're saying is all the good ideas pool for
tackling this hard problem can only come from that one team. That has a
trade off cost. It means if we believe that Ceilometer is fundamentally
the right architecture but just needs a bit of polish, that's the right
call. It's telling people to just get with the program. But it seems
right now we don't think that's the case. And "we" includes a bunch of
folks in Ceilometer. As evidenced by a bunch of rearchitecture going on.
Which is fine, it's a hard problem, as evidenced by the fact that there
are a ton of open source projects in the general area.

But by blessing a team, and saddling them with an existing architecture
that no one loves, we're actually making it a lot harder to come up with
a final best in class thing in this slot in the OpenStack universe. The
Ceilometer team has to live within the upgrade constraints, for
instance. They have API stability requirements applied to them. The
entire set of requirements of a project once integrated does impose a
tax on the rate the team can change the project so that stable contracts
are kept up.

Honestly, I don't want this to be about the stigma of kicking something
out, but more about opening up freedom and flexibility to explore
this space, which has shown to be a hard space. I don't want to question
that anyone isn't working hard here, because I absolutely think the
teams doing this are. But I also think that cracking this nut of high
performance metering on a large scale is tough, and only made tougher by
having to go after that solution while also staying within the bounds of
acceptable integrated project evolution.

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [all] The future of the integrated release

2014-08-21 Thread Chris Dent

On Thu, 21 Aug 2014, Sean Dague wrote:


By blessing one team what we're saying is all the good ideas pool for
tackling this hard problem can only come from that one team.


This is a big part of this conversation that really confuses me. Who is
"that one team"?

I don't think it is that team that is being blessed, it is that
project space. That project space ought, if possible, have a team
made up of anyone who is interested. Within that umbrella both
the competition and cooperation that everyone wants can happen.

You're quite right Sean, there is a lot of gravity that comes from
needing to support and slowly migrate the existing APIs. That takes
up quite a lot of resources. It doesn't mean, however, that other
resources can't work on substantial improvements in cooperation with
the rest of the project. Gnocchi and the entire V3 concept in
ceilometer are a good example of this. Some folk are working on that
and some folk are working on maintaining and improving the old
stuff.

Some participants in this thread seem to be saying "give someone else a
chance". Surely nobody needs to be given the chance; they just need
to join the project and make some contributions? That is how this is
supposed to work isn't it?

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] [all] The future of the integrated release

2014-08-21 Thread Jay Pipes

On 08/21/2014 07:58 AM, Chris Dent wrote:

On Thu, 21 Aug 2014, Sean Dague wrote:


By blessing one team what we're saying is all the good ideas pool for
tackling this hard problem can only come from that one team.


This is a big part of this conversation that really confuses me. Who is
that one team?

I don't think it is that team that is being blessed; it is that
project space. That project space ought, if possible, to have a team
made up of anyone who is interested. Within that umbrella, both
the competition and cooperation that everyone wants can happen.

You're quite right, Sean: there is a lot of gravity that comes from
needing to support and slowly migrate the existing APIs. That takes
up quite a lot of resources. It doesn't mean, however, that other
resources can't work on substantial improvements in cooperation with
the rest of the project. Gnocchi and the entire V3 concept in
Ceilometer are a good example of this. Some folk are working on that,
and some folk are working on maintaining and improving the old
stuff.

Some participants in this thread seem to be saying "give someone else a
chance". Surely nobody needs to be given the chance; they just need
to join the project and make some contributions? That is how this is
supposed to work, isn't it?


Specifically for Ceilometer, many of the folks working on alternate 
implementations have contributed or are actively contributing to 
Ceilometer. Some have stopped contributing because of fundamental 
disagreements about the appropriateness of the Ceilometer architecture. 
Others have begun working on Gnocchi to address design issues, others 
have joined efforts on Monasca, and still others have continued work on 
StackTach. Eoghan has done an admirable job of informing the TC about 
goings-on in the Ceilometer community and being forthright about the 
efforts around Gnocchi. And there isn't any perceived animosity between 
the aforementioned contributor subteams. The point I've been making is 
that by the TC continuing to bless only the Ceilometer project as the 
OpenStack Way of Metering, I think we do a disservice to our users by 
picking a winner in a space that is clearly still unsettled.


Specifically for Triple-O, by making the Deployment program == Triple-O, 
the TC has picked the disk-image-based deployment of an undercloud 
design as The OpenStack Way of Deployment. And as I've said previously 
in this thread, I believe that the deployment space is similarly 
unsettled, and that it would be more appropriate to let the Chef 
cookbooks and Puppet modules currently sitting in the stackforge/ code 
namespace live in the openstack/ code namespace.


I recommended getting rid of the formal Program concept because I didn't 
think it was serving any purpose other than solidifying existing power 
centers and was inhibiting innovation by sending the signal of blessed 
teams/projects, instead of sending a signal of inclusion.


Best,
-jay




Re: [openstack-dev] [all] The future of the integrated release

2014-08-21 Thread Jay Pipes

On 08/20/2014 11:54 PM, Clint Byrum wrote:

Excerpts from Jay Pipes's message of 2014-08-20 14:53:22 -0700:

On 08/20/2014 05:06 PM, Chris Friesen wrote:

On 08/20/2014 07:21 AM, Jay Pipes wrote:

...snip

We already run into issues with something as basic as competing SQL
databases.


If the TC suddenly said Only MySQL will be supported, that would not
mean that the greater OpenStack community would be served better. It
would just unnecessarily take options away from deployers.


This is really where supported becomes the mutex binding us all. The
more supported options, the larger the matrix, the more complex a
user's decision process becomes.


I don't believe this is necessarily true.

A large chunk of OpenStack users will deploy their cloud using one of 
the OpenStack distributions -- RDO, Ubuntu OpenStack, MOS, or one of the 
OpenStack appliances. These users will select the options that their 
distribution offers (or chooses for them).


Another chunk of OpenStack users will deploy their cloud using things 
like the Chef cookbooks or Puppet modules on stackforge. These users 
will select the options that the writers of those Puppet modules or 
Chef cookbooks have wired into the module or cookbook.


Another chunk of users of OpenStack will deploy their cloud by following 
the upstream installation documentation. This documentation currently 
focuses on the integrated projects, and so these users would only be 
deploying the projects that contributed excellent documentation and 
worked with distributors and packagers to make the installation and use 
of their project as easy as possible.


So, I think there is an argument to be made that packagers and deployers 
would have more decisions to make, but not necessarily end-users of 
OpenStack.



If every component has several competing implementations and
none of them are official, how many more interaction issues are going
to trip us up?


IMO, OpenStack should be about choice. Choice of hypervisor, choice of
DB and MQ infrastructure, choice of operating systems, choice of storage
vendors, choice of networking vendors.


Err, uh. I think OpenStack should be about users. If having 400 choices
means users are just confused, then OpenStack becomes nothing and
everything all at once. Choices should be part of the whole not when 1%
of the market wants a choice, but when 20%+ of the market _requires_
a choice.


I believe by picking winners in unsettled spaces, we add more to the 
confusion of users than having 1 option for doing something.



What we shouldn't do is harm that 1%'s ability to be successful. We should
foster it and help it grow, but we don't just pull it into the program and
say You're ALSO in OpenStack now!


I haven't been proposing that these competing projects would be in 
OpenStack now. I have been proposing that these projects live in the 
openstack/ code namespace, as these projects are 100% targeting 
OpenStack installations and users, and they are offering options to 
OpenStack deployers.


I hate the fact that the TC is deciding what is OpenStack.

IMO, we should instead be answering questions like "does project X solve 
problem Y for OpenStack users?", "can the design of project A be 
adapted to pull in good things from project B?", and "where can we advise 
project M to put resources that would most benefit OpenStack users?".


and we also don't want to force those users to make a hard choice
because the better solution is not blessed.


But users are *already* forced to make these choices. They make these 
choices by picking an OpenStack distribution, or by necessity of a 
certain scale, or by their experience and knowledge base of a particular 
technology. Blessing one solution when there are multiple valid 
solutions does not suddenly remove the choice for users.



If there are multiple actively-developed projects that address the same
problem space, I think it serves our OpenStack users best to let the
projects work things out themselves and let the cream rise to the top.
If the cream ends up being one of those projects, so be it. If the cream
ends up being a mix of both projects, so be it. The production community
will end up determining what that cream should be based on what it
deploys into its clouds and what input it supplies to the teams working
on competing implementations.


I'm really not a fan of making it a competitive market. If a space has a
diverse set of problems, we can expect it will have a diverse set of
solutions that overlap. But that doesn't mean they both need to drive
toward making that overlap all-encompassing. Sometimes that happens and
it is good, and sometimes that happens and it causes horrible bloat.


Yes, I recognize the danger that choice brings. I just am more 
optimistic than you about our ability to handle choice. :)



And who knows... what works or is recommended by one deployer may not be
what is best for another type of deployer and I believe we (the

Re: [openstack-dev] [all] The future of the integrated release

2014-08-21 Thread Kyle Mestery
On Thu, Aug 21, 2014 at 4:09 AM, Thierry Carrez thie...@openstack.org wrote:
 Zane Bitter wrote:
 On 11/08/14 05:24, Thierry Carrez wrote:
 This all has created a world where you need to be *in* OpenStack to
 matter, or to justify the investment. This has created a world where
 everything and everyone wants to be in the OpenStack integrated
 release. This has created more pressure to add new projects, and less
 pressure to fix and make the existing projects perfect. 4 years in, we
 might want to inflect that trajectory and take steps to fix this world.

 We should certainly consider this possibility, that we've set up
 perverse incentives leading to failure. But what if it's just because we
 haven't yet come even close to satisfying all of our users' needs? I
 mean, AWS has more than 30 services that could be considered equivalent
 in scope to an OpenStack project... if anything our scope is increasing
 more _slowly_ than the industry at large. I'm slightly shocked that
 nobody in this thread appears to have even entertained the idea that
 *this is what success looks like*.

 The world is not going to stop because we want to get off, take a
 breather, do a consolidation cycle.

 That's an excellent counterpoint, thank you for voicing it so eloquently.

 Our challenge is to improve our structures so that we can follow the
 rhythm the world imposes on us. It's a complex challenge, especially in
 an open collaboration experiment where you can't rely that much on past
 experiences or traditional methods. So it's always tempting to slow
 things down, to rate-limit our success to make that challenge easier.

++

Thanks for wording this perfectly. It's sometimes easy to look at
things through a single lens; as a community, it's good when we look at
all the angles of a problem.

I think the main point is that it's sometimes hard to judge the future of a
project like OpenStack from its past, because as we move forward we
add new variables to the equation. Thus, adjusting on the fly is
really the only way forward. The points in this thread make it clear
we're doing that as a project, but perhaps not at a quick enough pace.

Thanks,
Kyle

 --
 Thierry Carrez (ttx)




Re: [openstack-dev] [all] The future of the integrated release

2014-08-21 Thread Clint Byrum
Excerpts from Duncan Thomas's message of 2014-08-21 09:21:06 -0700:
 On 21 August 2014 14:27, Jay Pipes jaypi...@gmail.com wrote:
 
  Specifically for Triple-O, by making the Deployment program == Triple-O, the
  TC has picked the disk-image-based deployment of an undercloud design as The
  OpenStack Way of Deployment. And as I've said previously in this thread, I
  believe that the deployment space is similarly unsettled, and that it would
  be more appropriate to let the Chef cookbooks and Puppet modules currently
  sitting in the stackforge/ code namespace live in the openstack/ code
  namespace.
 
 Totally agree with Jay here, I know people who gave up on trying to
 get any official project around deployment because they were told they
 had to do it under the TripleO umbrella
 

This was why the _program_ versus _project_ distinction was made. But
I think we ended up being 1:1 anyway.

Perhaps the deployment program's mission statement is too narrow, and
we should iterate on that. That others took their ball and went home,
instead of asking for a review of that ruling, is a bit disconcerting.

That probably strikes to the heart of the current crisis. If we were
being reasonable, alternatives to an official OpenStack program's mission
statement would be debated and considered thoughtfully. I know I made the
mistake early on of pushing the narrow _TripleO_ vision into what should
have been a much broader Deployment program. I'm not entirely sure why
that seemed o-k to me at the time, or why it was allowed to continue, but
I think it may be a good exercise to review those events and try to come
up with a few theories or even conclusions as to what we could do better.



Re: [openstack-dev] [all] The future of the integrated release

2014-08-21 Thread Zane Bitter

On 20/08/14 15:37, Jay Pipes wrote:

For example, everyone agrees that Ceilometer has
room for improvement, but any implication that the Ceilometer is not
interested in or driving towards those improvements (because of NIH or
whatever) is, as has been pointed out, grossly unfair to the Ceilometer
team.


I certainly have not made such an implication about Ceilometer.


Sorry, yes, I didn't intend to imply any such... implication on your 
part. I was actually trying (evidently unsuccessfully) to avoid getting 
into finger-pointing at all, and simply make a general statement that if 
anyone were, hypothetically, to imply that the team are not committed to 
improvements, then that would be unfair. Hypothetically.


Is it Friday yet?

- ZB



Re: [openstack-dev] [all] The future of the integrated release

2014-08-21 Thread gordon chung



 The point I've been making is 
 that by the TC continuing to bless only the Ceilometer project as the 
 OpenStack Way of Metering, I think we do a disservice to our users by 
 picking a winner in a space that is clearly still unsettled.
can we avoid using the word 'blessed' -- it's extremely vague and seems 
controversial. from what i know, no one is being told project x's services are 
the be all end all and based on experience, companies (should) know this. i've 
worked with other alternatives even though i contribute to ceilometer.

 Totally agree with Jay here, I know people who gave up on trying to
 get any official project around deployment because they were told they
 had to do it under the TripleO umbrella

from the pov of a project that seems to be brought up constantly, and maybe 
it's my naivety, but i don't really understand the fascination with branding 
and the stigma people have placed on non-'openstack'/stackforge projects. it 
can't be a legal thing because i've gone through that potential mess. also, 
it's just as easy to contribute to 'non-openstack' projects as 'openstack' 
projects (even easier if we're honest).
in my mind, the goal of the programs is to encourage collaboration from 
projects with the same focus (whether they overlap or not). that way, even if 
there's differences in goal/implementation, there's a common space between them 
so users can easily decide. also, hopefully with the collaboration, it'll help 
teams realise that certain problems have already been solved and certain parts 
of code can be shared rather than having project x, y, and z all working in 
segregated streams, racing as fast as they can to claim supremacy (how you'd 
decide is another mess) and then n number of months/years later we decide to 
throw away (tens/hundreds) of thousands of person hours of work because we just 
created massive projects that overlap.
suggestion: maybe it's better to drop the branding codenames and just refer to 
everything as their generic feature? ie. identity, telemetry, orchestration, 
etc...
cheers,
gord


Re: [openstack-dev] [all] The future of the integrated release

2014-08-21 Thread David Kranz

On 08/21/2014 02:39 PM, gordon chung wrote:

 The point I've been making is
 that by the TC continuing to bless only the Ceilometer project as the
 OpenStack Way of Metering, I think we do a disservice to our users by
 picking a winner in a space that is clearly still unsettled.

can we avoid using the word 'blessed' -- it's extremely vague and 
seems controversial. from what i know, no one is being told project 
x's services are the be all end all and based on experience, companies 
(should) know this. i've worked with other alternatives even though i 
contribute to ceilometer.

 Totally agree with Jay here, I know people who gave up on trying to
 get any official project around deployment because they were told they
 had to do it under the TripleO umbrella
from the pov of a project that seems to be brought up constantly and 
maybe it's my naivety, i don't really understand the fascination with 
branding and the stigma people have placed on 
non-'openstack'/stackforge projects. it can't be a legal thing because 
i've gone through that potential mess. also, it's just as easy to 
contribute to 'non-openstack' projects as 'openstack' projects (even 
easier if we're honest).
Yes, we should be honest. The "even easier" part is what Sandy cited as 
the primary motivation for pursuing StackTach instead of Ceilometer.


I think we need to consider the difference between why OpenStack wants 
to bless a project, and why a project might want to be blessed by 
OpenStack. Many folks believe that for OpenStack to be successful it 
needs to present itself as a stack that can be tested and deployed, not 
a sack of parts that only the most extremely clever people can manage to 
assemble into an actual cloud. In order to have such a stack, some code 
(or, alternatively, dare I say API...) needs to be blessed. Reasonable 
debates will continue about which pieces are essential to this stack, 
and which should be left to deployers, but metering was seen as such a 
component, and therefore something needed to be blessed. The hope was 
that everyone would jump on that and make it great, but it seems that 
didn't quite happen (at least yet).


Though Open Source has many advantages over proprietary development, the 
ability to choose a direction and marshal resources for efficient 
delivery is the biggest advantage of proprietary development like what 
AWS does. The TC process of blessing is, IMO, an attempt to compensate 
for that in an OpenSource project. Of course if the wrong code is 
blessed, the negative  impact can be significant. Blessing APIs would be 
more forgiving, though with its own perils. I am reminded of this 
session, in which Jay was involved, at my first OpenStack summit: 
http://essexdesignsummit.sched.org/event/66f38d3bb4a1b8b169b81179e7f03215#.U_ZLI3Wx02Q


As for why projects have a desire to be blessed, I suspect in many cases 
it is because the OpenStack brand will attract contributors to their 
project.


 -David




in my mind, the goal of the programs is to encourage collaboration 
from projects with the same focus (whether they overlap or not). that 
way, even if there's differences in goal/implementation, there's a 
common space between them so users can easily decide. also, hopefully 
with the collaboration, it'll help teams realise that certain problems 
have already been solved and certain parts of code can be shared 
rather than having project x, y, and z all working in segregated 
streams, racing as fast as they can to claim supremacy (how you'd 
decide is another mess) and then n number of months/years later we 
decide to throw away (tens/hundreds) of thousands of person hours of 
work because we just created massive projects that overlap.


suggestion: maybe it's better to drop the branding codenames and just 
refer to everything as their generic feature? ie. identity, telemetry, 
orchestration, etc...


cheers,
/gord/




Re: [openstack-dev] [all] The future of the integrated release

2014-08-21 Thread Clint Byrum
Excerpts from David Kranz's message of 2014-08-21 12:45:05 -0700:
 On 08/21/2014 02:39 PM, gordon chung wrote:
   The point I've been making is
   that by the TC continuing to bless only the Ceilometer project as the
   OpenStack Way of Metering, I think we do a disservice to our users by
   picking a winner in a space that is clearly still unsettled.
 
  can we avoid using the word 'blessed' -- it's extremely vague and 
  seems controversial. from what i know, no one is being told project 
  x's services are the be all end all and based on experience, companies 
  (should) know this. i've worked with other alternatives even though i 
  contribute to ceilometer.
   Totally agree with Jay here, I know people who gave up on trying to
   get any official project around deployment because they were told they
   had to do it under the TripleO umbrella
  from the pov of a project that seems to be brought up constantly and 
  maybe it's my naivety, i don't really understand the fascination with 
  branding and the stigma people have placed on 
  non-'openstack'/stackforge projects. it can't be a legal thing because 
  i've gone through that potential mess. also, it's just as easy to 
  contribute to 'non-openstack' projects as 'openstack' projects (even 
  easier if we're honest).
 Yes, we should be honest. The even easier part is what Sandy cited as 
 the primary motivation for pursuing stacktach instead of ceilometer.
 
 I think we need to consider the difference between why OpenStack wants 
 to bless a project, and why a project might want to be blessed by 
 OpenStack. Many folks believe that for OpenStack to be successful it 
 needs to present itself as a stack that can be tested and deployed, not 
 a sack of parts that only the most extremely clever people can manage to 
 assemble into an actual cloud. In order to have such a stack, some code 
 (or, alternatively, dare I say API...) needs to be blessed. Reasonable 
 debates will continue about which pieces are essential to this stack, 
 and which should be left to deployers, but metering was seen as such a 
 component and therefore something needed to be blessed. The hope was 
 that every one would jump on that and make it great but it seems that 
 didn't quite happen (at least yet).
 
 Though Open Source has many advantages over proprietary development, the 
 ability to choose a direction and marshal resources for efficient 
 delivery is the biggest advantage of proprietary development like what 
 AWS does. The TC process of blessing is, IMO, an attempt to compensate 
 for that in an OpenSource project. Of course if the wrong code is 
 blessed, the negative  impact can be significant. Blessing APIs would be 

Hm, I wonder if the only difference there is that when AWS blesses the wrong
thing, they evaluate the business impact and respond by going in a
different direction, all behind closed doors. The shame is limited to
that inner circle.

Here, with full transparency, calling something the wrong thing is
pretty much public humiliation for the team involved.

So it stands to reason that we shouldn't call something the right
thing if we aren't comfortable with the potential public shaming.



Re: [openstack-dev] [all] The future of the integrated release

2014-08-21 Thread David Kranz

On 08/21/2014 04:12 PM, Clint Byrum wrote:

Excerpts from David Kranz's message of 2014-08-21 12:45:05 -0700:

On 08/21/2014 02:39 PM, gordon chung wrote:

The point I've been making is
that by the TC continuing to bless only the Ceilometer project as the
OpenStack Way of Metering, I think we do a disservice to our users by
picking a winner in a space that is clearly still unsettled.

can we avoid using the word 'blessed' -- it's extremely vague and
seems controversial. from what i know, no one is being told project
x's services are the be all end all and based on experience, companies
(should) know this. i've worked with other alternatives even though i
contribute to ceilometer.

Totally agree with Jay here, I know people who gave up on trying to
get any official project around deployment because they were told they
had to do it under the TripleO umbrella

from the pov of a project that seems to be brought up constantly and
maybe it's my naivety, i don't really understand the fascination with
branding and the stigma people have placed on
non-'openstack'/stackforge projects. it can't be a legal thing because
i've gone through that potential mess. also, it's just as easy to
contribute to 'non-openstack' projects as 'openstack' projects (even
easier if we're honest).

Yes, we should be honest. The even easier part is what Sandy cited as
the primary motivation for pursuing stacktach instead of ceilometer.

I think we need to consider the difference between why OpenStack wants
to bless a project, and why a project might want to be blessed by
OpenStack. Many folks believe that for OpenStack to be successful it
needs to present itself as a stack that can be tested and deployed, not
a sack of parts that only the most extremely clever people can manage to
assemble into an actual cloud. In order to have such a stack, some code
(or, alternatively, dare I say API...) needs to be blessed. Reasonable
debates will continue about which pieces are essential to this stack,
and which should be left to deployers, but metering was seen as such a
component and therefore something needed to be blessed. The hope was
that every one would jump on that and make it great but it seems that
didn't quite happen (at least yet).

Though Open Source has many advantages over proprietary development, the
ability to choose a direction and marshal resources for efficient
delivery is the biggest advantage of proprietary development like what
AWS does. The TC process of blessing is, IMO, an attempt to compensate
for that in an OpenSource project. Of course if the wrong code is
blessed, the negative  impact can be significant. Blessing APIs would be

Hm, I wonder if the only difference there is when AWS blesses the wrong
thing, they evaluate the business impact, and respond by going in a
different direction, all behind closed doors. The shame is limited to
that inner circle.
It is only limited to the inner circle if the wrong thing had no 
public API in wide use. The advantage of blessing APIs rather than 
implementations is that mistakes can be corrected. I realize many people 
hate that idea.


Here, with full transparency, calling something the wrong thing is
pretty much public humiliation for the team involved.

So it stands to reason that we shouldn't call something the right
thing if we aren't comfortable with the potential public shaming.
Of course not, and no one would argue that we should. The question being 
debated is whether the benefits of choosing wisely are worth the risk of 
choosing wrongly, as compared to the different-in-nature risks of not 
choosing at all. Not so easy to answer IMO.


 -David




Re: [openstack-dev] [all] The future of the integrated release

2014-08-21 Thread Brad Topol
Hi Everyone,

I have seen a ton of notes on this topic. Many of us have a lot invested 
in Ceilometer and want it to succeed. I don't want to focus on whether 
Ceilometer should be in the integrated release or not. To me the bigger 
issue is that if Ceilometer views itself as a monitoring 
infrastructure for OpenStack, it needs to be able to scale. If it is a 
tool that can handle high-volume monitoring loads, it will attract new 
contributors, because more folks will use it and will want to contribute 
to it.

I am happy to leave it to the Ceilometer team to figure out a redesign 
that enables them to scale.  I have been asked in the past to be more 
concrete and make some suggested improvements. Here is a very short list:

1.  Relying on oslo.messaging (which is RPC-based) simply won't scale when 
you have a high-volume monitoring requirement.
2.  Having to query the database first to evaluate triggers breaks down 
when the database gets big.
3.  There needs to be a way to keep the database from getting too large. 
Usually this means being able to move data from the database to a data 
warehouse, and having the data warehouse available to handle data-querying 
loads.
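
Point 3 is essentially a retention-and-archival policy: keep only a recent 
window of samples in the hot metering database and migrate everything older 
into cheaper warehouse storage. A minimal sketch of that idea, assuming each 
sample is a dict with an epoch-seconds 'timestamp' (the function name, sample 
shape, and retention threshold below are illustrative assumptions, not 
Ceilometer APIs):

```python
import time

# Keep one week of samples "hot"; everything older goes to the warehouse.
# The threshold is an illustrative choice, not a Ceilometer default.
RETENTION_SECONDS = 7 * 24 * 3600


def archive_old_samples(hot_store, warehouse, now=None):
    """Move samples older than the retention window into the warehouse.

    hot_store and warehouse are lists of sample dicts, each with a
    'timestamp' in epoch seconds. Returns the number of samples moved.
    """
    now = time.time() if now is None else now
    cutoff = now - RETENTION_SECONDS
    aged = [s for s in hot_store if s["timestamp"] < cutoff]
    # Rewrite hot_store in place so callers holding a reference see the trim.
    hot_store[:] = [s for s in hot_store if s["timestamp"] >= cutoff]
    warehouse.extend(aged)
    return len(aged)


if __name__ == "__main__":
    now = 1_000_000_000
    hot = [
        {"meter": "cpu_util", "timestamp": now - 10},             # fresh
        {"meter": "cpu_util", "timestamp": now - 8 * 24 * 3600},  # stale
    ]
    cold = []
    moved = archive_old_samples(hot, cold, now=now)
    print(moved, len(hot), len(cold))  # 1 1 1
```

In a real deployment the same shape of job would run periodically against the 
metering database, with the warehouse serving the heavy analytical queries so 
the hot store stays small and fast.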

Whenever the Ceilometer team feels they can scale significantly better than 
they do now, I'm sure folks will take another look at it. When given a 
choice, many of us want to reuse an Open Source option that meets our needs, 
whether it has the fancy branding or not. In the near term, many of us 
have to provide a monitoring solution. And many of us have folks on staff 
who have significant high-volume monitoring experience, and those folks 
feel there are some Open Source monitoring components that are a better 
foundation for providing high-volume monitoring.

So, in summary, I hope there will be less focus on whether Ceilometer has 
fancy status or not, and instead more focus on an open and frank dialogue 
on what can be done for Ceilometer to achieve its goal of meeting 
high-volume monitoring requirements.

Thanks,

Brad


Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680



From:   Clint Byrum cl...@fewbar.com
To: openstack-dev openstack-dev@lists.openstack.org, 
Date:   08/21/2014 04:13 PM
Subject:Re: [openstack-dev] [all] The future of the integrated 
release



Excerpts from David Kranz's message of 2014-08-21 12:45:05 -0700:
 On 08/21/2014 02:39 PM, gordon chung wrote:
   The point I've been making is
   that by the TC continuing to bless only the Ceilometer project as 
the
   OpenStack Way of Metering, I think we do a disservice to our users 
by
   picking a winner in a space that is clearly still unsettled.
 
  can we avoid using the word 'blessed' -- it's extremely vague and 
  seems controversial. from what i know, no one is being told project 
  x's services are the be all end all and based on experience, companies 

  (should) know this. i've worked with other alternatives even though i 
  contribute to ceilometer.
   Totally agree with Jay here, I know people who gave up on trying to
   get any official project around deployment because they were told 
they
   had to do it under the TripleO umbrella
  from the pov of a project that seems to be brought up constantly and 
  maybe it's my naivety, i don't really understand the fascination with 
  branding and the stigma people have placed on 
  non-'openstack'/stackforge projects. it can't be a legal thing because 

  i've gone through that potential mess. also, it's just as easy to 
  contribute to 'non-openstack' projects as 'openstack' projects (even 
  easier if we're honest).
 Yes, we should be honest. The even easier part is what Sandy cited as 
 the primary motivation for pursuing stacktach instead of ceilometer.
 
 I think we need to consider the difference between why OpenStack wants 
 to bless a project, and why a project might want to be blessed by 
 OpenStack. Many folks believe that for OpenStack to be successful it 
 needs to present itself as a stack that can be tested and deployed, not 
 a sack of parts that only the most extremely clever people can manage to 

 assemble into an actual cloud. In order to have such a stack, some code 
 (or, alternatively, dare I say API...) needs to be blessed. Reasonable 
 debates will continue about which pieces are essential to this stack, 
 and which should be left to deployers, but metering was seen as such a 
 component and therefore something needed to be blessed. The hope was 
 that every one would jump on that and make it great but it seems that 
 didn't quite happen (at least yet).
 
 Though Open Source has many advantages over proprietary development, the
 ability to choose a direction and marshal resources for efficient
 delivery is the biggest advantage of proprietary development like what 
 AWS does. The TC process of blessing is, IMO, an attempt to compensate

Re: [openstack-dev] [all] The future of the integrated release

2014-08-21 Thread Stefano Maffulli
I think we can't throw Ceilometer and Triple-O in the same discussion:
they're two separate issues IMHO, with different root causes and
therefore different solutions.

On 08/21/2014 06:27 AM, Jay Pipes wrote:
 The point I've been making is
 that by the TC continuing to bless only the Ceilometer project as the
 OpenStack Way of Metering, I think we do a disservice to our users by
 picking a winner in a space that is clearly still unsettled.

When Ceilometer started there was nothing in that area. Quite a
significant team formed to discuss the API and implementation for
metering OpenStack. All of it was done in public, getting as many people
involved from the start as possible. Ceilometer was integrated because the whole
OpenStack project has been about fostering collaboration *on top* of a
common technical infrastructure, and because it was considered ready
from a technical standpoint.

Now we're finding out that we don't have appropriate processes and tools
to evaluate what happens later in the maturation cycle of a technology:
Ceilometer is no longer considered the only, nor the best, tool in town.
How do we deal with this?

Your proposal seems to head toward old, well-known, 20-year-old territory:
OpenStack should provide infrastructure and some mentorship to
development teams, and that's it. That's somewhere between SourceForge
and the Apache Foundation.

Contrary to other open source foundations, we have put in place
processes and tools to pick our favorite projects based on more than
just technical merit. We're giving strong incentives for open
collaboration. The collaboration across large corporations as practiced
in OpenStack doesn't happen by chance, quite the contrary.

This is what makes OpenStack different (and probably one of its main
reasons for success).

 Specifically for Triple-O, by making the Deployment program == Triple-O,
 the TC has picked the disk-image-based deployment of an undercloud
 design as The OpenStack Way of Deployment. 

Triple-O is a different case: puppet and chef modules have existed for
as long as I can remember, and TripleO is only one of many options. It's
a different problem that should be discussed at the TC level. A lot of
improvements can be imagined for the Deployment program.

 I believe by picking winners in unsettled spaces, we add more to the
 confusion of users than having 1 option for doing something.

I think this conversation is crossing into the very important 'What is
OpenStack?' question. Most likely the answer won't be as clear cut as
'OpenStack is what the TC decides is integrated'. A lot of your concerns
about the TC 'picking the winner' will be resolved because it won't be
the TC alone deciding what will be using the OpenStack mark.

/stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-20 Thread Eoghan Glynn


   Additional cross-project resources can be ponied up by the large
   contributor companies, and existing cross-project resources are not
   necessarily divertable on command.
  
  Sure additional cross-project resources can and need to be ponied up, but I
  am doubtful that will be enough.
 
 OK, so what exactly do you suspect wouldn't be enough, for what
 exactly?
 
 
 I am not sure what would be enough to get OpenStack back in a position where
 more developers/users are happier with the current state of affairs. Which
 is why I think we may want to try several things.
 
 
 
  Is it the likely number of such new resources, or the level of domain-
  expertise that they can realistically be expected to bring to the
  table, or the period of time to on-board them, or something else?
 
 
 Yes, all of the above.

Hi Joe,

In coming to that conclusion, have you thought about and explicitly
rejected all of the approaches that have been mooted to mitigate
those concerns?

Is there a strong reason why the following non-exhaustive list
would all be doomed to failure:

 * encouraging projects to follow the successful Sahara model,
   where one core contributor also made a large contribution to
   a cross-project effort (in this case infra, but could be QA
   or docs or release management or stable-maint ... etc)

    [this could be seen as essentially offsetting the cost of
    that additional project drawing from the cross-project well]

 * assigning liaisons from each project to *each* of the cross-
   project efforts

    [this could be augmented/accelerated with one of the standard
    on-boarding approaches, such as a designated mentor for the
    liaison or even an immersive period of secondment]

 * applying back-pressure via the board representation to make
   it more likely that the appropriate number of net-new
   cross-project resources are forthcoming

    [c.f. Stef's 'we're not amateurs or volunteers' mail earlier
    on this thread]

I really think we need to do better than dismissing out-of-hand
the idea of beefing up the cross-project efforts. If it won't
work for specific reasons, let's get those reasons out onto
the table and make a data-driven decision on this.

 And which cross-project concern do you think is most strained by the
 current set of projects in the integrated release? Is it:
 
 * QA
 * infra
 * release management
 * oslo
 * documentation
 * stable-maint
 
 or something else?
 
 
 Good question.
 
 IMHO QA, Infra and release management are probably the most strained.

OK, well let's brain-storm on how some of those efforts could
potentially be made more scalable.

Should we for example start to look at release management as a
program unto itself, with a PTL *and* a group of cores to divide
and conquer the load?

(the hands-on rel mgmt for the juno-2 milestone, for example, was
 delegated - is there a good reason why such delegation wouldn't
 work as a matter of course?)

Should QA programs such as grenade be actively seeking new cores to
spread the workload?

(until recently, this had an effective minimum of 2 cores, despite
 now being a requirement for integrated projects)

Could the infra group potentially delegate some of the workload onto
the distro folks?

(given that it's strongly in their interest to have their distro
 represented in the CI gate)

None of the above ideas may make sense, but it doesn't feel like
every avenue has been explored here. I for one don't feel entirely
satisfied that every potential solution to cross-project strain was
fully thought-out in advance of the de-integration being presented
as the solution.

Just my $0.02 ...

Cheers,
Eoghan

[on vacation with limited connectivity]

 But I also think there is something missing from this list. Many of the
 projects are hitting similar issues and end up solving them in different ways, which
 just leads to more confusion for the end user. Today we have a decent model
 for rolling out cross-project libraries (Oslo) but we don't have a good way
 of having broader cross project discussions such as: API standards (such as
 discoverability of features), logging standards, aligning on concepts
 (different projects have different terms and concepts for scaling and
 isolating failure domains), and an overall better user experience. So I
 think we have a whole class of cross project issues that we have not even
 begun addressing.
 
 
 
 Each of those teams has quite different prerequisite skill-sets, and
 the on-ramp for someone jumping in seeking to make a positive impact
 will vary from team to team.
 
 Different approaches have been tried on different teams, ranging from
 dedicated project-liaisons (Oslo) to shared cores (Sahara/Infra) to
 newly assigned dedicated resources (QA/Infra). Which of these models
 might work in your opinion? Which are doomed to failure, and why?
 
 So can you be more specific here on why you think adding more cross-
 project resources won't be enough to address an identified shortage
 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-20 Thread Thierry Carrez
Eoghan Glynn wrote:
 [...] 
 And which cross-project concern do you think is most strained by the
 current set of projects in the integrated release? Is it:

 * QA
 * infra
 * release management
 * oslo
 * documentation
 * stable-maint

 or something else?


 Good question.

 IMHO QA, Infra and release management are probably the most strained.
 
 OK, well let's brain-storm on how some of those efforts could
 potentially be made more scalable.
 
 Should we for example start to look at release management as a
 program unto itself, with a PTL *and* a group of cores to divide
 and conquer the load?
 
 (the hands-on rel mgmt for the juno-2 milestone, for example, was
  delegated - is there a good reason why such delegation wouldn't
  work as a matter of course?)

For the record, I wouldn't say release management (as a role) is
strained. I'm strained, but that's because I do more than just release
management. We are taking steps to grow the team (both at the release
management program level and at the foundation development coordination
level) that should help in that area. Oslo has some growth issues but I
think they are under control. Stable maint (which belongs to the release
management program, btw) needs a restructuring more than a resource
injection.

I think the most strained function is keeping on top of test failures
(which in most cases just means investigating, reproducing and fixing
rare bugs). It's a complex task, it falls somewhere between QA
and Infra right now, and the very few people who have the unique
combination of knowledge and will/time to spend on it are quickly
dying of burnout.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [all] The future of the integrated release

2014-08-20 Thread Thierry Carrez
Jay Pipes wrote:
 [...]
 If either of the above answers is NO, then I believe the Technical
 Committee should recommend that the integrated project be removed from
 the integrated release.
 
 HOWEVER, I *also* believe that the previously-integrated project should
 not just be cast away back to Stackforge. I think the project should
 remain in its designated Program and should remain in the openstack/
 code namespace. Furthermore, active, competing visions and
 implementations of projects that address the Thing the
 previously-integrated project addressed should be able to apply to join
 the same Program, and *also* live in the openstack/ namespace.
 
 All of these projects should be able to live in the Program, in the
 openstack/ code namespace, for as long as the project is actively
 developed, and let the contributor communities in these competing
 projects *naturally* work to do any of the following:
 
  * Pick a best-of-breed implementation from the projects that address
 the same Thing
  * Combine code and efforts to merge the good bits of multiple projects
 into one
  * Let multiple valid choices of implementation live in the same Program
 with none of them being blessed by the TC to be part of the integrated
 release

That would work if an OpenStack Program was just like a category under
which you can file projects. However, OpenStack programs are not a
competition category where we could let multiple competing
implementations fight it out to become the solution; they are
essentially just a team of people working toward a common goal, having
meetings and sharing/electing the same technical lead.

I'm not convinced you would set up competing solutions for a fair
competition by growing them inside the same team (and under the same
PTL!) as the current mainstream/blessed option. How likely is the
Orchestration PTL to make the decision to drop Heat in favor of a new
contender?

I'm also concerned with making a program a collection of competing
teams, rather than a single team sharing the same meetings and electing
the same leadership, working all together. I don't want the teams
competing to get a number of contributors that would let them game the
elections and take over the program leadership. I think such a setup
would just increase the political tension inside programs, and we have
enough of it already.

If we want to follow your model, we probably would have to dissolve
programs as they stand right now, and have blessed categories on one
side, and teams on the other (with projects from some teams being
blessed as the current solution). That would leave the horizontal
programs like Docs, QA or Infra, where the team and the category are the
same thing, as outliers again (like they were before we did programs).

Finally, I'm slightly concerned with the brand aspect -- letting *any*
project call themselves 'OpenStack something' (which is what living
under the openstack/* namespace gives you) just because they happen to
compete with an existing OpenStack project sounds like a recipe for
making sure 'OpenStack' doesn't mean anything upstream anymore.

Regards,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [all] The future of the integrated release

2014-08-20 Thread Jay Pipes

Hi Thierry, thanks for the reply. Comments inline. :)

On 08/20/2014 06:32 AM, Thierry Carrez wrote:

Jay Pipes wrote:

[...] If either of the above answers is NO, then I believe the
Technical Committee should recommend that the integrated project be
removed from the integrated release.

HOWEVER, I *also* believe that the previously-integrated project
should not just be cast away back to Stackforge. I think the
project should remain in its designated Program and should remain
in the openstack/ code namespace. Furthermore, active, competing
visions and implementations of projects that address the Thing the
previously-integrated project addressed should be able to apply to
join the same Program, and *also* live in the openstack/
namespace.

All of these projects should be able to live in the Program, in
the openstack/ code namespace, for as long as the project is
actively developed, and let the contributor communities in these
competing projects *naturally* work to do any of the following:

* Pick a best-of-breed implementation from the projects that
address the same Thing * Combine code and efforts to merge the good
bits of multiple projects into one * Let multiple valid choices of
implementation live in the same Program with none of them being
blessed by the TC to be part of the integrated release


That would work if an OpenStack Program was just like a category
under which you can file projects. However, OpenStack programs are
not a competition category where we could let multiple competing
implementations fight it out to become the solution; they are
essentially just a team of people working toward a common goal,
having meetings and sharing/electing the same technical lead.

I'm not convinced you would set up competing solutions for a fair
competition by growing them inside the same team (and under the same
PTL!) as the current mainstream/blessed option. How likely is the
Orchestration PTL to make the decision to drop Heat in favor of a
new contender?


I don't believe the Programs are needed, as they are currently
structured. I don't really believe they serve any good purpose; they
actually serve to solidify positions of power, slanted towards existing
power centers, which is antithetical to a meritocratic community.

Furthermore, the structures we've built into OpenStack community
governance have resulted in perverse incentives. There is this constant
struggle to be legitimized by being included in a Program, incubated,
and then included in the integrated release. Projects, IMO, should be
free to innovate in *any* area of OpenStack, including areas with
existing integrated projects. We should be more open, not less.


I'm also concerned with making a program a collection of competing
teams, rather than a single team sharing the same meetings and
electing the same leadership, working all together. I don't want the
teams competing to get a number of contributors that would let them
game the elections and take over the program leadership. I think such
a setup would just increase the political tension inside programs,
and we have enough of it already.


By prohibiting competition within a Program, you don't magically get rid
of the competition, though. :) The competition will continue to exist,
and divisions will continue to be increased among the people working on
the same general area. You can't force people to get in-line with a
project whose vision or architectural design principles they don't share.


If we want to follow your model, we probably would have to dissolve
programs as they stand right now, and have blessed categories on one
side, and teams on the other (with projects from some teams being
blessed as the current solution).


Why do we have to have blessed categories at all? I'd like to think of
a day when the TC isn't picking winners or losers at all. Level the
playing field and let the quality of the projects themselves determine
the winner in the space. Stop the incubation and graduation madness and 
change the role of the TC to instead play an advisory role to upcoming 
(and existing!) projects on the best ways to integrate with other 
OpenStack projects, if integration is something that is natural for the 
project to work towards.



That would leave the horizontal programs like Docs, QA or Infra,
where the team and the category are the same thing, as outliers again
(like they were before we did programs).


What is the purpose of having these programs, though? If it's just to 
have a PTL, then I think we need to reconsider the whole concept of 
Programs. We should not be putting in place structures that just serve 
to create centers of power. *Projects* will naturally find/elect/choose 
whether or not to have one or more technical leads. Why should we limit entire 
categories of projects to having a single Lead person? What purpose does 
the role fill that could not be filled in a looser, more natural 
fashion? Since the TC is no longer composed of each integrated project 
PTL along with 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-20 Thread Zane Bitter

On 19/08/14 10:37, Jay Pipes wrote:


By graduating an incubated project into the integrated release, the
Technical Committee is blessing the project as the OpenStack way to do
some thing. If there are projects that are developed *in the OpenStack
ecosystem* that are actively being developed to serve the purpose that
an integrated project serves, then I think it is the responsibility of
the Technical Committee to take another look at the integrated project
and answer the following questions definitively:

  a) Is the Thing that the project addresses something that the
Technical Committee believes the OpenStack ecosystem benefits from by
the TC making a judgement on what is the OpenStack way of addressing
that Thing.

and IFF the decision of the TC on a) is YES, then:

  b) Is the Vision and Implementation of the currently integrated
project the one that the Technical Committee wishes to continue to
bless as the the OpenStack way of addressing the Thing the project
does.


I disagree with part (b); projects are not code - projects, like Soylent 
Green, are people. So it's not critical that the implementation is the 
one the TC wants to bless, what's critical is that the right people are 
involved to get to an implementation that the TC would be comfortable 
blessing over time. For example, everyone agrees that Ceilometer has 
room for improvement, but any implication that Ceilometer is not 
interested in or driving towards those improvements (because of NIH or 
whatever) is, as has been pointed out, grossly unfair to the Ceilometer 
team.


I think the rest of your plan is a way of recognising this 
appropriately, that the current implementation is actually not the 
be-all and end-all of how the TC should view a project.


cheers,
Zane.



Re: [openstack-dev] [all] The future of the integrated release

2014-08-20 Thread Zane Bitter

On 11/08/14 05:24, Thierry Carrez wrote:

So the idea that being (and remaining) in the integrated release should
also be judged on technical merit is a slightly different effort. It's
always been a factor in our choices, but like Devananda says, it's more
difficult than just checking a number of QA/integration checkboxes. In
some cases, blessing one project in a problem space stifles competition,
innovation and alternate approaches. In some other cases, we reinvent
domain-specific solutions rather than standing on the shoulders of
domain-specific giants in neighboring open source projects.


I totally agree that these are the things we need to be vigilant about.

Stifling competition is a big worry, but it appears to me that a lot of 
the stifling is happening even before incubation. Everyone's time is 
limited, so if you happen to notice a new project on the incubation 
trajectory doing things in what you think is the Wrong Way, you're most 
likely to either leave some drive-by feedback or to just ignore it and 
carry on with your life. What you're most likely *not* to do is to start 
a competing project to prove them wrong, or to jump in full time to the 
existing project and show them the light. It's really hard to argue 
against the domain experts too - when you're acutely aware of how 
shallow your knowledge is in a particular area it's very hard to know 
how hard to push. (Perhaps ironically, since becoming a PTL I feel I 
have to be much more cautious in what I say too, because people are 
inclined to read too much into my opinion - I wonder if TC members feel 
the same pressure.) I speak from first-hand instances of guilt here - 
for example, I gave some feedback to the Mistral folks just before the 
last design summit[1], but I haven't had time to follow it up at all. I 
wouldn't be a bit surprised if they showed up with an incubation 
request, a largely-unchanged user interface and an expectation that I 
would support it.


The result is that projects often don't hear the feedback they need 
until far too late - often when they get to the incubation review (maybe 
not even their first incubation review). In the particularly unfortunate 
case of Marconi, it wasn't until the graduation review. (More about that 
in a second.) My best advice to new projects here is that you must be 
like a ferret up the pant-leg of any negative feedback. Grab hold of any 
criticism and don't let go until you have either converted the person 
giving it into your biggest supporter, been converted by them, or 
provoked them to start a competing project. (Any of those is a win as 
far as the community is concerned.)


Perhaps we could consider a space like a separate mailing list 
(openstack-future?) reserved just for announcements of Related projects, 
their architectural principles, and discussions of the same?  They 
certainly tend to get drowned out amidst the noise of openstack-dev. 
(Project management, meeting announcements, and internal project 
discussion would all be out of scope for this list.)


As for reinventing domain-specific solutions, I'm not sure that happens 
as often as is being made out. IMO the defining feature of IaaS that 
makes the cloud the cloud is on-demand (i.e. real-time) self-service. 
Everything else more or less falls out of that requirement, but the very 
first thing to fall out is multi-tenancy and there just aren't that many 
multi-tenant services floating around out there. There are a couple of 
obvious strategies to deal with that: one is to run existing software 
within a tenant-local resource provisioned by OpenStack (Trove and 
Sahara are examples of this), and the other is to wrap a multi-tenancy 
framework around an existing piece of software (Nova and Cinder are 
examples of this). (BTW the former is usually inherently less 
satisfying, because it scales at a much coarser granularity.) The answer 
to a question of the form:


Why do we need OpenStack project $X, when open source project $Y 
already exists?


is almost always:

Because $Y is not multi-tenant aware; we need to wrap it with a 
multi-tenancy layer with OpenStack-native authentication, metering and 
quota management. That even allows us to set up an abstraction layer so 
that you can substitute $Z as the back end too.


This is completely uncontroversial when you substitute X, Y, Z = Nova, 
libvirt, Xen. However, when you instead substitute X, Y, Z = 
Zaqar/Marconi, Qpid, MongoDB it suddenly becomes *highly* controversial. 
I'm all in favour of a healthy scepticism, but I think we've passed that 
point now. (How would *you* make an AMQP bus multi-tenant?)
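The 'wrap $Y with a multi-tenancy layer' pattern described above can be made concrete with a small sketch. This is purely illustrative -- the class and method names below are invented, not real OpenStack or Zaqar code -- but it shows the two ingredients mentioned: a backend abstraction layer (so $Y can be substituted with $Z) and a tenant-scoping wrapper that adds OpenStack-style quota enforcement on top of a tenancy-unaware service.

```python
from abc import ABC, abstractmethod


class QueueBackend(ABC):
    """Abstraction layer: any single-tenant queue ($Y or $Z) can plug in."""

    @abstractmethod
    def post(self, queue_name, message): ...

    @abstractmethod
    def get(self, queue_name): ...


class InMemoryBackend(QueueBackend):
    """Stand-in backend for illustration; a real driver would wrap $Y."""

    def __init__(self):
        self._queues = {}

    def post(self, queue_name, message):
        self._queues.setdefault(queue_name, []).append(message)

    def get(self, queue_name):
        msgs = self._queues.get(queue_name, [])
        return msgs.pop(0) if msgs else None


class MultiTenantQueueService:
    """The multi-tenancy layer: namespaces every queue by tenant and
    enforces a per-tenant quota before touching the backend."""

    def __init__(self, backend, quota=100):
        self._backend = backend
        self._quota = quota
        self._usage = {}

    def post(self, tenant_id, queue_name, message):
        if self._usage.get(tenant_id, 0) >= self._quota:
            raise RuntimeError("quota exceeded for tenant %s" % tenant_id)
        self._usage[tenant_id] = self._usage.get(tenant_id, 0) + 1
        # Tenant isolation: the prefix keeps tenants' queues disjoint.
        self._backend.post("%s/%s" % (tenant_id, queue_name), message)

    def get(self, tenant_id, queue_name):
        return self._backend.get("%s/%s" % (tenant_id, queue_name))


svc = MultiTenantQueueService(InMemoryBackend(), quota=2)
svc.post("tenant-a", "jobs", "hello")
svc.post("tenant-b", "jobs", "world")
print(svc.get("tenant-a", "jobs"))  # -> hello; tenant-a sees only its own messages
```

The same shape fits the Nova/libvirt/Xen substitution: the driver interface is the abstraction layer, while the tenant prefix and quota check stand in for OpenStack-native authentication, metering and quota management.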


To be clear, Marconi did make a mistake. The Marconi API presented 
semantics to the user that excluded many otherwise-obvious choices of 
back-end plugin (i.e. Qpid/RabbitMQ). It seems to be a common thing (see 
also: Mistral) to want to design for every feature an existing 
Enterprisey application might use, which IMHO kind of ignores the fact 
that 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-20 Thread Jay Pipes

On 08/20/2014 11:41 AM, Zane Bitter wrote:

On 19/08/14 10:37, Jay Pipes wrote:


By graduating an incubated project into the integrated release, the
Technical Committee is blessing the project as the OpenStack way to do
some thing. If there are projects that are developed *in the OpenStack
ecosystem* that are actively being developed to serve the purpose that
an integrated project serves, then I think it is the responsibility of
the Technical Committee to take another look at the integrated project
and answer the following questions definitively:

  a) Is the Thing that the project addresses something that the
Technical Committee believes the OpenStack ecosystem benefits from by
the TC making a judgement on what is the OpenStack way of addressing
that Thing.

and IFF the decision of the TC on a) is YES, then:

  b) Is the Vision and Implementation of the currently integrated
project the one that the Technical Committee wishes to continue to
bless as the the OpenStack way of addressing the Thing the project
does.


I disagree with part (b); projects are not code - projects, like Soylent
Green, are people.


Hey! Don't steal my slide content! :P

http://bit.ly/navigating-openstack-community (slide 3)

 So it's not critical that the implementation is the

one the TC wants to bless, what's critical is that the right people are
involved to get to an implementation that the TC would be comfortable
blessing over time. For example, everyone agrees that Ceilometer has
room for improvement, but any implication that Ceilometer is not
interested in or driving towards those improvements (because of NIH or
whatever) is, as has been pointed out, grossly unfair to the Ceilometer
team.


I certainly have not made such an implication about Ceilometer. What I 
see in the Ceilometer space, though, is that there are clearly a number 
of *active* communities of OpenStack engineers developing code that 
crosses similar problem spaces. I think the TC blessing one of those 
communities before the market has had a chance to do a bit more 
natural filtering of quality is a barrier to innovation. I think having 
all of those separate teams able to contribute code to an openstack/ 
code namespace and naturally work to resolve differences and merge 
innovation is a better fit for a meritocracy.



I think the rest of your plan is a way of recognising this
appropriately, that the current implementation is actually not the
be-all and end-all of how the TC should view a project.


Yes, quite well said.

Best,
-jay




Re: [openstack-dev] [all] The future of the integrated release

2014-08-20 Thread Chris Friesen

On 08/20/2014 07:21 AM, Jay Pipes wrote:

Hi Thierry, thanks for the reply. Comments inline. :)

On 08/20/2014 06:32 AM, Thierry Carrez wrote:

If we want to follow your model, we probably would have to dissolve
programs as they stand right now, and have blessed categories on one
side, and teams on the other (with projects from some teams being
blessed as the current solution).


Why do we have to have blessed categories at all? I'd like to think of
a day when the TC isn't picking winners or losers at all. Level the
playing field and let the quality of the projects themselves determine
the winner in the space. Stop the incubation and graduation madness and
change the role of the TC to instead play an advisory role to upcoming
(and existing!) projects on the best ways to integrate with other
OpenStack projects, if integration is something that is natural for the
project to work towards.


It seems to me that at some point you need to have a recommended way of 
doing things, otherwise it's going to be *really hard* for someone to 
bring up an OpenStack installation.


We already run into issues with something as basic as competing SQL 
databases.  If every component has several competing implementations and 
none of them are 'official', how many more interaction issues are going 
to trip us up?


Chris



Re: [openstack-dev] [all] The future of the integrated release

2014-08-20 Thread Jay Pipes

On 08/20/2014 05:06 PM, Chris Friesen wrote:

On 08/20/2014 07:21 AM, Jay Pipes wrote:

Hi Thierry, thanks for the reply. Comments inline. :)

On 08/20/2014 06:32 AM, Thierry Carrez wrote:

If we want to follow your model, we probably would have to dissolve
programs as they stand right now, and have blessed categories on one
side, and teams on the other (with projects from some teams being
blessed as the current solution).


Why do we have to have blessed categories at all? I'd like to think of
a day when the TC isn't picking winners or losers at all. Level the
playing field and let the quality of the projects themselves determine
the winner in the space. Stop the incubation and graduation madness and
change the role of the TC to instead play an advisory role to upcoming
(and existing!) projects on the best ways to integrate with other
OpenStack projects, if integration is something that is natural for the
project to work towards.


It seems to me that at some point you need to have a recommended way of
doing things, otherwise it's going to be *really hard* for someone to
bring up an OpenStack installation.


Why can't there be multiple recommended ways of setting up an OpenStack 
installation? Matter of fact, in reality, there already are multiple 
recommended ways of setting up an OpenStack installation, aren't there?


There's multiple distributions of OpenStack, multiple ways of doing 
bare-metal deployment, multiple ways of deploying different message 
queues and DBs, multiple ways of establishing networking, multiple open 
and proprietary monitoring systems to choose from, etc. And I don't 
really see anything wrong with that.



We already run into issues with something as basic as competing SQL
databases.


If the TC suddenly said "Only MySQL will be supported", that would not 
mean that the greater OpenStack community would be served better. It 
would just unnecessarily take options away from deployers.


 If every component has several competing implementations and

none of them are "official", how many more interaction issues are going
to trip us up?


IMO, OpenStack should be about choice. Choice of hypervisor, choice of 
DB and MQ infrastructure, choice of operating systems, choice of storage 
vendors, choice of networking vendors.


If there are multiple actively-developed projects that address the same 
problem space, I think it serves our OpenStack users best to let the 
projects work things out themselves and let the cream rise to the top. 
If the cream ends up being one of those projects, so be it. If the cream 
ends up being a mix of both projects, so be it. The production community 
will end up determining what that cream should be based on what it 
deploys into its clouds and what input it supplies to the teams working 
on competing implementations.


And who knows... what works or is recommended by one deployer may not be 
what is best for another type of deployer and I believe we (the 
TC/governance) do a disservice to our user community by picking a winner 
in a space too early (or continuing to pick a winner in a clearly 
unsettled space).


Just my thoughts on the topic, as they've evolved over the years from 
being a pure developer, to doing QA, then deploy/ops work, and back to 
doing development on OpenStack...


Best,
-jay








Re: [openstack-dev] [all] The future of the integrated release

2014-08-20 Thread Angus Salkeld
On Thu, Aug 21, 2014 at 2:37 AM, Zane Bitter zbit...@redhat.com wrote:

 On 11/08/14 05:24, Thierry Carrez wrote:

 So the idea that being (and remaining) in the integrated release should
 also be judged on technical merit is a slightly different effort. It's
 always been a factor in our choices, but like Devananda says, it's more
 difficult than just checking a number of QA/integration checkboxes. In
 some cases, blessing one project in a problem space stifles competition,
 innovation and alternate approaches. In some other cases, we reinvent
 domain-specific solutions rather than standing on the shoulders of
 domain-specific giants in neighboring open source projects.


 I totally agree that these are the things we need to be vigilant about.

 Stifling competition is a big worry, but it appears to me that a lot of
 the stifling is happening even before incubation. Everyone's time is
 limited, so if you happen to notice a new project on the incubation
 trajectory doing things in what you think is the Wrong Way, you're most
 likely to either leave some drive-by feedback or to just ignore it and
 carry on with your life. What you're most likely *not* to do is to start a
 competing project to prove them wrong, or to jump in full time to the
 existing project and show them the light. It's really hard to argue against
 the domain experts too - when you're acutely aware of how shallow your
 knowledge is in a particular area it's very hard to know how hard to push.
 (Perhaps ironically, since becoming a PTL I feel I have to be much more
 cautious in what I say too, because people are inclined to read too much
 into my opinion - I wonder if TC members feel the same pressure.) I speak
 from first-hand instances of guilt here - for example, I gave some feedback
 to the Mistral folks just before the last design summit[1], but I haven't
 had time to follow it up at all. I wouldn't be a bit surprised if they
 showed up with an incubation request, a largely-unchanged user interface
 and an expectation that I would support it.

 The result is that projects often don't hear the feedback they need until
 far too late - often when they get to the incubation review (maybe not even
 their first incubation review). In the particularly unfortunate case of
 Marconi, it wasn't until the graduation review. (More about that in a
 second.) My best advice to new projects here is that you must be like a
 ferret up the pant-leg of any negative feedback. Grab hold of any criticism
 and don't let go until you have either converted the person giving it into
 your biggest supporter, been converted by them, or provoked them to start a
 competing project. (Any of those is a win as far as the community is
 concerned.)

 Perhaps we could consider a space like a separate mailing list
 (openstack-future?) reserved just for announcements of Related projects,
 their architectural principles, and discussions of the same?  They
 certainly tend to get drowned out amidst the noise of openstack-dev.
 (Project management, meeting announcements, and internal project discussion
 would all be out of scope for this list.)

 As for reinventing domain-specific solutions, I'm not sure that happens as
 often as is being made out. IMO the defining feature of IaaS that makes the
 cloud the cloud is on-demand (i.e. real-time) self-service. Everything else
 more or less falls out of that requirement, but the very first thing to
 fall out is multi-tenancy and there just aren't that many multi-tenant
 services floating around out there. There are a couple of obvious
 strategies to deal with that: one is to run existing software within a
 tenant-local resource provisioned by OpenStack (Trove and Sahara are
 examples of this), and the other is to wrap a multi-tenancy framework
 around an existing piece of software (Nova and Cinder are examples of
 this). (BTW the former is usually inherently less satisfying, because it
 scales at a much coarser granularity.) The answer to a question of the form:

 Why do we need OpenStack project $X, when open source project $Y already
 exists?

 is almost always:

 Because $Y is not multi-tenant aware; we need to wrap it with a
 multi-tenancy layer with OpenStack-native authentication, metering and
 quota management. That even allows us to set up an abstraction layer so
 that you can substitute $Z as the back end too.

 This is completely uncontroversial when you substitute X, Y, Z = Nova,
 libvirt, Xen. However, when you instead substitute X, Y, Z = Zaqar/Marconi,
 Qpid, MongoDB it suddenly becomes *highly* controversial. I'm all in favour
 of a healthy scepticism, but I think we've passed that point now. (How
 would *you* make an AMQP bus multi-tenant?)
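The wrapping pattern described above can be sketched in a few lines. This is a toy illustration only, with hypothetical names (it is not Zaqar's, Nova's, or any project's real API): a single-tenant back-end driver ($Y, substitutable for $Z), and a thin layer on top that adds tenant scoping and quota enforcement.

```python
from abc import ABC, abstractmethod
from typing import Dict, List, Optional


class QueueDriver(ABC):
    """The pluggable back end -- the $Y (or substitutable $Z) in the pattern."""

    @abstractmethod
    def post(self, queue: str, message: str) -> None: ...

    @abstractmethod
    def pop(self, queue: str) -> Optional[str]: ...


class InMemoryDriver(QueueDriver):
    """Toy stand-in; a real deployment would plug in Qpid, MongoDB, etc."""

    def __init__(self) -> None:
        self._queues: Dict[str, List[str]] = {}

    def post(self, queue: str, message: str) -> None:
        self._queues.setdefault(queue, []).append(message)

    def pop(self, queue: str) -> Optional[str]:
        msgs = self._queues.get(queue)
        return msgs.pop(0) if msgs else None


class TenantQueueService:
    """The multi-tenancy wrapper: authentication hands us a tenant id, and
    every operation is namespaced and quota-checked before it reaches the
    single-tenant driver underneath."""

    def __init__(self, driver: QueueDriver, quota: int = 100) -> None:
        self._driver = driver
        self._quota = quota
        self._usage: Dict[str, int] = {}

    def post(self, tenant: str, queue: str, message: str) -> None:
        if self._usage.get(tenant, 0) >= self._quota:
            raise RuntimeError("quota exceeded for tenant %s" % tenant)
        self._usage[tenant] = self._usage.get(tenant, 0) + 1
        # Baking the tenant into the back-end queue name means tenants
        # can never address each other's queues.
        self._driver.post("%s/%s" % (tenant, queue), message)

    def pop(self, tenant: str, queue: str) -> Optional[str]:
        msg = self._driver.pop("%s/%s" % (tenant, queue))
        if msg is not None:
            self._usage[tenant] -= 1
        return msg
```

The point of the sketch is that the wrapper, not the back end, is where the OpenStack-native auth, metering and quota semantics live, which is exactly why "$Y already exists" is not an answer on its own.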

 To be clear, Marconi did make a mistake. The Marconi API presented
 semantics to the user that excluded many otherwise-obvious choices of
 back-end plugin (e.g. Qpid/RabbitMQ). It seems to be a common thing (see
 also: Mistral) to want to design for every feature an existing 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-20 Thread Clint Byrum
Excerpts from Robert Collins's message of 2014-08-18 23:41:20 -0700:
 On 18 August 2014 09:32, Clint Byrum cl...@fewbar.com wrote:
 
 I can see your perspective but I don't think it's internally consistent...
 
  Here's why folk are questioning Ceilometer:
 
  Nova is a set of tools to abstract virtualization implementations.
 
 With a big chunk of local things - local image storage (now in
 glance), scheduling, rebalancing, ACLs and quotas. Other
  implementations that abstract over VMs at various layers already
  existed when Nova started - some bad (some very bad!) and others
 actually quite ok.
 

The fact that we have local implementations of domain specific things is
irrelevant to the difference I'm trying to point out. Glance needs to
work with the same authentication semantics and share a common access
catalog to work well with Nova. It's unlikely there's a generic image
catalog that would ever fit this bill. In many ways glance is just an
abstraction of file storage backends and a database to track a certain
domain of files (images, and soon, templates and other such things).

The point of mentioning Nova is, we didn't write libvirt, or xen, we
wrote an abstraction so that users could consume them via a REST API
that shares these useful automated backends like glance.
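The "abstraction of file storage backends plus a database tracking one domain of files" shape described above can be made concrete with a short sketch. These are purely illustrative names under that description, not Glance's actual interfaces: a content-addressed blob store that could be swapped out, and a catalog that only tracks metadata and keys.

```python
import hashlib
from typing import Dict, Tuple


class FileStore:
    """Hypothetical pluggable blob back end (local disk, Swift, S3, ...)."""

    def __init__(self) -> None:
        self._blobs: Dict[str, bytes] = {}

    def put(self, data: bytes) -> str:
        # Content-addressed key, so identical uploads deduplicate naturally.
        key = hashlib.sha256(data).hexdigest()
        self._blobs[key] = data
        return key

    def get(self, key: str) -> bytes:
        return self._blobs[key]


class ImageCatalog:
    """The database half: tracks metadata for one domain of files (images)
    while delegating the bytes to whichever FileStore was configured."""

    def __init__(self, store: FileStore) -> None:
        self._store = store
        self._records: Dict[str, Tuple[str, dict]] = {}

    def register(self, name: str, data: bytes, **metadata) -> None:
        self._records[name] = (self._store.put(data), metadata)

    def fetch(self, name: str) -> bytes:
        key, _meta = self._records[name]
        return self._store.get(key)
```

Nothing in the catalog cares which store backs it, which is the sense in which such a service is an abstraction rather than a from-scratch implementation.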

  Neutron is a set of tools to abstract SDN/NFV implementations.
 
  And implements a DHCP service, DNS service, overlay networking: it's
  much more than an abstraction-over-other-implementations.
 

Native DHCP and overlay? Last I checked Neutron used dnsmasq and
openvswitch, but it has been a few months, and I know that is an eon in
OpenStack time.

  Cinder is a set of tools to abstract block-device implementations.
  Trove is a set of tools to simplify consumption of existing databases.
  Sahara is a set of tools to simplify Hadoop consumption.
  Swift is a feature-complete implementation of object storage, none of
  which existed when it was started.
 
 Swift was started in 2009; Eucalyptus goes back to 2007, with Walrus
 part of that - I haven't checked precise dates, but I'm pretty sure
 that it existed and was usable by the start of 2009. There may well be
 other object storage implementations too - I simply haven't checked.
 

Indeed, and MogileFS was sort of like Swift but not HTTP based. Perhaps
Walrus was evaluated and inadequate for the CloudFiles product
requirements? I don't know. But there weren't de-facto object stores
at the time because object stores were just becoming popular.

  Keystone supports all of the above, unifying their auth.
 
 And implementing an IdP (which I know they want to stop doing ;)). And
 in fact lots of OpenStack projects, for various reasons support *not*
  using Keystone (something that bugs me, but that's a different
 discussion).
 

My point was it is justified to have a whole implementation and not
just abstraction because it is meant to enable the ecosystem, not _be_
the ecosystem. I actually think Keystone is problematic too, and I often
wonder why we haven't just done OAuth, but I'm not trying to throw every
project under the bus. I'm trying to state that we accept Keystone because
it has grown organically to support the needs of all the other pieces.

  Horizon supports all of the above, unifying their GUI.
 
  Ceilometer is a complete implementation of data collection and alerting.
  There is no shortage of implementations that exist already.
 
  I'm also core on two projects that are getting some push back these
  days:
 
  Heat is a complete implementation of orchestration. There are at least a
 few of these already in existence, though not as many as there are data
  collection and alerting systems.
 
  TripleO is an attempt to deploy OpenStack using tools that OpenStack
  provides. There are already quite a few other tools that _can_ deploy
  OpenStack, so it stands to reason that people will question why we
  don't just use those. It is my hope we'll push more into the unifying
  the implementations space and withdraw a bit from the implementing
  stuff space.
 
  So, you see, people are happy to unify around a single abstraction, but
  not so much around a brand new implementation of things that already
  exist.
 
 If the other examples we had were a lot purer, this explanation would
 make sense. I think there's more to it than that though :).
 

If purity is required to show a difference, then I don't think I know
how to demonstrate what I think is obvious to most of us: Ceilometer
is an end to end implementation of things that exist in many battle
tested implementations. I struggle to think of another component of
OpenStack that has this distinction.

  What exactly, I don't know, but it's just too easy an answer, and one
 that doesn't stand up to non-trivial examination :(.
 
 I'd like to see more unification of implementations in TripleO - but I
 still believe our basic principle of using OpenStack technologies that
 already exist in preference to third party ones is still 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-20 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2014-08-20 14:53:22 -0700:
 On 08/20/2014 05:06 PM, Chris Friesen wrote:
  On 08/20/2014 07:21 AM, Jay Pipes wrote:
  Hi Thierry, thanks for the reply. Comments inline. :)
 
  On 08/20/2014 06:32 AM, Thierry Carrez wrote:
  If we want to follow your model, we probably would have to dissolve
  programs as they stand right now, and have blessed categories on one
  side, and teams on the other (with projects from some teams being
  blessed as the current solution).
 
  Why do we have to have blessed categories at all? I'd like to think of
  a day when the TC isn't picking winners or losers at all. Level the
  playing field and let the quality of the projects themselves determine
  the winner in the space. Stop the incubation and graduation madness and
  change the role of the TC to instead play an advisory role to upcoming
  (and existing!) projects on the best ways to integrate with other
  OpenStack projects, if integration is something that is natural for the
  project to work towards.
 
  It seems to me that at some point you need to have a recommended way of
  doing things, otherwise it's going to be *really hard* for someone to
  bring up an OpenStack installation.
 
 Why can't there be multiple recommended ways of setting up an OpenStack 
 installation? Matter of fact, in reality, there already are multiple 
 recommended ways of setting up an OpenStack installation, aren't there?
 
 There's multiple distributions of OpenStack, multiple ways of doing 
 bare-metal deployment, multiple ways of deploying different message 
 queues and DBs, multiple ways of establishing networking, multiple open 
 and proprietary monitoring systems to choose from, etc. And I don't 
 really see anything wrong with that.
 

This is an argument for loosely coupling things, rather than tightly
integrating things. You will almost always win my vote with that sort of
movement, and you have here. +1.

  We already run into issues with something as basic as competing SQL
  databases.
 
 If the TC suddenly said "Only MySQL will be supported", that would not 
 mean that the greater OpenStack community would be served better. It 
 would just unnecessarily take options away from deployers.
 

This is really where supported becomes the mutex binding us all. The
more supported options, the larger the matrix, the more complex a
user's decision process becomes.
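That matrix growth is easy to quantify: every independently "supported" axis multiplies, so the combinations a deployer, the gate, and the docs must cover grow far faster than the option lists themselves. A toy illustration (the option lists here are made up for the sake of the arithmetic):

```python
from itertools import product

# Hypothetical axes of "supported" choices; each extra alternative on any
# axis multiplies the combinations that must be reasoned about and tested.
databases = ["mysql", "postgresql"]
queues = ["rabbitmq", "qpid", "zeromq"]
hypervisors = ["kvm", "xen", "vmware", "hyperv"]

matrix = list(product(databases, queues, hypervisors))
print(len(matrix))  # 2 * 3 * 4 = 24 combinations, from only 9 options
```

Adding a single new "supported" queue would take this from 24 to 32 combinations, which is why each addition to the matrix is a real cost, not just a line in the docs.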

   If every component has several competing implementations and
  none of them are "official", how many more interaction issues are going
  to trip us up?
 
 IMO, OpenStack should be about choice. Choice of hypervisor, choice of 
 DB and MQ infrastructure, choice of operating systems, choice of storage 
 vendors, choice of networking vendors.
 

Err, uh. I think OpenStack should be about users. If having 400 choices
means users are just confused, then OpenStack becomes nothing and
everything all at once. Choices should be part of the whole not when 1%
of the market wants a choice, but when 20%+ of the market _requires_
a choice.

What we shouldn't do is harm that 1%'s ability to be successful. We should
foster it and help it grow, but we don't just pull it into the program and
say "You're ALSO in OpenStack now!" and we also don't want to force those
users to make a hard choice because the better solution is not blessed.

 If there are multiple actively-developed projects that address the same 
 problem space, I think it serves our OpenStack users best to let the 
 projects work things out themselves and let the cream rise to the top. 
 If the cream ends up being one of those projects, so be it. If the cream 
 ends up being a mix of both projects, so be it. The production community 
 will end up determining what that cream should be based on what it 
 deploys into its clouds and what input it supplies to the teams working 
 on competing implementations.
 

I'm really not a fan of making it a competitive market. If a space has a
diverse set of problems, we can expect it will have a diverse set of
solutions that overlap. But that doesn't mean they both need to drive
toward making that overlap all-encompassing. Sometimes that happens and
it is good, and sometimes that happens and it causes horrible bloat.

 And who knows... what works or is recommended by one deployer may not be 
 what is best for another type of deployer and I believe we (the 
 TC/governance) do a disservice to our user community by picking a winner 
 in a space too early (or continuing to pick a winner in a clearly 
 unsettled space).
 

Right, I think our current situation crowds out diversity, when what we
want to do is enable it, without confusing the users.



Re: [openstack-dev] [all] The future of the integrated release

2014-08-19 Thread Robert Collins
On 18 August 2014 09:32, Clint Byrum cl...@fewbar.com wrote:

I can see your perspective but I don't think it's internally consistent...

 Here's why folk are questioning Ceilometer:

 Nova is a set of tools to abstract virtualization implementations.

With a big chunk of local things - local image storage (now in
glance), scheduling, rebalancing, ACLs and quotas. Other
implementations that abstract over VMs at various layers already
existed when Nova started - some bad (some very bad!) and others
actually quite ok.

 Neutron is a set of tools to abstract SDN/NFV implementations.

And implements a DHCP service, DNS service, overlay networking: it's
much more than an abstraction-over-other-implementations.

 Cinder is a set of tools to abstract block-device implementations.
 Trove is a set of tools to simplify consumption of existing databases.
 Sahara is a set of tools to simplify Hadoop consumption.
 Swift is a feature-complete implementation of object storage, none of
 which existed when it was started.

Swift was started in 2009; Eucalyptus goes back to 2007, with Walrus
part of that - I haven't checked precise dates, but I'm pretty sure
that it existed and was usable by the start of 2009. There may well be
other object storage implementations too - I simply haven't checked.

 Keystone supports all of the above, unifying their auth.

And implementing an IdP (which I know they want to stop doing ;)). And
in fact lots of OpenStack projects, for various reasons support *not*
using Keystone (something that bugs me, but that's a different
discussion).

 Horizon supports all of the above, unifying their GUI.

 Ceilometer is a complete implementation of data collection and alerting.
 There is no shortage of implementations that exist already.

 I'm also core on two projects that are getting some push back these
 days:

 Heat is a complete implementation of orchestration. There are at least a
 few of these already in existence, though not as many as there are data
 collection and alerting systems.

 TripleO is an attempt to deploy OpenStack using tools that OpenStack
 provides. There are already quite a few other tools that _can_ deploy
 OpenStack, so it stands to reason that people will question why we
 don't just use those. It is my hope we'll push more into the unifying
 the implementations space and withdraw a bit from the implementing
 stuff space.

 So, you see, people are happy to unify around a single abstraction, but
 not so much around a brand new implementation of things that already
 exist.

If the other examples we had were a lot purer, this explanation would
make sense. I think there's more to it than that though :).

What exactly, I don't know, but it's just too easy an answer, and one
that doesn't stand up to non-trivial examination :(.

I'd like to see more unification of implementations in TripleO - but I
still believe our basic principle of using OpenStack technologies that
already exist in preference to third party ones is still sound, and
offers substantial dogfood and virtuous circle benefits.

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [all] The future of the integrated release

2014-08-19 Thread Flavio Percoco
On 08/14/2014 01:08 AM, Devananda van der Veen wrote:
 On Wed, Aug 13, 2014 at 5:37 AM, Mark McLoughlin mar...@redhat.com wrote:
 On Fri, 2014-08-08 at 15:36 -0700, Devananda van der Veen wrote:
 On Tue, Aug 5, 2014 at 10:02 AM, Monty Taylor mord...@inaugust.com wrote:

  Yes.
 
  Additionally, and I think we've been getting better at this in the
  2 cycles that we've had an all-elected TC, I think we need to learn how
  to say "no" on technical merit - and we need to learn how to say "thank
  you for your effort, but this isn't working out." Breaking up with
  someone is hard to do, but sometimes it's best for everyone involved.

 I agree.

 The challenge is scaling the technical assessment of projects. We're
 all busy, and digging deeply enough into a new project to make an
 accurate assessment of it is time consuming. Some times, there are
 impartial subject-matter experts who can spot problems very quickly,
 but how do we actually gauge fitness?

 Yes, it's important the TC does this and it's obvious we need to get a
 lot better at it.

 The Marconi architecture threads are an example of us trying harder (and
 kudos to you for taking the time), but it's a little disappointing how
 it has turned out. On the one hand there's what seems like a "this
 doesn't make any sense" gut feeling and on the other hand an earnest,
 but hardly bite-sized justification for how the API was chosen and how
 it led to the architecture. It's frustrating that this appears not to be
 resulting in either improved shared understanding or improved
 architecture. Yet everyone is trying really hard.
 
 Sometimes trying really hard is not enough. Saying goodbye is hard,
 but as has been pointed out already in this thread, sometimes it's
 necessary.

Agreed, as long as the reasons behind that goodbye are based on facts
that actually affect OpenStack somehow. This is what the
incubation/graduation requirements are for.


 Letting the industry field-test a project and feed their experience
 back into the community is a slow process, but that is the best
 measure of a project's success. I seem to recall this being an
 implicit expectation a few years ago, but haven't seen it discussed in
 a while.

 I think I recall us discussing a "must have feedback that it's
 successfully deployed" requirement in the last cycle, but we recognized
 that deployers often wait until a project is integrated.
 
 In the early discussions about incubation, we respected the need to
 officially recognize a project as part of OpenStack just to create the
 uptick in adoption necessary to mature projects. Similarly, integration
 is a recognition of the maturity of a project, but I think we have
 graduated several projects long before they actually reached that level
 of maturity. Actually running a project at scale for a period of time is
 the only way to know it is mature enough to run it in production at scale.
 
 I'm just going to toss this out there. What if we set the graduation bar
 to "is in production in at least two sizeable clouds" (note that I'm not
 saying public clouds).
 knowledge, met that bar prior to graduation, and it's the only project
 that graduated since Havana that I can, off hand, point at as clearly
 successful. Heat and Ceilometer both graduated prior to being in
 production; a few cycles later, they're still having adoption problems
 and looking at large architectural changes. I think the added cost to
 OpenStack when we integrate immature or unstable projects is significant
 enough at this point to justify a more defensive posture.

What is a good value for "sizeable"? Why should a larger cloud be more
important than a smaller one? Surely a big cloud has different scale
requirements, but it would just test that: the scaling capabilities of
the project - which I agree are really important. However, any
deployment should be considered a good use case, whether it is big or
not, since they have the project deployed.

Flavio

-- 
@flaper87
Flavio Percoco



Re: [openstack-dev] [all] The future of the integrated release

2014-08-19 Thread Flavio Percoco
On 08/14/2014 03:38 PM, Russell Bryant wrote:
 On 08/14/2014 09:21 AM, Devananda van der Veen wrote:

 On Aug 14, 2014 2:04 AM, Eoghan Glynn egl...@redhat.com wrote:


 Letting the industry field-test a project and feed their experience
 back into the community is a slow process, but that is the best
 measure of a project's success. I seem to recall this being an
 implicit expectation a few years ago, but haven't seen it discussed in
 a while.

 I think I recall us discussing a "must have feedback that it's
 successfully deployed" requirement in the last cycle, but we recognized
 that deployers often wait until a project is integrated.

 In the early discussions about incubation, we respected the need to
 officially recognize a project as part of OpenStack just to create the
 uptick in adoption necessary to mature projects. Similarly, integration
 is a recognition of the maturity of a project, but I think we have
 graduated several projects long before they actually reached that level
 of maturity. Actually running a project at scale for a period of time is
 the only way to know it is mature enough to run it in production at
 scale.

 I'm just going to toss this out there. What if we set the graduation bar
 to "is in production in at least two sizeable clouds" (note that I'm not
 saying public clouds). Trove is the only project that has, to my
 knowledge, met that bar prior to graduation, and it's the only project
 that graduated since Havana that I can, off hand, point at as clearly
 successful. Heat and Ceilometer both graduated prior to being in
 production; a few cycles later, they're still having adoption problems
 and looking at large architectural changes. I think the added cost to
 OpenStack when we integrate immature or unstable projects is significant
 enough at this point to justify a more defensive posture.

 FWIW, Ironic currently doesn't meet that bar either - it's in production
 in only one public cloud. I'm not aware of large private installations yet,
 though I suspect there are some large private deployments being spun up
 right now, planning to hit production with the Juno release.

 We have some hard data from the user survey presented at the Juno summit,
 with respectively 26  53 production deployments of Heat and Ceilometer
 reported.

 There's no cross-referencing of deployment size with services in
 production in those data presented, though it may be possible to mine
 that out of the raw survey responses.

 Indeed, and while that would be useful information, I was referring to
 the deployment of those services at scale prior to graduation, not post
 graduation.
 
 We have a tough messaging problem here though.  I suspect many users
 wait until graduation to consider a real deployment. "Incubated" is
 viewed as immature / WIP / etc. That won't change quickly, even if we
 want it to.

Do we need a new stage for projects? A stage that means the project is
mature enough to be deployed but is still making its way toward being
integrated?

It's probably a terrible idea but I still wanted to throw it out there
since, depending on the stage of the incubation process, the project may
be immature or production-ready. The former is usually true at the
beginning of the incubation process, the latter at the end.

The way I read our project stages (which doesn't seem to be the same
for everyone) is:

Not Incubated:

* Immature
* Project Design
* Meeting incubation requirements
* Etc

Incubated:
* Mature enough for production
* Integrating with OpenStack's CI
* Integrating with OpenStack's community
* Meeting graduation requirements

Integrated:
* Well, integrated...

 
 I think our intentions are already to not graduate something that isn't
 ready for production.  That doesn't mean we haven't made mistakes, but
 we're trying to learn and improve.  We developed a set of *written*
 guidelines to stick to, and have been holding all projects up to them.
 Teams like Ceilometer have been very receptive to the process, have
 developed plans to fill gaps, and have been working hard on the issues.
 
 A hard rule for production deployments seems like a heavy rule.  I'd
 rather just say that we should be confident that it's a production ready
 component, and known deployments are one such piece of input that would
 provide that confidence.  It could also just be extraordinary testing
 that shows both scale and quality.

+1, we can't force people to deploy a non-integrated project and we
shouldn't hold all incubated projects to that bar.

Cheers,
Flavio

-- 
@flaper87
Flavio Percoco



Re: [openstack-dev] [all] The future of the integrated release

2014-08-19 Thread Flavio Percoco
On 08/13/2014 08:41 PM, Joe Gordon wrote:
 
 
 
 On Wed, Aug 13, 2014 at 5:13 AM, Mark McLoughlin mar...@redhat.com wrote:
 
 On Thu, 2014-08-07 at 09:30 -0400, Sean Dague wrote:
 
  While I definitely think re-balancing our quality responsibilities back
  into the projects will provide an overall better release, I think it's
  going to take a long time before it lightens our load to the point where
  we get more breathing room again.
 
 I'd love to hear more about this re-balancing idea. It sounds like we
 have some concrete ideas here and we're saying they're not relevant to
 this thread because they won't be an immediate solution?
 
  This isn't just QA issues, it's a coordination issue on overall
  consistency across projects. Something that worked fine at 5 integrated
  projects, got strained at 9, and I think is completely untenable at 15.
 
 I can certainly relate to that from experience with Oslo.
 
 But if you take a concrete example - as more new projects emerge, it
 became harder to get them all using oslo.messaging and using it in
 consistent ways. That's become a lot better with Doug's idea of Oslo
 project delegates.
 
 But if we had not added those projects to the release, the only reason
 that the problem would be more manageable is that the use of
 oslo.messaging would effectively become a requirement for integration.
 So, projects requesting integration have to take cross-project
 responsibilities more seriously for fear their application would be
 denied.
 
 That's a very sad conclusion. Our only tool for encouraging people to
 take this cross-project issue seriously is being accepted into the
 release and, once achieved, the cross-project responsibilities aren't
 taken so seriously?
 
 I don't think it's so bleak as that - given the proper support,
 direction and tracking I think we're seeing in Oslo how projects will
 play their part in getting to cross-project consistency.
 
  I think one of the big issues with a large number of projects is that
  implications of implementation of one project impact others, but people
  don't always realize. Locally correct decisions for each project may not
  be globally correct for OpenStack. The GBP discussion, the Rally
  discussion, all are flavors of this.
 
 I think we need two things here - good examples of how these
 cross-project initiatives can succeed so people can learn from them, and
 for the initiatives themselves to be patiently lead by those whose goal
 is a cross-project solution.
 
 It's hard work, absolutely no doubt. The point again, though, is that it
 is possible to do this type of work in such a way that once a small
 number of projects adopt the approach, most of the others will follow
 quite naturally.
 
 If I were trying to get a consistent cross-project approach in a
 particular area, the least of my concerns would be whether Ironic,
 Marconi, Barbican or Designate would be willing to fall in line behind a
 cross-project consensus.
 
  People are frustrated with infra load, for instance. It's probably
  worth noting that the 'config' repo currently has more commits landed
  than any other project in OpenStack besides 'nova' in this release. It
  has 30% of Nova's core team size (http://stackalytics.com/?metric=commits).
 
 Yes, infra is an extremely busy project. I'm not sure I'd compare
 infra/config commits to Nova commits in order to illustrate that,
 though.
 
 Infra is a massive endeavor; it's as critical a part of OpenStack as
 any project in the integrated release, and like other strategic
 efforts it struggles to attract contributors from as diverse a number
 of companies as the integrated projects.
 
  So I do think we need to really think about what *must* be in
  OpenStack for it to be successful, and ensure that story is well
  thought out, and that the pieces which provide those features in
  OpenStack are clearly best of breed, so they are deployed in all
  OpenStack deployments, and can be counted on by users of OpenStack.
 
 I do think we try hard to think this through, but no doubt we need to do
 better. Is this conversation concrete enough to really move our thinking
 along sufficiently, though?
 
  Because if every version of
  OpenStack deploys with a different Auth API (an example that's current
  but going away), we can't grow an ecosystem of tools around it.
 
 There's a nice concrete example, but it's going away? What's the best
 current example to talk through?
 
  This is an organic definition of OpenStack through feedback from
  operators and developers on what's the minimum needed and currently
  working well enough that people are happy to maintain it.

Re: [openstack-dev] [all] The future of the integrated release

2014-08-19 Thread Sandy Walsh
On 8/18/2014 9:27 AM, Thierry Carrez wrote:
 Clint Byrum wrote:
 Here's why folk are questioning Ceilometer:

 Nova is a set of tools to abstract virtualization implementations.
 Neutron is a set of tools to abstract SDN/NFV implementations.
 Cinder is a set of tools to abstract block-device implementations.
 Trove is a set of tools to simplify consumption of existing databases.
 Sahara is a set of tools to simplify Hadoop consumption.
 Swift is a feature-complete implementation of object storage, none of
 which existed when it was started.
 Keystone supports all of the above, unifying their auth.
 Horizon supports all of the above, unifying their GUI.

 Ceilometer is a complete implementation of data collection and alerting.
 There is no shortage of implementations that exist already.

 I'm also core on two projects that are getting some push back these
 days:

 Heat is a complete implementation of orchestration. There are at least a
 few of these already in existence, though not as many as there are data
 collection and alerting systems.

 TripleO is an attempt to deploy OpenStack using tools that OpenStack
 provides. There are already quite a few other tools that _can_ deploy
 OpenStack, so it stands to reason that people will question why we
 don't just use those. It is my hope we'll push more into the unifying
 the implementations space and withdraw a bit from the implementing
 stuff space.

 So, you see, people are happy to unify around a single abstraction, but
 not so much around a brand new implementation of things that already
 exist.
 Right, most projects focus on providing abstraction above
 implementations, and that abstraction is where the real domain
 expertise of OpenStack should be (because no one else is going to do it
 for us). Every time we reinvent something, we are at larger risk because
 we are out of our common specialty, and we just may not be as good as
 the domain specialists. That doesn't mean we should never reinvent
 something, but we need to be damn sure it's a good idea before we do.
 It's sometimes less fun to piggyback on existing implementations, but if
 they exist that's probably what we should do.

 While Ceilometer is far from alone in that space, what sets it apart is
 that even after it was blessed by the TC as the one we should all
 converge on, we keep on seeing competing implementations for some (if
 not all) of its scope. Convergence did not happen, and without
 convergence we struggle in adoption. We need to understand why, and if
 this is fixable.


So, here's what happened with StackTach ...

We had two teams working on StackTach, one group working on the original
program (v2) and another working on Ceilometer integration of our new
design. The problem was, there was no way we could compete with the
speed of the v2 team. Every little thing we needed to do in OpenStack
was a herculean effort. Submit a branch in one place, it needs to go
somewhere else. Spend weeks trying to land a branch. Endlessly debate
about minutiae. It goes on.

I know that's the nature of running a large project. And I know everyone
is feeling it.

We quickly came to realize that, if the stars aligned and we did what we
needed to do, we'd only be playing catch-up to the other StackTach team.
And StackTach had growing pains. We needed this new architecture to
solve real business problems *today*. This isn't "build it and they will
come"; this is "we know it's valuable ... when can I have the new one?"
Like everyone, we have incredible pressure to deliver and we can't
accurately forecast with so many uncontrollable factors.

Much of what is now StackTach.v3 is (R)esearch not (D)evelopment. With
R, we need to be able to run a little fast-and-loose. Not every pull
request is a masterpiece. Our plans are going to change. We need to have
room to experiment. If it was all just D, yes, we could be more formal.
But we frequently go down a road to find a dead end and need to adjust.

We started on StackTach.v3 outside of formal OpenStack. It's still open
source. We still talk with interested parties (including ceilo) about
the design and how we're going to fulfill their needs, but we're mostly
head-down trying to get a production ready release in place. In the
process, we're making all of StackTach.v3 as tiny repos that other
groups (like Ceilo and Monasca) can adopt if they find them useful. Even
our impending move to StackForge is going to be a big productivity hit,
but it's necessary for some of our potential contributors.

Will we later revisit integration with Ceilometer? Possibly, but it's
not a priority. We have to serve the customers that are screaming for
v3. Arguably this is more of a BDFL model, but in order to innovate
quickly, get to large-scale production and remain competitive it may be
necessary.

This is why I'm pushing for an API-first model in OpenStack. Alternative
implementations shouldn't have to live outside the tribe.
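As a minimal sketch of that API-first idea, hedged heavily: every name
below is invented for illustration, nothing here is an actual OpenStack
API. The point is that the contract (plus its shared integration tests)
could live in one repo, while competing backends implement it elsewhere:

```python
from abc import ABC, abstractmethod

# Hypothetical illustration: the "API repo" would hold only this contract
# (plus its shared integration tests); competing backends live elsewhere.
class NotificationStoreAPI(ABC):
    """Contract a StackTach/Ceilometer-style event store must honor."""

    @abstractmethod
    def record(self, event: dict) -> None:
        """Persist a single notification event."""

    @abstractmethod
    def query(self, **filters) -> list:
        """Return events whose fields match all the given filters."""

class InMemoryStore(NotificationStoreAPI):
    """One competing backend; others might be SQL- or time-series-based."""

    def __init__(self):
        self._events = []

    def record(self, event):
        self._events.append(event)

    def query(self, **filters):
        return [e for e in self._events
                if all(e.get(k) == v for k, v in filters.items())]

# The shared test suite runs unchanged against every backend, which is
# what keeps alternative implementations compatible at the API level.
def check_contract(store: NotificationStoreAPI) -> None:
    store.record({"event_type": "compute.instance.create", "id": 1})
    assert store.query(event_type="compute.instance.create")[0]["id"] == 1

check_contract(InMemoryStore())
```

The same check_contract suite would run against a SQL-backed or
time-series-backed store, so alternatives stay inside the tribe without
forking the API.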

(as always, my view only)



Re: [openstack-dev] [all] The future of the integrated release

2014-08-19 Thread Jay Pipes
Caution: words below may cause discomfort. I ask that folks read *all* 
of my message before reacting to any piece of it. Thanks!


On 08/19/2014 02:41 AM, Robert Collins wrote:

On 18 August 2014 09:32, Clint Byrum cl...@fewbar.com wrote:

I can see your perspective but I don't think it's internally consistent...


Here's why folk are questioning Ceilometer:

Nova is a set of tools to abstract virtualization implementations.


With a big chunk of local things - local image storage (now in
glance), scheduling, rebalancing, ACLs and quotas. Other
implementations that abstract over VM's at various layers already
existed when Nova started - some bad (some very bad!) and others
actually quite ok.


Neutron is a set of tools to abstract SDN/NFV implementations.


And implements a DHCP service, DNS service, overlay networking: it's
much more than an abstraction-over-other-implementations.


Cinder is a set of tools to abstract block-device implementations.
Trove is a set of tools to simplify consumption of existing databases.
Sahara is a set of tools to simplify Hadoop consumption.
Swift is a feature-complete implementation of object storage, none of
which existed when it was started.


Swift was started in 2009; Eucalyptus goes back to 2007, with Walrus
part of that - I haven't checked precise dates, but I'm pretty sure
that it existed and was usable by the start of 2009. There may well be
other object storage implementations too - I simply haven't checked.


Keystone supports all of the above, unifying their auth.


And implementing an IdP (which I know they want to stop doing ;)). And
in fact lots of OpenStack projects, for various reasons, support *not*
using Keystone (something that bugs me, but that's a different
discussion).


Horizon supports all of the above, unifying their GUI.

Ceilometer is a complete implementation of data collection and alerting.
There is no shortage of implementations that exist already.

I'm also core on two projects that are getting some push back these
days:

Heat is a complete implementation of orchestration. There are at least a
few of these already in existence, though not as many as there are data
collection and alerting systems.

TripleO is an attempt to deploy OpenStack using tools that OpenStack
provides. There are already quite a few other tools that _can_ deploy
OpenStack, so it stands to reason that people will question why we
don't just use those. It is my hope we'll push more into the unifying
the implementations space and withdraw a bit from the implementing
stuff space.

So, you see, people are happy to unify around a single abstraction, but
not so much around a brand new implementation of things that already
exist.


If the other examples we had were a lot purer, this explanation would
make sense. I think there's more to it than that though :).

What exactly, I don't know, but it's just too easy an answer, and one
that doesn't stand up to non-trivial examination :(.


I actually agree with Robert about this; that Clint may have 
oversimplified whether or not certain OpenStack projects may have 
reimplemented something that previously existed. Everything is a grey 
area, after all. I'm sure each project can go back in time and point to 
some existing piece of software -- good, bad or Java -- and truthfully 
say that there was prior art that could have been used.


The issue that I think needs to be addressed more directly in this 
thread and the ongoing conversation on the TC is this:


By graduating an incubated project into the integrated release, the 
Technical Committee is blessing the project as the OpenStack way to do 
some thing. If there are projects that are developed *in the OpenStack 
ecosystem* that are actively being developed to serve the purpose that 
an integrated project serves, then I think it is the responsibility of 
the Technical Committee to take another look at the integrated project 
and answer the following questions definitively:


 a) Is the Thing that the project addresses something that the 
Technical Committee believes the OpenStack ecosystem benefits from 
the TC making a judgement on, i.e. on what is the OpenStack way of 
addressing that Thing?


and IFF the decision of the TC on a) is YES, then:

 b) Is the Vision and Implementation of the currently integrated 
project the one that the Technical Committee wishes to continue to 
bless as the OpenStack way of addressing the Thing the project does?


If either of the above answers is NO, then I believe the Technical 
Committee should recommend that the integrated project be removed from 
the integrated release.
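That a)/b) procedure is effectively a two-question AND gate; as a
trivial sketch (the function name and outcome strings below are mine,
not anything proposed in the thread):

```python
def tc_recommendation(ecosystem_benefits_from_blessing: bool,
                      current_project_still_the_choice: bool) -> str:
    """Hypothetical encoding of questions a) and b): only if BOTH answers
    are YES does the project stay blessed; otherwise it is de-integrated
    (while remaining in its Program and the openstack/ namespace)."""
    if ecosystem_benefits_from_blessing and current_project_still_the_choice:
        return "keep in integrated release"
    return "remove from integrated release, keep in Program"
```

Note that question b) is only asked at all when a) is YES; a NO on a)
short-circuits straight to removal.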


HOWEVER, I *also* believe that the previously-integrated project should 
not just be cast away back to Stackforge. I think the project should 
remain in its designated Program and should remain in the openstack/ 
code namespace. Furthermore, active, competing visions and 
implementations of projects that address the Thing the 
previously-integrated project addressed should be 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-19 Thread Stefano Maffulli
On 08/19/2014 07:37 AM, Jay Pipes wrote:
 All of these projects should be able to live in the Program, in the
 openstack/ code namespace, for as long as the project is actively
 developed, and let the contributor communities in these competing
 projects *naturally* work to do any of the following:
 
  * Pick a best-of-breed implementation from the projects that address
 the same Thing
  * Combine code and efforts to merge the good bits of multiple projects
 into one
  * Let multiple valid choices of implementation live in the same Program
 with none of them being blessed by the TC to be part of the integrated
 release

Sounds reasonable and I'd like to analyze the risks associated with this
change. What's the worst that can happen?

The current setup gives a strong incentive to different teams to
reconcile competing implementations into one collaborative effort (the
Open Development promise[1]): the graduation process is as much about
quality of the code as it is about bootstrapping a culture of
collaboration, not one of competition.

One worst case I see is that we end up with lots of small projects doing
similar/overlapping things or in general not talking much to each other
except to get some infrastructure. (Then we'd have reimplemented the
Apache Foundation).

It's a fine line to walk. What we have has big drawbacks, as Sandy
Walsh recently illustrated re: StackTach, and it is definitely in need
of constant tweaks.

/stef

[1] http://wiki.openstack.org/wiki/Open

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-19 Thread Robert Collins
On 20 August 2014 02:37, Jay Pipes jaypi...@gmail.com wrote:
...

 I'd like to see more unification of implementations in TripleO - but I
 still believe our basic principle of using OpenStack technologies that
 already exist in preference to third party ones is still sound, and
 offers substantial dogfood and virtuous circle benefits.


 No doubt Triple-O serves a valuable dogfood and virtuous cycle purpose.
 However, I would move that the Deployment Program should welcome the many
 projects currently in the stackforge/ code namespace that do deployment of
 OpenStack using traditional configuration management tools like Chef,
 Puppet, and Ansible. It cannot be argued that these configuration management
 systems are the de-facto way that OpenStack is deployed outside of HP, and
 they belong in the Deployment Program, IMO.

I think you mean it 'can be argued'... ;). And I'd be happy if folk in
those communities want to join in the deployment program and have code
repositories in openstack/. To date, none have asked.

 As a TC member, I would welcome someone from the Chef community proposing
 the Chef cookbooks for inclusion in the Deployment program, to live under
 the openstack/ code namespace. Same for the Puppet modules.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [all] The future of the integrated release

2014-08-19 Thread Robert Collins
On 20 August 2014 15:28, Robert Collins robe...@robertcollins.net wrote:

 I think you mean it 'can be argued'... ;). And I'd be happy if folk in
 those communities want to join in the deployment program and have code
 repositories in openstack/. To date, none have asked.

Sorry, that was incomplete. I should add that prior to the
program/project splitout it would have been conceptually a lot harder
to do that, and I think we're still feeling the ramifications of that
split. But it seems clear to me that folk doing deployments with Chef
should have a way to collaborate that is recognised by the Deployment
Program - just as (IMO) it makes sense for the docker folk to be able
to have a docker repository in the openstack/ namespace.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [all] The future of the integrated release

2014-08-18 Thread Thierry Carrez
Clint Byrum wrote:
 Here's why folk are questioning Ceilometer:
 
 Nova is a set of tools to abstract virtualization implementations.
 Neutron is a set of tools to abstract SDN/NFV implementations.
 Cinder is a set of tools to abstract block-device implementations.
 Trove is a set of tools to simplify consumption of existing databases.
 Sahara is a set of tools to simplify Hadoop consumption.
 Swift is a feature-complete implementation of object storage, none of
 which existed when it was started.
 Keystone supports all of the above, unifying their auth.
 Horizon supports all of the above, unifying their GUI.
 
 Ceilometer is a complete implementation of data collection and alerting.
 There is no shortage of implementations that exist already.
 
 I'm also core on two projects that are getting some push back these
 days:
 
 Heat is a complete implementation of orchestration. There are at least a
 few of these already in existence, though not as many as there are data
 collection and alerting systems.
 
 TripleO is an attempt to deploy OpenStack using tools that OpenStack
 provides. There are already quite a few other tools that _can_ deploy
 OpenStack, so it stands to reason that people will question why we
 don't just use those. It is my hope we'll push more into the unifying
 the implementations space and withdraw a bit from the implementing
 stuff space.
 
 So, you see, people are happy to unify around a single abstraction, but
 not so much around a brand new implementation of things that already
 exist.

Right, most projects focus on providing abstraction above
implementations, and that abstraction is where the real domain
expertise of OpenStack should be (because no one else is going to do it
for us). Every time we reinvent something, we are at larger risk because
we are out of our common specialty, and we just may not be as good as
the domain specialists. That doesn't mean we should never reinvent
something, but we need to be damn sure it's a good idea before we do.
It's sometimes less fun to piggyback on existing implementations, but if
they exist that's probably what we should do.

While Ceilometer is far from alone in that space, what sets it apart is
that even after it was blessed by the TC as the one we should all
converge on, we keep on seeing competing implementations for some (if
not all) of its scope. Convergence did not happen, and without
convergence we struggle in adoption. We need to understand why, and if
this is fixable.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [all] The future of the integrated release

2014-08-18 Thread Mark McLoughlin
On Mon, 2014-08-18 at 14:23 +0200, Thierry Carrez wrote:
 Clint Byrum wrote:
  Here's why folk are questioning Ceilometer:
  
  Nova is a set of tools to abstract virtualization implementations.
  Neutron is a set of tools to abstract SDN/NFV implementations.
  Cinder is a set of tools to abstract block-device implementations.
  Trove is a set of tools to simplify consumption of existing databases.
  Sahara is a set of tools to simplify Hadoop consumption.
  Swift is a feature-complete implementation of object storage, none of
  which existed when it was started.
  Keystone supports all of the above, unifying their auth.
  Horizon supports all of the above, unifying their GUI.
  
  Ceilometer is a complete implementation of data collection and alerting.
  There is no shortage of implementations that exist already.
  
  I'm also core on two projects that are getting some push back these
  days:
  
  Heat is a complete implementation of orchestration. There are at least a
  few of these already in existence, though not as many as there are data
  collection and alerting systems.
  
  TripleO is an attempt to deploy OpenStack using tools that OpenStack
  provides. There are already quite a few other tools that _can_ deploy
  OpenStack, so it stands to reason that people will question why we
  don't just use those. It is my hope we'll push more into the unifying
  the implementations space and withdraw a bit from the implementing
  stuff space.
  
  So, you see, people are happy to unify around a single abstraction, but
  not so much around a brand new implementation of things that already
  exist.
 
 Right, most projects focus on providing abstraction above
 implementations, and that abstraction is where the real domain
 expertise of OpenStack should be (because no one else is going to do it
 for us). Every time we reinvent something, we are at larger risk because
 we are out of our common specialty, and we just may not be as good as
 the domain specialists. That doesn't mean we should never reinvent
 something, but we need to be damn sure it's a good idea before we do.
 It's sometimes less fun to piggyback on existing implementations, but if
 they exist that's probably what we should do.

It's certainly a valid angle to evaluate projects on, but it's also easy
to be overly reductive about it - e.g. that rather than re-implement
virtualization management, Nova should just be a thin abstraction over
vSphere, XenServer and oVirt.

To take that example, I don't think we as a project should be afraid of
having such discussions, but it wouldn't be productive to frame that
conversation as "the sky is falling, Nova re-implements the wheel, we
should de-integrate it".

 While Ceilometer is far from alone in that space, what sets it apart is
 that even after it was blessed by the TC as the one we should all
 converge on, we keep on seeing competing implementations for some (if
 not all) of its scope. Convergence did not happen, and without
 convergence we struggle in adoption. We need to understand why, and if
 this is fixable.

"Convergence did not happen" is a little unfair. It's certainly a busy
space, and things like Monasca and InfluxDB are new developments. I'm
impressed at how hard the Ceilometer team works to embrace such
developments and patiently talks through possibilities for convergence.
This attitude is something we should be applauding in an integrated
project.

Mark.




Re: [openstack-dev] [all] The future of the integrated release

2014-08-18 Thread Anne Gentle
On Wed, Aug 13, 2014 at 3:29 PM, Doug Hellmann d...@doughellmann.com
wrote:


 On Aug 13, 2014, at 4:08 PM, Matthew Treinish mtrein...@kortar.org
 wrote:

  On Wed, Aug 13, 2014 at 03:43:21PM -0400, Eoghan Glynn wrote:
 
 
  Divert all cross project efforts from the following projects so we can
  focus our cross project resources. Once we are in a better place we
  can expand our cross project resources to cover these again. This
  doesn't mean removing anything.
  * Sahara
  * Trove
  * Tripleo
 
  You write as if cross-project efforts are both of fixed size and
  amenable to centralized command & control.
 
  Neither of which is actually the case, IMO.
 
  Additional cross-project resources can be ponied up by the large
  contributor companies, and existing cross-project resources are not
  necessarily divertable on command.
 
  What “cross-project efforts” are we talking about? The liaison program
  in Oslo has been a qualified success so far. Would it make sense to
  extend that to other programs and say that each project needs at least
  one designated QA, Infra, Doc, etc. contact?
 
  Well my working assumption was that we were talking about people with
  the appropriate domain knowledge who are focused primarily on standing
  up the QA infrastructure.
 
  (as opposed to designated points-of-contact within the individual
  project teams who would be the first port of call for the QA/infra/doc
  folks if they needed a project-specific perspective on some live issue)
 
  That said however, I agree that it would be useful for the QA/infra/doc
  teams to know who in each project is most domain-knowledgeable when they
  need to reach out about a project-specific issue.
 
 
  I actually hadn't considered doing a formal liaison program, like
  Oslo, in QA before. Mostly, because at least myself and most of the QA
  cores have a decent grasp on who to ping about certain topics or
  reviews. That being said, I realize that probably is only
  disseminating information in a single direction. So maybe having a
  formal liaison makes sense.

  I'll talk to Doug and others about this and see whether adopting
  something similar for QA makes sense.
 
 
  -Matt Treinish

 The Oslo liaison program started out as a pure communication channel, but
 many of the liaisons have stepped up to take on the task of merging changes
 into their “home” projects. That has allowed adoption of libraries this
 cycle at a rate far higher than we could have achieved if the Oslo team had
 been responsible for submitting those changes ourselves. They’ve helped us
 identify API issues in the process, which benefits the projects that have
 been slower to adopt. So I really think the liaisons are key to library
 graduation being successful at our current scale.



Yes, I was going to say that we use doc liaisons with varying success per
project, but it has definitely helped me keep sane (mostly). We originally
thought of it as a communication channel (you attend my meetings, I'll
attend yours) but it's also great to have a point person that I can reach
out to as PTL, or to point others to when they have questions.

Anne




 Doug






Re: [openstack-dev] [all] The future of the integrated release

2014-08-18 Thread Anne Gentle
On Fri, Aug 15, 2014 at 3:01 PM, Joe Gordon joe.gord...@gmail.com wrote:




 On Thu, Aug 14, 2014 at 4:02 PM, Eoghan Glynn egl...@redhat.com wrote:


   Additional cross-project resources can be ponied up by the large
   contributor companies, and existing cross-project resources are not
   necessarily divertable on command.
 
  Sure additional cross-project resources can and need to be ponied up,
  but I am doubtful that will be enough.

 OK, so what exactly do you suspect wouldn't be enough, for what
 exactly?


 I am not sure what would be enough to get OpenStack back in a position
 where more developers/users are happier with the current state of affairs.
 Which is why I think we may want to try several things.



  Is it the likely number of such new resources, or the level of
  domain-expertise that they can realistically be expected to bring to
  the table, or the period of time to on-board them, or something else?


 Yes, all of the above.


 And which cross-project concern do you think is most strained by the
 current set of projects in the integrated release? Is it:

  * QA
  * infra
  * release management
  * oslo
  * documentation
  * stable-maint

 or something else?


 Good question.

 IMHO QA, Infra and release management are probably the most strained. But
 I also think there is something missing from this list. Many of the
 projects are hitting similar issues and end up solving them in different
 ways, which just leads to more confusion for the end user. Today we have a
 decent model for rolling out cross-project libraries (Oslo) but we don't
 have a good way of having broader cross project discussions such as: API
 standards (such as discoverability of features), logging standards,
 aligning on concepts (different projects have different terms and concepts
 for scaling and isolating failure domains), and an overall better user
 experience. So I think we have a whole class of cross project issues that
 we have not even begun addressing.


Docs are very, very strained. We scope docs to integrated only and we're
still lacking in quality, completeness, and speed of reviews.

At this week's extra TC meeting [1] we discussed only the difficulties with
integration and growth. I also want us to think about the cost of
integration with the current definitions and metrics we have. We discussed
whether the difficulties lie in the sheer number of projects, or in the
complexity that comes from cross-integration.

For docs I can point to the sheer number of projects, which is why we
scope to integrated only. But even that definition is becoming difficult
for cross-project work, so I want to explore the cross-project
implications before the sheer-number implications.

One of the metrics I'd like to see is a metric of most cross-project drag
for all programs. The measures might be:
- number of infrastructure nodes used to test
- number of infrastructure jobs needed
- most failing tests
- incompleteness of test suite
- incompleteness of docs
- difficulty for users to use (API, CLI, or configuration) due to lack of
docs or hard-to-understand complexities
- most bugs affecting more than one project (cross-project bugs would count
against both projects)
- performance in production environments due to interlocking project needs
- any others?
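One throwaway sketch of how such a composite "drag" score might be
computed. The weights, measure names, and numbers below are all invented
for illustration; real values would come from infra/CI statistics and
the bug tracker:

```python
# Throwaway sketch: combine per-project "drag" measures into one score.
# Weights, measure names, and numbers are invented; real values would
# come from infra/CI statistics and the bug tracker.
DRAG_WEIGHTS = {
    "infra_nodes": 1.0,         # infrastructure nodes used to test
    "infra_jobs": 0.5,          # infrastructure jobs needed
    "failing_tests": 2.0,       # most-failing tests
    "doc_gaps": 1.5,            # incompleteness of docs
    "cross_project_bugs": 3.0,  # counted against every project involved
}

def drag_score(measures: dict) -> float:
    """Weighted sum of the measures; higher means more cross-project drag."""
    return sum(DRAG_WEIGHTS[name] * value for name, value in measures.items())

projects = {
    "nova":    {"infra_nodes": 40, "infra_jobs": 30, "failing_tests": 5,
                "doc_gaps": 2, "cross_project_bugs": 8},
    "neutron": {"infra_nodes": 35, "infra_jobs": 28, "failing_tests": 9,
                "doc_gaps": 4, "cross_project_bugs": 8},
}

# Rank programs by drag, worst first, to decide where to focus effort.
ranked = sorted(projects, key=lambda p: drag_score(projects[p]), reverse=True)
```

Whatever the exact weights, publishing the ranked list per cycle would
give the incubation discussion a number to argue about instead of a
feeling.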

We know nova/neutron carry a lot of this integration drag. We know
there's not an easy button -- but should we focus on the hard problems
before integrating many more projects? For now I think our answer is no,
but I want to hear what others think about that consideration as an
additional metric before moving projects through our incubator.

Thanks,
Anne


1.
http://eavesdrop.openstack.org/meetings/tc/2014/tc.2014-08-14-19.03.log.html





 Each of those teams has quite different prerequisite skill-sets, and
 the on-ramp for someone jumping in seeking to make a positive impact
 will vary from team to team.

 Different approaches have been tried on different teams, ranging from
 dedicated project-liaisons (Oslo) to shared cores (Sahara/Infra) to
 newly assigned dedicated resources (QA/Infra). Which of these models
 might work in your opinion? Which are doomed to failure, and why?

 So can you be more specific here on why you think adding more cross-
 project resources won't be enough to address an identified shortage
 of cross-project resources, while de-integrating projects would be?

 And, please, can we put the proverbial strawman back in its box on
 this thread? It's all well and good as a polemic device, but doesn't
 really move the discussion forward in a constructive way, IMO.

 Thanks,
 Eoghan







Re: [openstack-dev] [all] The future of the integrated release

2014-08-17 Thread Stan Lagun
On Fri, Aug 15, 2014 at 7:17 PM, Sandy Walsh sandy.wa...@rackspace.com
wrote:

 I recently suggested that the Ceilometer API (and integration tests) be
 separated from the implementation (two repos) so others might plug in a
 different implementation while maintaining compatibility, but that wasn't
 well received.

 Personally, I'd like to see that model extended for all OpenStack
 projects. Keep compatible at the API level and welcome competing
 implementations.


Brilliant idea. I'd vote for it.


Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis

 sla...@mirantis.com


Re: [openstack-dev] [all] The future of the integrated release

2014-08-17 Thread Jay Pipes

On 08/17/2014 05:11 AM, Stan Lagun wrote:


On Fri, Aug 15, 2014 at 7:17 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:

I recently suggested that the Ceilometer API (and integration tests)
be separated from the implementation (two repos) so others might
plug in a different implementation while maintaining compatibility,
but that wasn't well received.

Personally, I'd like to see that model extended for all OpenStack
projects. Keep compatible at the API level and welcome competing
implementations.


Brilliant idea. I'd vote for it.


The problem is when the API is the worst part of the project.

We have a number of projects (some that I work on) where the weakest
part is the API itself: the design, consistency, and efficiency of the
API constructs are simply terrible.


The last thing I would want to do is say here, everyone go build 
multiple implementations on top of this crappy API. :(


As for the idea of letting the market flush out competing 
implementations, I'm all for that ... with some caveats. A couple of 
those caveats would include:


 a) Must be Python if it is to be considered as a part of OpenStack's 
integrated release [1]
 b) The API must be simple, efficient, and consistent, possibly having 
signoff by some working group focused on API standards


All the best,
-jay

[1] This isn't saying other programming languages aren't perfectly 
fine*, just that our integration and CI systems are focused on Python, 
and non-Python projects are a non-starter at this point.


* except Java, of course. That goes without saying.



Re: [openstack-dev] [all] The future of the integrated release

2014-08-17 Thread Nadya Privalova
Hello all,

As a Ceilometer's core, I'd like to add my 0.02$.

During previous discussions, several projects were mentioned that were
started, or continued to be developed, after Ceilometer became
integrated. The main question I keep coming back to is: why was it
impossible to contribute to the existing integrated project? Is it
because of Ceilometer's architecture, the team, or some other (maybe
political) reasons? I think it's a very sad situation when we have 3-4
Ceilometer-like projects from different companies instead of a single
one that satisfies everybody. (We don't see this in other projects.
Though maybe there are several Novas or Neutrons on StackForge and I
don't know about it...)
Of course, sometimes it's much easier to start a project from scratch.
But there should be strong reasons for doing so when we are talking
about an integrated project.
IMHO the idea, the role, is the most important thing when we are talking
about an integrated project. And if Ceilometer's role is really needed
(and I think it is), then we should improve the existing implementation,
merge everyone's needs into the one project, and the result will still
be Ceilometer.

Thanks,
Nadya


On Fri, Aug 15, 2014 at 12:41 AM, Joe Gordon joe.gord...@gmail.com wrote:




 On Wed, Aug 13, 2014 at 12:24 PM, Doug Hellmann d...@doughellmann.com
 wrote:


 On Aug 13, 2014, at 3:05 PM, Eoghan Glynn egl...@redhat.com wrote:

 
  At the end of the day, that's probably going to mean saying No to more
  things. Everytime I turn around everyone wants the TC to say No to
  things, just not to their particular thing. :) Which is human nature.
  But I think if we don't start saying No to more things we're going to
  end up with a pile of mud that no one is happy with.
 
  That we're being so abstract about all of this is frustrating. I get
  that no-one wants to start a flamewar, but can someone be concrete
  about what they feel we should say 'no' to but are likely to say
  'yes' to?
 
 
  I'll bite, but please note this is a strawman.
 
  No:
  * Accepting any more projects into incubation until we are comfortable
  with the state of things again
  * Marconi
  * Ceilometer
 
  Well -1 to that, obviously, from me.
 
  Ceilometer is on track to fully execute on the gap analysis coverage
  plan agreed with the TC at the outset of this cycle, and has an active
  plan in progress to address architectural debt.

 Yes, there seems to be an attitude among several people in the community
 that the Ceilometer team denies that there are issues and refuses to work
 on them. Neither of those things is the case from our perspective.


 Totally agree.



 Can you be more specific about the shortcomings you see in the project
 that aren’t being addressed?



 Once again, this is just a strawman.

 I'm just not sure OpenStack has 'blessed' the best solution out there.


 https://wiki.openstack.org/wiki/Ceilometer/Graduation#Why_we_think_we.27re_ready

 

- Successfully passed the challenge of being adopted by 3 related
projects which have agreed to join or use ceilometer:
   - Synaps
   - Healthnmon
   - StackTach
   
 https://wiki.openstack.org/w/index.php?title=StackTach&action=edit&redlink=1
   


 Stacktach seems to still be under active development (
 http://git.openstack.org/cgit/stackforge/stacktach/log/), is used by
 rackspace in production and from everything I hear is more mature than
 ceilometer.



 
  Divert all cross project efforts from the following projects so we can
  focus our cross project resources. Once we are in a better place we
  can expand our cross project resources to cover these again. This
  doesn't mean removing anything.
  * Sahara
  * Trove
  * Tripleo
 
  You write as if cross-project efforts are both of fixed size and
  amenable to centralized command & control.
 
  Neither of which is actually the case, IMO.
 
  Additional cross-project resources can be ponied up by the large
  contributor companies, and existing cross-project resources are not
  necessarily divertable on command.


 Sure additional cross-project resources can and need to be ponied up, but
 I am doubtful that will be enough.



 What “cross-project efforts” are we talking about? The liaison program in
 Oslo has been a qualified success so far. Would it make sense to extend
 that to other programs and say that each project needs at least one
 designated QA, Infra, Doc, etc. contact?

 Doug

 
  Yes:
  * All integrated projects that are not listed above
 
  And what of the other pending graduation request?
 
  Cheers,
  Eoghan

 






Re: [openstack-dev] [all] The future of the integrated release

2014-08-17 Thread Clint Byrum
Here's why folk are questioning Ceilometer:

Nova is a set of tools to abstract virtualization implementations.
Neutron is a set of tools to abstract SDN/NFV implementations.
Cinder is a set of tools to abstract block-device implementations.
Trove is a set of tools to simplify consumption of existing databases.
Sahara is a set of tools to simplify Hadoop consumption.
Swift is a feature-complete implementation of object storage, none of
which existed when it was started.
Keystone supports all of the above, unifying their auth.
Horizon supports all of the above, unifying their GUI.

Ceilometer is a complete implementation of data collection and alerting.
There is no shortage of implementations that exist already.

I'm also core on two projects that are getting some push back these
days:

Heat is a complete implementation of orchestration. There are at least a
few of these already in existence, though not as many as their are data
collection and alerting systems.

TripleO is an attempt to deploy OpenStack using tools that OpenStack
provides. There are already quite a few other tools that _can_ deploy
OpenStack, so it stands to reason that people will question why we
don't just use those. It is my hope we'll push more into the unifying
the implementations space and withdraw a bit from the implementing
stuff space.

So, you see, people are happy to unify around a single abstraction, but
not so much around a brand new implementation of things that already
exist.

Excerpts from Nadya Privalova's message of 2014-08-17 11:11:34 -0700 (quoted in full above).

Re: [openstack-dev] [all] The future of the integrated release

2014-08-16 Thread Chris Dent

On Fri, 15 Aug 2014, Sandy Walsh wrote:


I recently suggested that the Ceilometer API (and integration tests)
be separated from the implementation (two repos) so others might plug
in a different implementation while maintaining compatibility, but
that wasn't well received.

Personally, I'd like to see that model extended for all OpenStack
projects. Keep compatible at the API level and welcome competing
implementations.


I think this is a _very_ interesting idea, especially the way it fits
in with multiple themes that have bounced around the list lately, not
just this thread:

* Improving project-side testing; that is, pre-gate integration
  testing.

* Providing a framework (at least conceptual) on which to inform the
  tempest-libification.

* Solidifying both intra- and inter-project API contracts (both HTTP
  and notifications).

* Providing a solid basis on which to enable healthy competition between
  implementations.

* Helping to ensure that the various projects work to the goals of their
  public facing name rather than their internal name (e.g. Telemetry
  vs ceilometer).

Given the usual trouble with resource availability it seems best to
find tactics that can be applied to multiple strategic goals.
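As a purely illustrative sketch of what "solidifying a notification
contract" could mean in practice (all field names here are hypothetical,
standing in for a real definition), the contract can be expressed as
data that both the emitting project and any consumer test against:

```python
# Hypothetical notification contract, expressed as data so both sides
# of the API can run the same conformance check. Illustrative only.
CONTRACT = {
    "event_type": str,   # e.g. "compute.instance.create.end"
    "payload": dict,     # project-specific body
    "timestamp": str,    # ISO 8601
}


def conforms(notification):
    """Check a notification dict against the declared contract."""
    return all(
        key in notification and isinstance(notification[key], typ)
        for key, typ in CONTRACT.items()
    )


msg = {
    "event_type": "compute.instance.create.end",
    "payload": {"instance_id": "abc-123"},
    "timestamp": "2014-08-16T10:09:00Z",
}
print(conforms(msg))   # → True
print(conforms({}))    # → False
```

The point being that the check lives with the contract, not with any
one implementation, so it serves pre-gate testing and inter-project
agreement at the same time.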

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] [all] The future of the integrated release

2014-08-16 Thread Sandy Walsh
On 8/16/2014 10:09 AM, Chris Dent wrote:
 On Fri, 15 Aug 2014, Sandy Walsh wrote:

 I recently suggested that the Ceilometer API (and integration tests)
 be separated from the implementation (two repos) so others might plug
 in a different implementation while maintaining compatibility, but
 that wasn't well received.

 Personally, I'd like to see that model extended for all OpenStack
 projects. Keep compatible at the API level and welcome competing
 implementations.
 I think this is a _very_ interesting idea, especially the way it fits
 in with multiple themes that have bounced around the list lately, not
 just this thread:

 * Improving project-side testing; that is, pre-gate integration
testing.

 * Providing a framework (at least conceptual) on which to inform the
tempest-libification.

 * Solidifying both intra- and inter-project API contracts (both HTTP
and notifications).

 * Providing a solid basis on which to enable healthy competition between
implementations.

 * Helping to ensure that the various projects work to the goals of their
public facing name rather than their internal name (e.g. Telemetry
vs ceilometer).
+1 ... love that take on it.

 Given the usual trouble with resource availability it seems best to
 find tactics that can be applied to multiple strategic goals.


Exactly! You get it.





Re: [openstack-dev] [all] The future of the integrated release

2014-08-15 Thread Sandy Walsh
On 8/14/2014 6:42 PM, Doug Hellmann wrote:

On Aug 14, 2014, at 4:41 PM, Joe Gordon joe.gord...@gmail.com wrote:




On Wed, Aug 13, 2014 at 12:24 PM, Doug Hellmann d...@doughellmann.com wrote:

On Aug 13, 2014, at 3:05 PM, Eoghan Glynn egl...@redhat.com wrote:


 At the end of the day, that's probably going to mean saying No to more
 things. Everytime I turn around everyone wants the TC to say No to
 things, just not to their particular thing. :) Which is human nature.
 But I think if we don't start saying No to more things we're going to
 end up with a pile of mud that no one is happy with.

 That we're being so abstract about all of this is frustrating. I get
 that no-one wants to start a flamewar, but can someone be concrete about
 what they feel we should say 'no' to but are likely to say 'yes' to?


 I'll bite, but please note this is a strawman.

 No:
 * Accepting any more projects into incubation until we are comfortable with
 the state of things again
 * Marconi
 * Ceilometer

 Well -1 to that, obviously, from me.

 Ceilometer is on track to fully execute on the gap analysis coverage
 plan agreed with the TC at the outset of this cycle, and has an active
 plan in progress to address architectural debt.

Yes, there seems to be an attitude among several people in the community that 
the Ceilometer team denies that there are issues and refuses to work on them. 
Neither of those things is the case from our perspective.

Totally agree.


Can you be more specific about the shortcomings you see in the project that 
aren’t being addressed?


Once again, this is just a straw man.

You’re not the first person to propose ceilometer as a project to kick out of 
the release, though, and so I would like to be talking about specific reasons 
rather than vague frustrations.


I'm just not sure OpenStack has 'blessed' the best solution out there.

https://wiki.openstack.org/wiki/Ceilometer/Graduation#Why_we_think_we.27re_ready



  *   Successfully passed the challenge of being adopted by 3 related projects 
which have agreed to join or use ceilometer:
 *   Synaps
 *   Healthnmon
 *   StackTach
https://wiki.openstack.org/w/index.php?title=StackTach&action=edit&redlink=1

Stacktach seems to still be under active development 
(http://git.openstack.org/cgit/stackforge/stacktach/log/), is used by rackspace 
in production and from everything I hear is more mature than ceilometer.

Stacktach is older than ceilometer, but does not do all of the things 
ceilometer does now and aims to do in the future. It has been a while since I 
last looked at it, so the situation may have changed, but some of the reasons 
stacktach would not be a full replacement for ceilometer include: it only works 
with AMQP; it collects notification events, but doesn’t offer any metering 
ability per se (no tracking of values like CPU or bandwidth utilization); it 
only collects notifications from some projects, and doesn’t have a way to 
collect data from swift, which doesn’t emit notifications; and it does not 
integrate with Heat to trigger autoscaling alarms.

Well, that's my cue.

Yes, StackTach was started before the incubation process was established and it 
solves other problems. Specifically around usage, billing and performance 
monitoring, things I wouldn't use Ceilometer for. But, if someone asked me what 
they should use for metering today, I'd point them towards Monasca in a 
heartbeat. Another non-blessed project.

It is nice to see that Ceilometer is working to solve their problems, but there 
are other solutions operators should consider until that time comes. It would 
be nice to see the TC endorse those too. Solve the users' needs first.

We did work with a few of the Stacktach developers on bringing event collection 
into ceilometer, and that work is allowing us to modify the way we store the 
meter data that causes a lot of the performance issues we’ve seen. That work is 
going on now and will be continued into Kilo, when we expect to be adding 
drivers for time-series databases more appropriate for that type of data.


StackTach isn't actively contributing to Ceilometer any more. Square peg/round 
hole. We needed some room to experiment with alternative solutions and the 
rigidity of the process was a hindrance. Not a problem with the core team, just 
a problem with the dev process overall.

I recently suggested that the Ceilometer API (and integration tests) be 
separated from the implementation (two repos) so others might plug in a 
different implementation while maintaining compatibility, but that wasn't well 
received.

Personally, I'd like to see that model extended for all OpenStack projects. 
Keep compatible at the API level and welcome competing implementations.
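For what it's worth, here is a toy sketch of the shape I mean — none of
this is real Ceilometer code and every name is made up. The contract and
its tests live in the API repo; any competing backend has to pass the
same suite unchanged.

```python
# Illustrative only: one API contract, many pluggable implementations.
import abc


class MeterStore(abc.ABC):
    """The API contract: lives in the shared repo with its tests."""

    @abc.abstractmethod
    def record(self, resource_id, meter, value):
        """Store one sample."""

    @abc.abstractmethod
    def statistics(self, meter):
        """Return (sample count, total) for a meter."""


class InMemoryStore(MeterStore):
    """One competing implementation; others could be SQL, TSDB, ..."""

    def __init__(self):
        self._samples = []

    def record(self, resource_id, meter, value):
        self._samples.append((resource_id, meter, value))

    def statistics(self, meter):
        values = [v for _, m, v in self._samples if m == meter]
        return len(values), sum(values)


def run_contract_tests(store):
    """Shared suite: any implementation must pass it unchanged."""
    store.record("vm-1", "cpu_util", 10.0)
    store.record("vm-2", "cpu_util", 30.0)
    assert store.statistics("cpu_util") == (2, 40.0)


run_contract_tests(InMemoryStore())
print("contract tests passed")
```

Swapping in a different backend then means passing a different class to
the suite, not rewriting the tests.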

We'll be moving StackTach.v3 [1] to StackForge soon and following that model. 
The API and integration tests are one repo (with a bare-bones implementation to 
make the 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-15 Thread Eoghan Glynn

 But, if someone asked me what they should use for metering today,
 I'd point them towards Monasca in a heartbeat.

FWIW my view is that Monasca is an interesting emerging project, with a
team accreting around it that seems to be interested in collaboration.

We've had ongoing discussions with them about overlaps and differences
since the outset of this cycle, though of course our over-riding focus
for Juno has had to be on the TC gap analysis and on addressing
architectural debts.

But going forward into Kilo, I think there should be scope for possible
closer collaboration between the projects, figuring out the aspects that
are complementary and possible shared elements and/or converged APIs.

Cheers,
Eoghan



Re: [openstack-dev] [all] The future of the integrated release

2014-08-15 Thread Joe Gordon
On Fri, Aug 15, 2014 at 8:17 AM, Sandy Walsh sandy.wa...@rackspace.com
wrote:

  On 8/14/2014 6:42 PM, Doug Hellmann wrote:


  On Aug 14, 2014, at 4:41 PM, Joe Gordon joe.gord...@gmail.com wrote:




 On Wed, Aug 13, 2014 at 12:24 PM, Doug Hellmann d...@doughellmann.com
 wrote:


 On Aug 13, 2014, at 3:05 PM, Eoghan Glynn egl...@redhat.com wrote:

 
  At the end of the day, that's probably going to mean saying No to more
  things. Everytime I turn around everyone wants the TC to say No to
  things, just not to their particular thing. :) Which is human nature.
  But I think if we don't start saying No to more things we're going to
  end up with a pile of mud that no one is happy with.
 
  That we're being so abstract about all of this is frustrating. I get
  that no-one wants to start a flamewar, but can someone be concrete
  about what they feel we should say 'no' to but are likely to say
  'yes' to?
 
 
  I'll bite, but please note this is a strawman.
 
  No:
  * Accepting any more projects into incubation until we are comfortable
  with the state of things again
  * Marconi
  * Ceilometer
 
  Well -1 to that, obviously, from me.
 
  Ceilometer is on track to fully execute on the gap analysis coverage
  plan agreed with the TC at the outset of this cycle, and has an active
  plan in progress to address architectural debt.

  Yes, there seems to be an attitude among several people in the community
 that the Ceilometer team denies that there are issues and refuses to work
 on them. Neither of those things is the case from our perspective.


  Totally agree.



 Can you be more specific about the shortcomings you see in the project
 that aren’t being addressed?



  Once again, this is just a straw man.


  You’re not the first person to propose ceilometer as a project to kick
 out of the release, though, and so I would like to be talking about
 specific reasons rather than vague frustrations.


  I'm just not sure OpenStack has 'blessed' the best solution out there.


 https://wiki.openstack.org/wiki/Ceilometer/Graduation#Why_we_think_we.27re_ready

  

- Successfully passed the challenge of being adopted by 3 related
projects which have agreed to join or use ceilometer:
   - Synaps
   - Healthnmon
   - StackTach
   
 https://wiki.openstack.org/w/index.php?title=StackTach&action=edit&redlink=1



  Stacktach seems to still be under active development (
 http://git.openstack.org/cgit/stackforge/stacktach/log/), is used by
 rackspace in production and from everything I hear is more mature than
 ceilometer.


  Stacktach is older than ceilometer, but does not do all of the things
 ceilometer does now and aims to do in the future. It has been a while since
 I last looked at it, so the situation may have changed, but some of the
 reasons stacktach would not be a full replacement for ceilometer include:
 it only works with AMQP; it collects notification events, but doesn’t offer
 any metering ability per se (no tracking of values like CPU or bandwidth
 utilization); it only collects notifications from some projects, and
 doesn’t have a way to collect data from swift, which doesn’t emit
 notifications; and it does not integrate with Heat to trigger autoscaling
 alarms.

   Well, that's my cue.

 Yes, StackTach was started before the incubation process was established
 and it solves other problems. Specifically around usage, billing and
 performance monitoring, things I wouldn't use Ceilometer for. But, if
 someone asked me what they should use for metering today, I'd point them
 towards Monasca in a heartbeat. Another non-blessed project.


I think this is the crux of the potential argument against ceilometer
today. There are several other viable competing open source candidates in
this space.  And an argument can be made that having OpenStack bless a
winner before one clearly emerges on its own doesn't help. If blessing a
winner resulted in the teams working together more and less duplicated
efforts that would be one thing, but that does not appear to be happening
in this space.



 It is nice to see that Ceilometer is working to solve their problems, but
 there are other solutions operators should consider until that time comes.
 It would be nice to see the TC endorse those too. Solve the users need
 first.


   We did work with a few of the Stacktach developers on bringing event
 collection into ceilometer, and that work is allowing us to modify the way
 we store the meter data that causes a lot of the performance issues we’ve
 seen. That work is going on now and will be continued into Kilo, when we
 expect to be adding drivers for time-series databases more appropriate for
 that type of data.


 StackTach isn't actively contributing to Ceilometer any more. Square
 peg/round hole. We needed some room to experiment with alternative
 solutions and the rigidity of the process was a hindrance. Not a problem
 with the core team, just a problem with the dev process overall.

 I recently 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-15 Thread Joe Gordon
On Thu, Aug 14, 2014 at 4:02 PM, Eoghan Glynn egl...@redhat.com wrote:


   Additional cross-project resources can be ponied up by the large
   contributor companies, and existing cross-project resources are not
   necessarily divertable on command.
 
   Sure additional cross-project resources can and need to be ponied up,
   but I am doubtful that will be enough.

 OK, so what exactly do you suspect wouldn't be enough, for what
 exactly?


I am not sure what would be enough to get OpenStack back in a position
where more developers/users are happier with the current state of affairs.
Which is why I think we may want to try several things.



 Is it the likely number of such new resources, or the level of domain-
 expertise that they can realistically be expected to bring to the
 table, or the period of time to on-board them, or something else?


Yes, all of the above.


 And which cross-project concern do you think is most strained by the
 current set of projects in the integrated release? Is it:

  * QA
  * infra
  * release management
  * oslo
  * documentation
  * stable-maint

 or something else?


Good question.

IMHO QA, Infra and release management are probably the most strained. But I
also think there is something missing from this list. Many of the projects
are hitting similar issues and end up solving them in different ways, which
just leads to more confusion for the end user. Today we have a decent model
for rolling out cross-project libraries (Oslo) but we don't have a good way
of having broader cross project discussions such as: API standards (such as
discoverability of features), logging standards, aligning on concepts
(different projects have different terms and concepts for scaling and
isolating failure domains), and an overall better user experience. So I
think we have a whole class of cross project issues that we have not even
begun addressing.



 Each of those teams has quite different prerequisite skill-sets, and
 the on-ramp for someone jumping in seeking to make a positive impact
 will vary from team to team.

 Different approaches have been tried on different teams, ranging from
 dedicated project-liaisons (Oslo) to shared cores (Sahara/Infra) to
 newly assigned dedicated resources (QA/Infra). Which of these models
 might work in your opinion? Which are doomed to failure, and why?

 So can you be more specific here on why you think adding more cross-
 project resources won't be enough to address an identified shortage
 of cross-project resources, while de-integrating projects would be?

 And, please, can we put the proverbial strawman back in its box on
 this thread? It's all well and good as a polemic device, but doesn't
 really move the discussion forward in a constructive way, IMO.

 Thanks,
 Eoghan




Re: [openstack-dev] [all] The future of the integrated release

2014-08-15 Thread Joe Gordon
On Thu, Aug 14, 2014 at 4:02 PM, Eoghan Glynn egl...@redhat.com wrote:


   Additional cross-project resources can be ponied up by the large
   contributor companies, and existing cross-project resources are not
   necessarily divertable on command.
 
   Sure additional cross-project resources can and need to be ponied up,
   but I am doubtful that will be enough.

 OK, so what exactly do you suspect wouldn't be enough, for what
 exactly?

 Is it the likely number of such new resources, or the level of domain-
 expertise that they can realistically be expected to bring to the
 table, or the period of time to on-board them, or something else?

 And which cross-project concern do you think is most strained by the
 current set of projects in the integrated release? Is it:

  * QA
  * infra
  * release management
  * oslo
  * documentation
  * stable-maint

 or something else?

 Each of those teams has quite different prerequisite skill-sets, and
 the on-ramp for someone jumping in seeking to make a positive impact
 will vary from team to team.

 Different approaches have been tried on different teams, ranging from
 dedicated project-liaisons (Oslo) to shared cores (Sahara/Infra) to
 newly assigned dedicated resources (QA/Infra). Which of these models
 might work in your opinion? Which are doomed to failure, and why?

 So can you be more specific here on why you think adding more cross-
 project resources won't be enough to address an identified shortage
 of cross-project resources, while de-integrating projects would be?

 And, please, can we put the proverbial strawman back in its box on
 this thread? It's all well and good as a polemic device, but doesn't
 really move the discussion forward in a constructive way, IMO.



/me puts his strawman back in the box



 Thanks,
 Eoghan




Re: [openstack-dev] [all] The future of the integrated release

2014-08-14 Thread Devananda van der Veen
On Aug 14, 2014 2:04 AM, Eoghan Glynn egl...@redhat.com wrote:


   Letting the industry field-test a project and feed their experience
   back into the community is a slow process, but that is the best
   measure of a project's success. I seem to recall this being an
   implicit expectation a few years ago, but haven't seen it discussed
in
   a while.
  
   I think I recall us discussing a "must have feedback that it's
   successfully deployed" requirement in the last cycle, but we
recognized
   that deployers often wait until a project is integrated.
 
  In the early discussions about incubation, we respected the need to
  officially recognize a project as part of OpenStack just to create the
  uptick in adoption necessary to mature projects. Similarly, integration
is a
  recognition of the maturity of a project, but I think we have graduated
  several projects long before they actually reached that level of
maturity.
  Actually running a project at scale for a period of time is the only
way to
  know it is mature enough to run it in production at scale.
 
  I'm just going to toss this out there. What if we set the graduation
bar to
  "is in production in at least two sizeable clouds" (note that I'm not
saying
  public clouds). Trove is the only project that has, to my knowledge,
met
  that bar prior to graduation, and it's the only project that graduated
since
  Havana that I can, off hand, point at as clearly successful. Heat and
  Ceilometer both graduated prior to being in production; a few cycles
later,
  they're still having adoption problems and looking at large
architectural
  changes. I think the added cost to OpenStack when we integrate immature
or
  unstable projects is significant enough at this point to justify a more
  defensive posture.
 
  FWIW, Ironic currently doesn't meet that bar either - it's in
production in
  only one public cloud. I'm not aware of large private installations yet,
  though I suspect there are some large private deployments being spun up
  right now, planning to hit production with the Juno release.

 We have some hard data from the user survey presented at the Juno summit,
  with respectively 26 and 53 production deployments of Heat and Ceilometer
 reported.

 There's no cross-referencing of deployment size with services in
production
 in those data presented, though it may be possible to mine that out of the
 raw survey responses.

Indeed, and while that would be useful information, I was referring to the
deployment of those services at scale prior to graduation, not post
graduation.

Best,
Devananda


Re: [openstack-dev] [all] The future of the integrated release

2014-08-14 Thread Vishvananda Ishaya

On Aug 13, 2014, at 5:07 AM, Daniel P. Berrange berra...@redhat.com wrote:

 On Wed, Aug 13, 2014 at 12:55:48PM +0100, Steven Hardy wrote:
 On Wed, Aug 13, 2014 at 11:42:52AM +0100, Daniel P. Berrange wrote:
 On Thu, Aug 07, 2014 at 03:56:04AM -0700, Jay Pipes wrote:
  By ignoring stable branches, leaving it up to a
  small team to handle, I think we are giving the wrong message about what
  our priorities as a team are. I can't help thinking this filters
  through to impact the way people think about their work on master.
 
 Who is ignoring stable branches?  This sounds like a project specific
 failing to me, as all experienced core reviewers should consider offering
 their services to help with stable-maint activity.
 
 I don't personally see any reason why the *entire* project core team has to
 do this, but a subset of them should feel compelled to participate in the
 stable-maint process, if they have sufficient time, interest and historical
 context, it's not some other team IMO.
 
 I think that stable branch review should be a key responsibility for anyone
 on the core team, not solely those few who volunteer for stable team. As
 the number of projects in openstack grows I think the idea of having a
 single stable team with rights to approve across any project is ultimately
 flawed because it doesn't scale efficiently and they don't have the same
 level of domain knowledge as the respective project teams.

This side-thread is a bit off topic for the main discussion, but as a
stable-maint with not a lot of time, I would love more help from the core
teams here. That said, help is not just about approving reviews. There are
three main steps in the process:
 1. Bugs get marked for backport
   I try to stay on top of this in nova by following the feed of merged patches
   and marking them icehouse-backport-potential[1] when they seem like they are
   appropriate but I’m sure I miss some.
 2. Patches get backported
   This is sometimes a very time-consuming process, especially late in the
   cycle or for patches that are being backported 2 releases.
 3. Patches get reviewed and merged
   The criteria for a stable backport are pretty straightforward and I think
   any core reviewer is capable of understanding and applying that criteria.

While we have fallen behind in number 3. at times, we are much more often WAY
behind on 2. I also suspect that a whole bunch of patches get missed in some
of the other projects where someone isn’t specifically trying to mark them all
as they come in.

Vish

[1] https://bugs.launchpad.net/nova/+bugs?field.tag=icehouse-backport-potential
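Vish's three-step pipeline lends itself to a simple model. The sketch below is purely illustrative (it is not an actual OpenStack tool, and the stage names and patch data are hypothetical), but it shows how one might count where backport work is piling up, e.g. the observation above that step 2 lags:

```python
# Illustrative sketch of the three-stage stable-backport pipeline:
# tagged (marked backport-potential) -> proposed (backport up for review)
# -> merged. Stage names and data are hypothetical.
from collections import Counter

STAGES = ("tagged", "proposed", "merged")

def bottleneck(patches):
    """Return the stage where the most patches are currently stuck.

    `patches` maps a patch id to its last completed stage; a patch
    stuck at stage N is waiting on someone to perform stage N+1.
    """
    stuck = Counter()
    for stage in patches.values():
        if stage != "merged":
            stuck[stage] += 1
    return stuck.most_common(1)[0][0] if stuck else None

# Toy data mirroring "behind on 2": most patches are tagged but not backported.
patches = {
    "fix-1": "merged",
    "fix-2": "tagged",    # marked icehouse-backport-potential, no backport yet
    "fix-3": "tagged",
    "fix-4": "proposed",  # backport posted, awaiting review
}
print(bottleneck(patches))  # -> tagged
```

A report like this would make visible what Vish describes manually tracking via the merged-patches feed.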





Re: [openstack-dev] [all] The future of the integrated release

2014-08-14 Thread Joe Gordon
On Wed, Aug 13, 2014 at 12:24 PM, Doug Hellmann d...@doughellmann.com
wrote:


 On Aug 13, 2014, at 3:05 PM, Eoghan Glynn egl...@redhat.com wrote:

 
  At the end of the day, that's probably going to mean saying No to more
  things. Every time I turn around everyone wants the TC to say No to
  things, just not to their particular thing. :) Which is human nature.
  But I think if we don't start saying No to more things we're going to
  end up with a pile of mud that no one is happy with.
 
  That we're being so abstract about all of this is frustrating. I get
  that no-one wants to start a flamewar, but can someone be concrete about
  what they feel we should say 'no' to but are likely to say 'yes' to?
 
 
  I'll bite, but please note this is a strawman.
 
  No:
  * Accepting any more projects into incubation until we are comfortable
 with
  the state of things again
  * Marconi
  * Ceilometer
 
  Well -1 to that, obviously, from me.
 
  Ceilometer is on track to fully execute on the gap analysis coverage
  plan agreed with the TC at the outset of this cycle, and has an active
  plan in progress to address architectural debt.

 Yes, there seems to be an attitude among several people in the community
 that the Ceilometer team denies that there are issues and refuses to work
 on them. Neither of those things is the case from our perspective.


Totally agree.



 Can you be more specific about the shortcomings you see in the project
 that aren’t being addressed?



Once again, this is just a strawman.

I'm just not sure OpenStack has 'blessed' the best solution out there.

https://wiki.openstack.org/wiki/Ceilometer/Graduation#Why_we_think_we.27re_ready



   - Successfully passed the challenge of being adopted by 3 related
   projects which have agreed to join or use ceilometer:
  - Synaps
  - Healthnmon
  - StackTach
  
https://wiki.openstack.org/w/index.php?title=StackTach&action=edit&redlink=1
  


Stacktach seems to still be under active development (
http://git.openstack.org/cgit/stackforge/stacktach/log/), is used by
rackspace in production and from everything I hear is more mature than
ceilometer.



 
  Divert all cross project efforts from the following projects so we can
 focus
  our cross project resources. Once we are in a better place we can
 expand our
  cross project resources to cover these again. This doesn't mean removing
  anything.
  * Sahara
  * Trove
  * Tripleo
 
  You write as if cross-project efforts are both of fixed size and
  amenable to centralized command & control.
 
  Neither of which is actually the case, IMO.
 
  Additional cross-project resources can be ponied up by the large
  contributor companies, and existing cross-project resources are not
  necessarily divertable on command.


Sure additional cross-project resources can and need to be ponied up, but I
am doubtful that will be enough.



 What “cross-project efforts” are we talking about? The liaison program in
 Oslo has been a qualified success so far. Would it make sense to extend
 that to other programs and say that each project needs at least one
 designated QA, Infra, Doc, etc. contact?

 Doug

 
  Yes:
  * All integrated projects that are not listed above
 
  And what of the other pending graduation request?
 
  Cheers,
  Eoghan
 


Re: [openstack-dev] [all] The future of the integrated release

2014-08-14 Thread Doug Hellmann

On Aug 14, 2014, at 4:41 PM, Joe Gordon joe.gord...@gmail.com wrote:

 
 
 
 On Wed, Aug 13, 2014 at 12:24 PM, Doug Hellmann d...@doughellmann.com wrote:
 
 On Aug 13, 2014, at 3:05 PM, Eoghan Glynn egl...@redhat.com wrote:
 
 
  At the end of the day, that's probably going to mean saying No to more
  things. Every time I turn around everyone wants the TC to say No to
  things, just not to their particular thing. :) Which is human nature.
  But I think if we don't start saying No to more things we're going to
  end up with a pile of mud that no one is happy with.
 
  That we're being so abstract about all of this is frustrating. I get
  that no-one wants to start a flamewar, but can someone be concrete about
  what they feel we should say 'no' to but are likely to say 'yes' to?
 
 
  I'll bite, but please note this is a strawman.
 
  No:
  * Accepting any more projects into incubation until we are comfortable with
  the state of things again
  * Marconi
  * Ceilometer
 
  Well -1 to that, obviously, from me.
 
  Ceilometer is on track to fully execute on the gap analysis coverage
  plan agreed with the TC at the outset of this cycle, and has an active
  plan in progress to address architectural debt.
 
 Yes, there seems to be an attitude among several people in the community that 
 the Ceilometer team denies that there are issues and refuses to work on them. 
 Neither of those things is the case from our perspective.
 
 Totally agree.
  
 
 Can you be more specific about the shortcomings you see in the project that 
 aren’t being addressed?
 
 
 Once again, this is just a straw man.

You’re not the first person to propose ceilometer as a project to kick out of 
the release, though, and so I would like to be talking about specific reasons 
rather than vague frustrations.

 
 I'm just not sure OpenStack has 'blessed' the best solution out there.
 
 https://wiki.openstack.org/wiki/Ceilometer/Graduation#Why_we_think_we.27re_ready
 
 
  - Successfully passed the challenge of being adopted by 3 related projects
    which have agreed to join or use ceilometer:
    - Synaps
    - Healthnmon
    - StackTach
 
 Stacktach seems to still be under active development 
 (http://git.openstack.org/cgit/stackforge/stacktach/log/), is used by 
  rackspace in production and from everything I hear is more mature than 
 ceilometer.

Stacktach is older than ceilometer, but does not do all of the things 
ceilometer does now and aims to do in the future. It has been a while since I 
last looked at it, so the situation may have changed, but some of the reasons 
stacktach would not be a full replacement for ceilometer include: it only works 
with AMQP; it collects notification events, but doesn’t offer any metering 
ability per se (no tracking of values like CPU or bandwidth utilization); it 
only collects notifications from some projects, and doesn’t have a way to 
collect data from swift, which doesn’t emit notifications; and it does not 
integrate with Heat to trigger autoscaling alarms.

We did work with a few of the Stacktach developers on bringing event collection 
into ceilometer, and that work is allowing us to modify the way we store the 
meter data that causes a lot of the performance issues we’ve seen. That work is 
going on now and will be continued into Kilo, when we expect to be adding 
drivers for time-series databases more appropriate for that type of data.

We’ve just finished the TC meeting where some of these issues were discussed, 
but I want to reiterate my stance here more publicly. As a community, we need 
to be able to talk about the technical shortcomings of projects. We need to be 
able to say, for example, “ceilometer, your runtime performance isn’t good 
enough, you need to work on that rather than adding features.” But we must also 
be willing to give the team in question the benefit of the doubt that they will 
work on the problem before we bring out the pitchforks of de-integration, 
because there are serious community implications around that level of rejection.

Doug

  
 
 
  Divert all cross project efforts from the following projects so we can 
  focus
  our cross project resources. Once we are in a better place we can expand 
  our
  cross project resources to cover these again. This doesn't mean removing
  anything.
  * Sahara
  * Trove
  * Tripleo
 
  You write as if cross-project efforts are both of fixed size and
  amenable to centralized command & control.
 
  Neither of which is actually the case, IMO.
 
  Additional cross-project resources can be ponied up by the large
  contributor companies, and existing cross-project resources are not
  necessarily divertable on command.
 
 Sure additional cross-project resources can and need to be ponied up, but I 
 am doubtful that will be enough.
  
 
 What “cross-project efforts” are we talking about? The liaison program in 
 Oslo has been a qualified success so far. Would it make sense to extend that 
 to other programs and say that each project needs at least one 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-14 Thread Eoghan Glynn

  Additional cross-project resources can be ponied up by the large
  contributor companies, and existing cross-project resources are not
  necessarily divertable on command.
 
 Sure additional cross-project resources can and need to be ponied up, but I
 am doubtful that will be enough.

OK, so what exactly do you suspect wouldn't be enough, for what
exactly?

Is it the likely number of such new resources, or the level of domain-
expertise that they can realistically be expected to bring to the
table, or the period of time to on-board them, or something else?

And which cross-project concern do you think is most strained by the
current set of projects in the integrated release? Is it:
 
 * QA
 * infra
 * release management
 * oslo
 * documentation
 * stable-maint
  
or something else?

Each of those teams has quite different prerequisite skill-sets, and
the on-ramp for someone jumping in seeking to make a positive impact
will vary from team to team.

Different approaches have been tried on different teams, ranging from
dedicated project-liaisons (Oslo) to shared cores (Sahara/Infra) to
newly assigned dedicated resources (QA/Infra). Which of these models
might work in your opinion? Which are doomed to failure, and why?

So can you be more specific here on why you think adding more cross-
project resources won't be enough to address an identified shortage
of cross-project resources, while de-integrating projects would be?

And, please, can we put the proverbial strawman back in its box on
this thread? It's all well and good as a polemic device, but doesn't
really move the discussion forward in a constructive way, IMO.

Thanks,
Eoghan



Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Eoghan Glynn


  One thing I'm not seeing shine through in this discussion of slots is
  whether any notion of individual cores, or small subsets of the core
  team with aligned interests, can champion blueprints that they have
  a particular interest in.
 
 I think that's because we've focussed in this discussion on the slots
 themselves, not the process of obtaining a slot.

That's fair.
 
 The proposal as it stands now is that we would have a public list of
  features that are ready to occupy a slot. That list would then be ranked
 in order of priority to the project, and the next free slot goes to
 the top item on the list. The ordering of the list is determined by
 nova-core, based on their understanding of the importance of a given
 thing, as well as what they are hearing from our users.
 
 So -- there's totally scope for lobbying, or for a subset of core to
 champion a feature to land, or for a company to explain why a given
 feature is very important to them.

Yeah, that's pretty much what I mean by the championing being subsumed
under the group will.

What's lost is not so much the ability to champion something, as the
freedom to do so in an independent/emergent way.

(Note that this is explicitly not verging into the retrospective veto
policy discussion on another thread[1], I'm totally assuming good faith
and good intent on the part of such champions)
 
 It sort of happens now -- there is a subset of core which cares more
 about xen than libvirt for example. We're just being more open about
  the process and setting expectations for our users. At the moment it's
 very confusing as a user, there are hundreds of proposed features for
 Juno, nearly 100 of which have been accepted. However, we're kidding
 ourselves if we think we can land 100 blueprints in a release cycle.

Yeah, so I guess it would be worth drilling down into that user
confusion.

Are users confused because they don't understand the current nature
of the group dynamic, the unseen hand that causes some blueprints to
prosper while others fester seemingly unnoticed?

(for example, in the sense of not appreciating the emergent championing
done by say the core subset interested in libvirt)

Or are they confused in that they read some implicit contract or
commitment into the targeting of those 100 blueprints to a release
cycle?

(in sense of expecting that the core team will land all/most of those
100 target'd BPs within the cycle)

Cheers,
Eoghan 

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-August/042728.html

  For example it might address some pain-point they've encountered, or
  impact on some functional area that they themselves have worked on in
  the past, or line up with their thinking on some architectural point.
 
  But for whatever motivation, such small groups of cores currently have
  the freedom to self-organize in a fairly emergent way and champion
  individual BPs that are important to them, simply by *independently*
  giving those BPs review attention.
 
  Whereas under the slots initiative, presumably this power would be
  subsumed by the group will, as expressed by the prioritization
  applied to the holding pattern feeding the runways?
 
  I'm not saying this is good or bad, just pointing out a change that
  we should have our eyes open to.
 
 Michael
 
 --
 Rackspace Australia
 


Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Nikola Đipanov
On 08/13/2014 04:05 AM, Michael Still wrote:
 On Wed, Aug 13, 2014 at 4:26 AM, Eoghan Glynn egl...@redhat.com wrote:

 It seems like this is exactly what the slots give us, though. The core 
 review
 team picks a number of slots indicating how much work they think they can
 actually do (less than the available number of blueprints), and then
 blueprints queue up to get a slot based on priorities and turnaround time
 and other criteria that try to make slot allocation fair. By having the
 slots, not only is the review priority communicated to the review team, it
 is also communicated to anyone watching the project.

 One thing I'm not seeing shine through in this discussion of slots is
 whether any notion of individual cores, or small subsets of the core
 team with aligned interests, can champion blueprints that they have
 a particular interest in.
 
 I think that's because we've focussed in this discussion on the slots
 themselves, not the process of obtaining a slot.
 
 The proposal as it stands now is that we would have a public list of
 features that are ready to occupy a slot. That list would then be ranked
 in order of priority to the project, and the next free slot goes to
 the top item on the list. The ordering of the list is determined by
 nova-core, based on their understanding of the importance of a given
 thing, as well as what they are hearing from our users.
 
 So -- there's totally scope for lobbying, or for a subset of core to
 champion a feature to land, or for a company to explain why a given
 feature is very important to them.
 
 It sort of happens now -- there is a subset of core which cares more
 about xen than libvirt for example. We're just being more open about
 the process and setting expectations for our users. At the moment its
 very confusing as a user, there are hundreds of proposed features for
 Juno, nearly 100 of which have been accepted. However, we're kidding
 ourselves if we think we can land 100 blueprints in a release cycle.
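The ranked-list mechanism quoted above can be sketched as a toy model. Everything below is illustrative only: the class, method, and blueprint names are hypothetical, and this is not an actual nova process or tool.

```python
# Illustrative sketch of the proposed slot/runway scheme: a ranked backlog
# of approved blueprints feeding a fixed number of review slots.

class SlotBoard:
    def __init__(self, num_slots):
        self.num_slots = num_slots
        self.occupied = []   # blueprints currently holding a slot
        self.backlog = []    # ranked list; index 0 is highest priority

    def rank(self, blueprint, position=None):
        """Add a blueprint to the backlog (core team sets the ordering)."""
        if position is None:
            self.backlog.append(blueprint)
        else:
            self.backlog.insert(position, blueprint)

    def fill_free_slots(self):
        """The top of the ranked list takes each free slot."""
        while len(self.occupied) < self.num_slots and self.backlog:
            self.occupied.append(self.backlog.pop(0))

    def release(self, blueprint):
        """A blueprint merges (or stalls out) and frees its slot."""
        self.occupied.remove(blueprint)
        self.fill_free_slots()

board = SlotBoard(num_slots=2)
for bp in ["cells-v2", "sched-split", "nfv-numa"]:  # hypothetical features
    board.rank(bp)
board.fill_free_slots()
print(board.occupied)       # -> ['cells-v2', 'sched-split']
board.release("cells-v2")   # merged; next ranked item takes the freed slot
print(board.occupied)       # -> ['sched-split', 'nfv-numa']
```

The point of the bounded `occupied` list is exactly the expectation-setting argument above: at any moment only `num_slots` features are promised review attention, and everything else is visibly queued rather than silently targeted at the milestone.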
 

While I agree with the motivation for this - setting expectations - I
fail to see how this is different from what the Swift guys seem to be
doing, apart from more red tape.

I would love for us to say: If you want your feature in - you need to
convince us that it's awesome and that we need to listen to you, by
being active in the community (not only by means of writing code of
course).

I fear that slots will have us saying: Here's another check-box for you
to tick, and the code goes in, which in addition to not communicating
that we are ultimately the ones who chose what goes in, regardless of
slots, also shifts the conversation away from what is really important,
and that is the relative merit of the feature itself.

But it obviously depends on the implementation.

N.



Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Thierry Carrez
Nikola Đipanov wrote:
 While I agree with the motivation for this - setting expectations - I
 fail to see how this is different from what the Swift guys seem to be
 doing, apart from more red tape.

It's not different imho. It's just that nova has significantly more
features being thrown at it, so the job of selecting priority features
is significantly harder, and the backlog is a lot bigger. The slot
system allows us to visualize that backlog.

Currently we target all features to juno-3, everyone expects their stuff
to get review attention, nothing gets merged until the end of the
milestone period, and in the end we merge almost nothing. The
blueprint priorities don't cut it, what you want is a ranked list. See
how likely you are to be considered for a release. Communicate that the
feature will actually be a Kilo feature earlier. Set downstream
expectations right. Merge earlier.

That ties into the discussions we are having for StoryBoard to support
task lists[1], which are arbitrary ranked lists of tasks. Those are much
more flexible than mono-dimensional priorities that fail to express the
complexity of priority in a complex ecosystem like OpenStack development.

[1] https://wiki.openstack.org/wiki/StoryBoard/Task_Lists

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Thierry Carrez
Rochelle.RochelleGrober wrote:
 [...]
 So, with all that prologue, here is what I propose (and please consider 
 proposing your improvements/changes to it).  I would like to see for Kilo:
 
 - IRC meetings and mailing list meetings beginning with Juno release and 
 continuing through the summit that focus on core project needs (what Thierry 
  calls strategic) that as a set would be considered the primary focus of the 
 Kilo release for each project.  This could include high priority bugs, 
 refactoring projects, small improvement projects, high interest extensions 
 and new features, specs that didn't make it into Juno, etc.
 - Develop the list and prioritize it into Needs and Wants. Consider these 
 the feeder projects for the two runways if you like.  
 - Discuss the lists.  Maybe have a community vote? The vote will freeze the 
 list, but as in most development project freezes, it can be a soft freeze 
 that the core, or drivers or TC can amend (or throw out for that matter).
 [...]

One thing we've been unable to do so far is to set release goals at
the beginning of a release cycle and stick to those. It used to be
because we were so fast moving that new awesome stuff was proposed
mid-cycle and ended up being a key feature (sometimes THE key feature)
for the project. Now it's because there is so much proposed no one knows
what will actually get completed.

So while I agree that what you propose is the ultimate solution (and the
workflow I've pushed PTLs to follow every single OpenStack release so
far), we have struggled to have the visibility, long-term thinking and
discipline to stick to it in the past. If you look at the post-summit
plans and compare to what we end up in a release, you'll see quite a lot
of differences :)

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Daniel P. Berrange
On Mon, Aug 11, 2014 at 10:30:12PM -0700, Joe Gordon wrote:
 On Fri, Aug 8, 2014 at 6:58 AM, Kyle Mestery mest...@mestery.com wrote:
  I really like this idea, as Michael and others alluded to above, we
 are
  attempting to set cycle goals for Kilo in Nova, but I think it is worth
   doing for all of OpenStack. We would like to make a list of key goals
  before
   the summit so that we can plan our summit sessions around the goals. On a
   really high level one way to look at this is, in Kilo we need to pay down
   our technical debt.
  
   The slots/runway idea is somewhat separate from defining key cycle
  goals; we
   can approve blueprints based on key cycle goals without doing slots.
   But
   with so many concurrent blueprints up for review at any given time, the
   review teams are doing a lot of multitasking and humans are not very
  good at
   multitasking. Hopefully slots can help address this issue, and hopefully
   allow us to actually merge more blueprints in a given cycle.
  
  I'm not 100% sold on what the slots idea buys us. What I've seen this
  cycle in Neutron is that we have a LOT of BPs proposed. We approve
  them after review. And then we hit one of two issues: Slow review
  cycles, and slow code turnaround issues. I don't think slots would
  help this, and in fact may cause more issues. If we approve a BP and
  give it a slot for which the eventual result is slow review and/or
  code review turnaround, we're right back where we started. Even worse,
  we may have not picked a BP for which the code submitter would have
  turned around reviews faster. So we've now doubly hurt ourselves. I
  have no idea how to solve this issue, but by over subscribing the
  slots (e.g. over approving), we allow for the submissions with faster
  turnaround a chance to merge quicker. With slots, we've removed this
  capability by limiting what is even allowed to be considered for
  review.
 
 
  Slow review: by limiting the number of blueprints up we hope to focus our
  efforts on fewer concurrent things.
  Slow code turnaround: when a blueprint is given a slot (runway) we will
  first make sure the author/owner is available for fast code turnaround.
 
 If a blueprint review stalls out (slow code turnaround, stalemate in review
 discussions etc.) we will take the slot and give it to another blueprint.

This idea of fixed slots is not really very appealing to me. It sounds
like we're adding a significant amount of bureaucratic overhead to our
development process that is going to make us increasingly inefficient.
I don't want to waste time waiting for a stalled blueprint to time out
before we give the slot to another blueprint. On any given day when I
have spare review time available I'll just review anything that is up
and waiting for review. If we can set a priority for the things up for
review that is great since I can look at those first, but the idea of
having fixed slots for things we should review does not do anything to
help my review efficiency IMHO.

I also think it will kill our flexibility in approving & dealing with
changes that are not strategically important, but nonetheless go
through our blueprint/specs process. There have been a bunch of things
I've dealt with that are not strategic, but have low overhead to code
and review and are easily dealt with in the slack time between looking at
the high priority reviews. It sounds like we're going to lose our
flexibility to pull in stuff like this if it only gets a chance when
strategically important stuff is not occupying a slot.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Giulio Fidente

On 08/07/2014 12:56 PM, Jay Pipes wrote:

On 08/07/2014 02:12 AM, Kashyap Chamarthy wrote:

On Thu, Aug 07, 2014 at 07:10:23AM +1000, Michael Still wrote:

On Wed, Aug 6, 2014 at 2:03 AM, Thierry Carrez
thie...@openstack.org wrote:


We seem to be unable to address some key issues in the software we
produce, and part of it is due to strategic contributors (and core
reviewers) being overwhelmed just trying to stay afloat of what's
happening. For such projects, is it time for a pause ? Is it time to
define key cycle goals and defer everything else ?


[. . .]


We also talked about tweaking the ratio of 'tech debt' runways vs
'feature' runways. So, perhaps every second release is focussed on
burning down tech debt and stability, whilst the others are focussed
on adding features.



I would suggest if we do such a thing, Kilo should be a 'stability'
release.


Excellent suggestion. I've wondered multiple times whether we could
dedicate a good chunk (or the whole) of a specific release to heads-down
bug fixing/stabilization. As it has been stated elsewhere on this list:
there's no pressing need for a whole lot of new code submissions; rather,
we should focus on fixing issues that affect _existing_ users/operators.


There's a whole world of GBP/NFV/VPN/DVR/TLA folks that would beg to
differ on that viewpoint. :)

That said, I entirely agree with you and wish efforts to stabilize would
take precedence over feature work.


I'm of this same opinion: I think a periodic, concerted effort to 
stabilize the existing features (which shouldn't be about bug fixing 
only) would help address some of the issues mentioned.


I'm thinking of QA, infra, the tactical contributions, the code clean-up 
and, more generally, the review backlog as some of these.


And I also think it would be useful to figure out which features are 
truly *strategic*, as that would provide some time to gather feedback 
from the field.


--
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Daniel P. Berrange
On Thu, Aug 07, 2014 at 03:56:04AM -0700, Jay Pipes wrote:
 On 08/07/2014 02:12 AM, Kashyap Chamarthy wrote:
 On Thu, Aug 07, 2014 at 07:10:23AM +1000, Michael Still wrote:
 On Wed, Aug 6, 2014 at 2:03 AM, Thierry Carrez thie...@openstack.org 
 wrote:
 
 We seem to be unable to address some key issues in the software we
 produce, and part of it is due to strategic contributors (and core
 reviewers) being overwhelmed just trying to stay afloat of what's
 happening. For such projects, is it time for a pause ? Is it time to
 define key cycle goals and defer everything else ?
 
 [. . .]
 
 We also talked about tweaking the ratio of 'tech debt' runways vs
 'feature' runways. So, perhaps every second release is focussed on
 burning down tech debt and stability, whilst the others are focussed
 on adding features.
 
 I would suggest if we do such a thing, Kilo should be a 'stability'
 release.
 
 Excellent suggestion. I've wondered multiple times whether we could
 dedicate a good chunk (or the whole) of a specific release to heads-down
 bug fixing/stabilization. As it has been stated elsewhere on this list:
 there's no pressing need for a whole lot of new code submissions; rather,
 we should focus on fixing issues that affect _existing_ users/operators.
 
 There's a whole world of GBP/NFV/VPN/DVR/TLA folks that would beg to differ
 on that viewpoint. :)

Yeah, I think declaring entire cycles to be stabilization- vs feature-
focused is far too coarse & inflexible. The most likely effect
of it would be that people who would otherwise contribute useful
features to OpenStack will simply walk away from the project for
that cycle.

I think that in fact the time when we need the strongest focus on
bug fixing is immediately after sizeable features have merged. I
don't think you want to give people the message that stabilization
work doesn't take place until the next 6 month cycle - that's far
too long to live with unstable code.

Currently we have a bit of focus on stabilization at each milestone
but to be honest most of that focus is on the last milestone only.
I'd like to see us have a much more explicit push for regular
stabilization work during the cycle, to really reinforce the
idea that stabilization is an activity that should be taking place
continuously. Be really proactive in designating a day of the week
(eg Bug Fix Wednesdays) and make a concerted effort during that
day to have reviewers & developers concentrate exclusively on
stabilization-related activities.

 That said, I entirely agree with you and wish efforts to stabilize would
 take precedence over feature work.

I find it really contradictory that we have such a strong desire for
stabilization and testing of our code, but at the same time so many
people argue that the core teams should have nothing at all to do with
the stable release branches which a good portion of our users will
actually be running. By ignoring stable branches, leaving them up to a
small team to handle, I think we're giving the wrong message about what
our priorities as a team are. I can't help thinking this filters
through to impact the way people think about their work on master.
Stabilization is important and should be baked into the DNA of our
teams to the extent that identifying bug fixes for stable is just
an automatic part of our dev lifecycle. The quantity of patches going
into stable isn't so high that it takes up significant resources when
spread across the entire core team.

Regards,
Daniel



Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Mark McLoughlin
On Tue, 2014-08-05 at 18:03 +0200, Thierry Carrez wrote:
 Hi everyone,
 
 With the incredible growth of OpenStack, our development community is
 facing complex challenges. How we handle those might determine the
 ultimate success or failure of OpenStack.
 
 With this cycle we hit new limits in our processes, tools and cultural
 setup. This resulted in new limiting factors on our overall velocity,
 which is frustrating for developers. This resulted in the burnout of key
 firefighting resources. This resulted in tension between people who try
 to get specific work done and people who try to keep a handle on the big
 picture.

Always fun catching up on threads like this after being away ... :)

I think the thread has revolved around three distinct areas:

  1) The per-project review backlog, its implications for per-project 
 velocity, and ideas for new workflows or tooling

  2) Cross-project scaling issues that get worse as we add more 
 integrated projects

  3) The factors that go into deciding whether a project belongs in the 
 integrated release - including the appropriateness of its scope,
 the soundness of its architecture and how production ready it is.

The first is important - hugely important - but I don't think it has any
bearing on the makeup, scope or contents of the integrated release,
though it certainly will have a huge bearing on the success of the
release and the project more generally.

The third strikes me as a part of the natural evolution around how we
think about the integrated release. I don't think there's any particular
crisis or massive urgency here. As the TC considers proposals to
integrate (or de-integrate) projects, we'll continue to work through
this. These debates are contentious enough that we should avoid adding
unnecessary drama to them by conflating the issues with more pressing,
urgent issues.

I think the second area is where we should focus. We're concerned that
we're hitting a breaking point with some cross-project issues - like
release management, the gate, a high level of non-deterministic test
failures, insufficient cross-project collaboration on technical debt
(e.g. via Oslo), difficulty in reaching consensus on new cross-project
initiatives (Sean gave the examples of Group Based Policy and Rally) -
such that drastic measures are required. Like maybe we should not accept
any new integrated projects in this cycle while we work through those
issues.

Digging deeper into that means itemizing these cross-project scaling
issues, figuring out which of them need drastic intervention, discussing
what the intervention might be and the realistic overall effects of
those interventions.

AFAICT, the closest we've come in the thread to that level of detail is
Sean's email here:

  http://lists.openstack.org/pipermail/openstack-dev/2014-August/042277.html

Mark.




Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Steven Hardy
On Wed, Aug 13, 2014 at 11:42:52AM +0100, Daniel P. Berrange wrote:
 On Thu, Aug 07, 2014 at 03:56:04AM -0700, Jay Pipes wrote:
  On 08/07/2014 02:12 AM, Kashyap Chamarthy wrote:
  On Thu, Aug 07, 2014 at 07:10:23AM +1000, Michael Still wrote:
  On Wed, Aug 6, 2014 at 2:03 AM, Thierry Carrez thie...@openstack.org 
  wrote:
  
  We seem to be unable to address some key issues in the software we
  produce, and part of it is due to strategic contributors (and core
  reviewers) being overwhelmed just trying to stay afloat of what's
  happening. For such projects, is it time for a pause ? Is it time to
  define key cycle goals and defer everything else ?
  
  [. . .]
  
  We also talked about tweaking the ratio of 'tech debt' runways vs
  'feature' runways. So, perhaps every second release is focussed on
  burning down tech debt and stability, whilst the others are focussed
  on adding features.
  
  I would suggest if we do such a thing, Kilo should be a 'stability'
  release.
  
  Excellent suggestion. I've wondered multiple times whether we could
  dedicate a good chunk (or the whole) of a specific release to heads-down
  bug fixing/stabilization. As it has been stated elsewhere on this list:
  there's no pressing need for a whole lot of new code submissions; rather,
  we should focus on fixing issues that affect _existing_ users/operators.
  
  There's a whole world of GBP/NFV/VPN/DVR/TLA folks that would beg to differ
  on that viewpoint. :)
 
 Yeah, I think declaring entire cycles to be stabilization- vs feature-
 focused is far too coarse & inflexible. The most likely effect
 of it would be that people who would otherwise contribute useful
 features to OpenStack will simply walk away from the project for
 that cycle.
 
 I think that in fact the time when we need the strongest focus on
 bug fixing is immediately after sizeable features have merged. I
 don't think you want to give people the message that stabilization
 work doesn't take place until the next 6 month cycle - that's far
 too long to live with unstable code.
 
 Currently we have a bit of focus on stabilization at each milestone
 but to be honest most of that focus is on the last milestone only.
 I'd like to see us have a much more explicit push for regular
 stabilization work during the cycle, to really reinforce the
 idea that stabilization is an activity that should be taking place
 continuously. Be really proactive in designating a day of the week
 (eg Bug Fix Wednesdays) and make a concerted effort during that
 day to have reviewers & developers concentrate exclusively on
 stabilization-related activities.
 
  That said, I entirely agree with you and wish efforts to stabilize would
  take precedence over feature work.
 
 I find it really contradictory that we have such a strong desire for
 stabilization and testing of our code, but at the same time so many
 people argue that the core teams should have nothing at all to do with
 the stable release branches which a good portion of our users will
 actually be running. 

Does such an argument actually exist?  My experience has been that
stable-maint folks are very accepting of help, and that it's relatively
easy for core reviewers with an interest in stable branch maintenance to
offer their services and become stable-maint core:

https://wiki.openstack.org/wiki/StableBranch#Joining_the_Team

 By ignoring stable branches, leaving them up to a
 small team to handle, I think we're giving the wrong message about what
 our priorities as a team are. I can't help thinking this filters
 through to impact the way people think about their work on master.

Who is ignoring stable branches?  This sounds like a project-specific
failing to me, as all experienced core reviewers should consider offering
their services to help with stable-maint activity.

I don't personally see any reason why the *entire* project core team has to
do this, but a subset of them should feel compelled to participate in the
stable-maint process if they have sufficient time, interest and historical
context; it's not some other team's job, IMO.

 Stabilization is important and should be baked into the DNA of our
 teams to the extent that identifying bug fixes for stable is just
 an automatic part of our dev lifecycle. The quantity of patches going
 into stable isn't so high that it takes up significant resources when
 spread across the entire core team.

+1

Also, contributors should be more actively encouraged to propose their
bugfixes as backports to stable branches themselves, instead of relying on
$someone_else to do it.
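To make that suggestion concrete, here is a hedged sketch of the usual
cherry-pick backport flow contributors were encouraged to follow. The
throwaway repo, file names and the `stable/icehouse` branch are made up
for the demo; in a real OpenStack tree the final step would be pushing
the backport to Gerrit for stable-maint review (`git review stable/icehouse`).

```shell
# Build a disposable repo so the backport can be demonstrated end to end.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name "Dev"

echo 'base' > code.txt
git add code.txt
git commit -qm "initial import"
git branch stable/icehouse            # stable branch forks from here

echo 'shiny' > feature.txt            # master-only feature work
git add feature.txt
git commit -qm "feature work"

echo 'guard against None' > fix.txt   # the bug fix we want on stable too
git add fix.txt
git commit -qm "fix: important bug"
fix_sha=$(git rev-parse HEAD)

git checkout -q stable/icehouse
git cherry-pick -x "$fix_sha"         # -x records "(cherry picked from ...)"
git log -1 | grep "cherry picked"
# in a real tree the next step would be: git review stable/icehouse
```

The `-x` flag matters: it leaves a traceable pointer from the stable
commit back to the master commit, which is what reviewers use to confirm
the patch already passed review on master.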

Steve



Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Daniel P. Berrange
On Wed, Aug 13, 2014 at 12:55:48PM +0100, Steven Hardy wrote:
 On Wed, Aug 13, 2014 at 11:42:52AM +0100, Daniel P. Berrange wrote:
  On Thu, Aug 07, 2014 at 03:56:04AM -0700, Jay Pipes wrote:
   That said, I entirely agree with you and wish efforts to stabilize would
   take precedence over feature work.
  
  I find it really contradictory that we have such a strong desire for
  stabilization and testing of our code, but at the same time so many
  people argue that the core teams should have nothing at all to do with
  the stable release branches which a good portion of our users will
  actually be running. 
 
 Does such an argument actually exist?  My experience has been that
 stable-maint folks are very accepting of help, and that it's relatively
 easy for core reviewers with an interest in stable branch maintenance to
 offer their services and become stable-maint core:
 
 https://wiki.openstack.org/wiki/StableBranch#Joining_the_Team

There are multiple responses to my mail here to the effect that core
teams should not be involved in stable branch work and should leave it up to
the distro maintainers unless individuals wish to volunteer:

  http://lists.openstack.org/pipermail/openstack-dev/2014-July/041409.html


  By ignoring stable branches, leaving them up to a
  small team to handle, I think we're giving the wrong message about what
  our priorities as a team are. I can't help thinking this filters
  through to impact the way people think about their work on master.
 
 Who is ignoring stable branches?  This sounds like a project specific
 failing to me, as all experienced core reviewers should consider offering
 their services to help with stable-maint activity.

 I don't personally see any reason why the *entire* project core team has to
 do this, but a subset of them should feel compelled to participate in the
 stable-maint process, if they have sufficient time, interest and historical
 context, it's not some other team IMO.

I think that stable branch review should be a key responsibility for anyone
on the core team, not solely the few who volunteer for the stable team. As
the number of projects in OpenStack grows, I think the idea of having a
single stable team with rights to approve across any project is ultimately
flawed, because it doesn't scale efficiently and its members don't have the
same level of domain knowledge as the respective project teams.

Regards,
Daniel



Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Mark McLoughlin
On Thu, 2014-08-07 at 09:30 -0400, Sean Dague wrote:

 While I definitely think re-balancing our quality responsibilities back
 into the projects will provide an overall better release, I think it's
 going to take a long time before it lightens our load to the point where
 we get more breathing room again.

I'd love to hear more about this re-balancing idea. It sounds like we
have some concrete ideas here and we're saying they're not relevant to
this thread because they won't be an immediate solution?

 This isn't just QA issues, it's a coordination issue on overall
 consistency across projects. Something that worked fine at 5 integrated
 projects, got strained at 9, and I think is completely untenable at 15.

I can certainly relate to that from experience with Oslo.

But if you take a concrete example: as more new projects emerged, it
became harder to get them all using oslo.messaging and using it in
consistent ways. That's become a lot better with Doug's idea of Oslo
project delegates.

But if we had not added those projects to the release, the only reason
the problem would be more manageable is that the use of
oslo.messaging would effectively have become a requirement for integration.
So, projects requesting integration would have to take cross-project
responsibilities more seriously for fear their application would be
denied.

That's a very sad conclusion. Our only tool for encouraging people to
take cross-project issues seriously is acceptance into the release and,
once that's achieved, the cross-project responsibilities aren't taken so
seriously?

I don't think it's so bleak as that - given the proper support,
direction and tracking I think we're seeing in Oslo how projects will
play their part in getting to cross-project consistency.

 I think one of the big issues with a large number of projects is that
 implications of implementation of one project impact others, but people
 don't always realize. Locally correct decisions for each project may not
 be globally correct for OpenStack. The GBP discussion, the Rally
 discussion, all are flavors of this.

I think we need two things here - good examples of how these
cross-project initiatives can succeed so people can learn from them, and
for the initiatives themselves to be patiently lead by those whose goal
is a cross-project solution.

It's hard work, absolutely no doubt. The point again, though, is that it
is possible to do this type of work in such a way that once a small
number of projects adopt the approach, most of the others will follow
quite naturally.

If I was trying to get a consistent cross-project approach in a
particular area, the least of my concerns would be whether Ironic,
Marconi, Barbican or Designate would be willing to fall in line behind a
cross-project consensus.

 People are frustrated with infra load, for instance. It's probably worth
 noting that the 'config' repo currently has more commits landed than any
 other project in OpenStack besides 'nova' in this release. It has 30% of
 the core team size of Nova (http://stackalytics.com/?metric=commits).

Yes, infra is an extremely busy project. I'm not sure I'd compare
infra/config commits to Nova commits in order to illustrate that,
though.

Infra is a massive endeavor; it's as critical a part of the project as
any project in the integrated release and, like other strategic
efforts, struggles to attract contributors from as diverse a range of
companies as the integrated projects do.

 So I do think we need to really think about what *must* be in OpenStack
 for it to be successful, and ensure that story is well thought out, and
 that the pieces which provide those features in OpenStack are clearly
 best of breed, so they are deployed in all OpenStack deployments, and
 can be counted on by users of OpenStack.

I do think we try hard to think this through, but no doubt we need to do
better. Is this conversation concrete enough to really move our thinking
along sufficiently, though?

 Because if every version of
 OpenStack deploys with a different Auth API (an example that's current
 but going away), we can't grow an ecosystem of tools around it.

There's a nice concrete example, but it's going away? What's the best
current example to talk through?

 This is organic definition of OpenStack through feedback with operators
 and developers on what's minimum needed and currently working well
 enough that people are happy to maintain it. And make that solid.
 
 Having a TC that is independently selected separate from the PTLs allows
 that group to try to make some holistic calls here.
 
 At the end of the day, that's probably going to mean saying No to more
 things. Every time I turn around everyone wants the TC to say No to
 things, just not to their particular thing. :) Which is human nature.
 But I think if we don't start saying No to more things we're going to
 end up with a pile of mud that no one is happy with.

That we're being so abstract about all of this is frustrating. I get
that no-one wants to start a 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Ihar Hrachyshka

On 13/08/14 14:07, Daniel P. Berrange wrote:
 On Wed, Aug 13, 2014 at 12:55:48PM +0100, Steven Hardy wrote:
 On Wed, Aug 13, 2014 at 11:42:52AM +0100, Daniel P. Berrange
 wrote:
 On Thu, Aug 07, 2014 at 03:56:04AM -0700, Jay Pipes wrote:
 That said, I entirely agree with you and wish efforts to
 stabilize would take precedence over feature work.
 
 I find it really contradictory that we have such a strong
 desire for stabilization and testing of our code, but at the
 same time so many people argue that the core teams should have
 nothing at all to do with the stable release branches which a
 good portion of our users will actually be running.
 
 Does such an argument actually exist?  My experience has been
 that stable-maint folks are very accepting of help, and that it's
 relatively easy for core reviewers with an interest in stable
 branch maintenance to offer their services and become
 stable-maint core:
 
 https://wiki.openstack.org/wiki/StableBranch#Joining_the_Team
 
 There are multiple responses to my mail here to the effect that
 core teams should not be involved in stable branch work and leave
 it upto the distro maintainers unless individuals wish to
 volunteer
 
 http://lists.openstack.org/pipermail/openstack-dev/2014-July/041409.html

It doesn't indicate that the stable maintainers' team is not willing to
accept help from core developers. Any core can easily step in and ask for
+2 permission on stable branches; it should not take much time to get
it. Granting +2 should mean that the new member has read and
understood the stable branch maintainership procedures (which are short
and clear).

 
 
 By ignoring stable branches, leaving them up to a small team to
 handle, I think we're giving the wrong message about what our
 priorities as a team are. I can't help thinking this
 filters through to impact the way people think about their work
 on master.
 
 Who is ignoring stable branches?  This sounds like a project
 specific failing to me, as all experienced core reviewers should
 consider offering their services to help with stable-maint
 activity.
 
 I don't personally see any reason why the *entire* project core
 team has to do this, but a subset of them should feel compelled
 to participate in the stable-maint process, if they have
 sufficient time, interest and historical context, it's not some
 other team IMO.
 
 I think that stable branch review should be a key responsibility
 for anyone on the core team, not solely those few who volunteer for
 stable team. As the number of projects in openstack grows I think
 the idea of having a single stable team with rights to approve
 across any project is ultimately flawed because it doesn't scale
 efficiently and they don't have the same level of domain knowledge
 as the respective project teams.

Indeed, stable maintainers sometimes lack full understanding of the
proposed patch. Anyway, if a patch is easy and it has a clear
description in its commit message and Launchpad, it's usually easy to
determine whether it's applicable for stable branches.

Yes, sometimes a stable maintainer is not able to determine if a patch
should really go into stable; in that case core developers should be
asked to vote on the patch. In most cases though, it's generally
assumed that the patch contents are ok (they were already merged in
master, meaning core developers already voted +2 on them before), and
there is no real need for special attention from core developers (who
are usually busy with ongoing work in master).

Note: there are sometimes patches that belong to stable branches only.
In those cases, stable maintainers should not be the ones to decide
whether the patch goes into the tree, because no due review was run
for the patch in master beforehand.

/Ihar



Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Russell Bryant
On 08/12/2014 10:05 PM, Michael Still wrote:
 there are hundreds of proposed features for
 Juno, nearly 100 of which have been accepted. However, we're kidding
 ourselves if we think we can land 100 blueprints in a release cycle.

FWIW, I think this is actually a huge improvement over previous cycles.  I
think we had almost double that number of blueprints on the list in the past.

I also don't think 100 is *completely* out of the question.  We're in
the 50-100 range already:

Icehouse - 67
Havana - 91
Grizzly - 66

Anyway, just wanted to share some numbers ... some improvements to
prioritization within that 100 is certainly still a good thing.

-- 
Russell Bryant



Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Mark McLoughlin
On Fri, 2014-08-08 at 15:36 -0700, Devananda van der Veen wrote:
 On Tue, Aug 5, 2014 at 10:02 AM, Monty Taylor mord...@inaugust.com wrote:

  Yes.
 
  Additionally, and I think we've been getting better at this in the 2 cycles
  that we've had an all-elected TC, I think we need to learn how to say 'no' on
  technical merit - and we need to learn how to say 'thank you for your
  effort, but this isn't working out'. Breaking up with someone is hard to do,
  but sometimes it's best for everyone involved.
 
 
 I agree.
 
 The challenge is scaling the technical assessment of projects. We're
 all busy, and digging deeply enough into a new project to make an
 accurate assessment of it is time consuming. Some times, there are
 impartial subject-matter experts who can spot problems very quickly,
 but how do we actually gauge fitness?

Yes, it's important the TC does this and it's obvious we need to get a
lot better at it.

The Marconi architecture threads are an example of us trying harder (and
kudos to you for taking the time), but it's a little disappointing how
it has turned out. On the one hand there's what seems like a 'this
doesn't make any sense' gut feeling, and on the other hand an earnest,
but hardly bite-sized, justification for how the API was chosen and how
it led to the architecture. It's frustrating that this appears to be
resulting in neither improved shared understanding nor improved
architecture. Yet everyone is trying really hard.

 Letting the industry field-test a project and feed their experience
 back into the community is a slow process, but that is the best
 measure of a project's success. I seem to recall this being an
 implicit expectation a few years ago, but haven't seen it discussed in
 a while.

I think I recall us discussing a 'must have feedback that it's
successfully deployed' requirement in the last cycle, but we recognized
that deployers often wait until a project is integrated.

 I'm not suggesting we make a policy of it, but if, after a
 few cycles, a project is still not meeting the needs of users, I think
 that's a very good reason to free up the hold on that role within the
 stack so other projects can try and fill it (assuming that is even a
 role we would want filled).

I'm certainly not against discussing de-integration proposals. But I
could imagine a case for de-integrating every single one of our
integrated projects. None of our software is perfect. How do we make
sure we approach this sanely, rather than run the risk of someone
starting a witch hunt because of a particular pet peeve?

I could imagine a really useful dashboard showing the current state of
projects along a bunch of different lines - summary of latest
deployments data from the user survey, links to known scalability
issues, limitations that operators should take into account, some
capturing of trends so we know whether things are improving. All of this
data would be useful to the TC, but also hugely useful to operators.

Mark.




Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Mark McLoughlin
On Tue, 2014-08-12 at 14:26 -0400, Eoghan Glynn wrote:
  It seems like this is exactly what the slots give us, though. The core
  review team picks a number of slots indicating how much work they think
  they can actually do (less than the available number of blueprints), and
  then blueprints queue up to get a slot based on priorities and turnaround
  time and other criteria that try to make slot allocation fair. By having
  the slots, not only is the review priority communicated to the review
  team, it is also communicated to anyone watching the project.
 
 One thing I'm not seeing shine through in this discussion of slots is
 whether any notion of individual cores, or small subsets of the core
 team with aligned interests, can champion blueprints that they have
 a particular interest in.
 
 For example it might address some pain-point they've encountered, or
 impact on some functional area that they themselves have worked on in
 the past, or line up with their thinking on some architectural point.
 
 But for whatever motivation, such small groups of cores currently have
 the freedom to self-organize in a fairly emergent way and champion
 individual BPs that are important to them, simply by *independently*
 giving those BPs review attention.
 
 Whereas under the slots initiative, presumably this power would be
 subsumed by the group will, as expressed by the prioritization
 applied to the holding pattern feeding the runways?
 
 I'm not saying this is good or bad, just pointing out a change that
 we should have our eyes open to.

Yeah, I'm really nervous about that aspect.

Say a contributor proposes a new feature, and a couple of core reviewers
think it's important/exciting enough for them to champion it, but somehow
the 'group will' is that it's not a high enough priority for this
release, even if everyone agrees that it is actually cool and useful.

What does imposing that 'group will' on the two core reviewers and
contributor achieve? That the contributor and reviewers will happily
turn their attention to some of the higher priority work? Or we lose a
contributor and two reviewers because they feel disenfranchised?
Probably somewhere in the middle.

On the other hand, what happens if work proceeds ahead even if not
deemed a high priority? I don't think we can say that the contributor
and two core reviewers were distracted from higher priority work,
because blocking this work is probably unlikely to shift their focus in
a productive way. Perhaps other reviewers are distracted because they
feel the work needs more oversight than just the two core reviewers? It
places more of a burden on the gate?

I dunno ... the consequences of imposing group will worry me more than
the consequences of allowing small groups to self-organize like this.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Kyle Mestery
On Wed, Aug 13, 2014 at 5:15 AM, Daniel P. Berrange berra...@redhat.com wrote:
 On Mon, Aug 11, 2014 at 10:30:12PM -0700, Joe Gordon wrote:
 On Fri, Aug 8, 2014 at 6:58 AM, Kyle Mestery mest...@mestery.com wrote:
   I really like this idea. As Michael and others alluded to above, we are
   attempting to set cycle goals for Kilo in Nova, but I think it is worth
   doing for all of OpenStack. We would like to make a list of key goals
   before the summit so that we can plan our summit sessions around the
   goals. On a really high level, one way to look at this is: in Kilo we
   need to pay down our technical debt.
  
   The slots/runway idea is somewhat separate from defining key cycle
   goals; we can approve blueprints based on key cycle goals without
   doing slots. But with so many concurrent blueprints up for review at
   any given time, the review teams are doing a lot of multitasking, and
   humans are not very good at multitasking. Hopefully slots can help
   address this issue and allow us to actually merge more blueprints in
   a given cycle.
  
  I'm not 100% sold on what the slots idea buys us. What I've seen this
  cycle in Neutron is that we have a LOT of BPs proposed. We approve
  them after review. And then we hit one of two issues: Slow review
  cycles, and slow code turnaround issues. I don't think slots would
  help this, and in fact may cause more issues. If we approve a BP and
  give it a slot for which the eventual result is slow review and/or
  code review turnaround, we're right back where we started. Even worse,
  we may have not picked a BP for which the code submitter would have
  turned around reviews faster. So we've now doubly hurt ourselves. I
  have no idea how to solve this issue, but by over subscribing the
  slots (e.g. over approving), we allow for the submissions with faster
  turnaround a chance to merge quicker. With slots, we've removed this
  capability by limiting what is even allowed to be considered for
  review.
 

 Slow review: by limiting the number of blueprints up, we hope to focus
 our efforts on fewer concurrent things.
 Slow code turnaround: when a blueprint is given a slot (runway), we will
 first make sure the author/owner is available for fast code turnaround.

 If a blueprint review stalls out (slow code turnaround, stalemate in
 review discussions, etc.) we will take the slot and give it to another
 blueprint.

 This idea of fixed slots is not really very appealing to me. It sounds
 like we're adding a significant amount of bureaucratic overhead to our
 development process that is going to make us increasingly inefficient.
 I don't want to waste time waiting for a stalled blueprint to time out
 before we give the slot to another blueprint. On any given day when I
 have spare review time available I'll just review anything that is up
 and waiting for review. If we can set a priority for the things up for
 review that is great since I can look at those first, but the idea of
 having fixed slots for things we should review does not do anything to
 help my review efficiency IMHO.

 I also think it will kill our flexibility in approving and dealing with
 changes that are not strategically important, but nonetheless go
 through our blueprint/specs process. There have been a bunch of things
 I've dealt with that are not strategic, but have low overhead to code
 and review, and are easily dealt with in the slack time between looking
 at the high priority reviews. It sounds like we're going to lose our
 flexibility to pull in stuff like this if it only gets a chance when
 strategically important stuff is not occupying a slot.

I agree with all of Daniel's comments here, and these are the same
reasons I'm not in favor of fixed slots or runways. As ttx has
stated in this thread, we have done a really poor job as a project of
understanding what the priority items for a release are, and sticking
to them. Trying to solve that, putting focus on the priority items
while still allowing for smaller, low-overhead code and reviews, should
be the priority here.

Thanks,
Kyle

 Regards,
 Daniel
 --
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Mark McLoughlin
On Tue, 2014-08-12 at 14:12 -0700, Joe Gordon wrote:


 Here is the full nova proposal on "Blueprint in Kilo: Runways and
 Project Priorities":
  
 https://review.openstack.org/#/c/112733/
 http://docs-draft.openstack.org/33/112733/4/check/gate-nova-docs/5f38603/doc/build/html/devref/runways.html

Thanks again for doing this.

Four points in the discussion jump out at me. Let's see if I can
paraphrase without misrepresenting :)

  - ttx - we need tools to be able to visualize these runways

  - danpb - the real problem here is that we don't have good tools to 
help reviewers maintain a todo list which feeds, in part, off 
blueprint prioritization

  - eglynn - what are the implications for our current ability for 
groups within the project to self-organize?

  - russellb - how is this different from reviewers sponsoring
blueprints, and how will it work better?


I've been struggling to articulate a tooling idea for a while now. Let
me try again based on the runways idea and the thoughts above ...


When a reviewer sits down to do some reviews, their goal should be to
work through the small number of runways they're signed up to and drive
the list of reviews that need their attention to zero.

Reviewers should be able to create their own runways and allow others
to sign up to them.

The reviewers responsible for that runway are responsible for pulling
new reviews from explicitly defined feeder runways.

Some feeder runways could be automated; no more than a search query
for, say, new libvirt patches which aren't already in the libvirt
driver runway.
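An automated feeder runway of that sort could be little more than a filter over the open-changes list. A minimal sketch of the idea (the runway names, topics, and change-dict shape here are illustrative assumptions, not an existing tool or the gerrit API):

```python
# Sketch only: "runways" modeled as lists of change dicts; the topic
# naming convention (bp/...) and ids are hypothetical examples.

def feeder_runway(changes, predicate, target_runway):
    """Select open changes matching a predicate that are not
    already tracked in the target runway."""
    tracked = {c["id"] for c in target_runway}
    return [c for c in changes if predicate(c) and c["id"] not in tracked]

# Example: feed new libvirt patches into the libvirt driver runway.
open_changes = [
    {"id": "I1", "topic": "bp/libvirt-storage-pools"},
    {"id": "I2", "topic": "bp/scheduler-cleanup"},
    {"id": "I3", "topic": "bp/libvirt-cpu-pinning"},
]
libvirt_runway = [{"id": "I3", "topic": "bp/libvirt-cpu-pinning"}]

new = feeder_runway(open_changes,
                    lambda c: c["topic"].startswith("bp/libvirt"),
                    libvirt_runway)
print([c["id"] for c in new])  # ['I1']
```

In practice the predicate would be backed by a real gerrit search, but the flow between runways stays this simple: query, subtract what's tracked, enqueue the rest.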

All of this activity should be visible to everyone. It should be
possible to look at all the runways, see what runways a patch is in,
understand the flow between runways, etc.


There's a lot of detail that would have to be worked out, but I'm pretty
convinced there's an opportunity to carve up the review backlog, empower
people to help out with managing the backlog, give reviewers manageable
queues to stay on top of, help ensure that project prioritization is one
of the drivers of reviewer activity, and increase contributor visibility
into how decisions are made.

Mark.





Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Russell Bryant
On 08/13/2014 08:52 AM, Mark McLoughlin wrote:
 On Tue, 2014-08-12 at 14:26 -0400, Eoghan Glynn wrote:
   It seems like this is exactly what the slots give us, though. The core 
 review
 team picks a number of slots indicating how much work they think they can
 actually do (less than the available number of blueprints), and then
 blueprints queue up to get a slot based on priorities and turnaround time
 and other criteria that try to make slot allocation fair. By having the
 slots, not only is the review priority communicated to the review team, it
 is also communicated to anyone watching the project.

 One thing I'm not seeing shine through in this discussion of slots is
 whether any notion of individual cores, or small subsets of the core
 team with aligned interests, can champion blueprints that they have
 a particular interest in.

 For example it might address some pain-point they've encountered, or
 impact on some functional area that they themselves have worked on in
 the past, or line up with their thinking on some architectural point.

 But for whatever motivation, such small groups of cores currently have
 the freedom to self-organize in a fairly emergent way and champion
 individual BPs that are important to them, simply by *independently*
 giving those BPs review attention.

 Whereas under the slots initiative, presumably this power would be
 subsumed by the group will, as expressed by the prioritization
 applied to the holding pattern feeding the runways?

 I'm not saying this is good or bad, just pointing out a change that
 we should have our eyes open to.
 
 Yeah, I'm really nervous about that aspect.
 
 Say a contributor proposes a new feature, a couple of core reviewers
 think it's important and exciting enough for them to champion it, but somehow
 the 'group will' is that it's not a high enough priority for this
 release, even if everyone agrees that it is actually cool and useful.
 
 What does imposing that 'group will' on the two core reviewers and
 contributor achieve? That the contributor and reviewers will happily
 turn their attention to some of the higher priority work? Or we lose a
 contributor and two reviewers because they feel disenfranchised?
 Probably somewhere in the middle.
 
 On the other hand, what happens if work proceeds ahead even if not
 deemed a high priority? I don't think we can say that the contributor
 and two core reviewers were distracted from higher priority work,
 because blocking this work is probably unlikely to shift their focus in
 a productive way. Perhaps other reviewers are distracted because they
 feel the work needs more oversight than just the two core reviewers? It
 places more of a burden on the gate?
 
 I dunno ... the consequences of imposing group will worry me more than
 the consequences of allowing small groups to self-organize like this.

Yes, this is by far my #1 concern with the plan.

I think perhaps some middle ground makes sense.

1) Start doing a better job of generating a priority list, and
identifying the highest priority items based on group will.

2) Expect that reviewers use the priority list to influence their
general review time.

3) Don't actually block other things, should small groups self-organize
and decide it's important enough to them, even if not to the group as a
whole.

That sort of approach still sounds like an improvement over what we have
today, which is a lack of good priority communication to direct general
review time.

-- 
Russell Bryant



Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Daniel P. Berrange
On Wed, Aug 13, 2014 at 09:11:26AM -0400, Russell Bryant wrote:
 On 08/13/2014 08:52 AM, Mark McLoughlin wrote:
  On Tue, 2014-08-12 at 14:26 -0400, Eoghan Glynn wrote:
It seems like this is exactly what the slots give us, though. The core 
  review
  team picks a number of slots indicating how much work they think they can
  actually do (less than the available number of blueprints), and then
  blueprints queue up to get a slot based on priorities and turnaround time
  and other criteria that try to make slot allocation fair. By having the
  slots, not only is the review priority communicated to the review team, it
  is also communicated to anyone watching the project.
 
  One thing I'm not seeing shine through in this discussion of slots is
  whether any notion of individual cores, or small subsets of the core
  team with aligned interests, can champion blueprints that they have
  a particular interest in.
 
  For example it might address some pain-point they've encountered, or
  impact on some functional area that they themselves have worked on in
  the past, or line up with their thinking on some architectural point.
 
  But for whatever motivation, such small groups of cores currently have
  the freedom to self-organize in a fairly emergent way and champion
  individual BPs that are important to them, simply by *independently*
  giving those BPs review attention.
 
  Whereas under the slots initiative, presumably this power would be
  subsumed by the group will, as expressed by the prioritization
  applied to the holding pattern feeding the runways?
 
  I'm not saying this is good or bad, just pointing out a change that
  we should have our eyes open to.
  
  Yeah, I'm really nervous about that aspect.
  
  Say a contributor proposes a new feature, a couple of core reviewers
  think it's important and exciting enough for them to champion it, but somehow
  the 'group will' is that it's not a high enough priority for this
  release, even if everyone agrees that it is actually cool and useful.
  
  What does imposing that 'group will' on the two core reviewers and
  contributor achieve? That the contributor and reviewers will happily
  turn their attention to some of the higher priority work? Or we lose a
  contributor and two reviewers because they feel disenfranchised?
  Probably somewhere in the middle.
  
  On the other hand, what happens if work proceeds ahead even if not
  deemed a high priority? I don't think we can say that the contributor
  and two core reviewers were distracted from higher priority work,
  because blocking this work is probably unlikely to shift their focus in
  a productive way. Perhaps other reviewers are distracted because they
  feel the work needs more oversight than just the two core reviewers? It
  places more of a burden on the gate?
  
  I dunno ... the consequences of imposing group will worry me more than
  the consequences of allowing small groups to self-organize like this.
 
 Yes, this is by far my #1 concern with the plan.
 
 I think perhaps some middle ground makes sense.
 
 1) Start doing a better job of generating a priority list, and
 identifying the highest priority items based on group will.
 
 2) Expect that reviewers use the priority list to influence their
 general review time.
 
 3) Don't actually block other things, should small groups self-organize
 and decide it's important enough to them, even if not to the group as a
 whole.
 
 That sort of approach still sounds like an improvement over what we have
 today, which is a lack of good priority communication to direct general
 review time.

A key thing for the priority list is that it is in a machine-consumable
format we can query somehow - even if that's a simple static text file
in CSV format or something. As long as I can automate fetching and
parsing it to correlate priorities with gerrit query results in some
manner, that's the key from my POV.
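Something as simple as the following would do what Daniel describes. A sketch only: the CSV columns, blueprint names, and change-dict shape are illustrative assumptions, and a real version would fetch the CSV over HTTP and pull changes from gerrit rather than using inline sample data:

```python
import csv
import io

# Hypothetical priority list, as a static CSV file might look.
PRIORITIES_CSV = """\
blueprint,priority
libvirt-storage-pools,high
scheduler-cleanup,low
cpu-pinning,medium
"""

RANK = {"high": 0, "medium": 1, "low": 2}

def load_priorities(text):
    """Parse the CSV into a blueprint -> priority mapping."""
    return {row["blueprint"]: row["priority"]
            for row in csv.DictReader(io.StringIO(text))}

def sort_reviews(changes, priorities):
    """Order gerrit-style query results by blueprint priority;
    unprioritized work sorts last."""
    def key(change):
        bp = change["topic"].removeprefix("bp/")
        return RANK.get(priorities.get(bp), len(RANK))
    return sorted(changes, key=key)

changes = [
    {"id": "I1", "topic": "bp/scheduler-cleanup"},
    {"id": "I2", "topic": "bp/libvirt-storage-pools"},
    {"id": "I3", "topic": "bp/cpu-pinning"},
]
ordered = sort_reviews(changes, load_priorities(PRIORITIES_CSV))
print([c["id"] for c in ordered])  # ['I2', 'I3', 'I1']
```

The point of the static-file format is exactly this: the correlation logic fits in a few lines, so each reviewer can wire it into whatever gerrit dashboard or query tooling they already use.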

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Eoghan Glynn


It seems like this is exactly what the slots give us, though. The core
review
   team picks a number of slots indicating how much work they think they can
   actually do (less than the available number of blueprints), and then
   blueprints queue up to get a slot based on priorities and turnaround time
   and other criteria that try to make slot allocation fair. By having the
   slots, not only is the review priority communicated to the review team,
   it
   is also communicated to anyone watching the project.
  
  One thing I'm not seeing shine through in this discussion of slots is
  whether any notion of individual cores, or small subsets of the core
  team with aligned interests, can champion blueprints that they have
  a particular interest in.
  
  For example it might address some pain-point they've encountered, or
  impact on some functional area that they themselves have worked on in
  the past, or line up with their thinking on some architectural point.
  
  But for whatever motivation, such small groups of cores currently have
  the freedom to self-organize in a fairly emergent way and champion
  individual BPs that are important to them, simply by *independently*
  giving those BPs review attention.
  
  Whereas under the slots initiative, presumably this power would be
  subsumed by the group will, as expressed by the prioritization
  applied to the holding pattern feeding the runways?
  
  I'm not saying this is good or bad, just pointing out a change that
  we should have our eyes open to.
 
 Yeah, I'm really nervous about that aspect.
 
 Say a contributor proposes a new feature, a couple of core reviewers
 think it's important and exciting enough for them to champion it, but somehow
 the 'group will' is that it's not a high enough priority for this
 release, even if everyone agrees that it is actually cool and useful.
 
 What does imposing that 'group will' on the two core reviewers and
 contributor achieve? That the contributor and reviewers will happily
 turn their attention to some of the higher priority work? Or we lose a
 contributor and two reviewers because they feel disenfranchised?
 Probably somewhere in the middle.

Yeah, the outcome probably depends on the motivation/incentives that
are operating for individual contributors.

If their brief or primary interest was to land *specific* features,
then they may sit out the cycle, or just work away on their pet features
anyway under the radar.

If, OTOH, they have more of an over-arching make the project better
goal, they may gladly (or reluctantly) apply themselves to the group-
defined goals.

However, human nature being what it is, I'd suspect that the energy
levels applied to self-selected goals may be higher in the average case.
Just a gut feeling on that, no hard data to back it up. 

 On the other hand, what happens if work proceeds ahead even if not
 deemed a high priority? I don't think we can say that the contributor
 and two core reviewers were distracted from higher priority work,
 because blocking this work is probably unlikely to shift their focus in
 a productive way. Perhaps other reviewers are distracted because they
 feel the work needs more oversight than just the two core reviewers? It
 places more of a burden on the gate?

Well I think we have to accept the reality that we can't force people
to work on stuff they don't want to, or entirely stop them working on
the stuff that they do.

So inevitably there will be some deviation from the shining path, as
set out in the group will. Agreed that blocking this work from, say,
being proposed on gerrit won't necessarily have the desired outcome.
(OK, it could stop the transitive distraction of other reviewers, and
remove the gate load, but won't restore the time spent working off-piste
by the contributor and two cores in your example)

 I dunno ... the consequences of imposing group will worry me more than
 the consequences of allowing small groups to self-organize like this.

Yep, this capacity for self-organization of informal groups with aligned
interests (as opposed to corporate affiliations) is, or at least should
be IMO, seen as one of the primary strengths of the open source model.

Cheers,
Eoghan


