Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-21 Thread Mike Bayer



On 05/20/2017 12:04 PM, Julien Danjou wrote:

On Fri, May 19 2017, Mike Bayer wrote:


IMO that's a bug for them.


Of course it's a bug. IIRC Mehdi tried to fix it without much success.


I'm inspired to see that Keystone, Nova etc. are
able to move between an eventlet backend and a mod_wsgi backend.  IMO
eventlet is really not needed for those services that present a REST interface.
Although for a message queue with lots of long-running connections that receive
events, that's a place where I *would* want to use a polling / non-blocking
model.  But I'd use it explicitly, not with monkeypatching.


+1


I'd ask why not oslo.cotyledon but it seems there's a faction here that is
overall moving out of the OpenStack umbrella in any case.


Not oslo because it can be used by other projects than just OpenStack.
And it's a condition of success. As Mehdi said, Oslo has been deserted
in recent cycles, so putting a lib there has very little chance of
seeing its community and maintenance help grow. Whereas trying to reach
the whole Python ecosystem is more likely to get traction.

As a maintainer of SQLAlchemy I'm surprised you even suggest that. Or do
you plan on doing oslo.sqlalchemy? ;)


I do oslo.db (which also is not "abandoned" in any way).  The point of 
oslo is that it is an OpenStack-centric mediation layer between some 
common service/library and OpenStack.
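
[For illustration, a minimal sketch of the kind of mediation oslo.db provides, following the enginefacade pattern from the oslo.db documentation; the Context and Resource classes here are hypothetical:]

from oslo_db.sqlalchemy import enginefacade

@enginefacade.transaction_context_provider
class Context(object):
    """Request context; enginefacade attaches a session to it."""

@enginefacade.reader
def get_resource(context, resource_id):
    # oslo.db manages the transaction scope; SQLAlchemy sits underneath.
    # Resource is a hypothetical SQLAlchemy model.
    return context.session.query(Resource).get(resource_id)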


It looks like there already is essentially such a layer for cotyledon. 
I'd just name it "oslo.cotyledon" :)  or oslo.something.  We have a 
moose.  It's cool.






Basically I think OpenStack should be getting off eventlet in a big way so I
guess my sentiment here is that the Gnocchi / Cotyledon / etc. faction is just
splitting off rather than serving as any kind of direction for the rest of
OpenStack to start looking.  But that's only an impression, maybe projects will
use Cotyledon anyway.   If every project goes off and uses something completely
different though, then I think we're losing.   The point of oslo was to prevent
that.


I understand your concern and opinion. I think you, me and Mehdi don't
have the same experience as contributors in OpenStack. I invite you to try
moving any major OpenStack project to something like oslo.service2 or
Cotyledon, or to achieve any technical debt resolution in OpenStack, to
get a view on how hard it is to tackle. Then you'll see where we stand. :)


Sure, that's an area where I think the whole direction of OpenStack 
would benefit from more centralized planning, but I have been here just 
enough to observe that this kind of thing has been discussed before and 
it is of course very tricky to implement.





Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-20 Thread Julien Danjou
On Fri, May 19 2017, Mike Bayer wrote:

> IMO that's a bug for them.

Of course it's a bug. IIRC Mehdi tried to fix it without much success.

> I'm inspired to see that Keystone, Nova etc. are
> able to move between an eventlet backend and a mod_wsgi backend.  IMO
> eventlet is really not needed for those services that present a REST 
> interface.
> Although for a message queue with lots of long-running connections that 
> receive
> events, that's a place where I *would* want to use a polling / non-blocking
> model.  But I'd use it explicitly, not with monkeypatching.

+1

> I'd ask why not oslo.cotyledon but it seems there's a faction here that is
> overall moving out of the OpenStack umbrella in any case.

Not oslo because it can be used by other projects than just OpenStack.
And it's a condition of success. As Mehdi said, Oslo has been deserted
in recent cycles, so putting a lib there has very little chance of
seeing its community and maintenance help grow. Whereas trying to reach
the whole Python ecosystem is more likely to get traction.

As a maintainer of SQLAlchemy I'm surprised you even suggest that. Or do
you plan on doing oslo.sqlalchemy? ;)

> Basically I think OpenStack should be getting off eventlet in a big way so I
> guess my sentiment here is that the Gnocchi / Cotyledon / etc. faction is just
> splitting off rather than serving as any kind of direction for the rest of
> OpenStack to start looking.  But that's only an impression, maybe projects
> will use Cotyledon anyway.   If every project goes off and uses something
> completely different though, then I think we're losing.   The point of oslo
> was to prevent that.

I understand your concern and opinion. I think you, me and Mehdi don't
have the same experience as contributors in OpenStack. I invite you to try
moving any major OpenStack project to something like oslo.service2 or
Cotyledon, or to achieve any technical debt resolution in OpenStack, to
get a view on how hard it is to tackle. Then you'll see where we stand. :)

Especially when your job is not doing that, but e.g. working on
Telemetry. :)

-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info




Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-19 Thread Mike Bayer



On 05/19/2017 04:23 AM, Mehdi Abaakouk wrote:



And some applications rely on implicit internal
contracts/behaviors/assumptions.


IMO that's a bug for them.   I'm inspired to see that Keystone, Nova 
etc. are able to move between an eventlet backend and a mod_wsgi 
backend.   IMO eventlet is really not needed for those services that 
present a REST interface.   Although for a message queue with lots of 
long-running connections that receive events, that's a place where I 
*would* want to use a polling / non-blocking model.  But I'd use it 
explicitly, not with monkeypatching.




Since a new API is needed, why not write a new lib? Anyway, when you
get rid of eventlet you have so many things to change to ensure your
performance will not drop. 


While I don't know the specifics for your project(s), I don't buy that 
in general, because IMO eventlet is not giving us any performance boost 
in the majority of cases.   Most of our IO is blocking on the database, 
and all the applications have DB connections throttled at about 50 per 
process at most, and that's only recently; it used to be just 15.
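
[The throttling described above is ordinary SQLAlchemy pool configuration; a sketch with the ~50-connection ceiling mentioned, where the database URL is hypothetical:]

from sqlalchemy import create_engine

# pool_size + max_overflow caps this process at 50 connections
engine = create_engine(
    "mysql+pymysql://user:password@dbhost/nova",
    pool_size=10,
    max_overflow=40,
)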




Changing from oslo.service to cotyledon is
really easy by comparison.


I'd ask why not oslo.cotyledon but it seems there's a faction here that 
is overall moving out of the OpenStack umbrella in any case.





Docs state: "oslo.service being impossible to fix and bringing an 
heavy dependency on eventlet, "  is there a discussion thread on that?


Not really, I just put some comments on reviews and discussed this on IRC,
since nobody except Telemetry has expressed/tried to get rid of eventlet.


Many (most?) of the web services can run under mod_wsgi with threads; 
Keystone seems to be standard on this now and I get the impression Nova 
is going in that direction too.  (Anyone correct me if I'm wrong on 
any of that, I tried to ask around on IRC but it's too late in the day.)






For the story, we first got rid of eventlet in Telemetry, fixing a couple of
performance issues due to using threading/processes instead of
greenlets/greenthreads.

Then we fell into some weird issues due to the oslo.service internal
implementation: processes not exiting properly, signals not received,
deadlocks when signals are received, unkillable processes,
tooz/oslo.messaging heartbeats not scheduled correctly, workers not
restarted when they are dead. All of what we expect from oslo.service
was not working correctly anymore because we removed the line
'eventlet.monkey_patch()'.


So, I've used gevent more than eventlet in my own upstream non-blocking 
work, and while this opinion is like spilling water in the ocean, I 
think applications should never use monkeypatching.   They should call 
into the eventlet/gevent greenlet API directly if that's what they want 
to do.
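
[A minimal sketch of that explicit style — eventlet.GreenPool and spawn_n are eventlet's public API; the event source and handler are invented:]

import eventlet  # note: no eventlet.monkey_patch() anywhere

def handle_event(event):
    pass  # non-blocking I/O work for one event

pool = eventlet.GreenPool(size=1000)

def serve(event_source):
    for event in event_source:
        pool.spawn_n(handle_event, event)
    pool.waitall()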


Of course this means that database logic has to move out of greenlets 
entirely, since none of the DBAPIs use non-blocking IO.  That's fine. 
Database-oriented business logic should not be in greenlets.   I've 
written about this as well.   If one is familiar enough with greenlets 
and threads, one can write an application that makes explicit use of 
both.   However, that's application level stuff.   Web service apps like 
Nova conductor / Neutron / Keystone should not be aware of any of that. 
They should be coded to assume nothing about context switching.  IMO 
the threading model is "safer" to code towards since you have to handle 
locking and concurrency contingencies explicitly, without hardwiring that 
to your assumptions about when context switching is to take place and 
when it's not.
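
[A sketch of the "safer" threaded style described, where the invariant is guarded by an explicit lock rather than by assumptions about where green threads happen to yield; the task-claiming logic is invented:]

import threading

_lock = threading.Lock()
_inflight = {}

def claim(task_id, owner):
    # a context switch may happen at any point, so guard explicitly
    with _lock:
        if task_id in _inflight:
            return False
        _inflight[task_id] = owner
        return True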





For example, when oslo.service receives a signal, it can arrive on any
thread; this thread is paused, the callback is run in this thread's
context, but if the callback tries to talk to your code in this thread,
the process locks up, because your code is paused. Python
offers a tool to avoid that (signal.set_wakeup_fd), but oslo.service doesn't
use it. I have tried to run callbacks only on the main thread with
set_wakeup_fd, to avoid this kind of issue, but I failed. The whole
oslo.service code is clearly not designed to be threadsafe/signal-safe.
Well, it works for eventlet because you have only one real thread.

And this is just one example of the complicated things I tried to fix
before starting cotyledon.


I've no doubt oslo.service has major eventlet problems baked in; I've 
looked at it a little bit but didn't go too far with it.   That still 
doesn't mean there shouldn't be an "oslo.service2" that can effectively 
produce a concurrency-agnostic platform.   It of course would have the 
goal in mind of moving projects off eventlet since, as I mentioned, 
eventlet monkeypatching should not be used, which means our services 
should do most of their "implicitly concurrent" work within threads.
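
[A sketch of "implicitly concurrent" work within threads using only the stdlib; the message handler is invented:]

from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=50)

def process_message(msg):
    pass  # blocking DB or RPC work is fine here; it blocks one thread only

def on_message(msg):
    executor.submit(process_message, msg)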


Basically I think OpenStack should be getting off eventlet in a big way 
so I guess my sentiment here is that the Gnocchi / Cotyledon / etc. 
faction is just splitting off rather than serving as any kind of 
direction for the rest of OpenStack to start looking.  But that's only 
an impression, maybe projects will use Cotyledon anyway.   If every 
project goes off and uses something completely different though, then I 
think we're losing.   The point of oslo was to prevent that.

Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-19 Thread Joshua Harlow

Mehdi Abaakouk wrote:

Not really, I just put some comments on reviews and discussed this on IRC,
since nobody except Telemetry has expressed/tried to get rid of eventlet.


Octavia is using cotyledon and they have gotten rid of eventlet. Didn't 
seem like it was that hard to do either (of course the experience of 
how easy it was is likely not transferable to other projects...)


-Josh



Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-19 Thread Doug Hellmann
Excerpts from Mehdi Abaakouk's message of 2017-05-19 10:23:09 +0200:
> On Thu, May 18, 2017 at 03:16:20PM -0400, Mike Bayer wrote:
> >
> >
> >On 05/18/2017 02:37 PM, Julien Danjou wrote:
> >>On Thu, May 18 2017, Mike Bayer wrote:
> >>
> >>>I'm not understanding this?  do you mean this?
> >>
> >>In the long run, yes. Unfortunately, we're not happy with the way Oslo
> >>libraries are managed, and they are too OpenStack-centric. I've tried for
> >>the last couple of years to move things on, but it's barely possible to
> >>deprecate anything and contribute, so I feel it's safer to start a fresh
> >>and better alternative. Cotyledon by Mehdi is a good example of what can
> >>be achieved.
> >
> >
> >here's cotyledon:
> >
> >https://cotyledon.readthedocs.io/en/latest/
> >
> >
> >replaces oslo.service with a multiprocessing approach that doesn't use 
> >eventlet.  Great!  Any OpenStack service that rides on oslo.service 
> >would like to be able to transparently switch from eventlet to 
> >multiprocessing the same way they can more or less switch to mod_wsgi 
> >at the moment.   IMO this should be part of oslo.service itself.   
> 
> I quickly presented cotyledon a summit or two ago; we said we would wait
> to see if other projects want to get rid of eventlet before adopting
> such a new lib (or merging it with oslo.service).
> 
> But for now, the lib is still under the Telemetry umbrella.
> 
> Keeping the current API and supporting both is (I think) impossible.
> The current API is too eventlet-centric, and some applications rely
> on implicit internal contracts/behaviors/assumptions.
> 
> Dealing with concurrency/thread/signal safety in a multithreading app or
> an eventlet app is already hard enough, so having one lib that deals with
> both is even harder. We already have oslo.messaging dealing with
> 3 thread models; it is just an unending story of race conditions.
> 
> Since a new API is needed, why not write a new lib? Anyway, when you
> get rid of eventlet you have so many things to change to ensure your
> performance will not drop. Changing from oslo.service to cotyledon is
> really easy by comparison.
> 
> >Docs state: "oslo.service being impossible to fix and bringing an 
> >heavy dependency on eventlet, "  is there a discussion thread on that?
> 
> Not really, I just put some comments on reviews and discussed this on IRC,
> since nobody except Telemetry has expressed/tried to get rid of eventlet.
> 
> For the story, we first got rid of eventlet in Telemetry, fixing a couple of
> performance issues due to using threading/processes instead of
> greenlets/greenthreads.
> 
> Then we fell into some weird issues due to the oslo.service internal
> implementation: processes not exiting properly, signals not received,
> deadlocks when signals are received, unkillable processes,
> tooz/oslo.messaging heartbeats not scheduled correctly, workers not
> restarted when they are dead. All of what we expect from oslo.service
> was not working correctly anymore because we removed the line
> 'eventlet.monkey_patch()'.
> 
> For example, when oslo.service receives a signal, it can arrive on any
> thread; this thread is paused, the callback is run in this thread's
> context, but if the callback tries to talk to your code in this thread,
> the process locks up, because your code is paused. Python
> offers a tool to avoid that (signal.set_wakeup_fd), but oslo.service doesn't
> use it. I have tried to run callbacks only on the main thread with
> set_wakeup_fd, to avoid this kind of issue, but I failed. The whole
> oslo.service code is clearly not designed to be threadsafe/signal-safe.
> Well, it works for eventlet because you have only one real thread.
> 
> And this is just one example of the complicated things I tried to fix
> before starting cotyledon.
>
> >I'm finding it hard to believe that only a few years ago, everyone saw 
> >the wisdom of not re-implementing everything in their own projects and 
> >using a common layer like oslo, and already that whole situation is 
> >becoming forgotten - not just for consistency, but also when a bug is 
> >found, if fixed in oslo it gets fixed for everyone.
> 
> Because the internals of cotyledon and oslo.service are so different,
> having the code in oslo or not doesn't help maintenance anymore.
> Cotyledon is a lib; code and bugs :) can already be shared between
> projects that don't want eventlet.

Yes, I remember discussing this some time ago and I agree that starting
a new library was the right approach. The changes needed to make
oslo.service work without eventlet are too big, and rather than have 2
separate implementations in the same library, a second library makes
sense.

> >An increase in the scope of oslo is essential to dealing with the 
> >issue of "complexity" in OpenStack. 
> 
> Increasing the scope of oslo works only if the libs have maintainers. But
> most of them lack people today. Most oslo libs are in maintenance
> mode. But that's another subject.
> 
> >The state of OpenStack as dozens of individual software projects 
> >each with their own idiosyncratic quirks, CLIs, process and deployment 
> >models, and everything else that is visible to operators is ground 
> >zero for perceived operator complexity.

Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-19 Thread Julien Danjou
On Fri, May 19 2017, Mehdi Abaakouk wrote:

> Not really, I just put some comments on reviews and discussed this on IRC,
> since nobody except Telemetry has expressed/tried to get rid of eventlet.

TBH Keystone got rid of it too. But they only provide WSGI servers; they
don't build any daemons, so they neither need nor use Cotyledon or
oslo.service. :)

> Because the internals of cotyledon and oslo.service are so different,
> having the code in oslo or not doesn't help maintenance anymore.
> Cotyledon is a lib; code and bugs :) can already be shared between
> projects that don't want eventlet.

Cotyledon is explicitly better just by being out of Oslo, because it's
usable by the whole Python ecosystem. :)

-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info




Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-19 Thread Mehdi Abaakouk

On Thu, May 18, 2017 at 03:16:20PM -0400, Mike Bayer wrote:



On 05/18/2017 02:37 PM, Julien Danjou wrote:

On Thu, May 18 2017, Mike Bayer wrote:


I'm not understanding this?  do you mean this?


In the long run, yes. Unfortunately, we're not happy with the way Oslo
libraries are managed, and they are too OpenStack-centric. I've tried for the
last couple of years to move things on, but it's barely possible to deprecate
anything and contribute, so I feel it's safer to start a fresh and better
alternative. Cotyledon by Mehdi is a good example of what can be
achieved.



here's cotyledon:

https://cotyledon.readthedocs.io/en/latest/


replaces oslo.service with a multiprocessing approach that doesn't use 
eventlet.  Great!  Any OpenStack service that rides on oslo.service 
would like to be able to transparently switch from eventlet to 
multiprocessing the same way they can more or less switch to mod_wsgi 
at the moment.   IMO this should be part of oslo.service itself.   


I quickly presented cotyledon a summit or two ago; we said we would wait
to see if other projects want to get rid of eventlet before adopting
such a new lib (or merging it with oslo.service).

But for now, the lib is still under the Telemetry umbrella.

Keeping the current API and supporting both is (I think) impossible.
The current API is too eventlet-centric, and some applications rely
on implicit internal contracts/behaviors/assumptions.

Dealing with concurrency/thread/signal safety in a multithreading app or
an eventlet app is already hard enough, so having one lib that deals with
both is even harder. We already have oslo.messaging dealing with
3 thread models; it is just an unending story of race conditions.

Since a new API is needed, why not write a new lib? Anyway, when you
get rid of eventlet you have so many things to change to ensure your
performance will not drop. Changing from oslo.service to cotyledon is
really easy by comparison.

Docs state: "oslo.service being impossible to fix and bringing an 
heavy dependency on eventlet, "  is there a discussion thread on that?


Not really, I just put some comments on reviews and discussed this on IRC,
since nobody except Telemetry has expressed/tried to get rid of eventlet.

For the story, we first got rid of eventlet in Telemetry, fixing a couple of
performance issues due to using threading/processes instead of
greenlets/greenthreads.

Then we fell into some weird issues due to the oslo.service internal
implementation: processes not exiting properly, signals not received,
deadlocks when signals are received, unkillable processes,
tooz/oslo.messaging heartbeats not scheduled correctly, workers not
restarted when they are dead. All of what we expect from oslo.service
was not working correctly anymore because we removed the line
'eventlet.monkey_patch()'.

For example, when oslo.service receives a signal, it can arrive on any
thread; this thread is paused, the callback is run in this thread's
context, but if the callback tries to talk to your code in this thread,
the process locks up, because your code is paused. Python
offers a tool to avoid that (signal.set_wakeup_fd), but oslo.service doesn't
use it. I have tried to run callbacks only on the main thread with
set_wakeup_fd, to avoid this kind of issue, but I failed. The whole
oslo.service code is clearly not designed to be threadsafe/signal-safe.
Well, it works for eventlet because you have only one real thread.

And this is just one example of the complicated things I tried to fix
before starting cotyledon.
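
[For reference, a minimal sketch of the signal.set_wakeup_fd technique mentioned above, using Python 3 semantics where the signal number is written as one byte to the fd; the dispatch loop and shutdown() are invented:]

import os
import signal

read_fd, write_fd = os.pipe()
os.set_blocking(write_fd, False)   # the wakeup fd must be non-blocking
signal.set_wakeup_fd(write_fd)
# a Python-level handler must still be installed; it can be a no-op
# because the real work happens on the main thread below
signal.signal(signal.SIGTERM, lambda signum, frame: None)

while True:
    data = os.read(read_fd, 1)     # only the main thread blocks here
    if data and data[0] == signal.SIGTERM:
        shutdown()                 # hypothetical cleanup, run in a known thread
        break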

I'm finding it hard to believe that only a few years ago, everyone saw 
the wisdom of not re-implementing everything in their own projects and 
using a common layer like oslo, and already that whole situation is 
becoming forgotten - not just for consistency, but also when a bug is 
found, if fixed in oslo it gets fixed for everyone.


Because the internals of cotyledon and oslo.service are so different,
having the code in oslo or not doesn't help maintenance anymore.
Cotyledon is a lib; code and bugs :) can already be shared between
projects that don't want eventlet.

An increase in the scope of oslo is essential to dealing with the 
issue of "complexity" in OpenStack. 


Increasing the scope of oslo works only if the libs have maintainers. But
most of them lack people today. Most oslo libs are in maintenance
mode. But that's another subject.

The state of OpenStack as dozens 
of individual software projects each with their own idiosyncratic 
quirks, CLIs, process and deployment models, and everything else that 
is visible to operators is ground zero for perceived operator 
complexity.


Cotyledon has been written to be OpenStack-agnostic. But I have also
written an optional module within the library to glue oslo.config and
cotyledon, mainly to mimic the oslo.config options/reload behavior of
oslo.service and keep the operator experience unchanged for OpenStack
people.
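
[A sketch of that glue, assuming the oslo_config_glue module described in cotyledon's documentation:]

import cotyledon
from cotyledon import oslo_config_glue
from oslo_config import cfg

conf = cfg.ConfigOpts()
sm = cotyledon.ServiceManager()
oslo_config_glue.setup(sm, conf)   # mimics oslo.service's options/reload handling
sm.run()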

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht



Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-18 Thread Julien Danjou
On Thu, May 18 2017, Mike Bayer wrote:

> replaces oslo.service with a multiprocessing approach that doesn't use
> eventlet.  Great!  Any OpenStack service that rides on oslo.service would like
> to be able to transparently switch from eventlet to multiprocessing the same
> way they can more or less switch to mod_wsgi at the moment.   IMO this should
> be part of oslo.service itself.   Docs state: "oslo.service being impossible
> to fix and bringing an heavy dependency on eventlet, "  is there a discussion
> thread on that?

Yes, and many reviews around that. I'll let Mehdi comment if he feels
like it. :)

> I'm finding it hard to believe that only a few years ago, everyone saw the
> wisdom of not re-implementing everything in their own projects and using a
> common layer like oslo, and already that whole situation is becoming forgotten
> - not just for consistency, but also when a bug is found, if fixed in oslo it
> gets fixed for everyone.

I guess it depends what you mean by everyone. FTR, one of the two first
projects in OpenStack, Swift, never used anything from Oslo for anything
and always refused to depend on any of its libraries.

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */




Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-18 Thread Mike Bayer



On 05/18/2017 02:37 PM, Julien Danjou wrote:

On Thu, May 18 2017, Mike Bayer wrote:


I'm not understanding this?  do you mean this?


In the long run, yes. Unfortunately, we're not happy with the way Oslo
libraries are managed, and they are too OpenStack-centric. I've tried for the
last couple of years to move things on, but it's barely possible to deprecate
anything and contribute, so I feel it's safer to start a fresh and better
alternative. Cotyledon by Mehdi is a good example of what can be
achieved.



here's cotyledon:

https://cotyledon.readthedocs.io/en/latest/


replaces oslo.service with a multiprocessing approach that doesn't use 
eventlet.  Great!  Any OpenStack service that rides on oslo.service 
would like to be able to transparently switch from eventlet to 
multiprocessing the same way they can more or less switch to mod_wsgi at 
the moment.   IMO this should be part of oslo.service itself.   Docs 
state: "oslo.service being impossible to fix and bringing an heavy 
dependency on eventlet, "  is there a discussion thread on that?
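
[To make the comparison concrete, a minimal service sketch following cotyledon's documented API — plain processes, no eventlet; the worker body is invented:]

import cotyledon

class Worker(cotyledon.Service):
    name = "worker"

    def run(self):
        pass  # long-running work in an ordinary forked process

    def terminate(self):
        pass  # graceful shutdown, invoked on SIGTERM

sm = cotyledon.ServiceManager()
sm.add(Worker, workers=4)
sm.run()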


I'm finding it hard to believe that only a few years ago, everyone saw 
the wisdom of not re-implementing everything in their own projects and 
using a common layer like oslo, and already that whole situation is 
becoming forgotten - not just for consistency, but also when a bug is 
found, if fixed in oslo it gets fixed for everyone.


An increase in the scope of oslo is essential to dealing with the issue 
of "complexity" in OpenStack.  The state of OpenStack as dozens of 
individual software projects, each with their own idiosyncratic quirks, 
CLIs, process and deployment models, and everything else that is visible 
to operators, is ground zero for perceived operator complexity.









Though to comment on your example, oslo.db is probably the most useful
Oslo library that Gnocchi depends on and that won't go away in a snap.
:-(





Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-18 Thread Julien Danjou
On Thu, May 18 2017, Mike Bayer wrote:

> I'm not understanding this?  do you mean this?

In the long run, yes. Unfortunately, we're not happy with the way Oslo
libraries are managed, and they are too OpenStack-centric. I've tried for the
last couple of years to move things on, but it's barely possible to deprecate
anything and contribute, so I feel it's safer to start a fresh and better
alternative. Cotyledon by Mehdi is a good example of what can be
achieved.

Though to comment on your example, oslo.db is probably the most useful
Oslo library that Gnocchi depends on and that won't go away in a snap.
:-(

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */




Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-18 Thread Mike Bayer



On 05/16/2017 05:42 AM, Julien Danjou wrote:

On Wed, Apr 19 2017, Julien Danjou wrote:


So Gnocchi gate is all broken (again) because it depends on "pbr" and
some new release of oslo.* depends on pbr!=2.1.0.


The same thing happened today with Babel. As far as Gnocchi is concerned,
we're going to take the easiest route and remove all our oslo
dependencies over the next months in favor of sanely maintained alternatives
at this point.


I'm not understanding this?  do you mean this?

diff --git a/gnocchi/indexer/sqlalchemy.py b/gnocchi/indexer/sqlalchemy.py
index 3497b52..0ae99fd 100644
--- a/gnocchi/indexer/sqlalchemy.py
+++ b/gnocchi/indexer/sqlalchemy.py
@@ -22,11 +22,7 @@ import uuid

 from alembic import migration
 from alembic import operations
-import oslo_db.api
-from oslo_db import exception
-from oslo_db.sqlalchemy import enginefacade
-from oslo_db.sqlalchemy import utils as oslo_db_utils
-from oslo_log import log
+from ??? import ???
 try:
     import psycopg2
 except ImportError:








Cheers,





Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-16 Thread Julien Danjou
On Tue, May 16 2017, Andreas Jaeger wrote:

> It is needed to generate the translations, but can't we move it for
> oslo-i18n into test-requirements?

I've pushed this and it seems to work, pretty sure it's safe.

  https://review.openstack.org/#/c/465014/

If we can merge this today and then release quickly after, that'd be a
great help -_-

> But os-testr does not need Babel at all - let's remove it,
> https://review.openstack.org/465023

Arf, sure!
I can only +1 though :(

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */




Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-16 Thread Andreas Jaeger
On 2017-05-16 12:10, Julien Danjou wrote:
> On Tue, May 16 2017, Andreas Jaeger wrote:
> 
>> what exactly happened with Babel?
>>
>> I see in global-requirements the following:
>> Babel>=2.3.4,!=2.4.0  # BSD
>>
>> that shouldn't cause a problem - or does it? Or what's the problem?
> 
> Damn, at the moment I pressed the `Send' button I thought "You just
> complained without including much detail, idiot". Sorry about that!

no worries.

> One of the logs that fail:
> 
>  
> http://logs.openstack.org/13/464713/2/check/gate-gnocchi-tox-py27-mysql-ceph-upgrade-from-3.1-ubuntu-xenial/db61bdf/console.html
> 
> 
> Basically oslo.policy pulls oslo.i18n, which pulls Babel!=2.4.0;
> but Babel is already pulled by os-testr, which depends on >=2.3.4.

and os-testr is not importing global-requirements:
https://review.openstack.org/#/c/454511/

> So pip does not solve that (unfortunately) and then the failure is:
> 
> 2017-05-16 05:08:43.629772 | 2017-05-16 05:08:43.503 10699 ERROR gnocchi
> ContextualVersionConflict: (Babel 2.4.0
> (/home/jenkins/workspace/gate-gnocchi-tox-py27-mysql-ceph-upgrade-from-3.1-ubuntu-xenial/upgrade/lib/python2.7/site-packages),
> Requirement.parse('Babel!=2.4.0,>=2.3.4'), set(['oslo.i18n']))
> 
> I'm pretty sure Babel should not even be in the requirements list of
> oslo.i18n since it's not a runtime dependency AFAIU.

It is needed to generate the translations, but can't we move it for
oslo-i18n into test-requirements?

But os-testr does not need Babel at all - let's remove it,
https://review.openstack.org/465023

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-16 Thread Julien Danjou
On Tue, May 16 2017, Andreas Jaeger wrote:

> what exactly happened with Babel?
>
> I see in global-requirements the following:
> Babel>=2.3.4,!=2.4.0  # BSD
>
> that shouldn't cause a problem - or does it? Or what's the problem?

Damn, at the moment I pressed the `Send' button I thought "You just
complained without including much detail, idiot". Sorry about that!

One of the logs that fail:

 
http://logs.openstack.org/13/464713/2/check/gate-gnocchi-tox-py27-mysql-ceph-upgrade-from-3.1-ubuntu-xenial/db61bdf/console.html


Basically oslo.policy pulls oslo.i18n, which pulls Babel!=2.4.0;
but Babel is already pulled by os-testr, which depends on >=2.3.4.
So pip does not solve that (unfortunately) and then the failure is:

2017-05-16 05:08:43.629772 | 2017-05-16 05:08:43.503 10699 ERROR gnocchi
ContextualVersionConflict: (Babel 2.4.0
(/home/jenkins/workspace/gate-gnocchi-tox-py27-mysql-ceph-upgrade-from-3.1-ubuntu-xenial/upgrade/lib/python2.7/site-packages),
Requirement.parse('Babel!=2.4.0,>=2.3.4'), set(['oslo.i18n']))

I'm pretty sure Babel should not even be in the requirements list of
oslo.i18n since it's not a runtime dependency AFAIU.
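
[The conflict is reproducible outside the gate; a sketch — pkg_resources is what console-script entry points consult at startup, which is where the traceback above comes from:]

import pkg_resources

# with Babel 2.4.0 installed, this raises ContextualVersionConflict,
# matching the gate failure above
pkg_resources.require("Babel!=2.4.0,>=2.3.4")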

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */




Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-16 Thread Andreas Jaeger
On 2017-05-16 11:42, Julien Danjou wrote:
> On Wed, Apr 19 2017, Julien Danjou wrote:
> 
>> So Gnocchi gate is all broken (again) because it depends on "pbr" and
>> some new release of oslo.* depends on pbr!=2.1.0.
> 
> The same thing happened today with Babel. As far as Gnocchi is concerned,
> we're going to take the easiest route and remove all our oslo
> dependencies over the next months in favor of sanely maintained alternatives
> at this point.

what exactly happened with Babel?

I see in global-requirements the following:
Babel>=2.3.4,!=2.4.0  # BSD

that shouldn't cause a problem - or does it? Or what's the problem?

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-16 Thread Julien Danjou
On Wed, Apr 19 2017, Julien Danjou wrote:

> So Gnocchi gate is all broken (again) because it depends on "pbr" and
> some new release of oslo.* depends on pbr!=2.1.0.

The same thing happened today with Babel. As far as Gnocchi is concerned,
we're going to take the easiest route and remove all our oslo
dependencies over the next months in favor of sanely maintained alternatives
at this point.

Cheers,
-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info




Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-04-22 Thread Julien Danjou
On Fri, Apr 21 2017, Doug Hellmann wrote:

> My memory of the outcome of that session was that we needed to maintain
> co-installability; that we could continue to keep an eye on the
> container space as an alternative; and that a new team of maintainers
> would take over the requirements list (which was my secret agenda for
> proposing that we stop doing it at all).

Just to come back to the original topic: co-installability is *not*
impacted by my proposal of stopping the sync. On the contrary, last week
it was very hard to install both Gnocchi and Oslo because of Oslo…

So it would *improve* co-installability, not remove it.

My 2c,

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info




Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-04-21 Thread Doug Hellmann
Excerpts from Joshua Harlow's message of 2017-04-20 22:31:19 -0700:
> Doug Hellmann wrote:
> > Excerpts from gordon chung's message of 2017-04-20 17:12:26 +:
> >> On 20/04/17 01:32 AM, Joshua Harlow wrote:
> >>> Wasn't there also some decision made in Austin (?) about how we as a
> >>> group stated something along the lines of co-installability isn't as
> >>> important as it once was (and may not even be practical or what people
> >>> care about anymore anyway)?
> >
> > I don't remember that, but I may not have been in the room at the
> > time.  In the past when we've discussed that idea, we've continued
> > to maintain that co-installability is still needed for distributors
> > who have packaging constraints that require it and for use cases
> > like single-node deployments for POCs.
> 
> Ya, looking back I think it was:
> 
> https://etherpad.openstack.org/p/newton-global-requirements
> 
> I think that was Robert who led that session, but I might be incorrect 
> there.

That was me, though Robert was definitely present and vocal.

My memory of the outcome of that session was that we needed to maintain
co-installability; that we could continue to keep an eye on the
container space as an alternative; and that a new team of maintainers
would take over the requirements list (which was my secret agenda for
proposing that we stop doing it at all).

During the session in Barcelona (I previously said Austin, but
misremembered the location) we agreed that we could stop syncing,
as long as we maintained co-installability by ensuring that everyone's
requirements lists intersect with the upper-constraints.txt list. That
work has been started.

As far as I know, we have never said we could drop co-installability as
a requirement. We have wished we could, but have not said we can.

Doug

> 
> >
> >>> With kolla becoming more popular (tripleo I think is using it, and ...)
> >>> and the containers it creates making isolated per-application
> >>> environments it makes me wonder what of global-requirements is still
> >>> valid (as a concept) and what isn't.
> >
> > We still need to review dependencies for license compatibility, to
> > minimize redundancy, and to ensure that we're not adding things to
> > the list that are not being maintained upstream. Even if we stop syncing
> > versions, official projects need to do those reviews, and having the
> > global list is a way to ensure that the reviews are done.
> >
> >>> I do remember the days of free for all requirements (or requirements
> >>> sometimes just put/stashed in devstack vs elsewhere), which I don't
> >>> really want to go back to; but if we finally all agree that
> >>> co-installability isn't what people actually do and/or care about
> >>> (anymore?) then maybe we can re-think some things?
> >> agree with all of ^... but i imagine to make progress on this, we'd have
> >> to change/drop devstack usage in gate and that will take forever and a
> >> lifetime (is that a chick flick title?) given how embedded devstack is
> >> in everything. it seems like the solution starts with devstack.
> >>
> >> cheers,
> >>
> >
> 



Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-04-21 Thread ChangBo Guo
2017-04-19 23:10 GMT+08:00 Clark Boylan :

> On Wed, Apr 19, 2017, at 05:54 AM, Julien Danjou wrote:
> > Hoy,
> >
> > So Gnocchi gate is all broken (again) because it depends on "pbr" and
> > some new release of oslo.* depends on pbr!=2.1.0.
> >
> > Neither Gnocchi nor Oslo cares about whatever bug there is in pbr 2.1.0
> > that got it banished by the requirements gods. It does not prevent it from
> > being used e.g. to install the software or get version information. But it
> > does break anything that is not in OpenStack because, well, pip installs
> > the latest pbr (2.1.0) and then oslo.* is unhappy about it.
>
> It actually breaks everything, including OpenStack. Shade and others are
> affected by this as well. The specific problem here is that PBR is a
> setup_requires which means it gets installed by easy_install before
> anything else. This means that the requirements restrictions are not
> applied to it (neither are the constraints). So you get latest PBR from
> easy_install then later when something checks the requirements
> (pkg_resources console script entrypoints?) they break because latest
> PBR isn't allowed.
>
What we discuss here is the way to avoid breaking things; I am not sure
whether adding pbr into periodic-**-with-oslo-master works, see
https://review.openstack.org/458753

> We need to stop pinning PBR and more generally stop pinning any
> setup_requires (there are a few more now since setuptools itself is
> starting to use that to list its deps rather than bundling them).
>
> > So I understand the culprit is probably pip installation scheme, and we
> > can blame him until we fix it. I'm also trying to push pbr 2.2.0 to
> > avoid the entire issue.
>
> Yes, a new release of PBR undoing the "pin" is the current sane step
> forward for fixing this particular issue. Monty also suggested that we
> gate global requirements changes on requiring changes not pin any
> setup_requires.
>
> > But for the future, could we stop updating the requirements in oslo libs
> > for no good reason? just because some random OpenStack project hit a bug
> > somewhere?
> >
> > For example, I've removed requirements updates on tooz¹ for more than a
> > year now, which did not break *anything* in the meantime, proving that
> > this process is giving more problems than solutions. Oslo libs doing that
> > automatic update introduce more pain for all consumers than anything (at
> > least not in OpenStack).
>
> You are likely largely shielded by the constraints list here which is
> derivative of the global requirements list. Basically by using
> constraints you get distilled global requirements and even without being
> part of the requirements updates you'd be shielded from breakages when
> installed via something like devstack or other deployment method using
> constraints.
>
> > So if we care about Oslo users outside OpenStack, I beg us to stop this
> > craziness. If we don't, we'll just spend time getting rid of Oslo over
> > the long term…
>
> I think we know from experience that just stopping (eg reverting to the
> situation we had before requirements and constraints) would lead to
> sadness. Installations would frequently be impossible due to some
> unresolvable error in dependency resolution. Do you have some
> alternative in mind? Perhaps we loosen the in project requirements and
> explicitly state that constraints are known to work due to testing and
> users should use constraints? That would give users control to manage
> their own constraints list too if they wish. Maybe we do this in
> libraries while continuing to be more specific in applications?
>
> >
> > My 2c,
> >
> > Cheers,
> >
> > ¹ Unless some API changed in a dep and we needed to raise the dep,
> > obviously.
> >
> > --
> > Julien Danjou
> > # Free Software hacker
> > # https://julien.danjou.info
>
> I don't have all the answers, but am fairly certain the situation we
> have today is better than the one from several years ago. It is just not
> perfect. I think we are better served by refining the current setup or
> replacing it with something better but not by reverting.
>
> Clark
>
>



-- 
ChangBo Guo(gcb)


Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-04-20 Thread Joshua Harlow

Doug Hellmann wrote:

Excerpts from gordon chung's message of 2017-04-20 17:12:26 +:

On 20/04/17 01:32 AM, Joshua Harlow wrote:

Wasn't there also some decision made in Austin (?) about how we as a
group stated something along the lines of co-installability isn't as
important as it once was (and may not even be practical or what people
care about anymore anyway)?


I don't remember that, but I may not have been in the room at the
time.  In the past when we've discussed that idea, we've continued
to maintain that co-installability is still needed for distributors
who have packaging constraints that require it and for use cases
like single-node deployments for POCs.


Ya, looking back I think it was:

https://etherpad.openstack.org/p/newton-global-requirements

I think that was Robert who led that session, but I might be incorrect 
there.





With kolla becoming more popular (tripleo I think is using it, and ...)
and the containers it creates making isolated per-application
environments it makes me wonder what of global-requirements is still
valid (as a concept) and what isn't.


We still need to review dependencies for license compatibility, to
minimize redundancy, and to ensure that we're not adding things to
the list that are not being maintained upstream. Even if we stop syncing
versions, official projects need to do those reviews, and having the
global list is a way to ensure that the reviews are done.


I do remember the days of free for all requirements (or requirements
sometimes just put/stashed in devstack vs elsewhere), which I don't
really want to go back to; but if we finally all agree that
co-installability isn't what people actually do and/or care about
(anymore?) then maybe we can re-think some things?

agree with all of ^... but i imagine to make progress on this, we'd have
to change/drop devstack usage in gate and that will take forever and a
lifetime (is that a chick flick title?) given how embedded devstack is
in everything. it seems like the solution starts with devstack.

cheers,





Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-04-20 Thread Doug Hellmann
Excerpts from gordon chung's message of 2017-04-20 17:12:26 +:
> 
> On 20/04/17 01:32 AM, Joshua Harlow wrote:
> > Wasn't there also some decision made in Austin (?) about how we as a
> > group stated something along the lines of co-installability isn't as
> > important as it once was (and may not even be practical or what people
> > care about anymore anyway)?

I don't remember that, but I may not have been in the room at the
time.  In the past when we've discussed that idea, we've continued
to maintain that co-installability is still needed for distributors
who have packaging constraints that require it and for use cases
like single-node deployments for POCs.

> > With kolla becoming more popular (tripleo I think is using it, and ...)
> > and the containers it creates making isolated per-application
> > environments it makes me wonder what of global-requirements is still
> > valid (as a concept) and what isn't.

We still need to review dependencies for license compatibility, to
minimize redundancy, and to ensure that we're not adding things to
the list that are not being maintained upstream. Even if we stop syncing
versions, official projects need to do those reviews, and having the
global list is a way to ensure that the reviews are done.

> > I do remember the days of free for all requirements (or requirements
> > sometimes just put/stashed in devstack vs elsewhere), which I don't
> > really want to go back to; but if we finally all agree that
> > co-installability isn't what people actually do and/or care about
> > (anymore?) then maybe we can re-think some things?
> 
> agree with all of ^... but i imagine to make progress on this, we'd have 
> to change/drop devstack usage in gate and that will take forever and a 
> lifetime (is that a chick flick title?) given how embedded devstack is 
> in everything. it seems like the solution starts with devstack.
> 
> cheers,
> 



Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-04-20 Thread gordon chung


On 20/04/17 01:32 AM, Joshua Harlow wrote:
> Wasn't there also some decision made in Austin (?) about how we as a
> group stated something along the lines of co-installability isn't as
> important as it once was (and may not even be practical or what people
> care about anymore anyway)?
>
> With kolla becoming more popular (tripleo I think is using it, and ...)
> and the containers it creates making isolated per-application
> environments it makes me wonder what of global-requirements is still
> valid (as a concept) and what isn't.
>
> I do remember the days of free for all requirements (or requirements
> sometimes just put/stashed in devstack vs elsewhere), which I don't
> really want to go back to; but if we finally all agree that
> co-installability isn't what people actually do and/or care about
> (anymore?) then maybe we can re-think some things?

agree with all of ^... but i imagine to make progress on this, we'd have 
to change/drop devstack usage in gate and that will take forever and a 
lifetime (is that a chick flick title?) given how embedded devstack is 
in everything. it seems like the solution starts with devstack.

cheers,

-- 
gord



Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-04-20 Thread Doug Hellmann
Excerpts from Matthew Oliver's message of 2017-04-20 14:41:38 +1000:
> We have started this work. I've been working on:
> https://review.openstack.org/#/c/444718/

Wonderful! I'm sorry I didn't realize you were working on it. Thank you!

> Which will do requirement checks, as specified in the Pike PTG etherpad for
> Tuesday morning:
> https://etherpad.openstack.org/p/relmgt-stable-requirements-ptg-pike (line
> 40+).
> 
> Once done, Tony and I were going to start testing it on the experimental
> pipeline for Swift and Nova.

That sounds like a good approach. I'll subscribe to the review and
follow along.

Doug

> 
> Regards,
> Matt
> 
> On Thu, Apr 20, 2017 at 2:34 AM, Doug Hellmann 
> wrote:
> 
> > Excerpts from Clark Boylan's message of 2017-04-19 08:10:43 -0700:
> > > On Wed, Apr 19, 2017, at 05:54 AM, Julien Danjou wrote:
> > > > Hoy,
> > > >
> > > > So Gnocchi gate is all broken (again) because it depends on "pbr" and
> > > > some new release of oslo.* depends on pbr!=2.1.0.
> > > >
> > > > Neither Gnocchi nor Oslo cares about whatever bug there is in pbr 2.1.0
> > > > that got it banished by the requirements gods. It does not prevent it from
> > > > being used e.g. to install the software or get version information. But it
> > > > does break anything that is not in OpenStack because, well, pip installs
> > > > the latest pbr (2.1.0) and then oslo.* is unhappy about it.
> > >
> > > It actually breaks everything, including OpenStack. Shade and others are
> > > affected by this as well. The specific problem here is that PBR is a
> > > setup_requires which means it gets installed by easy_install before
> > > anything else. This means that the requirements restrictions are not
> > > applied to it (neither are the constraints). So you get latest PBR from
> > > easy_install then later when something checks the requirements
> > > (pkg_resources console script entrypoints?) they break because latest
> > > PBR isn't allowed.
> > >
> > > We need to stop pinning PBR and more generally stop pinning any
> > > setup_requires (there are a few more now since setuptools itself is
> > > starting to use that to list its deps rather than bundling them).
> > >
> > > > So I understand the culprit is probably pip installation scheme, and we
> > > > can blame him until we fix it. I'm also trying to push pbr 2.2.0 to
> > > > avoid the entire issue.
> > >
> > > Yes, a new release of PBR undoing the "pin" is the current sane step
> > > forward for fixing this particular issue. Monty also suggested that we
> > > gate global requirements changes on requiring changes not pin any
> > > setup_requires.
> > >
> > > > But for the future, could we stop updating the requirements in oslo libs
> > > > for no good reason? Just because some random OpenStack project hit a bug
> > > > somewhere?
> > > >
> > > > For example, I've removed requirements updates on tooz¹ for more than a
> > > > year now, which did not break *anything* in the meantime, proving that
> > > > this process is giving more problems than solutions. Oslo libs doing that
> > > > automatic update introduce more pain for all consumers than anything (at
> > > > least not in OpenStack).
> > >
> > > You are likely largely shielded by the constraints list here which is
> > > derivative of the global requirements list. Basically by using
> > > constraints you get distilled global requirements and even without being
> > > part of the requirements updates you'd be shielded from breakages when
> > > installed via something like devstack or other deployment method using
> > > constraints.
> > >
> > > > So if we care about Oslo users outside OpenStack, I beg us to stop this
> > > craziness. If we don't, we'll just spend time getting rid of Oslo over
> > > > the long term…
> > >
> > > I think we know from experience that just stopping (eg reverting to the
> > > situation we had before requirements and constraints) would lead to
> > > sadness. Installations would frequently be impossible due to some
> > > unresolvable error in dependency resolution. Do you have some
> > > alternative in mind? Perhaps we loosen the in project requirements and
> > > explicitly state that constraints are known to work due to testing and
> > > users should use constraints? That would give users control to manage
> > > their own constraints list too if they wish. Maybe we do this in
> > > libraries while continuing to be more specific in applications?
> >
> > At the meeting in Austin, the requirements team accepted my proposal
> > to stop syncing requirements updates into projects, as described
> > in https://etherpad.openstack.org/p/ocata-requirements-notes
> >
> > We haven't been able to find anyone to work on the implementation,
> > though. It is my understanding that Tony did contact the Telemetry
> > and Swift teams, who are most interested in this area of change,
> > about devoting some resources to the tasks outlined in the proposal.
> >
> > Doug
> >

Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-04-19 Thread Joshua Harlow


Doug Hellmann wrote:

Excerpts from Clark Boylan's message of 2017-04-19 08:10:43 -0700:

On Wed, Apr 19, 2017, at 05:54 AM, Julien Danjou wrote:

Hoy,

So Gnocchi gate is all broken (again) because it depends on "pbr" and
some new release of oslo.* depends on pbr!=2.1.0.

Neither Gnocchi nor Oslo cares about whatever bug there is in pbr 2.1.0
that got it banished by the requirements gods. It does not prevent it from
being used e.g. to install the software or get version information. But it
does break anything that is not in OpenStack because, well, pip installs
the latest pbr (2.1.0) and then oslo.* is unhappy about it.

It actually breaks everything, including OpenStack. Shade and others are
affected by this as well. The specific problem here is that PBR is a
setup_requires which means it gets installed by easy_install before
anything else. This means that the requirements restrictions are not
applied to it (neither are the constraints). So you get latest PBR from
easy_install then later when something checks the requirements
(pkg_resources console script entrypoints?) they break because latest
PBR isn't allowed.

We need to stop pinning PBR and more generally stop pinning any
setup_requires (there are a few more now since setuptools itself is
starting to use that to list its deps rather than bundling them).


So I understand the culprit is probably pip's installation scheme, and we
can blame it until we fix it. I'm also trying to push pbr 2.2.0 to
avoid the entire issue.

Yes, a new release of PBR undoing the "pin" is the current sane step
forward for fixing this particular issue. Monty also suggested that we
gate global requirements changes on requiring that changes not pin any
setup_requires.


But for the future, could we stop updating the requirements in oslo libs
for no good reason? just because some random OpenStack project hit a bug
somewhere?

For example, I've removed requirements update on tooz¹ for more than a
year now, which did not break *anything* in the meantime, proving that
this process is giving more problems than solutions. Oslo libs doing that
automatic update introduce more pain for all consumers than anything else
(at least for those outside OpenStack).

You are likely largely shielded by the constraints list here which is
derivative of the global requirements list. Basically by using
constraints you get distilled global requirements and even without being
part of the requirements updates you'd be shielded from breakages when
installed via something like devstack or other deployment method using
constraints.


So if we care about Oslo users outside OpenStack, I beg us to stop this
craziness. If we don't, we'll just spend time getting rid of Oslo over
the long term…

I think we know from experience that just stopping (e.g. reverting to the
situation we had before requirements and constraints) would lead to
sadness. Installations would frequently be impossible due to some
unresolvable error in dependency resolution. Do you have some
alternative in mind? Perhaps we loosen the in-project requirements and
explicitly state that constraints are known to work due to testing and
users should use constraints? That would give users control to manage
their own constraints list too if they wish. Maybe we do this in
libraries while continuing to be more specific in applications?


At the meeting in Austin, the requirements team accepted my proposal
to stop syncing requirements updates into projects, as described
in https://etherpad.openstack.org/p/ocata-requirements-notes

We haven't been able to find anyone to work on the implementation,
though. It is my understanding that Tony did contact the Telemetry
and Swift teams, who are most interested in this area of change,
about devoting some resources to the tasks outlined in the proposal.

Doug


My 2c,

Cheers,



Wasn't there also some decision made in Austin (?) about how we as a 
group stated something along the lines of co-installability not being as 
important as it once was (and maybe not even practical or what people 
care about anymore anyway)?


With kolla becoming more popular (tripleo, I think, is using it, and ...) 
and the containers it creates providing isolated per-application 
environments, it makes me wonder how much of global-requirements is still 
valid (as a concept) and what isn't.


I do remember the days of free-for-all requirements (or requirements 
sometimes just put/stashed in devstack vs. elsewhere), which I don't 
really want to go back to; but if we finally all agree that 
co-installability isn't what people actually do and/or care about 
(anymore?), then maybe we can re-think some things?


I personally still like having the ability to know that some set of 
requirements works for a certain project X in a given release Z (as 
tested by the gate); though I am not really concerned about whether the 
same set of requirements works for a certain project Y (also in release 
Z). If this is something others agree with then perhaps we just need to 
store those requirements and the 
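
A minimal sketch of what storing such a gate-tested set could look like,
assuming the gate's pip freeze output is the stored artifact (the file
name is illustrative):

    # record the exact set that passed the gate for project X, release Z
    pip freeze > known-good-projectX-releaseZ.txt
    # later, reproduce that tested environment
    pip install -r known-good-projectX-releaseZ.txt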

Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-04-19 Thread Matthew Oliver
We have started this work. I've been working on:
https://review.openstack.org/#/c/444718/

Which will do requirement checks, as specified in the Pike PTG etherpad for
Tuesday morning:
https://etherpad.openstack.org/p/relmgt-stable-requirements-ptg-pike (line
40+).
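
As a rough illustration of the kind of check such a job can run, here is a
sketch using the packaging library; it is not the actual content of the
review, and the specifier lines are made up:

    # compare a project's requirements against the global list (a sketch)
    from packaging.requirements import Requirement

    global_reqs = {r.name: r for r in map(Requirement, [
        "pbr!=2.1.0,>=2.0.0",
    ])}
    project_reqs = [Requirement("pbr>=1.8")]

    for req in project_reqs:
        g = global_reqs.get(req.name)
        if g is None:
            print("%s: not in global requirements" % req.name)
        elif str(req.specifier) != str(g.specifier):
            print("%s: project has '%s' but global has '%s'"
                  % (req.name, req.specifier, g.specifier))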

Once done, Tony and I were going to start testing it on the experimental
pipeline for Swift and Nova.

Regards,
Matt

On Thu, Apr 20, 2017 at 2:34 AM, Doug Hellmann wrote:

> Excerpts from Clark Boylan's message of 2017-04-19 08:10:43 -0700:
> > On Wed, Apr 19, 2017, at 05:54 AM, Julien Danjou wrote:
> > > Hoy,
> > >
> > > So Gnocchi gate is all broken (again) because it depends on "pbr" and
> > > some new release of oslo.* depends on pbr!=2.1.0.
> > >
> > > Neither Gnocchi nor Oslo cares about whatever bug there is in pbr 2.1.0
> > > that got banished by the requirements Gods. It does not prevent it from
> > > being used, e.g. to install the software or get version information. But
> > > it does break anything that is not in OpenStack because, well, pip installs
> > > the latest pbr (2.1.0) and then oslo.* is unhappy about it.
> >
> > It actually breaks everything, including OpenStack. Shade and others are
> > affected by this as well. The specific problem here is that PBR is a
> > setup_requires which means it gets installed by easy_install before
> > anything else. This means that the requirements restrictions are not
> > applied to it (neither are the constraints). So you get latest PBR from
> > easy_install then later when something checks the requirements
> > (pkg_resources console script entrypoints?) they break because latest
> > PBR isn't allowed.
> >
> > We need to stop pinning PBR and more generally stop pinning any
> > setup_requires (there are a few more now since setuptools itself is
> > starting to use that to list its deps rather than bundling them).
> >
> > > So I understand the culprit is probably pip's installation scheme, and we
> > > can blame it until we fix it. I'm also trying to push pbr 2.2.0 to
> > > avoid the entire issue.
> >
> > Yes, a new release of PBR undoing the "pin" is the current sane step
> > forward for fixing this particular issue. Monty also suggested that we
> > gate global requirements changes on requiring that changes not pin any
> > setup_requires.
> >
> > > But for the future, could we stop updating the requirements in oslo
> > > libs for no good reason? just because some random OpenStack project
> > > hit a bug somewhere?
> > >
> > > For example, I've removed requirements update on tooz¹ for more than a
> > > year now, which did not break *anything* in the meantime, proving that
> > > this process is giving more problems than solutions. Oslo libs doing
> > > that automatic update introduce more pain for all consumers than
> > > anything else (at least for those outside OpenStack).
> >
> > You are likely largely shielded by the constraints list here which is
> > derivative of the global requirements list. Basically by using
> > constraints you get distilled global requirements and even without being
> > part of the requirements updates you'd be shielded from breakages when
> > installed via something like devstack or other deployment method using
> > constraints.
> >
> > > So if we care about Oslo users outside OpenStack, I beg us to stop this
> > > craziness. If we don't, we'll just spend time getting rid of Oslo over
> > > the long term…
> >
> > I think we know from experience that just stopping (e.g. reverting to the
> > situation we had before requirements and constraints) would lead to
> > sadness. Installations would frequently be impossible due to some
> > unresolvable error in dependency resolution. Do you have some
> > alternative in mind? Perhaps we loosen the in-project requirements and
> > explicitly state that constraints are known to work due to testing and
> > users should use constraints? That would give users control to manage
> > their own constraints list too if they wish. Maybe we do this in
> > libraries while continuing to be more specific in applications?
>
> At the meeting in Austin, the requirements team accepted my proposal
> to stop syncing requirements updates into projects, as described
> in https://etherpad.openstack.org/p/ocata-requirements-notes
>
> We haven't been able to find anyone to work on the implementation,
> though. It is my understanding that Tony did contact the Telemetry
> and Swift teams, who are most interested in this area of change,
> about devoting some resources to the tasks outlined in the proposal.
>
> Doug
>
> >
> > >
> > > My 2c,
> > >
> > > Cheers,
> > >
> > > ¹ Unless some API changed in a dep and we needed to raise the dep,
> > > obviously.
> > >
> > > --
> > > Julien Danjou
> > > # Free Software hacker
> > > # https://julien.danjou.info
> >
> > I don't have all the answers, but am fairly certain the situation we
> > have today is better than the one from several years ago. It is just not
> > perfect. I think we are better served by refining 

Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-04-19 Thread Doug Hellmann
Excerpts from Clark Boylan's message of 2017-04-19 08:10:43 -0700:
> On Wed, Apr 19, 2017, at 05:54 AM, Julien Danjou wrote:
> > Hoy,
> > 
> > So Gnocchi gate is all broken (again) because it depends on "pbr" and
> > some new release of oslo.* depends on pbr!=2.1.0.
> > 
> > Neither Gnocchi nor Oslo cares about whatever bug there is in pbr 2.1.0
> > that got banished by the requirements Gods. It does not prevent it from
> > being used, e.g. to install the software or get version information. But
> > it does break anything that is not in OpenStack because, well, pip installs
> > the latest pbr (2.1.0) and then oslo.* is unhappy about it.
> 
> It actually breaks everything, including OpenStack. Shade and others are
> affected by this as well. The specific problem here is that PBR is a
> setup_requires which means it gets installed by easy_install before
> anything else. This means that the requirements restrictions are not
> applied to it (neither are the constraints). So you get latest PBR from
> easy_install then later when something checks the requirements
> (pkg_resources console script entrypoints?) they break because latest
> PBR isn't allowed.
> 
> We need to stop pinning PBR and more generally stop pinning any
> setup_requires (there are a few more now since setuptools itself is
> starting to use that to list its deps rather than bundling them).
> 
> > So I understand the culprit is probably pip's installation scheme, and we
> > can blame it until we fix it. I'm also trying to push pbr 2.2.0 to
> > avoid the entire issue.
> 
> Yes, a new release of PBR undoing the "pin" is the current sane step
> forward for fixing this particular issue. Monty also suggested that we
> gate global requirements changes on requiring that changes not pin any
> setup_requires.
> 
> > But for the future, could we stop updating the requirements in oslo libs
> > for no good reason? just because some random OpenStack project hit a bug
> > somewhere?
> > 
> > For example, I've removed requirements update on tooz¹ for more than a
> > year now, which did not break *anything* in the meantime, proving that
> > this process is giving more problems than solutions. Oslo libs doing that
> > automatic update introduce more pain for all consumers than anything else
> > (at least for those outside OpenStack).
> 
> You are likely largely shielded by the constraints list here which is
> derivative of the global requirements list. Basically by using
> constraints you get distilled global requirements and even without being
> part of the requirements updates you'd be shielded from breakages when
> installed via something like devstack or other deployment method using
> constraints.
> 
> > So if we care about Oslo users outside OpenStack, I beg us to stop this
> > craziness. If we don't, we'll just spend time getting rid of Oslo over
> > the long term…
> 
> I think we know from experience that just stopping (e.g. reverting to the
> situation we had before requirements and constraints) would lead to
> sadness. Installations would frequently be impossible due to some
> unresolvable error in dependency resolution. Do you have some
> alternative in mind? Perhaps we loosen the in-project requirements and
> explicitly state that constraints are known to work due to testing and
> users should use constraints? That would give users control to manage
> their own constraints list too if they wish. Maybe we do this in
> libraries while continuing to be more specific in applications?

At the meeting in Austin, the requirements team accepted my proposal
to stop syncing requirements updates into projects, as described
in https://etherpad.openstack.org/p/ocata-requirements-notes

We haven't been able to find anyone to work on the implementation,
though. It is my understanding that Tony did contact the Telemetry
and Swift teams, who are most interested in this area of change,
about devoting some resources to the tasks outlined in the proposal.

Doug

> 
> > 
> > My 2c,
> > 
> > Cheers,
> > 
> > ¹ Unless some API changed in a dep and we needed to raise the dep,
> > obviously.
> > 
> > -- 
> > Julien Danjou
> > # Free Software hacker
> > # https://julien.danjou.info
> 
> I don't have all the answers, but am fairly certain the situation we
> have today is better than the one from several years ago. It is just not
> perfect. I think we are better served by refining the current setup or
> replacing it with something better but not by reverting.
> 
> Clark
> 



Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-04-19 Thread Julien Danjou
On Wed, Apr 19 2017, Clark Boylan wrote:

> I think we know from experience that just stopping (e.g. reverting to the
> situation we had before requirements and constraints) would lead to
> sadness. Installations would frequently be impossible due to some
> unresolvable error in dependency resolution. Do you have some
> alternative in mind? Perhaps we loosen the in-project requirements and
> explicitly state that constraints are known to work due to testing and
> users should use constraints? That would give users control to manage
> their own constraints list too if they wish. Maybe we do this in
> libraries while continuing to be more specific in applications?

Most of the problem that requirements is trying to solve is related to
upper-constraints blocking new releases. And these upper constraints are
used in most jobs, preventing most failures that are seen in the gates. It
would have "covered" the pbr issue.

What I want to stop here is the automatic push of blacklisting/capping
of stuff to *everything* in OpenStack as soon as one project has a
problem with something.
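
Concretely, that automatic push lands in every consuming project's
requirements.txt as a change of this shape (the specifier lines are
illustrative, not the actual global-requirements entries):

    -pbr>=1.8
    +pbr!=2.1.0,>=2.0.0
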
> I don't have all the answers, but am fairly certain the situation we
> have today is better than the one from several years ago. It is just not
> perfect. I think we are better served by refining the current setup or
> replacing it with something better but not by reverting.

Agreed, I'm not suggesting we revert everything, just the automatic push
of random requirements limits and their binding to Oslo. And to other
projects if you like: we haven't done it in Telemetry for a good year now,
and again we saw 0 breakage due to that change. Just more ease in
installing stuff.

-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info




Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-04-19 Thread Clark Boylan
On Wed, Apr 19, 2017, at 05:54 AM, Julien Danjou wrote:
> Hoy,
> 
> So Gnocchi gate is all broken (again) because it depends on "pbr" and
> some new release of oslo.* depends on pbr!=2.1.0.
> 
> Neither Gnocchi nor Oslo cares about whatever bug there is in pbr 2.1.0
> that got banished by the requirements Gods. It does not prevent it from
> being used, e.g. to install the software or get version information. But it
> does break anything that is not in OpenStack because, well, pip installs
> the latest pbr (2.1.0) and then oslo.* is unhappy about it.

It actually breaks everything, including OpenStack. Shade and others are
affected by this as well. The specific problem here is that PBR is a
setup_requires which means it gets installed by easy_install before
anything else. This means that the requirements restrictions are not
applied to it (neither are the constraints). So you get latest PBR from
easy_install then later when something checks the requirements
(pkg_resources console script entrypoints?) they break because latest
PBR isn't allowed.
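
To make the mechanics concrete, here is a minimal sketch of the pattern
in question (a generic pbr-based setup.py, not any specific project); the
pbr entry in setup_requires is resolved by easy_install before pip's
requirements or constraints machinery ever runs:

    # setup.py for a pbr-based project (a minimal sketch)
    import setuptools

    setuptools.setup(
        # resolved by easy_install *before* pip applies requirements or
        # constraints, so the newest pbr on PyPI gets installed no matter
        # what requirements.txt or upper-constraints.txt says about pbr
        setup_requires=['pbr'],
        pbr=True,
    )

The breakage then surfaces later, when pkg_resources re-validates the
installed requirements, e.g. at console-script startup:

    # raises pkg_resources.VersionConflict if pbr 2.1.0 is installed but
    # some installed distribution declares a requirement like pbr!=2.1.0
    # ('some-oslo-consumer' is an illustrative distribution name)
    import pkg_resources
    pkg_resources.require('some-oslo-consumer')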

We need to stop pinning PBR and more generally stop pinning any
setup_requires (there are a few more now since setuptools itself is
starting to use that to list its deps rather than bundling them).

> So I understand the culprit is probably pip's installation scheme, and we
> can blame it until we fix it. I'm also trying to push pbr 2.2.0 to
> avoid the entire issue.

Yes, a new release of PBR undoing the "pin" is the current sane step
forward for fixing this particular issue. Monty also suggested that we
gate global requirements changes on requiring that changes not pin any
setup_requires.

> But for the future, could we stop updating the requirements in oslo libs
> for no good reason? just because some random OpenStack project hit a bug
> somewhere?
> 
> For example, I've removed requirements update on tooz¹ for more than a
> year now, which did not break *anything* in the meantime, proving that
> this process is giving more problems than solutions. Oslo libs doing that
> automatic update introduce more pain for all consumers than anything else
> (at least for those outside OpenStack).

You are likely largely shielded by the constraints list here which is
derivative of the global requirements list. Basically by using
constraints you get distilled global requirements and even without being
part of the requirements updates you'd be shielded from breakages when
installed via something like devstack or other deployment method using
constraints.
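
For readers outside the gate, the shielding works roughly like this:
project requirements stay loose, and the gate-tested pins are applied
only at install time via pip's -c option (the file contents here are
illustrative):

    # requirements.txt (per project, intentionally loose)
    pbr>=1.8

    # upper-constraints.txt (exact, gate-tested pins)
    pbr===2.0.0

    # install with the tested pins applied
    pip install -r requirements.txt -c upper-constraints.txt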

> So if we care about Oslo users outside OpenStack, I beg us to stop this
> craziness. If we don't, we'll just spend time getting rid of Oslo over
> the long term…

I think we know from experience that just stopping (e.g. reverting to the
situation we had before requirements and constraints) would lead to
sadness. Installations would frequently be impossible due to some
unresolvable error in dependency resolution. Do you have some
alternative in mind? Perhaps we loosen the in-project requirements and
explicitly state that constraints are known to work due to testing and
users should use constraints? That would give users control to manage
their own constraints list too if they wish. Maybe we do this in
libraries while continuing to be more specific in applications?

> 
> My 2c,
> 
> Cheers,
> 
> ¹ Unless some API changed in a dep and we needed to raise the dep,
> obviously.
> 
> -- 
> Julien Danjou
> # Free Software hacker
> # https://julien.danjou.info

I don't have all the answers, but am fairly certain the situation we
have today is better than the one from several years ago. It is just not
perfect. I think we are better served by refining the current setup or
replacing it with something better but not by reverting.

Clark
