On Feb 7, 2014, at 8:21 AM, Jesse Noller <jesse.nol...@rackspace.com> wrote:

> 
> On Feb 7, 2014, at 1:51 AM, Chris Behrens <cbehr...@codestud.com> wrote:
> 
>> 
>> On Feb 6, 2014, at 11:07 PM, Joshua Harlow <harlo...@yahoo-inc.com> wrote:
>> 
>>> +1
>>> 
>>> To give an example of why eventlet's implicit monkey-patch-the-world 
>>> approach isn't especially great (although it's what we are currently using 
>>> throughout openstack):
>>> 
>>> The way I think about it is to consider which libraries a single piece of 
>>> code calls, and how hard it is to predict whether that code will trigger an 
>>> implicit switch (conceptually similar to a context switch).
>> 
>> Conversely, switching to asyncio means that every single module call that 
>> would have blocked before monkey patching… will now block. Which is worse? :)
>> 
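To make the two positions above concrete, here is a minimal sketch (illustrative 
only; the fetch helper and the URLs are my own assumptions, not anything proposed 
in this thread). Under eventlet the blocking call yields to the hub implicitly, 
which is exactly why switch points are hard to predict; under asyncio the very 
same call would block the whole event loop unless it is explicitly handed to an 
executor.

    import eventlet
    eventlet.monkey_patch()  # socket, time, select, ... are now cooperative

    from urllib.request import urlopen

    def fetch(url):
        # Under eventlet this blocking call implicitly yields to the hub; the
        # switch point is invisible at the call site (Joshua's complaint).
        return urlopen(url).read()

    pile = eventlet.GreenPile()
    for url in ("http://example.com/", "http://example.org/"):
        pile.spawn(fetch, url)
    print([len(body) for body in pile])

    # Under asyncio there is no monkey patching, so calling fetch() inside a
    # coroutine blocks the entire loop (Chris's point) unless you write:
    #     body = await loop.run_in_executor(None, fetch, url)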
> 
> Are we perhaps thinking about this in the wrong way?
> 
> Looking at the services that make heavy use of eventlet/etc., many of 
> them (to me) would benefit more from the typical task-queue pattern most SOA 
> systems use. At that point your producers and consumers would share a common 
> abstracted back end - a simple:
> 
> class Reader(object):
>     def put(self, item):
>         raise NotImplementedError
>     def get(self):
>         raise NotImplementedError
> 
> abstraction means that the Reader class could be extended to encompass the 
> various models out there - local threads + queue.Queue, asyncio, eventlet, 
> etc. This forces everyone into a message-passing/"shared nothing" 
> architecture where, even at the deployment level, a given individual could 
> swap in say, twisted, or tornado, or…
> 
> It seems that baking concurrency models into the individual clients / 
> services adds opinionated choices that may not scale, or may not fit the 
> needs of a large-scale deployment. This is one of the things I’ve noticed 
> looking at the client tools - don't dictate a concurrency backend; treat it 
> as producer/consumer/message passing and you end up with something that can 
> potentially scale out a lot more.
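As a concrete reading of the Reader abstraction sketched above, here is a 
minimal example (the subclass names, the make_reader helper, and the backend 
choices are my own illustrative assumptions): producers and consumers only ever 
see put()/get(), and the deployment decides what actually sits underneath.

    import queue

    class Reader(object):
        """Backend-agnostic put/get abstraction."""
        def put(self, item):
            raise NotImplementedError
        def get(self):
            raise NotImplementedError

    class ThreadQueueReader(Reader):
        """Backed by the stdlib queue.Queue, for plain threads."""
        def __init__(self):
            self._q = queue.Queue()
        def put(self, item):
            self._q.put(item)
        def get(self):
            return self._q.get()

    class EventletReader(Reader):
        """Backed by eventlet's green queue; only imported if selected."""
        def __init__(self):
            import eventlet.queue
            self._q = eventlet.queue.Queue()
        def put(self, item):
            self._q.put(item)
        def get(self):
            return self._q.get()

    def make_reader(backend="threads"):
        # Deployment-level choice; services never import the backend directly.
        return {"threads": ThreadQueueReader,
                "eventlet": EventletReader}[backend]()

Swapping in a twisted- or tornado-backed Reader would then be a deployment 
decision rather than something baked into each service.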

I like this idea in principle, but I think it suffers from a bit of abstraction 
idealism. Switching out backends for things is never as easy as it sounds 
above. I speak from a lot of personal experience here. When nova was much 
smaller we changed the concurrency backend multiple times. Every time we had a 
good “reason” and every time we were trading one set of headaches for another 
while investing a huge amount of developer time in the switchover.

To be clear, since many people weren’t around in ye olde days, nova started 
using tornado. We exchanged tornado for twisted, and finally moved to eventlet. 
People have suggested gevent and threads in the past, and now asyncio. There 
are advantages to all of these other solutions, but a change at this point is 
going to be a huge pain, even the abstracting one you mention above.

If we are going to invest the time in making another change, I think we need a 
REALLY good reason to do so. Some reasons that might be good enough to be worth 
considering:

a) the cost of porting the library (eventlet) to a maintained python version 
(3.X at some point) is greater than the cost of replacing it with something else
b) the performance of the other option is an order of magnitude better. I’m 
really talking 10X here.

Anything less than those and I think we just end up making changes for the sake 
of it and paying a large price.

Vish

> 
> jesse
> 
> 