OK, here's my attempt. It works pretty much at the pool level and isn't too 
intrusive; it leaves all the existing reconnect machinery in place, which should 
just work as-is, and it keeps "creator()" in place, which also does some 
exception handling. I'm hoping you can test it out and grok the general idea in 
case it needs adjustment; if we can write full tests, it could be part of the 
distribution too, maybe as an ext.




On Nov 7, 2011, at 3:32 AM, Tarek Ziadé wrote:

> Here's my v3, with a mechanism to refresh the shared pool when an
> engine rebuilds it via dispose
> 
>  http://tarek.pastebin.mozilla.org/1376367
> 
> It also has a few thread locks to try to make it thread-safe.
> 
> I've written some tests with hundreds of threads and it seems to work
> fine, but I don't really know if there are other places where I should
> do something.
> 
> 
> On Fri, Nov 4, 2011 at 4:45 PM, Michael Bayer <mike...@zzzcomputing.com> 
> wrote:
>> 
>> On Nov 4, 2011, at 3:41 AM, Tarek Ziadé wrote:
>> 
>>> On Fri, Nov 4, 2011 at 10:33 AM, Tarek Ziadé <ziade.ta...@gmail.com> wrote:
>>> ...
>>>> Will try and come back ;)
>>> 
>>> Hello, I am back  \o/
>>> 
>>> Here's my v2: http://tarek.pastebin.mozilla.org/1373520
>>> 
>>> This time I have created custom Strategy and Engine classes, and the
>>> get_engine() function takes care of instantiating a pool for the
>>> server if needed, and an engine instance per sqlurl, using that pool.
>>> 
>>> I have used Engine.contextual_connect for the schema switching, and
>>> Engine.execute carries our custom execution logic.
>>> 
>>> I guess this is much cleaner since anyone can use create_engine() with
>>> my new strategy.
>>> 
>>> The only assumption I am making is that there's a shared pool on the same 
>>> host.
>>> 
>>> 
>>> Does this look better?
>>> 
>>> Thanks for your help so far
>> 
>> OK, this is clearer to me. So you're really trying to get any number of 
>> create_engine() calls to all use the same pool, keyed on hostname.
>> 
>> If I have time today I might try paring this down a lot more. Also, I'm not 
>> sure why you need the "disconnect check" stuff in execute(); SQLA has that 
>> built in. But it does raise an important point: the mechanism by which we 
>> dispose the pool when a disconnect is detected probably needs to be different 
>> here, since disposal is per-engine, and the disposing engine just gives 
>> itself a new pool. The other engines would still point to the old, disposed 
>> pool, probably causing some kind of pileup.
>> 
>> 
>> 
>> 
>> 
>> 
>>> 
>>> Cheers
>>> Tarek
>>> 
>>> --
>>> Tarek Ziadé | http://ziade.org
>>> 
>> 
>> 
> 
> 
> 
> -- 
> Tarek Ziadé | http://ziade.org
> 

from sqlalchemy import event, pool, create_engine
from sqlalchemy.engine import url
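
# The sketch: let any number of create_engine() calls share one connection
# pool per host, by giving each engine a ProxyPool that forwards its
# checkouts and checkins to the pool of a single "base" engine.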

class ProxyRecord(pool._ConnectionRecord):
    # a connection record whose lifecycle is delegated to a record owned
    # by the shared, underlying pool
    def __init__(self, pool):
        super(ProxyRecord, self).__init__(pool)
        # self.connection is a connection checked out from the proxied
        # pool (see ProxyPool.__creator); keep a handle on its record so
        # close/invalidate are forwarded to the real pool
        self.proxied = self.connection._connection_record

    def close(self):
        self.proxied.close()

    def invalidate(self, e=None):
        self.proxied.invalidate(e=e)

class ProxyPool(pool.Pool):
    # a pool facade placed in front of a shared pool: checkouts and
    # checkins are forwarded to the shared pool, dispose() disposes the
    # shared pool, and recreate() returns self so that an engine which
    # disposes its pool doesn't split off onto a private one
    def __init__(self, proxying):
        self.proxying = proxying
        pool.Pool.__init__(self, self.__creator)

    def __creator(self):
        # each "raw connection" of this pool is a checkout from the
        # shared pool
        return self.proxying.connect()

    def status(self):
        return "ProxyPool(%s)" % self.proxying.status()

    def _do_return_conn(self, conn):
        # checkin: hand the proxied record back to the shared pool
        self.proxying._do_return_conn(conn.proxied)

    def _do_get(self):
        # checkout: wrap a fresh checkout from the shared pool
        return ProxyRecord(self)

    def dispose(self):
        self.proxying.dispose()

    def recreate(self):
        return self

def singleton_engine(key_from_url, new_engine):
    # one "base" engine (and hence one real pool) per key, typically per host
    _reg = {}

    def create_engine_singleton(name_or_url, **kw):
        u = url.make_url(name_or_url)
        key = key_from_url(u)
        if key in _reg:
            eng = _reg[key]
        else:
            # the first engine seen for this key owns the real pool
            eng = _reg[key] = create_engine(u, **kw)

        # the engine handed back to the caller gets a ProxyPool wrapping
        # the shared engine's pool
        kw['pool'] = ProxyPool(eng.pool)

        eng = create_engine(u, **kw)
        new_engine(eng)
        return eng

    return create_engine_singleton
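
# Note (not part of the original sketch): the lookup-or-create step above
# isn't guarded, so concurrent create_engine_singleton() calls for the same
# key could race and build two "base" engines.  If engines may be created
# from multiple threads, one way to close the gap would be a lock created
# alongside _reg, e.g.:
#
#   import threading
#   ...
#   _reg = {}
#   _reg_lock = threading.Lock()
#   ...
#   with _reg_lock:
#       if key in _reg:
#           eng = _reg[key]
#       else:
#           eng = _reg[key] = create_engine(u, **kw)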

if __name__ == '__main__':
    def key_from_url(u):
        # engines whose URLs share (driver, host, user) will share one pool
        return u.drivername, u.host, u.username

    def new_engine(engine):
        # per-engine hook: on each checkout from the shared pool, switch
        # to this engine's schema
        dbname = engine.url.database

        @event.listens_for(engine, "checkout")
        def checkout(dbapi_con, rec, proxy):
            print "switching schema to %s!" % dbname
            # cursor = dbapi_con.cursor()
            # cursor.execute("use %s" % dbname)
            # cursor.close()

    engine_registry = singleton_engine(key_from_url, new_engine)

    e1 = engine_registry("mysql://scott:tiger@localhost/test")
    e2 = engine_registry("mysql://scott:tiger@localhost/test2")

    @event.listens_for(e2, "first_connect")
    def e1_connections(conn, rec):
        print "e2 first connect !"

    print e1.execute("select 1").fetchall()
    print e2.execute("select 1").fetchall()

    @event.listens_for(e2, "checkout")
    def e2_connections(conn, rec, proxy):
        print "e2 checkout !"

    @event.listens_for(e2, "checkin")
    def e2_connections(conn, rec):
        print "e2 checkin !"

    print e1.execute("select 1").fetchall()
    print e2.execute("select 1").fetchall()

    # test that the dialect got first connect
    print e1.dialect.server_version_info

    # invalidation is forwarded to the shared pool's record via
    # ProxyRecord.invalidate(); the next connect gets a fresh connection
    c = e1.connect()
    c.invalidate()

    e1.connect()
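
    # (not in the original) a rough check of the shared-dispose behavior
    # discussed above, assuming Engine.dispose() disposes its pool and then
    # swaps in pool.recreate(): since ProxyPool.recreate() returns self,
    # e1 and e2 keep sharing the same, now-refreshed underlying pool instead
    # of one engine splitting off onto a private pool.
    e2.dispose()
    print e1.execute("select 1").fetchall()
    print e2.execute("select 1").fetchall()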