[sqlalchemy] Re: Possible bug in orm/state.py
On May 23, 7:56 pm, Michael Bayer mike...@zzzcomputing.com wrote:

> On May 23, 2009, at 1:47 PM, Bob Farrell wrote:
>
> > Hi, I'm using scoped_session(sessionmaker()) to create my sessions. If I
> > hammer a request (using Pylons) by repeatedly clicking on a link that
> > uses the ORM fairly extensively, it seems that another thread gets
> > involved with SQLAlchemy's internals and pulls the rug out from under
> > its feet.
>
> That means you are sharing a mapped instance between threads. A mapped
> instance, when associated with a Session (i.e. not detached), should be
> considered an extension of that Session's internal state. The Session
> isn't threadsafe, so you can't share a persistent instance between
> threads. If you are using some kind of persistent/cached instances, make
> sure they are detached from their original Session first, or merge() the
> instances into the Session local to the request before using them (you
> can pass the dont_load=True flag to merge() if you want to cut down on
> potentially needless SELECT statements).

Okay, thanks - I'll track down the problem with that in mind.

> > And here's how I'm dealing with creating the sessions:
> >
> >     threadlocal = threading.local()
> >     Session = scoped_session(sessionmaker(autocommit=True))
> >     Session.metadata = None
> >
> >     def setup_db():
> >         if hasattr(threadlocal, 'engine'):
> >             return
> >         uri = config['main.engine.dburi']
> >         threadlocal.engine = create_engine(uri)
> >         Session.configure(bind=threadlocal.engine)
> >         if Session.metadata is None:
> >             Session.metadata = MetaData(threadlocal.engine)
> >             model.initialise(Session.metadata)
>
> The threading.local() is unnecessary... unless you are planning for the
> same application to be run with different .ini files in each thread,
> which would be extremely unusual. scoped_session() already handles the
> thread-local part for you as far as Sessions are concerned, and Engine
> objects are threadsafe.

Ah, so engine = create_engine(...) will provide separate connections as
needed, rather than just a single connection?
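[Editor's note: the merge()-into-the-request-local-Session advice above can be
sketched as follows. This is a minimal illustration against a throwaway SQLite
model; the User class and the two-session setup are invented for the example,
and in later SQLAlchemy releases the dont_load=True flag mentioned above was
renamed to load=False.]

```python
# Sketch: attach a detached/cached instance to a request-local Session
# via merge(). merge() returns a *copy* bound to the target session;
# the original stays detached and can be shared safely.
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import sessionmaker, declarative_base

Base = declarative_base()

class User(Base):  # hypothetical model for the example
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

# Load an instance in one session, then detach it, as a cache might.
s1 = Session()
s1.add(User(name="bob"))
s1.commit()
cached = s1.query(User).first()
s1.close()  # `cached` is now detached

# In the "request-local" session, merge() yields an attached copy.
s2 = Session()
local = s2.merge(cached)
assert local in s2 and cached not in s2
```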
The reason I wrote this code was that the original code we had was causing
big problems: we were repeatedly getting "QueuePool limit of size 30
overflow 10 reached, connection timed out, timeout 30" errors, so what I
was hoping to do here was limit the connections to one per thread.
Thinking about it, though, I suppose this code doesn't achieve that at
all, since any code can create a new connection with engine.connect().
I'm going to have to track down which bits of code in SQLAlchemy
implicitly create new connections so I can figure out where we're not
closing them - hopefully autocommit=True will remedy this somewhat.
Anyway, thanks a lot for the info. :-)

>     Session = scoped_session(sessionmaker(autocommit=True))
>     metadata = None
>
>     def setup_db():
>         global metadata
>         if metadata is not None:
>             return
>         uri = config['main.engine.dburi']
>         engine = create_engine(uri)
>         Session.configure(bind=engine)
>         metadata = MetaData(engine)
>         model.initialise(metadata)
>
> If OTOH you had some reason for the threadlocal engines, then you don't
> want to use Session.configure, which configures the whole
> scoped_session(). You'd want to say Session(bind=my_threadlocal_engine).

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups
sqlalchemy group. To post to this group, send email to
sqlalchemy@googlegroups.com To unsubscribe from this group, send email to
sqlalchemy+unsubscr...@googlegroups.com For more options, visit this group
at http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---
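[Editor's note: the "QueuePool limit ... reached, connection timed out" error
discussed above is what a bounded pool reports when every connection is checked
out and none are returned before the timeout. The toy model below illustrates
the mechanism; it is not SQLAlchemy's actual QueuePool implementation, and all
names in it are invented.]

```python
# Toy bounded connection pool: checkouts that are never released
# eventually exhaust the pool, and the next checkout times out.
import queue

class ToyPool:
    def __init__(self, size, timeout):
        self._q = queue.Queue()
        self.timeout = timeout
        for n in range(size):
            self._q.put("conn-%d" % n)  # stand-ins for real DB connections

    def connect(self):
        try:
            return self._q.get(timeout=self.timeout)
        except queue.Empty:
            raise TimeoutError("pool limit reached, connection timed out")

    def release(self, conn):
        self._q.put(conn)

pool = ToyPool(size=2, timeout=0.1)
a = pool.connect()
b = pool.connect()        # pool now exhausted; nothing has been released
try:
    pool.connect()        # a third checkout times out, like the error above
    timed_out = False
except TimeoutError:
    timed_out = True

pool.release(a)           # returning a connection makes checkout work again
c = pool.connect()
```

The real fix, as the thread concludes, is to make sure every implicit or
explicit checkout is returned (closed) by the end of the request, rather than
to enlarge the pool.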
[sqlalchemy] Re: Possible bug in orm/state.py
On May 26, 3:31 pm, Michael Bayer mike...@zzzcomputing.com wrote:

> Bob Farrell wrote:
>
> > Hi Michael, I found this (your writing) in a thread from quite a while
> > back:
> >
> > > A common pattern which can cause what you see there is if your
> > > templates are accessing lazy loaders on objects, but the Session
> > > which loaded those objects had been clear()'ed or otherwise closed
> > > before the template was allowed to render. You should try to keep
> > > the same Session open throughout the full request lifecycle, and you
> > > should avoid removing any objects from the Session which you wish to
> > > continue using.
> >
> > I came across this because I've started getting the same problem and
> > was somewhat relieved to see that the solution is a well-known one
> > (given that rendering the template raises this error). I have code
> > like this in my controller methods:
> >
> >     [do stuff involving the session]
> >     return self.render_response('settings.mako', t_pars)
> >
> > and it's the return where the error gets raised. Can you tell me a
> > good approach for making the session stay alive here? I'm assuming the
> > problem is that self.render_response returns something lazy, so by the
> > time the template actually renders, the objects relating to the
> > session have gone out of scope.
>
> The whole request is wrapped within a block that handles Session
> lifecycle, so the Session is still just fine when render_response is
> being called. In Pylons, an appropriate base.py is provided for you
> which does this. A description is at
> http://www.sqlalchemy.org/docs/05/session.html#lifespan-of-a-contextual-session
>
> The Session has a behavior whereby after a commit(), it expires its
> contents. This is so that it reloads everything upon access, picking up
> whatever any concurrent transactions have done. If your pattern is
> something like this:
>
>     Session.commit()
>     return self.render_response(...)
>
> your render phase may issue a lot of SQL to reload things (though
> nothing should break).
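[Editor's note: the expire-on-commit behavior described above can be observed
directly. The sketch below uses an invented Widget model and an in-memory
SQLite database; sqlalchemy.inspect() exposes whether an instance's state has
been expired.]

```python
# Demonstration: commit() expires loaded instances by default, so the
# next attribute access re-SELECTs; expire_on_commit=False keeps state.
from sqlalchemy import create_engine, inspect, Column, Integer, String
from sqlalchemy.orm import sessionmaker, declarative_base

Base = declarative_base()

class Widget(Base):  # hypothetical model for the example
    __tablename__ = "widgets"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

# Default sessionmaker: commit() expires every instance it holds.
DefaultSession = sessionmaker(bind=engine)
s = DefaultSession()
w = Widget(name="gadget")
s.add(w)
s.commit()
expired_by_default = inspect(w).expired  # attributes will lazily reload

# With expire_on_commit=False, instances keep their loaded state, so a
# later render phase emits no reload SQL.
NoExpireSession = sessionmaker(bind=engine, expire_on_commit=False)
s2 = NoExpireSession()
w2 = Widget(name="gizmo")
s2.add(w2)
s2.commit()
expired_after = inspect(w2).expired
```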
> Two ways to work around this are to set expire_on_commit=False in your
> sessionmaker(), or to wrap your whole controller method in a commit,
> such as:
>
>     @commits_transaction
>     def my_method(self):
>         # do stuff
>         return self.render_response(...)
>
> where commits_transaction looks like:
>
>     @decorator
>     def commits_transaction(fn, self, *args, **kw):
>         try:
>             ret = fn(self, *args, **kw)
>             Session.commit()
>             return ret
>         except:
>             Session.rollback()
>             raise
>
> The rollback() may not be needed if your overall handler calls
> rollback() in all cases.

Great, thanks very much - it looks like gutting our horrible
connection-handling code and moving to scoped_session is really making
things better, and this should (hopefully) be the last problem needing
ironing out. Shouldn't be a problem given what you mentioned above. You
have, as ever, been a tremendous help. :-)
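[Editor's note: the commits_transaction snippet above relies on the
third-party `decorator` package. A stdlib-only equivalent using
functools.wraps is sketched below; the StubSession and Controller classes are
invented stand-ins so the sketch runs without a database.]

```python
# Stdlib equivalent of the commits_transaction decorator: commit on
# success, rollback and re-raise on failure. Session is a stub here
# that just records which method was called.
import functools

class StubSession:
    def __init__(self):
        self.calls = []
    def commit(self):
        self.calls.append("commit")
    def rollback(self):
        self.calls.append("rollback")

Session = StubSession()

def commits_transaction(fn):
    @functools.wraps(fn)
    def wrapper(self, *args, **kw):
        try:
            ret = fn(self, *args, **kw)
            Session.commit()
            return ret
        except Exception:
            Session.rollback()
            raise
    return wrapper

class Controller:
    @commits_transaction
    def ok(self):
        return "rendered"

    @commits_transaction
    def boom(self):
        raise ValueError("db error")

c = Controller()
result = c.ok()           # commits, then returns the rendered response
try:
    c.boom()              # rolls back, then re-raises
except ValueError:
    pass
```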