Hi Michael, I found this (your writing) in a thread from quite a while
back:

> A common pattern which can cause what you see there is if your
> templates are accessing lazy loaders on objects, but the Session which
> loaded those objects had been clear()'ed or otherwise closed before
> the template was allowed to render.  You should try to keep the same
> Session open throughout the full request lifecycle, and you should
> avoid removing any objects from the Session which you wish to continue
> using.
>
Hi - I came across this because I've started getting the same problem
and was somewhat relieved to see that the solution is a well-known one
(the error is indeed raised when the template renders). I have code
like this:

        [do stuff involving the session]
        return self.render_response('settings.mako', t_pars)

in my controller methods, and it's at the return that the error gets
raised. Can you suggest a good approach for keeping the session alive
here? I'm assuming the problem is that self.render_response returns
something lazy, so by the time the template actually renders, the
objects belonging to the session have gone out of scope.
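
One thought I had (just a guess on my part) is that eager-loading
everything the template needs would sidestep the lazy loads at render
time even if the session has already gone away. Something like this -
User, user_id and the 'addresses' relation are made-up stand-ins for
my actual model:

        from sqlalchemy.orm import eagerload

        def settings(self, user_id):
            # load the related collection up front so nothing lazy
            # fires after the session has been closed/removed
            user = Session.query(User).options(
                eagerload('addresses')).get(user_id)
            t_pars = {'user': user}
            return self.render_response('settings.mako', t_pars)

Would that be a reasonable workaround, or is keeping the session open
for the whole request the better fix?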

Thanks for any advice you can offer.

On May 26, 9:33 am, Bob Farrell <robertanthonyfarr...@googlemail.com>
wrote:
> On May 23, 7:56 pm, Michael Bayer <mike...@zzzcomputing.com> wrote:
>
> > On May 23, 2009, at 1:47 PM, Bob Farrell wrote:
>
> > > Hi, using scoped_session(sessionmaker()) to create my sessions, if I
> > > hammer a request (using Pylons) by repeatedly clicking on a link that
> > > uses the ORM somewhat extensively for the relevant request, it seems
> > > that another thread is getting involved with SQLAlchemy internals and
> > > pulling the rug out from under its feet.
>
> > that means you are sharing a mapped instance between threads.  A
> > mapped instance, when associated with a Session (i.e. not detached),
> > should be considered an extension of that Session's internal state.
> > The Session isn't threadsafe, so you can't share a persistent
> > instance between threads.  If you are using some kind of persistent/
> > cached instances, make sure they are detached from their original
> > Session first, or merge() the instances into the Session local to
> > the request before using them (you can pass the dont_load=True flag
> > to merge() if you want to cut down on potentially needless SELECT
> > statements).
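
One thing I'd like to check I've understood: the merge() pattern
described there would look roughly like this, I think - `cached_user`
being a made-up name for an instance that was originally loaded by a
different session:

        local = Session()  # the current thread's session via scoped_session
        # copy cached_user's state into this session without re-SELECTing;
        # merge() returns a new instance that belongs to `local`
        user = local.merge(cached_user, dont_load=True)
        # use `user` from here on rather than cached_user
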
>
> Okay, thanks - I'll track down the problem with that in mind.
>
> > > And here's how I'm dealing with creating the sessions:
>
> > > threadlocal = threading.local()
>
> > > Session = scoped_session(sessionmaker(autocommit=True))
> > > Session.metadata = None
>
> > > def setup_db():
> > >    if hasattr(threadlocal, 'engine'):
> > >        return
> > >    uri = config['main.engine.dburi']
> > >    threadlocal.engine = create_engine(uri)
> > >    Session.configure(bind=threadlocal.engine)
> > >    if Session.metadata is None:
> > >        Session.metadata = MetaData(threadlocal.engine)
> > >        model.initialise(Session.metadata)
>
> > the threading.local() is unnecessary... unless you are planning for
> > the same application to be run with different .ini files in each
> > thread, which would be extremely unusual.  scoped_session() already
> > handles the thread-local part for you as far as Sessions are
> > concerned, and Engine objects are threadsafe.
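
(If I follow that correctly, the scoped_session itself acts as the
per-thread registry - a rough illustration of what I mean, using the
Session object defined further down:)

        s1 = Session()
        s2 = Session()
        assert s1 is s2   # same thread -> same underlying Session
        # a different thread calling Session() gets its own instance,
        # and Session.remove() discards the current thread's session
        Session.remove()
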
>
> Ah, so "engine = create_engine(...)" will provide separate connections
> as needed, rather than just a single connection? The reason I wrote
> this code was that the original code we had was causing big problems -
> we were repeatedly getting "QueuePool limit of size 30 overflow 10
> reached, connection timed out, timeout 30" errors. What I was hoping
> to do here was to limit the connections to one per thread, but
> thinking about it, I suppose this code doesn't achieve that at all,
> as any code can create a new connection with "engine.connect()".
>
> I'm going to have to track down what bits of code in SQLAlchemy
> implicitly create new connections so I can figure out where we're not
> closing them - hopefully autocommit=True will remedy this somewhat.
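
(Noting this down for my own benefit: I gather the key point is that
anything checked out explicitly with engine.connect() has to be closed
so that it goes back to the QueuePool - a rough sketch, nothing
project-specific:)

        # `engine` is whatever create_engine(uri) returned in setup_db()
        conn = engine.connect()   # checks a connection out of the pool
        try:
            print(conn.execute("SELECT 1").scalar())
        finally:
            conn.close()          # returns the connection to the pool
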
>
> Anyway, thanks a lot for the info. :-)
>
> > Session = scoped_session(sessionmaker(autocommit=True))
> > metadata = None
>
> > def setup_db():
> >     global metadata
> >     if metadata is not None:
> >         return
> >     uri = config['main.engine.dburi']
> >     engine = create_engine(uri)
> >     Session.configure(bind=engine)
> >     metadata = MetaData(engine)
> >     model.initialise(metadata)
>
> > if OTOH you had some reason for the threadlocal engines, then you
> > don't want to use Session.configure(), which configures the whole
> > scoped_session().  You'd want to say
> > Session(bind=my_threadlocal_engine).