I have a lot of questions, so bear with me.  I've been having some
doubts about whether I'm really using sqlalchemy in a good way.

--

Is there any use case for having more than one session active in the
same thread?  Or does everyone use threadlocal sessions?  If you bind
different tables to different engines in the same metadata, can one
session service them all?  If not, this would be a use case for
multiple sessions, which would make threadlocal sessions inconvenient,
unless you made a different threadlocal session for each engine.
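For concreteness, here's roughly the multi-engine case I mean — my understanding is that a single Session can route different tables to different engines through its `binds` argument, so maybe one session really can service them all (table and engine names invented, in-memory sqlite as a stand-in):

```python
from sqlalchemy import Column, Integer, MetaData, Table, create_engine
from sqlalchemy.orm import sessionmaker

# Two separate engines, standing in for two real databases.
engine_a = create_engine("sqlite://")
engine_b = create_engine("sqlite://")

metadata = MetaData()
users = Table("users", metadata, Column("id", Integer, primary_key=True))
logs = Table("logs", metadata, Column("id", Integer, primary_key=True))
users.create(engine_a)
logs.create(engine_b)

# One session; `binds` maps each table to the engine that owns it.
Session = sessionmaker(binds={users: engine_a, logs: engine_b})
session = Session()
session.execute(users.insert().values(id=1))   # goes to engine_a
session.execute(logs.insert().values(id=2))    # goes to engine_b
session.commit()
```

If that's the intended usage, the multiple-sessions question mostly goes away.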

Is it a common practice to pass the current session into the
constructor of an ORM model?  At this point, it can't be in a session
yet, so Session.object_session(self) won't help you.  So what if the
constructor needs to do some queries, perhaps to find-or-create some
related models?  Is this why pylons uses threadlocal sessions?

-- For a specific example, say you have a constructor for a Thing that
can be in a Category, and you want to pass the category name into the
Thing constructor, and expect the constructor to find or create the
associated Category and increment the number of Things in that
category (because there are too many to count).  You'd need a session
to do those queries.  I couldn't find a way to get the mapper to do
this without a session, but maybe I'm overlooking some of the
capabilities of lazy='dynamic'.
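Here's a sketch of what I'm currently doing for that find-or-create — reaching for a module-level scoped (threadlocal) session from inside the constructor, since the object isn't in any session yet (all names are invented for illustration):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import (declarative_base, relationship, scoped_session,
                            sessionmaker)

engine = create_engine("sqlite://")
Base = declarative_base()
# The threadlocal-style session the constructor reaches for,
# since nothing has been passed in.
DBSession = scoped_session(sessionmaker(bind=engine))

class Category(Base):
    __tablename__ = "categories"
    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True)
    thing_count = Column(Integer)  # too many Things to count() them

class Thing(Base):
    __tablename__ = "things"
    id = Column(Integer, primary_key=True)
    category_id = Column(Integer, ForeignKey("categories.id"))
    category = relationship(Category)

    def __init__(self, category_name):
        # Find-or-create the Category, then bump its counter.
        session = DBSession()
        category = (session.query(Category)
                           .filter_by(name=category_name).first())
        if category is None:
            category = Category(name=category_name, thing_count=0)
            session.add(category)
        category.thing_count += 1
        self.category = category

Base.metadata.create_all(engine)
```

It works, but the constructor now silently depends on DBSession being configured, which is the part that feels off to me.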

Is connection pooling the sqlalchemy way really what we want?  Say for
example I have a variety of projects running on the same machine, all
using sqlalchemy.  Since the connection pool is in the engine
instance, there is no way these projects would be sharing information
about the connection pool, so how could you know how many connections
your server is actually generating?  The problem gets worse if you're
running those applications with multiple processes or instances,
because then you really have no idea how many connections there could
be.  This has already led to some serious problems for me.
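Back-of-the-envelope, the worst case multiplies out per process and per engine (process count here is just an assumed example; 5 and 10 are the QueuePool defaults as I understand them):

```python
# Each process/engine pair maintains its own independent QueuePool,
# so the server-side worst case is the product of all of them.
pool_size, max_overflow = 5, 10   # SQLAlchemy's QueuePool defaults
processes = 8                     # e.g. fastcgi children (assumed)
engines_per_process = 1

worst_case = processes * engines_per_process * (pool_size + max_overflow)
print(worst_case)  # 8 * 1 * 15 = 120 possible connections
```

None of those pools know about each other, which is exactly the problem.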

Would it be better to use a connection pooling solution external to my
python applications?  One that had shared knowledge of all of them?
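If I went that route, I assume the engine side would look something like this — turn SQLAlchemy's own pooling off with NullPool and let the external pooler (PgBouncer, say) be the only thing holding connections open (the sqlite URL is just a stand-in so the snippet is self-contained; a real setup would point at the pooler's port, e.g. "postgresql://user@localhost:6432/mydb"):

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

# NullPool: every checkout opens a fresh connection and every checkin
# closes it, so only the external pooler keeps connections alive.
engine = create_engine("sqlite:///app.db", poolclass=NullPool)
```

Is that the recommended combination, or does it defeat something SQLAlchemy's pool is doing for me?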

If I'm running a web application with fastcgi, I'm already affected by
this fragmentation of connection pools, right, since fastcgi uses
multiple processes?  Should I set my pool size to the size I
expect only a single process to use?  I wouldn't expect a single
process to use more than one connection at a time, if no threading is
going on.
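In other words, should each single-threaded process be configured like this (filename invented; forcing QueuePool explicitly just so pool_size applies)?

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool

# One connection per single-threaded process, no overflow, so the
# per-process cap is exactly one and the server-side total is just
# the number of processes.
engine = create_engine(
    "sqlite:///things.db",   # stand-in URL for the real database
    poolclass=QueuePool,
    pool_size=1,
    max_overflow=0,
)
```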

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.