On Jun 3, 8:53 am, Michael Bayer <mike...@zzzcomputing.com> wrote:
> > Is it a common practice to pass the current session into the
> > constructor of an ORM model?  
>
> no.   The session consumes your objects, not the other way around.  An object 
> can always get its current session via object_session(self).
>
> > At this point, it can't be in a session
> > yet, so Session.object_session(self) wont help you.  So what if the
> > constructor needs to do some queries, perhaps to find-or-create some
> > related models?  Is this why pylons uses threadlocal sessions?
>
> the scoped session does serve as a way for all areas of an application to get 
> to "the session", yes.   But methods on an object should probably use 
> object_session(self) for portability.
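
I follow the object_session(self) part for regular instance methods; I
assume it looks roughly like this (the method and query here are just my
own sketch):

from sqlalchemy.orm import object_session

class Thing(object):
    def same_color_count(self):
        # works once self has been added to (or loaded by) a session
        session = object_session(self)
        return session.query(Thing).filter_by(color=self.color).count()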

But can you explain how this would work in a constructor, before the object
is in any session?  Here's a contrived example to illustrate my problem:

class Thing(object):
    """A user-facing class"""
    def __init__(self, color):
        # what if I wanted to do something like this?  Where would
        # this session come from?  A thread-local?
        color_obj = (session.query(Color).filter_by(name=color).first()
                     or Color(color))  # find-or-create
        color_obj.count = color_obj.count + 1
        self.color = color_obj

class Color(object):
    """This class is just there to count things"""
    def __init__(self, name):
        self.name = name
        self.count = 0 # number of things of this color

metadata = MetaData()
things_table = Table('things', metadata, autoload=True)
colors_table = Table('colors', metadata, autoload=True)
mapper(Thing, things_table)
mapper(Color, colors_table)

session = Session()
thing = Thing('red')  # Thing.__init__ doesn't know about the session
                      # at this point
session.add(thing)
session.commit()
session.close()
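
For concreteness, is the thread-local idea that I'd keep a module-level
scoped_session registry and have __init__ reach for it?  Something like
this sketch (the names and setup are just my guess at the "pylons style"):

from sqlalchemy.orm import scoped_session, sessionmaker

# module-level, thread-local session registry
DBSession = scoped_session(sessionmaker())

class Thing(object):
    def __init__(self, color):
        session = DBSession()  # the current thread's session
        color_obj = (session.query(Color).filter_by(name=color).first()
                     or Color(color))
        color_obj.count += 1
        self.color = color_obj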

> > The problem gets worse if you're
> > running those applications with multiple processes or instances,
> > because then you really have no idea how many connections there could
> > be.  
>
> you have an exact idea.  The max size of the pool is deterministic and 
> explicit.

Just to be perfectly clear, the total number of connections maintained is
the number of processes * pool_size, right?  (Or processes * (pool_size +
max_overflow) for the maximum possible number of connections.)
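
In other words, with a configuration like this (numbers and URL purely for
illustration):

from sqlalchemy import create_engine

# per-process pool: up to 5 pooled connections plus 10 overflow
engine = create_engine('postgresql://user:pw@host/db',
                       pool_size=5, max_overflow=10)

# with, say, 4 fastcgi processes:
#   at most 4 * 5        = 20 connections retained in the pools
#   at most 4 * (5 + 10) = 60 connections open at peak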

> > Would it be better to use a connection pooling solution external to my
> > python applications?  One that had shared knowledge of all of them?
>
> the django camp is big on pgpool.  To me it seems unnecessary.  I've run 
> applications that spawn off lots of child forks, the child forks typically 
> use one session for the life of the subprocess, so it's pretty much one 
> connection per subprocess.   I don't see why it needs to be any more 
> complicated than that.     If your child processes are using more connections 
> than they should be, pgpool doesn't solve that, you should usually have one 
> connection per process per thread.
>
> > If I'm running a web application with fastcgi, I'm already affected by
> > this framentation of connection pools, right?  Considering fastcgi
> > uses multiple processes.  Should I set my pool size to the size I
> > expect only a single process to use?  I wouldn't expect a single
> > process to use more than one connection at a time, if no threading is
> > going on.
>
> that's correct, so if you limit the pool size to exactly 1 with no overflow, 
> you'll get an error and a stack trace immediately at the point where your 
> app is opening a second connection, so that it can be corrected.

So if I never intend to use multi-threading or multiple persistent
sessions, I should always set the pool_size to 1?  Is that what you do?
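
i.e., if I understand the suggestion, something like this (URL again just a
placeholder):

from sqlalchemy import create_engine

# one pooled connection per process, no overflow, so a second
# simultaneous checkout fails instead of silently growing the pool
engine = create_engine('postgresql://user:pw@host/db',
                       pool_size=1, max_overflow=0)

(Would I also want pool_timeout=0 there, so the second checkout errors right
away instead of waiting on the pool?)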
