On Sep 6, 9:27 am, Michael Bayer <[EMAIL PROTECTED]> wrote:
> On Sep 6, 2008, at 8:03 AM, gatto wrote:
> > On Sep 5, 12:19 pm, gatto <[EMAIL PROTECTED]> wrote:
> >> hi everyone. just started using sqlalchemy and elixir recently. i'm
> >> not having any issues with coding yet, just this one:
> >>
> >> OperationalError: (OperationalError) (1040, 'Too many connections')
> >>
> >> as a workaround for this problem, i created a cron job to restart
> >> apache once every hour and that is mostly working. but i really need
> >> to find out what is causing this. i'm using a custom built mvc
> >> framework that runs atop mod_python, and in the base application
> >> class i have the following code, which executes once at the
> >> beginning of the request:
> >>
> >> metadata.bind = 'mysql://' + app_config.database.user + ':' +
> >>     app_config.database.password + '@' + app_config.database.host +
> >>     ':3306/' + app_config.database.schema
> >> metadata.bind.echo = True
> >> metadata.bind.recycle = 3600
>
> If this is executed "at the beginning of a request", do you mean
> "each" request? If so, this code is creating a new engine on each
> request, since metadata.bind = "somestring" calls create_engine().
> For a mod_python application, create_engine() should be called exactly
> once at the module level, so that one connection pool is created for
> the lifespan of the entire process. Though in theory, if you were
> creating new engines on each request, they should be garbage collected
> once they became unreferenced. After ensuring that you have only one
> engine per process, for further research I'd advise enabling
> connection pool logging to see just where all the connections are
> getting opened.
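What Michael describes, one create_engine() call at module level plus pool logging, might look roughly like the sketch below. This is an illustration only: it uses an in-memory sqlite URL so it runs standalone, where the thread's real URL would be the mysql:// one assembled from app_config (that framework's own config object).

```python
# Sketch: a single module-level engine with connection pool logging.
# sqlite:// is used here only so the snippet runs standalone; in the
# thread this would be the mysql:// URL built from app_config.
import logging

from sqlalchemy import create_engine, text

# the 'sqlalchemy.pool' logger reports every connect, checkout and
# checkin, showing exactly where connections are being opened
logging.basicConfig()
logging.getLogger('sqlalchemy.pool').setLevel(logging.DEBUG)

# module level: executed once per interpreter/process, not per request
engine = create_engine('sqlite://', pool_recycle=3600)

# each request then borrows a connection from the one process-wide pool
with engine.connect() as conn:
    print(conn.execute(text('select 1')).scalar())  # prints 1
```

The point is that the engine (and its pool) lives for the whole process, so hammering reload never stacks up fresh pools the way per-request create_engine() calls do.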
yes, each request. so what i'm gathering is that this is happening because mod_python is keeping the engine alive.. i don't know exactly what to do about that, but i worked out something just now that may work. i'm somewhat new to both python and mod_python, so my approach might not be the best one.. i created a global variable that tracks the request phase, whether a new interpreter is getting loaded or if it is already running. i'm not using connection pooling just yet, just taking it a step at a time and starting simple...

global phase
try:
    if phase == 'start':
        phase = 'running'
except NameError:
    phase = 'start'

if phase == 'start':
    self.engine = create_engine('mysql://' + app_config.database.user + ':' +
                                app_config.database.password + '@' +
                                app_config.database.host + ':3306/' +
                                app_config.database.schema)
    metadata.bind = self.engine
    metadata.bind.recycle = 3600

will that work for me, or is there a chance that even though 'phase' might be set to 'running', the engine has been garbage collected and the request will fail? so far in my simple tests, just hitting reload a bunch of times and printing the 'phase' var, sometimes it prints 'start' but mostly it prints 'running', and the corresponding connections in mysql are as expected: new ones are only created when i see the word 'start' printed out.

i also have one other concern.. in periods of high traffic, the number of apache processes might outpace the mysql max_connections limit i configured (300 at the moment). it seems somewhat unlikely, especially given the amount of traffic our site receives right now, but i'm looking toward the future... perhaps there isn't much i can do about that until i move on to connection pooling?

You received this message because you are subscribed to the Google Groups "sqlalchemy" group.
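On the max_connections worry: SQLAlchemy's default QueuePool takes hard limits, so each process can be capped at pool_size + max_overflow connections, making the cluster-wide ceiling (number of processes) x (pool_size + max_overflow). A sketch with purely illustrative numbers, again using sqlite so it runs standalone (the thread's URL would be the mysql:// one):

```python
# Sketch: capping per-process connection usage with pool limits.
# The numbers are illustrative, not recommendations.
from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool

engine = create_engine(
    'sqlite://',
    poolclass=QueuePool,
    pool_size=5,        # connections held open in the pool
    max_overflow=10,    # extra connections allowed under burst load
    pool_timeout=30,    # seconds a request waits for a free connection
    pool_recycle=3600,  # recycle connections older than an hour
)

# e.g. 20 apache processes * (5 + 10) = 300, matching max_connections
print(engine.pool.size())  # prints 5
```

With limits like these, a traffic spike makes requests queue briefly for a connection (up to pool_timeout) instead of pushing MySQL past its configured maximum.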