I finally came back to this.  Here's what I ended up with:

# I tried to enable profiling on a per-engine level before resorting to this
# hack. (Monkey-patching classes by scanning the gc! Woot!)
#
# Per-engine profile turns out to totally not work because there's so many
# layers of "clever" stuff going on (well, primarily PoolConnectionProvider
# returning proxies instead of real Connections) that it's really impossible to
# decorate Connections in a general manner by relying on
# engine.connection_provider.
# Too bad, because it was rather more elegant.
#
# This will work no matter how many layers of proxies there are...
def enable_profiling():
    import gc
    for o in gc.get_objects():
        if isinstance(o, type):
            if o == Connection or Connection in o.__bases__:
                o._execute = _profilingexecute
                o._executemany = _profilingexecutemany

where the _profilingexecute methods do pretty much what was discussed before.

On 10/30/06, Michael Bayer <[EMAIL PROTECTED]> wrote:

I'd look into building this as a ProxyEngine.  _execute and
_executemany might be better targets for profiling, but it's not super-
important.

On Oct 30, 2006, at 6:38 PM, Jonathan Ellis wrote:

>
> For me it has been useful in the past to track overall database query
> speed so I could optimize the query taking the most aggregate time.
> (I.e., execution time * times executed.)  It looks to me like this
> could be hooked in to SA pretty easily, with just a minor change to
> Connection._execute_raw, using statement as the key to aggregate on.
> (You could even define two _execute_raws and pick one at runtime to
> avoid any overhead when not in profiling mode.)  This seems to work
> fine:
>
>             start = time.time()
>             if parameters is not None and isinstance(parameters, list) \
>                     and len(parameters) > 0 \
>                     and isinstance(parameters[0], (list, dict)):
>                 self._executemany(cursor, statement, parameters,
>                                   context=context)
>             else:
>                 self._execute(cursor, statement, parameters,
>                               context=context)
>             end = time.time()
>             self._autocommit(statement)
>             profile_data[statement] = (profile_data.get(statement, 0)
>                                        + (end - start))
>
> Of course, this only tells you what generated SQL is slow, not what
> code caused those queries to run, but it's easy enough to grab caller
> info from the stack.  But am I missing other code paths that would
> have to be tracked?
>
> --
> Jonathan Ellis
> http://spyced.blogspot.com
