In the past I have found it useful to track overall database query speed so I could optimize the query taking the most aggregate time (i.e., execution time * times executed). It looks to me like this could be hooked into SA pretty easily, with just a minor change to Connection._execute_raw, using the statement as the key to aggregate on. (You could even define two versions of _execute_raw and pick one at runtime, to avoid any overhead when not in profiling mode.) This seems to work fine:
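To make the idea concrete, here is a minimal standalone sketch of the aggregation scheme described above, outside of SA itself. The names profile_data, timed_execute, and slowest_statements are my own, hypothetical; the point is just the dict keyed on the SQL string, accumulating wall-clock time:

```python
import time

# Hypothetical module-level dict: maps each SQL string to the total
# wall-clock time spent executing it (across all executions).
profile_data = {}

def timed_execute(statement, execute_fn):
    """Run execute_fn() and add its elapsed time to profile_data[statement]."""
    start = time.time()
    try:
        return execute_fn()
    finally:
        elapsed = time.time() - start
        profile_data[statement] = profile_data.get(statement, 0.0) + elapsed

def slowest_statements(n=10):
    """Statements ranked by aggregate time (execution time * times executed)."""
    return sorted(profile_data.items(), key=lambda item: item[1], reverse=True)[:n]
```

With this in place, the "two _execute_raws" trick is just choosing between a version that calls timed_execute and one that calls the cursor directly.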
    start = time.time()
    if parameters is not None and isinstance(parameters, list) \
            and len(parameters) > 0 \
            and (isinstance(parameters[0], list) or isinstance(parameters[0], dict)):
        self._executemany(cursor, statement, parameters, context=context)
    else:
        self._execute(cursor, statement, parameters, context=context)
    end = time.time()
    self._autocommit(statement)
    profile_data[statement] = profile_data.get(statement, 0) + (end - start)

Of course, this only tells you which generated SQL is slow, not what code caused those queries to run, but it's easy enough to grab caller info from the stack. Am I missing other code paths that would have to be tracked?

--
Jonathan Ellis
http://spyced.blogspot.com

You received this message because you are subscribed to the Google Groups "sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sqlalchemy
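P.S. For the "grab caller info from the stack" part, a minimal sketch using the stdlib traceback module (the function name caller_info and the depth parameter are my own; depth is how many wrapper frames to skip so you record the application code that issued the query, not the profiling hook itself):

```python
import traceback

def caller_info(depth=2):
    """Return 'filename:lineno (function)' for the frame `depth` levels
    above this call -- i.e., the code that actually issued the query,
    skipping `depth` intermediate wrapper frames."""
    # extract_stack()[-1] is this function's own frame; [-depth-1] walks
    # back past the profiling wrapper(s) to the real caller.
    frame = traceback.extract_stack()[-depth - 1]
    return "%s:%d (%s)" % (frame.filename, frame.lineno, frame.name)
```

Keyed alongside the statement in profile_data, this would let the report point at the application code responsible for each slow query.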