I've used timing around the events to measure long statement execution 
and transaction times. A trick I learned was logging that stuff to a 
separate, autocommit-enabled database and session.  

• If a statement took too long to execute, I'd log the query + params.
• If the session took too long before commit, I'd log all the queries + 
params that happened within it.
• I would use events to store that information separately along with the 
timing, then write or discard on commit.  
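A minimal sketch of the statement-timing part, using SQLAlchemy's 
before_cursor_execute / after_cursor_execute engine events. The in-memory 
SQLite URL, the 0.5s threshold, and the slow_queries list (a stand-in for 
writing to the separate autocommit session) are all illustrative assumptions:

```python
import time
from sqlalchemy import create_engine, event, text

engine = create_engine("sqlite://")   # assumption: your real engine here
slow_queries = []                     # stand-in for the separate log database
SLOW_THRESHOLD = 0.5                  # seconds; tune for your workload

@event.listens_for(engine, "before_cursor_execute")
def _start_timer(conn, cursor, statement, parameters, context, executemany):
    # stash the start time on the connection's info dict
    conn.info.setdefault("query_start", []).append(time.monotonic())

@event.listens_for(engine, "after_cursor_execute")
def _stop_timer(conn, cursor, statement, parameters, context, executemany):
    elapsed = time.monotonic() - conn.info["query_start"].pop()
    if elapsed > SLOW_THRESHOLD:
        # in practice: INSERT into the autocommit-enabled log database
        slow_queries.append((elapsed, statement, parameters))

with engine.connect() as conn:
    conn.execute(text("SELECT 1"))
```

The same idea extends to transactions by stamping a start time in a 
begin listener and checking it on commit.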
That sort of overhead is not good for production, but for testing and QA 
it's pretty great.  The nice thing about storing it in SQL (instead of 
logging) is that you can segment the data a bit and then run analytics on 
the database.  
On dev and production, I also use SQLAlchemy to log exceptions + some 
environment variables to a database.  That has been really helpful 
in finding the various causes.

If you know you're getting a single specific OperationalError, that's a 
good thing.  It makes the error easier to catch and record, which helps 
pinpoint the cause.  
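For example, catching just that exception class and recording it (the 
record() helper here is a hypothetical stand-in for writing to your log 
database):

```python
from sqlalchemy import create_engine, text
from sqlalchemy.exc import OperationalError

engine = create_engine("sqlite://")   # assumption: your real engine here
caught = []

def record(exc):
    # stand-in: log the exception + statement + params to your database
    caught.append(type(exc).__name__)

try:
    with engine.connect() as conn:
        # querying a missing table raises OperationalError on SQLite
        conn.execute(text("SELECT * FROM no_such_table"))
except OperationalError as exc:
    record(exc)
```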

Have you looked at Sentry?  It's pretty easy to integrate and fairly good 
at helping pinpoint the causes of things like this too.

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
