[sqlalchemy] Re: is MapperExtension.after_update() called before the UPDATE is issued to the DB?
On Wed, Apr 30, 2008 at 10:56 AM, Rick Morrison [EMAIL PROTECTED] wrote:
>> then queried the db directly using sql. It looks like the change
>> hasn't made it to the DB yet
>
> Also possible is that you're using an MVCC DB such as Postgres or
> Oracle, and you're looking at an old, pre-update version of the data,
> as your direct SQL would be in a separate transaction.

We are using Postgres (8.1.6), but in the case of the sqlalchemy code I've got transactional set to off for the session. I'm not too familiar with the specifics of MVCC (just had to google it, actually)... is it possible the new data isn't visible to the other process if the updating process still has a live cursor?

--
david bonner
[EMAIL PROTECTED]

You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To post to this group, send email to sqlalchemy@googlegroups.com. To unsubscribe from this group, send email to [EMAIL PROTECTED]. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en
[sqlalchemy] Re: is MapperExtension.after_update() called before the UPDATE is issued to the DB?
On Wed, Apr 30, 2008 at 9:29 AM, Michael Bayer [EMAIL PROTECTED] wrote:
> after_update() is called after all UPDATE statements have been issued
> for that particular mapper. This includes *only* the table that is
> mapped by that mapper, not any other mappers. Is it possible that you
> are seeing an UPDATE being issued for an item that is related to your
> parent via a many-to-one? Or that you have multiple MEs with
> after_update() at play and perhaps you're seeing a different one fire
> off.

It was, but I've managed to reproduce the problem with a fairly minimal test case:

    import pdb
    import datetime
    import pprint

    from pkg_resources import require
    require('SQLAlchemy==0.4.3')

    from sqlalchemy import *
    from sqlalchemy.orm import *

    class MyExtension(MapperExtension):
        def after_update(self, mapper, connection, instance):
            #pdb.set_trace()
            pprint.pprint(instance)
            return EXT_CONTINUE

    class Schedule(object):
        pass

    dburl = 'postgres://names have been changed to protect the innocent'
    engine = create_engine(dburl, strategy='threadlocal')
    meta = MetaData(engine)
    Session = scoped_session(sessionmaker(bind=engine, autoflush=False,
                                          transactional=False))
    mapper = Session.mapper

    meta.reflect()
    mapper(Schedule, meta.tables['schedules'], extension=MyExtension())

    s = Schedule.query.first()
    s.notes = str(datetime.datetime.now())
    Session.flush([s])

When this code drops me into pdb, the data in instance.notes looks like the new value, but querying the db in a separate process gets me the old value.

--
david bonner
[EMAIL PROTECTED]
[sqlalchemy] Re: is MapperExtension.after_update() called before the UPDATE is issued to the DB?
oops, that should have the pdb.set_trace() uncommented, obviously. sorry.

    class MyExtension(MapperExtension):
        def after_update(self, mapper, connection, instance):
            pdb.set_trace()
            pprint.pprint(instance)
            return EXT_CONTINUE

--
david bonner
[EMAIL PROTECTED]
[sqlalchemy] Re: is MapperExtension.after_update() called before the UPDATE is issued to the DB?
On Wed, Apr 30, 2008 at 11:30 AM, Michael Bayer [EMAIL PROTECTED] wrote:
> flush() always uses a transaction, so when your pdb breakpoint hits,
> the transaction has not been committed yet and its results are not
> visible outside of the transaction. The transactional keyword on
> Session does not mean "don't use transactions at all"; it means
> "don't automatically enter a transaction outside of a flush". It's
> been renamed to autocommit in 0.5.

Ah, yeah, that would do it. Any suggestions for other hooks that might do what I'm looking for, or should I just handle this myself before and after the flush?

--
david bonner
[EMAIL PROTECTED]
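Michael's point about flush() always running inside a transaction can be seen with plain DB-API code. The sketch below uses stdlib sqlite3 as a stand-in for Postgres (the table and values are made up for illustration): an uncommitted UPDATE from one connection stays invisible to a second connection until COMMIT, which is exactly what the pdb breakpoint in after_update() was observing.

```python
# Visibility sketch: writer updates a row inside an open transaction;
# reader (a separate connection, like the separate psql process in the
# thread) sees the old value until the writer commits.
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path)
writer.execute("CREATE TABLE schedules (id INTEGER PRIMARY KEY, notes TEXT)")
writer.execute("INSERT INTO schedules (id, notes) VALUES (1, 'old')")
writer.commit()

reader = sqlite3.connect(path)

# This UPDATE implicitly opens a transaction that is not yet committed,
# analogous to the state at the moment after_update() fires.
writer.execute("UPDATE schedules SET notes = 'new' WHERE id = 1")

seen_before = reader.execute(
    "SELECT notes FROM schedules WHERE id = 1").fetchone()[0]

writer.commit()  # flush's transaction commits

seen_after = reader.execute(
    "SELECT notes FROM schedules WHERE id = 1").fetchone()[0]

print(seen_before, seen_after)  # old new
```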
[sqlalchemy] Re: is MapperExtension.after_update() called before the UPDATE is issued to the DB?
On Wed, Apr 30, 2008 at 2:50 PM, Michael Bayer [EMAIL PROTECTED] wrote:
> If you want pre/post flush activities, there's a SessionExtension
> which hooks into the Session for that. You can set it up with the
> sessionmaker() function so that it's always plugged in.

Thanks, I'll look into that. And thanks again for the amazingly quick and helpful replies.

--
david bonner
[EMAIL PROTECTED]
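A hedged sketch of the SessionExtension approach for the daemon-notification use case. Since SQLAlchemy 0.4 may not be installed, MiniSession below is a tiny made-up stand-in that only drives the hooks in the order the real Session would; the hook names (after_flush, after_commit) follow the 0.4 sqlalchemy.orm.interfaces.SessionExtension API, and everything else is illustrative.

```python
# Stand-in demonstrating *when* each hook fires relative to the
# flush's transaction, which is the crux of the original problem.
notifications = []

class NotifyDaemonExtension:
    # Real code would subclass sqlalchemy.orm.interfaces.SessionExtension
    # and pass it as sessionmaker(extension=NotifyDaemonExtension()).
    def after_flush(self, session, flush_context):
        # SQL has been issued, but the flush's transaction is still
        # open: other processes cannot see the changes yet.
        notifications.append("after_flush")

    def after_commit(self, session):
        # The transaction is committed; this is the safe point to tell
        # the daemon that the new data is actually visible.
        notifications.append("after_commit")

class MiniSession:
    """Made-up stand-in that invokes the hooks in the Session's order."""
    def __init__(self, extension):
        self.extension = extension

    def flush(self):
        # ... UPDATE statements would be issued here ...
        self.extension.after_flush(self, None)
        # a non-transactional (autocommit) Session commits right after:
        self.extension.after_commit(self)

MiniSession(NotifyDaemonExtension()).flush()
print(notifications)  # ['after_flush', 'after_commit']
```

The design point: the daemon notification belongs in after_commit, not after_flush or after_update, because only after COMMIT is the new data visible to other processes.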
[sqlalchemy] is MapperExtension.after_update() called before the UPDATE is issued to the DB?
Hi, I'm trying to write a mapper extension that notifies a daemon about changes made to the DB that it needs to care about. But it looks like after_update() is actually getting called before the UPDATE is sent to the db. Not knowing a better way to debug it, I just threw a pdb.set_trace() into my after_update() method to pause the process making the change, then queried the db directly using SQL. It looks like the change hasn't made it to the DB yet, even though pdb.set_trace() is being triggered. I'm using 0.4.3, with psycopg2. Is this a known bug, or am I just using the MapperExtension incorrectly? Any help would be greatly appreciated. Thanks.

--
david bonner
[EMAIL PROTECTED]
[sqlalchemy] Re: Getting the table name that a mapped object came from
On Fri, Apr 25, 2008 at 2:34 PM, TP [EMAIL PROTECTED] wrote:
> Hi, does anyone know a way to get the underlying table name from the
> DB for a mapped object?
>
>     foo = session.query(Foo).filter_by(a=x)
>
> I'd like to now find out what table name foo came from.

You'd want sqlalchemy.orm.object_mapper(foo).local_table.name

--
david bonner
[EMAIL PROTECTED]
[sqlalchemy] should a dns error return true in dialect.is_disconnect() ?
Hi, I've run into a problem running SA 0.4.0 on top of psycopg2. We had a DNS hiccup, and the next attempt to execute a query triggered a ProgrammingError. Unfortunately, it seems that error didn't also invalidate the (implicit) connection, which was then returned to the pool. Successive queries returned a couple of different errors (I can track down the exact sequence of errors if you need it), but eventually we end up getting InvalidRequestError("This connection is closed") every time we run a query. The backtrace looks something like:

    2007-12-04 08:10:10,585 ERROR StorageWatcher-/build/storage-test/release-Thread-2 - Unable to retrieve the list of builds on /build/storage-test/release: This connection is closed
    Traceback (most recent call last):
      File "/mts-cm/home/dbonner/clients/storagetest/bin/storagemanager.py", line 844, in getBuildList
        builds = sess.query(Build).order_by(Build.buildid)\
      File "/local/toolchain/lin32/python-2.4.3/lib/python2.4/site-packages/SQLAlchemy-0.4.0-py2.4.egg/sqlalchemy/orm/query.py", line 571, in all
      File "/local/toolchain/lin32/python-2.4.3/lib/python2.4/site-packages/SQLAlchemy-0.4.0-py2.4.egg/sqlalchemy/orm/query.py", line 619, in __iter__
      File "/local/toolchain/lin32/python-2.4.3/lib/python2.4/site-packages/SQLAlchemy-0.4.0-py2.4.egg/sqlalchemy/orm/query.py", line 622, in _execute_and_instances
      File "/local/toolchain/lin32/python-2.4.3/lib/python2.4/site-packages/SQLAlchemy-0.4.0-py2.4.egg/sqlalchemy/orm/session.py", line 527, in execute
      File "/local/toolchain/lin32/python-2.4.3/lib/python2.4/site-packages/SQLAlchemy-0.4.0-py2.4.egg/sqlalchemy/engine/base.py", line 779, in execute
      File "/local/toolchain/lin32/python-2.4.3/lib/python2.4/site-packages/SQLAlchemy-0.4.0-py2.4.egg/sqlalchemy/engine/base.py", line 829, in _execute_clauseelement
      File "/local/toolchain/lin32/python-2.4.3/lib/python2.4/site-packages/SQLAlchemy-0.4.0-py2.4.egg/sqlalchemy/engine/base.py", line 838, in _execute_compiled
      File "/local/toolchain/lin32/python-2.4.3/lib/python2.4/site-packages/SQLAlchemy-0.4.0-py2.4.egg/sqlalchemy/engine/base.py", line 846, in __create_execution_context
      File "/local/toolchain/lin32/python-2.4.3/lib/python2.4/site-packages/SQLAlchemy-0.4.0-py2.4.egg/sqlalchemy/databases/postgres.py", line 303, in create_execution_context
      File "/local/toolchain/lin32/python-2.4.3/lib/python2.4/site-packages/SQLAlchemy-0.4.0-py2.4.egg/sqlalchemy/engine/default.py", line 157, in __init__
      File "/local/toolchain/lin32/python-2.4.3/lib/python2.4/site-packages/SQLAlchemy-0.4.0-py2.4.egg/sqlalchemy/databases/postgres.py", line 255, in create_cursor
      File "/local/toolchain/lin32/python-2.4.3/lib/python2.4/site-packages/SQLAlchemy-0.4.0-py2.4.egg/sqlalchemy/pool.py", line 338, in cursor
      File "/local/toolchain/lin32/python-2.4.3/lib/python2.4/site-packages/SQLAlchemy-0.4.0-py2.4.egg/sqlalchemy/pool.py", line 327, in invalidate
    InvalidRequestError: This connection is closed

I poked around a little, and I'm guessing one fix might be to change databases.postgres.PGDialect.is_disconnect() to treat the initial ProgrammingError with "no route to host" in it as a disconnect error. I just wanted to check and see if that was the right way to fix this before I tried filing a trac ticket and submitting a patch. Please let me know if I'm missing any useful details. Thanks.
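For reference, the 0.4 dialects implement is_disconnect() as substring checks on the exception message, so the proposed fix amounts to adding one more marker string. The sketch below is illustrative only: the marker list is a made-up stand-in, not PGDialect's actual list.

```python
# Hypothetical sketch of a string-matching is_disconnect(), the shape
# of the fix proposed above. The marker strings are assumptions for
# illustration, not the real dialect's messages.
def is_disconnect(e):
    msg = str(e).lower()
    markers = (
        "connection is closed",
        "cursor already closed",
        "no route to host",  # the proposed addition
    )
    return any(marker in msg for marker in markers)

print(is_disconnect(Exception("could not connect: No route to host")))  # True
print(is_disconnect(Exception("syntax error at or near SELECT")))       # False
```

When is_disconnect() returns True, the pool invalidates the underlying DBAPI connection instead of returning it for reuse, which is the behavior missing in the scenario above.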
[sqlalchemy] Re: should a dns error return true in dialect.is_disconnect() ?
On Dec 4, 4:32 pm, Michael Bayer [EMAIL PROTECTED] wrote:
> If the error message isn't caught by is_disconnect(), then yes, the
> specific error message should be installed in there. But also note
> that psycopg2 has some specific issues with disconnects, namely that
> the exception is not always raised cleanly. I had discussed this on
> the psycopg2 list and supplied test scripts illustrating the issue,
> but I don't know if anyone took the time to verify what I was
> illustrating. By "not clean" I mean the exception would get thrown in
> an asynchronous fashion so that we really couldn't catch it at all.
> I've seen tickets in psycopg2's trac which seem to address related
> issues.

Yeah, the more I think about it, the more I realize this is an error when the dbapi connection is created; it's not a disconnect condition. The original error message is "No route to host", which I'm pretty sure is EHOSTUNREACH, which you get on a socket connect, not on a send. Which means that yep, this is likely an async problem... the connection seems to be fine when you create it, but when you use it, it throws the connection error.

> But also, that you were getting "this connection is closed" would
> indicate that a recycle did occur, although if you have pool logs
> that would describe it more clearly. But I wonder if you had
> long-running Connection objects open; at the moment, the Connection
> itself doesn't get hold of a new DBAPI connection when a recycle
> occurs; you'd have to open a new Connection. I've been meaning to
> change this behavior in trunk so that even a held-open Connection
> would still allow a new recycle to happen. Just curious how you
> configured that end.

It's a long-running multi-threaded daemon process, using a TLEngine and all implicit connections. The lifespan of the sessions is pretty short, so I've assumed my threads weren't holding an open connection object. Haven't configured pool logging to test it yet, though. Suppose it's a good time to learn how to do that in 0.4 now.