[sqlalchemy] MSSQL dislikes schema= on table.create()
Hi, semi-lurker here; I'm using pyodbc, by the way. I had to comment out ``table.schema = 'queer'`` before calling metadata.create_all() because MSSQL blows up:

File "c:\python24\lib\site-packages\sqlalchemy-0.4.5-py2.4.egg\sqlalchemy\databases\mssql.py", line 499, in do_execute
    cursor.execute("SET IDENTITY_INSERT %s OFF" % self.identifier_preparer.format_table(context.compiled.statement.table))
SystemError: 'finally' pops bad exception

Anyway, I think that either means I should be using schema='dbo', or that the schema value should be prepended to the table names with an underscore. I'm using schema='queer' as a way to set my tables apart from pre-existing ones. Should I do something else?
[sqlalchemy] Re: MSSQL dislikes schema= on table.create()
That error would be thrown by an insert, not a table create, and I believe there are other users using pyodbc with schema-specified tables without problems. I won't have a chance to look at this under pyodbc until tomorrow; in the meantime, if you could try with pymssql to see whether you get the same error, that will help in debugging this. BTW, the dbo schema is implicit if you don't specify an explicit schema.

Rick

You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To post to this group, send email to sqlalchemy@googlegroups.com. To unsubscribe from this group, send email to [EMAIL PROTECTED]. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en
[sqlalchemy] Re: SQLAlchemy experts wanted
On Fri, May 16, 2008 at 9:23 PM, Jim R. Wilson [EMAIL PROTECTED] wrote:
> Hi all, SQLAlchemy is a great project and a growing niche. As it becomes even more popular, there will be increasing demand for experts in the field. I am compiling a contact list of SQLAlchemy experts who may be interested in opportunities under the right circumstances. I am not a recruiter - I'm a regular developer who sometimes gets asked for referrals when I'm not personally available. If you'd like to be on my shortlist of go-to experts, please contact me off-list at: [EMAIL PROTECTED]

Wouldn't it be useful to have such a list publicly available (such as on the Wiki)?

--
Gaëtan de Menten
http://openhex.org
[sqlalchemy] Re: SQLAlchemy experts wanted
On Monday 19 May 2008 16:09:14 Gaetan de Menten wrote:
> Wouldn't it be useful to have such a list publicly available (such as on the Wiki)?

why not... Mercenaries (:-)
[sqlalchemy] update_or_insert
Hey, I'm using the SQL generation libs, and I'm wondering where the update_or_insert functionality is. I know at least MySQL and SQLite have this functionality. Basically, I would like to be able to insert a new row without having to check whether it exists first.

Thanks
David
[sqlalchemy] Re: MSSQL dislikes schema= on table.create()
Rick Morrison wrote:
> I won't have a chance to look at this under pyodbc until tomorrow. In the meantime, if you could try with pymssql to see if you get the same error, that will help in debugging this.

What is the recommended module to use on win32?

Yes, pymssql does help: I had to create the schema (wtfrak was I thinking...). Meanwhile, the stack trace isn't corrupted when using pymssql:

---8<---
File "c:\python24\lib\site-packages\sqlalchemy-0.4.5-py2.4.egg\sqlalchemy\engine\base.py", line 942, in _handle_dbapi_exception
    raise exceptions.DBAPIError.instance(statement, parameters, e, connection_invalidated=is_disconnect)
sqlalchemy.exceptions.DatabaseError: (DatabaseError) internal error: SQL Server message 2760, severity 16, state 1, line 2: The specified schema name "queer" either does not exist or you do not have permission to use it.
---8<---

Side note: I wish ``easy_install pymssql*.exe`` had put ntwdblib.dll inside the .egg directory; that was a false start. Also, pymssql+SA doesn't support SSPI authentication using the syntax presented earlier: mssql://:@host/db

> Luis Bruno wrote:
>> File "c:\python24\lib\site-packages\sqlalchemy-0.4.5-py2.4.egg\sqlalchemy\databases\mssql.py", line 499, in do_execute
>>     cursor.execute("SET IDENTITY_INSERT %s OFF" % self.identifier_preparer.format_table(context.compiled.statement.table))
>> SystemError: 'finally' pops bad exception
>
> That error would be thrown by an insert, not a table create, and I believe there are other users using pyodbc with schema-specified tables without problems.

It happens on metadata.create_all(). Also, I've verified that "'finally' pops bad exception" happens on other exceptions, like failure to connect (this using pyodbc).
[sqlalchemy] Re: MSSQL dislikes schema= on table.create()
Hi,

> The specified schema name "queer" either does not exist or you do not have permission to use it.

This isn't an SQLAlchemy or DBAPI issue - you just need to create the schema with the correct permissions.

Paul
[sqlalchemy] Re: MSSQL dislikes schema= on table.create()
Paul Johnston wrote:
> This isn't an SQLAlchemy or DBAPI issue - you just need to create the schema with the correct permissions.

True; I had already solved the problem after I got this (much better) error message by using pymssql. The first traceback:

* happened in a SET IDENTITY ... instruction, not on a CREATE TABLE
* had a broken exception message

So now I'm of two minds about which module to use and whether I should use a schema or not for these purposes.
[sqlalchemy] Re: MSSQL dislikes schema= on table.create()
Hi,

> So now I'm of two minds about which module to use and whether I should use a schema or not for these purposes.

I'm using PyODBC and schemas, and things work just fine for me. OK, some error messages aren't quite there, but it works well enough for me.

Paul
[sqlalchemy] Re: update_or_insert
You can use straight text or a text() construct to issue a MySQL REPLACE statement, since only MySQL supports that operation. There's also a recipe floating around somewhere which uses text() but automates the generation of the statement based on the columns present; you might search the mailing list for that, though it's quite easy to write this.

On May 19, 2008, at 1:58 AM, David Novakovic wrote:
> Hey, I'm using the SQL generation libs, and I'm wondering where the update_or_insert functionality is. I know at least MySQL and SQLite have this functionality. Basically, I would like to be able to insert a new row without having to check whether it exists first.
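A minimal sketch of the "automate the statement from the columns present" recipe mentioned above; the helper name and the table/column names here are invented for illustration. The resulting string would be wrapped in SQLAlchemy's text() and executed with named parameters:

```python
# Hypothetical helper: build a MySQL REPLACE statement with named bind
# parameters from a list of column names.
def make_replace(table_name, columns):
    """Return a REPLACE INTO statement string for the given columns."""
    col_list = ", ".join(columns)
    bind_list = ", ".join(":" + c for c in columns)
    return "REPLACE INTO %s (%s) VALUES (%s)" % (table_name, col_list, bind_list)

stmt = make_replace("foo", ["id", "name"])
print(stmt)  # REPLACE INTO foo (id, name) VALUES (:id, :name)

# Usage with SQLAlchemy would then look roughly like:
#   conn.execute(text(stmt), id=1, name="x")
```

REPLACE deletes and re-inserts on key collision, which is semantically different from a true upsert; that is part of why it stays MySQL-only text rather than a generated construct.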
[sqlalchemy] Re: transactional sessions not transactional?
Yes, this is kind of an unfortunate default that we've had throughout the 0.4 series, which we won't have in 0.5. To get the behavior you want, use this:

engine = create_engine('postgres://...', pool_threadlocal=False)

Alternatively, bind each session to a non-contextual connection, using Session(bind=engine.connect()), though that's a little more effort.

On May 18, 2008, at 9:52 PM, Matthew Dennis wrote:

#!/usr/bin/python
from sqlalchemy.sql import text
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine('postgres://[EMAIL PROTECTED]/testsatransaction')
new_session = sessionmaker(bind=engine, transactional=True)
engine.execute(text("drop table if exists foo"))
engine.execute(text("create table foo(c1 int)"))
s0 = new_session()
s1 = new_session()
s0.execute(text("insert into foo values(1)"))
(one,) = s0.execute(text("select * from foo")).fetchone()
assert one == 1
print engine.execute(text("select * from foo")).fetchone()
print s1.execute(text("select * from foo")).fetchone()
s0.rollback()
print engine.execute(text("select * from foo")).fetchone()
print s1.execute(text("select * from foo")).fetchone()
[sqlalchemy] Re: MSSQL dislikes schema= on table.create()
> So now I'm of two minds about which module to use and whether I should use a schema or not for these purposes.

There are a few arcane limitations when using the pymssql module; pyodbc will be better supported going into the future. As for using a schema vs. other namespace tricks, that's up to you. I would stick with schemas - SA includes good schema support.

> had a broken exception message

Does anyone know what the story is with pyodbc and mangled tracebacks? That sounds kind of disturbing, like a corrupted stack or otherwise scrambled frame data inside the Python interpreter.
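A small sketch of the schema support mentioned above, assuming a reasonably current SQLAlchemy (the core Table API has been stable for a long time): Table accepts a schema= keyword, and generated SQL/DDL then qualifies the table name with it. The table and schema names here are placeholders.

```python
from sqlalchemy import Table, Column, Integer, MetaData

metadata = MetaData()

# schema= qualifies the table; on MSSQL this would typically be e.g. "dbo".
# Omitting it means the connection's default schema is used implicitly.
mytable = Table(
    "mytable", metadata,
    Column("id", Integer, primary_key=True),
    schema="myschema",
)

print(mytable.fullname)  # myschema.mytable
```

metadata.create_all(engine) would then emit CREATE TABLE myschema.mytable (...), provided the schema already exists on the server with suitable permissions, which was the actual failure in this thread.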
[sqlalchemy] Re: MSSQL dislikes schema= on table.create()
Rick Morrison wrote:
> There are a few arcane limitations when using the pymssql module; pyodbc will be better supported going into the future. As for using a schema vs. other namespace tricks, that's up to you. I would stick with schemas - SA includes good schema support.

Got it!

> Does anyone know what the story is with pyodbc and mangled tracebacks? That sounds kind of disturbing, like a corrupted stack or otherwise scrambled frame data inside the Python interpreter.

To be honest, I was hoping that someone would whack me over the head and tell me it's a local problem... So you've seen the same problem, yes? I'll try to install everything in a clean VM, but not today.
[sqlalchemy] Re: Is inserting new data with column_mapped_collection inconsistent?
Allen Bierbaum wrote:
> On Fri, May 16, 2008 at 4:54 PM, jason kirtland [EMAIL PROTECTED] wrote: [..]
>
> Anyway, I think this is a bit non-intuitive. What I propose instead is that SA could automatically set the 'keyword' attribute of the Note object as part of the process of assigning it to the mapped collection dictionary. This way the insert could look more like:
>
>     item.notes['not-color'] = Note(value='blue')
>
> and behind the scenes SA would set the new Note's keyword to 'not-color'.
>
> Any thoughts on this? Has anyone tried this in the past?

MappedCollection doesn't currently have a mismatch guard on __setitem__ (d[key] = val) or setdefault(), but easily could. There *is* a guard protecting against item.notes = {'not-color': Note('color', 'blue')}, so that machinery is available, and applying it to the other setters is straightforward. Automatically setting the value for the attribute_ and column_mapped dict collections would be pretty convenient and DRY. This is a great time to integrate that feature, if you want to try your hand at putting together a patch and tests. If it's not too disruptive to existing users, it could slide right in as a new feature of 0.5.

> I would be more than happy to look into this (I already have), but I think my skills aren't quite up to the challenge. Could you point me in the general direction?

You might start by looking at _convert here:

http://www.sqlalchemy.org/trac/browser/sqlalchemy/trunk/lib/sqlalchemy/orm/collections.py#L1402

It could be the case that that logic can be combined with the proposed value-setting logic and used for @converter, __setitem__, etc. The basic MappedCollection would probably have a default implementation that does no attribute setting, there being no reasonable way to intuit a reverse operation given only an arbitrary keying-function lambda. The attribute_ and column_mapped_ front ends would set up their own implementations of the function that do key checking plus attribute-setting behavior.
> On a related note, I think it would be good to make this behavior come through a user-customizable callback that takes the index value and the newly assigned item as arguments. This would allow users to add more automatic behavior as needed. For example, my current relationship is actually like this:
>
>     'input_vars': relation(Var,
>         primaryjoin=and_(script_table.c.id == var_table.c.script_id,
>                          var_table.c.input_output_type == 0),
>         collection_class=column_mapped_collection(var_table.c.name)),
>
> So I would want to not only set the name automatically based on the key, but also set the input_output_type to 0 in this case. Something like this would be good:
>
>     def input_cb(key, item):
>         item.name = key
>         item.input_output_type = 0

If the setup I described above works out, this kind of thing could be had pretty much for free.
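The proposed behavior can be illustrated with a plain-Python sketch (this is not the real MappedCollection API, just the idea): assigning d[key] = obj also runs a user-supplied callback that stamps the key, and any extra attributes, onto the object. The Var class and input_cb mirror the example above.

```python
# Pure-Python sketch of a dict collection that stamps attributes on assignment.
class KeyedDict(dict):
    def __init__(self, on_set):
        dict.__init__(self)
        self.on_set = on_set  # callback(key, item), like input_cb above

    def __setitem__(self, key, value):
        self.on_set(key, value)       # stamp attributes before storing
        dict.__setitem__(self, key, value)

class Var(object):
    def __init__(self):
        self.name = None
        self.input_output_type = None

def input_cb(key, item):
    item.name = key
    item.input_output_type = 0

input_vars = KeyedDict(input_cb)
v = Var()
input_vars["x"] = v
print(v.name, v.input_output_type)  # x 0
```

In the real feature, the attribute_ and column_mapped_ front ends would install a default callback derived from the keying attribute/column, with a user override point much like this one.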
[sqlalchemy] Custom caching
Hi, we have a DB app that uses SQLAlchemy and we'd like to add some custom DB caching logic. Unfortunately, there are no simple choke points that everything flows through where we could add this caching other than SQLAlchemy. We'd like to do things such as, say, invalidate the cache if tables X, Y and Z are updated or inserted, or batch inserts to tables X and Y and only write them out every 10 minutes. I noticed that Sessions have an interface for SessionExtensions that would let me look at pending instances associated with the session at commit time. However, it looks like this would only capture updates made through the ORM APIs? Unfortunately, we have people doing (relatively simple) direct SQL execution through Session.execute() as well. Any ideas, other than rewriting everything to use a proper API above SQLAlchemy to do the caching?
[sqlalchemy] Re: Custom caching
If you're looking to work at the statement-interception level, we do have an API for that, called ProxyConnection, available in 0.5 under lib/sqlalchemy/interfaces.py. It seems like this would be required, since you'd like to make decisions based on direct SQL execution. I'm not sure how you'd do things like "only write tables out every 10 minutes" riding underneath the ORM session, though, if you mean to be intercepting its attempts to write rows and then delaying that. That sounds like something that would ride on *top* of the ORM.

On May 19, 2008, at 2:15 PM, TP wrote:
> Hi, we have a DB app that uses SQLAlchemy and we'd like to add some custom DB caching logic. [...]
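The invalidate-on-write half of the request can be sketched generically. This is not the actual ProxyConnection API (see lib/sqlalchemy/interfaces.py for that); it is a plain wrapper around an execute callable, with deliberately crude token-based table sniffing, just to show where the interception hook sits:

```python
# Sketch: wrap an execute() callable and clear a cache whenever a write
# statement mentions one of the watched tables. Real code would parse SQL
# properly instead of splitting on whitespace.
class InvalidatingExecutor(object):
    def __init__(self, execute_fn, cache, watched_tables):
        self._execute = execute_fn
        self.cache = cache
        self.watched = set(t.lower() for t in watched_tables)

    def execute(self, statement, *params):
        tokens = statement.lower().replace(",", " ").split()
        if tokens and tokens[0] in ("insert", "update", "delete"):
            if self.watched & set(tokens):
                self.cache.clear()  # a write touched a watched table
        return self._execute(statement, *params)

cache = {"key": "cached value"}
executor = InvalidatingExecutor(lambda stmt, *p: None, cache, ["x", "y"])
executor.execute("select * from x")           # reads leave the cache alone
executor.execute("insert into x values (1)")  # writes to x invalidate it
print(cache)  # {}
```

With the real ProxyConnection, the same check would go in its execute/cursor_execute hooks; the 10-minute write batching, as noted above, belongs above the ORM rather than underneath it.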
[sqlalchemy] Re: Custom caching
On May 19, 3:35 pm, Michael Bayer [EMAIL PROTECTED] wrote:
> If you're looking to work at the statement-interception level, we do have an API for that, called ProxyConnection, available in 0.5 under lib/sqlalchemy/interfaces.py. It seems like this would be required, since you'd like to make decisions based on direct SQL execution.

Thanks, I'll check that out.

> I'm not sure how you'd do things like "only write tables out every 10 minutes" riding underneath the ORM session, though, if you mean to be intercepting its attempts to write rows and then delaying that. That sounds like something that would ride on *top* of the ORM.

Yeah, we can do it on top, but for various reasons I was hoping there was some sort of interface that would let me interpose code before inserts were executed; then I could grab the insert and queue it for later rather than have it execute now. It sounds like ProxyConnection may let me do this.
[sqlalchemy] Re: Custom caching
On May 19, 2008, at 3:40 PM, TP wrote:
> Yeah, we can do it on top, but for various reasons I was hoping there was some sort of interface that would let me interpose code before inserts were executed; then I could grab the insert and queue it for later rather than have it execute now. It sounds like ProxyConnection may let me do this.

When the Session goes to insert/update/delete a row, that's not a passive operation; it gets back data in all cases - either the newly generated primary key, or the count of rows affected. It definitely needs the PK value in order to place things in its identity map, so I'm not sure how you'd defer that for later. It also has a transactional model that would totally go out the window if the things it were issuing weren't actually happening.
[sqlalchemy] SqlAlchemy and Stored Procedures with variables
Hi, we're trying to use SQLAlchemy with MS SQL Server 2000 using stored procedures. We connect to the server doing something like this:

engine = sqlalchemy.create_engine('mssql://user:[EMAIL PROTECTED]/db')
cur = engine.connect()
print cur.execute("execute stored_proc @query='gra%';").fetchall()

This gives us the following traceback:

Traceback (most recent call last):
  File "\\ubuntuwww\mcisroot\opt\Plone-2.5\sqltest.py", line 7, in <module>
    print cur.execute("execute sp_EDEN_Vendors (@query='gra%');").fetchall()
  File "c:\python25\lib\site-packages\sqlalchemy-0.4.5-py2.5.egg\sqlalchemy\engine\base.py", line 844, in execute
    return Connection.executors[c](self, object, multiparams, params)
  File "c:\python25\lib\site-packages\sqlalchemy-0.4.5-py2.5.egg\sqlalchemy\engine\base.py", line 854, in _execute_text
    self.__execute_raw(context)
  File "c:\python25\lib\site-packages\sqlalchemy-0.4.5-py2.5.egg\sqlalchemy\engine\base.py", line 916, in __execute_raw
    self._cursor_execute(context.cursor, context.statement, context.parameters[0], context=context)
  File "c:\python25\lib\site-packages\sqlalchemy-0.4.5-py2.5.egg\sqlalchemy\engine\base.py", line 960, in _cursor_execute
    self._handle_dbapi_exception(e, statement, parameters, cursor)
  File "c:\python25\lib\site-packages\sqlalchemy-0.4.5-py2.5.egg\sqlalchemy\engine\base.py", line 942, in _handle_dbapi_exception
    raise exceptions.DBAPIError.instance(statement, parameters, e, connection_invalidated=is_disconnect)
DatabaseError: (DatabaseError) internal error: None "execute sp_EDEN_Vendors (@query='gra%');" {}

If we take out the @query='gra%' part, it runs fine. However, we need to be able to pass arguments to the stored procedure. How do we do this?

Thanks!
Mike
[sqlalchemy] Re: SqlAlchemy and Stored Procedures with variables
Does the same statement work in an interactive query window, complete with the embedded semicolon you're using? Also, you should be able to use positional parameters instead of named parameters in your call:

cur.execute("execute stored_proc 'gra%'")

Note that as of yet there is no SQLAlchemy support for OUT or IN/OUT parameters; currently you can return a single set of results via a SELECT in the stored procedure.

Rick
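Beyond inlining the literal as above, the value can also be passed as a bound DBAPI parameter so quoting/escaping is handled for you. The placeholder style is an assumption that depends on the driver (pyodbc uses qmark ``?``, pymssql uses format ``%s``); this tiny helper just builds the EXECUTE string for a given parameter count:

```python
# Hypothetical helper: build "execute <proc> ?, ?, ..." for positional
# DBAPI parameters. paramstyle is the driver's style: "qmark" or "format".
def exec_proc_stmt(proc_name, n_params, paramstyle="qmark"):
    marker = "?" if paramstyle == "qmark" else "%s"
    return ("execute %s %s" % (proc_name, ", ".join([marker] * n_params))).rstrip()

print(exec_proc_stmt("stored_proc", 1))  # execute stored_proc ?

# Usage sketch (driver-dependent, not verified against this setup):
#   cur.execute(exec_proc_stmt("stored_proc", 1), ['gra%'])
```

This sidesteps the literal '%' in the statement text as well, which some drivers otherwise misread as a format placeholder.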
[sqlalchemy] Re: SqlAlchemy and Stored Procedures with variables
On May 19, 2008, at 5:30 PM, Rick Morrison wrote:
> Note that as of yet there is no SQLAlchemy support for OUT or IN/OUT parameters; currently you can return a single set of results via a SELECT in the stored procedure.

There's OUT param support for Oracle - the MS-SQL dialect could follow the same approach, provided pyodbc allows it.
[sqlalchemy] (scoped_session.)commit() not doing anything
Hi, this commit()-not-doing-anything problem is driving me crazy :( All my other commits in the same code base are working perfectly; it's just this one. My Session is a scoped_session:

Session = scoped_session(sessionmaker(autoflush=True, transactional=True, ...))

A debug session goes:

(Pdb) list
277     import pdb; pdb.set_trace()
278  -> item_property = itemproperty_q.filter_by(
279         a_itemid=a_itemid, b_itemid=b_itemid,
280         propertyid=propertyid,
281         userid=userid).first()
282     item_property.fav_itemid = fav_itemid
283     model.Session.commit()
(Pdb) n
/item.py(279)compare()
-> a_itemid=a_itemid, b_itemid=b_itemid,
(Pdb) n
/item.py(280)compare()
-> propertyid=propertyid,
(Pdb) n
/item.py(281)compare()
-> userid=userid).first()
(Pdb) n

So far so good; this generates lots of output from the select statement. The next statement sets item_property.fav_itemid to 10002 from None:

/item.py(282)compare()
-> item_property.fav_itemid = fav_itemid
(Pdb) item_property.fav_itemid
(Pdb) fav_itemid
10002

But then the next Session.commit() statement does _nothing_:

(Pdb) n
/item.py(283)compare()
-> model.Session.commit()
(Pdb) item_property.fav_itemid
10002
(Pdb) n
/item.py(254)compare()

I spent much time ripping the mapped table into an IPython session and saw it working fine:

In [82]: item_property.fav_itemid = 10002
In [83]: Session.commit()
...executes the update statement

What could have gone wrong? _Many_ thanks in advance!

Jerry

To post to this group, send email to sqlalchemy@googlegroups.com. To unsubscribe from this group, send email to [EMAIL PROTECTED]. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en
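One way to narrow down a commit that "does nothing" is to check whether the session actually sees the attribute change, via Session.dirty, before committing. The model, table, and values below are invented stand-ins for the thread's item_property, and the API shown assumes a reasonably current SQLAlchemy; it is a debugging sketch, not the original code.

```python
from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class ItemProperty(Base):
    __tablename__ = "item_property"
    id = Column(Integer, primary_key=True)
    fav_itemid = Column(Integer)

engine = create_engine("sqlite://")   # in-memory stand-in database
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

session = Session()
session.add(ItemProperty(id=1, fav_itemid=None))
session.commit()

obj = session.query(ItemProperty).filter_by(id=1).first()
obj.fav_itemid = 10002
# If this assertion fails, the session never registered the change --
# e.g. the object belongs to a different session than the one committing.
assert obj in session.dirty
session.commit()  # flushes the UPDATE
```

In the scoped_session case above, the usual culprit is exactly that mismatch: the query ran against one contextual session while model.Session.commit() committed another, so checking whether the loaded object is in the committing session's identity map/dirty set is the first thing to verify.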