[sqlalchemy] Re: save_or_update and composit Primary Keys... MSSQL / pyodbc issue ?
On 10 Dec, 03:11, Michael Bayer [EMAIL PROTECTED] wrote:
> I can't reproduce your problem, although I don't have access to MSSQL here and there may be some issue on that end. Attached is your script using an in-memory SQLite database, with the update inside of a while loop, and it updates regularly. A few things to try on the MSSQL side: if the issue is due to some typing issue, try not using autoload=True, try using generic types instead of the MSSQL-specific ones, etc., in an effort to narrow down what might be the problem.

I've redefined the table using only generic types:

    jobs = sa.Table('jobs', metadata,
        sa.Column('identifier', sa.VARCHAR(20), primary_key=True),
        sa.Column('section', sa.VARCHAR(20)),
        sa.Column('start', sa.DATETIME, primary_key=True),
        sa.Column('stop', sa.DATETIME),
        sa.Column('station', sa.VARCHAR(20)),
        autoload=False)

and autoload=False also made no difference. I'll try changing something else... Also, I've added MSSQL/pyodbc to the subject line here in case any of the MSSQL crew wants to try out your script with pyodbc.

Thanks.

You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To post to this group, send email to sqlalchemy@googlegroups.com. To unsubscribe from this group, send email to [EMAIL PROTECTED]. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en
[sqlalchemy] Re: Design: mapped objects everywhere?
Yowser. Thanks to both of you - that's exactly what I mean. Any pointers on where I can find an example of a class that is unaware of whether it is in the db? Or is there a good example of the second solution, of a single class that does the what-and-why, and an interchangeable layer/context that does loading/saving? I'm digging through dbcook.sf.net but haven't found anything just yet.

On 2007 Dec 7, at 22:07, [EMAIL PROTECTED] wrote:
> Paul Johnston wrote:
>>> A Sample may be created by the web application or fetched from the database. Later on, it may be disposed of, edited or checked back into the db.
>> Sounds like you want your app to be mostly unaware of whether a class is saved in the db or not (i.e. persistent)? If so, I'd use a single class, design the properties so they work in non-persistent mode, and then they'll work in persistent mode as well.
> or like a single class that does the what and why, and an interchangeable layer/context that does load/saving (and the relations!). in such situations declarative programming helps a lot, so u dont bind your self to (the) db (or whatever persistency). Check dbcook.sf.net. My own latest experience is about turning a project that was thought for db/using dbcook into non-db simple-file-based persistency. The change was relatively small, like 5-10 lines per class - as long as there are Collections etc similar notions so the Obj side of ORM looks the same.

-- 
Dr Paul-Michael Agapow: VieDigitale / Inst. for Animal Health
[EMAIL PROTECTED] / [EMAIL PROTECTED]
[sqlalchemy] Re: Design: mapped objects everywhere?
On Monday 10 December 2007 12:12:19 Paul-Michael Agapow wrote:
> Yowser. Thanks to both of you - that's exactly what I mean. Any pointers on where I can find an example of a class that is unaware if it is in the db? Or is there a good example of the second solution, of a single class that does the what and why, and an interchangeable layer/context that does load/saving? I'm digging through dbcook.sf.net but haven't found anything just yet.

well... good example - no. there is a bad example: dbcook/dbcook/usage/example/example1.py. The classes are plain classes (.Base can be anything/object), with some DB-related declarations/metainfo in them. they do not have to know that they are DB-related. if u dont give them to dbcook.builder.Builder, they will not become such. If u give them, they will become SA-instrumented etc, but u still do not have to change anything - as long as your methods do not rely (too much) on being (or not being) in the DB. see dbcook.expression as an attempt to wrap some queries in an independent manner.

more, if u redefine the Reflector u can have different syntax for db-metainfo - or get it from a different place, not at all inside the class. So u can plug that in and out whenever u decide to (no example on this, its theoretical ;-).

Still, the final class (or object) will always be aware of being in the db or not; it is _you_ who should know when u do not care (95%) and when u do (5%). All this is a proper-design and then self-discipline issue: u have to keep the things separate (and i tell u, it is NOT easy). if u start putting any db-stuff in the classes, no framework will help u. complete opaque separation is probably possible, but will probably mean having 2 parallel class hierarchies instead of one.

> On 2007 Dec 7, at 22:07, [EMAIL PROTECTED] wrote:
>> Paul Johnston wrote: A Sample may be created by the web application or fetched from the database. Later on, it may be disposed of, edited or checked back into the db.
>> Sounds like you want your app to be mostly unaware of whether a class is saved in the db or not (i.e. persistent)? If so, I'd use a single class, design the properties so they work in non-persistent mode, and then they'll work in persistent mode as well.
[sqlalchemy] SessionExtension and Transactions: how to coordinate all SessionExtension funcs
Foreword: sqlalchemy is really amazing!

Hello, I'm trying to build a database where users become aware of what has been changed by other users: there is a SessionExtension that collects info about changes and then dispatches some messages with a Pyro Event Server. I'm trying to understand what can be my link point between flush, commit and rollback operations, to store the single IDs into a per-transaction dictionary and then, when there is a commit on a non-nested transaction, gather them all and dispatch the message through Pyro.

I thought I could use id(session.transaction) as a dictionary key, but:
- it changes between before_flush and after_flush
- it remains the same between before_flush, after_commit and rollback

I need to gather IDs in after_commit because the ID is a serial Postgres value that is available only after flush(). The only correct link I found is in an obscure (to me) _SessionTransaction__parent, so I suspect this is not the correct way to reach my goal. Is there a cleaner way to do such a thing?

Thanks in advance; here is some code. In my SessionExtension I put:

    def after_rollback(self, session):
        print 'after_rollback', id(session.transaction)

    def after_commit(self, session):
        print 'after_commit', id(session.transaction)

    def before_flush(self, session, flush_context, objects):
        print 'before_flush', id(session.transaction)

    def after_flush(self, session, flush_context):
        print 'after_flush', id(session.transaction), id(session.transaction._SessionTransaction__parent)

In the main I made:

    ss.begin()
    rep = dict((x.codice, x) for x in mappers.Reparto.query.all())
    r = rep['3']
    r.descrizione += '!'
    ss.flush()
    if 1:
        ss.begin_nested()
        r = rep['2']
        r.descrizione += '!'
        ss.flush()
        ss.rollback()
    ss.commit()

The resulting dump was:

    before_flush 147722732
    after_flush 149425612 147722732
    before_flush 149423788
    after_flush 149489964 149423788
    after_rollback 149423788
    after_commit 147722732

I'm working with Python 2.5, SQA 0.4.1 with scoped_session (transactional=False, autoflush=False).

Thanks

-- 
Cordialmente
Stefano Bartaletti
Responsabile Software G.Tosi Spa Tintoria
Skype account: stefano.bartaletti / ICQ contact: 1271960
Viale dell'Industria 61, 21052 Busto Arsizio (VA)
Tel. +39 0331 34 48 11 / Fax +39 0331 35 21 23
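One way to get a stable per-transaction dictionary key is to walk the nested transactions up to their outermost parent and key everything off that. The sketch below is plain Python: FakeTransaction and its _parent attribute are hypothetical stand-ins for SQLAlchemy 0.4's SessionTransaction and its name-mangled private parent reference, used only to show the bookkeeping idea.

```python
# Stand-in for SQLAlchemy 0.4's SessionTransaction: the real object keeps
# its parent in a private attribute (reachable from outside the class only
# via the name-mangled _SessionTransaction__parent). FakeTransaction._parent
# is a hypothetical substitute for this demo.
class FakeTransaction(object):
    def __init__(self, parent=None):
        self._parent = parent

def root_transaction(txn):
    # Walk the parent chain so that nested (savepoint) transactions all
    # map to the same outermost transaction, giving one stable dict key
    # per top-level commit.
    while getattr(txn, '_parent', None) is not None:
        txn = txn._parent
    return txn

outer = FakeTransaction()
nested = FakeTransaction(parent=outer)

# IDs collected during either flush land in the same per-commit bucket.
pending_ids = {}
pending_ids.setdefault(id(root_transaction(nested)), set()).add(42)
pending_ids.setdefault(id(root_transaction(outer)), set()).add(43)
```

On a real session the same walk would start from session.transaction inside after_flush, and the bucket would be dispatched and discarded in after_commit.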
[sqlalchemy] Re: SessionExtension and Transactions: how to coordinate all SessionExtension funcs
> The only correct link I found is in an obscure (to me) _SessionTransaction__parent, so I suspect this is not the correct way to get to my goal. Is there a cleaner way to do such a thing?

Just one thing: I do know the _Class__attribute syntax for accessing hidden Python attributes; "obscure" refers to the way I'm trying to use it - is it good or not? :-)
[sqlalchemy] Re: SessionExtension and Transactions: how to coordinate all SessionExtension funcs
Stefano Bartaletti wrote:
> I need to gather IDs in after_commit because the ID is a serial Postgres value that is available only after flush()

Not really... in Postgres, you can ask to consume the next sequence value with SELECT NEXTVAL('sequence_name') and explicitly set that as the primary key value.
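The pattern Marco describes - consume the key first, then set it explicitly on insert - can be sketched with the standard library. On Postgres the first step would literally be SELECT NEXTVAL('jobs_id_seq'); SQLite has no sequences, so a one-row counter table (the jobs_seq name is invented for the demo) stands in:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs_seq (value INTEGER)")
conn.execute("INSERT INTO jobs_seq VALUES (0)")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, name TEXT)")

def next_job_id(conn):
    # On Postgres this would simply be:  SELECT NEXTVAL('jobs_id_seq')
    # SQLite has no sequences, so bump a one-row counter table instead.
    conn.execute("UPDATE jobs_seq SET value = value + 1")
    return conn.execute("SELECT value FROM jobs_seq").fetchone()[0]

# Consume the key *before* inserting: the application now knows the
# primary key without waiting for a flush()/commit() round trip.
new_id = next_job_id(conn)
conn.execute("INSERT INTO jobs (id, name) VALUES (?, ?)", (new_id, "report"))
```

With the key known up front, the per-transaction ID bookkeeping no longer has to wait for after_flush.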
[sqlalchemy] Re: SessionExtension and Transactions: how to coordinate all SessionExtension funcs
On Monday 10 December 2007, Marco Mariani wrote:
> Stefano Bartaletti wrote:
>> I need to gather IDs in after_commit because the ID is a serial Postgres value that is available only after flush()
> Not really... in Postgres, you can ask to consume the next sequence value with SELECT NEXTVAL('sequence_name') and explicitly set that as the primary key value.

Thanks, I had already thought of this solution, but if I need to do the same for other attributes that are not valued until flush, I'm in trouble again.

-- 
Cordialmente
Stefano Bartaletti
[sqlalchemy] Re: Possible to build a query object from a relation property?
Thanks. This looks like it should work. I will give it a try.

-Allen

On Dec 9, 2007 10:39 PM, Michael Bayer [EMAIL PROTECTED] wrote:
> On Dec 9, 2007, at 10:55 PM, Allen Bierbaum wrote:
>> I am using SA 0.3.11 and I would like to know if there is a way to get a query object from a relation property. I have several one-to-many relationships in my application. These are all set up and work very well, but I find that I often want to perform further filtering of the objects in the relationship list property. I could write python code to do it, but if I could get SA to do it on the server, then all the better.
>
> it is the dynamic relation that you want, but for 0.3 you can write your own read-only property via:
>
>     class MyClass(object):
>         def _get_prop(self):
>             return object_session(self).query(ChildClass).with_parent(self, 'attributename')
>         attributename = property(_get_prop)
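The idiom in Michael's snippet - a read-only property that recomputes its value from the authoritative source on every access - can be illustrated without a database at all. Here the with_parent() query is replaced by a plain list filter; Parent, Child, big_children and the size > 10 predicate are all made up for the demo:

```python
class Child(object):
    def __init__(self, size):
        self.size = size

class Parent(object):
    def __init__(self, children):
        self.children = children

    def _get_big_children(self):
        # In the SA version this body would issue a fresh query, e.g.
        # object_session(self).query(Child).with_parent(self, ...).filter(...);
        # a plain list comprehension stands in for it here.
        return [c for c in self.children if c.size > 10]

    big_children = property(_get_big_children)

p = Parent([Child(5), Child(15), Child(25)])
sizes = [c.size for c in p.big_children]  # recomputed on every access
```

Because the property re-runs its body each time, it always reflects the current contents of the relationship, which is exactly what makes the query-backed version attractive over caching a filtered list.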
[sqlalchemy] Question about using python expressions to create a GIS query
I am trying to figure out how best to use SA to create a GIS query. In my application I am actually using ORM objects and mappers, but to keep my question focused on clauses and python expressions, I am just trying to test this out without the ORM first. The SQL query I would like to generate is this:

    select AsText(the_geom), * from pt
    where SetSRID('BOX3D(-95.0 28.5, -95.8 28.8)'::box3d, 4326) && the_geom
      and contains(SetSRID('BOX3D(-95.0 28.5, -95.8 28.8)'::box3d, 4326), the_geom)
    limit 100;

So far the best I have been able to come up with is this:

    pt.select(
        sa.and_(
            pt.c.pos.op('&&')(func.SetSRID(sa.text("'BOX3D(-95.0 28.5, -95.8 28.8)'::box3d"), 4326)),
            func.contains(func.SetSRID(sa.text("'BOX3D(-95 28.5, -95.8 28.8)'::box3d"), 4326), pt.c.pos)
        )
    )

Not the most readable way to represent it, but it seems to work. I have a couple of questions though:

- I use the SetSRID('BOX3D(...)'::box3d, 4326) construct twice. Is there a way to split this out into something I can just reuse?
- Is there any way to write an extension operator or something that could generate this for me? If I had my way, the query would look like this:

    pt.select(smart_contains(((-95, 28.5), (-95.8, 28.82)), 4326, pt.c.pos))

- Can anyone point out a better way I could construct this query? Is there anything I am missing?

Thanks, Allen
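On the reuse question: SQLAlchemy expressions are ordinary Python objects, so assigning the SetSRID(...) construct to a variable (or returning it from a small factory function) lets both clauses share one definition. The same factoring can be shown at the raw-SQL level with only the standard library; box3d_srid is a hypothetical helper name, not a PostGIS or SA function:

```python
def box3d_srid(xmin, ymin, xmax, ymax, srid=4326):
    # Build the repeated PostGIS fragment exactly once so both the
    # bounding-box (&&) clause and the contains() clause can share it.
    return "SetSRID('BOX3D(%s %s, %s %s)'::box3d, %s)" % (
        xmin, ymin, xmax, ymax, srid)

box = box3d_srid(-95.0, 28.5, -95.8, 28.8)
query = (
    "SELECT AsText(the_geom), * FROM pt "
    "WHERE %s && the_geom AND contains(%s, the_geom) LIMIT 100" % (box, box))
```

The SA-expression version is the same idea one level up: bind the func.SetSRID(...) clause to a name and pass that name into both the op('&&') call and func.contains().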
[sqlalchemy] Re: Matching a DateTime-field
On Dec 10, 1:16 am, Rick Morrison [EMAIL PROTECTED] wrote:
> Any query using sql expressions is going to want to use correctly typed data -- you're trying to query a date column with a string value. The LIKE operator is for string data. I'm not up on my mssql date expressions, but the answer is going to resemble something like this:
>
>     .filter(and_(func.datepart('year', List.expire) == 2007,
>                  func.datepart('month', List.expire) == the_month_number))

Ah yes, I had no idea how to match the dates the way you presented. Many thanks!

br Adam
[sqlalchemy] Re: Slow relation based assignment.
hey mike,

Just to confirm - trunk fixes the problem with deletion. Additionally, I have removed the lazy loading condition and it maintains the speed of the query.

Thanks again to the team,
Martin

On Dec 7, 4:14 pm, Michael Bayer [EMAIL PROTECTED] wrote:
> hey martin - this bug is fixed in trunk r3868, so if you use the svn trunk you can either keep using the dynamic or go back to the regular relation, you should be good in both cases. - mike
[sqlalchemy] Re: Matching a DateTime-field
On Dec 10, 1:16 am, Rick Morrison [EMAIL PROTECTED] wrote:
> Any query using sql expressions is going to want to use correctly typed data -- you're trying to query a date column with a string value. The LIKE operator is for string data. I'm not up on my mssql date expressions, but the answer is going to resemble something like this:
>
>     .filter(and_(func.datepart('year', List.expire) == 2007,
>                  func.datepart('month', List.expire) == the_month_number))

Ok, isn't this mssql-specific? I only find datepart in various VB / .NET documentation/solutions.
[sqlalchemy] Re: Matching a DateTime-field
Yeah, it was a "for instance" answer; you'll need to use the correct MySql syntax, of course.

On 12/10/07, Adam B [EMAIL PROTECTED] wrote:
> Ok, isn't this mssql-specific? I only find datepart in various VB / .NET documentation/solutions.
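For what it's worth, the "date part" idea is portable even though the spelling isn't: MSSQL has DATEPART(), PostgreSQL and MySQL support the SQL-standard EXTRACT(), and SQLite spells it strftime(). A standard-library sketch of the same year/month filter against SQLite (the list_items table and sample rows are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE list_items (expire TEXT)")
conn.executemany("INSERT INTO list_items VALUES (?)", [
    ("2007-12-01 10:00:00",),
    ("2007-11-15 09:30:00",),   # wrong month - filtered out
    ("2006-12-25 00:00:00",),   # wrong year  - filtered out
])

# Same filter as func.datepart('year', ...) == 2007 on MSSQL, or
# EXTRACT(YEAR FROM expire) = 2007 on PostgreSQL/MySQL.
rows = conn.execute(
    "SELECT expire FROM list_items "
    "WHERE CAST(strftime('%Y', expire) AS INTEGER) = 2007 "
    "AND CAST(strftime('%m', expire) AS INTEGER) = 12").fetchall()
```

Whatever the backend, the key point from Rick's answer holds: compare the date column with typed date expressions, not with LIKE on strings.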
[sqlalchemy] Re: save_or_update and composit Primary Keys... MSSQL / pyodbc issue ?
This works here on MSSQL/pymssql with a small change:

    -- j = Job('TEST1', datetime.datetime.now())
    ++ j = Job(1, datetime.datetime.now())

MSSQL (and most other db engines) are going to enforce the type of the 'identifier' column. In the new code, it's an int, so... no strings allowed. The original example uses uniqueidentifier, which is a rather odd duck, and I'm not sure it would support an arbitrary string as a key. Unless you need real GUID keys for some reason, I would suggest using a normal string or int surrogate key like the new example does.
[sqlalchemy] Re: From arbitrary SELECT to Query
On Dec 7, 2007, at 2:39 PM, Artur Siekielski wrote:
> The problem is that I get a normal Python list, which eats much resources when the database is big. Much better would be a Query object which supports lazy loading. Note that I cannot use Query.filter(compoundSelect._whereclause) because CompoundSelect doesn't have _whereclause.

I'd just point out also that we haven't decided against the Query object yielding results as they're received. I pointed out earlier in this thread that it's complicated, but this is something we might finally try to tackle soon.
[sqlalchemy] Re: Lazy ID Fetching/generation
Rick Morrison wrote:
> Wouldn't a flavor of .save() that always flush()'ed work for this case? Say, Session.persist(obj), which would then chase down the relational references and persist the object graph of that object... and then add the now-persisted object to the identity map. ...something like a 'mini-flush'.

Almost, except I would want it to flush only if I tried to access a db-generated attribute. The normal lazy behavior otherwise makes perfect sense to me.

-Adam Batkin
[sqlalchemy] Re: Lazy ID Fetching/generation
> But another thing is that the whole idea of save/update/save-or-update, which we obviously got from Hibernate, is something I've been considering ditching, in favor of something more oriented towards a container, like add(). since i think even hibernate's original idea of save/update has proven to be naive (for example, this is why they had to implement saveOrUpdate()). we like to keep things explicit as much as possible since that's a central philosophical tenet of Python.

Hmm, that sounds interesting. Would it have similar flush() semantics to .save(), or would it be a kind of auto-flush thing? The issues with any implicit kind of flush() are tricky. Maybe not so much for the instance being .add()ed or .save()ed - those are usually somewhat straightforward. The tricky parts are the related instances: would relation()-based instances also be auto-flush()ed, etc.
[sqlalchemy] Re: save_or_update and composit Primary Keys... MSSQL / pyodbc issue ?
> I did not get any exception... doh! :)

What kind of exception did you get? The traceback I get is below. If you're not getting one, it may be a pyodbc issue, which I don't have installed right now.

/me faces toward UK, where it's about midnight right now...
/me yells HEY PAUL!! YOU WATCHING THIS THREAD??

    Traceback (most recent call last):
      File "test.py", line 31, in ?
        sa_session.commit()
      File "/usr/lib/python2.4/site-packages/SQLAlchemy-0.4.2dev_r3844-py2.4.egg/sqlalchemy/orm/session.py", line 484, in commit
        self.transaction = self.transaction.commit()
      File "/usr/lib/python2.4/site-packages/SQLAlchemy-0.4.2dev_r3844-py2.4.egg/sqlalchemy/orm/session.py", line 211, in commit
        self.session.flush()
      File "/usr/lib/python2.4/site-packages/SQLAlchemy-0.4.2dev_r3844-py2.4.egg/sqlalchemy/orm/session.py", line 684, in flush
        self.uow.flush(self, objects)
      File "/usr/lib/python2.4/site-packages/SQLAlchemy-0.4.2dev_r3844-py2.4.egg/sqlalchemy/orm/unitofwork.py", line 207, in flush
        flush_context.execute()
      File "/usr/lib/python2.4/site-packages/SQLAlchemy-0.4.2dev_r3844-py2.4.egg/sqlalchemy/orm/unitofwork.py", line 434, in execute
        UOWExecutor().execute(self, head)
      File "/usr/lib/python2.4/site-packages/SQLAlchemy-0.4.2dev_r3844-py2.4.egg/sqlalchemy/orm/unitofwork.py", line 1053, in execute
        self.execute_save_steps(trans, task)
      File "/usr/lib/python2.4/site-packages/SQLAlchemy-0.4.2dev_r3844-py2.4.egg/sqlalchemy/orm/unitofwork.py", line 1067, in execute_save_steps
        self.save_objects(trans, task)
      File "/usr/lib/python2.4/site-packages/SQLAlchemy-0.4.2dev_r3844-py2.4.egg/sqlalchemy/orm/unitofwork.py", line 1058, in save_objects
        task.mapper.save_obj(task.polymorphic_tosave_objects, trans)
      File "/usr/lib/python2.4/site-packages/SQLAlchemy-0.4.2dev_r3844-py2.4.egg/sqlalchemy/orm/mapper.py", line 1129, in save_obj
        c = connection.execute(statement.values(value_params), params)
      File "/usr/lib/python2.4/site-packages/SQLAlchemy-0.4.2dev_r3844-py2.4.egg/sqlalchemy/engine/base.py", line 796, in execute
        return Connection.executors[c](self, object, multiparams, params)
      File "/usr/lib/python2.4/site-packages/SQLAlchemy-0.4.2dev_r3844-py2.4.egg/sqlalchemy/engine/base.py", line 847, in execute_clauseelement
        return self._execute_compiled(elem.compile(dialect=self.dialect, column_keys=keys, inline=len(params) > 1), distilled_params=params)
      File "/usr/lib/python2.4/site-packages/SQLAlchemy-0.4.2dev_r3844-py2.4.egg/sqlalchemy/engine/base.py", line 859, in _execute_compiled
        self.__execute_raw(context)
      File "/usr/lib/python2.4/site-packages/SQLAlchemy-0.4.2dev_r3844-py2.4.egg/sqlalchemy/engine/base.py", line 871, in __execute_raw
        self._cursor_execute(context.cursor, context.statement, context.parameters[0], context=context)
      File "/usr/lib/python2.4/site-packages/SQLAlchemy-0.4.2dev_r3844-py2.4.egg/sqlalchemy/engine/base.py", line 887, in _cursor_execute
        raise exceptions.DBAPIError.instance(statement, parameters, e)
    sqlalchemy.exceptions.DatabaseError: (DatabaseError) internal error: SQL Server message 245, severity 16, state 1, line 1: Conversion failed when converting the varchar value 'TEST1' to data type int. DB-Lib error message 20018, severity 5: General SQL Server error: Check messages from the SQL Server.
    'INSERT INTO jobs (identifier, section, start, stop, station) VALUES (%(identifier)s, %(section)s, %(start)s, %(stop)s, %(station)s)'
    {'start': datetime.datetime(2007, 12, 10, 18, 15, 23, 170889), 'section': None, 'station': None, 'stop': None, 'identifier': 'TEST1'}
[sqlalchemy] Re: save_or_update and composit Primary Keys... MSSQL / pyodbc issue ?
Hi,

> /me faces toward UK, where it's about midnight right now...
> /me yells HEY PAUL!! YOU WATCHING THIS THREAD??

Ok, you got my attention :-) Not at my best right now after being out drinking, but hey... After a little tweak to the code (removing autoload=True, adding metadata.create_all()) I get this:

    sqlalchemy.exceptions.ProgrammingError: (ProgrammingError) ('42000', '[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]Conversion failed when converting from a character string to uniqueidentifier. (8169)')
    u'INSERT INTO jobs (identifier, section, start, stop, station) VALUES (?, ?, ?, ?, ?)'
    ['TEST1', None, datetime.datetime(2007, 12, 10, 23, 40, 30, 593000), None, None]

So, follow Rick's advice on fixing it. This does work with SQLite, but that's an accident of SQLite's funky type system more than anything.

Paul
[sqlalchemy] Re: concurent modification
Thanks a lot. It seems I've managed to resolve the problem with concurrent modifications by calling commit(), clear() and close() in each thread, but I am stuck with another one:

    Exception in thread Thread-62:
    Traceback (most recent call last):
      File "threading.py", line 442, in __bootstrap
        self.run()
      File "./camper.py", line 109, in run
        theone = s.query(User).filter_by(username=user).first()
      File "/usr/lib/python2.4/site-packages/sqlalchemy/orm/query.py", line 627, in first
        ret = list(self[0:1])
      File "/usr/lib/python2.4/site-packages/sqlalchemy/orm/query.py", line 656, in __iter__
        return self._execute_and_instances(context)
      File "/usr/lib/python2.4/site-packages/sqlalchemy/orm/query.py", line 659, in _execute_and_instances
        result = self.session.execute(querycontext.statement, params=self._params, mapper=self.mapper, instance=self._refresh_instance)
      File "/usr/lib/python2.4/site-packages/sqlalchemy/orm/session.py", line 528, in execute
        return self.__connection(engine, close_with_result=True).execute(clause, params or {})
      File "/usr/lib/python2.4/site-packages/sqlalchemy/orm/session.py", line 510, in __connection
        return self.transaction.get_or_add(engine)
      File "/usr/lib/python2.4/site-packages/sqlalchemy/orm/session.py", line 188, in get_or_add
        c = bind.contextual_connect()
      File "/usr/lib/python2.4/site-packages/sqlalchemy/engine/base.py", line 1160, in contextual_connect
        return Connection(self, self.pool.connect(), close_with_result=close_with_result, **kwargs)
      File "sqlalchemy/pool.py", line 163, in connect
      File "sqlalchemy/pool.py", line 296, in __init__
      File "sqlalchemy/pool.py", line 173, in get
      File "sqlalchemy/pool.py", line 571, in do_get
    TimeoutError: QueuePool limit of size 5 overflow 10 reached, connection timed out, timeout 30

Perhaps the problem is caused by long-running threads that lock the table, so other threads line up in a queue and the exception appears once the limit is reached. The question is: does there exist a way to resolve this problem without touching default values like the size of the queue or the timeout?
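Whatever the server-side locking story, QueuePool also runs dry whenever threads check out connections and never give them back. The toy bounded "pool" below (a plain Python queue.Queue of size 2, all names invented for the demo) reproduces exactly that failure shape, and shows why returning the connection in a finally block in every thread is usually the fix, independent of the pool's size and timeout settings:

```python
import queue  # the Python 2.x-era spelling was the Queue module

# A toy bounded "pool" of size 2: if callers never return what they
# check out, later checkouts time out, which is the same shape as
# QueuePool's "connection timed out" TimeoutError.
pool = queue.Queue(maxsize=2)
for conn_id in (1, 2):
    pool.put(conn_id)

def use_connection(return_it):
    conn = pool.get(timeout=0.05)  # raises queue.Empty when exhausted
    try:
        return conn
    finally:
        if return_it:
            pool.put(conn)  # the fix: always give the connection back

use_connection(True)       # fine: connection returned
use_connection(True)       # fine: connection returned
use_connection(False)      # leak one connection...
use_connection(False)      # ...and the other
try:
    use_connection(False)
    exhausted = False
except queue.Empty:
    exhausted = True       # pool is dry: same symptom as the traceback
```

In session terms: wrapping each thread's work in try/finally with session.close() (or close_all) guarantees the checked-out connection goes back to the pool even when the query raises.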