[sqlalchemy] Re: To select a tree with PG connectby() in a single request
After debugging, I've noticed that the issue is related to eager-loaded relations. If you try the example script with the _descendants relation having lazy=None or True, then the extension method is not called anymore. Is there a way to fire the extension method even without eager loading?

I can't see any problem from what I see here. The example script definitely works and append_result is called, so just try to see what's different here vs. that script, and try stepping through with pdb for more detail. If all else fails, send along a full reproducing test case.

--~--~-~--~~~---~--~~ You received this message because you are subscribed to the Google Groups sqlalchemy group. To post to this group, send email to sqlalchemy@googlegroups.com To unsubscribe from this group, send email to [EMAIL PROTECTED] For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en -~--~~~~--~~--~--~---
[sqlalchemy] Re: To select a tree with PG connectby() in a single request
Sorry, this was a bug. The byroot_tree example had been changed to work around some recent design decisions (it also works more intelligently but is dependent on SA's eager load methodology), which led me to not realize that append_result() was no longer working in the way it was designed. This functionality has been restored in r4520 so that append_result() is called for all single-entity queries.

On Apr 18, 2008, at 5:35 AM, jean-philippe dutreve wrote:

After debugging, I've noticed that the issue is related to eager-loaded relations. If you try the example script with the _descendants relation having lazy=None or True, then the extension method is not called anymore. Is there a way to fire the extension method even without eager loading? I can't see any problem from what I see here. The example script definitely works and append_result is called, so just try to see what's different here vs. that script, and try stepping through with pdb for more detail. If all else fails, send along a full reproducing test case.
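For readers who haven't used the hook being discussed: here is a pure-Python sketch of the append_result pattern — a toy query loop that offers each row to an extension so the extension can assemble a tree instead of a flat list, as the byroot_tree example does. All names here are illustrative; this is not SQLAlchemy's implementation.

```python
# Pure-Python sketch of the "append_result" hook pattern discussed above.
# The hook name mirrors SA 0.4's MapperExtension.append_result, but this
# toy code only illustrates the idea; it is not SA's implementation.

class TreeBuildingExtension:
    """Collects flat rows into a parent->children tree instead of a list."""

    def __init__(self):
        self.nodes = {}

    def append_result(self, instance, result):
        # Return True to signal "I placed this instance myself".
        self.nodes[instance["id"]] = instance
        parent = instance["parent_id"]
        if parent is None:
            result.append(instance)  # roots go directly into the result
        else:
            self.nodes[parent].setdefault("children", []).append(instance)
        return True


def run_query(rows, extension):
    """Toy query loop: offer each row to the extension, one per entity."""
    result = []
    for row in rows:
        if not extension.append_result(row, result):
            result.append(row)  # default behavior: flat append
    return result


rows = [
    {"id": 1, "parent_id": None, "name": "root"},
    {"id": 2, "parent_id": 1, "name": "child"},
]
tree = run_query(rows, TreeBuildingExtension())
print(tree[0]["name"], tree[0]["children"][0]["name"])  # root child
```

The point of the bug fix above is that this hook fires once per row for single-entity queries, so the extension, not the query, decides how results are collected.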
[sqlalchemy] Re: Sqlautocode error
On 17 Apr, 18:10, Lele Gaifax [EMAIL PROTECTED] wrote: On Thu, 17 Apr 2008 07:40:02 -0700 (PDT) Either get the trunk of autocode, or apply the simple patch available at http://code.google.com/p/sqlautocode/issues/detail?id=2

I tried, but http://sqlautocode.googlecode.com/svn/trunk/ grabs the 0.3 SA compatible version, so it won't work for me. But I've applied your patch and now it works perfectly :) Thank you
[sqlalchemy] Re: subquery and inheritance
On Apr 17, 2008, at 10:42 PM, kris wrote:

I am building a tree structure of D1 and D2 nodes. I am progressively generating a query before execution using subqueries.

s = session.query(D1).filter(...)._values(D1.c.id).statement
...
q = session.query(D2).select_from(s).filter(s.base_id == D2.parent_id)

s is going to select columns for D1. q is going to select columns for D2. So the above maneuver doesn't make much sense, as s will not supply the columns which D2 needs. A select_from() call is intended to provide a selectable which provides the same table columns as a base query against D2 would use. The mapper isn't going to guess that the given selectable sort of corresponds to a particular base table (and in this case it doesn't even do that exactly, since D1's columns are involved).

print q

SELECT anon_1.base_id AS anon_1_base_id FROM (SELECT base.id AS base_id FROM base JOIN derived1 ON base.id = derived1.id WHERE ) AS anon_1, base, derived2 WHERE anon_1.base_id = derived2.parent_id ORDER BY anon_1.base_id

This seems to be generating an extra join expression with 'base', without the filtering. Right, the Query has no idea what you're trying to do and adds derived2 to the FROM clause, since it is just adding those columns into the columns clause (whose table then shows up in the FROM).

Any help appreciated. Write out the exact SQL statement from which you'd like to select rows, either D1 or D2 rows or both, and we'll go from there.
[sqlalchemy] Re: mssql, Linux, unixODBC - Could not loca te column in row for column
On 2008-04-17 22:24, Lukasz Szybalski wrote: On Thu, Apr 17, 2008 at 3:04 PM, Rick Morrison [EMAIL PROTECTED] wrote: It's a two-line change that pops the new keyword out of the config dict just like the others that were added. Mike, can you take a quick look at mssql.py line 804 and see why this might be complaining? I've got to run out.

Well, I don't know if that is the right place to add that. From the code, this is what it would expect:

e = sqlalchemy.create_engine(mssql://xxx:[EMAIL PROTECTED]:1433/xxx?odbc_options=Driver=TDS;TDS_Version=8.0)

vs. what you said you wanted:

sqlalchemy.create_engine('mssql://xxx:[EMAIL PROTECTED]:1433/', odbc_options='DRIVER={TDS};TDS_Version=8.0')

to create:

Server=xx;Database=xx;Port=1433;UID=xx;PWD=xxx;Driver={TDS};TDS_Version=8.0

Reading this thread, I keep wondering why you are trying to put all that connection setup configuration into the connection string... Such settings are normally configured in the odbc.ini file, and then you just reference the data source name in the connection string. That's the standard way of using ODBC and the reason why you have ODBC managers with nice setup GUIs. A DSN-less setup like the one created by SA bypasses the ODBC manager configuration. SA should really adapt to the ODBC standard of using data source names, as it moves the connection configuration where it should be: into the scope of the ODBC manager you are using to configure your ODBC drivers.

-- Marc-Andre Lemburg, eGenix.com (#1, Apr 18 2008)
[sqlalchemy] Re: To select a tree with PG connectby() in a single request
Thank you for your support. You have done awesome work overall.
[sqlalchemy] Re: mssql, Linux, unixODBC - Could not locate column in row for column
Reading this thread, I keep wondering why you are trying to put all that connection setup configuration into the connection string... Such settings are normally configured in the odbc.ini file, and then you just reference the data source name in the connection string. That's the standard way of using ODBC and the reason why you have ODBC managers with nice setup GUIs. A DSN-less setup like the one created by SA bypasses the ODBC manager configuration.

Only via an option: DSN connections have been supported for some time via the 'dsn' keyword; the OP seems to either not want that, or can't get it to work.

SA should really adapt to the ODBC standard of using data source names, as it moves the connection configuration where it should be: into the scope of the ODBC manager you are using to configure your ODBC drivers.

SA already has a de-facto standard using a db-uri scheme that works with non-ODBC datasources as well. It makes sense for ODBC-compliant SA database drivers to conform to that form, not to just displace it and force a config-file style of setup.
[sqlalchemy] Re: mssql, Linux, unixODBC - Could not locate column in row for column
Rick Morrison wrote: Yeah, I was under the impression that config args passed in via the create_engine() ctor and via the dburi were treated the same, but looking over engine/strategies.py, it looks as if they have two separate injection points. I'll see if I can get it to allow either, stay tuned.

create_engine('mssql://h/db', connect_args=dict(odbc_options='bar'))
create_engine('mssql://h/db?odbc_options=bar')
[sqlalchemy] Re: mssql, Linux, unixODBC - Could not locate column in row for column
hey thanks Jason, that's a nice shortcut. Lukasz, can you please give that a try? On Fri, Apr 18, 2008 at 12:07 PM, jason kirtland [EMAIL PROTECTED] wrote: Rick Morrison wrote: Yeah, I was under the impression that config args passed in via create_engine() ctor and via dburi were treated the same, but looking over engine/strategies.py, it looks as if they have two separate injection points. I'll see if I can get it to allow either, stay tuned. create_engine('mssql://h/db', connect_args=dict(odbc_options='bar')) create_engine('mssql://h/db?odbc_options=bar')
[sqlalchemy] Re: mssql, Linux, unixODBC - Could not locate column in row for column
Yeah, I was under the impression that config args passed in via the create_engine() ctor and via the dburi were treated the same, but looking over engine/strategies.py, it looks as if they have two separate injection points. I'll see if I can get it to allow either, stay tuned.

On Thu, Apr 17, 2008 at 4:24 PM, Lukasz Szybalski [EMAIL PROTECTED] wrote: On Thu, Apr 17, 2008 at 3:04 PM, Rick Morrison [EMAIL PROTECTED] wrote: It's a two-line change that pops the new keyword out of the config dict just like the others that were added. Mike, can you take a quick look at mssql.py line 804 and see why this might be complaining? I've got to run out.

Well, I don't know if that is the right place to add that. From the code, this is what it would expect:

e = sqlalchemy.create_engine(mssql://xxx:[EMAIL PROTECTED]:1433/xxx?odbc_options=Driver=TDS;TDS_Version=8.0)

vs. what you said you wanted:

sqlalchemy.create_engine('mssql://xxx:[EMAIL PROTECTED]:1433/', odbc_options='DRIVER={TDS};TDS_Version=8.0')

to create:

Server=xx;Database=xx;Port=1433;UID=xx;PWD=xxx;Driver={TDS};TDS_Version=8.0

Lucas
[sqlalchemy] Re: mssql, Linux, unixODBC - Could not locate column in row for column
On Fri, Apr 18, 2008 at 11:07 AM, jason kirtland [EMAIL PROTECTED] wrote: Rick Morrison wrote: Yeah, I was under the impression that config args passed in via create_engine() ctor and via dburi were treated the same, but looking over engine/strategies.py, it looks as if they have two separate injection points. I'll see if I can get it to allow either, stay tuned. create_engine('mssql://h/db', connect_args=dict(odbc_options='bar')) create_engine('mssql://h/db?odbc_options=bar')

So, to make it clear:

create_engine('mssql://h/db', connect_args=dict(odbc_options='DRIVER={TDS};TDS_Version=8.0'))

This doesn't append the driver and TDS version to the connection string. Isn't connect_args for SQLAlchemy-specific actions, and not for the pyodbc connector string?

create_engine('mssql://h/db?odbc_options=bar')

This will work if I had a single option, but I will supply multiple options with the '=' operator. Example: this string needs to be appended to the connection string when passed to pyodbc: 'DRIVER={TDS};TDS_Version=8.0'. So if you guys don't mind making the whole expression double-quoted, we can move on to:

create_engine(mssql://h/db?odbc_options='DRIVER={TDS};TDS_Version=8.0')

But now the driver that I just supplied (in line 804) was previously added in line 783, so now you have two drivers. Which leads me to try:

e = sqlalchemy.create_engine(mssql://xxx:[EMAIL PROTECTED]:1433/xxx?driver=TDS?odbc_options='TDS_Version=8.0')

This only takes the first argument, driver, and skips odbc_options.

Solutions: option 1: make it so mssql://xxx:[EMAIL PROTECTED]:1433/xxx?driver=TDS?odbc_options='TDS_Version=8.0' handles both parameters, driver and odbc_options. Option 2: we either move line 804 above line 783 and then check if it already includes a driver:

line 784:
if 'dsn' in keys:
    connectors = ['dsn=%s' % keys['dsn']]
else:
    connectors = ['DRIVER={%s}' % keys.pop('driver', 'SQL Server'),
                  'Server=%s' % keys['host'],
                  'Database=%s' % keys['database']]

line 804:
if 'odbc_options' in keys:
    connectors.append(keys.pop('odbc_options'))

Or option 3?

Lucas
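To make the duplicate-driver problem concrete, here is a stdlib-only sketch of the connection-string assembly being debated. It mirrors the mssql.py fragments quoted above (it is an illustration, not the actual SQLAlchemy source), and shows why a driver supplied via odbc_options would currently duplicate the one added by the default branch:

```python
# Stdlib-only sketch of the DSN-less connection-string assembly being
# discussed (mirrors the quoted mssql.py fragments; not the real source).

def build_connect_string(keys):
    keys = dict(keys)  # work on a copy; keys.pop() below mutates it
    if 'dsn' in keys:
        connectors = ['dsn=%s' % keys['dsn']]
    else:
        connectors = [
            'DRIVER={%s}' % keys.pop('driver', 'SQL Server'),
            'Server=%s' % keys['host'],
            'Database=%s' % keys['database'],
        ]
    # The proposed odbc_options hook: appended verbatim at the end, which
    # is why a driver supplied there would duplicate the one added above.
    if 'odbc_options' in keys:
        connectors.append(keys.pop('odbc_options'))
    return ';'.join(connectors)

print(build_connect_string({'host': 'h', 'database': 'db',
                            'odbc_options': 'TDS_Version=8.0'}))
# DRIVER={SQL Server};Server=h;Database=db;TDS_Version=8.0
```

Moving the odbc_options check ahead of the default driver insertion (option 2 above) would let a driver given there suppress the 'SQL Server' default.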
[sqlalchemy] Re: Ensuring a clean database for each test case
How can I realize this concept of a completely new and isolated DB environment for each single test case that's being run?

Not sure if this is a useful answer or not, but I just made starting SQL files for each of my test suites. It's an extra step, but then you have a convenient file to put starting data in.

def setup(self):
    os.system("mysql -ufoo -pbar -Ddb_name < test_db_1.sql")

etc. etc. Works OK and is actually pretty quick. Iain
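A variation on the same idea that avoids shelling out: give each test case a brand-new in-memory SQLite database and load the starting SQL in setUp(). The poster uses MySQL; this stdlib-only sketch (table and test names are made up) just illustrates the fresh-database-per-test pattern:

```python
# Sketch of the same idea without shelling out to mysql: each test case
# gets a brand-new in-memory SQLite database, seeded in setUp().
import sqlite3
import unittest

STARTING_SQL = """
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
INSERT INTO users (name) VALUES ('alice');
"""

class CleanDatabaseTest(unittest.TestCase):
    def setUp(self):
        # ':memory:' gives every test an isolated, empty database.
        self.conn = sqlite3.connect(":memory:")
        self.conn.executescript(STARTING_SQL)

    def tearDown(self):
        self.conn.close()

    def test_changes_do_not_leak(self):
        # Even if a test inserts rows, the next test starts clean.
        self.conn.execute("INSERT INTO users (name) VALUES ('bob')")
        self.conn.commit()

    def test_starting_data_present(self):
        count = self.conn.execute("SELECT count(*) FROM users").fetchone()[0]
        self.assertEqual(count, 1)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(CleanDatabaseTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The trade-off vs. the mysql-file approach is that you test against SQLite rather than your production database, so it only fits dialect-neutral code.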
[sqlalchemy] Re: mssql, Linux, unixODBC - Could not locate column in row for column
On Fri, Apr 18, 2008 at 1:36 PM, Rick Morrison [EMAIL PROTECTED] wrote: Err, on a second look, that's no good. The connect_args are passed directly through to connect(). This thing needs to construct an ODBC connection string from some fragments provided by the dburi, and from some engine options. So we'll either need some parsing tricks for the URL to allow strings with embedded equals signs, or some way to get that parameter to Dialect.create_connect_args() so it can be combined with the username, etc.

I've looked up how to pass these arguments, and this will work:

e = sqlalchemy.create_engine(mssql://xxx:[EMAIL PROTECTED]:1433/xx?driver=TDS&odbc_options='TDS_Version=8.0')

I'll test this and see if it works. Lucas

create_engine('mssql://h/db', connect_args=dict(odbc_options='bar')) create_engine('mssql://h/db?odbc_options=bar')

-- Automotive Recall Database. Cars, Trucks, etc. http://www.lucasmanual.com/recall/ TurboGears Manual-Howto http://lucasmanual.com/pdf/TurboGears-Manual-Howto.pdf
[sqlalchemy] Re: subquery and inheritance
The problem stems from a tree structure and creating self-joins on a very large base table. I am trying to create datasets of items and filter on the contents of datasets in a single query that is built up progressively.

base = Table('base', metadata,
    Column('id', Integer, primary_key=True),
    Column('owner', String(255)),
    ...)

dataset = Table('dataset', metadata,
    Column('id', Integer, ForeignKey('base.id'), primary_key=True),
    Column('parent_id', Integer, ForeignKey('base.id')),
    Column('something_id', Integer, ForeignKey('base.id')))

tags = Table('tags', metadata,
    Column('id', Integer, ForeignKey('base.id'), primary_key=True),
    Column('parent_id', Integer, ForeignKey('base.id')),
    Column('name', Text))

item = Table('item', metadata,
    Column('id', Integer, ForeignKey('base.id'), primary_key=True),
    Column('parent_id', Integer, ForeignKey('base.id')),
    Column('stuff', Text))

mapper(Dataset, dataset, inherits=base)
...

# Find a dataset owned by me
s = session.query(Dataset).filter(Dataset.owner == 'me')
# Filter it so that it is tagged with 'good'
s = s.filter(and_(tag.name == 'good', tag.parent_id == Dataset.id))
# Find all somethings in the above dataset
s = s._values(Dataset.c.something_id).statement
# The somethings are items with tags
# Find those items in the datasets (i.e. all that join w/ base)
q = session.query(Item).select_from(s)
q = q.filter(and_(tag.c.name == 'really', tag.parent_id == dataset.c.id))
print q.all()

I think I want something like the following:

select item.id from item,
  (select dataset.something_id from base, dataset
    where base.id = dataset.id and base.owner = 'me'
      and tag.name = 'good' and tag.parent_id = base.id) as datasetsomething
where item.id = datasetsomething.id
  and tag.name = 'good' and tag.parent_id = item.id

or even further, as I think that subqueries perform better than massive self-joins even when doing simple filtering:

select item.id from item,
  (select dataset_me.something_id from
     (select * from base, dataset
       where base.id = dataset.id and base.owner = 'me') as dataset_me
   where tag.name = 'good' and tag.parent_id = dataset_me.id) as datasetsomething
where item.id = datasetsomething.id
  and tag.name = 'good' and tag.parent_id = item.id

Thanks, kris
[sqlalchemy] Re: subquery and inheritance
On Apr 18, 2008, at 3:18 PM, kris wrote:

I think I want something like the following:

select item.id from item,
  (select dataset.something_id from base, dataset
    where base.id = dataset.id and base.owner = 'me'
      and tag.name = 'good' and tag.parent_id = base.id) as datasetsomething
where item.id = datasetsomething.id
  and tag.name = 'good' and tag.parent_id = item.id

or even further, as I think that subqueries perform better than massive self-joins even when doing simple filtering:

select item.id from item,
  (select dataset_me.something_id from
     (select * from base, dataset
       where base.id = dataset.id and base.owner = 'me') as dataset_me
   where tag.name = 'good' and tag.parent_id = dataset_me.id) as datasetsomething
where item.id = datasetsomething.id
  and tag.name = 'good' and tag.parent_id = item.id

What I notice about both of these is that you're using correlations. So right off, using the Query, which has opinions about how to build select statements, to build up a statement like this with its current functionality (as well as what we're planning to do in 0.5) is awkward if not impossible. The Query thinks in terms of entities, such as Dataset or Item - when it sees that one of those is desired, it's going to use the full selectable for each of those entities, such as "dataset join base" or "item join base", for example. The Query is meant for when you want to think more in terms of objects and not about statement optimization, nor the specifics of joining together the individual tables used to load an inheriting class. The from_self() use case, which is essentially what you're playing with, is a new feature we have which provides a way to select from the results of a query, but it is still not as flexible as raw select() constructs. To build up a select() statement at a fine-grained level like this, it's more appropriate to use the select() construct directly.

You can then feed this construct into the Query using query.from_statement(), which will load a distinct entity from each row (or alternatively you can configure it to load multiple entities horizontally from each row).
[sqlalchemy] Re: mssql, Linux, unixODBC - Could not locate column in row for column
Nope. Seems like the SQLA connection is still not using the appropriate magical incantation to get all that ODBC stuff to behave reasonably. Might I suggest just circumventing the URL entirely and using

creator=lambda: pyodbc.connect("DRIVER={TDS};SERVER=;UID=;PWD=;TDS_Version=7.0")

? For something this incredibly hacked and specific, it seems like the right way to go.

On Apr 18, 2008, at 3:35 PM, Lukasz Szybalski wrote: On Wed, Apr 2, 2008 at 4:24 PM, Michael Bayer [EMAIL PROTECTED] wrote: Got it. Notice, pyodbc is truncating all the column names to 30 characters. Anyway, that's why it's broken.

Is this a bug in pyodbc? I thought MS-SQL has a much bigger limit than this. I finally got the driver name and TDS version into SQLAlchemy, and it should be passing them on to pyodbc, but it doesn't seem to work. I'm attaching a file. I have a query to MS-SQL via pyodbc and TDS 7/8, and it returns the long field description, but the same thing in SQLAlchemy doesn't. Is there some other processing done on it? Here is the file. Thanks, Lucas

sqlalchemy3.txt
[sqlalchemy] Re: Sqlautocode error
On Fri, 18 Apr 2008 01:06:41 -0700 (PDT) Vortexmind [EMAIL PROTECTED] wrote: On 17 Apr, 18:10, Lele Gaifax [EMAIL PROTECTED] wrote: On Thu, 17 Apr 2008 07:40:02 -0700 (PDT) Either get the trunk of autocode, ... I tried, but http://sqlautocode.googlecode.com/svn/trunk/ grabs the 0.3 SA compatible version so it won't work for me.

Sorry, that confused me too: the current version, 0.5, is developed in a branch. ciao, lele.

-- nickname: Lele Gaifax | real: Emanuele Gaifas | [EMAIL PROTECTED] | "When I live off what I thought yesterday, I will begin to fear those who copy me." -- Fortunato Depero, 1929.
[sqlalchemy] Re: Ensuring a clean database for each test case
Something like:

- clean _all_ your refs to SA stuff like tables, mappers, etc.
- sqlalchemy.orm.clear_mappers()
- metadata.drop_all()
- dbengine.dispose()

plus, eventually,

del x._instance_key
del x._state

for all instances that are still around, if you want them reused in another db/mapping/... Plus, each db-dialect has its own way of removing/recreating a db (e.g. delete the file for sqlite, or dropdb xxx for postgres, etc.). See dbcook.usage.sa_manager (methods destroy() and detach_instances()) and dbcook.usage.sa_engine_defs, various recreate()s. ciao

On Friday 18 April 2008 21:34:03 iain duncan wrote: How can I realize this concept of a completely new and isolated DB environment for each single test case that's being run? Not sure if this is a useful answer or not, but I just made starting SQL files for each of my test suites. It's an extra step, but then you have a convenient file to put starting data in. def setup(self): os.system("mysql -ufoo -pbar -Ddb_name < test_db_1.sql") etc. etc. Works OK and is actually pretty quick. Iain
[sqlalchemy] SA confusing timestamp with interval?
I'm using SA 0.4.3 and PostgreSQL 8.3.1. I'm new to SA, so perhaps I'm doing something wrong or just not understanding something, but I think SA is trying to treat my timestamps as intervals in some cases. If I run the equivalent (select c0 from t0 where c0 > current_timestamp - interval '1 hour') via psql, it works as expected. However, if I run:

#!/usr/bin/python
from datetime import datetime
from sqlalchemy.sql import text
from sqlalchemy import create_engine, MetaData

engine = create_engine('postgres://[EMAIL PROTECTED]/testdb', encoding='utf-8')
metadata = MetaData(bind=engine)
engine.execute(text("drop table if exists t0"))
engine.execute(text("create table t0(c0 timestamp(0) with time zone)"))
engine.execute(text("insert into t0 values(current_timestamp)"))
engine.execute(text("select c0 from t0 where c0 > :bindArg - interval '1 hour'"),
               bindArg=datetime.utcnow())

I get:

Traceback (most recent call last):
  File "./saerr.py", line 14, in <module>
    engine.execute(text("select c0 from t0 where c0 > :bindArg - interval '1 hour'"), bindArg=datetime.utcnow())
  File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1214, in execute
    return connection.execute(statement, *multiparams, **params)
  File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 846, in execute
    return Connection.executors[c](self, object, multiparams, params)
  File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 897, in execute_clauseelement
    return self._execute_compiled(elem.compile(dialect=self.dialect, column_keys=keys, inline=len(params) > 1), distilled_params=params)
  File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 909, in _execute_compiled
    self.__execute_raw(context)
  File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 918, in __execute_raw
    self._cursor_execute(context.cursor, context.statement, context.parameters[0], context=context)
  File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 962, in _cursor_execute
    self._handle_dbapi_exception(e, statement, parameters, cursor)
  File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 944, in _handle_dbapi_exception
    raise exceptions.DBAPIError.instance(statement, parameters, e, connection_invalidated=is_disconnect)
sqlalchemy.exceptions.ProgrammingError: (ProgrammingError) invalid input syntax for type interval: 2008-04-18T15:37:02.235955
 select c0 from t0 where c0 > %(bindArg)s - interval '1 hour' {'bindArg': datetime.datetime(2008, 4, 18, 15, 37, 2, 235955)}
[sqlalchemy] Re: SA confusing timestamp with interval?
On Apr 18, 2008, at 4:42 PM, Matthew Dennis wrote:

I'm using SA 0.4.3 and PostgreSQL 8.3.1. I'm new to SA, so perhaps I'm doing something wrong or just not understanding something, but I think SA is trying to treat my timestamps as intervals in some cases. If I run the equivalent (select c0 from t0 where c0 > current_timestamp - interval '1 hour') via psql, it works as expected. However, if I run:

#!/usr/bin/python
from datetime import datetime
from sqlalchemy.sql import text
from sqlalchemy import create_engine, MetaData

engine = create_engine('postgres://[EMAIL PROTECTED]/testdb', encoding='utf-8')
metadata = MetaData(bind=engine)
engine.execute(text("drop table if exists t0"))
engine.execute(text("create table t0(c0 timestamp(0) with time zone)"))
engine.execute(text("insert into t0 values(current_timestamp)"))
engine.execute(text("select c0 from t0 where c0 > :bindArg - interval '1 hour'"), bindArg=datetime.utcnow())

The text() above sends the bind argument through to psycopg2 directly, which knows how to handle datetime objects. What happens if you test with raw psycopg2?
[sqlalchemy] Re: SA confusing timestamp with interval?
I get a similar result if I use psycopg2 directly:

#!/usr/bin/python
import psycopg2
from datetime import datetime

conn = psycopg2.connect('''dbname=testdb user=postgres host=localhost''')
cur = conn.cursor()
cur.execute("drop table if exists t0")
cur.execute("create table t0(c0 timestamp(0) with time zone)")
cur.execute("insert into t0 values(current_timestamp)")
cur.execute("select c0 from t0 where c0 > %(bindArg)s - interval '1 hour'", {'bindArg': datetime.utcnow()})

On Fri, Apr 18, 2008 at 6:26 PM, Michael Bayer [EMAIL PROTECTED] wrote: On Apr 18, 2008, at 4:42 PM, Matthew Dennis wrote: I'm using SA 0.4.3 and PostgreSQL 8.3.1. I'm new to SA, so perhaps I'm doing something wrong or just not understanding something, but I think SA is trying to treat my timestamps as intervals in some cases. If I run the equivalent (select c0 from t0 where c0 > current_timestamp - interval '1 hour') via psql, it works as expected. However, if I run: [script as above] The text() above sends the bind argument through to psycopg2 directly, which knows how to handle datetime objects. What happens if you test with raw psycopg2?
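For anyone hitting the same error while the driver question gets sorted out: assuming the underlying cause is that PostgreSQL cannot infer the bound parameter's type in the subtraction (so it attempts to parse the string as an interval), one plausible workaround is an explicit cast on the parameter. This is a sketch of that assumption, not a verified fix for 8.3; the query below still needs psycopg2 and a live database to actually execute:

```python
# Hypothetical workaround: cast the bound parameter so the subtraction is
# unambiguously "timestamptz - interval" rather than leaving the
# parameter's type for the server to guess.  Sketch only.
query = ("select c0 from t0 "
         "where c0 > cast(%(bindArg)s as timestamp with time zone) "
         "- interval '1 hour'")
# With a real connection this would be:
# cur.execute(query, {'bindArg': datetime.utcnow()})
print(query)
```

If the cast makes the query work, that would support the type-inference explanation rather than a psycopg2 adaptation bug.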
[sqlalchemy] Re: SA confusing timestamp with interval?
On Apr 18, 2008, at 10:20 PM, Matthew Dennis wrote: I get a similar result if I use psycopg2 directly:

#!/usr/bin/python
import psycopg2
from datetime import datetime

conn = psycopg2.connect('''dbname=testdb user=postgres host=localhost''')
cur = conn.cursor()
cur.execute("drop table if exists t0")
cur.execute("create table t0(c0 timestamp(0) with time zone)")
cur.execute("insert into t0 values(current_timestamp)")
cur.execute("select c0 from t0 where c0 > %(bindArg)s - interval '1 hour'", {'bindArg': datetime.utcnow()})

Great. Let's let them know on the psycopg2 list.
[sqlalchemy] Re: SA confusing timestamp with interval?
I posted it on the psycopg list at http://lists.initd.org/pipermail/psycopg/2008-April/006026.html, but it mangled my link to this discussion (by eating a space after the URL and appending the first word of the next sentence).

On Fri, Apr 18, 2008 at 9:34 PM, Michael Bayer [EMAIL PROTECTED] wrote: On Apr 18, 2008, at 10:20 PM, Matthew Dennis wrote: I get a similar result if I use psycopg2 directly:

#!/usr/bin/python
import psycopg2
from datetime import datetime

conn = psycopg2.connect('''dbname=testdb user=postgres host=localhost''')
cur = conn.cursor()
cur.execute("drop table if exists t0")
cur.execute("create table t0(c0 timestamp(0) with time zone)")
cur.execute("insert into t0 values(current_timestamp)")
cur.execute("select c0 from t0 where c0 > %(bindArg)s - interval '1 hour'", {'bindArg': datetime.utcnow()})

Great. Let's let them know on the psycopg2 list.