[sqlalchemy] Two enhancement proposals for orm.Query
What do you think about the following enhancements to orm.Query?

1. A new method (let's call it any() or exists()) to compute, in the most efficient way, whether a query will yield any result. It would relate to __nonzero__ the way count() relates to __len__, and it would be similar to any() on a relation() attribute. Not looking at the result somewhat defeats the purpose of an ORM query, but the need is not uncommon at all. The best approximation seems to be query.first() is not None, which can select a lot of columns. I often see query.count() > 0, which can become quite expensive on a DBMS like PostgreSQL. The new method should help to avoid such traps.

2. It is not easy to join two tables/mapped classes using a relation() attribute in the reverse direction. (I started a thread about that which I cannot dig up anymore.) For example, if ref is a relation from A to B, one can say session.query(A).join(ref) to join A and B. But ref cannot be used in a session.query(B) to join B and A. Michael Bayer's advice was to introduce a backref. However, I think that backrefs should correspond to navigabilities in the class model, not to technicalities in the data model. Would it be difficult to make something like session.query(B).join(ref, backward=True) work?

Best regards
Klaus

You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To post to this group, send email to sqlalchemy@googlegroups.com. To unsubscribe from this group, send email to sqlalchemy+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en
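The efficiency argument behind the proposed any()/exists() can be illustrated at the plain SQL level, independently of the ORM: an EXISTS subquery can stop at the first matching row, while COUNT(*) has to visit every matching row. A minimal sketch using the standard-library sqlite3 module (the table and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [("alice",), ("bob",), ("carol",)])

# The COUNT(*) variant must count every matching row ...
count = conn.execute(
    "SELECT COUNT(*) FROM users WHERE name LIKE '%o%'").fetchone()[0]

# ... while EXISTS can stop as soon as one row is found.
has_any = conn.execute(
    "SELECT EXISTS(SELECT 1 FROM users WHERE name LIKE '%o%')"
).fetchone()[0]

print(count)          # 2
print(bool(has_any))  # True
```

On a small table the difference is invisible; the point of the proposal is that on a large PostgreSQL table the COUNT formulation does strictly more work than an existence check needs to.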
[sqlalchemy] orm.Query: Is there a way to build joins using relations only?
Let's assume there is a foreign key A.fk -> B.id from table A to table B, and these tables and columns are mapped to corresponding classes. Then let's translate the foreign key to a reference A.ref -> B via a relation. We can easily join B onto A: session.query(A).join(ref) or session.query(A).join(A.ref). Joining in the opposite direction is much uglier: session.query(B).join((A, A.fk == B.id)). It seems to need the basic foreign key column fk (if it is mapped). Is there a way to express this using only the relation? Something like session.query(B).join((A, A.ref == B)), or any notation like session.query(B).join((A, A.ref == B.self)), perhaps?
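At the SQL level the asymmetry disappears: both directions reduce to the same ON clause, and only the ORM notation differs. A sketch of the two equivalent joins with the standard-library sqlite3 module (tables a and b are invented stand-ins for the mapped tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE b (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE a (id INTEGER PRIMARY KEY, fk INTEGER REFERENCES b(id));
    INSERT INTO b (id, name) VALUES (1, 'target');
    INSERT INTO a (id, fk) VALUES (10, 1);
""")

# Forward direction: start from a, join b.
fwd = conn.execute("SELECT b.name FROM a JOIN b ON a.fk = b.id").fetchall()

# Reverse direction: start from b, join a -- the ON clause is identical.
rev = conn.execute("SELECT a.id FROM b JOIN a ON a.fk = b.id").fetchall()

print(fwd)  # [('target',)]
print(rev)  # [(10,)]
```

Which is why the request is purely about notation: the information needed to render the reverse join is already present in the relation.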
[sqlalchemy] Re: Comparing a relation attribute with null() fails - bug or feature? (SQLAlchemy 0.5.4p1)
OMG, I didn't read the second sentence on the page. Sorry for that.

Klaus

On May 19, 19:21, Michael Bayer mike...@zzzcomputing.com wrote:

log in as guest/guest

On May 19, 2009, at 11:52 AM, klaus wrote:

How do I get the privileges for that?

On May 19, 17:33, Michael Bayer mike...@zzzcomputing.com wrote:

it seems like a small bug and you can file a ticket for that, but the intent is that you'd be using None to represent NULL in the general case.
[sqlalchemy] Comparing a relation attribute with null() fails - bug or feature? (SQLAlchemy 0.5.4p1)
Hi!

Here is a small (and not very useful) example to demonstrate the problem. A table "user" contains a reference to itself (so that I don't need a second table). This foreign key translates to a relation. When I compare the corresponding attribute to null(), I get a traceback. It all works if I compare it to None instead.

    from sqlalchemy.engine import create_engine
    from sqlalchemy.schema import MetaData, Table, Column, ForeignKey
    from sqlalchemy.types import Integer
    from sqlalchemy.orm import mapper, relation, create_session
    from sqlalchemy.sql import null

    engine = create_engine("postgres://...")
    metadata = MetaData(engine)

    user = Table("user", metadata,
                 Column("id", Integer, nullable=False, primary_key=True),
                 Column("fk", Integer, ForeignKey("public.user.id")),
                 schema="public", useexisting=True, autoload=False)

    class User(object):
        pass

    mapper(User, user, properties={"ref": relation(User, uselist=False)})

    metadata.create_all()
    session = create_session()
    print session.query(User).filter(User.ref == null()).all()
    metadata.drop_all()

(The last line is not executed.) Here's the traceback:

    Traceback (most recent call last):
      File "sqlatest.py", line 24, in ?
        print session.query(User).filter(User.ref == null()).all()
      File "/home/barthelmannk/PortaleHeadZope/sandbox/eggs/SQLAlchemy-0.5.4p1-py2.4.egg/sqlalchemy/sql/expression.py", line 1253, in __eq__
        return self.operate(operators.eq, other)
      File "/home/barthelmannk/PortaleHeadZope/sandbox/eggs/SQLAlchemy-0.5.4p1-py2.4.egg/sqlalchemy/orm/attributes.py", line 120, in operate
        return op(self.comparator, *other, **kwargs)
      File "/home/barthelmannk/PortaleHeadZope/sandbox/eggs/SQLAlchemy-0.5.4p1-py2.4.egg/sqlalchemy/orm/properties.py", line 492, in __eq__
        return _orm_annotate(self.property._optimized_compare(other, adapt_source=self.adapter))
      File "/home/barthelmannk/PortaleHeadZope/sandbox/eggs/SQLAlchemy-0.5.4p1-py2.4.egg/sqlalchemy/orm/properties.py", line 623, in _optimized_compare
        value = attributes.instance_state(value)
    AttributeError: '_Null' object has no attribute '_sa_instance_state'

Cheers
Klaus
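The reason None is the supported spelling here is a SQL rule worth remembering: a plain equality comparison against NULL is never true, so the ORM must render the comparison as IS NULL rather than = NULL. The rule itself can be shown with the standard-library sqlite3 module (a self-referencing "user" table mirroring the example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE user (id INTEGER PRIMARY KEY,
                       fk INTEGER REFERENCES user(id));
    INSERT INTO user (id, fk) VALUES (1, NULL);
    INSERT INTO user (id, fk) VALUES (2, 1);
""")

# "fk = NULL" compares against NULL, which is never true in SQL ...
eq_null = conn.execute("SELECT id FROM user WHERE fk = NULL").fetchall()

# ... so the comparison must be rendered as "fk IS NULL" instead.
is_null = conn.execute("SELECT id FROM user WHERE fk IS NULL").fetchall()

print(eq_null)  # []
print(is_null)  # [(1,)]
```

So whether User.ref == null() is accepted is an API question; the generated SQL has to use IS NULL either way.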
[sqlalchemy] Re: Comparing a relation attribute with null() fails - bug or feature? (SQLAlchemy 0.5.4p1)
How do I get the privileges for that?

On May 19, 17:33, Michael Bayer mike...@zzzcomputing.com wrote:

it seems like a small bug and you can file a ticket for that, but the intent is that you'd be using None to represent NULL in the general case.
[sqlalchemy] Possible documentation glitch?
The last example in the reference for Query.join (http://www.sqlalchemy.org/docs/05/reference/orm/query.html#the-query-object) seems to contain a few typos:

- Articles.id should be Article.id
- article_keywords.c (2x) does not exist anymore. (The .c should simply be left out, I think.)

Cheers
Klaus
[sqlalchemy] Re: Tiny feature request (+patch): Adding __len__ to orm.query.Query
This seems to become an FAQ. ;-)

Klaus

On 9 Mar., 11:34, Thomas Wittek [EMAIL PROTECTED] wrote:

On Mar 8, 5:49 pm, Michael Bayer [EMAIL PROTECTED] wrote:

__len__() is called first before __iter__() and has the effect of two SQL statements being issued.

Actually it seems to be the other way around, but nonetheless two queries are executed. Unfortunately, I don't even see a reliable way to find out whether __len__ was called directly after __iter__. If that could be detected, the result length could possibly be cached by __iter__, and __len__ could use the cached result without executing a useless query. Pity.
[sqlalchemy] Re: Object state change tracking
Well, we are using SQLAlchemy in a multi-threaded environment. It is integrated into Zope by means of z3c.sqlalchemy. All sessions should be cleared before they are discarded. There are no exotic types involved.

We have no useful profile output; the info is provided by the request monitor. Calls to __cleanup show up in all unusual places (when the garbage collector is activated) and take up considerable time.

Best regards
Klaus

On 7 Feb., 16:00, Michael Bayer [EMAIL PROTECTED] wrote:

On Feb 7, 2008, at 8:14 AM, klaus wrote:

State tracking, the method __cleanup in sqlalchemy.orm.attributes in particular, is among the most time consuming tasks on our platform. The mutex lock seems to be a real bottleneck. Is there any way to work around this problem? Apparently, the weakrefs cannot be switched off (as in the session). Can you recommend something special to avoid in an application, something that triggers this resurrection?

this is the first I'm hearing about that mutex being an issue (it's a straight Lock; mutexes like that are extremely fast if used primarily in just a single thread, which is the case here). I would imagine that the checks inside the __resurrect method are what's actually taking up the time... but even that check is extremely fast *unless* you are using a mutable scalar type, such as a mapped Pickle column. And if you're using a mapped Pickle column, you can set the PickleType to mutable=False to disable the deep checking in that case - this condition is the one thing that could specifically make the cleanup operation a significant operation versus an almost instantaneous one. So I'd want to see more specifically what's causing a slowdown here.

To avoid the critical section altogether, calling session.close() or session.clear() on any session that is about to be discarded should prevent any cleanup handlers from hitting that section (and of course, not dropping references to dirty objects on the outside until they are flushed).

If it truly is just overall latency of __cleanup, in theory it's not needed if using a strongly-referenced identity map, so we could perhaps disable it in that case. I'd definitely need to see some profile output to determine more accurately what the cause of the slowdown is.
[sqlalchemy] Re: Readonly objects/protecting objects from modifications
Hi all,

it's only now that I came across this interesting discussion. I tried similar things, but what I wanted to protect was my cached data, and session.merge(obj, dont_load=True) triggers these AssertionErrors. :-( So I went for a MapperExtension instead. The after_update method can still prevent changes from being committed to the database. This solution is not very elegant, however.

Best regards
Klaus

On 22 Dec. 2007, 17:03, Michael Bayer [EMAIL PROTECTED] wrote:

On Dec 22, 2007, at 12:34 AM, Andreas Jung wrote:

--On 21 December 2007 16:33:34 -0500 Michael Bayer [EMAIL PROTECTED] wrote:

On Dec 21, 2007, at 3:13 PM, Rick Morrison wrote:

I think the only way something like this should be done is as a test fixture which decorates classes during unit tests. It would be fairly clumsy to have in production code. If you have coworkers who write broken code, the way you solve that is by having unit tests which will fail when the coworkers in question do something they're not supposed to. If other people are writing code that sets attributes it's not supposed to and breaks things, you need more tests to catch those conditions. If you're putting code into production that hasn't been tested, then you need a build process, automated testing, etc. There is definitely a best practice here, and test driven development is it.

With all respect, this is not a useful answer. Even with tests (unit tests and weeks of manual tests) I had the case that a simple programming error (of my own) produced a data disaster after some weeks. There is no 100% test coverage. Tests don't solve all problems. There is sometimes the need for a better security belt.

I am certainly suggesting a fixture that detects illegal assignments to attributes. That it be limited to just unit tests is only a suggestion. To establish this functionality regardless of environment, like Rick said, just create properties which prohibit assignment. Create mappers like this:

    class AttrGetter(object):
        def __init__(self, name):
            self.name = name

        def __get__(self, instance, owner):
            if instance is None:
                return self
            return getattr(instance, '_' + self.name)

        def __set__(self, instance, value):
            raise AssertionError("Sets are not allowed")

        def __delete__(self, instance):
            raise AssertionError("Deletes are not allowed")

    class MyClass(object):
        somecolumn = AttrGetter('somecolumn')
        someothercolumn = AttrGetter('someothercolumn')

    mapper(MyClass, sometable, properties={
        '_somecolumn': sometable.c.somecolumn,
        '_someothercolumn': sometable.c.someothercolumn
    })

To automate the above process with no modifications to source code, create an instrumented mapper() function which applies the above recipe to all table columns:

    from sqlalchemy.orm import mapper as _mapper

    def mapper(cls, table, **kwargs):
        attrs = {}
        for c in table.c:
            attrs['_' + c.key] = c
            setattr(cls, c.key, AttrGetter(c.key))
        properties = kwargs.setdefault('properties', {})
        properties.update(attrs)
        return _mapper(cls, table, **kwargs)

Hope this helps.
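The descriptor half of the recipe works without SQLAlchemy at all, so it can be tried in isolation. A trimmed, runnable sketch (class and attribute names are invented; the '_'-prefixed attribute plays the role of the mapped column) showing how __set__ turns an accidental assignment into an immediate error:

```python
class ReadOnlyAttr(object):
    """Data descriptor exposing a '_'-prefixed slot read-only."""
    def __init__(self, name):
        self.name = name

    def __get__(self, instance, owner):
        if instance is None:
            return self
        return getattr(instance, '_' + self.name)

    def __set__(self, instance, value):
        raise AssertionError("Sets are not allowed")

class Record(object):
    somecolumn = ReadOnlyAttr('somecolumn')

    def __init__(self, value):
        self._somecolumn = value   # stands in for the mapped column attribute

r = Record(42)
print(r.somecolumn)    # 42 -- reads are delegated to _somecolumn

try:
    r.somecolumn = 99  # writes are blocked by the descriptor
except AssertionError as e:
    print(e)           # Sets are not allowed
```

Because a data descriptor on the class takes precedence over the instance __dict__, there is no way to sneak past it with a plain attribute assignment, which is exactly the safety belt the thread is after.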
[sqlalchemy] MapperExtension.after_update and Session.merge(..., dont_load=True).
Hi all,

sometime since version 0.4.2, the after_update hook of a MapperExtension fires even if no SQL UPDATE statement is generated. Is this a bug or a feature?

In my case, an object is marked as dirty because a backref has changed, not because of any change in the object itself. A merge(..., dont_load=True) is also part of the mix. If it's a bug, I'll try to provide more details.

Best regards
Klaus
[sqlalchemy] Re: MapperExtension.after_update and Session.merge(..., dont_load=True).
Nice, thanks!

On 14 Jan., 16:16, Michael Bayer [EMAIL PROTECTED] wrote:

On Jan 14, 2008, at 5:30 AM, klaus wrote:

Hi all, sometime since version 0.4.2, the after_update hook of a MapperExtension fires even if no SQL UPDATE statement is generated. Is this a bug or a feature?

this is a feature: the save_obj() method is including your object, but no UPDATE is emitted because no column-mapped values have changed. However, you'll notice that before_update() *is* being called, which it has to be, since at that point we don't know yet whether we're doing an UPDATE (and before_update() can even change that outcome); so it's consistent that after_update() should be called for every before_update() call.

In my case, an object is marked as dirty because a backref has changed, not because of any change in the object itself. A merge(..., dont_load=True) is also part of the mix. If it's a bug, I'll try to provide more details.

if you'd like to do the same check that save_obj() is doing on an object for "changed", just do this:

    session.is_modified(instance, include_collections=False)
[sqlalchemy] Re: merge with dont_load=True and lazy relations
Fine with me. ;-)

Klaus

On Nov 23, 6:37 am, Michael Bayer [EMAIL PROTECTED] wrote:

Hi there - there are actually two bugs here; the second one is revealed after repairing the first.

The first bug is that the merge() dont_load operation was not completing all the necessary housekeeping on the newly generated copy of the instance, such that the lazy loader for "zone" (or any other lazy loader) mistakenly detected that the instance had no mapper. This was an easy fix.

The second bug is more complex: the committed_state collection for the newly generated copy of the instance contained data from the unmerged copy; in this case the merged User contained the unmerged Address in its committed state. Since the new copy's own "addresses" collection contained the merged Address but not the unmerged one, the flush() would mistakenly update the unmerged address to not have any User object.

At the moment I'd rather not get into the added complexity of regenerating a committed_state collection on the merged object which corresponds to exact changes made on the unmerged object. Since the use case for dont_load=True is cached objects being brought into a session, they shouldn't have any pending changes on them anyway. So for now I have it raising an error if you try to merge an entity that's marked as dirty, and the newly merged instances get a clean committed state. If someone is merging objects with pending changes, it's really best not to use dont_load anyway, so that the committed_state collection represents the most recently available database state. Comments welcome on this one as usual. These changes are in r3811.
[sqlalchemy] Re: AttributeError: 'ColumnProperty' object has no attribute 'strategy'
Is it possible that your mapped class has a regular Python property with the same name as a mapped relation?

Klaus

On 23 Nov., 17:08, lur ibargutxi [EMAIL PROTECTED] wrote:

Hi! I made an application on my local machine, but when I tried to do the same on the server I got this problem:

    Module Python expression "view.indicators()", line 1, in expression
    Module Products.odr.lugabe_db.browser.sql, line 23, in indicators
    Module Products.odr.lugabe_db.query, line 48, in first_group
    Module sqlalchemy.orm.query, line 939, in all
    Module sqlalchemy.orm.query, line 895, in list
    Module sqlalchemy.orm.query, line 964, in __iter__
    Module sqlalchemy.orm.query, line 358, in select_whereclause
    Module sqlalchemy.orm.query, line 1197, in compile
    Module sqlalchemy.orm.interfaces, line 160, in setup
    Module sqlalchemy.orm.interfaces, line 143, in _get_context_strategy
    AttributeError: 'ColumnProperty' object has no attribute 'strategy'

Does anyone know about this problem?

-- Lur Ibargutxi [EMAIL PROTECTED]
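The name-collision guess rests on a plain Python mechanism that can be demonstrated without SQLAlchemy: a property is a data descriptor on the class, and it shadows any same-named state on the instance, so whatever the mapper instruments under that name becomes unreachable through normal attribute access. A sketch with invented names:

```python
class Widget(object):
    def __init__(self):
        # An ORM would normally instrument 'owner'; here we just put
        # instance state under the same name, bypassing the property.
        self.__dict__['owner'] = 'instance value'

    # A property on the class is a data descriptor: it takes precedence
    # over the instance __dict__ entry of the same name.
    @property
    def owner(self):
        return 'property value'

w = Widget()
print(w.owner)              # property value
print(w.__dict__['owner'])  # instance value -- still there, but shadowed
```

If a mapped relation and a hand-written property collide like this, the mapper's machinery for the attribute never gets consulted, which is consistent with a ColumnProperty turning up where a fully configured strategy was expected.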
[sqlalchemy] Re: Wrong SQL statement for mapped select involving in_
This is strange. I had problems reproducing the bug for a long time and was now quite happy that I succeeded. Yes, I'm using PostgreSQL 8.1.5 and psycopg2. And I still see the behavior that I reported. That should narrow the problem down to something in my local setup... I'm sorry for wasting your time.

Klaus

On 7 Nov., 01:00, Michael Bayer [EMAIL PROTECTED] wrote:

I can't reproduce this one. I see you have named bind params, so I tried with postgres. It also works with sqlite. Works with release 0.4.0 as well as the trunk. Output is (with echoing):

    SELECT testView.id AS testView_id, testView.data AS testView_data FROM (SELECT test.id AS id, test.data AS data FROM test WHERE test.id IN (%(test_id)s, %(test_id_1)s, %(test_id_2)s, %(test_id_3)s, %(test_id_4)s, %(test_id_5)s, %(test_id_6)s, %(test_id_7)s, %(test_id_8)s, %(test_id_9)s, %(test_id_10)s)) AS testView WHERE testView.id = %(param_1)s ORDER BY testView.id
    2007-11-06 18:57:05,087 INFO sqlalchemy.engine.base.Engine.0x..b0 {'test_id_3': 5, 'test_id_10': 45, 'param_1': 2, 'test_id_8': 30, 'test_id_9': 44, 'test_id_1': 3, 'test_id_2': 4, 'test_id': 2, 'test_id_4': 8, 'test_id_5': 10, 'test_id_6': 11, 'test_id_7': 13}
    <__main__.Test object at 0xc4e610>
    2007-11-06 18:57:05,090 INFO sqlalchemy.engine.base.Engine.0x..b0 SELECT referer.id AS referer_id, referer.fk AS referer_fk FROM referer WHERE referer.id = %(param_1)s ORDER BY referer.id
    2007-11-06 18:57:05,090 INFO sqlalchemy.engine.base.Engine.0x..b0 {'param_1': 1}
    2 <__main__.Test object at 0xc4e610>

try coming up with a reproducing test case and reopen ticket #853 if you can come up with it.

On Nov 6, 2007, at 10:33 AM, klaus wrote:

Hi all, the following mapped select results in the wrong query. The problem seems to be related to the number of values in a list passed to in_ and maybe to holes in the list of chosen values.

    from sqlalchemy import *
    from sqlalchemy.orm import *

    metadata = MetaData(...)
    metadata.bind.echo = True

    table = Table("test", metadata,
                  Column("id", Integer, primary_key=True),
                  Column("data", String))
    table.create()
    table.insert().execute([{"data": 1}, {"data": 2}, {"data": 3}, {"data": 4},
                            {"data": 5}, {"data": 6}, {"data": 7}, {"data": 8},
                            {"data": 9}, {"data": 10}, {"data": 11}, {"data": 12},
                            {"data": 13}, {"data": 14}, {"data": 15}, {"data": 30},
                            {"data": 44}, {"data": 55}])

    test = table.select(table.c.id.in_(
        [2, 3, 4, 5, 8, 10, 11, 13, 30, 44, 45])).alias("testView")

    class Test(object):
        pass

    mapper(Test, test)

    referer = Table("referer", metadata,
                    Column("id", Integer, primary_key=True),
                    Column("fk", Integer, ForeignKey("test.id")))
    referer.create()
    referer.insert().execute([{"fk": 2}])

    class Referer(object):
        pass

    mapper(Referer, referer, properties={"ref": relation(Test)})

    session = create_session()
    t = session.query(Test).get(2)
    print t
    r = session.query(Referer).get(1)
    print r.fk, r.ref

It prints "None 2 None", and the SQL statement for the first get should have param_1=2 instead of None.

    2007-11-06 16:26:30,394 INFO sqlalchemy.engine.base.Engine.0x..34 SELECT testView.id AS testView_id, testView.data AS testView_data FROM (SELECT test.id AS id, test.data AS data FROM test WHERE test.id IN (%(test_id)s, %(test_id_1)s, %(test_id_2)s, %(test_id_3)s, %(test_id_4)s, %(test_id_5)s, %(test_id_6)s, %(test_id_7)s, %(test_id_8)s, %(test_id_9)s, %(test_id_10)s)) AS testView WHERE testView.id = %(param_1)s ORDER BY testView.id
    2007-11-06 16:26:30,394 INFO sqlalchemy.engine.base.Engine.0x..34 {'test_id_3': 5, 'test_id_10': 45, 'param_1': None, 'test_id_8': 30, 'test_id_9': 44, 'test_id_1': 3, 'test_id_2': 4, 'test_id': 2, 'test_id_4': 8, 'test_id_5': 10, 'test_id_6': 11, 'test_id_7': 13}

Best regards
Klaus
[sqlalchemy] Re: Wrong SQL statement for mapped select involving in_
By the way, on a larger database, drop_all() runs into an endless loop. However, I have no small script to reproduce this yet.

Klaus

On 7 Nov., 16:39, klaus [EMAIL PROTECTED] wrote:

That's very kind. I'm on the trunk (0.4.1dev_r3747) and on linux. And I think your script (with a changed connection string, nothing else) fails on my machine. Here's the complete output:

    2007-11-07 16:33:14,848 INFO sqlalchemy.engine.base.Engine.0x..b4 select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where n.nspname=current_schema() and lower(relname)=%(name)s
    2007-11-07 16:33:14,848 INFO sqlalchemy.engine.base.Engine.0x..b4 {'name': 'referer'}
    2007-11-07 16:33:14,940 INFO sqlalchemy.engine.base.Engine.0x..b4 select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where n.nspname=current_schema() and lower(relname)=%(name)s
    2007-11-07 16:33:14,940 INFO sqlalchemy.engine.base.Engine.0x..b4 {'name': 'test'}
    2007-11-07 16:33:14,941 INFO sqlalchemy.engine.base.Engine.0x..b4 DROP TABLE referer
    2007-11-07 16:33:14,941 INFO sqlalchemy.engine.base.Engine.0x..b4 {}
    2007-11-07 16:33:15,304 INFO sqlalchemy.engine.base.Engine.0x..b4 COMMIT
    2007-11-07 16:33:15,397 INFO sqlalchemy.engine.base.Engine.0x..b4 DROP TABLE test
    2007-11-07 16:33:15,397 INFO sqlalchemy.engine.base.Engine.0x..b4 {}
    2007-11-07 16:33:15,430 INFO sqlalchemy.engine.base.Engine.0x..b4 COMMIT
    2007-11-07 16:33:15,439 INFO sqlalchemy.engine.base.Engine.0x..b4 select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where n.nspname=current_schema() and lower(relname)=%(name)s
    2007-11-07 16:33:15,439 INFO sqlalchemy.engine.base.Engine.0x..b4 {'name': 'test'}
    2007-11-07 16:33:15,440 INFO sqlalchemy.engine.base.Engine.0x..b4 select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where n.nspname=current_schema() and lower(relname)=%(name)s
    2007-11-07 16:33:15,440 INFO sqlalchemy.engine.base.Engine.0x..b4 {'name': 'referer'}
    2007-11-07 16:33:15,441 INFO sqlalchemy.engine.base.Engine.0x..b4 CREATE TABLE test (
        id SERIAL NOT NULL,
        data TEXT,
        PRIMARY KEY (id)
    )
    2007-11-07 16:33:15,442 INFO sqlalchemy.engine.base.Engine.0x..b4 {}
    2007-11-07 16:33:15,893 INFO sqlalchemy.engine.base.Engine.0x..b4 COMMIT
    2007-11-07 16:33:15,894 INFO sqlalchemy.engine.base.Engine.0x..b4 CREATE TABLE referer (
        id SERIAL NOT NULL,
        fk INTEGER,
        PRIMARY KEY (id),
        FOREIGN KEY(fk) REFERENCES test (id)
    )
    2007-11-07 16:33:15,894 INFO sqlalchemy.engine.base.Engine.0x..b4 {}
    2007-11-07 16:33:15,958 INFO sqlalchemy.engine.base.Engine.0x..b4 COMMIT
    2007-11-07 16:33:15,963 INFO sqlalchemy.engine.base.Engine.0x..b4 INSERT INTO test (data) VALUES (%(data)s)
    2007-11-07 16:33:15,963 INFO sqlalchemy.engine.base.Engine.0x..b4 [{'data': 1}, {'data': 2}, {'data': 3}, {'data': 4}, {'data': 5}, {'data': 6}, {'data': 7}, {'data': 8}, {'data': 9}, {'data': 10}, {'data': 11}, {'data': 12}, {'data': 13}, {'data': 14}, {'data': 15}, {'data': 30}, {'data': 44}, {'data': 55}]
    2007-11-07 16:33:15,968 INFO sqlalchemy.engine.base.Engine.0x..b4 COMMIT
    2007-11-07 16:33:15,969 INFO sqlalchemy.engine.base.Engine.0x..b4 select nextval('referer_id_seq')
    2007-11-07 16:33:15,969 INFO sqlalchemy.engine.base.Engine.0x..b4 None
    2007-11-07 16:33:15,970 INFO sqlalchemy.engine.base.Engine.0x..b4 INSERT INTO referer (id, fk) VALUES (%(id)s, %(fk)s)
    2007-11-07 16:33:15,970 INFO sqlalchemy.engine.base.Engine.0x..b4 {'fk': 2, 'id': 1L}
    2007-11-07 16:33:15,971 INFO sqlalchemy.engine.base.Engine.0x..b4 COMMIT
    2007-11-07 16:33:15,976 INFO sqlalchemy.engine.base.Engine.0x..b4 SELECT testView.id AS testView_id, testView.data AS testView_data FROM (SELECT test.id AS id, test.data AS data FROM test WHERE test.id IN (%(test_id)s, %(test_id_1)s, %(test_id_2)s, %(test_id_3)s, %(test_id_4)s, %(test_id_5)s, %(test_id_6)s, %(test_id_7)s, %(test_id_8)s, %(test_id_9)s, %(test_id_10)s)) AS testView WHERE testView.id = %(param_1)s ORDER BY testView.id
    2007-11-07 16:33:15,976 INFO sqlalchemy.engine.base.Engine.0x..b4 {'test_id_3': 5, 'test_id_10': 45, 'param_1': None, 'test_id_8': 30, 'test_id_9': 44, 'test_id_1': 3, 'test_id_2': 4, 'test_id': 2, 'test_id_4': 8, 'test_id_5': 10, 'test_id_6': 11, 'test_id_7': 13}
    None
    2007-11-07 16:33:15,979 INFO sqlalchemy.engine.base.Engine.0x..b4 SELECT referer.id AS referer_id, referer.fk AS referer_fk FROM referer WHERE referer.id = %(param_1)s ORDER BY referer.id
    2007-11-07 16:33:15,979 INFO sqlalchemy.engine.base.Engine.0x..b4 {'param_1': 1}
    2
    2007-11-07 16:33:15,982 INFO sqlalchemy.engine.base.Engine.0x..b4 SELECT testView.id AS testView_id, testView.data AS testView_data FROM (SELECT test.id AS id, test.data AS data FROM test WHERE test.id IN (%(test_id)s, %(test_id_1)s, %(test_id_2)s, %(test_id_3)s, %(test_id_4)s, %(test_id_5)s, %(test_id_6)s, %(test_id_7)s, %(test_id_8)s
[sqlalchemy] Re: Wrong SQL statement for mapped select involving in_
, 'test_id_4': 8, 'test_id_5': 10, 'test_id_6': 11, 'test_id_7': 13}
None

What I checked today is that Python (2.5.1 versus the installed 2.4.1) plus the latest psycopg2 does not make a difference.

Best regards
Klaus

On 7 Nov., 16:24, Michael Bayer [EMAIL PROTECTED] wrote:

On Nov 7, 2007, at 6:01 AM, klaus wrote:
This is strange. I had problems reproducing the bug for a long time and was now quite happy that I succeeded. Yes, I'm using PostgreSQL 8.1.5 and psycopg2. And I still see the behavior that I reported. That should narrow the problem down to something in my local setup... I'm sorry for wasting your time.

are you on release 0.4.0? attached is the full script i was using, i moved the tables around so that I could use drop_all()/create_all(). Additionally, I just remembered that there are often dictionary-ordering related issues which will occur on linux but not OSX (which is my normal platform). So I'll try on linux later today... or if anyone else wants to run the attached script with PG, SA 0.4 and a linux machine that would be helpful.

[attachment: test.py, 1K]
[sqlalchemy] A Query object seems to be altered by a count() - is this a bug?
Hi all,

when I try to build up a complicated query like this:

query = session.query(Class)
query = query.filter_by(...).add_entity(...).join(...)
count = query.count()
query = query.add_entity(...).join(...).order_by(...)
print query.all()

the last statement fails due to a broken SELECT. The error disappears if I remove the line with the query.count(). The following is an example to reproduce the behavior. I'm sorry that it is so complicated, but a certain complexity seems to be necessary to trigger the bug.
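(Editorial aside, not part of the original post.) The failure mode Klaus describes — count() leaving residue that corrupts later method calls — is exactly what a "generative" query design rules out: every method, count() included, works on a clone rather than mutating shared state. The class below is a toy model in plain Python to illustrate the idea; it is not SQLAlchemy's actual implementation, and all names are made up.

```python
import copy

class ToyQuery:
    """Toy generative query: every method returns a modified clone."""
    def __init__(self):
        self._joins = []
        self._order_by = None

    def _clone(self):
        q = copy.copy(self)
        q._joins = list(self._joins)  # don't share mutable state
        return q

    def join(self, target):
        q = self._clone()
        q._joins.append(target)
        return q

    def order_by(self, col):
        q = self._clone()
        q._order_by = col
        return q

    def count(self):
        # Derives its statement from a clone, so the query object
        # it was called on is left untouched.
        q = self._clone()
        return len(q._joins)  # stand-in for issuing SELECT COUNT(*)

q1 = ToyQuery().join("a")
q1.count()                    # must not alter q1
q2 = q1.join("b").order_by("n")
assert q1._joins == ["a"]     # unchanged by count()
assert q2._joins == ["a", "b"]
```

If count() instead built its COUNT statement by modifying self in place, the later add_entity()/join() calls would see the leftovers — which matches the broken SELECT reported here.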
[sqlalchemy] Re: A Query object seems to be altered by a count() - is this a bug?
binary AS binary_alias ON binary.fk2 = binary_alias.fk1 \nWHERE True GROUP BY binary.fk1, binary_alias.fk2) AS squared JOIN unary ON nullary.id = unary.fk \nWHERE %(param_1)s = squared.fk2 AND %(param_1_1)s = squared.fk1 ORDER BY squared.n'
{'param_1_1': 1, 'param_1': 1}

Best regards
Klaus

On 6 Nov., 15:40, klaus [EMAIL PROTECTED] wrote:

Hi all, when I try to build up a complicated query like this:

query = session.query(Class)
query = query.filter_by(...).add_entity(...).join(...)
count = query.count()
query = query.add_entity(...).join(...).order_by(...)
print query.all()

the last statement fails due to a broken SELECT. The error disappears if I remove the line with the query.count(). The following is an example to reproduce the behavior. I'm sorry that it is so complicated, but a certain complexity seems to be necessary to trigger the bug.
[sqlalchemy] Wrong SQL statement for mapped select involving in_
Hi all,

the following mapped select results in the wrong query. The problem seems to be related to the number of values in a list passed to in_ and maybe to holes in the list of chosen values.

from sqlalchemy import *
from sqlalchemy.orm import *

metadata = MetaData(...)
metadata.bind.echo = True

table = Table("test", metadata,
    Column("id", Integer, primary_key=True),
    Column("data", String))
table.create()
table.insert().execute([{"data": 1}, {"data": 2}, {"data": 3}, {"data": 4},
    {"data": 5}, {"data": 6}, {"data": 7}, {"data": 8}, {"data": 9},
    {"data": 10}, {"data": 11}, {"data": 12}, {"data": 13}, {"data": 14},
    {"data": 15}, {"data": 30}, {"data": 44}, {"data": 55}])

test = table.select(table.c.id.in_([2, 3, 4, 5, 8, 10, 11, 13, 30, 44, 45])).alias("testView")

class Test(object):
    pass
mapper(Test, test)

referer = Table("referer", metadata,
    Column("id", Integer, primary_key=True),
    Column("fk", Integer, ForeignKey("test.id")))
referer.create()
referer.insert().execute([{"fk": 2}])

class Referer(object):
    pass
mapper(Referer, referer, properties={"ref": relation(Test)})

session = create_session()
t = session.query(Test).get(2)
print t
r = session.query(Referer).get(1)
print r.fk, r.ref

It prints

None
2 None

and the SQL statement for the first get should have param_1=2 instead of None.

2007-11-06 16:26:30,394 INFO sqlalchemy.engine.base.Engine.0x..34 SELECT testView.id AS testView_id, testView.data AS testView_data FROM (SELECT test.id AS id, test.data AS data FROM test WHERE test.id IN (%(test_id)s, %(test_id_1)s, %(test_id_2)s, %(test_id_3)s, %(test_id_4)s, %(test_id_5)s, %(test_id_6)s, %(test_id_7)s, %(test_id_8)s, %(test_id_9)s, %(test_id_10)s)) AS testView WHERE testView.id = %(param_1)s ORDER BY testView.id
2007-11-06 16:26:30,394 INFO sqlalchemy.engine.base.Engine.0x..34 {'test_id_3': 5, 'test_id_10': 45, 'param_1': None, 'test_id_8': 30, 'test_id_9': 44, 'test_id_1': 3, 'test_id_2': 4, 'test_id': 2, 'test_id_4': 8, 'test_id_5': 10, 'test_id_6': 11, 'test_id_7': 13}

Best regards
Klaus
[sqlalchemy] Re: save/update and caching
On 2 Nov., 19:09, Michael Bayer [EMAIL PROTECTED] wrote:

On Nov 2, 2007, at 1:15 PM, klaus wrote:
Thanks, but this doesn't seem to do what I wanted. The merge modifies the object and therefore tries to update the underlying table on session.flush(). So it might work if you prevented the save also.

OK, ive committed the patch plus a fix + test for that specific behavior to the trunk. If you try out the trunk, youll see that you can merge(x, dont_load=True) and the objects get copied into the session with the same committed state as the incoming object; so that if no changes are present against the incoming object, no changes will be tracked on the merged version either.

Yep, this seems to do the trick. Nice work!

I would prefer a solution, though, that allowed these objects to be immutable like their tables. That is, copy.copy could be used but no constructor and no setattr/delattr (and no remove). My implementation currently catches these with a NotImplementedError.

immutable to me is a class behavior, so thats out of the SQLAlchemy realm IMHO. the merge() with dont_load right now will copy the committed state of the incoming object so it should act the same as the cached object. copy.copy() is not smart enough here since it cant detect child objects which are already present in the session (which is the issue youre having).

I agree, it is a class behavior, although one that I like very much. :-) Since the object has to be in the current session, which will reflect every change to the underlying read-only table, it is quite nice to catch programming errors on the first possible occasion.

By the way, I still don't know what the child objects are in my setup. The tables that I would like to cache resemble enumeration types -- an id and a name, not much more, no references to other objects and no backrefs from those objects referencing them. They are simple and often used, seemingly ideal candidates for holding in memory.
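(Editorial aside, not part of the original exchange.) The immutability Klaus asks for — copy.copy allowed, but no setattr/delattr after construction — is pure Python and independent of SQLAlchemy. A minimal sketch with hypothetical names, matching his description of catching mutations with a NotImplementedError:

```python
import copy

class FrozenEnum:
    """Cache entry that behaves like a read-only enumeration row:
    id and name are set once during construction, then the instance
    is sealed against further attribute changes."""
    def __init__(self, id, name):
        object.__setattr__(self, "id", id)
        object.__setattr__(self, "name", name)
        object.__setattr__(self, "_sealed", True)

    def __setattr__(self, key, value):
        if getattr(self, "_sealed", False):
            raise NotImplementedError("instances are immutable")
        object.__setattr__(self, key, value)

    def __delattr__(self, key):
        raise NotImplementedError("instances are immutable")

color = FrozenEnum(1, "red")
dup = copy.copy(color)          # copying bypasses __setattr__, so it works
assert (dup.id, dup.name) == (1, "red")
try:
    color.name = "blue"         # ...but mutation is rejected
except NotImplementedError:
    pass
else:
    raise AssertionError("expected NotImplementedError")
```

copy.copy works because the default copy protocol restores state via the instance __dict__ rather than through __setattr__, which is exactly the asymmetry Klaus wants for cache-friendly, write-protected rows.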
[sqlalchemy] Re: save/update and caching
Thanks, but this doesn't seem to do what I wanted. The merge modifies the object and therefore tries to update the underlying table on session.flush(). So it might work if you prevented the save also. (In my case, the update is not only a waste, it even fails because the table is write-protected.)

I would prefer a solution, though, that allowed these objects to be immutable like their tables. That is, copy.copy could be used but no constructor and no setattr/delattr (and no remove). My implementation currently catches these with a NotImplementedError.

Best regards
Klaus

On 31 Okt., 18:11, Michael Bayer [EMAIL PROTECTED] wrote:

On Oct 31, 2007, at 12:43 PM, klaus wrote:
When trying to cache the contents of some small tables, I followed a recipe given by Michael Bayer on this list (at least how I understood it): create a dead session without database connection and move all these objects into it. However, every outside object that references one of these objects pulls it into the current session. That's a problem because objects are created in a multithreaded environment with concurrent and relatively short-lived sessions. (That's the reason for the caching in the first place; otherwise the session alone could handle it.)

So I tried to copy an object on access from the cache and enter it into the session that requested it. (I had to fiddle with _instance_key and _sa_session_id because a plain session.update() wouldn't accept the copy.) session.query.get is overridden in a MapperExtension, so that references from objects in the current session also get copies of cached objects.

But despite all these precautions, original objects from the cache with their original (i.e. wrong) session ids end up in the current session all the time. How can they leak in? The current session is not even supposed to see these original objects in the cache. Is there a hidden connection between different sessions? I hope you can make sense of this vague description.
theres no connection between sessions. however trying to shuttle objects between two sessions invariably leads to issues like the above because of the relations between the objects. just modifying the lead object's session attributes wont help if the lead object references whole collections of things that are tied to the old session. youd have to design your operation to cascade along all relations. we do have some functions which you can make use of for cascading an operation, mapper.cascade_iterator() and mapper.cascade_callable(), although if you get into those then you are pretty much writing us a new library function.

the best function for use here is merge(), which was meant to move state between sessions, and it cascades along relations so that everything stays on one side of the equation. however, its not ideal for caching since it issues queries by itself and defeats the purpose of caching. but there is a ticket in trac to provide some merge() hook that is better suited for caching, so it would be a great help if you could test the attached patch; it provides a dont_load flag which should disable loading instances. if you then say myobject = session.merge(cachedobject, dont_load=True), the returned object is the copy you're looking for, cascaded all the way down across all relations.

[attachment: merge_dont_load.patch, 3K]
[sqlalchemy] save/update and caching
When trying to cache the contents of some small tables, I followed a recipe given by Michael Bayer on this list (at least how I understood it): create a dead session without database connection and move all these objects into it. However, every outside object that references one of these objects pulls it into the current session. That's a problem because objects are created in a multithreaded environment with concurrent and relatively short-lived sessions. (That's the reason for the caching in the first place; otherwise the session alone could handle it.)

So I tried to copy an object on access from the cache and enter it into the session that requested it. (I had to fiddle with _instance_key and _sa_session_id because a plain session.update() wouldn't accept the copy.) session.query.get is overridden in a MapperExtension, so that references from objects in the current session also get copies of cached objects.

But despite all these precautions, original objects from the cache with their original (i.e. wrong) session ids end up in the current session all the time. How can they leak in? The current session is not even supposed to see these original objects in the cache. Is there a hidden connection between different sessions? I hope you can make sense of this vague description.

Best regards
Klaus
[sqlalchemy] Re: Please add some __len__ methods
Seems like I am learning more Python on this list than I ever wanted...

On 22 Okt., 15:37, Michael Bayer [EMAIL PROTECTED] wrote:

On Oct 22, 2007, at 6:32 AM, klaus wrote:
Hi all, I wonder why some classes/objects implement part of a list interface - but without a __len__ method. Obvious examples are: Query has __iter__ and __getitem__, both of which access the database. __len__ would be a nice alternative to a basic count().

can't put __len__ on query for the reasons simon described. it would issue COUNT queries for every list() evaluation (which is basically every query). this is a python limitation that theres no way to force list() to use only __iter__ without calling __len__() first.

Session has __iter__ and __contains__. __len__ could be used to indicate when the session gets too large and should be cleared.

persistent, non-dirty objects in the session are weakly referenced in 0.4 and will automatically fall out of scope if not referenced elsewhere. that said, theres no reason __len__() couldnt be on Session anyway.
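(Editorial aside.) Michael's point — that list() consults __len__ before iterating — can be checked directly. The class below is a stand-in, not SQLAlchemy code; its __len__ counter shows why a __len__ that issued SELECT COUNT(*) would fire on every list(query):

```python
class FakeQuery:
    """Pretend ORM query: __len__ stands in for an expensive COUNT."""
    def __init__(self, rows):
        self.rows = rows
        self.len_calls = 0

    def __iter__(self):
        return iter(self.rows)

    def __len__(self):
        self.len_calls += 1   # imagine a SELECT COUNT(*) firing here
        return len(self.rows)

q = FakeQuery([1, 2, 3])
result = list(q)              # CPython asks the iterable for a size hint
assert result == [1, 2, 3]
assert q.len_calls >= 1       # __len__ was consulted by list()
```

CPython's list construction calls the iterable's __len__ (via its length-hint machinery) to preallocate storage before falling back to plain iteration, so there is indeed no way to hand an object with __len__ to list() without __len__ running.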
[sqlalchemy] Please add some __len__ methods
Hi all,

I wonder why some classes/objects implement part of a list interface - but without a __len__ method. Obvious examples are:

Query has __iter__ and __getitem__, both of which access the database. __len__ would be a nice alternative to a basic count().

Session has __iter__ and __contains__. __len__ could be used to indicate when the session gets too large and should be cleared.

I already suggested this some months ago but attracted no attention. ;-)

Best regards
Klaus
[sqlalchemy] Re: SQL error for mapped select (new in version 0.4) on PostgreSQL
Thanks a lot! The solution is so simple that I feel a little embarrassed...

On 16 Okt., 18:15, Michael Bayer [EMAIL PROTECTED] wrote:

On Oct 16, 2007, at 10:45 AM, klaus wrote:
The only thing that I could find out about the reason is that in engine/base.py (1290) context.column_labels contains the wrong entries. It should contain something like {..., 'table_column': 'table_column', ...}. In the case above, however, it contains {..., 'table_column': 'column', 'column': 'column', ...}. (That is, it contains two items for each column, but the values are 'column' instead of 'table_column'.) After that, I got lost. These values are generated by the postgres driver. But I could not find where it takes its input from the Alias object to generate something different than for the Table object.

that is exactly correct, and was one of two issues present in 0.4. both issues, one of which occurs in 0.3 and 0.4 and the other just in 0.4 (though its hiding in 0.3 to some degree as well) originate from two distinct name confusions that arise because you're using the name of the table as the name of the alias. If you name the alias something other than test then all issues go away.

so one issue was a hardcoded notion of bind parameter names used in the ORM by query.get(), which also is used for a many-to-one lazyload - it used the label of the column which in this case is test_id, and conflicted with the existing test_id name (the compiler would rename the param as test_id_1 but the Query object wasn't tracking that). So 0.3 now generates a random name whereas 0.4 uses anonymous bind parameters now (which are like random bind param names except they are compile-time generated in a deterministic fashion).

second issue is that the column_labels dictionary was being populated with column labels from all selects embedded in the statement, not just the top level one, so again the test_id column at the top level conflicted with the embedded id column.
since compilation maintains a stack of select objects, the fix is easy as it means we only operate when the stack is only one select object deep (i.e. its the outermost select). 0.3 rev 3624 and 0.4 rev 3625
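(Editorial aside.) The "anonymous bind parameters" Michael describes boil down to never handing out a name that is already taken — a later request for the same base name gets a deterministic counter suffix, the way test_id became test_id_1 in the log above. A toy uniquifier sketching just the naming idea (this is not SQLAlchemy's compiler):

```python
def make_namer():
    """Return a function that yields collision-free bind-parameter
    names by suffixing a counter on repeated requests."""
    seen = {}
    def name(base):
        if base not in seen:
            seen[base] = 0
            return base
        seen[base] += 1
        return "%s_%d" % (base, seen[base])
    return name

name = make_namer()
# The embedded SELECT claims test_id first; the outer get() must not
# silently reuse the same name, or its value ends up as None.
assert name("test_id") == "test_id"
assert name("test_id") == "test_id_1"
assert name("test_id") == "test_id_2"
```

The bug was precisely that the renamed parameter existed on the compiler side but the Query still supplied its value under the old, already-claimed name.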
[sqlalchemy] Wrong SQL statement for mapped select (versions 0.3.10, 0.3.11 and others)
SQLAlchemy generates wrong SELECTs for classes that are mapped to select expressions. Consider the following example:

from sqlalchemy import *

metadata = MetaData(..., echo=True)

table = Table("test", metadata,
    Column("id", Integer, primary_key=True),
    Column("data", String))
table.create()
table.insert().execute([{"data": 1}, {"data": 2}])

test = table.select(table.c.id != 1).alias("test")

class Test(object):
    pass
mapper(Test, test)

referer = Table("referer", metadata,
    Column("id", Integer, primary_key=True),
    Column("fk", Integer, ForeignKey("test.id")))
referer.create()
referer.insert().execute([{"fk": 2}])

class Referer(object):
    pass
mapper(Referer, referer, properties={"ref": relation(Test)})

session = create_session()
t = session.query(Test).get(2)
print t
r = session.query(Referer).get(1)
print r.fk, r.ref

This prints among others:

SELECT test.data AS test_data, test.id AS test_id FROM (SELECT test.id AS id, test.data AS data FROM test WHERE test.id != %(test_id)s) AS test WHERE test.id = %(test_id_1)s ORDER BY test.id
2007-10-16 15:37:21,817 INFO sqlalchemy.engine.base.Engine.0x..34 {'test_id_1': None, 'test_id': 2}
SELECT referer.fk AS referer_fk, referer.id AS referer_id FROM referer WHERE referer.id = %(referer_id)s ORDER BY referer.id
{'referer_id': 1}
SELECT test.data AS test_data, test.id AS test_id FROM (SELECT test.id AS id, test.data AS data FROM test WHERE test.id != %(test_id)s) AS test WHERE test.id = %(test_id_1)s ORDER BY test.id
{'test_id_1': None, 'test_id': 2}

The first nested SELECT is wrong: test_id should be 1 and test_id_1 should be 2. The same holds for the third SELECT that tries to follow the foreign key. Therefore, the program prints

None
2 None

Best regards
Klaus
[sqlalchemy] SQL error for mapped select (new in version 0.4) on PostgreSQL
The following used to work in SQLAlchemy 0.3:

from sqlalchemy import *
from sqlalchemy.orm import *

metadata = MetaData(...)
metadata.bind.echo = True

table = Table("test", metadata,
    Column("id", Integer, primary_key=True),
    Column("active", Boolean))
table.create()
table.insert().execute([{"active": False}, {"active": True}])

test = table.select(table.c.active).alias("test")

class Test(object):
    pass
mapper(Test, test)

session = create_session()
print session.query(Test).all()

The SQL statement looks fine:

SELECT test.id AS test_id, test.active AS test_active FROM (SELECT test.id AS id, test.active AS active FROM test WHERE test.active) AS test ORDER BY test.id
{}

But it fails with the following traceback:

Traceback (most recent call last):
  File "wrong_select_new.py", line 24, in ?
    t = session.query(Test).all()
  File "build/bdist.linux-i686/egg/sqlalchemy/orm/query.py", line 571, in all
  File "build/bdist.linux-i686/egg/sqlalchemy/orm/query.py", line 619, in __iter__
  File "build/bdist.linux-i686/egg/sqlalchemy/orm/query.py", line 624, in _execute_and_instances
  File "build/bdist.linux-i686/egg/sqlalchemy/orm/query.py", line 680, in instances
  File "build/bdist.linux-i686/egg/sqlalchemy/orm/mapper.py", line 1373, in _instance
  File "build/bdist.linux-i686/egg/sqlalchemy/orm/mapper.py", line 873, in identity_key_from_row
  File "build/bdist.linux-i686/egg/sqlalchemy/engine/base.py", line 1580, in __getitem__
  File "build/bdist.linux-i686/egg/sqlalchemy/engine/base.py", line 1384, in _get_col
  File "build/bdist.linux-i686/egg/sqlalchemy/util.py", line 84, in __getitem__
  File "build/bdist.linux-i686/egg/sqlalchemy/engine/base.py", line 1294, in lookup_key
sqlalchemy.exceptions.NoSuchColumnError: Could not locate column in row for column 'test.id'

The only thing that I could find out about the reason is that in engine/base.py (1290) context.column_labels contains the wrong entries. It should contain something like {..., 'table_column': 'table_column', ...}.
In the case above, however, it contains {..., 'table_column': 'column', 'column': 'column', ...}. (That is, it contains two items for each column, but the values are 'column' instead of 'table_column'.) After that, I got lost. These values are generated by the postgres driver. But I could not find where it takes its input from the Alias object to generate something different than for the Table object.
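(Editorial aside.) The column_labels confusion can be pictured as one flat dict fed from every SELECT in the statement: when the embedded select's unqualified names are written in alongside the outer labeled ones, last-write-wins clobbers the outer entries — the {'table_column': 'column'} shape Klaus observed. A toy illustration of the clobbering only (the structure and names here are assumptions, not the real compiler):

```python
# Outer select contributes properly labeled columns...
outer = [("test_id", "test_id"), ("test_data", "test_data")]
# ...but the embedded select leaks its unqualified names in as well.
inner = [("test_id", "id"), ("test_data", "data")]

labels = {}
for key, value in outer + inner:
    labels[key] = value   # last write wins

# The inner entries overwrote the outer ones, so a row lookup keyed
# on 'test_id' now resolves to the wrong result column.
assert labels["test_id"] == "id"   # should have stayed "test_id"
```

This is why the fix Michael describes restricts population of the dict to the outermost select on the compiler's stack.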
[sqlalchemy] Re: Wrong SQL statement for mapped select (versions 0.3.10, 0.3.11 and others)
Maybe I should add that this bug appears under PostgreSQL and persists in version 0.4.

On 16 Okt., 15:50, klaus [EMAIL PROTECTED] wrote:

SQLAlchemy generates wrong SELECTs for classes that are mapped to select expressions. Consider the following example:

from sqlalchemy import *

metadata = MetaData(..., echo=True)

table = Table("test", metadata,
    Column("id", Integer, primary_key=True),
    Column("data", String))
table.create()
table.insert().execute([{"data": 1}, {"data": 2}])

test = table.select(table.c.id != 1).alias("test")

class Test(object):
    pass
mapper(Test, test)

referer = Table("referer", metadata,
    Column("id", Integer, primary_key=True),
    Column("fk", Integer, ForeignKey("test.id")))
referer.create()
referer.insert().execute([{"fk": 2}])

class Referer(object):
    pass
mapper(Referer, referer, properties={"ref": relation(Test)})

session = create_session()
t = session.query(Test).get(2)
print t
r = session.query(Referer).get(1)
print r.fk, r.ref

This prints among others:

SELECT test.data AS test_data, test.id AS test_id FROM (SELECT test.id AS id, test.data AS data FROM test WHERE test.id != %(test_id)s) AS test WHERE test.id = %(test_id_1)s ORDER BY test.id
2007-10-16 15:37:21,817 INFO sqlalchemy.engine.base.Engine.0x..34 {'test_id_1': None, 'test_id': 2}
SELECT referer.fk AS referer_fk, referer.id AS referer_id FROM referer WHERE referer.id = %(referer_id)s ORDER BY referer.id
{'referer_id': 1}
SELECT test.data AS test_data, test.id AS test_id FROM (SELECT test.id AS id, test.data AS data FROM test WHERE test.id != %(test_id)s) AS test WHERE test.id = %(test_id_1)s ORDER BY test.id
{'test_id_1': None, 'test_id': 2}

The first nested SELECT is wrong: test_id should be 1 and test_id_1 should be 2. The same holds for the third SELECT that tries to follow the foreign key. Therefore, the program prints

None
2 None

Best regards
Klaus
[sqlalchemy] Re: postgresql: need schema-qualified foreign keys
Oops, obviously I made a mess of my program when pasting it into the web form. Of course, there was a foreign key right from the beginning. A whole line including a parenthesis and a comma dropped out:

referer = Table("referer", metadata,
    Column("id", Integer, primary_key=True),
    Column("ref", Integer, ForeignKey("subject.id")),
    schema="alt_schema")

Sorry for that. The rest of your post seems to imply that the patch indeed does work. I have to try that at home.

On 8 Jul., 23:24, Michael Bayer [EMAIL PROTECTED] wrote:

On Jul 8, 2007, at 11:38 AM, klaus wrote:
(Finally continuing this old thread.) The suggested fix does not seem to work. Here is an example: 1. To build the tables, create a schema test and run the following code:

cant reproduce. using trunk, with or without the patch, this script:

from sqlalchemy import *

metadata = BoundMetaData('postgres://scott:[EMAIL PROTECTED]/test', echo=True)

subject = Table("subject", metadata,
    Column("id", Integer, primary_key=True),
    schema="public")

referer = Table("referer", metadata,
    Column("id", Integer, primary_key=True),
    Column("ref", Integer),
    schema="alt_schema")

metadata.create_all()

produces:

CREATE TABLE public.subject (
    id SERIAL NOT NULL,
    PRIMARY KEY (id)
)

CREATE TABLE alt_schema.referer (
    id SERIAL NOT NULL,
    ref INTEGER,
    PRIMARY KEY (id)
)

The problem can be resolved by writing the ForeignKey more explicitly: ForeignKey("public.subject.id")

you should have the ForeignKey here since your mappers need it.

2. Now try to use the tables with the following similar code: (The main difference is autoload=True instead of the column definitions.)

the issue here is public being the default schema combined with reflection of tables, and it cant create the join condition.
the patch allows postgres reflection of this kind to work; the lack of a schema from the reflection sets referer's foreign key to point to the subject table in the default schema.

changeset 2866 commits this fix and two new unit tests testing combinations of PG tables between public and alt schemas and between two different alt schemas. the foreign key reflects correctly in both as well as an existing test that tests reflection across two tables in a single alt schema. for the example see: http://www.sqlalchemy.org/trac/changeset/2866
[sqlalchemy] Re: postgresql: need schema-qualified foreign keys
I'm not sure whether I understood your posting correctly. Now I've checked out revision 2867 and nothing has changed, that is, I still get the same exceptions.

Best regards
Klaus

On Jul 9, 9:44 am, klaus [EMAIL PROTECTED] wrote:

Oops, obviously I made a mess of my program when pasting it into the web form. Of course, there was a foreign key right from the beginning. A whole line including a parenthesis and a comma dropped out:

referer = Table("referer", metadata,
    Column("id", Integer, primary_key=True),
    Column("ref", Integer, ForeignKey("subject.id")),
    schema="alt_schema")

Sorry for that. The rest of your post seems to imply that the patch indeed does work. I have to try that at home.

On 8 Jul., 23:24, Michael Bayer [EMAIL PROTECTED] wrote:

On Jul 8, 2007, at 11:38 AM, klaus wrote:
(Finally continuing this old thread.) The suggested fix does not seem to work. Here is an example: 1. To build the tables, create a schema test and run the following code:

cant reproduce. using trunk, with or without the patch, this script:

from sqlalchemy import *

metadata = BoundMetaData('postgres://scott:[EMAIL PROTECTED]/test', echo=True)

subject = Table("subject", metadata,
    Column("id", Integer, primary_key=True),
    schema="public")

referer = Table("referer", metadata,
    Column("id", Integer, primary_key=True),
    Column("ref", Integer),
    schema="alt_schema")

metadata.create_all()

produces:

CREATE TABLE public.subject (
    id SERIAL NOT NULL,
    PRIMARY KEY (id)
)

CREATE TABLE alt_schema.referer (
    id SERIAL NOT NULL,
    ref INTEGER,
    PRIMARY KEY (id)
)

The problem can be resolved by writing the ForeignKey more explicitly: ForeignKey("public.subject.id")

you should have the ForeignKey here since your mappers need it.

2. Now try to use the tables with the following similar code: (The main difference is autoload=True instead of the column definitions.)

the issue here is public being the default schema combined with reflection of tables, and it cant create the join condition.
Postgres system tables do not return the name public when reflecting the foreign key; therefore leave it blank. the schema argument is specifically for when using a schema that is *not* the default schema. the patch allows postgres reflection of this kind to work; the lack of a schema from the reflection sets referer's foreign key to point to the subject table in the default schema.

changeset 2866 commits this fix and two new unit tests testing combinations of PG tables between public and alt schemas and between two different alt schemas. the foreign key reflects correctly in both as well as an existing test that tests reflection across two tables in a single alt schema. for the example see: http://www.sqlalchemy.org/trac/changeset/2866
[sqlalchemy] Re: How many objects are in a session
And while I am at it (this is going off topic): What do you think about len(query) in addition to, or instead of, query.count()? IMO, count() is nearly as SQLish as select(). Best regards, Klaus

On Jul 6, 3:20 pm, klaus [EMAIL PROTECTED] wrote: I noticed that Session has no method __len__. Wouldn't it be useful to add one, like this:

    --- orm/session.py      2007-07-06 14:38:22.0 +0200
    +++ orm/session.py.orig 2007-07-06 14:43:48.0 +0200
    @@ -642,6 +642,9 @@
         def __iter__(self):
             return iter(list(self.uow.new) + self.uow.identity_map.values())

    +    def __len__(self):
    +        return len(self.uow.new) + len(self.uow.identity_map)
    +
         def _get(self, key):
             return self.identity_map[key]

Then, one could flush() and clear() it if it grows too large. (Generally, I think any class with __iter__ should have __len__ too.) Best regards, Klaus
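The effect of the proposed __len__ can be illustrated with a stripped-down stand-in for Session. The FakeUnitOfWork/FakeSession classes here are simplified sketches of SQLAlchemy's unit-of-work internals, invented for this example:

```python
class FakeUnitOfWork:
    """Stand-in for the unit of work: pending objects plus identity map."""
    def __init__(self):
        self.new = []           # pending (unflushed) instances
        self.identity_map = {}  # persistent instances keyed by identity

class FakeSession:
    """Minimal sketch of a Session with the proposed __len__."""
    def __init__(self):
        self.uow = FakeUnitOfWork()

    def __iter__(self):
        return iter(list(self.uow.new) + list(self.uow.identity_map.values()))

    def __len__(self):
        # Proposed addition: count pending plus identity-mapped objects,
        # mirroring what __iter__ yields.
        return len(self.uow.new) + len(self.uow.identity_map)

s = FakeSession()
s.uow.new.append(object())
s.uow.identity_map[("User", 1)] = object()
assert len(s) == 2
assert len(list(s)) == len(s)  # __iter__ and __len__ agree
```

The invariant that len(s) equals the number of objects __iter__ yields is the point of the suggestion: any class with __iter__ over a finite collection can cheaply support __len__.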
[sqlalchemy] How many objects are in a session
I noticed that Session has no method __len__. Wouldn't it be useful to add one, like this:

    --- orm/session.py      2007-07-06 14:38:22.0 +0200
    +++ orm/session.py.orig 2007-07-06 14:43:48.0 +0200
    @@ -642,6 +642,9 @@
         def __iter__(self):
             return iter(list(self.uow.new) + self.uow.identity_map.values())

    +    def __len__(self):
    +        return len(self.uow.new) + len(self.uow.identity_map)
    +
         def _get(self, key):
             return self.identity_map[key]

Then, one could flush() and clear() it if it grows too large. (Generally, I think any class with __iter__ should have __len__ too.) Best regards, Klaus
[sqlalchemy] Re: Mapper based on a select generates wrong and unnecessary INSERTs
Yes, of course. ;-) But is it possible not to map certain columns? Preferably without using a select and listing every single column that should be present? Foreign key columns are obvious candidates to be left out, because the relations should be used instead. Currently, I always rename them into private attributes (with a leading underscore). BTW, the mapper might want to check that keys are nonempty strings and that no two columns are mapped to the same attribute (currently, the last one wins). Best regards, Klaus

On 2 Jul., 18:12, Michael Bayer [EMAIL PROTECTED] wrote: On Jul 2, 2007, at 10:40 AM, klaus wrote: I would rather ask: How can this behavior be avoided? Is it possible to mark some columns as read-only/no-write-back? Just remove the primary key columns of the non-write tables from the columns clause of the select.
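The sanity check suggested above — keys must be nonempty strings, and no two columns may map to the same attribute — could look roughly like this. This is a plain-Python sketch, not SQLAlchemy code; column_to_attr is a hypothetical column-to-attribute-name mapping:

```python
def check_property_keys(column_to_attr):
    """Validate a column -> attribute-name mapping, as suggested above.

    Raises ValueError on an empty/non-string key or when two columns
    would collide on the same attribute (instead of 'last one wins').
    """
    seen = {}  # attribute name -> first column claiming it
    for column, attr in column_to_attr.items():
        if not isinstance(attr, str) or not attr:
            raise ValueError("key for column %r must be a nonempty string" % (column,))
        if attr in seen:
            raise ValueError("columns %r and %r both map to attribute %r"
                             % (seen[attr], column, attr))
        seen[attr] = column

check_property_keys({"a.id": "id", "a.fk": "_fk"})  # OK: distinct attributes
try:
    check_property_keys({"a.id": "id", "b.id": "id"})  # collision: last one would win
except ValueError:
    print("collision detected")
```

Raising loudly at mapper-construction time would surface the silent "last one wins" behavior the post complains about.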
[sqlalchemy] Naming and mapping
Hi, I'm curious why mapped Selectables are named the way they are. Consider the following code:

    from sqlalchemy import *

    metadata = BoundMetaData(...)

    class Data(object):
        pass

    table1 = Table('table1', metadata,
        Column('id', Integer, nullable=False, primary_key=True),
    )
    table2 = Table('table2', metadata,
        Column('id', Integer, nullable=False, primary_key=True),
        Column('fk', Integer, ForeignKey('table1.id')),
    )

    table = join(table1, table2, table1.c.id == table2.c.fk)
    mapper(Data, table)
    print table.c.keys()
    # This prints ['table1_id', 'table2_id', 'table2_fk'] as expected.

    # Now let us delete the mapper and then add a select to the join:
    table = join(table1, table2.select(table2.c.id == 42).alias('s'),
                 table1.c.id == table2.c.fk)
    mapper(Data, table)
    print table.c.keys()
    # This prints ['table1_id', 's_id', 's_fk'], also as expected.

    # Now let us add the select in a different position.
    # (Assume that mappers are deleted again.)
    table = join(table1, table2,
                 table1.c.id == table2.c.fk).select(table2.c.id == 42).alias('s')
    mapper(Data, table)
    print table.c.keys()
    # This prints ['id', 'fk'].

Shouldn't there be three columns? And where is the prefix 's'? Best regards, Klaus
[sqlalchemy] Re: postgresql: need schema-qualified foreign keys
This looks like a good solution. I'll need some time to provide a test case, however. If the change breaks existing code, how are cross-schema references supposed to be handled? Best regards, Klaus

On 18 Jun., 21:54, Michael Bayer [EMAIL PROTECTED] wrote: On Jun 18, 2007, at 4:25 AM, [EMAIL PROTECTED] wrote: In my first experiments with elixir, I noticed that sqlalchemy doesn't handle foreign keys correctly on autoloaded tables. This has to do with schema handling in the postgresql driver. A foreign key referencing a table in the public schema from outside gets the table name without the schema part, and thus locates the table in its own schema. I've appended a trivial fix below.

I'm afraid this patch breaks several postgres reflection unit tests, since the schema attribute of Table is assumed to be the default schema if not present. By making it present here, it fails to locate tables within its own metadata. I found when trying to create the test case for this issue that it required explicitly stating the default schema name as the schema of the "inside" table. That is also a bug. The patch that fixes all the issues for the test I was able to create is:

    Index: lib/sqlalchemy/schema.py
    ===================================================================
    --- lib/sqlalchemy/schema.py    (revision 2742)
    +++ lib/sqlalchemy/schema.py    (working copy)
    @@ -701,7 +701,7 @@
                 raise exceptions.ArgumentError(
                     "Invalid foreign key column specification: " + self._colspec)
             if m.group(3) is None:
                 (tname, colname) = m.group(1, 2)
    -            schema = parenttable.schema
    +            schema = None
             else:
                 (schema, tname, colname) = m.group(1, 2, 3)
             table = Table(tname, parenttable.metadata, mustexist=True, schema=schema)

Meaning, if you say ForeignKey('sometable.somecolumn') for a particular Column, the schema is assumed to be the default schema, not the schema specified for the Table which contains the ForeignKey object.
This means creating a ForeignKey between two tables A and B, both with schema='myschema', would have to look like ForeignKey('myschema.a.somecol'), even though both tables are in the same 'myschema' schema. I'm OK with that, but I'm not sure how disruptive this change would be. If you can provide a test case illustrating your scenario (using Table/MetaData/Engine objects only; no elixir classes, please), that would help greatly.
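The patched behavior amounts to the following resolution rule for a ForeignKey column specification. This is a plain-Python sketch of the regex logic in schema.py above, not the actual implementation:

```python
def parse_colspec(colspec):
    """Split 'tname.colname' or 'schema.tname.colname' into parts.

    After the patch, a two-part spec leaves schema as None (i.e. the
    default schema) instead of inheriting the parent table's schema.
    """
    parts = colspec.split(".")
    if len(parts) == 2:
        tname, colname = parts
        schema = None  # patched: was parenttable.schema before
    elif len(parts) == 3:
        schema, tname, colname = parts
    else:
        raise ValueError("Invalid foreign key column specification: " + colspec)
    return schema, tname, colname

assert parse_colspec("subject.id") == (None, "subject", "id")
assert parse_colspec("myschema.a.somecol") == ("myschema", "a", "somecol")
```

This makes the trade-off concrete: ForeignKey('sometable.somecolumn') always means the default schema, so same-schema references inside a non-default schema must spell the schema out.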
[sqlalchemy] Re: How to map a Selectable such that objects can be created/inserted, updated, and deleted?
Sorry, but I don't see how the select_table parameter (used in inheritance hierarchies) relates to my question. Selecting things from a Selectable is not much of a problem, for example, and I need more than one table.

On 18 Jun., 13:00, svilen [EMAIL PROTECTED] wrote: mapper() has a selectable= argument, go read about it. It can do just about anything u fancy. Is there a way to create object mappers with "bean managed persistence", as the Java folks would call it? What I would like to do is to map a class to a Selectable such as a join. Now, when an object is created, its attributes should be inserted into several tables. Likewise, updates or deletes of objects should spread over several tables. For this to happen, I think I would need to hook into the ORM. Any ideas? I'm afraid that this is connected with the VIEWs on the sqlalchemy todo list. Best regards, Klaus
[sqlalchemy] Re: How to map a Selectable such that objects can be created/inserted, updated, and deleted?
The MapperExtension might be what I need, however.

On 18 Jun., 13:48, [EMAIL PROTECTED] wrote: Sorry, but I don't see how the select_table parameter (used in inheritance hierarchies) relates to my question. Selecting things from a Selectable is not much of a problem, for example, and I need more than one table. On 18 Jun., 13:00, svilen [EMAIL PROTECTED] wrote: mapper() has a selectable= argument, go read about it. It can do just about anything u fancy. Is there a way to create object mappers with "bean managed persistence", as the Java folks would call it? What I would like to do is to map a class to a Selectable such as a join. Now, when an object is created, its attributes should be inserted into several tables. Likewise, updates or deletes of objects should spread over several tables. For this to happen, I think I would need to hook into the ORM. Any ideas? I'm afraid that this is connected with the VIEWs on the sqlalchemy todo list. Best regards, Klaus
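The idea of hooking into the ORM's flush can be sketched without SQLAlchemy at all: an extension object whose before_insert hook splits one object's attributes across several tables. Everything here — the class names, the hook signature, and the dict-based "tables" — is invented for illustration; SQLAlchemy's real MapperExtension API differs:

```python
class SplitInsertExtension:
    """Sketch: route different attributes of one object to two tables."""
    def __init__(self, tables):
        # name -> list of row dicts, a stand-in for real database tables
        self.tables = tables

    def before_insert(self, instance):
        # Spread the object's attributes over two tables, as proposed above.
        self.tables["person"].append({"id": instance.id, "name": instance.name})
        self.tables["address"].append({"person_id": instance.id, "city": instance.city})

class Person:
    def __init__(self, id, name, city):
        self.id, self.name, self.city = id, name, city

tables = {"person": [], "address": []}
ext = SplitInsertExtension(tables)
ext.before_insert(Person(1, "Klaus", "Berlin"))
assert tables["person"] == [{"id": 1, "name": "Klaus"}]
assert tables["address"] == [{"person_id": 1, "city": "Berlin"}]
```

The point of the sketch is the shape of the hook: one logical object, one hook call, several physical inserts — which is what mapping a class to a join would need for writes.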