[sqlalchemy] Re: Questions about polymorphic mappers
Michael Bayer wrote:

> i think using the polymorphic_map is OK. i downplayed its existence since
> I felt it was confusing to people, which is also the reason i made the
> _polymorphic_map argument to mapper private; it was originally public. but
> it seemed like it was producing two ways of doing the same thing so i made
> it private.

OK - I'll carry on using that then.

> using class_mapper() function instead of class.mapper

Ah - that's what I was missing. I hadn't seen the class_mapper function. Thanks for that.

> as far as having multiple polymorphic_identity values map to the same
> class, i would think we could just have polymorphic_identity be a list
> instead of a scalar. right now, if you just inserted multiple values for
> the same class in polymorphic_map, it would *almost* work except that the
> save() process is hardwiring the polymorphic_on column to the single
> polymorphic_identity value no matter what it's set to. so attached is an
> untested patch which accepts either a scalar or a list value for
> polymorphic_identity, and if it's a list then instances need their
> polymorphic_on attribute set to a valid entry before flushing. try this
> out and see if it does what you need, and i can easily enough add this to
> the trunk to be available in the next release (though i'd need to write
> some tests also).

I think this would definitely be a useful feature, and in fact I was originally going to attempt (or at least suggest!) something like that myself. I'll try the patch and let you know how well it works.

However, I still have a situation where I would like to be able to use a default class for unknown types. I don't want to hard-code all the possible options up-front - only the ones that I actually want to treat specially.
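Conceptually, the polymorphic_map being discussed is just a dict from the polymorphic_on column's value to a mapped class, so several identities sharing one class - and a default for unknown values - reduces to a dict lookup. A pure-Python sketch of that idea (class names are illustrative, not SQLAlchemy internals):

```python
# Sketch only: a polymorphic map is conceptually a dict from the
# discriminator value to a mapped class, so several identities can share
# a class and unknown values can fall back to a default.
class Employee:
    pass

class Manager(Employee):
    pass

polymorphic_map = {'manager': Manager, 'supervisor': Manager}

def class_for_identity(kind, default=Employee):
    # Unknown discriminator values fall back to the default class.
    return polymorphic_map.get(kind, default)
```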
I've been playing around with some different options, and this is what I've ended up with:

    class EmployeeMeta(type):
        def __call__(cls, kind, _fix_class=True, **kwargs):
            if not _fix_class:
                return type.__call__(cls, kind=kind, **kwargs)
            cls = get_employee_class(kind)
            return cls(kind=kind, _fix_class=False, **kwargs)

    def get_employee_class(kind):
        if kind == 'manager':
            return Manager
        else:
            return Employee

    class Employee(object):
        __metaclass__ = EmployeeMeta

    class Manager(Employee):
        pass

    class EmployeeMapperExtension(sa.MapperExtension):
        def create_instance(self, mapper, selectcontext, row, class_):
            cls = get_employee_class(row[employee_table.c.kind])
            if class_ != cls:
                return sa.class_mapper(cls)._instance(selectcontext, row)
            return sa.EXT_PASS

    assign_mapper(ctx, Employee, employee_table,
                  extension=EmployeeMapperExtension())
    assign_mapper(ctx, Manager, inherits=Employee.mapper)

This seems to do the right thing - Manager instances get created for managers, but any other row becomes an Employee. To add a subclass for another row type, I just need to adapt the get_employee_class function and add another call to assign_mapper. With a bit more work in the metaclass, it could all be done with a special attribute in the subclass.

The only thing I'm not sure about is the mapper extension - is it OK to call the mapper._instance method, or is there a better way to do this?

Thanks again,

Simon

Attachment: inheritance_test.py

You received this message because you are subscribed to the Google Groups sqlalchemy group. To post to this group, send email to sqlalchemy@googlegroups.com. To unsubscribe from this group, send email to [EMAIL PROTECTED]. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en
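Stripped of the mapper wiring, the metaclass dispatch in Simon's snippet can be exercised standalone. A runnable sketch (Python 3 `metaclass=` syntax in place of the original `__metaclass__` attribute; the SQLAlchemy parts are omitted):

```python
# Standalone sketch of the metaclass dispatch from the post, with the
# SQLAlchemy parts removed.
class EmployeeMeta(type):
    def __call__(cls, kind, _fix_class=True, **kwargs):
        if not _fix_class:
            # Second pass: actually construct the chosen class.
            return type.__call__(cls, kind=kind, **kwargs)
        # First pass: swap in the class that matches this row's kind.
        cls = get_employee_class(kind)
        return cls(kind=kind, _fix_class=False, **kwargs)

def get_employee_class(kind):
    # Unknown kinds fall back to the base Employee class.
    return Manager if kind == 'manager' else Employee

class Employee(metaclass=EmployeeMeta):
    def __init__(self, kind, **kwargs):
        self.kind = kind

class Manager(Employee):
    pass
```

Calling `Employee('manager')` yields a Manager instance, while any unrecognized kind yields a plain Employee, which is the default-class behavior the post is after.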
[sqlalchemy] Re: left join with mappers ?
In fact what I want to be able to do is:

    SELECT a.id,
           (SELECT b.name
            FROM invasive_names b, languages c
            WHERE b.invasive_id = a.id
              AND b.language_id = c.id
              AND c.iso_code = 'en') AS name_en
    FROM invasives a
    ORDER BY foo;

where the mappers are: a = Invasive, b = InvasiveName, c = Language

Julien Cigar wrote:

> Hello,
>
> I'm using SQLAlchemy 0.3.3. I have 3 tables:
>
> - invasives (mapped to class Invasive)
> - languages (mapped to class Language)
> - invasive_names (mapped to class InvasiveName)
>
> In other words, I have a table with species which could have a scientific
> name, an English name, etc. I'm using the active mapper extension and I
> have a problem when I sort on the English name of the species with the
> following:
>
>     Invasive.select(
>         and_(
>             Invasive.join_to('names'),
>             InvasiveName.join_to('language'),
>             Language.c.iso_code == 'en',
>         ),
>         order_by=InvasiveName.c.name)
>
> The problem I have is that when a species has no English name, it is not
> selected... In other words, I want to do a LEFT JOIN in place of a JOIN
> with the mapper... How can I do this?
>
> In advance, thanks,
>
> Julien

--
Julien Cigar
Belgian Biodiversity Platform
http://www.biodiversity.be
Université Libre de Bruxelles
Campus de la Plaine CP 257
Bâtiment NO, Bureau 4 N4 115C (Niveau 4)
Boulevard du Triomphe, entrée ULB 2
B-1050 Bruxelles
office: [EMAIL PROTECTED]
home: [EMAIL PROTECTED]
[sqlalchemy] Column aliases
Hi there,

This relates to TurboGears, but is really an SA question. I've customized TG authentication/authorization to use my autoloaded tables in Postgres with SQLAlchemy 0.3.3. In my schema I have User.c.uid, the login name of the users, as a primary key. TG uses a User mapper with two distinct columns: User.c.user_id (the primary key) and User.c.user_name (the login name). Since I am an avid fan of meaningful primary keys (and have a legacy db to support), I want to keep things my way, but TG does some user handling that I have to fix. So, to avoid patches to the TG source or useless sub-classing, I'd like to access the same column by any of the three names. I cannot do that with a Python property on the mapper because TG should be able to use get_by and friends. I've come up with:

    assign_mapper(context, User, tbl['users'], properties={
        'user_id':   tbl['users'].c.uid,  # alias for SqlAlchemyIdentity
        'user_name': tbl['users'].c.uid,  # alias for SqlAlchemyIdentity
        'uid':       tbl['users'].c.uid,
    })

This seems to work (I added the third property to make 'uid' reappear!), but it makes it impossible, for instance, to create new users:

    In [1]: user = User(uid='xxx')
    In [2]: session.flush()
    [...]
    SQLError: (IntegrityError) null value in column uid violates not-null constraint
    'INSERT INTO users (uid, nome, cognome, codice_fiscale) VALUES (%(uid)s, %(name)s, %(surname)s)'
    {'surname': None, 'uid': None, 'name': None}

    In [3]: user = User(user_id='xxx')
    In [4]: session.flush()
    [...]
    SQLError: (IntegrityError) null value in column uid violates not-null constraint
    'INSERT INTO users (uid, nome, cognome, codice_fiscale) VALUES (%(uid)s, %(name)s, %(surname)s)'
    {'surname': None, 'uid': None, 'name': None}

I reckon I should probably go ahead and patch TG, but maybe there is a clean way to do what I have in mind?

Thank you.
[sqlalchemy] how to find out the last insert id ?
what is the easiest way to find out the last insert id? (MySQL 5)

    supplast = select([func.last_insert_id()],
                      app_schema.SupplierTable.c.pr_supplier_ID > 0).execute().fetchone()[0]

does not work for some reason.

Dennis
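For context, here is what "last insert id" means one level down, at the DB-API cursor. This is a sketch using the stdlib sqlite3 driver as a stand-in for MySQL (where the same cursor attribute is backed by MySQL's LAST_INSERT_ID()); it is not the SQLAlchemy API itself:

```python
import sqlite3

# DB-API level illustration of "last insert id": most drivers expose the
# autoincrement value of the last INSERT on the cursor. sqlite3 calls it
# lastrowid; with MySQL it is backed by LAST_INSERT_ID().
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE supplier (id INTEGER PRIMARY KEY, name TEXT)')
cur = conn.execute("INSERT INTO supplier (name) VALUES ('acme')")
print(cur.lastrowid)  # -> 1
```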
[sqlalchemy] Re: autoloading oracle tables and the owner parameter
Thanks, I will give that a try.
[sqlalchemy] Re: How to query like sys_id=42 AND ts_created 90 minutes ago ?
You can shorten it a little by having the db do the date operation:

    History.c.ts_created > func.now() - '90 minutes'

On 1/11/07, Chris Shenton [EMAIL PROTECTED] wrote:

> I've got a bunch of history and other timestamped information I will need
> to query against. The columns are created with type DateTime and get set
> upon row creation:
>
>     history_table = Table(
>         'history', metadata,
>         Column('history_id', Integer, primary_key=True),
>         Column('system_id', Integer, ForeignKey('system.system_id'), nullable=False),
>         Column('ts_created', DateTime, default=func.current_timestamp()),
>         Column('ts_updated', DateTime, onupdate=func.current_timestamp()),
>     )
>
> I'm going to want to do lots of queries on this 'history' table for a
> specific 'system_id' and a 'ts_created' within some duration in the past --
> like 5 or 60 minutes. It's taken me a while to figure out the SQLAlchemy
> syntax to make this work and it seems a bit verbose:
>
>     session.query(History).select(
>         and_(History.c.system_id == 42,
>              History.c.ts_created > datetime.datetime.now() -
>                                     datetime.timedelta(minutes=90)))
>
> Is there a better, more concise way to say this? Thanks.
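The same "let the database do the date math" idea can be sketched at the raw SQL level with the stdlib sqlite3 module, where sqlite's datetime('now', '-90 minutes') stands in for func.now() - '90 minutes' (the table is shortened to the relevant columns):

```python
import sqlite3

# Letting the database do the date arithmetic: rows created within the
# last 90 minutes for a given system_id. sqlite's datetime() function
# stands in for the func.now() - '90 minutes' expression.
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE history ("
             " system_id INTEGER,"
             " ts_created TEXT DEFAULT (datetime('now')))")
conn.execute("INSERT INTO history (system_id) VALUES (42)")
rows = conn.execute(
    "SELECT system_id FROM history"
    " WHERE system_id = 42 AND ts_created > datetime('now', '-90 minutes')"
).fetchall()
print(rows)  # -> [(42,)]
```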
[sqlalchemy] Re: Column aliases
to have aliases of properties that are usable with get_by(), use the synonym function, described in: http://www.sqlalchemy.org/docs/adv_datamapping.myt#advdatamapping_properties_overriding
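The attribute-aliasing half of what synonym() does can be sketched in plain Python with a forwarding descriptor; the real synonym() additionally registers the alias with the mapper so that get_by() and friends see it. Names below are illustrative:

```python
class Alias:
    # Forwarding descriptor: reads and writes go to another attribute,
    # which is roughly the attribute-level half of what synonym() does.
    def __init__(self, target):
        self.target = target

    def __get__(self, obj, owner=None):
        if obj is None:
            return self
        return getattr(obj, self.target)

    def __set__(self, obj, value):
        setattr(obj, self.target, value)

class User:
    user_id = Alias('uid')    # alias for the real primary-key column
    user_name = Alias('uid')

    def __init__(self, uid):
        self.uid = uid
```

With this, User(uid='xxx').user_id and .user_name both read and write the single underlying uid attribute, so there is only one real column to populate on INSERT.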
[sqlalchemy] Re: left join with mappers ?
Julien Cigar wrote:

> In fact what I want to be able to do is:
>
>     SELECT a.id,
>            (SELECT b.name
>             FROM invasive_names b, languages c
>             WHERE b.invasive_id = a.id
>               AND b.language_id = c.id
>               AND c.iso_code = 'en') AS name_en
>     FROM invasives a
>     ORDER BY foo;
>
> where the mappers are: a = Invasive, b = InvasiveName, c = Language

right...since you are redefining the list of columns that you'd like to create instances from - in other words, you'd like to get the a.name_en column out of a different table than the invasive table - you would have to either use the instances() method off of a session.query(Invasive), or make a new mapper that maps to that query specifically.

otherwise, if you just want to put a large series of joins in a query.select(), instead of using join_to()/join_via() you can just create the joins yourself (such as invasives.join(invasive_names).outerjoin(languages) or whatever) and send it to query.select() using the from_obj=[myjoin] parameter.
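At the SQL level, the outerjoin renders a LEFT JOIN, which is what keeps species without an English name in the result. A runnable sketch with the stdlib sqlite3 module (table and column names follow the thread; the sample rows are made up):

```python
import sqlite3

# The LEFT JOIN that an outerjoin() would render, shown as raw SQL so
# that species without an English name still come back with a NULL name.
conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE invasives (id INTEGER PRIMARY KEY);
    CREATE TABLE languages (id INTEGER PRIMARY KEY, iso_code TEXT);
    CREATE TABLE invasive_names (invasive_id INTEGER, language_id INTEGER, name TEXT);
    INSERT INTO invasives VALUES (1), (2);
    INSERT INTO languages VALUES (1, 'en');
    INSERT INTO invasive_names VALUES (1, 1, 'kudzu');
""")
rows = conn.execute("""
    SELECT a.id, b.name
    FROM invasives a
    LEFT JOIN invasive_names b ON b.invasive_id = a.id
    LEFT JOIN languages c ON b.language_id = c.id AND c.iso_code = 'en'
    ORDER BY a.id
""").fetchall()
print(rows)  # -> [(1, 'kudzu'), (2, None)]
```

Species 2 has no name row at all, but the LEFT JOIN still returns it with name NULL, whereas a plain JOIN would drop it.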
[sqlalchemy] Re: Questions about polymorphic mappers
King Simon-NFHD78 wrote:

>     class EmployeeMapperExtension(sa.MapperExtension):
>         def create_instance(self, mapper, selectcontext, row, class_):
>             cls = get_employee_class(row[employee_table.c.kind])
>             if class_ != cls:
>                 return sa.class_mapper(cls)._instance(selectcontext, row)
>             return sa.EXT_PASS
>
> The only thing I'm not sure about is the mapper extension - is it OK to
> call the mapper._instance method, or is there a better way to do this?

if you call _instance like that, you're already well inside the _instance method of the calling mapper...so it's not a great way to do it, since a lot of redundant stuff will happen which may have negative side effects. i'd rather just add another plugin point on MapperExtension for this, which takes place before the polymorphic decision stage at the top of the _instance method, like get_polymorphic_identity(). that way you could do all of this stuff cleanly in an extension (and i'd do that instead of making polymorphic_identity into a list). how's that sound?
[sqlalchemy] Re: DynamicMetaData + create/drop table = confusion
uggh, pysqlite 2.1 fixed some other problems I was having in transactions iirc. I'm scared to upgrade/downgrade :(

On 1/11/07, Michael Bayer [EMAIL PROTECTED] wrote:

> the important thing is your sqlite version. im on 3.2.1. my pysqlite
> seems to be 2.0.2? maybe a new pysqlite bug, not sure. also the sql echo
> shows that something is weird...its inserting a row, then deleting it,
> using id 1, which is what the id should be. the rowcount should
> definitely be 1 and not 0.
>
> On Jan 11, 2007, at 8:30 PM, Kumar McMillan wrote:
>
>> hi. the reason for the flush strangeness is I grabbed these statements
>> from separate areas of the app to reproduce the scenario (but I guess
>> the app needs cleanup, heh). What version of pysqlite did you test with?
>> Mine is pysqlite 2.3.2 on python 2.4.3, sqlalchemy dev r2183; maybe
>> that's all it is (memory weirdness). below is my sql, output, etc,
>> showing the same test failing for me.
>>
>> I see what you mean about the sqlite instances. Actually when I run it
>> w/ the 2nd call as a connection to postgres, there are no errors. Since
>> that is a better representation of my real problem I'm not so worried
>> about the failure anymore. And currently I'm working around it all using
>> two separate BoundMetaData, which is OK to me. But I'm still curious as
>> to what could be wrong with my setup.

_________________________________________________

2007-01-11 19:16:55,779 INFO sqlalchemy.engine.base.Engine.0x..30 PRAGMA table_info(offers)
2007-01-11 19:16:55,779 INFO sqlalchemy.engine.base.Engine.0x..30 {}
2007-01-11 19:16:55,781 INFO sqlalchemy.engine.base.Engine.0x..30 CREATE TABLE offers (
    id INTEGER NOT NULL,
    name TEXT,
    PRIMARY KEY (id)
)
2007-01-11 19:16:55,781 INFO sqlalchemy.engine.base.Engine.0x..30 None
2007-01-11 19:16:55,782 INFO sqlalchemy.engine.base.Engine.0x..30 COMMIT
2007-01-11 19:16:55,784 INFO sqlalchemy.engine.base.Engine.0x..30 BEGIN
2007-01-11 19:16:55,786 INFO sqlalchemy.engine.base.Engine.0x..30 INSERT INTO offers (name) VALUES (?)
2007-01-11 19:16:55,786 INFO sqlalchemy.engine.base.Engine.0x..30 ['foobar']
2007-01-11 19:16:55,787 INFO sqlalchemy.engine.base.Engine.0x..30 COMMIT
2007-01-11 19:16:55,790 INFO sqlalchemy.engine.base.Engine.0x..30 SELECT offers.id AS offers_id, offers.name AS offers_name FROM offers ORDER BY offers.oid
2007-01-11 19:16:55,790 INFO sqlalchemy.engine.base.Engine.0x..30 []
2007-01-11 19:16:55,791 INFO sqlalchemy.engine.base.Engine.0x..30 BEGIN
2007-01-11 19:16:55,792 INFO sqlalchemy.engine.base.Engine.0x..30 DELETE FROM offers WHERE offers.id = ?
2007-01-11 19:16:55,793 INFO sqlalchemy.engine.base.Engine.0x..30 [1]
2007-01-11 19:16:55,793 INFO sqlalchemy.engine.base.Engine.0x..30 COMMIT
2007-01-11 19:16:55,794 INFO sqlalchemy.engine.base.Engine.0x..30 SELECT offers.id AS offers_id, offers.name AS offers_name FROM offers ORDER BY offers.oid
2007-01-11 19:16:55,795 INFO sqlalchemy.engine.base.Engine.0x..30 []
2007-01-11 19:16:55,796 INFO sqlalchemy.engine.base.Engine.0x..30 PRAGMA table_info(offers)
2007-01-11 19:16:55,796 INFO sqlalchemy.engine.base.Engine.0x..30 {}
2007-01-11 19:16:55,796 INFO sqlalchemy.engine.base.Engine.0x..30 DROP TABLE offers
2007-01-11 19:16:55,797 INFO sqlalchemy.engine.base.Engine.0x..30 None
2007-01-11 19:16:55,797 INFO sqlalchemy.engine.base.Engine.0x..30 COMMIT
2007-01-11 19:16:55,799 INFO sqlalchemy.engine.base.Engine.0x..30 PRAGMA table_info(offers)
2007-01-11 19:16:55,799 INFO sqlalchemy.engine.base.Engine.0x..30 {}
2007-01-11 19:16:55,800 INFO sqlalchemy.engine.base.Engine.0x..30 CREATE TABLE offers (
    id INTEGER NOT NULL,
    name TEXT,
    PRIMARY KEY (id)
)
2007-01-11 19:16:55,800 INFO sqlalchemy.engine.base.Engine.0x..30 None
2007-01-11 19:16:55,800 INFO sqlalchemy.engine.base.Engine.0x..30 COMMIT
2007-01-11 19:16:55,802 INFO sqlalchemy.engine.base.Engine.0x..30 BEGIN
2007-01-11 19:16:55,803 INFO sqlalchemy.engine.base.Engine.0x..30 INSERT INTO offers (name) VALUES (?)
2007-01-11 19:16:55,803 INFO sqlalchemy.engine.base.Engine.0x..30 ['foobar']
2007-01-11 19:16:55,804 INFO sqlalchemy.engine.base.Engine.0x..30 COMMIT
2007-01-11 19:16:55,805 INFO sqlalchemy.engine.base.Engine.0x..30 SELECT offers.id AS offers_id, offers.name AS offers_name FROM offers ORDER BY offers.oid
2007-01-11 19:16:55,805 INFO sqlalchemy.engine.base.Engine.0x..30 []
2007-01-11 19:16:55,806 INFO sqlalchemy.engine.base.Engine.0x..30 BEGIN
2007-01-11 19:16:55,807 INFO sqlalchemy.engine.base.Engine.0x..30 DELETE FROM offers WHERE offers.id = ?
2007-01-11 19:16:55,807 INFO sqlalchemy.engine.base.Engine.0x..30 [1]
2007-01-11 19:16:55,808 INFO sqlalchemy.engine.base.Engine.0x..30 ROLLBACK

Traceback (most recent call last):
  File test_sa_concurrent.py, line 56, in ?
    db_roundtrip('sqlite:///:memory:')
  File test_sa_concurrent.py, line 42, in
[sqlalchemy] Re: DynamicMetaData + create/drop table = confusion
oh, nice. upgrading sqlite to 3.3.7 and rebuilding pysqlite2 fixed it -- sorry for the noise.

On 1/12/07, Kumar McMillan [EMAIL PROTECTED] wrote:

> uggh, pysqlite 2.1 fixed some other problems I was having in transactions
> iirc. I'm scared to upgrade/downgrade :(
>
> On 1/11/07, Michael Bayer [EMAIL PROTECTED] wrote:
>
>> the important thing is your sqlite version. im on 3.2.1. my pysqlite
>> seems to be 2.0.2? maybe a new pysqlite bug, not sure. also the sql echo
>> shows that something is weird...its inserting a row, then deleting it,
>> using id 1, which is what the id should be. the rowcount should
>> definitely be 1 and not 0.
[sqlalchemy] Re: Mapping special child rows
Okay, I have another question related to this. Now that I have max_order defined, I want to do a query on it (give me the users whose max_order == 5). My code is:

    max_orders_by_user = select(
        [func.max(orders.c.order_id).label('order_id')],
        group_by=[orders.c.user_id]).alias('max_orders_by_user')

    max_orders = orders.select(
        orders.c.order_id == max_orders_by_user.c.order_id).alias('max_orders')

    mapper(User, users, properties={
        'orders': relation(class_mapper(Order), backref='user'),
        'max_order': relation(mapper(Order, max_orders, non_primary=True),
                              uselist=False),
        'addresses': relation(mapper(Address, addresses), backref='user'),
    })

It seemed like if I labeled the column something unique in max_orders_by_user, then I should be able to use that in select_by, a la http://www.sqlalchemy.org/docs/datamapping.myt#datamapping_selectrelations_relselectby

So I changed the label to max_order_id and tried:

    max_orders_by_user = select(
        [func.max(orders.c.order_id).label('max_order_id')],
        group_by=[orders.c.user_id]).alias('max_orders_by_user')

    max_orders = orders.select(
        orders.c.order_id == max_orders_by_user.c.max_order_id).alias('max_orders')

    # mapper as above

    session.query(User).select_by(max_order_id=5)

and got:

    sqlalchemy.exceptions.InvalidRequestError: Can't locate property named 'max_order_id'

I did get it to work with:

    session.query(User).select(max_orders.c.order_id == 5, from_obj=[max_orders])

Is there a way to do this with select_by?
[sqlalchemy] Re: Mapping special child rows
errr, the relations that mapper setup can feed into select_by would include all the attributes you have on Order, which does not include max_order_id. you've got two relations that both point to an Order relation, so select_by is more or less out at that point, since it can only be given order_id, and it will just pick one or the other. however, you might get away with it if you do it like this:

    mapper(User, users, properties={
        'orders': relation(class_mapper(Order), backref='user'),
        'max_order': relation(
            mapper(Order, max_orders, non_primary=True,
                   properties={'max_order_id': synonym('order_id')}),
            uselist=False),
        'addresses': relation(mapper(Address, addresses), backref='user'),
    })

    session.query(User).select_by(max_order_id=10)

just a guess.