[sqlalchemy] Re: Error in the behaviour of dynamic relation backreferences
Works for me on trunk.

You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To post to this group, send email to sqlalchemy@googlegroups.com. To unsubscribe from this group, send email to [EMAIL PROTECTED]. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en
[sqlalchemy] Re: LIMIT in queries
Or you could use .limit(1). I've always found that clearer than the Python indexing notation, because indexing makes me think of slicing a list, not of applying a limit to a query.
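For readers comparing the two spellings, the equivalence can be sketched with a toy query object (this is an illustration, not SQLAlchemy's actual Query class; all names here are invented): slice notation translates into LIMIT/OFFSET, so query[0:1] and query.limit(1) produce the same statement.

```python
# Toy sketch, NOT SQLAlchemy itself: shows how slice notation can be
# translated into LIMIT/OFFSET, the equivalence discussed above.
class ToyQuery:
    def __init__(self, limit=None, offset=None):
        self._limit, self._offset = limit, offset

    def limit(self, n):
        # generative style: return a new query with LIMIT applied
        return ToyQuery(limit=n, offset=self._offset)

    def offset(self, n):
        return ToyQuery(limit=self._limit, offset=n)

    def __getitem__(self, item):
        # translate query[start:stop] into OFFSET start LIMIT (stop - start)
        if isinstance(item, slice):
            start = item.start or 0
            q = self.offset(start) if start else self
            if item.stop is not None:
                q = q.limit(item.stop - start)
            return q
        raise TypeError("only slices are supported in this sketch")

    def statement(self):
        sql = "SELECT * FROM t"
        if self._limit is not None:
            sql += " LIMIT %d" % self._limit
        if self._offset:
            sql += " OFFSET %d" % self._offset
        return sql
```

With this model, `ToyQuery()[0:1]` and `ToyQuery().limit(1)` both render `SELECT * FROM t LIMIT 1`, which is why the two spellings in the thread are interchangeable.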
[sqlalchemy] Re: Undeferring attributes off joined entities
On further consideration of the interface, what about adding two new keywords to options() - mapper and id? That way someone with a large group of options to set won't have to type the entity name and/or id on every single option: session.query(Class).options(options for main entity).options(options for other entity, mapper=SomeClass, id=class1)

On Dec 11, 8:26 pm, Chris M [EMAIL PROTECTED] wrote: Don't care as long as the functionality is available somehow :) Unless others have objections, I say go with whatever's currently implemented.

On Dec 11, 7:52 pm, Michael Bayer [EMAIL PROTECTED] wrote: On Dec 11, 2007, at 6:21 PM, Chris M wrote: Okay, does this mean that options() doesn't go off the last entity, or is this interface just in addition to that behavior?

This currently is in lieu of the "options go off the last entity" idea. I tried it that way, but this way felt less surprising to me. We can do a poll if you want.
[sqlalchemy] Re: Undeferring attributes off joined entities
Okay, does this mean that options() doesn't go off the last entity, or is this interface just in addition to that behavior?

On Dec 11, 2:35 pm, Michael Bayer [EMAIL PROTECTED] wrote: I got the eager loading case to work in 3912. Not sure why it didn't work for you, since the approach was similar, but I changed the interface of eagerload() and similar so you can do eagerload('foo', SomeOtherClass), where SomeOtherClass is the class you added using add_entity(). That's a little more explicit, which I think is less prone to confusion. Haven't tested it for deferred attributes yet (and the same mapper option should be added to defer, undefer).

On Dec 9, 2007, at 3:59 PM, Chris M wrote: I'll commit what I have and some tests sometime soon so you can see what's going on (unless you're by chance magical and already know what's going on!)
[sqlalchemy] Re: Undeferring attributes off joined entities
Is this how you want to do it? Unfortunately, your fix alone doesn't do the trick, BUT if you change line 901 of query.py to

    context.exec_with_path(m, value.key, value.setup, context, parentclauses=clauses)

it works, and all ORM tests run fine.

On Dec 8, 10:59 pm, Michael Bayer [EMAIL PROTECTED] wrote: On Dec 8, 2007, at 9:55 PM, Chris M wrote: One thing I'd be worried about is that after an add_entity there is no way to set options on the main entity afterwards. You could provide a reset_entitypoint, but it wouldn't work the same as with joins, because after a reset_joinpoint you can rejoin along the same path to filter more criteria if necessary. Still, I think some functionality is better than no functionality... it's not that big of a deal, is it?

When the notion of reset_entitypoint comes in, things have just gotten out of hand. At some point we're going to have to decide that order is significant with Query's generative behavior, and people will have to use it with that knowledge in mind. It's already been suggested as a result of other behaviors (such as query[5:10].order_by('foo') doesn't generate a subquery, for example).
[sqlalchemy] Re: Undeferring attributes off joined entities
Nope, eager loads are a no-go. I tried changing line 901 of query.py again to:

    context.exec_with_path(m, value.key, value.setup, context, parentclauses=clauses, parentmapper=m)

but that did not work either. The code around exec_with_path and setup_query confuses me; I'm not sure I can fix eager loads by myself. Currently, without setting parentmapper=m, it tries to find the columns in the table of the main entity, so I think this is at least a step in the right direction.

On Dec 9, 2:15 pm, Michael Bayer [EMAIL PROTECTED] wrote: On Dec 9, 2007, at 1:21 PM, Chris M wrote: Is this how you want to do it? Unfortunately, your fix alone doesn't do the trick, BUT if you change line 901 of query.py to context.exec_with_path(m, value.key, value.setup, context, parentclauses=clauses) it works, and all ORM tests run fine.

I think we should go for it, if for no other reason than that add_entity() is a fairly new method, so it's better we start establishing the ordered behavior sooner rather than later. I'd be curious to know if actual eager loads work off the second entity also (I'm thinking... maybe?). You can commit this change if you'd like, but I'd ask that a few (very short) tests be added to test/orm/query.py which validate the behavior of options() both with and without an add_entity() (i.e., a test that would fail if you didn't implement the feature).
[sqlalchemy] Re: Undeferring attributes off joined entities
I'll commit what I have and some tests sometime soon so you can see what's going on (unless you're by chance magical and already know what's going on!)
[sqlalchemy] Undeferring attributes off joined entities
t1, t2 = Table1.options(undefer(table2.large_col)).join(table2).add_entity(Table2).first() does not load large_col (or even put it in the SQL sent) on t2. Is undefer meant for eager loading in this scenario only, or have I stumbled upon a bug? (If the former, is there a way to achieve what I was trying to do?)
[sqlalchemy] Re: Undeferring attributes off joined entities
options() could work like joinpoints do - after an add_entity, options() refers to that entity.

On Dec 8, 7:12 pm, Michael Bayer [EMAIL PROTECTED] wrote: On Dec 8, 2007, at 6:17 PM, Chris M wrote: t1, t2 = Table1.options(undefer(table2.large_col)).join(table2).add_entity(Table2).first() does not load large_col (or even put it in the SQL sent) on t2. Is undefer meant for eager loading in this scenario only, or have I stumbled upon a bug? (If the former, is there a way to achieve what I was trying to do?)

Query options currently only apply to the main entity. In this case specifically, a new syntax would need to be introduced, since it's not clear whether the above option applies to the large_col attribute reached via the Table1.table2 relation, or to the large_col attribute on the secondary entity (since the two collections of Table2 could be disjoint).
[sqlalchemy] Re: Undeferring attributes off joined entities
One thing I'd be worried about is that after an add_entity there is no way to set options on the main entity afterwards. You could provide a reset_entitypoint, but it wouldn't work the same as with joins, because after a reset_joinpoint you can rejoin along the same path to filter more criteria if necessary. Still, I think some functionality is better than no functionality... it's not that big of a deal, is it?

On Dec 8, 8:23 pm, Michael Bayer [EMAIL PROTECTED] wrote: On Dec 8, 2007, at 7:25 PM, Chris M wrote: options() could work like joinpoints do - after an add_entity, options() refers to that entity.

Probably. I've been hesitant to make things go off of add_entity() as of yet... though actually this is probably not very hard to do. You'd have to add the option as undefer('large_col'), and I bet if you were to change line 542 of interfaces.py to:

    if query._entities:
        mapper = query._entities[-1][0]
    else:
        mapper = query.mapper

it *might* work with just that change.
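The one-line change Michael suggests amounts to a resolution rule: an option targets the most recently added entity if there is one, otherwise the query's main mapper. A toy model of just that rule (FakeQuery and resolve_option_mapper are invented names for illustration, not SQLAlchemy internals):

```python
# Toy model of the suggested interfaces.py rule: an option resolves
# against the last add_entity() mapper, falling back to the main mapper.
class FakeQuery:
    def __init__(self, mapper):
        self.mapper = mapper        # the query's main mapper
        self._entities = []         # (mapper, extra) tuples, as on 0.4 trunk

    def add_entity(self, mapper):
        self._entities.append((mapper, None))
        return self

def resolve_option_mapper(query):
    # mirrors: mapper = query._entities[-1][0] if query._entities
    #          else query.mapper
    if query._entities:
        return query._entities[-1][0]
    return query.mapper
```

This also makes Chris's worry concrete: once add_entity() has been called, every later option resolves to the added entity, so there is no way back to the main mapper without something like a reset_entitypoint.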
[sqlalchemy] Re: filter_by VS python properties/descriptors VS composite properties
I think it's a good idea; if Mike agrees, then I will submit a patch to do this later today. (Except MyClass.my_prop and my_prop_name won't be equivalent - you'll have to have whatever your property returns support __eq__.)

On Nov 20, 4:37 am, Gaetan de Menten [EMAIL PROTECTED] wrote: Hi people, I have some classes with standard Python properties which target another Python object and also use several columns in the database. I also have a global factory function to create an instance of that target object out of the values of the columns (the class of that target object can vary). Now, I'd like to use those properties in filter criteria, like so:

session.query(MyClass).filter(MyClass.my_prop == value)...
session.query(MyClass).filter_by(my_prop_name=value)...

I've tried using composite properties for that (passing the factory function instead of a composite class), and it actually works, but I'm a little nervous about it: could there be bad side effects, given that in *some* cases (but not always) the target object is loaded from the database? I also dislike the fact that I have to provide a __composite_values__ on all the possible target classes; in my case, I would prefer to put that logic on the property side. I'd prefer to provide a callable which would take an instance and output the tuple of values, instead of the method. Would that be considered a valid use case for composite properties, or am I abusing the system? I've also tried simply changing those properties to descriptors so that I can override __eq__ on the object returned by accessing the property on the class. This worked fine for filter. But I also want to be able to use filter_by. So I'd wish that query(Class).filter_by(name=value) would be somehow equal to query(Class).filter(Class.name == value), but it's not: filter_by only accepts MapperProperties and not my custom property. So what do people think?
-- Gaëtan de Menten - http://openhex.org
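The descriptor approach Gaetan describes - class-level access returning an object with a custom __eq__ that builds a filter criterion, while instance-level access returns the value - can be sketched in plain Python. All names below are invented for illustration; none of this is SQLAlchemy API:

```python
# Sketch of the descriptor idea: MyClass.my_prop == 5 yields a criterion
# object usable in a filter(), while instance.my_prop yields the value.
class Criterion:
    """A stand-in for a SQL comparison clause."""
    def __init__(self, name, value):
        self.name, self.value = name, value

    def __repr__(self):
        return "%s = %r" % (self.name, self.value)

class _Comparator:
    """Returned on class-level access; __eq__ builds a Criterion."""
    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        return Criterion(self.name, other)

class ComparableProperty:
    """Descriptor: class access -> comparator, instance access -> value."""
    def __init__(self, name, fget):
        self.name, self.fget = name, fget

    def __get__(self, obj, cls=None):
        if obj is None:              # accessed on the class
            return _Comparator(self.name)
        return self.fget(obj)        # accessed on an instance

class MyClass:
    def _get_my_prop(self):
        return self._my_prop
    my_prop = ComparableProperty("my_prop", _get_my_prop)
```

This is exactly why filter(MyClass.my_prop == value) can work while filter_by(my_prop_name=value) cannot: filter_by looks the name up among MapperProperties, never triggering the descriptor's class-level __get__.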
[sqlalchemy] Re: filter_by VS python properties/descriptors VS composite properties
I actually deleted my post on further consideration of this problem; I guess it lagged a bit, so you got to it :) If this were implemented, I'd rather it work with a new MapperProperty that takes the name of the attribute on the class to access, instead of automatically detecting which attributes on the class are descriptors and which aren't (obviously, you don't want to be able to do stupid stuff like .filter_by(_private_variable_here=item)). I don't currently have time to submit a patch for that kind of behavior.

On Nov 20, 9:54 am, Gaetan de Menten [EMAIL PROTECTED] wrote: On Nov 20, 2007 3:12 PM, Chris M [EMAIL PROTECTED] wrote: I think it's a good idea; if Mike agrees, then I will submit a patch to do this later today. (Except MyClass.my_prop and my_prop_name won't be equivalent...)

I never said that's what I wanted. Notice that in my example, I speak about filter and filter_by.

...you'll have to have whatever your property returns support __eq__.

Of course.

On Nov 20, 4:37 am, Gaetan de Menten [EMAIL PROTECTED] wrote: Hi people, I have some classes with standard Python properties which target another Python object and also use several columns in the database. I also have a global factory function to create an instance of that target object out of the values of the columns (the class of that target object can vary). Now, I'd like to use those properties in filter criteria, like so: session.query(MyClass).filter(MyClass.my_prop == value)... session.query(MyClass).filter_by(my_prop_name=value)... I've tried using composite properties for that (passing the factory function instead of a composite class), and it actually works, but I'm a little nervous about it: could there be bad side effects, given that in *some* cases (but not always) the target object is loaded from the database? I also dislike the fact that I have to provide a __composite_values__ on all the possible target classes; in my case, I would prefer to put that logic on the property side. I'd prefer to provide a callable which would take an instance and output the tuple of values, instead of the method. Would that be considered a valid use case for composite properties, or am I abusing the system? I've also tried simply changing those properties to descriptors so that I can override __eq__ on the object returned by accessing the property on the class. This worked fine for filter. But I also want to be able to use filter_by. So I'd wish that query(Class).filter_by(name=value) would be somehow equal to query(Class).filter(Class.name == value), but it's not: filter_by only accepts MapperProperties and not my custom property. So what do people think?

-- Gaëtan de Menten - http://openhex.org
[sqlalchemy] Re: SQLAlchemy 0.4.1 released
I hear Super Mario Galaxy is one hell of a game!

On Nov 18, 7:26 pm, Michael Bayer [EMAIL PROTECTED] wrote: Hello alchemers - This is an awesome release. I'm excited about this one. With our new shiny clean 0.4 codebase, the internals are starting to look a lot more intelligent, and new things are becoming possible. Call counts are going down like a rock. Intents and behaviors are clarifying and sharpening... plus Super Mario Galaxy arrives tomorrow, so it's time for a break. Some highlights of this release:

- You might notice that some eager load operations are suddenly a lot faster, particularly on MySQL. This is because we've improved the queries that are issued when you use eager loading with LIMIT and/or OFFSET. Whereas we previously would wrap the LIMITed query in a subquery, join back to the mapped table, and then add the eager criterion outer joined against the mapped table (a trick we've been doing since 0.1.0), we now outer join the eager criterion directly against the subquery, and the main mapper pulls rows straight from the subquery columns. Improved SQL expression functionality has made this possible. What it means is that an eager load with LIMIT/OFFSET uses one less JOIN in all cases. This is an example of SA's very rich expression constructs paying off - a query that is much more efficient on the database side trumps the hundred or so method calls spent compiling the query any day.

- session.refresh() and session.expire() can now operate on individual instance attributes. Just say session.expire(myobject, ['items', 'description', 'name']), and all three of those attributes, whether they're plain columns or relations to other objects, will go blank until you next access them on the instance, at which point they are refreshed from the DB. Column attributes are grouped together in a single select() statement, and related tables are lazy loaded individually right now. Also, the internal mechanisms used by deferred() columns, refresh/expire operations, and polymorphically deferred columns have all been merged into one system, which means less internal complexity and more consistent behavior.

- The API of the session has been hardened. This means it's going to check more closely that operations make sense (and it also no longer raises some errors that did not make sense in certain circumstances). The biggest gotcha we've observed so far from people using trunk is that session.save() is used *only* for entities that have not been saved to the database yet. If you put an already-stored instance in save(), you'll get an error. This has always been the contract; it just hasn't complained previously. If you want to put things in the session without caring whether they've already been saved or not, use session.save_or_update(myinstance). We've also fixed things regarding entities that have been de-pickled and placed back into the session - some annoying errors that used to occur have been fixed.

- Still in the session category, the merge() method gets a dont_load=True argument. Everyone using caches like memcached can now place copies of their cached objects back in the session using myinstance = merge(mycachedinstance, dont_load=True), and the instance will be fully copied as though it were loaded from the database, *without* a load operation proceeding; it will trust that you want that instance state in the session.

- query.options() are way more intelligent. Suppose you have a large bidirectional chain of relations. If you say something like query.options(eagerload('orders.items.keywords.items.orders')), it will accurately target the 'orders' relation at the end of that chain and nothing else. On a similar topic, self-referential eager loads can be set up on the fly, such as query.options(eagerload_all('children.children.children')), without needing to set the join_depth flag on relation().

- Method call overhead continues to be cut down. Many expensive calls in statement compilation, clause element construction, and statement execution have been whacked away completely and replaced with simpler and more direct behaviors, and results are more accurate and correct. This continues along from all that we've done in 0.4, and at this point most call counts should be half of what they were in 0.3. I invite everyone to take a tour around expression.py and compiler.py and critique; we've had a huge amount of housecleaning in these modules (and others), and further suggestions/ideas/flames are entirely welcome (though not too early in the morning) on sqlalchemy-devel.

- In the fringe category, you can now define methods like __hash__(), __nonzero__(), and __eq__() on your mapped instances and the ORM won't get confused; we've rearranged things so that those methods are not accessed by the ORM.

- A new, experimental MaxDB dialect, lots of typing fixes for MySQL and Oracle, and lots more.
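The per-attribute expire behavior described in the release notes can be pictured with a toy model (plain Python, not SQLAlchemy's actual mechanics; Record and its "database" dict are invented for illustration): expiring an attribute removes it from the instance's __dict__, and the next access falls through to __getattr__, which simulates the refresh query.

```python
class Record:
    """Toy model of per-attribute expiration: expired attributes are
    dropped from __dict__ and reloaded from a stand-in 'database' dict
    the next time they are accessed."""
    _db = {"name": "fred", "description": "a user"}  # stand-in for the DB

    def __init__(self):
        self.__dict__.update(self._db)   # initially loaded state
        self.loads = 0                   # counts simulated refresh queries

    def expire(self, keys):
        for k in keys:
            self.__dict__.pop(k, None)   # attribute "goes blank"

    def __getattr__(self, key):
        # only invoked when the attribute is missing from __dict__,
        # i.e. after it has been expired
        if key in self._db:
            self.loads += 1              # simulate hitting the database
            value = self._db[key]
            setattr(self, key, value)    # re-populate the instance
            return value
        raise AttributeError(key)
```

After expire(['name']), the first access reloads the value (one simulated query) and subsequent accesses hit the repopulated __dict__ again, mirroring the "blank until you next access them" behavior described above.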
[sqlalchemy] Re: trunk's new anon_N labeled subqueries break some eagerloading queries
It's quite simple, actually: there is absolutely no need for 99% of my code to know these values after they have been set. I view the criteria of a SELECT as being the values I actually use in my application, and the values of the key fields are only used inside the query itself to specify which tables get joined - they have no other use. I'll play devil's advocate here - why on earth would you need to load the values of these keys after they have been set? Do I use them in my application? No. Does SQLAlchemy actually _need_ them to be loaded? Negative; it just has to generate SQL to the tune of relation.id = table.relation_id instead of relation.id = %s % table_instance.relation_id. It's simple enough to defer them in the mapping and save some data here and there for the massive result sets I send to memcached (and load later), so why not take advantage of it? In the entire codebase for my (rather large) web site, there is only one location (which happens to be in the peer-edit version control system that powers the news and articles) where I explicitly need the value of a key column. Regardless of the nature of my use, if deferred columns aren't allowed as keys in eagerload joins, it should explicitly raise an error, or the documentation should be supplemented with information about it :)

I have done a lot of hacking on the SQLAlchemy internals (especially in 0.3.x, not so much 0.4.x) to better understand how it works, because the... erm... unique nature of my website places complex demands on the SQL generated by ORMs. Until 0.4.x, I couldn't even consider using SQLAlchemy's ORM, so I hacked up a quick one for my own use. Since then, I have switched back to SQLAlchemy's ORM because I did not have the time to maintain my own solution in tandem with my website, my design/programming job, school, and other time constraints like my social life (as unnecessary as it is!). Right now I'm working on a Windows CHM generator for SQLAlchemy's documentation. I'd really like to contribute to this project in a meaningful way; it's the only ORM I have ever encountered that hasn't been completely inadequate for heavy-duty production use, in my opinion. Of course I could just write SQL by hand, but I really don't have the time for that, and it's hard to scale when I add new logic to the applications. If you have anything I could help with, I'd be glad to dedicate some of my limited time to hacking a bit!

On Nov 14, 9:42 am, Michael Bayer [EMAIL PROTECTED] wrote: On Nov 13, 2007, at 9:17 PM, Chris M wrote: I didn't see any tickets about this on the trac, so I thought I'd bring it to everyone's attention. Since it's a development version, I wasn't sure if this mattered (or was known about), so if it does I can draft up a quick test case.

Thanks for the test case, I'm looking into it today. One question: why on earth would you need to defer a column that is just a simple foreign key to another table? deferred was intended for things like large text fields which are expensive to fetch.
[sqlalchemy] Re: trunk's new anon_N labeled subqueries break some eagerloading queries
not entirely true... we need to know those values in sqlalchemy - we couldn't issue lazy loads

Not true at all. Maybe in the current implementation, but it isn't necessary whatsoever. Let's say you have class Thing, which is many-to-one to class Owner. If you want to lazy load the owner relation later, all it would have to do is check whether owner is present (or None) in the InstanceState. If that's the case, issue the SQL SELECT <owner attributes> FROM thing_table JOIN owner_table ON <literal join criterion: owner_table.id = thing_table.owner_id> WHERE <primary key criterion to get thing_table>, and load the data into a new Owner object. I'm confused as to why you need the actual key value for this?

properly issue flushes if we didn't keep track of foreign key values (hm, maybe the flushes work).

I don't know how your flush implementation works, but I doubt that's necessary either...

while we could try to hide all columns marked as foreign key somewhere, I can assure you a very large portion of our userbase would be very unhappy with that :). The behavior of a mapper is that it maps *all* columns of a table to instance attributes by default; it doesn't try to guess what columns you might be interested in.

I think you misunderstand: I am not asking for another solution (I don't even want this particular solution either) when deferring the key works fine. I don't want it to guess - no radical changes here.

blog post, blog post... :)

Sorry! Was trying to lead into the next paragraph.

On Nov 14, 11:29 am, Michael Bayer [EMAIL PROTECTED] wrote: On Nov 14, 2007, at 11:07 AM, Chris M wrote: It's quite simple, actually: there is absolutely no need for 99% of my code to know these values after they have been set. I view the criteria of a SELECT as being the values I actually use in my application, and the values of the key fields are only used inside the query itself to specify which tables get joined - they have no other use. I'll play devil's advocate here - why on earth would you need to load the values of these keys after they have been set? Do I use them in my application? No. Does SQLAlchemy actually _need_ them to be loaded? Negative; it just has to generate SQL to the tune of relation.id = table.relation_id instead of relation.id = %s % table_instance.relation_id.

Not entirely true... we need to know those values in SQLAlchemy - we couldn't issue lazy loads or properly issue flushes if we didn't keep track of foreign key values (hm, maybe the flushes work). While we could try to hide all columns marked as foreign key somewhere, I can assure you a very large portion of our userbase would be very unhappy with that :). The behavior of a mapper is that it maps *all* columns of a table to instance attributes by default; it doesn't try to guess what columns you might be interested in.

It's simple enough to defer them in the mapping and save some data here and there for the massive result sets I send to memcached (and load later), so why not take advantage of it? In the entire codebase for my (rather large) web site, there is only one location (which happens to be in the peer-edit version control system that powers the news and articles) where I explicitly need the value of a key column. Regardless of the nature of my use, if deferred columns aren't allowed as keys in eagerload joins, it should explicitly raise an error, or the documentation should be supplemented with information about it :)

There's no reason you shouldn't be able to do it. However, the way I'm fixing this, those columns are going to get undeferred by the eager loader, so the foreign key attribute will be there when you do the eager load. In fact, I'm pretty certain they're going to get unconditionally populated by flushes too. So a lot of things work against your deferring of that attribute, but I don't mind it if you don't (well no, actually I don't mind it :) ). It's almost like you're looking for a "hidden column" feature here. That could be interesting (but is probably not worth the extra complexity).

I have done a lot of hacking on the SQLAlchemy internals (especially in 0.3.x, not so much 0.4.x) to better understand how it works, because the... erm... unique nature of my website places complex demands on the SQL generated by ORMs. Until 0.4.x, I couldn't even consider using SQLAlchemy's ORM, so I hacked up a quick one for my own use. Since then, I have switched back to SQLAlchemy's ORM because I did not have the time to maintain my own solution in tandem with my website, my design/programming job, school, and other time constraints like my social life (as unnecessary as it is!)

blog post, blog post... :) Yeah, the 0.4 internals are pretty revamped, aren't they.

Right now I'm working on a Windows CHM generator for SQLAlchemy's documentation. I'd really like to contribute to this project in a meaningful way
[sqlalchemy] Re: trunk's new anon_N labeled subqueries break some eagerloading queries
not entirely true... we need to know those values in sqlalchemy - we couldn't issue lazy loads Not true at all. Maybe in the current implementation, but it isn't necessary whatsoever. Let's say you have class Thing, which is many to one to class Owner. If you want to lazyload relation owner later, all it would have to do is check if owner was present (or None) in the InstanceState. If that's the case, issue the SQL SELECT owner attributes FROM owner_table WHERE literal join criterion, owner_table.id = thing_table.owner_id and load the data into a new Owner object. I'm confused to why you need the actual key value for this? properly issue flushes if we didnt keep track of foreign key values (hm, maybe the flushes work). I don't know how your flush implementation works, but I doubt that's necessary either... while we could try to hide all columns marked as foreign key somewhere, im can assure you a very large portion of our userbase would be very unhappy with that :). the behavior of a mapper is that it maps *all* columns of a table to instance attributes by default, it doesnt try to guess what columns you might be interested in. I think you misunderstand, I am not asking for another solution (I don't even want this particular solution either) when deferring key works fine. I don't want it to guess, no radical changes here. blog postblog post... :) Sorry! Was trying to lead into the next paragraph. On Nov 14, 11:29 am, Michael Bayer [EMAIL PROTECTED] wrote: On Nov 14, 2007, at 11:07 AM, Chris M wrote: It's quite simple actually, there is absolutely no need for 99% of my code to know these values after they have been set. I view the criterion of a SELECT as being the values I actually use in my application and the values of the key fields are only used inside the query itself to specify which tables get joined - they have no other use. I'll play devil's advocate here - why on earth would you need to load the values of these keys after they have been set? 
Do I use them in my application? No. Does SQLAlchemy actually _need_ for them to be loaded? Negative, it just has to generate SQL to the tune of relation.id = table.relation_id instead of relation.id = %s % table_instance.relation_id. not entirely true... we need to know those values in sqlalchemy - we couldn't issue lazy loads or properly issue flushes if we didnt keep track of foreign key values (hm, maybe the flushes work). while we could try to hide all columns marked as foreign key somewhere, im can assure you a very large portion of our userbase would be very unhappy with that :). the behavior of a mapper is that it maps *all* columns of a table to instance attributes by default, it doesnt try to guess what columns you might be interested in. It's simple enough to defer them in the mapping and save some data here and there for the massive resultsets I send to memcached (and load later), so why not take advantage of it? In the entire codebase for my (rather large) web site, there is only one location (which happens to be in the peer edit version control system that powers the news and articles) where I explicitly need the value of a key column. Regardless of the nature of my use, if deferred columns aren't to be allowed as keys in eagerload joins it should explicitly raise an error or the documentation should be supplemented with information about it :) theres no reason you shouldnt be able to do it. however the way im fixing this, those columns are going to get undeferred by the eager loader, so the foreign key attribute will be there when you do the eager load. in fact im pretty certain that theyre going to get uncondionally populated by flushes too. so a lot of things work against your deferring of that attribute but I dont mind it if you dont (well no, actualy i dont mind it :) ). its almost like youre looking for a hidden column feature here. that could be interesting (but is probably not worth the extra complexity). 
I have done a lot of hacking on the SQLAlchemy internals (especially in 0.3.x, not so much 0.4.x) to better understand how it works, because the... erm... unique nature of my website places complex demands on the SQL generated by ORMs. Until 0.4.x I couldn't even consider using SQLAlchemy's ORM, so I hacked up a quick one for my own use. Since then I have switched back to SQLAlchemy's ORM because I did not have the time to maintain my own solution in tandem with my website, my design/programming job, school, and other time constraints like my social life (as unnecessary as it is!)

> blog post! blog post!... :)
>
> yeah, the 0.4 internals are pretty revamped, aren't they.

Right now I'm working on a Windows CHM generator for SQLAlchemy's documentation. I'd really like to contribute to this project in a meaningful way; it's the only ORM I have ever encountered that hasn't been completely inadequate for heavy
[sqlalchemy] Re: trunk's new anon_N labeled subqueries break some eagerloading queries
Understandable - there is no reason to change how it works as long as it's possible to eagerload with deferred keys. Someone who is doing my pattern of access isn't going to be lazy-loading much anyway. Not asking for any radical change, just discussing.

On Nov 14, 12:23 pm, Michael Bayer [EMAIL PROTECTED] wrote:
> On Nov 14, 2007, at 11:44 AM, Chris M wrote:
> > > not entirely true... we need to know those values in sqlalchemy - we couldn't issue lazy loads
> > Not true at all. Maybe in the current implementation, but it isn't necessary whatsoever. Let's say you have class Thing, which is many-to-one to class Owner. If you want to lazy-load the relation "owner" later, all it would have to do is check whether "owner" is present (or None) in the InstanceState. If it isn't, issue the SQL "SELECT <owner attributes> FROM owner_table WHERE <literal join criterion>", i.e. owner_table.id = thing_table.owner_id, and load the data into a new Owner object. I'm confused as to why you need the actual key value for this?
>
> OK yes, I'm the one confused; we don't need foreign key attributes to issue lazy loads. But keep in mind that in SA we don't really fundamentally care about "foreign key" attributes; we are far more agnostic than that. we only care about "remote side" columns, which can be anything you put into the primaryjoin condition. so in any case, we aren't in much of a position to guess which columns the user wants accessible as attributes. even in the case of foreign keys it's very common that someone is using natural keys (like 'username') and can save queries by making use of them in their foreign-key state.
>
> > > or properly issue flushes if we didn't keep track of foreign key values (hm, maybe the flushes work).
> > I don't know how your flush implementation works, but I doubt that's necessary either...
>
> the mapper.save_obj() receives an instance that's attached to a parent. the dependency steps in the flush have already identified the parent's "identifying" columns (based, as usual, on the primaryjoin - it doesn't even have to be a PK column), and synchronized the value of those columns to the "remote side" attributes on the child object (which are usually foreign keys, but again don't have to be). save_obj() only gets the child; it has no knowledge of the parent relation. it then loops through its attributes and issues an INSERT. So in that case, the remote values are sitting on the child object as regular attributes just like everything else. we can make it so that those attributes are hidden, like inside the _state or elsewhere, or we could really go nuts and change how the dependency synchronizations work so that they return data directly to save_obj() (that would be a *really* big refactoring). SA just uses the instance itself and its attributes to represent the full row of data to be inserted. this is the most straightforward way to do it: regular users can understand it, and they can set the FK values themselves if they want (and they do). so yes, it's not necessary... but IMO most other approaches would be more complicated and less user-friendly for no reason. I work with Hibernate, which does *not* do it this way, and it's a pain in the ass if I just want to set a parent id without loading the parent object and attaching it. A central tenet of SA is that we aren't trying to hide the database behind a huge shroud like it doesn't exist - real-world applications need access to foreign key columns.

You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To post to this group, send email to sqlalchemy@googlegroups.com. To unsubscribe from this group, send email to [EMAIL PROTECTED]. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en
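The flush sequence Michael describes - a dependency step copies the parent's identifying column onto the child's "remote side" attribute, and save_obj() then just reads the child's attributes to build the INSERT - can be sketched in a few lines. The names here (synchronize, save_obj, sync_pairs) echo his description but are toy stand-ins, not SQLAlchemy's real internals.

```python
class Parent:
    def __init__(self, id):
        self.id = id

class Child:
    def __init__(self, parent):
        self.parent = parent
        self.parent_id = None      # "remote side" attribute, unset until flush

def synchronize(child, sync_pairs):
    # dependency step: copy the parent's identifying columns onto the child
    # (usually parent PK -> child FK, but any primaryjoin pair would do)
    for parent_attr, child_attr in sync_pairs:
        setattr(child, child_attr, getattr(child.parent, parent_attr))

def save_obj(child, columns):
    # save_obj only sees the child; the remote values already sit on it as
    # regular attributes, so building the INSERT needs no parent knowledge
    values = {col: getattr(child, col) for col in columns}
    return "INSERT INTO child_table (%s) VALUES (%s)" % (
        ", ".join(values), ", ".join(repr(v) for v in values.values()))

c = Child(Parent(id=7))
synchronize(c, [("id", "parent_id")])
print(save_obj(c, ["parent_id"]))
```

This also illustrates the user-facing upside Michael mentions: because the FK lives on the child as an ordinary attribute, you can set `c.parent_id = 7` yourself without ever loading a Parent object.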
[sqlalchemy] trunk's new anon_N labeled subqueries break some eagerloading queries
I didn't see any tickets about this on the Trac, so I thought I'd bring it to everyone's attention. Since it's a development version I wasn't sure whether this mattered (or was already known about); if it does, I can draft up a quick test case.
[sqlalchemy] Re: trunk's new anon_N labeled subqueries break some eagerloading queries
http://www.sqlalchemy.org/trac/ticket/864

On Nov 13, 10:35 pm, Michael Bayer [EMAIL PROTECTED] wrote:
> definitely need a test case illustrating what problem you encountered.
>
> On Nov 13, 2007, at 9:17 PM, Chris M wrote:
> > I didn't see any tickets about this on the Trac, so I thought I'd bring it to everyone's attention. Since it's a development version I wasn't sure whether this mattered (or was already known about); if it does, I can draft up a quick test case.
[sqlalchemy] Re: add_column behavior inconsistent, 0.4.0
gladly - http://www.sqlalchemy.org/trac/ticket/858

On Nov 8, 10:19 am, Michael Bayer [EMAIL PROTECTED] wrote:
> Hi Chris -
>
> can you please assemble a fully reproducing test case and create a new ticket in Trac? I can vaguely think of why the add_column() you're doing there might not work correctly, and it's probably not that hard of a fix, but we're a little overloaded with issues/enhancements this week, and having a short test case with which to assemble a unit test would be helpful.
>
> thanks,
> - mike
>
> On Nov 7, 2007, at 11:31 PM, Chris M wrote:
> > I haven't tested with the trunk yet, but at least in 0.4.0 there are some inconsistencies in how Query.add_column() works. Assuming I have an instrumented class Class:
> >
> >     Class.query.add_column(Class.some_data)    # the added column is completely ignored
> >     Class.query.add_column(Class.c.some_data)  # ...but this works?
> >
> > The odd part is that in the first example there is no error message or anything - just a complete ignore. I was surprised by this behavior: I'm not required to use the .c. prefix on most things in SQLAlchemy, and where I can't, I at least get some sort of error message. It took me a few tries to figure out what was going on, and this isn't mentioned anywhere in the documentation, so I figured I'd bring it up. I figure others will be confused as well, since select([Class.some_data]) works fine.
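The design complaint here - an argument of the wrong type being silently dropped instead of raising - is easy to demonstrate in isolation. The classes and functions below are hypothetical stand-ins (not SQLAlchemy code) that contrast a lenient isinstance check, which swallows the mistake, with a strict one that either unwraps the attribute or fails loudly.

```python
class Column:
    def __init__(self, name):
        self.name = name

class InstrumentedAttribute:       # stand-in for a mapped Class.some_data
    def __init__(self, column):
        self.column = column

def add_column_lenient(query_cols, arg):
    # anything that isn't a Column falls through silently - the surprise
    if isinstance(arg, Column):
        query_cols.append(arg)
    return query_cols

def add_column_strict(query_cols, arg):
    if isinstance(arg, InstrumentedAttribute):
        arg = arg.column           # unwrap to the underlying Column
    if not isinstance(arg, Column):
        raise TypeError("expected a Column, got %r" % (arg,))
    query_cols.append(arg)
    return query_cols

col = Column("some_data")
attr = InstrumentedAttribute(col)
print(add_column_lenient([], attr))   # prints [] - the column was ignored
print(add_column_strict([], attr))    # unwraps and appends the Column
```

Either behavior of add_column_strict (unwrap or raise) would have surfaced the bug immediately, which is the point of the report.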
[sqlalchemy] Updating deferred columns
I have a table with a deferred column. I want to fetch an object from the database and update this column, but I have no use for its actual value. However, it seems that when I change the value of the column, SQLAlchemy first fetches the old value and then sets the new one before doing the UPDATE. Is there any way to stop this behavior? It's expensive, especially for updating columns that are relations: it fetches the deferred column, then the row it references, then sets it before updating.
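When the new value doesn't depend on the old one, one way around the read-before-write is to bypass the object's attribute entirely and issue the UPDATE directly against the table. Sketched here with the stdlib sqlite3 module rather than SQLAlchemy's expression API, with a made-up thing_table/big_col schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE thing_table (id INTEGER PRIMARY KEY, big_col TEXT)")
conn.execute("INSERT INTO thing_table VALUES (1, 'old huge value')")

# Setting the attribute on a loaded object would first SELECT the deferred
# column; a direct UPDATE by primary key never reads the old value at all.
conn.execute(
    "UPDATE thing_table SET big_col = ? WHERE id = ?", ("new value", 1)
)

updated = conn.execute(
    "SELECT big_col FROM thing_table WHERE id = 1"
).fetchone()
print(updated)
```

The caveat is that the in-memory object (if one is loaded) won't see the change until it is refreshed or expired.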