[sqlalchemy] Generative query methods for aggregates.
Here is an experimental patch to add generative aggregate methods on query objects. My use case is a class that is often queried with a sum query from several different places, but each of those places adds its own filter on top of the same basic query. Since I don't like repeating myself, I wanted a way to factor out the common part. Here is an (untested) example:

    score_query = Query(Tag).sum_clause(tags_table.c.score1 * tags_table.c.score2)
    fun_score_query = score_query.filter_by(name='fun')

    # ... in another place ...
    user_fun_score = fun_score_query.filter_by(user=self).scalar()

If you agree with the concept and the way it is done, I'd happily complete it for the other aggregates and document it.

PS: for new feature suggestions like this, do you prefer that I file a ticket directly, or that I discuss the idea here first?

-- 
Gaëtan de Menten
http://openhex.org

You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To post to this group, send email to sqlalchemy@googlegroups.com. To unsubscribe from this group, send email to [EMAIL PROTECTED]. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en

generative_aggregate.diff
Description: application/text
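The generative style the patch proposes, where each method returns a new, specialized query and the shared base stays untouched, can be sketched in plain Python. This is an illustrative toy, not SQLAlchemy's actual Query class; sum_clause is the hypothetical method from the patch, and the string-building is purely for demonstration:

```python
import copy

class Query:
    """Toy generative query: every method returns a *new* Query object,
    so a base query can be shared and refined in several places
    without mutating the original."""

    def __init__(self, table):
        self.table = table
        self.filters = []
        self.aggregate = None

    def _clone(self):
        q = copy.copy(self)
        q.filters = list(self.filters)   # don't share the filter list
        return q

    def filter_by(self, **kwargs):
        q = self._clone()
        q.filters.extend("%s = %r" % kv for kv in sorted(kwargs.items()))
        return q

    def sum_clause(self, expr):
        q = self._clone()
        q.aggregate = "sum(%s)" % expr
        return q

    def statement(self):
        sql = "SELECT %s FROM %s" % (self.aggregate or "*", self.table)
        if self.filters:
            sql += " WHERE " + " AND ".join(self.filters)
        return sql

# the shared base query from the post, then an independent refinement
score_query = Query("tags").sum_clause("score1 * score2")
fun_score_query = score_query.filter_by(name="fun")

print(fun_score_query.statement())
# SELECT sum(score1 * score2) FROM tags WHERE name = 'fun'
print(score_query.statement())       # base query is unchanged
# SELECT sum(score1 * score2) FROM tags
```

The key design point is the `_clone()` call at the top of each method: because nothing mutates `self`, `score_query` can safely be stored at module level and specialized per call site.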
[sqlalchemy] Re: Selectable/subquery for a (scalar) column?
On 4/24/07, Gaetan de Menten [EMAIL PROTECTED] wrote:

> On 4/23/07, Michael Bayer [EMAIL PROTECTED] wrote:
>
>> On Apr 23, 2007, at 3:49 PM, Gaetan de Menten wrote:
>>
>>> Hello there,
>>>
>>> In a mapped object, is there any way to map a scalar attribute to an
>>> arbitrary selectable/subquery? Jonathan Ellis demonstrated how to do
>>> that for relations on this page:
>>> http://spyced.blogspot.com/2007/01/why-sqlalchemy-impresses-me.html
>>> I'd like to do that for scalars. I've thought about using a property
>>> returning a query, but that still generates one query per user (my
>>> mapped object) and I need to do everything in one pass. See the
>>> attached file for an example of what I'd like to do. I have the
>>> feeling it might already be possible, but I don't see how. If it's
>>> not possible yet, do you have any pointer on how I could implement it?
>>
>> there are probably three general ways to do what you're doing there.
>
> Thanks for the quick answer!
>
>> the oldest way is something i did in the zblog demo before SA 0.1 was
>> released, which is that you map to the full query you want:
>>
>>     s = select([users_table,
>>                 func.sum(tags_table.c.score1 * tags_table.c.score2).label('score')],
>>                users_table.c.user_id == tags_table.c.user_id,
>>                group_by=[c for c in users_table.c])
>>     mapper(User, s)
>>
>> the effect above is that your func() becomes another ColumnProperty.
>
> That's what I was looking for. So simple... and I didn't think of it. Damn...

Hmmm, on second thought, it's not enough for my needs. I need to be able to defer that column and undefer it per query. I know I could use several mappers with entity_names, but since I'll have several such columns, it would be too much trouble to create an alternate mapper for each and every possible combination of those columns being deferred or not. So I guess I'll need to use the ugly intermediary-class solution for now.

-- 
Gaëtan de Menten
http://openhex.org
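The mapped select() in Michael's reply computes one aggregate 'score' value per user in a single pass. A plain-Python reduction over hypothetical (user_id, score1, score2) rows shows the shape of what the GROUP BY produces (the data here is made up for illustration; in the real mapping this work happens in SQL):

```python
# What func.sum(score1 * score2) grouped by user_id computes, done by hand:
tag_rows = [(1, 2, 3), (1, 4, 5), (2, 1, 1)]   # (user_id, score1, score2)

scores = {}
for user_id, s1, s2 in tag_rows:
    scores[user_id] = scores.get(user_id, 0) + s1 * s2

# one 'score' ColumnProperty value per mapped User row
print(scores)   # {1: 26, 2: 1}
```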
[sqlalchemy] Re: different number range for primary key
On April 24, 5:09 PM, [EMAIL PROTECTED] wrote:

> orm-wise, you can probably use MapperExtension.before_insert() and set
> up the value there. but this is worse in terms of speed,
> synchronisation, etc.

Seems workable. I will try this, although a solution at the SQL level would be better.

> At the lower (SQL) level - wait for other answers. There were some
> threads about using sql-functions for primary keys; see last week or so.

I tried to use a default sql-function, but the problem is again that I can find no way to pass a column as a parameter to the sql function.
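The MapperExtension.before_insert() suggestion amounts to filling in the primary key just before the row is written. A minimal sketch of the idea follows; the hook name mirrors SA's extension point, but RangeKeyExtension, the in-memory counter, and the Project class are illustrative stand-ins (a real implementation would draw from a database sequence to stay synchronised across processes, which is the speed/synchronisation caveat quoted above):

```python
import itertools

class RangeKeyExtension:
    """Assign primary keys from a dedicated number range before insert."""

    def __init__(self, start):
        self._counter = itertools.count(start)

    def before_insert(self, instance):
        # only fill in the key if the caller didn't set one explicitly
        if getattr(instance, "id", None) is None:
            instance.id = next(self._counter)

class Project:
    id = None

ext = RangeKeyExtension(start=5000000)   # this table's range starts here
p1, p2 = Project(), Project()
for obj in (p1, p2):
    ext.before_insert(obj)

print(p1.id, p2.id)   # 5000000 5000001
```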
[sqlalchemy] Re: pymssql and encoding - I can not get \x92 to be an '
I finally got the encoding to work. I moved from Linux to Windows, and now the encoding works with both pymssql and pyodbc, so it had to do with using FreeTDS. I experimented with freetds.conf, trying protocol versions 7.0 and 8.0 and various charsets, but could not get it to work, so I'll man up and use Windows.

    db = create_engine('mssql://./test', module=pyodbc, module_name='pyodbc')

On Apr 11, 11:50 am, Rick Morrison [EMAIL PROTECTED] wrote:

> Last I heard, pyodbc was working on any POSIX system that supports
> ODBC (most likely via unixODBC or iODBC).
> http://sourceforge.net/projects/pyodbc/ -- check out the supported
> platforms.

On 4/11/07, Marco Mariani [EMAIL PROTECTED] wrote:

> Rick Morrison wrote:
>
>> ...and while I'm making this thread unnecessarily long, I should add
>> that while pymssql may not understand Unicode data, the pyodbc DB-API
>> interface does. Thanks to recent work by Paul Johnston, it's on the
>> fast track to becoming the preferred MSSQL DB-API for SA.
>
> Since he starts with "unfortunately, we have a MS SQL server at work",
> maybe he's not developing on Windows, and pyodbc is Windows-specific.
>
> I think the data could be encoded with the 1252 charset, which is
> similar to 8859-1 but has an apostrophe at chr(146).
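Marco's cp1252 observation is easy to verify with the stdlib alone: byte 0x92 (the "\x92" in the subject line) is an unprintable C1 control character in ISO-8859-1, but a right single quotation mark in Windows-1252, so decoding with the wrong codec is exactly what loses the apostrophe:

```python
raw = b"it\x92s"                     # apostrophe byte as stored by SQL Server

# cp1252 maps 0x92 to U+2019 (right single quotation mark)
print(raw.decode("cp1252"))          # "it's" with a curly apostrophe

# latin-1 "succeeds" too, but silently yields the control char U+0092
print(repr(raw.decode("latin-1")))   # 'it\x92s'
```

This is why the thread's charset experiments mattered: the bytes were never corrupt, only mis-decoded.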
[sqlalchemy] Re: Selectable/subquery for a (scalar) column?
On Apr 24, 2007, at 4:44 AM, Gaetan de Menten wrote:

>> the next way is to do it almost the same as Jonathan's blog says to
>> do it, except you'd map the relation to some intermediary class like
>> Score, and then use AssociationProxy to apply the scalar property to
>> the class.
>
> I thought about something like this but it felt so hacky I disregarded
> it quickly.

this is probably the least hacky, as like i said i'm probably going to implement a feature that works just like this.
[sqlalchemy] Re: UpdateClause
On Apr 24, 2007, at 8:36 PM, Monty Taylor wrote:

> When I get into visit_update and look at update_stmt.whereclause, I
> have a CompoundClause which contains a single clause and an AND
> condition, which is cool. But when I look at the clause, I have a
> column and a parameter. If I print column.id and parameter.__dict__,
> I get:
>
>     id {'shortname': 'testproject_id', 'unique': True, 'type': Integer(),
>         'value': None, 'key': 'testproject_id'}
>
> Why is the value None? Shouldn't we know that project1 has an id of
> 291? Do I need to do something in a visitor to fill in the value here
> that I'm not doing? Is it expected that the ExecutionContext provides
> this value from the object in some way?

the value field inside of _BindParamClause, which is what you're looking at there, is optional. it's usually set when you do things like:

    mytable.c.somecolumn == 5

because we have a 5 there, it becomes literal(5), which is just _BindParamClause('literal', 5, unique=True). the 5 is bound to the bind param. but bind params can also be just names, with the values coming in through the execute() arguments. you can also mix both approaches, in which case the params sent to execute() override the bind param values.

so i'm guessing you're looking at the Updates created inside of mapper.save_obj(), which look like, in a simplified way:

    u = table.update(col == sql.bindparam('pk_column'))

i.e. a bind param with no value. it then executes via:

    u.execute({'pk_column': 291, 'col_1': col1value, 'col_2': col2value, ...})
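The resolution order Michael describes, construction-time values as defaults with execute()-time arguments taking precedence, can be sketched in a few lines of plain Python. BindParam and resolve() here are illustrative stand-ins, not SQLAlchemy internals:

```python
class BindParam:
    """Named placeholder; 'value' may stay None until execute() time."""

    def __init__(self, key, value=None):
        self.key = key
        self.value = value

def resolve(params, execute_args):
    # execute() arguments take precedence over construction-time values
    return {p.key: execute_args.get(p.key, p.value) for p in params}

literal = BindParam("literal", 5)   # as produced by: mytable.c.somecolumn == 5
pk = BindParam("pk_column")         # name only; 'value' is None, as in the
                                    # __dict__ dump above, until execute()

print(resolve([literal, pk], {"pk_column": 291}))
# {'literal': 5, 'pk_column': 291}

print(resolve([literal], {"literal": 7}))   # execute() args win over the 5
# {'literal': 7}
```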
[sqlalchemy] Re: Generated slow JOIN sql statement/specifying a left JOIN
On Apr 24, 2007, at 4:27 AM, Andreas Jung wrote:

> Because both tables are big, the query takes forever. Using a LEFT
> JOIN would definitely be faster. Is there a way to configure the
> 'tools' property in a smarter way?

not within the relation(); we don't support self-referential eager loads automatically. you can however construct whatever query you want using select(), and use it via query.select(myselect) or query.instances(myselect.execute()). to affect eager loading from such a call, use the contains_eager() query option. see the example of contains_eager() in the advanced data mapping docs... it would be something along the lines of:

    node_table_alias = node_table.alias('foo')
    s = select([node_table, node_table_alias],
               node_table.c.node_id == node_table_alias.c.parent_id)
    result = session.query(HierarchyNode).contains_eager('tools',
                 alias=node_table_alias).select(s)
[sqlalchemy] Re: UpdateClause
Michael Bayer wrote:

> so i'm guessing you're looking at the Updates created inside of
> mapper.save_obj(), which look like, in a simplified way:
>
>     u = table.update(col == sql.bindparam('pk_column'))
>
> i.e. a bind param with no value, which then executes via:
>
>     u.execute({'pk_column': 291, 'col_1': col1value, 'col_2': col2value, ...})

Well, that explains everything. Thanks! I was about to say I didn't know how to accomplish that... but I think that, in fact, while writing that I figured it out. :)

Monty
[sqlalchemy] Re: Generated slow JOIN sql statement/specifying a left JOIN
--On 24 April 2007 08:54:55 -0400 Michael Bayer [EMAIL PROTECTED] wrote:

> On Apr 24, 2007, at 4:27 AM, Andreas Jung wrote:
>
>> Because both tables are big, the query takes forever. Using a LEFT
>> JOIN would definitely be faster. Is there a way to configure the
>> 'tools' property in a smarter way?
>
> not within the relation(); we don't support self-referential eager
> loads automatically.

hmm... why does this have anything to do with self-referential mappers? Wouldn't the generated SQL be the same if it weren't a self-referential mapper, but just a mapper with a property for a one-to-many relationship? And it's not about eager loading; I am perfectly fine with lazy loading. I am just saying that the generated SQL for lazy-loading the 'tools' property isn't optimal. A left join would be much faster, but I don't know whether it is possible to influence that at the configuration level?!

Andreas

-- 
ZOPYX Ltd. & Co. KG - Charlottenstr. 37/1 - 72070 Tübingen - Germany
Web: www.zopyx.com - Email: [EMAIL PROTECTED] - Phone +49 - 7071 - 793376
Registergericht: Amtsgericht Stuttgart, Handelsregister A 381535
Geschäftsführer/Gesellschafter: ZOPYX Limited, Birmingham, UK
E-Publishing, Python, Zope & Plone development, Consulting