[sqlalchemy] Re: column_prefix with synonym
Thanks for the quick fix. Couldn't this read-only class property also be set on any mapped attribute, so that I can remove the boilerplate synonym and just use the original mapped attribute as a read-only one? A mapper-level readonly option (like column_prefix) would be very useful too. On 27 June, 22:47, Michael Bayer <[EMAIL PROTECTED]> wrote: > On Jun 27, 2007, at 3:48 PM, jdu wrote: > > > > > > > It seems that both options don't work in common: > > > dates = Table('dates', meta, > > Column('date', Date, primary_key=True) > > ) > > mapper(MyDate, dates, column_prefix='_', properties=dict( > > date = synonym('_date'), > > ) > > > produces with 0.3.8: > > > ArgumentError: WARNING: column 'date' not being added due to property > > ''. > > Specify 'allow_column_override=True' to mapper() to ignore this > > condition. > > this is a bug, fixed in r2795. > > > > > I don't want override. My underlying goal is to make MyDate readOnly. > > As mapped attributes are already properties, it would be great to be > > able to > > 'declare' the readonly behaviour in the mapper. > > id say this is out of scope for mapper. id favor adding a "property" > argument to SynonymProperty that specifies a property class (i.e. > __get__(), __set__(), __delete__()) to be assembled on the class > (currently the proxy=True argument will assemble a default accessor > to the column-mapped attribute). --~--~-~--~~~---~--~~ You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To post to this group, send email to sqlalchemy@googlegroups.com To unsubscribe from this group, send email to [EMAIL PROTECTED] For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en -~--~~~~--~~--~--~---
[sqlalchemy] Re: Future of migrate project
I'm still around; unfortunately, I'm afraid I lack the time/motivation to put much more into migrate. I've no objections to someone else taking over the project, if anyone's interested. For what it's worth, there's a mostly-finished svn branch that removes monkeypatching, no changes to SA required: http://erosson.com/migrate/trac/browser/migrate/branches/monkeypatch_removal I haven't moved it to trunk/released it because I never got around to fixing constraints. It looks like it may be out of date once again too; but if anyone's up for taking over migrate, it's a better place to start than trunk. On 6/27/07, Michael Bayer <[EMAIL PROTECTED]> wrote: > > Based on the non-responsiveness on the migrate ML, I think migrate > needs someone to take it over at this point. I would favor a rewrite > that doesn't rely upon any monkeypatching within SA (which would have > prevented the breakage upon version change), and I also had agreed > earlier to allow "ALTER TABLE" hooks into SA's dialect to smooth over > those particular operations which currently are bolted on by migrate's > current approach.
[sqlalchemy] Re: Group by? Still a problem
On Jun 27, 2007, at 7:34 PM, voltron wrote: > > Could you point me to the url where this example is? I wonder why > order_by and other things work with the ORM then and group_by left out also you need to understand the behavior of GROUP BY, as far as SQL is concerned. postgres in particular, as well as oracle, are very strict about it, whereas MySQL is not, hence different people are getting different results with it. the "official" behavior of GROUP BY in SQL is that it is applied to SELECT statements which contain "aggregate" functions in their columns clause, such as count(), max(), sum(), etc. however, it specifically must be applied to all other columns in the columns clause that are *not* aggregates, such as: select a, b, c, max(d) from table requires that a, b, c be stated in the GROUP BY clause: select a, b, c, max(d) from table group by a, b, c on the other hand the aggregate functions such as max(d) can be listed in the HAVING clause: select a, b, c, max(d) from table group by a, b, c having max(d)=10 the specific error you're getting is raised by postgres, and is noting this requirement.
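To make the point above concrete, here is a small runnable sketch using Python's stdlib sqlite3. The table name `t` and the data are invented for the example, and note that SQLite itself is lax about GROUP BY (like MySQL), so this demonstrates the portable form that postgres/oracle insist on rather than reproducing the postgres error:

```python
import sqlite3

# Invented example data: columns a, b, c plus an aggregated column d.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a TEXT, b TEXT, c TEXT, d INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?, ?)", [
    ("x", "y", "z", 5),
    ("x", "y", "z", 10),
    ("x", "q", "z", 7),
])

# Every non-aggregate column in the SELECT list also appears in GROUP BY.
grouped = conn.execute(
    "SELECT a, b, c, MAX(d) FROM t GROUP BY a, b, c ORDER BY b"
).fetchall()
print(grouped)  # [('x', 'q', 'z', 7), ('x', 'y', 'z', 10)]

# Conditions on the aggregate go in HAVING, not WHERE.
filtered = conn.execute(
    "SELECT a, b, c, MAX(d) FROM t GROUP BY a, b, c HAVING MAX(d) = 10"
).fetchall()
print(filtered)  # [('x', 'y', 'z', 10)]
```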
[sqlalchemy] Re: Group by? Still a problem
On Jun 27, 4:34 pm, voltron <[EMAIL PROTECTED]> wrote: > Could you point me to the url where this example is? I wonder why > order_by and other things work with the ORM then and group_by left out Here is where to find the group_by method in the documentation: From the main table of contents, select "Generated Documentation", then search for "Class Query" and click on that link. Now you're here: http://www.sqlalchemy.org/docs/sqlalchemy_orm.html#docstrings_sqlalchemy.orm_Query Now scroll down to the method group_by. For more usage suggestions, see the following: http://www.sqlalchemy.org/docs/datamapping.html#datamapping_query_callingstyles And here is exactly where the example given by Huy Do is found: http://www.sqlalchemy.org/docs/adv_datamapping.html#advdatamapping_selects
[sqlalchemy] Re: Group by? Still a problem
Could you point me to the url where this example is? I wonder why order_by and other things work with the ORM then and group_by left out Thanks On Jun 28, 1:19 am, Huy Do <[EMAIL PROTECTED]> wrote: > I think you should listen to that error message. > > user.id must appear in the "group by" or be used in an aggregate > function i.e count, sum, avg etc. > > The other problem you are using the ORM interface. You should be using > the SQL select. > > I'm not sure what you are trying to achieve, but your original query > does not make sense from any SQL perspective. > > This is an example from the docs on how to use group by from an SQL select. > > s = select([customers, > func.count(orders).label('order_count'), > func.max(orders.price).label('highest_order')], > customers.c.customer_id==orders.c.customer_id, > group_by=[c for c in customers.c] > ).alias('somealias') > > Huy > > > Then it must be a bug, I still get an error > > > _execute build\bdist.win32\egg\sqlalchemy\engine\base.py 602 > > "SQLError: (ProgrammingError) column ""user.id"" must appear in the > > GROUP BY clause or be used in an aggregate function > > > On Jun 27, 9:09 pm, Andreas Jung <[EMAIL PROTECTED]> wrote: > > >> --On 27. Juni 2007 12:00:13 -0700 voltron <[EMAIL PROTECTED]> wrote: > > >>> I´m guessing a bit because I still could not find the group_by entry > >>> in the docs > > >>> This works: > >>> user.select(links.c.id > 3, order_by=[user.c.id]).execute() > > >>> but this does not > >>> user.select(links.c.id > 3, group_by=[user.c.dept]).execute() > > >>> What should be the right syntax? > > >> Works for me: > > >> for row in session.query(am).select(am.c.hidx=='HI1561203', > >> group_by=[am.c.hidx]): > >> print row.hidx, row.versionsnr > > >> -aj
[sqlalchemy] Re: Group by? Still a problem
I think you should listen to that error message. user.id must appear in the "group by" or be used in an aggregate function, i.e. count, sum, avg, etc. The other problem is that you are using the ORM interface. You should be using the SQL select. I'm not sure what you are trying to achieve, but your original query does not make sense from any SQL perspective. This is an example from the docs on how to use group by from an SQL select. s = select([customers, func.count(orders).label('order_count'), func.max(orders.price).label('highest_order')], customers.c.customer_id==orders.c.customer_id, group_by=[c for c in customers.c] ).alias('somealias') Huy > Then it must be a bug, I still get an error > > _execute build\bdist.win32\egg\sqlalchemy\engine\base.py 602 > "SQLError: (ProgrammingError) column ""user.id"" must appear in the > GROUP BY clause or be used in an aggregate function > > > On Jun 27, 9:09 pm, Andreas Jung <[EMAIL PROTECTED]> wrote: > >> --On 27. Juni 2007 12:00:13 -0700 voltron <[EMAIL PROTECTED]> wrote: >> >>> I´m guessing a bit because I still could not find the group_by entry >>> in the docs >>> >>> This works: >>> user.select(links.c.id > 3, order_by=[user.c.id]).execute() >>> >>> but this does not >>> user.select(links.c.id > 3, group_by=[user.c.dept]).execute() >>> >>> What should be the right syntax? >>> >> Works for me: >> >> for row in session.query(am).select(am.c.hidx=='HI1561203', >> group_by=[am.c.hidx]): >> print row.hidx, row.versionsnr >> >> -aj
[sqlalchemy] Re: Future of migrate project
Based on the non-responsiveness on the migrate ML, I think migrate needs someone to take it over at this point. I would favor a rewrite that doesn't rely upon any monkeypatching within SA (which would have prevented the breakage upon version change), and I also had agreed earlier to allow "ALTER TABLE" hooks into SA's dialect to smooth over those particular operations which currently are bolted on by migrate's current approach.
[sqlalchemy] Future of migrate project
Is anyone currently keeping the migrate project up to date, or are there any other efforts to provide similar functionality? We have a rather large project where we started using migrate with SA because we wanted a robust way to track database modifications and apply them to production databases. Everything has worked fairly well, but we recently upgraded from SA 0.3.6 to 0.3.8 and found that all of our migrate scripts have broken (same with 0.3.7). I can provide some details of the problems to help fix migrate, but first I thought I would ask if we are fighting a losing battle. Should we be using migrate? Are other people using migrate or some other tool, or just rolling your own code for database migration? This seems like a *very* valuable capability to have in SA; I am hoping that there is a way to keep it going. -Allen
[sqlalchemy] Re: column_prefix with synonym
On Jun 27, 2007, at 3:48 PM, jdu wrote: > > It seems that both options don't work in common: > > dates = Table('dates', meta, > Column('date', Date, primary_key=True) > ) > mapper(MyDate, dates, column_prefix='_', properties=dict( > date = synonym('_date'), > ) > > produces with 0.3.8: > > ArgumentError: WARNING: column 'date' not being added due to property > ''. > Specify 'allow_column_override=True' to mapper() to ignore this > condition. this is a bug, fixed in r2795. > > I don't want override. My underlying goal is to make MyDate readOnly. > As mapped attributes are already properties, it would be great to be > able to > 'declare' the readonly behaviour in the mapper. id say this is out of scope for mapper. id favor adding a "property" argument to SynonymProperty that specifies a property class (i.e. __get__(), __set__(), __delete__()) to be assembled on the class (currently the proxy=True argument will assemble a default accessor to the column-mapped attribute).
[sqlalchemy] column_prefix with synonym
It seems that both options don't work together: dates = Table('dates', meta, Column('date', Date, primary_key=True) ) mapper(MyDate, dates, column_prefix='_', properties=dict( date = synonym('_date'), )) produces with 0.3.8: ArgumentError: WARNING: column 'date' not being added due to property ''. Specify 'allow_column_override=True' to mapper() to ignore this condition. I don't want the override. My underlying goal is to make MyDate read-only. As mapped attributes are already properties, it would be great to be able to 'declare' the readonly behaviour in the mapper.
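For readers following the thread, the read-only behaviour being asked for can be sketched in plain Python. This is not SQLAlchemy API, just an illustration of what a read-only accessor over a '_'-prefixed attribute looks like; the class and attribute names simply mirror the example above:

```python
# Plain-Python sketch, not SQLAlchemy: the mapper would store column state
# on '_date' (column_prefix='_'), and a read-only property exposes it.
class MyDate(object):
    def __init__(self, date):
        self._date = date      # where column_prefix='_' would map the column

    @property
    def date(self):            # read-only: no setter is defined
        return self._date

d = MyDate("2007-06-27")
print(d.date)                  # 2007-06-27

try:
    d.date = "2008-01-01"      # assigning through the property fails
except AttributeError:
    print("read-only")
```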
[sqlalchemy] Re: Group by? Still a problem
Then it must be a bug, I still get an error _execute build\bdist.win32\egg\sqlalchemy\engine\base.py 602 "SQLError: (ProgrammingError) column ""user.id"" must appear in the GROUP BY clause or be used in an aggregate function On Jun 27, 9:09 pm, Andreas Jung <[EMAIL PROTECTED]> wrote: > --On 27. Juni 2007 12:00:13 -0700 voltron <[EMAIL PROTECTED]> wrote: > > > > > I´m guessing a bit because I still could not find the group_by entry > > in the docs > > > This works: > > user.select(links.c.id > 3, order_by=[user.c.id]).execute() > > > but this does not > > user.select(links.c.id > 3, group_by=[user.c.dept]).execute() > > > What should be the right syntax? > > Works for me: > > for row in session.query(am).select(am.c.hidx=='HI1561203', > group_by=[am.c.hidx]): > print row.hidx, row.versionsnr > > -aj
[sqlalchemy] Re: Group by? Still a problem
--On 27. Juni 2007 12:00:13 -0700 voltron <[EMAIL PROTECTED]> wrote: I´m guessing a bit because I still could not find the group_by entry in the docs This works: user.select(links.c.id > 3, order_by=[user.c.id]).execute() but this does not user.select(links.c.id > 3, group_by=[user.c.dept]).execute() What should be the right syntax? Works for me: for row in session.query(am).select(am.c.hidx=='HI1561203', group_by=[am.c.hidx]): print row.hidx, row.versionsnr -aj
[sqlalchemy] Re: Group by? Still a problem
I´m guessing a bit because I still could not find the group_by entry in the docs This works: user.select(links.c.id > 3, order_by=[user.c.id]).execute() but this does not user.select(links.c.id > 3, group_by=[user.c.dept]).execute() What should be the right syntax? Thanks On Jun 27, 4:07 pm, voltron <[EMAIL PROTECTED]> wrote: > Thanks. I did not find this in the docs. > > http://www.sqlalchemy.org/docs/sqlconstruction.html > > On Jun 27, 4:01 pm, Andreas Jung <[EMAIL PROTECTED]> wrote: > > > --On 27. Juni 2007 06:47:37 -0700 voltron <[EMAIL PROTECTED]> wrote: > > > > How can I construct the clause "group by" using SQL construction? > > > By using the group_by parameter of the select() method? > > > -aj
[sqlalchemy] Re: Naming and mapping
On Jun 27, 11:05 am, klaus <[EMAIL PROTECTED]> wrote: > table = join(table1, table2, table1.c.id == > table2.c.fk).select(table2.c.id == 42).alias("s") > > # This prints ['id', 'fk']. Shouldn't there be three columns? And > where is the prefix "s"? > joins have a different behavior than select(), in that a join "assumes" column name collisions are likely and therefore applies the table name as a prefix to its exported columns in all cases. since a join() isn't really a full blown "select" object it's actually not so common that people call upon its exported col list. the select() otoh doesn't assume you want to prefix column names with the table name, the use_labels flag must be added. without it, your column list is shorter due to name collisions. table = join(table1, table2, table1.c.id == table2.c.fk).select(table2.c.id == 42, use_labels=True).alias("s") print table.c.keys() ['table1_id', 'table2_id', 'table2_fk'] the "s" is part of the alias, external to the select. to have that show up, select from the alias: table = join(table1, table2, table1.c.id == table2.c.fk).select(table2.c.id == 42, use_labels=True).alias("s").select(use_labels=True) print table.c.keys() ['s_table1_id', 's_table2_id', 's_table2_fk']
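The labeling behavior described above can be seen at the plain-SQL level too. The following is a hedged sketch, run on Python's stdlib sqlite3 with invented data, of roughly the SQL that use_labels renders for this join; it shows why the labeled result-column names cannot collide:

```python
import sqlite3

# Invented data: table names match the thread's example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE table2 (id INTEGER PRIMARY KEY, fk INTEGER)")
conn.execute("INSERT INTO table1 VALUES (1)")
conn.execute("INSERT INTO table2 VALUES (42, 1)")

# Roughly what use_labels emits: every column gets a table-prefixed label.
cur = conn.execute(
    "SELECT table1.id AS table1_id, table2.id AS table2_id, "
    "table2.fk AS table2_fk "
    "FROM table1 JOIN table2 ON table1.id = table2.fk "
    "WHERE table2.id = 42"
)
cols = [d[0] for d in cur.description]
print(cols)  # ['table1_id', 'table2_id', 'table2_fk']
```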
[sqlalchemy] Re: difficult query
On Jun 27, 9:54 am, ltbarcly <[EMAIL PROTECTED]> wrote: > I'm wondering how I would implement the following query. > > I have 3 tables A, B, C. Each table has a many-to-many relationship > to the other two through an association object, call these AB, AC, > BC. > > If I want to select all the B's associated with a certain 'a', I can > do: > > q = B.query() > q = q.filter(A.c.b_id == ab.b_id) > results = q.select() > > And I get a list of B's. How do I get all the c's associated with the > b's associated to a specific 'a', excluding those c's that are > themselves related to the 'a' through AC? let's think out loud... select * from c join bc on join b on join ab on join a on where a.id= and not exists(select 1 from ac where ac.c_id=c.id) so, let me also make "ac" inside the "exists" into an alias to avoid correlation, and it should then be ac_alias = ac.alias('ac_alias') session.query(C).join(['bs', 'as']).filter(A.c.id==3).filter(~exists([1], ac_alias.c.c_id==C.c.id)).list()
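As a hedged illustration of the plain-SQL shape sketched above, here is a runnable version using Python's stdlib sqlite3. The join conditions left blank in the sketch are filled in one plausible way, and the toy schema and data are invented purely for the example:

```python
import sqlite3

# Invented toy schema mirroring the thread's A/B/C + association tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER PRIMARY KEY);
    CREATE TABLE b (id INTEGER PRIMARY KEY);
    CREATE TABLE c (id INTEGER PRIMARY KEY);
    CREATE TABLE ab (a_id INTEGER, b_id INTEGER);
    CREATE TABLE bc (b_id INTEGER, c_id INTEGER);
    CREATE TABLE ac (a_id INTEGER, c_id INTEGER);
    INSERT INTO a VALUES (3);
    INSERT INTO b VALUES (1);
    INSERT INTO c VALUES (10);
    INSERT INTO c VALUES (20);
    INSERT INTO ab VALUES (3, 1);
    INSERT INTO bc VALUES (1, 10);
    INSERT INTO bc VALUES (1, 20);
    INSERT INTO ac VALUES (3, 20);  -- c=20 is directly tied to a, so excluded
""")

# c's reachable through a's b's, minus c's directly linked to a through ac.
rows = conn.execute("""
    SELECT DISTINCT c.id
    FROM c
    JOIN bc ON bc.c_id = c.id
    JOIN b  ON b.id = bc.b_id
    JOIN ab ON ab.b_id = b.id
    JOIN a  ON a.id = ab.a_id
    WHERE a.id = 3
      AND NOT EXISTS (SELECT 1 FROM ac
                      WHERE ac.c_id = c.id AND ac.a_id = a.id)
""").fetchall()
print(rows)  # [(10,)]
```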
[sqlalchemy] Re: backref definitions
On Jun 27, 10:54 am, remi jolin <[EMAIL PROTECTED]> wrote: > I first tried to add the backref on only one side but the relationship > was not bi-directional ; user.addresses.append(a) updated address.user > but not the other way, that's why I added it both sides. try not to use add_property() for this reason; if mappers have already been compiled, add_property() is going to be less reliable. the regular backref will create fully bi-directional behavior at the attribute level. i think i might want add_property() to even raise an exception if used after mappers are compiled. > > Then, with backrefs defined on both "add_property", I had a look at the > structures created at the mapper level (properties, reverse_property, > ...) and they seemed coherent but I was not sure it was enough. > OK, I'll do it that way if it is the supported way. Thanks for the hint. > Does the "is_backref" need to be on a specific side of the relation ?? yes i completely forgot about "reverse_property". the fact is, you have to use the "backref" flag to get a fully working bi-directional relationship. trying to manually craft one, even if you get it working, is not going to be forwards compatible since flags like "is_backref" and "reverse_property" are not public (and should probably be underscored). so no, i can't say that using "is_backref" explicitly is supported right now since it bypasses the usage of the backref() construct. i'm going to see if i can underscore this in 0.4.
[sqlalchemy] Naming and mapping
Hi, I'm curious why mapped Selectables are named the way they are. Consider the following code: from sqlalchemy import * metadata = BoundMetaData("...") class Data(object): pass table1 = Table("table1", metadata, Column("id", Integer, nullable=False, primary_key=True), ) table2 = Table("table2", metadata, Column("id", Integer, nullable=False, primary_key=True), Column("fk", Integer, ForeignKey("table1.id")), ) table = join(table1, table2, table1.c.id == table2.c.fk) mapper(Data, table) print table.c.keys() # This prints ['table1_id', 'table2_id', 'table2_fk'] as expected. # Now let us delete the mapper and then add a select to the join: table = join(table1, table2.select(table2.c.id == 42).alias("s"), table1.c.id == table2.c.fk) mapper(Data, table) print table.c.keys() # This prints ['table1_id', 's_id', 's_fk'], also as expected. # Now let us add the select in a different position. (Assume that mappers are deleted again.) table = join(table1, table2, table1.c.id == table2.c.fk).select(table2.c.id == 42).alias("s") mapper(Data, table) print table.c.keys() # This prints ['id', 'fk']. Shouldn't there be three columns? And where is the prefix "s"? Best regards Klaus
[sqlalchemy] Re: backref definitions
On 27.06.2007 15:36, Michael Bayer wrote: > On Jun 27, 2007, at 6:00 AM, remi jolin wrote: > > >> Hello, >> >> Suppose we have the Address and User mappers as they are defined in >> SA's >> documentation >> >> I was wondering if the 2 syntaxes below were equivalent : >> >> 1/ >> User.mapper.add_property('addresses', relation(Address, >> backref=BackRef('user', **user_args)), **addresses_args) >> >> 2/ >> Address.mapper.add_property('user', backref='addresses', **user_args) >> User.mapper.add_property('addresses', backref='user', >> **addresses_args) >> >> > > no, they are not. the example in 2. is incorrect. you only need one > property with a backref to set up the bi-directional relationship. > setting both properties in both directions will have the effect of > only some of the properties taking effect..and in an undefined way > (i.e. it might break). > Thanks Michael, I first tried to add the backref on only one side but the relationship was not bi-directional; user.addresses.append(a) updated address.user but not the other way, that's why I added it on both sides. Then, with backrefs defined on both "add_property", I had a look at the structures created at the mapper level (properties, reverse_property, ...) and they seemed coherent but I was not sure it was enough. The only difference I saw is that you could find the backref "field" in the properties list of both sides but the PropertyLoader objects were coherent (User.mapper.properties['addresses'].reverse_property == Address.mapper.properties['user'] and Address.mapper.properties['user'].reverse_property == User.mapper.properties['addresses']) and is_backref was positioned on one side and not the other as you say it is needed below...
> the two equivalent conditions you have in mind are: > > User.mapper.add_property('addresses', > relation(Address, backref=backref('user', **user_args)), > **addresses_args) > > and > > Address.mapper.add_property('user', > attributeext=attributes.GenericBackrefExtension('addresses'), > is_backref=True, **user_args) > User.mapper.add_property('addresses', > attributeext=attributes.GenericBackrefExtension('user'), > **addresses_args) > > where GenericBackrefExtension handles bi-directional attribute > population, i.e. someaddress.user = User() firing off > someaddress.user.addresses.append(someaddress), and the "is_backref" > flag is needed to be on one side of the bi-directional relationship > in some cases during a flush() (currently, only in post_update > relations). > > OK, I'll do it that way if it is the supported way. Thanks for the hint. Does the "is_backref" need to be on a specific side of the relation?
[sqlalchemy] Re: Group by?
Thanks. I did not find this in the docs. http://www.sqlalchemy.org/docs/sqlconstruction.html On Jun 27, 4:01 pm, Andreas Jung <[EMAIL PROTECTED]> wrote: > --On 27. Juni 2007 06:47:37 -0700 voltron <[EMAIL PROTECTED]> wrote: > > > > > How can I construct the clause "group by" using SQL construction? > > By using the group_by parameter of the select() method? > > -aj
[sqlalchemy] Re: Group by?
--On 27. Juni 2007 06:47:37 -0700 voltron <[EMAIL PROTECTED]> wrote: How can I construct the clause "group by" using SQL construction? By using the group_by parameter of the select() method? -aj
[sqlalchemy] difficult query
I'm wondering how I would implement the following query. I have 3 tables A, B, C. Each table has a many-to-many relationship to the other two through an association object, call these AB, AC, BC. If I want to select all the B's associated with a certain 'a', I can do: q = B.query() q = q.filter(A.c.b_id == ab.b_id) results = q.select() And I get a list of B's. How do I get all the c's associated with the b's associated to a specific 'a', excluding those c's that are themselves related to the 'a' through AC?
[sqlalchemy] Re: backref definitions
On Jun 27, 2007, at 6:00 AM, remi jolin wrote: > > Hello, > > Suppose we have the Address and User mappers as they are defined in > SA's > documentation > > I was wondering if the 2 syntaxes below were equivalent : > > 1/ > User.mapper.add_property('addresses', relation(Address, > backref=BackRef('user', **user_args)), **addresses_args) > > 2/ > Address.mapper.add_property('user', backref='addresses', **user_args) > User.mapper.add_property('addresses', backref='user', > **addresses_args) > no, they are not. the example in 2 is incorrect. you only need one property with a backref to set up the bi-directional relationship. setting both properties in both directions will have the effect of only some of the properties taking effect... and in an undefined way (i.e. it might break). the two equivalent conditions you have in mind are: User.mapper.add_property('addresses', relation(Address, backref=backref('user', **user_args)), **addresses_args) and Address.mapper.add_property('user', attributeext=attributes.GenericBackrefExtension('addresses'), is_backref=True, **user_args) User.mapper.add_property('addresses', attributeext=attributes.GenericBackrefExtension('user'), **addresses_args) where GenericBackrefExtension handles bi-directional attribute population, i.e. someaddress.user = User() firing off someaddress.user.addresses.append(someaddress), and the "is_backref" flag is needed to be on one side of the bi-directional relationship in some cases during a flush() (currently, only in post_update relations).
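As a plain-Python illustration of the bi-directional population described above: this is not the actual GenericBackrefExtension implementation, and only the scalar side (Address.user) is instrumented here, whereas SA instruments the collection side as well. The class names just follow the thread's example:

```python
# Sketch only: assigning one side of the relationship fires the other.
class User(object):
    def __init__(self):
        self.addresses = []

class Address(object):
    _user = None

    @property
    def user(self):
        return self._user

    @user.setter
    def user(self, u):
        if self._user is not None:            # detach from any old parent
            self._user.addresses.remove(self)
        self._user = u
        if u is not None and self not in u.addresses:
            u.addresses.append(self)          # the "backref" firing

someaddress = Address()
someuser = User()
someaddress.user = someuser                   # also populates the collection
print(someaddress in someuser.addresses)      # True
```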
[sqlalchemy] Group by?
How can I construct the clause "group by" using SQL construction? Thanks
[sqlalchemy] backref definitions
Hello, Suppose we have the Address and User mappers as they are defined in SA's documentation. I was wondering if the two syntaxes below were equivalent: 1/ User.mapper.add_property('addresses', relation(Address, backref=BackRef('user', **user_args)), **addresses_args) 2/ Address.mapper.add_property('user', backref='addresses', **user_args) User.mapper.add_property('addresses', backref='user', **addresses_args)
[sqlalchemy] Re: Column name mapping problem in 0.3.7
Hi Rick, I still have rowcount issues on the latest svn. Also, is 'money' a native type? I'm currently setting ischema_names['money'] = MSNumeric Thanks, Graham On Jun 6, 6:34 pm, "Rick Morrison" <[EMAIL PROTECTED]> wrote: > Hi Graham, > > There's a good chance that only you and I are using pymssql, and I don't > have the long identifiers problem, so it kind of dropped through the > cracks, sorry. > > I've checked in the 30-character thing, but I've left off the sane_rowcount > for now. I had run into issues with that back in March, and I ended up > patching pymssql to fix the problem rather than set sane_rowcount to False. > Can't remember why now, I'm currently running our local test suite which > should remind me. > > Rick > > On 6/6/07, Graham Stratton <[EMAIL PROTECTED] > wrote: > > > > > I'm bringing this old thread up because I'm still having the same > > issue with 0.3.8. In order to use mssql I have to add > > > > def max_identifier_length(self): > > > return 30 > > > to the pymssql dialect. > > > I also find that I need to set has_sane_rowcount=False (as I have had > > to with every release). > > > Is anyone else using pymssql? Do you have the same problems? Should > > these changes be made on the trunk? > > > Thanks, > > > Graham > > On May 1, 7:13 pm, Michael Bayer <[EMAIL PROTECTED]> wrote: > > > it is max_identifier_length() on Dialect. > > > > ive also gone and figured out why it is hard to separate the max > > > length of columns vs. that of labels... its because of some issues > > > that arise with some auto-labeling that happens inside of > > > ansisql.py, so its fortunate i dont have to get into that. > > > > On May 1, 2007, at 12:57 PM, Rick Morrison wrote: > > > > > The underlying DBlib limits *all* identifier names, including > > > > column names to 30 chars anyway, so no issue there. > > > > > Where does the character limit go in the dialect? Can I follow > > > > Oracle as an example?
> > > > > On 5/1/07, Michael Bayer <[EMAIL PROTECTED]> wrote: > > > > > On May 1, 2007, at 11:18 AM, Rick Morrison wrote: > > > > > > The label-truncation code is fine. The issue isn't SA. It's the > > > > > DBAPI that pymssql rides on top of... identifier limit is 30 chars, > > > > > is deprecated by Microsoft, it will never be fixed. > > > > > > Try pyodbc, which has no such limitation. > > > > > OK well, we should put the 30-char limit into pymssql's dialect. > > > > however, the way the truncation works right now, its going to chop > > > > off all the column names too... which means unless i fix that, pymssql > > > > can't be used with any columns over 30 chars in size.