[sqlalchemy] [Q][0.7.9] problem with with_entities(), over() and label()
Hello. I am trying to adapt the WindowedRangeQuery recipe (http://www.sqlalchemy.org/trac/wiki/UsageRecipes/WindowedRangeQuery) and I have just stumbled upon the following issue:

    def _q_windows(self, window_size):
        q = self._query.with_entities(
            self._column,
            func.row_number().over(order_by=self._column).label('rownum'),
        )
        if window_size > 1:
            q = q.filter("rownum % {} = 1".format(window_size))
        return q

where:

    self._query = session.query(Foo)
    self._column = Foo.id
    window_size = 2

The returned query q produces the following SQL:

    SELECT foo.id AS foo_id, row_number() OVER (ORDER BY foo.id) AS rownum
    FROM foo
    WHERE rownum % 2 = 1

When executed, it generates the following error:

    sqlalchemy.exc.ProgrammingError: (ProgrammingError) column "rownum" does not exist

I am kind of lost here, because something very similar should work according to the original recipe. Note that I apply with_entities() on the original query whereas the recipe uses from_self() on the column. Do you have any idea what I should do to fix this?

Thank you in advance,

Ladislav Lenart

-- You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To unsubscribe from this group and stop receiving emails from it, send an email to sqlalchemy+unsubscr...@googlegroups.com. To post to this group, send email to sqlalchemy@googlegroups.com. Visit this group at http://groups.google.com/group/sqlalchemy?hl=en. For more options, visit https://groups.google.com/groups/opt_out.
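[Editorial note] The intent of the WindowedRangeQuery recipe is to select every N-th ordered key as a window boundary and then fetch rows range by range. That chunking logic, independent of SQLAlchemy, can be sketched in plain Python (the function name `window_ranges` is hypothetical, not part of the recipe):

```python
# Sketch of the windowing idea behind the recipe: pick every N-th
# ordered id (i.e. the rows where rownum % N == 1) as boundaries,
# then yield (start, end) pairs usable as WHERE-clause ranges.

def window_ranges(ids, window_size):
    """Yield (start, end) pairs covering ids in chunks of window_size.

    end is None for the last, open-ended window.
    """
    ids = sorted(ids)
    boundaries = ids[::window_size]  # every window_size-th id
    for i, start in enumerate(boundaries):
        if i + 1 < len(boundaries):
            yield (start, boundaries[i + 1])
        else:
            yield (start, None)

print(list(window_ranges([1, 2, 3, 4, 5], 2)))
# -> [(1, 3), (3, 5), (5, None)]
```

Each pair then translates to a range condition such as `id >= start AND id < end`, which is what the recipe feeds back into the query.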
Re: [sqlalchemy] Retrieve Dates as UTC
As far as I can tell that is just for setting the time via server_default, per the linked documentation. Postgres stores all datetimes as UTC and then does the conversion on query, depending on the timezone set in the connection. This defaults to the computer's timezone. In my submission it sets the timezone on all the connections in the pool. From then on all datetimes returned in queries are UTC regardless of the database configuration. This saves a configuration dependency while still allowing other connections/applications to query in other timezones if needed. The other added benefit is that it only happens once during connection setup, not at every query call.

On Wednesday, 5 June 2013 14:44:23 UTC-7, Michael Bayer wrote:

we do this (for PG and SQL Server):

    class utcnow(expression.FunctionElement):
        key = 'utcnow'
        type = DateTime()

    @compiles(utcnow, 'postgresql')
    def _pg_utcnow(element, compiler, **kw):
        return "TIMEZONE('utc', CURRENT_TIMESTAMP)"

    @compiles(utcnow, 'mssql')
    def _ms_utcnow(element, compiler, **kw):
        return "GETUTCDATE()"

you then use utcnow() as your SQL expression. just remembered i put it in the docs too: http://docs.sqlalchemy.org/en/rel_0_8/core/compiler.html#utc-timestamp-function

On Jun 5, 2013, at 4:51 PM, Owen M owen...@gmail.com wrote:

Was hunting around for how to do this, and couldn't find any examples, but eventually figured it out. The goal is to have all dates coming out of Postgres be in UTC, regardless of the DBA's timezone configuration. The key piece of code is as follows:

    import sqlalchemy

    def set_utc_on_connect(dbapi_con, con_record):
        c = dbapi_con.cursor()
        c.execute("SET TIME ZONE UTC")
        c.close()

    sqlalchemy.event.listen(sqlalchemy.pool.Pool, 'connect', set_utc_on_connect)

Now all connections in the pool will have their timezone set to UTC, so all queries will return UTC for datetime objects with timezone specified. Just posting it here in case someone else needed the info.
Cheers, ~Owen
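[Editorial note] The effect that the connection-level `SET TIME ZONE UTC` aims for can be illustrated with the stdlib alone: however a timestamp was produced locally, normalize it to UTC before application code sees it. A minimal sketch (the helper `as_utc` is hypothetical, not from the thread):

```python
# Normalizing an aware datetime to UTC, the same invariant the
# pool-level "SET TIME ZONE UTC" listener establishes for query results.
from datetime import datetime, timedelta, timezone

def as_utc(dt):
    """Convert an aware datetime to UTC; reject naive datetimes,
    since their timezone is unknown."""
    if dt.tzinfo is None:
        raise ValueError("naive datetime; timezone unknown")
    return dt.astimezone(timezone.utc)

est = timezone(timedelta(hours=-5))
local = datetime(2013, 6, 5, 9, 30, tzinfo=est)
print(as_utc(local).isoformat())
# -> 2013-06-05T14:30:00+00:00
```

Doing this once at the connection boundary, as in Owen's listener, avoids having to remember the conversion at every call site.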
Re: [sqlalchemy] [Q][0.7.9] problem with with_entities(), over() and label()
Hello. I have already solved the issue by using a subquery:

    SELECT t.id AS t_id, t.rownum AS t_rownum
    FROM (
        SELECT foo.id AS id, row_number() OVER (ORDER BY foo.id) AS rownum
        FROM foo
    ) AS t
    WHERE rownum % 50 = 1

I have just tried your suggestion about using HAVING instead of WHERE, but that fails with the same error. Thus a label cannot be referenced elsewhere in the same query. However, I am still curious whether the original WindowedRangeQuery recipe at http://www.sqlalchemy.org/trac/wiki/UsageRecipes/WindowedRangeQuery works or also has this error.

Thank you,

Ladislav Lenart

On 6.6.2013 18:26, Charlie Clark wrote:

On 06.06.2013 at 16:30, Ladislav Lenart lenart...@volny.cz wrote:

The returned query q produces the following SQL:

    SELECT foo.id AS foo_id, row_number() OVER (ORDER BY foo.id) AS rownum
    FROM foo
    WHERE rownum % 2 = 1

When executed, it generates the following error:

    sqlalchemy.exc.ProgrammingError: (ProgrammingError) column "rownum" does not exist

Just a guess, but the error is coming from Postgres. You'll need to double-check the docs but I don't think rownum is available for the WHERE clause because OVER works like the aggregate functions. You can either try wrapping the SELECT in a correlated subquery as the example shows, or simply use HAVING rownum % 2 = 1 (I think this is the best approach but I'm far from an expert). Try the SQL directly in the DB to see which works.

Charlie
Re: [sqlalchemy] Query and compiled_cache
On Tue, Jun 4, 2013 at 5:26 PM, Claudio Freire klaussfre...@gmail.com wrote:

On Sun, Jun 2, 2013 at 9:41 PM, Claudio Freire klaussfre...@gmail.com wrote:

So the whole thing is rolled up into the "named" thing I referred to also, so that there's no need to keep a Query object hanging around; when we say bake() we're really just referring to a position in the code somewhere. So I've updated the wiki recipe to use a named system like this:

    q = s.query(Foo).\
        filter(Foo.data == bindparam('foo')).\
        bake_as("foo", cache)

    result = q.params(foo='data 12').all()

A highly cleaned up version of your test is attached. I'm still not sure I'm getting everything accounted for here! thanks for testing!

The feature is actually looking quite simple and probably works better as something built in, or at least if we added some methods to QueryContext to ease the burden of caching/copying it.

Well, if that works, it certainly covers my needs, so there would be no pressing need to incorporate it into the core. I'll let you know tomorrow.

I've done only superficial testing for now, I have to build me a test database, but I thought I'd let you know: it seems to work flawlessly till now.

Seems to break unpredictably with subqueryloads. I'll try to get a test for it.
Re: [sqlalchemy] Query and compiled_cache
On Jun 6, 2013, at 1:03 PM, Claudio Freire klaussfre...@gmail.com wrote:

On Tue, Jun 4, 2013 at 5:26 PM, Claudio Freire klaussfre...@gmail.com wrote:

On Sun, Jun 2, 2013 at 9:41 PM, Claudio Freire klaussfre...@gmail.com wrote:

So the whole thing is rolled up into the "named" thing I referred to also, so that there's no need to keep a Query object hanging around; when we say bake() we're really just referring to a position in the code somewhere. So I've updated the wiki recipe to use a named system like this:

    q = s.query(Foo).\
        filter(Foo.data == bindparam('foo')).\
        bake_as("foo", cache)

    result = q.params(foo='data 12').all()

A highly cleaned up version of your test is attached. I'm still not sure I'm getting everything accounted for here! thanks for testing!

The feature is actually looking quite simple and probably works better as something built in, or at least if we added some methods to QueryContext to ease the burden of caching/copying it.

Well, if that works, it certainly covers my needs, so there would be no pressing need to incorporate it into the core. I'll let you know tomorrow.

I've done only superficial testing for now, I have to build me a test database, but I thought I'd let you know: it seems to work flawlessly till now.

Seems to break unpredictably with subqueryloads. I'll try to get a test for it.

I think I know why that might be: subqueryload is using that same query to build a new one, and the new one still has the wrong _baked_context stuck on it. So one way to fix would be to hack into the recipe to blow away the _baked_context when subqueryload starts working with the query. Really, _baked_context should be automatically blown away for all generative methods except for params() and maybe some others. We should add some logic to _clone() and _generative() to control this (see below). But that starts defeating the purpose of the baking.
So really, this starts looking like the actual feature - the Subqueryloader would want to detect this "baked" condition, and then bake the query that it makes as well, using the original baked name appended with the loader path that it's working with. See, now that is starting to make this look cool (but we've gone into "this is a built-in feature now" territory).

    diff --git a/lib/sqlalchemy/orm/query.py b/lib/sqlalchemy/orm/query.py
    index c2ec72c..b458975 100644
    --- a/lib/sqlalchemy/orm/query.py
    +++ b/lib/sqlalchemy/orm/query.py
    @@ -40,12 +40,15 @@ from ..sql import (
     __all__ = ['Query', 'QueryContext', 'aliased']

    -def _generative(*assertions):
    +def _generative(*assertions, maintain_baked=False):
         """Mark a method as generative."""

         @util.decorator
         def generate(fn, *args, **kw):
             self = args[0]._clone()
    +        if not maintain_baked:
    +            del self._baked_cache
    +            del self._baked_context
             for assertion in assertions:
                 assertion(self, fn.func_name)
             fn(self, *args[1:], **kw)
    @@ -1157,7 +1160,7 @@ class Query(object):
             self._lockmode = mode

    -    @_generative()
    +    @_generative(maintain_baked=True)
         def params(self, *args, **kwargs):
             """add values for bind parameters which may have been
             specified in filter()."""
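[Editorial note] The core of the `bake_as(name, cache)` idea discussed in this thread is a named compilation cache: the expensive build step runs once per name, and later calls reuse the result while still supplying fresh parameters. A toy pure-Python version (all names here are hypothetical; this is not the SQLAlchemy API):

```python
# Toy named-cache illustrating the bake_as() pattern: the build
# callable runs at most once per name; subsequent lookups are hits.

class BakedCache:
    def __init__(self):
        self._store = {}
        self.misses = 0

    def bake(self, name, build):
        """Return the object cached under name, building it at most once."""
        if name not in self._store:
            self.misses += 1
            self._store[name] = build()
        return self._store[name]

cache = BakedCache()
compiled1 = cache.bake("foo", lambda: "SELECT ... WHERE data = :foo")
compiled2 = cache.bake("foo", lambda: "SELECT ... WHERE data = :foo")
print(cache.misses)
# -> 1
```

The subqueryload breakage described above corresponds to a derived query incorrectly reusing the parent's cache entry; the proposed fix is to drop the cached state on every generative method except params().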
Re: [sqlalchemy] [Q][0.7.9] problem with with_entities(), over() and label()
On Jun 6, 2013, at 12:56 PM, Ladislav Lenart lenart...@volny.cz wrote:

Hello. I have already solved the issue by using a subquery:

    SELECT t.id AS t_id, t.rownum AS t_rownum
    FROM (
        SELECT foo.id AS id, row_number() OVER (ORDER BY foo.id) AS rownum
        FROM foo
    ) AS t
    WHERE rownum % 50 = 1

I have just tried your suggestion about using HAVING instead of WHERE, but that fails with the same error. Thus a label cannot be referenced elsewhere in the same query. However, I am still curious whether the original WindowedRangeQuery recipe at http://www.sqlalchemy.org/trac/wiki/UsageRecipes/WindowedRangeQuery works or also has this error.

the recipe as you noted uses from_self(), which means "wrap myself in a subquery", so that's where the necessary subquery is applied.
Re: [sqlalchemy] [Q][0.7.9] problem with with_entities(), over() and label()
On 06.06.2013 at 18:56, Ladislav Lenart lenart...@volny.cz wrote:

Hello. I have already solved the issue by using a subquery:

    SELECT t.id AS t_id, t.rownum AS t_rownum
    FROM (
        SELECT foo.id AS id, row_number() OVER (ORDER BY foo.id) AS rownum
        FROM foo
    ) AS t
    WHERE rownum % 50 = 1

I have just tried your suggestion about using HAVING instead of WHERE, but that fails with the same error. Thus a label cannot be referenced elsewhere in the same query.

Yeah, sorry. As it says in the docs:

    "If the query contains any window functions (see Section 3.5, Section 9.21 and Section 4.2.8), these functions are evaluated after any grouping, aggregation, and HAVING filtering is performed."

So something like it would be possible with standard aggregate functions, but not with anything like row_number(), which must use a window. Compared with Python it seems a weird way of striding through the results, but maybe that's just SQL.

Charlie

--
Charlie Clark
Managing Director
Clark Consulting & Research
German Office
Kronenstr. 27a
Düsseldorf
D-40217
Tel: +49-211-600-3657
Mobile: +49-178-782-6226
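[Editorial note] The working form from this thread can be demonstrated with the stdlib's sqlite3 module, since SQLite also implements `row_number() OVER` (assuming a SQLite build with window-function support, 3.25 or newer): the window function is computed in a subquery, and the outer WHERE filters on its label.

```python
# The subquery pattern that solved the thread's problem, against SQLite:
# rownum is computed in the inner SELECT, so the outer WHERE can use it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO foo (id) VALUES (?)",
                 [(i,) for i in range(1, 8)])

rows = conn.execute("""
    SELECT t.id, t.rownum FROM (
        SELECT foo.id AS id,
               row_number() OVER (ORDER BY foo.id) AS rownum
        FROM foo
    ) AS t
    WHERE t.rownum % 2 = 1
""").fetchall()
print(rows)
# -> [(1, 1), (3, 3), (5, 5), (7, 7)]
```

Filtering `rownum` directly in the same SELECT's WHERE clause fails, for the evaluation-order reason quoted from the Postgres docs above: WHERE runs before window functions are computed.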
Re: [sqlalchemy] Connect to SQL Server (with pyodbc) getting error: sqlalchemy.exc.DBAPIError: (Error) ('IM002', '[IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default dri
I will show you a short program that you can use to experiment with the Pyodbc connector - the purpose of this program is to illustrate what SQLAlchemy will send to pyodbc.connect():

    from sqlalchemy.connectors import pyodbc
    from sqlalchemy.engine.url import make_url

    conn = pyodbc.PyODBCConnector()
    url = make_url('mssql+pyodbc://DSN=py_test; Trusted_Connection=Yes')
    connect_args = conn.create_connect_args(url)
    print(connect_args)

This returns:

    [['dsn=DSN=py_test; Trusted_Connection=Yes;Trusted_Connection=Yes'], {}]

where you can see there's an extra "DSN=" in there. There's a large number of connection examples for Pyodbc here: http://docs.sqlalchemy.org/en/rel_0_8/dialects/mssql.html#additional-connection-examples

You can see that for a standard Trusted_Connection, you only need the DSN name:

    create_engine('mssql+pyodbc://mydsn')

which our test script shows the arguments as:

    [['dsn=mydsn;Trusted_Connection=Yes'], {}]

looks good to me, so good luck!

On Jun 5, 2013, at 9:47 PM, Victor Reichert vfr...@gmail.com wrote:

Hello World! This is my first foray into Python and SQLAlchemy, and I'm spinning my wheels. I'm running the code below and am able to connect to my DB and query data without error.
    import pyodbc
    cnxn = pyodbc.connect('DSN=py_test; Trusted_Connection=Yes')

However, when I try:

    import sqlalchemy
    engine = sqlalchemy.create_engine('mssql+pyodbc://DSN=py_test; Trusted_Connection=Yes')
    result = engine.execute("SELECT * FROM dbo.test_table")

I receive the following error. I am running Python 3.3 on 32-bit Windows 7 Enterprise:

    Traceback (most recent call last):
      File "C:\Python33\lib\site-packages\sqlalchemy\pool.py", line 757, in _do_get
        return self._pool.get(wait, self._timeout)
      File "C:\Python33\lib\site-packages\sqlalchemy\util\queue.py", line 166, in get
        raise Empty
    sqlalchemy.util.queue.Empty

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "C:\Python33\lib\site-packages\sqlalchemy\engine\strategies.py", line 80, in connect
        return dialect.connect(*cargs, **cparams)
      File "C:\Python33\lib\site-packages\sqlalchemy\engine\default.py", line 285, in connect
        return self.dbapi.connect(*cargs, **cparams)
    pyodbc.Error: ('IM002', '[IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified (0) (SQLDriverConnect)')

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "C:\Users\V114804\workspace\Remetrica_Simulated_ILC_WW_AP_20130520_ELT\Src\Test_DB_Connection.py", line 11, in <module>
        result = engine.execute("SELECT * FROM dbo.test_table")
      File "C:\Python33\lib\site-packages\sqlalchemy\engine\base.py", line 1613, in execute
        connection = self.contextual_connect(close_with_result=True)
      File "C:\Python33\lib\site-packages\sqlalchemy\engine\base.py", line 1661, in contextual_connect
        self.pool.connect(),
      File "C:\Python33\lib\site-packages\sqlalchemy\pool.py", line 272, in connect
        return _ConnectionFairy(self).checkout()
      File "C:\Python33\lib\site-packages\sqlalchemy\pool.py", line 425, in __init__
        rec = self._connection_record = pool._do_get()
      File "C:\Python33\lib\site-packages\sqlalchemy\pool.py", line 777, in _do_get
        con = self._create_connection()
      File "C:\Python33\lib\site-packages\sqlalchemy\pool.py", line 225, in _create_connection
        return _ConnectionRecord(self)
      File "C:\Python33\lib\site-packages\sqlalchemy\pool.py", line 318, in __init__
        self.connection = self.__connect()
      File "C:\Python33\lib\site-packages\sqlalchemy\pool.py", line 368, in __connect
        connection = self.__pool._creator()
      File "C:\Python33\lib\site-packages\sqlalchemy\engine\strategies.py", line 87, in connect
        ) from e
    sqlalchemy.exc.DBAPIError: (Error) ('IM002', '[IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified (0) (SQLDriverConnect)') None None

I have also tried:

    import sqlalchemy
    import pyodbc

    def connect():
        pyodbc.connect('DRIVER={SQL Server};Server=SDAWWRMSD05;Database=ReMetrica_Results_201207;Trusted_Connection=Yes')
        print('Connect Method Created')

    created_engine = sqlalchemy.create_engine('mssql://', creator=connect)
    created_result = created_engine.execute("SELECT * FROM dbo.test_table")

The definition is called, then the program hangs. If anyone could please give me some advice on how to get around this that would be great. Thank you for your help, please let me know if I can provide any additional information.

~Victor
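[Editorial note] The diagnostic above can be sketched without any database: the pyodbc connector treats the host portion of the URL as the DSN name, so embedding `DSN=...` in the URL doubles the prefix. The helper below is hypothetical, not the SQLAlchemy implementation; it only illustrates the string assembly.

```python
# Hypothetical sketch of how a dsn-based ODBC connect string gets
# assembled from the URL's host portion; not SQLAlchemy's actual code.

def odbc_connect_string(host, trusted=True):
    parts = ["dsn=%s" % host]
    if trusted:
        parts.append("Trusted_Connection=Yes")
    return ";".join(parts)

# correct: pass only the DSN name as the host
print(odbc_connect_string("mydsn"))
# -> dsn=mydsn;Trusted_Connection=Yes

# incorrect: putting "DSN=py_test" in the URL doubles the prefix,
# which the ODBC driver manager then fails to resolve
print(odbc_connect_string("DSN=py_test"))
# -> dsn=DSN=py_test;Trusted_Connection=Yes
```

This is why `create_engine('mssql+pyodbc://mydsn')` works while `create_engine('mssql+pyodbc://DSN=py_test; ...')` raises IM002.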
[sqlalchemy] Feature suggestion: fake ForeignKeyConstraints to simplify joins
I've written a handful of primaryjoin and secondaryjoin attributes on relationships. This mechanism is flexible, but it's error-prone, and I think that, at least for all the cases I've personally encountered, there could be a better way. As an example, I have:

    thing = Table('thing', metadata,
        Column('thing_id', Integer, primary_key=True),
        Column('favorite_group_id', Integer),
        ForeignKeyConstraint(
            ['thing_id', 'favorite_group_id'],
            ['rel.thing_id', 'rel.group_id'],
            use_alter=True, name='foobar'))

    group = Table('group', metadata,
        Column('group_id', Integer, primary_key=True))

    rel = Table('rel', metadata,
        Column('group_id', Integer, ForeignKey('group.group_id'), primary_key=True),
        Column('thing_id', Integer, ForeignKey('thing.thing_id'), primary_key=True))

IOW I have things and groups. The rel table is a many-to-many relation between things and groups. A thing also may have a favorite group; if so, there has to be a rel between that thing and its favorite group. I don't have a foreign key directly from favorite_group_id because the equivalent constraint is already implied by the existing foreign keys. Everything is straightforward to map, except the favorite_group relation on thing -- there's no foreign key. So I can set secondary and primaryjoin, but here's a different suggestion: add a couple of flags to ForeignKeyConstraint so that I can have three kinds of ForeignKeyConstraint:

1. The normal kind: the mapper setup and the DDL both see it.
2. A mapper-only ForeignKeyConstraint: the mapper will look at it to deduce join conditions, but there's no DDL.
3. A DDL-only ForeignKeyConstraint: there's DDL but the mapper won't see it.
So I could do:

    thing = Table('thing', metadata,
        Column('thing_id', Integer, primary_key=True),
        Column('favorite_group_id', Integer),
        ForeignKeyConstraint(
            ['thing_id', 'favorite_group_id'],
            ['rel.thing_id', 'rel.group_id'],
            use_alter=True, use_for_mapper=False, name='foobar'),
        ForeignKeyConstraint(
            ['favorite_group_id'], ['group.group_id'],
            use_alter=True, emit_ddl=False, name='foobar'))

And now the trivial mapper relations will all work with no fiddling. Thoughts?

--Andy
Re: [sqlalchemy] Feature suggestion: fake ForeignKeyConstraints to simplify joins
On Jun 6, 2013, at 2:21 PM, Andy aml...@gmail.com wrote:

I've written a handful of primaryjoin and secondaryjoin attributes on relationships. This mechanism is flexible, but it's error-prone, and I think that, at least for all the cases I've personally encountered, there could be a better way. As an example, I have:

    thing = Table('thing', metadata,
        Column('thing_id', Integer, primary_key=True),
        Column('favorite_group_id', Integer),
        ForeignKeyConstraint(
            ['thing_id', 'favorite_group_id'],
            ['rel.thing_id', 'rel.group_id'],
            use_alter=True, name='foobar'))

    group = Table('group', metadata,
        Column('group_id', Integer, primary_key=True))

    rel = Table('rel', metadata,
        Column('group_id', Integer, ForeignKey('group.group_id'), primary_key=True),
        Column('thing_id', Integer, ForeignKey('thing.thing_id'), primary_key=True))

IOW I have things and groups. The rel table is a many-to-many relation between things and groups. A thing also may have a favorite group; if so, there has to be a rel between that thing and its favorite group. I don't have a foreign key directly from favorite_group_id because the equivalent constraint is already implied by the existing foreign keys.

that's a fascinating FK setup, took me a few minutes to understand what i was looking at, but sure I see why it's that way.

Everything is straightforward to map, except the favorite_group relation on thing -- there's no foreign key.

well there is, it's to the primary key of your rel table. If you were using the association pattern, then (I'm pretty sure) this would all work out naturally. What you're doing here though is not mapping rel, and instead doing kind of an artificial primaryjoin over to group directly. So this is exactly the reason primaryjoin exists: when you want to map around the normal linkages between tables.
So I can set secondary and primaryjoin, but here's a different suggestion: add a couple of flags to ForeignKeyConstraint so that I can have three kinds of ForeignKeyConstraint:

OK, you need to know that I really, really hate flags. While we have lots of them, each one has arrived to the codebase in shame. Flags mean that the normal construction of your objects is not good enough; you also have to deal with a big panel of switches that do strange things. They make your API more complicated and mysterious.

    ForeignKeyConstraint(['thing_id', 'favorite_group_id'],
                         ['rel.thing_id', 'rel.group_id'],
                         use_alter=True, use_for_mapper=False, name='foobar'),

so this flag is the most evil of all - it's a flag that explicitly leaks an ORM concept into the Core. I think we are very close to having no leakage of explicit ORM concepts in the Core whatsoever at the moment (even in private APIs), but it's a big library so I might be wrong. But that's certainly a goal. If we were to have hints that the ORM could understand inside of schema objects, it would be through some agnostic system like ForeignKeyConstraint(..., info={"orm": False}), or something like that (something better than that).

    ForeignKeyConstraint(['favorite_group_id'], ['group.group_id'],
                         use_alter=True, emit_ddl=False, name='foobar'))

And now the trivial mapper relations will all work with no fiddling.

This flag at least is Core-only, but still: if emit_ddl is False then why do we need use_alter and name?

Thoughts?

When a beginner comes to SQLAlchemy, and they want to make a custom join condition - now there are two entirely different ways of doing it. Which one should they use and why? That's probably the biggest concern I have here. Two ways to do something means you have to produce the rationale for when to use each one. Not only that, but we are now placing ORM-specific configuration into our Table metadata.
What's wrong with doing it in relationship()? Clearly, the way primaryjoin works, in that it's an expression, is cumbersome. But that seems like a much easier problem to solve, just by adding some extra sugar to relationship itself, like:

    relationship(MyTable, primaryjoin=[(a, b), (c, d)])

How is that more fiddling than the other approach?
Re: [sqlalchemy] Feature suggestion: fake ForeignKeyConstraints to simplify joins
On Jun 6, 2013, at 3:21 PM, Michael Bayer mike...@zzzcomputing.com wrote:

Not only that, but we are now placing ORM-specific configuration into our Table metadata. What's wrong with doing it in relationship()? Clearly, the way primaryjoin works, in that it's an expression, is cumbersome. But that seems like a much easier problem to solve, just by adding some extra sugar to relationship itself, like relationship(MyTable, primaryjoin=[(a, b), (c, d)]). How is that more fiddling than the other approach?

maybe we could even go to that Place We've Never Gone (since I realized [(a, b), (c, d)] doesn't say anything about 'foreign'), and spell out the direction:

    relationship(Group, manytoone("favorite_group_id", "group_id"))

manytoone() would be some declarative sugar that expands into:

    foreign(thing.c.favorite_group_id) == remote(group.c.group_id)

at least here, we can make the case that these little annotations onetomany(), manytoone(), manytomany() are shortcuts to spelling out these particular joins.
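[Editorial note] The expansion described above can be sketched as a pure-Python stand-in: `manytoone()` would take the local and remote column names and produce the annotated join condition. This is not SQLAlchemy code; every name here is hypothetical, and the "condition" is rendered as a string purely to show the expansion the sugar would perform.

```python
# Conceptual stand-in for the proposed manytoone() sugar: expand a
# (local, remote) column pair into the foreign()/remote() annotated
# join condition the ORM would otherwise require you to spell out.

def manytoone(local_col, remote_col):
    """Expand a many-to-one shorthand into its annotated join condition."""
    return "foreign(%s) == remote(%s)" % (local_col, remote_col)

cond = manytoone("thing.c.favorite_group_id", "group.c.group_id")
print(cond)
# -> foreign(thing.c.favorite_group_id) == remote(group.c.group_id)
```

The appeal of the proposal is that the direction (many-to-one vs one-to-many) is stated by the function name rather than inferred from which side carries the foreign() annotation.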
Re: [sqlalchemy] Feature suggestion: fake ForeignKeyConstraints to simplify joins
On 06.06.2013 at 20:21, Andy aml...@gmail.com wrote:

IOW I have things and groups. The rel table is a many-to-many relation between things and groups. A thing also may have a favorite group; if so, there has to be a rel between that thing and its favorite group.

Are favourites optional? Why not normalise to Favourites with a strict 1:1 with things and groups?

Charlie

--
Charlie Clark
Managing Director
Clark Consulting & Research
German Office
Kronenstr. 27a
Düsseldorf
D-40217
Tel: +49-211-600-3657
Mobile: +49-178-782-6226
Re: [sqlalchemy] Feature suggestion: fake ForeignKeyConstraints to simplify joins
On Jun 6, 2013, at 5:18 PM, Charlie Clark charlie.cl...@clark-consulting.eu wrote:

On 06.06.2013 at 20:21, Andy aml...@gmail.com wrote:

IOW I have things and groups. The rel table is a many-to-many relation between things and groups. A thing also may have a favorite group; if so, there has to be a rel between that thing and its favorite group.

Are favourites optional? Why not normalise to Favourites with a strict 1:1 with things and groups?

by putting the FK constraint to the composite primary key of rel, it guarantees that the favorite item is a member of the thing-groups collection.
Re: [sqlalchemy] Feature suggestion: fake ForeignKeyConstraints to simplify joins
On 06.06.2013 at 23:36, Michael Bayer mike...@zzzcomputing.com wrote:

On Jun 6, 2013, at 5:18 PM, Charlie Clark charlie.cl...@clark-consulting.eu wrote:

On 06.06.2013 at 20:21, Andy aml...@gmail.com wrote:

IOW I have things and groups. The rel table is a many-to-many relation between things and groups. A thing also may have a favorite group; if so, there has to be a rel between that thing and its favorite group.

Are favourites optional? Why not normalise to Favourites with a strict 1:1 with things and groups?

by putting the FK constraint to the composite primary key of rel, it guarantees that the favorite item is a member of the thing-groups collection.

I understand that; I just wonder whether every thing has a favourite or not, in which case the structure is not fully normalised. That is how I would do it, because it makes the projections easier - well, to my mind at least. And wouldn't it resolve the join problem?

Charlie

--
Charlie Clark
Managing Director
Clark Consulting & Research
German Office
Kronenstr. 27a
Düsseldorf
D-40217
Tel: +49-211-600-3657
Mobile: +49-178-782-6226
Re: [sqlalchemy] Feature suggestion: fake ForeignKeyConstraints to simplify joins
On Jun 6, 2013, at 5:40 PM, Charlie Clark charlie.cl...@clark-consulting.eu wrote:

On 06.06.2013 at 23:36, Michael Bayer mike...@zzzcomputing.com wrote:

On Jun 6, 2013, at 5:18 PM, Charlie Clark charlie.cl...@clark-consulting.eu wrote:

On 06.06.2013 at 20:21, Andy aml...@gmail.com wrote:

IOW I have things and groups. The rel table is a many-to-many relation between things and groups. A thing also may have a favorite group; if so, there has to be a rel between that thing and its favorite group.

Are favourites optional? Why not normalise to Favourites with a strict 1:1 with things and groups?

by putting the FK constraint to the composite primary key of rel, it guarantees that the favorite item is a member of the thing-groups collection.

I understand that; I just wonder whether every thing has a favourite or not, in which case the structure is not fully normalised. That is how I would do it, because it makes the projections easier - well, to my mind at least. And wouldn't it resolve the join problem?

can you show me the alternate design you have in mind? I'm not sure what "normalise to Favourites with strict 1:1" looks like. this is all just my own curiosity. the issue at hand is something different.
Re: [sqlalchemy] Feature suggestion: fake ForeignKeyConstraints to simplify joins
On Thursday, June 6, 2013 2:40:57 PM UTC-7, Charlie Clark wrote:

> I understand that. I just wonder whether every thing has a favourite or not, in which case the structure is not fully normalised, and that is how I would do it because it makes the projections easier. Well, to my mind at least. And, wouldn't it resolve the join problem?

I may be misunderstanding the question, but the reason that having a favorite is optional is that I'm using MySQL, and MySQL doesn't support deferred constraints. So if favorite were NOT NULL, then there would be no way to create the thing.

Just to muddy the waters some more, that many-to-many would be nice, too -- my funny foreign key constraint screws it up impressively.

--Andy

#!/usr/bin/python
from sqlalchemy import *
from sqlalchemy import orm
import sqlalchemy as sa
import sqlalchemy.events

# This could be improved.
def _explicit_rel_to_parent(parent_class, parent_table, child_table,
                            using=None,  # Specify instead of cols below
                            remote_col_name=None, local_col_obj=None,
                            backref_name=None, post_update=False):
    if using is not None:
        if remote_col_name is not None or local_col_obj is not None:
            raise ValueError('using is mutually exclusive with explicit cols')
        remote_col_name = using
        local_col_obj = child_table.columns[using]
    remote_col_obj = parent_table.columns[remote_col_name]
    # _ca_backref is defined elsewhere in the original script
    backref_obj = (_ca_backref(backref_name)
                   if backref_name is not None else None)
    return orm.relationship(parent_class,
                            foreign_keys=[local_col_obj],
                            primaryjoin=(remote_col_obj == local_col_obj),
                            backref=backref_obj,
                            post_update=post_update)

metadata = sa.MetaData()

group = sa.Table(
    'group', metadata,
    Column('group_id', Integer, primary_key=True))

class Group(object):
    pass

sa.orm.mapper(Group, group)

thing = sa.Table(
    'thing', metadata,
    Column('thing_id', Integer, primary_key=True,
           autoincrement='ignore_fk'),
    Column('favorite_group_id', Integer, nullable=True),
    # Comment this out for remote_side to be correct
    ForeignKeyConstraint(['favorite_group_id', 'thing_id'],
                         ['rel.group_id', 'rel.thing_id'],
                         use_alter=True, name='foobar'))

class Thing(object):
    pass

rel = sa.Table(
    'rel', metadata,
    Column('thing_id', Integer, ForeignKey('thing.thing_id'),
           primary_key=True, nullable=False, index=True),
    Column('group_id', Integer, ForeignKey('group.group_id'),
           primary_key=True, nullable=False, index=True),
    mysql_engine='INNODB')

sa.orm.mapper(Thing, thing, properties={
    'groups': orm.relationship(
        Group, secondary=rel,
        primaryjoin=(thing.c.thing_id == rel.c.thing_id),
        # Uncommenting this gets it wrong regardless of foreign keys
        #remote_side=(rel.c.thing_id, rel.c.group_id),
        # Why is this correct?
        foreign_keys=[rel.c.thing_id, rel.c.group_id],
        backref='things'),
    'favorite_group': _explicit_rel_to_parent(
        parent_class=Group, parent_table=group, child_table=thing,
        remote_col_name='group_id',
        local_col_obj=thing.c.favorite_group_id,
        post_update=True),
})

print "Thing's properties:"
for p in orm.class_mapper(Thing).iterate_properties:
    if isinstance(p, orm.properties.RelationshipProperty):
        print '%s:' % p.key
        print '    remote_side = %r' % p.remote_side
        print '    secondaryjoin = %s' %
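[Editorial sketch, not part of the attached script: a minimal stdlib sqlite3 demo of what the composite FK in the schema above enforces, i.e. thing.favorite_group_id must pair with thing_id as an existing row of rel. The groups table is renamed grp to avoid the reserved word. A NULL favourite skips the FK check, which is why the column has to stay nullable on engines without deferred constraints, and why the ORM side needs post_update to break the insert cycle.]

```python
# Insert ordering forced by the circular constraint:
# thing (favourite NULL) -> rel membership -> UPDATE thing's favourite.
import sqlite3

con = sqlite3.connect(":memory:")
con.isolation_level = None          # autocommit, so the pragma takes effect
con.execute("PRAGMA foreign_keys = ON")
con.executescript("""
    CREATE TABLE grp (group_id INTEGER PRIMARY KEY);
    CREATE TABLE thing (
        thing_id INTEGER PRIMARY KEY,
        favorite_group_id INTEGER,   -- nullable on purpose
        FOREIGN KEY (thing_id, favorite_group_id)
            REFERENCES rel (thing_id, group_id)
    );
    CREATE TABLE rel (
        thing_id INTEGER NOT NULL REFERENCES thing (thing_id),
        group_id INTEGER NOT NULL REFERENCES grp (group_id),
        PRIMARY KEY (thing_id, group_id)
    );
""")
con.execute("INSERT INTO grp VALUES (10)")
con.execute("INSERT INTO thing VALUES (1, NULL)")   # favourite unknown yet
con.execute("INSERT INTO rel VALUES (1, 10)")       # membership first
con.execute("UPDATE thing SET favorite_group_id = 10 WHERE thing_id = 1")

try:
    # (1, 99) is not a row of rel, so the composite FK rejects it:
    # the favourite must be one of the thing's own groups.
    con.execute("UPDATE thing SET favorite_group_id = 99 WHERE thing_id = 1")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```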
[sqlalchemy] [BUG][PATCH] Function names not quoted when necessary
Function names in SQL can contain pretty much anything, e.g.:

=# create function "A Bug?"(integer) returns integer as $$ select $1; $$ language sql;
CREATE FUNCTION

But when attempting to use the function from SQLAlchemy:

from sqlalchemy.sql.expression import func
bug = getattr(func, "A Bug?")(1)
session.query(bug).all()

ProgrammingError: (ProgrammingError) syntax error at or near "?"
LINE 1: SELECT A Bug?(1) AS "A Bug?_1"
'SELECT A Bug?(%(A Bug?_2)s) AS "A Bug?_1"' {'A Bug?_2': 1}

-Ryan P. Kelly

diff --git a/lib/sqlalchemy/sql/compiler.py b/lib/sqlalchemy/sql/compiler.py
index dd2a6e0..ada56c6 100644
--- a/lib/sqlalchemy/sql/compiler.py
+++ b/lib/sqlalchemy/sql/compiler.py
@@ -603,7 +603,10 @@ class SQLCompiler(engine.Compiled):
         if disp:
             return disp(func, **kwargs)
         else:
-            name = FUNCTIONS.get(func.__class__, func.name + "%(expr)s")
+            name = FUNCTIONS.get(
+                func.__class__,
+                self.preparer.quote(func.name, None) + "%(expr)s"
+            )
             return ".".join(list(func.packagenames) + [name]) % \
                 {'expr': self.function_argspec(func, **kwargs)}
diff --git a/test/sql/test_compiler.py b/test/sql/test_compiler.py
index 473a422..6ea4d2a 100644
--- a/test/sql/test_compiler.py
+++ b/test/sql/test_compiler.py
@@ -2481,6 +2481,49 @@ class SelectTest(fixtures.TestBase, AssertsCompiledSQL):
             and_, (a,), (b,)
         )
 
+    def test_func(self):
+        f1 = func.somefunc(1)
+        self.assert_compile(
+            select([f1]),
+            "SELECT somefunc(:somefunc_2) AS somefunc_1",
+        )
+        self.assert_compile(
+            select([f1.label("f1")]),
+            "SELECT somefunc(:somefunc_1) AS f1",
+        )
+
+        f2 = func.somefunc(table1.c.name)
+        self.assert_compile(
+            select([f2]),
+            "SELECT somefunc(mytable.name) AS somefunc_1 FROM mytable",
+        )
+        self.assert_compile(
+            select([f2.label("f2")]),
+            "SELECT somefunc(mytable.name) AS f2 FROM mytable",
+        )
+
+        f3 = getattr(func, "Needs Quotes?")(table1.c.myid)
+        self.assert_compile(
+            select([f3]),
+            'SELECT "Needs Quotes?"(mytable.myid) AS "Needs Quotes?_1" FROM '
+            'mytable'
+        )
+        self.assert_compile(
+            select([f3.label("f3")]),
+            'SELECT "Needs Quotes?"(mytable.myid) AS f3 FROM mytable',
+        )
+
+        f4 = getattr(func, "query from pg_stat_activity; --")()
+        self.assert_compile(
+            select([f4]),
+            'SELECT "query from pg_stat_activity; --"() AS "query from '
+            'pg_stat_activity; --_1"',
+        )
+        self.assert_compile(
+            select([f4.label("f4")]),
+            'SELECT "query from pg_stat_activity; --"() AS f4'
+        )
+
 class KwargPropagationTest(fixtures.TestBase):
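[Editorial sketch, not SQLAlchemy's actual implementation: the kind of check that `preparer.quote` in the patch above performs. An identifier that is all-lowercase and "safe" is emitted as-is; anything else is wrapped in double quotes with embedded quote characters doubled, PostgreSQL-style. The tiny reserved-word set is an illustrative stand-in for the real dialect's list.]

```python
import re

# Lowercase letter or underscore, then lowercase letters, digits, _, $
SAFE_IDENTIFIER = re.compile(r"^[a-z_][a-z0-9_$]*$")
RESERVED = {"select", "from", "where", "table", "user"}  # illustrative subset

def quote_identifier(name):
    """Quote a SQL identifier only when necessary."""
    if SAFE_IDENTIFIER.match(name) and name not in RESERVED:
        return name
    # Double any embedded double quotes, then wrap the whole name
    return '"%s"' % name.replace('"', '""')

print(quote_identifier("somefunc"))   # somefunc      (safe, left alone)
print(quote_identifier("A Bug?"))     # "A Bug?"      (space, capital, punctuation)
print(quote_identifier("user"))       # "user"        (reserved word)
```

This is also why the malicious-looking name in the test above, `query from pg_stat_activity; --`, becomes harmless once quoted: the whole string is treated as a single identifier rather than parsed as SQL.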