[sqlalchemy] Transforming SQL to SQLAlchemy
Hello, all. I am trying to reshape the (somewhat tricky) SQL query below into something SQLAlchemy can generate, and I just hit a wall. This is the query:

SELECT (c).csr_id, (c).not_before, (c).not_after
FROM (SELECT (SELECT c FROM certificate c
              WHERE c.csr_id = csr.id
              ORDER BY c.not_after DESC LIMIT 1) AS c
      FROM csr OFFSET 0) s;

I just can't seem to get SQLAlchemy to emit a correlated scalar subquery like this. There ought to be something, but for the life of me I can't make it happen. InklessPen tried to help me on IRC, and we got something, but it's still not quite the same: https://gist.github.com/inklesspen/49e69e1f33f3852d348a

A reduced model is below:

class CSR(Base):
    certificates = _orm.relationship(Certificate, backref='csr')

class Certificate(Base):
    not_before = _sa.Column(_sa.DateTime, nullable=False)
    not_after = _sa.Column(_sa.DateTime, nullable=False)
    csr_id = _fkcolumn(CSR.id, nullable=False)

Regards, D.S.

-- You received this message because you are subscribed to the Google Groups sqlalchemy group. To unsubscribe from this group and stop receiving emails from it, send an email to sqlalchemy+unsubscr...@googlegroups.com. To post to this group, send email to sqlalchemy@googlegroups.com. Visit this group at http://groups.google.com/group/sqlalchemy. For more options, visit https://groups.google.com/d/optout.
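[Editor's note, for the archive: a hedged sketch of how a correlated scalar subquery like this can be spelled in later SQLAlchemy versions (the 1.4+ select()/scalar_subquery() API is assumed; table and column names follow the reduced model above, everything else is invented for illustration). The `(c).*` composite-row trick in the original SQL isn't directly expressible, so the usual workaround is one correlated scalar subquery per wanted column, or a LATERAL join on newer Postgres.]

```python
import datetime
import sqlalchemy as sa
from sqlalchemy import orm

Base = orm.declarative_base()

class CSR(Base):
    __tablename__ = "csr"
    id = sa.Column(sa.Integer, primary_key=True)

class Certificate(Base):
    __tablename__ = "certificate"
    id = sa.Column(sa.Integer, primary_key=True)
    not_before = sa.Column(sa.DateTime, nullable=False)
    not_after = sa.Column(sa.DateTime, nullable=False)
    csr_id = sa.Column(sa.ForeignKey("csr.id"), nullable=False)

# One correlated scalar subquery per wanted column; here, the newest
# certificate's not_after for each CSR row.
newest_not_after = (
    sa.select(Certificate.not_after)
    .where(Certificate.csr_id == CSR.id)
    .order_by(Certificate.not_after.desc())
    .limit(1)
    .correlate(CSR)
    .scalar_subquery()
)

stmt = sa.select(CSR.id, newest_not_after.label("not_after"))

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
with orm.Session(engine) as session:
    session.add(CSR(id=1))
    session.add_all([
        Certificate(id=1, not_before=datetime.datetime(2014, 1, 1),
                    not_after=datetime.datetime(2015, 1, 1), csr_id=1),
        Certificate(id=2, not_before=datetime.datetime(2014, 6, 1),
                    not_after=datetime.datetime(2016, 6, 1), csr_id=1),
    ])
    session.commit()
    row = session.execute(stmt).one()
```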
[sqlalchemy] scalar() works with outerjoin()?
Does scalar() work with outerjoin()? I have an outerjoin SQL statement that will only return one row. I used scalar(), but then I could not get the column values of the second table. I fixed it, but I'm just wondering: are scalar() and outerjoin() not to be mixed in the same SQL statement?
Re: [sqlalchemy] scalar() works with outerjoin()?
Oh, I found the function first(), which seems to work fine with joined queries.

On Mon, Feb 2, 2015 at 12:21 AM, Dennis dennisf...@gmail.com wrote: Does scalar() work with outerjoin()? I have an outerjoin SQL statement that will only return one row. I used scalar(), but then I could not get the column values of the second table. I fixed it, but I'm just wondering: are scalar() and outerjoin() not to be mixed in the same SQL statement?
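[Editor's note: the two methods aren't incompatible; scalar() simply discards everything but the first column of the first row, while first() keeps the whole row. A minimal sketch with the modern Core API (all table and column names here are invented for illustration):]

```python
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")
meta = sa.MetaData()
users = sa.Table("users", meta,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("name", sa.String))
addrs = sa.Table("addrs", meta,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("user_id", sa.ForeignKey("users.id")),
    sa.Column("email", sa.String))
meta.create_all(engine)

with engine.begin() as conn:
    conn.execute(users.insert(), [{"id": 1, "name": "dennis"}])
    conn.execute(addrs.insert(), [{"id": 1, "user_id": 1, "email": "d@example.com"}])

stmt = sa.select(users.c.name, addrs.c.email).select_from(users.outerjoin(addrs))

with engine.connect() as conn:
    # scalar() collapses the result to the FIRST column of the first row...
    assert conn.execute(stmt).scalar() == "dennis"
    # ...while first() keeps every selected column of that row.
    row = conn.execute(stmt).first()
    assert (row.name, row.email) == ("dennis", "d@example.com")
```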
Re: [sqlalchemy] Re: django user (using django ORM) + sqlalchemy for other db tables
Thanks for the advice -- your recommendations against this configuration were a surprise to me... It's making me rethink what I want (and how much I want it). I'll post this as a comment to the first stackoverflow question so others are made aware.

On Monday, September 9, 2013 10:17:10 PM UTC+8, Mauricio de Abreu Antunes wrote: The major problem is: everything in Django is mapped to the ORM. Even the sessions. Sorry for being negative, but that is my way of understanding this.

2013/9/9 Jonathan Vanasco jona...@findmeon.com: Honestly, I wouldn't do this. Django has a lot of magic under the hood, and its ORM does some very specific things to make this magic happen. It's not just the auth; it's how everything is structured in the database and how the app integrates with the database. You're likely to break things and be miserable. Django, Rails, etc. are all-in frameworks -- they provide a stack with several decisions made for you; it's all or nothing. I'd personally suggest you either: - Continue using Django with their ORM. Create a secondary model that uses SQLAlchemy to reflect the Django mapping. That will let you use SQLAlchemy to do advanced read queries. - Use a different framework (Pyramid, Flask, etc.; many have auth plugins that work with SQLAlchemy models).

-- Mauricio de Abreu Antunes. Github: https://github.com/mauricioabreu Twitter: https://twitter.com/maugzoide
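[Editor's note: the "reflect the Django mapping" suggestion above can be sketched with SQLAlchemy's table reflection (1.4+ `autoload_with` API assumed; the `auth_user` DDL below is only a stand-in for a table some other framework created):]

```python
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")

# Stand-in for a table that another framework (e.g. Django) already created.
with engine.begin() as conn:
    conn.execute(sa.text(
        "CREATE TABLE auth_user (id INTEGER PRIMARY KEY, username VARCHAR(150))"))
    conn.execute(sa.text(
        "INSERT INTO auth_user (id, username) VALUES (1, 'dennis')"))

# Reflect the existing table instead of redefining it by hand; SQLAlchemy
# reads the columns and types from the live database.
meta = sa.MetaData()
auth_user = sa.Table("auth_user", meta, autoload_with=engine)

# Read-only queries against the reflected table work like any other.
with engine.connect() as conn:
    name = conn.execute(sa.select(auth_user.c.username)).scalar()
```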
[sqlalchemy] django user (using django ORM) + sqlalchemy for other db tables
Any advice (or potential problems) with using SQLAlchemy with the Django framework but keeping the Django user auth module? The Django ORM would be used for the auth module (and its user table). The SQLAlchemy ORM would be used for other tables, and also to read (but not write) the Django user table. This combination would allow the power of SQLAlchemy but maintain compatibility with Django auth -- a sweet combination! Of course, the Django admin system would likely not work, but that's ok. Just wondering if others have tried this and what issues might have come up. This approach is described in the stackoverflow comment here: http://stackoverflow.com/questions/18465197/how-do-i-start-with-django-orm-to-easily-switch-to-sqlalchemy#comment27166795_18476898 The method to integrate SQLAlchemy into Django is described here: http://stackoverflow.com/questions/6606725/best-way-to-integrate-sqlalchemy-into-a-django-project
[sqlalchemy] Re: django user (using django ORM) + sqlalchemy for other db tables
PS: from my Google searching, the two references below (and their references) are really the only mention of this approach on the web! So it would be great to get some definitive advice on this seemingly reasonable approach.

On Monday, September 9, 2013 6:04:23 PM UTC+8, Dennis wrote: Any advice (or potential problems) with using SQLAlchemy with the Django framework but keeping the Django user auth module? The Django ORM would be used for the auth module (and its user table). The SQLAlchemy ORM would be used for other tables, and also to read (but not write) the Django user table. This combination would allow the power of SQLAlchemy but maintain compatibility with Django auth -- a sweet combination! Of course, the Django admin system would likely not work, but that's ok. Just wondering if others have tried this and what issues might have come up. This approach is described in the stackoverflow comment here: http://stackoverflow.com/questions/18465197/how-do-i-start-with-django-orm-to-easily-switch-to-sqlalchemy#comment27166795_18476898 The method to integrate SQLAlchemy into Django is described here: http://stackoverflow.com/questions/6606725/best-way-to-integrate-sqlalchemy-into-a-django-project
[sqlalchemy] mapper could not assemble primary key for table
I have the following table:

class Group(DeclarativeBase):
    """Group definition.

    Only the ``group_name`` column is required.
    """
    __tablename__ = 'tg_group'

    id = Column(Integer, autoincrement=True, primary_key=True)
    group_name = Column(Unicode(16), unique=True, nullable=False)
    display_name = Column(Unicode(255))
    created = Column(DateTime, default=datetime.now)
    users = relation('User', secondary=user_group_table, backref='groups')

'id' used to be 'group_id'. I am using TG2 and just wrote a migration script to change the 'group_id' field to 'id'. I then manually went into the class (above) and changed it to 'id' as well. Now upon running my project, I get the following error message:

sqlalchemy.exc.ArgumentError: Mapper Mapper|Group|tg_group could not assemble any primary key columns for mapped table 'tg_group'

I am at a loss as to why this is happening. I changed all the references to the 'group_id' field to 'id'. Googling this error often turns up the answer that some table does not have a primary key, but I do not see how this could be the case here, as I clearly state 'primary_key=True' for the 'id' field. Can anyone please help me with the troubleshooting?
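[Editor's note: the class as shown does declare a primary key, so a model of this shape configures cleanly on its own. The sketch below (modern declarative API assumed; the User relation is omitted to keep it self-contained) can be used to confirm the mapping itself is sound, which points suspicion elsewhere, e.g. at a stale reflected Table still carrying the old group_id column in the MetaData:]

```python
import datetime
import sqlalchemy as sa
from sqlalchemy import orm

Base = orm.declarative_base()

class Group(Base):
    __tablename__ = "tg_group"

    id = sa.Column(sa.Integer, autoincrement=True, primary_key=True)
    group_name = sa.Column(sa.Unicode(16), unique=True, nullable=False)
    display_name = sa.Column(sa.Unicode(255))
    created = sa.Column(sa.DateTime, default=datetime.datetime.now)

# Force mapper configuration now; this is the point where SQLAlchemy would
# raise ArgumentError if no primary key column could be assembled.
orm.configure_mappers()
assert [c.name for c in Group.__table__.primary_key] == ["id"]
```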
[sqlalchemy] no server side cursors in SA 0.4.6?
Using Postgres 8.3.1. Consider the following script. In SA 0.4.3 it works as intended. In SA 0.4.6, it does not. In particular, the time to get the resultset in 0.4.3 is sub-second; the time in 0.4.6 is about 20 seconds. Also, when running on 0.4.3 the memory consumption of the script is constant, under 10MB. When running on 0.4.6, it grows to hundreds of MB and is dependent on the size of the result set. It seems to me that 0.4.3 used a cursor like it was configured to, and 0.4.6 ignored the server_side_cursors=True parameter to the create_engine call. How do I make 0.4.6 use server side cursors?

#!/usr/bin/python
from time import time
from sqlalchemy import text
from sqlalchemy import create_engine

stime = time()
engine = create_engine('postgres://[EMAIL PROTECTED]/postgres',
                       server_side_cursors=True, encoding='utf-8')
conn = engine.connect()
trans = conn.begin()
print "have engine, connection, transaction after about %.4f seconds" % (time() - stime)

stime = time()
rs = conn.execute(text("select * from generate_series(1,1000) s0, generate_series(1,1) s1"))
print "have resultset after about %.4f seconds" % (time() - stime)

count = 0
stime = time()
for r in rs:
    count += 1
print "counted %s rows after about %.4f seconds" % (count, time() - stime)

stime = time()
rs.close()
print "closed resultset after about %.4f seconds" % (time() - stime)

stime = time()
trans.commit()
print "commited after about %.4f seconds" % (time() - stime)

stime = time()
conn.close()
print "closed connection after about %.4f seconds" % (time() - stime)

-- You received this message because you are subscribed to the Google Groups sqlalchemy group. To post to this group, send email to sqlalchemy@googlegroups.com. To unsubscribe from this group, send email to [EMAIL PROTECTED]. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en
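[Editor's note: in later SQLAlchemy versions, server-side cursor behavior is requested per connection or per statement with the `stream_results=True` execution option (psycopg2's `server_side_cursors` engine flag still exists for that dialect). A sketch, using SQLite only so it runs anywhere -- the option is accepted there but only takes real effect on drivers with server-side cursor support such as psycopg2:]

```python
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(sa.text("CREATE TABLE t (n INTEGER)"))
    conn.execute(sa.text("INSERT INTO t (n) VALUES (:n)"),
                 [{"n": i} for i in range(1000)])

with engine.connect() as conn:
    # stream_results asks the driver not to buffer the whole result set.
    result = conn.execution_options(stream_results=True).execute(
        sa.text("SELECT n FROM t"))
    total = 0
    # partitions() fetches rows in chunks rather than all at once.
    for chunk in result.partitions(100):
        total += len(chunk)
```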
[sqlalchemy] transactional sessions not transactional?
The following test case of mine fails on PG 8.3 and SA 0.4.3. Basically: create two sessions, make some changes in the first, and observe they are visible before commit/rollback in the second (and via connectionless execution directly on the engine), but become invisible after rollback. The first two print statements both show a row returned (that should only be visible from s0), but after the rollback the print statements show there are no values. It's almost like a threadlocal strategy is being used when it was never configured. Ideas/thoughts/comments?

#!/usr/bin/python
from sqlalchemy.sql import text
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine('postgres://[EMAIL PROTECTED]/testsatransaction')
new_session = sessionmaker(bind=engine, transactional=True)

engine.execute(text("drop table if exists foo"))
engine.execute(text("create table foo(c1 int)"))

s0 = new_session()
s1 = new_session()

s0.execute(text("insert into foo values(1)"))
(one,) = s0.execute(text("select * from foo")).fetchone()
assert one == 1

print engine.execute(text("select * from foo")).fetchone()
print s1.execute(text("select * from foo")).fetchone()

s0.rollback()

print engine.execute(text("select * from foo")).fetchone()
print s1.execute(text("select * from foo")).fetchone()
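[Editor's note: in later SQLAlchemy versions sessions are transactional by default (the 0.4-era `transactional=True` flag is gone), and the expected behavior within one session can be sketched as follows -- uncommitted work is visible inside the session's transaction and discarded by rollback. Demonstrating cross-session visibility faithfully needs a real multi-connection database like the Postgres setup above; this minimal SQLite sketch only shows the single-session half:]

```python
import sqlalchemy as sa
from sqlalchemy import orm

engine = sa.create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(sa.text("CREATE TABLE foo (c1 INTEGER)"))

Session = orm.sessionmaker(bind=engine)

session = Session()
session.execute(sa.text("INSERT INTO foo VALUES (1)"))
# Inside the session's (implicitly begun) transaction the row is visible...
assert session.execute(sa.text("SELECT count(*) FROM foo")).scalar() == 1
session.rollback()
# ...and after rollback it is gone again.
assert session.execute(sa.text("SELECT count(*) FROM foo")).scalar() == 0
session.close()
```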
[sqlalchemy] SQLAlchemy, Postgres and ENum?
I didn't see anything in the docs, and Google wasn't much help in this case, so I'm guessing that SA doesn't support an ENUM type with PG? If not, are there standard/best practices for working around it in SA?
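[Editor's note: at the time of this post (SA 0.4) there was no native ENUM type; modern SQLAlchemy ships `sa.Enum` plus a PG-native `sqlalchemy.dialects.postgresql.ENUM`. A sketch (column and type names invented; run on SQLite here, where Enum falls back to VARCHAR plus a CHECK constraint, while on PostgreSQL it emits CREATE TYPE ... AS ENUM):]

```python
import sqlalchemy as sa

meta = sa.MetaData()
person = sa.Table("person", meta,
    sa.Column("id", sa.Integer, primary_key=True),
    # On PostgreSQL this creates and uses a native enum type named "mood".
    sa.Column("mood", sa.Enum("happy", "sad", name="mood")))

engine = sa.create_engine("sqlite://")
meta.create_all(engine)

with engine.begin() as conn:
    conn.execute(person.insert(), [{"id": 1, "mood": "happy"}])
    assert conn.execute(sa.select(person.c.mood)).scalar() == "happy"
```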
[sqlalchemy] SA confusing timestamp with interval?
I'm using SA 0.4.3 and PostgreSQL 8.3.1. I'm new to SA, so perhaps I'm doing something wrong or just not understanding something, but I think SA is trying to treat my timestamps as intervals in some cases. If I run the equivalent (select c0 from t0 where c0 > current_timestamp - interval '1 hour') via psql, it works as expected. However, if I run:

#!/usr/bin/python
from datetime import datetime
from sqlalchemy.sql import text
from sqlalchemy import create_engine, MetaData

engine = create_engine('postgres://[EMAIL PROTECTED]/testdb', encoding='utf-8')
metadata = MetaData(bind=engine)

engine.execute(text("drop table if exists t0"))
engine.execute(text("create table t0(c0 timestamp(0) with time zone)"))
engine.execute(text("insert into t0 values(current_timestamp)"))
engine.execute(text("select c0 from t0 where c0 > :bindArg - interval '1 hour'"), bindArg=datetime.utcnow())

I get:

Traceback (most recent call last):
  File "./saerr.py", line 14, in <module>
    engine.execute(text("select c0 from t0 where c0 > :bindArg - interval '1 hour'"), bindArg=datetime.utcnow())
  File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1214, in execute
    return connection.execute(statement, *multiparams, **params)
  File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 846, in execute
    return Connection.executors[c](self, object, multiparams, params)
  File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 897, in execute_clauseelement
    return self._execute_compiled(elem.compile(dialect=self.dialect, column_keys=keys, inline=len(params) > 1), distilled_params=params)
  File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 909, in _execute_compiled
    self.__execute_raw(context)
  File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 918, in __execute_raw
    self._cursor_execute(context.cursor, context.statement, context.parameters[0], context=context)
  File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 962, in _cursor_execute
    self._handle_dbapi_exception(e, statement, parameters, cursor)
  File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 944, in _handle_dbapi_exception
    raise exceptions.DBAPIError.instance(statement, parameters, e, connection_invalidated=is_disconnect)
sqlalchemy.exceptions.ProgrammingError: (ProgrammingError) invalid input syntax for type interval: "2008-04-18T15:37:02.235955"
 select c0 from t0 where c0 > %(bindArg)s - interval '1 hour' {'bindArg': datetime.datetime(2008, 4, 18, 15, 37, 2, 235955)}
[sqlalchemy] Re: SA confusing timestamp with interval?
I get a similar result if I use psycopg2 directly:

#!/usr/bin/python
import psycopg2
from datetime import datetime

conn = psycopg2.connect('''dbname=testdb user=postgres host=localhost''')
cur = conn.cursor()
cur.execute("drop table if exists t0")
cur.execute("create table t0(c0 timestamp(0) with time zone)")
cur.execute("insert into t0 values(current_timestamp)")
cur.execute("select c0 from t0 where c0 > %(bindArg)s - interval '1 hour'", {'bindArg': datetime.utcnow()})

On Fri, Apr 18, 2008 at 6:26 PM, Michael Bayer [EMAIL PROTECTED] wrote: On Apr 18, 2008, at 4:42 PM, Matthew Dennis wrote: I'm using SA 0.4.3 and PostgreSQL 8.3.1. I'm new to SA, so perhaps I'm doing something wrong or just not understanding something, but I think SA is trying to treat my timestamps as intervals in some cases. If I run the equivalent (select c0 from t0 where c0 > current_timestamp - interval '1 hour') via psql, it works as expected. However, if I run:

#!/usr/bin/python
from datetime import datetime
from sqlalchemy.sql import text
from sqlalchemy import create_engine, MetaData

engine = create_engine('postgres://[EMAIL PROTECTED]/testdb', encoding='utf-8')
metadata = MetaData(bind=engine)

engine.execute(text("drop table if exists t0"))
engine.execute(text("create table t0(c0 timestamp(0) with time zone)"))
engine.execute(text("insert into t0 values(current_timestamp)"))
engine.execute(text("select c0 from t0 where c0 > :bindArg - interval '1 hour'"), bindArg=datetime.utcnow())

[Michael:] the text() above sends through the bind argument to psycopg2 directly, which knows how to handle datetime objects. What happens if you test with raw psycopg2?
[sqlalchemy] Re: SA confusing timestamp with interval?
I posted it on the psycopg list at http://lists.initd.org/pipermail/psycopg/2008-April/006026.html, but it mangled my link to this discussion (by eating a space after the URL and appending the first word of the next sentence).

On Fri, Apr 18, 2008 at 9:34 PM, Michael Bayer [EMAIL PROTECTED] wrote: On Apr 18, 2008, at 10:20 PM, Matthew Dennis wrote: I get a similar result if I use psycopg2 directly:

#!/usr/bin/python
import psycopg2
from datetime import datetime

conn = psycopg2.connect('''dbname=testdb user=postgres host=localhost''')
cur = conn.cursor()
cur.execute("drop table if exists t0")
cur.execute("create table t0(c0 timestamp(0) with time zone)")
cur.execute("insert into t0 values(current_timestamp)")
cur.execute("select c0 from t0 where c0 > %(bindArg)s - interval '1 hour'", {'bindArg': datetime.utcnow()})

[Michael:] great. let's let them know on the psycopg2 list.
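[Editor's note: assuming the root cause is that the driver sends the datetime as an untyped literal, which Postgres then resolves to interval while parsing "x - interval", the usual workaround is an explicit cast -- in raw SQL, something like `where c0 > CAST(:bindArg AS timestamptz) - interval '1 hour'`. A compile-only sketch of the same idea with SQLAlchemy expressions (no database needed):]

```python
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql

# An explicit cast pins the parameter's type, so the server cannot
# mis-resolve the untyped placeholder while parsing the subtraction.
expr = (
    sa.cast(sa.bindparam("bindArg"), sa.TIMESTAMP(timezone=True))
    - sa.literal_column("interval '1 hour'")
)

sql = str(expr.compile(dialect=postgresql.dialect()))
# Renders roughly: CAST(%(bindArg)s AS TIMESTAMP WITH TIME ZONE) - interval '1 hour'
assert "CAST" in sql and "interval '1 hour'" in sql
```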
[sqlalchemy] Clause Visitor changes causing issues.
While using 0.3.5, I created a ClauseVisitor to see if a particular table was included in a query that I dynamically build. I've been trying to upgrade to 0.3.6 (or trunk), but that particular section is not working the same way. The docs say that by default all the visit methods are still called, but I placed logging statements in my visit_table(self, table) method and it isn't being called. Do I need to specify something additional? Is there a ClauseVisitor example somewhere? Thanks, Dennis
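[Editor's note, for the archive: in later SQLAlchemy versions the easiest way to answer "is table X referenced in this query?" is the `find_tables()` helper in `sqlalchemy.sql.util`, which is built on the same visitor/traversal machinery ClauseVisitor exposed. A sketch with invented table names:]

```python
import sqlalchemy as sa
from sqlalchemy.sql.util import find_tables

meta = sa.MetaData()
a = sa.Table("a", meta, sa.Column("id", sa.Integer, primary_key=True))
b = sa.Table("b", meta, sa.Column("id", sa.Integer, primary_key=True))

stmt = sa.select(a.c.id).where(a.c.id == b.c.id)

# find_tables walks the clause tree and returns every Table it encounters;
# check_columns=True also follows tables referenced only through columns.
names = {t.name for t in find_tables(stmt, check_columns=True)}
assert names == {"a", "b"}
```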
[sqlalchemy] Re: Deep Eagerload
nope, options(eagerload('a'), eagerload('a.b')) works just fine; have tested it here to verify

You're quite correct... Sorry for the noise. I had tried it a while back and it didn't work yet. But between now and then, it has magically been implemented by you! Thanks, Dennis
[sqlalchemy] Deep Eagerload
I have a class that has a lazy-loaded option. This class is a parent of another table that I'd like to select from:

a -> lazy b
c -> lazy a

I want to eagerload both a and b, like this: c -> a -> b. Is there a way to specify that? query(c).options(eagerload('a'), eagerload('a.b')) seems logical. Thoughts? -Dennis
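[Editor's note: in later SQLAlchemy versions eagerload() became joinedload(), and multi-level eager loading is spelled by chaining loader options. A sketch of the c -> a -> b case (modern 1.4+ API; model names follow the thread, everything else is invented):]

```python
import sqlalchemy as sa
from sqlalchemy import orm

Base = orm.declarative_base()

class B(Base):
    __tablename__ = "b"
    id = sa.Column(sa.Integer, primary_key=True)

class A(Base):
    __tablename__ = "a"
    id = sa.Column(sa.Integer, primary_key=True)
    b_id = sa.Column(sa.ForeignKey("b.id"))
    b = orm.relationship(B, lazy="select")   # lazy by default

class C(Base):
    __tablename__ = "c"
    id = sa.Column(sa.Integer, primary_key=True)
    a_id = sa.Column(sa.ForeignKey("a.id"))
    a = orm.relationship(A, lazy="select")

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)

with orm.Session(engine) as session:
    session.add(C(id=1, a=A(id=1, b=B(id=1))))
    session.commit()

with orm.Session(engine) as session:
    # Chain the loader options: load C, its A, and that A's B in one query.
    c = session.execute(
        sa.select(C).options(orm.joinedload(C.a).joinedload(A.b))
    ).scalar_one()

# Both levels were loaded eagerly, so they remain accessible even detached.
assert c.a.b.id == 1
```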
[sqlalchemy] Re: Deep Eagerload
On Mar 9, 2:20 pm, Michael Bayer [EMAIL PROTECTED] wrote: in theory you'd say options(eagerload('a.b')). the separate eagerload('a') isn't needed, since it's sort of impossible to eagerload b without eager loading a.

But I'm assuming there isn't a way to do it currently. I guess I could create a mapper for a that doesn't lazyload b and use that mapper instead. -Dennis
[sqlalchemy] Re: Dynamically building a query with a join
Actually, I'm still having a problem because the primary object is already a join, and the next object that I append gets listed twice. Example:

sel = select([a, b], from_obj=[a.outerjoin(b)])
sel.append_from(a.outerjoin(c, somecriteriaforthejoin))
str(sel)
# SELECT ..., ..., ... FROM a LEFT OUTER JOIN b ON ..., a LEFT OUTER JOIN c ON ...
sel.execute()
# SQLError: (ProgrammingError) table name "a" specified more than once

What I really need is a.outerjoin(b).outerjoin(c). That is what I can't seem to find a way to dynamically generate. Thanks, -Dennis

On Mar 5, 3:16 pm, Michael Bayer [EMAIL PROTECTED] wrote: there is append_from(); joins know how to find their components that are already in the selectable and replace them.

On Mar 5, 2007, at 4:38 PM, Dennis wrote: I'm playing around with dynamically building a query. I can append columns, where clauses, from objects, etc., but what about the case where I want to modify the from obj with a join? For example, I can do this:

sel = select()
sel.append_from(a)
sel.append_from(b)
sel.append_whereclause(a.c.id == b.c.id)

That won't work for an outerjoin, though. I have a query that works like this now:

select([...], from_obj=[a.outerjoin(b)])

but I can't figure out a way to add the outerjoin dynamically. I looked at clause visitors, but there doesn't seem to be a way to actually modify an existing join. Any thoughts? Thanks, Dennis
[sqlalchemy] Re: Dynamically building a query with a join
I did find a slight hack:

query.append_from(query.froms._list[0].outerjoin( etc ... ))

-Dennis

On Mar 5, 3:54 pm, Dennis [EMAIL PROTECTED] wrote: Actually, I'm still having a problem because the primary object is already a join, and the next object that I append gets listed twice. Example:

sel = select([a, b], from_obj=[a.outerjoin(b)])
sel.append_from(a.outerjoin(c, somecriteriaforthejoin))
str(sel)
# SELECT ..., ..., ... FROM a LEFT OUTER JOIN b ON ..., a LEFT OUTER JOIN c ON ...
sel.execute()
# SQLError: (ProgrammingError) table name "a" specified more than once

What I really need is a.outerjoin(b).outerjoin(c). That is what I can't seem to find a way to dynamically generate. Thanks, -Dennis

On Mar 5, 3:16 pm, Michael Bayer [EMAIL PROTECTED] wrote: there is append_from(); joins know how to find their components that are already in the selectable and replace them.

On Mar 5, 2007, at 4:38 PM, Dennis wrote: I'm playing around with dynamically building a query. I can append columns, where clauses, from objects, etc., but what about the case where I want to modify the from obj with a join? For example, I can do this:

sel = select()
sel.append_from(a)
sel.append_from(b)
sel.append_whereclause(a.c.id == b.c.id)

That won't work for an outerjoin, though. I have a query that works like this now:

select([...], from_obj=[a.outerjoin(b)])

but I can't figure out a way to add the outerjoin dynamically. I looked at clause visitors, but there doesn't seem to be a way to actually modify an existing join. Any thoughts? Thanks, Dennis
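[Editor's note: the a.outerjoin(b).outerjoin(c) chain the thread is after can be built incrementally by keeping the growing Join in a variable and handing the finished join to select() at the end. A sketch with invented tables and join criteria (modern select()/select_from() spelling):]

```python
import sqlalchemy as sa

meta = sa.MetaData()
a = sa.Table("a", meta, sa.Column("id", sa.Integer, primary_key=True))
b = sa.Table("b", meta, sa.Column("id", sa.Integer, primary_key=True),
             sa.Column("a_id", sa.ForeignKey("a.id")))
c = sa.Table("c", meta, sa.Column("id", sa.Integer, primary_key=True),
             sa.Column("a_id", sa.ForeignKey("a.id")))

# Build the FROM clause incrementally instead of appending each join
# to the select (which repeats table "a").
joined = a
for extra in (b, c):
    joined = joined.outerjoin(extra, a.c.id == extra.c.a_id)

stmt = sa.select(a.c.id).select_from(joined)
sql = str(stmt)
# Table "a" appears once, followed by the two chained outer joins.
assert sql.count("LEFT OUTER JOIN") == 2
```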
[sqlalchemy] Re: contains_eager is somehow not loading all the instances.
Thanks!

On Mar 1, 1:31 pm, Michael Bayer [EMAIL PROTECTED] wrote: yeah, it's a bug; it's all fixed (several issues with text columns) in 2368.
[sqlalchemy] Re: contains_eager is somehow not loading all the instances.
That's the problem: if I have use_labels=True and I include "as somecolumn" in the text clause, I still get a generated label. The resulting SQL = "... as somecolumn as somelonglabelthatisashortenedversionofthetext". That creates an SQL error.

On Feb 27, 3:51 pm, Michael Bayer [EMAIL PROTECTED] wrote: yeah, with text just say "as sometable_somecolumn".

On Feb 27, 2007, at 4:16 PM, Dennis wrote: Well, columnname isn't a simple column in the case of a case statement... the label is turning out like this: casewhenhas_testtrueandscoreisnullandgender1then1whenscoreisnullthen2elsescoreend. I haven't found a way to manually assign a label to a text clause yet, but before I tried use_labels=True, I had appended "as score" to the case clause and that worked.

On Feb 27, 2:44 pm, Michael Bayer [EMAIL PROTECTED] wrote: the label is always tablename_columnname. you'd have to show me where you need that to be programmatic.

On Feb 27, 2007, at 2:29 PM, Dennis wrote: Thanks for taking a peek. Interesting, it does indeed fix the issue to use labels. Now I have another issue, though. I have a case statement in my select which I was specifying like this:

select(['case when yada yada yada end as something'] ...

If use_labels=True, then the query breaks because the generated SQL has two "as label" parts to it. If I delete the "as something" part, I then don't know programmatically what the label is, though. I need to know that because I order by it. Isn't there a way to find out a column label from a query?
-Dennis

On Feb 27, 12:47 pm, Michael Bayer [EMAIL PROTECTED] wrote: if you run it with full-blown logging on, i.e.:

import logging
logging.basicConfig()
logging.getLogger('sqlalchemy.engine').setLevel(logging.DEBUG)
logging.getLogger('sqlalchemy.orm').setLevel(logging.DEBUG)

the issue can be detected when you look at the mapper creating instance keys for T (although this is clearly not a novice issue):

DEBUG:sqlalchemy.orm.mapper.Mapper:(T|ts) _instance(): identity key (<class '__main__.T'>, (1,), None) not in session[]
DEBUG:sqlalchemy.orm.mapper.Mapper:(T|ts) _instance(): identity key (<class '__main__.T'>, (None,), None) not in session[]
DEBUG:sqlalchemy.orm.mapper.Mapper:(T|ts) _instance(): identity key (<class '__main__.T'>, (3,), None) not in session[]
DEBUG:sqlalchemy.orm.mapper.Mapper:(T|ts) _instance(): identity key (<class '__main__.T'>, (None,), None) not in session[]
DEBUG:sqlalchemy.orm.mapper.Mapper:(T|ts) _instance(): identity key (<class '__main__.T'>, (5,), None) not in session[]

so it's not getting an identity key for every other row, which indicates it's looking at the wrong column in the result set. (on each of those Nones, it's going to skip that entity.) looking at the query:

SELECT ts.id, ts.dat, other.ts_id, other.other_dat FROM ts LEFT OUTER JOIN other ON ts.id = other.ts_id

we can see that "other" has a column called ts_id, which looks exactly like the label that would be made for "id" in table "ts". so that's what's happening here. so throwing a use_labels=True onto the query (or changing the name of ts_id) produces the query:

SELECT ts.id AS ts_id, ts.dat AS ts_dat, other.ts_id AS other_ts_id, other.other_dat AS other_other_dat FROM ts LEFT OUTER JOIN other ON ts.id = other.ts_id

that gives the correct results. not sure what SA can really do here to make this kind of issue easier to catch, since the ResultProxy itself is where it's looking for col label, col name, etc. the generated labels are generally more accurate.
i tried playing around with ResultProxy to make it detect an ambiguity of this nature, but i think it might not be possible unless more flags/switches get passed from the statement to the result (which i'd rather not do, since it further marginalizes straight textual queries), since if the select statement uses table/col labels for each column, there still could be conflicts which don't matter, such as the column names the normal eager loader generates:

'ts_id', 'ts_dat', 'other_4966_ts_id', 'other_4966_other_dat'

that result is from column ts_id attached to an Alias "other_4966". if we said "don't allow any Column to be found twice in the row", then that breaks (since it will match other_4966_ts_id on its _label, ts_id on its name).

On Feb 27, 2007, at 12:09 PM, Dennis Muhlestein wrote:

from sqlalchemy import *
e = create_engine('sqlite://memory')
ts = Table('ts', e,
    Column('id', Integer, primary_key=True),
    Column('dat', Integer, nullable=False))
ts.create()
to_oneornone = Table('other', e,
    Column('ts_id', Integer, ForeignKey('ts.id'), primary_key=True, nullable=False),
    Column('other_dat', Integer, nullable=False))
to_oneornone.create()
class T(object): pass
T.mapper = mapper(T, ts)
class To(object): pass
To.mapper = mapper(To, to_oneornone, properties={'ts
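[Editor's note: the "how do I know the label programmatically?" problem in this thread is usually solved by giving the raw-SQL expression an explicit label object instead of embedding "as something" in the text; the Label can then be reused in order_by. A compile-only sketch (column names invented, expression simplified from the thread's case statement):]

```python
import sqlalchemy as sa

meta = sa.MetaData()
t = sa.Table("t", meta,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("score", sa.Integer))

# Attach an explicit, programmatically known label to the textual expression
# rather than writing "as something" inside the text itself.
score_expr = sa.literal_column(
    "case when score is null then 2 else score end"
).label("score_rank")

stmt = sa.select(t.c.id, score_expr).order_by(score_expr)
sql = str(stmt)
# The compiled SQL carries exactly one label, and we know its name.
assert "AS score_rank" in sql
```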
[sqlalchemy] Re: contains_eager is somehow not loading all the instances.
Thanks for taking a peek. Interesting, it does indeed fix the issue to use labels. Now I have another issue though: I have a case statement in my select which I was specifying like this:

select(['case when yada yada yada end as something'], ...)

If use_labels=True, then the query breaks because the generated SQL has two "AS label" parts to it. If I delete the "as something" part, then I don't know programmatically what the label is, though. I need to know that because I order by it. Isn't there a way to find out a column label from a query? -Dennis

On Feb 27, 12:47 pm, Michael Bayer [EMAIL PROTECTED] wrote: if you run it with full blown logging on, i.e.:

import logging
logging.basicConfig()
logging.getLogger('sqlalchemy.engine').setLevel(logging.DEBUG)
logging.getLogger('sqlalchemy.orm').setLevel(logging.DEBUG)

the issue can be detected when you look at the mapper creating instance keys for T (although this is clearly not a novice issue):

DEBUG:sqlalchemy.orm.mapper.Mapper:(T|ts) _instance(): identity key (class '__main__.T', (1,), None) not in session[]
DEBUG:sqlalchemy.orm.mapper.Mapper:(T|ts) _instance(): identity key (class '__main__.T', (None,), None) not in session[]
DEBUG:sqlalchemy.orm.mapper.Mapper:(T|ts) _instance(): identity key (class '__main__.T', (3,), None) not in session[]
DEBUG:sqlalchemy.orm.mapper.Mapper:(T|ts) _instance(): identity key (class '__main__.T', (None,), None) not in session[]
DEBUG:sqlalchemy.orm.mapper.Mapper:(T|ts) _instance(): identity key (class '__main__.T', (5,), None) not in session[]

so its not getting an identity key for every other row, which indicates its looking at the wrong column in the result set. (on each of those Nones, its going to skip that entity) looking at the query:

SELECT ts.id, ts.dat, other.ts_id, other.other_dat FROM ts LEFT OUTER JOIN other ON ts.id = other.ts_id

we can see that other has a column called ts_id, which looks exactly like the label that would be made for id in table ts.
so thats whats happening here. so throwing on a use_labels=True to the query (or changing the name of ts_id) produces the query:

SELECT ts.id AS ts_id, ts.dat AS ts_dat, other.ts_id AS other_ts_id, other.other_dat AS other_other_dat FROM ts LEFT OUTER JOIN other ON ts.id = other.ts_id

that gives the correct results. not sure what SA can really do here to make this kind of issue easier to catch, since the resultproxy itself is where its looking for col label, col name, etc.; the generated labels are generally more accurate. i tried playing around with ResultProxy to make it detect an ambiguity of this nature, but i think it might not be possible unless more flags/switches get passed from the statement to the result (which id rather not do since it further marginalizes straight textual queries), since if the select statement uses table/col labels for each column, there still could be conflicts which dont matter, such as the column names the normal eager loader generates: 'ts_id', 'ts_dat', 'other_4966_ts_id', 'other_4966_other_dat'. that result is from column ts_id attached to an Alias other_4966. if we said dont allow any Column to be found twice in the row, then that breaks (since it will match other_4966_ts_id on its _label, ts_id on its name).
On Feb 27, 2007, at 12:09 PM, Dennis Muhlestein wrote:

from sqlalchemy import *

e = create_engine('sqlite://')
ts = Table('ts', e,
    Column('id', Integer, primary_key=True),
    Column('dat', Integer, nullable=False))
ts.create()
to_oneornone = Table('other', e,
    Column('ts_id', Integer, ForeignKey('ts.id'), primary_key=True, nullable=False),
    Column('other_dat', Integer, nullable=False))
to_oneornone.create()

class T(object): pass
T.mapper = mapper(T, ts)

class To(object): pass
To.mapper = mapper(To, to_oneornone,
    properties={'ts': relation(T, backref=backref('other', uselist=False))})

s = create_session()
for x in range(10):
    t = T()
    t.dat = x
    s.save(t)
    if x % 2 == 0:  # test every other T has an optional data
        o = To()
        o.other_dat = x
        t.other = o
        s.save(t)
s.flush()
s.clear()

somedata = s.query(T).options(eagerload('other')).select()
print 'Number results should be 10: ', len(somedata)
s.clear()
sel = select([ts, to_oneornone], from_obj=[ts.outerjoin(to_oneornone)])
print "Raw select also is 10: ", len(sel.execute().fetchall())
print "Instances should also be 10: ", len(s.query(T).options(contains_eager('other')).instances(sel.execute()))

--~--~-~--~~~---~--~~ You received this message because you are subscribed to the Google Groups sqlalchemy group. To post to this group, send email to sqlalchemy@googlegroups.com To unsubscribe from this group, send email to [EMAIL PROTECTED] For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en -~--~~~~--~~--~--~---
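The name collision Michael describes (other.ts_id looking exactly like the generated label for ts.id) can be reproduced with current SQLAlchemy, where use_labels=True has become a label style applied to the select. This is a minimal sketch assuming SQLAlchemy 1.4 or later, with the table names taken from the thread:

```python
from sqlalchemy import (
    LABEL_STYLE_TABLENAME_PLUS_COL,
    Column, ForeignKey, Integer, MetaData, Table, select,
)

metadata = MetaData()
ts = Table(
    "ts", metadata,
    Column("id", Integer, primary_key=True),
    Column("dat", Integer, nullable=False),
)
other = Table(
    "other", metadata,
    Column("ts_id", Integer, ForeignKey("ts.id"), primary_key=True),
    Column("other_dat", Integer, nullable=False),
)

# plain select: the "other.ts_id" column name collides with the
# default label that would be generated for ts.id
plain = select(ts, other).select_from(ts.outerjoin(other))

# table-qualified labels (the modern use_labels=True) remove the ambiguity
labeled = plain.set_label_style(LABEL_STYLE_TABLENAME_PLUS_COL)
print(labeled)
```

The compiled statement labels every column as tablename_colname, so the two ts_id columns can no longer be confused in the result set.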
[sqlalchemy] Re: contains_eager is somehow not loading all the instances.
Well, columnname isn't a simple column in the case of a case statement... the label is turning out like this:

casewhenhas_testtrueandscoreisnullandgender1then1whenscoreisnullthen2elsescoreend

I haven't found a way to manually assign a label to a text clause yet, but before I tried use_labels=True, I had appended "as score" to the case clause and that worked.

On Feb 27, 2:44 pm, Michael Bayer [EMAIL PROTECTED] wrote: the label is always tablename_columnname. youd have to show me where you need that to be programmatic.

On Feb 27, 2007, at 2:29 PM, Dennis wrote: Thanks for taking a peek. Interesting, it does indeed fix the issue to use labels. Now I have another issue though: I have a case statement in my select which I was specifying like this: select(['case when yada yada yada end as something'], ...) If use_labels=True, then the query breaks because the generated SQL has two "AS label" parts to it. If I delete the "as something" part, then I don't know programmatically what the label is, though. I need to know that because I order by it. Isn't there a way to find out a column label from a query?
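On the label question: rather than a textual "case ... end as something" string, the case() construct can carry an explicit .label(), which is a Python object you can hand straight to order_by, so the label name is always known programmatically. A sketch under modern SQLAlchemy; the table and columns are invented for illustration, loosely modeled on the garbled label above:

```python
from sqlalchemy import Column, Integer, MetaData, Table, case, select

metadata = MetaData()
results = Table(
    "results", metadata,
    Column("id", Integer, primary_key=True),
    Column("score", Integer),
)

# the label name is chosen in Python, so it never has to be guessed
# back out of the generated SQL
score_rank = case(
    (results.c.score.is_(None), 2),  # NULL scores sort into their own bucket
    else_=results.c.score,
).label("score_rank")

stmt = select(results.c.id, score_rank).order_by(score_rank)
print(stmt)
```

Because score_rank appears in the columns clause, the ORDER BY renders as the label name rather than repeating the whole CASE expression.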
[sqlalchemy] Re: Full Text Search using PostgreSQL and tsearch2
I have a site in production right now that is using tsearch2. What I did to accommodate the results with SA was to simply not map the tsearch2 column to the SA object. I have a view that creates the tsvector objects based on the source table. (I actually created a materialized view and indexed it, if you want to google for materialized views and PG.) Next, I simply join the SA object with a custom query when I do the search. Example:

tsearch = engine.text('my tsearch query that returns the ids of the objs')

If you want objects returned, you can use the mapper.instances function:

myobj = MyObj.mapper.instances(tsearch.execute())

-Dennis
[sqlalchemy] MapperExtension with options
I wanted to experiment with using a MapperExtension to add a couple of non-mapped properties to a class at load time:

class MyExt(MapperExtension):
    def populate_instance(self, mapper, selectioncontext, row, instance, identitykey, isnew):
        print "Hello World"
        # do some work here
        return EXT_PASS

obj = query(MyObj).options(extension(MyExt)).select()

MyExt isn't printing anything though. Am I missing something? Thanks, Dennis
[sqlalchemy] Re: about cascade rule
On Feb 1, 5:24 am, Manlio Perillo [EMAIL PROTECTED] wrote: Hi. I still do not fully understand cascade rules, however I want to be sure the behaviour below is a feature and not a bug.

Did you intend for the B object to be able to not exist? I modified your code to create a b object as well as obj, and the transaction worked:

obj = A('x')
bobj = B('hi')  # I modified your constructor to not take the id as well
bobj.a = obj
sess.save(obj)

I'm not sure if it is a bug or a feature either, but perhaps a different cascade rule will alter the behavior. -Dennis
[sqlalchemy] Re: MapperExtension with options
On Feb 1, 12:58 pm, Michael Bayer [EMAIL PROTECTED] wrote: ah the mapper isnt taking into account the extensions that are local to the query during the _instance() phase. we can add a ticket for that. http://www.sqlalchemy.org/trac/ticket/454
[sqlalchemy] sorting mapped class by joined subquery result (error)
I have a mapped class... let's call it Data, with a few properties: Data.id (primary key), Data.a, Data.b, Data.c. I want to query a few of these objects out, but they need to be sorted by some arbitrary data:

arbitrary_data = select([Data.c.id, OtherClass.c.somedata], and_(...)).alias('somedata')

ok... now query the data:

dat = Data.select(and_(...),
    from_obj=[datas.join(arbitrary_data, arbitrary_data.c.id == datas.c.id)],
    order_by=[asc(arbitrary_data.c.somedata)])

Now, the generated SQL is in this form (with query.py deciding it needs to nest the query):

select datas.id as datas_id, datas.a as datas_a, etc.
from (select datas.id as datas_id, arbitrary_data.somedata as arbitrary_data_somedata
      from datas join (my arbitrary_data table query with where clause) as arbitrary_data
      where ... order by arbitrary_data.somedata) as tbl_row_count
join datas on ...
order by arbitrary_data.somedata

The last line is the problem. The from clause renames the column to arbitrary_data_somedata, but the order by clause still uses the inner form with a ".". The error: missing FROM-clause entry for table arbitrary_data (because that table only exists in the inner aliased subquery). Anyhow, if I rename the sort on the outer query to use the underscore form manually, the query returns the correct results in the correct order. I believe the faulty behavior starts at line 455 in orm/query.py (trunk). I'm not sure if it is the Aliasizer that is not converting the column. Anyhow, I need this to work so I don't have to write my great big huge dynamic query out by hand, so I'll be digging into the sqlalchemy code for a bit. Is there a quick easy fix though? Thanks -Dennis
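In current SQLAlchemy the equivalent query avoids the relabeling problem: give the arbitrary-data select a .subquery() alias and order by that alias's own column object, and the compiler keeps the name consistent between the join and the ORDER BY. A sketch with invented datas/scores tables standing in for the thread's schema:

```python
from sqlalchemy import Column, ForeignKey, Integer, MetaData, Table, select

metadata = MetaData()
datas = Table(
    "datas", metadata,
    Column("id", Integer, primary_key=True),
    Column("a", Integer),
)
scores = Table(
    "scores", metadata,
    Column("data_id", Integer, ForeignKey("datas.id")),
    Column("somedata", Integer),
)

# the "arbitrary data" as a named subquery
arbitrary_data = select(scores.c.data_id, scores.c.somedata).subquery("arbitrary_data")

# join against the subquery and order by its column object; the rendered
# ORDER BY references arbitrary_data.somedata, never a renamed outer label
stmt = (
    select(datas)
    .join(arbitrary_data, arbitrary_data.c.data_id == datas.c.id)
    .order_by(arbitrary_data.c.somedata.asc())
)
print(stmt)
```

Because the ORDER BY expression is the subquery's column object rather than a string, renaming in an enclosing SELECT cannot desynchronize it.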
[sqlalchemy] Re: sorting mapped class by joined subquery result (error)
Sorry if this posts twice... I didn't get a "Message has been sent" page last time. I posted a bug with a test case here: http://www.sqlalchemy.org/trac/ticket/449 Thanks -Dennis
[sqlalchemy] Re: sorting mapped class by joined subquery result (error)
I've created a bug with an attached test: http://www.sqlalchemy.org/trac/ticket/449 Thanks! -Dennis
[sqlalchemy] Re: sorting mapped class by joined subquery result (error)
Just a quick note: I tried out your suggestion to pass in a select statement. That does indeed work. There is an issue though: I tried using contains_eager('myproperty') as noted in the docs, and it only worked in combination with eagerload('myproperty'). I think the reason is that in the docs, the mapper is defined with lazy=False. That was a little confusing, but perhaps it is supposed to be that way. I would have thought that contains_eager implies eagerload. -Dennis

On Jan 31, 12:27 pm, Michael Bayer [EMAIL PROTECTED] wrote: quick easy fix is to use result set mapping instead, or feed a full select() statement into query.select(), which will skip the whole compilation step. i would like a small test case attached for this one so i can play with it, though. for example i dont see why its deciding to use the nesting feature of the compilation in the first place.
[sqlalchemy] lazyload issue with new object creation
I decided to set a group of columns to be deferred. I have a situation in my code where I set the properties of an object in a loop. After deferring the column group, a newly created object can no longer set properties dynamically if I set the primary key columns first. Example:

m = MyObject()
m.primary_key_col = 1
m.__setattr__(name, val)

  File "build/bdist.linux-i686/egg/sqlalchemy/orm/attributes.py", line 42, in __set__
  File "build/bdist.linux-i686/egg/sqlalchemy/orm/attributes.py", line 232, in set
  File "build/bdist.linux-i686/egg/sqlalchemy/orm/attributes.py", line 205, in get
  File "build/bdist.linux-i686/egg/sqlalchemy/orm/strategies.py", line 104, in lazyload
TypeError: unsubscriptable object

If I instead wait until after the attributes are set to assign the primary key columns, things work:

m = MyObject()
m.__setattr__(name, val)
m.primary_key_col = 1

Perhaps the lazyload code doesn't have a mechanism besides the primary key to know whether the properties need to be loaded. If it does, there might be a simple workaround to avoid the nuisance above. Thanks all -Dennis
[sqlalchemy] sqlsoup and transactions
I wanted to write a quick/dirty script to convert some data from one db to another, and I thought that sqlsoup would be the perfect candidate for the job. I put my new insertion calls in a function like this:

def convert(*args, **kw):
    newdb.sometable.insert(...)
    newdb.someothertable.insert(...)
    # etc

Then I call newdb.engine.transaction(convert). Nothing happens though at this point. If I call newdb.flush(), the data insertion is attempted, but either it isn't being run in a transaction or an incorrect order is aborting the transaction. I'm not sure why the transaction method isn't committing. The flush call fails because it attempts to insert the data in the wrong order and there are foreign key violations. What do you think, Jonathan? -Dennis
[sqlalchemy] many to many on another many to many
Hi, I have the use case of mapping a many-to-many relation onto an existing many-to-many relation. (Meaning I want to assign a connection table with 2 primary keys to a connection table with 3 primary keys.) I tried a lot of things but nothing seems to work. Basically, I have a Purchase Requisition object to which various (Cost Center ID / Budget ID) pairs can be assigned (i.e. a connection table). I map the connection tables to association objects and tried to use the AssociationProxy extension to map a list assignedBCCs onto the Purchase Requisition object, so I can (theoretically) directly assign BCC objects without going through the association, or so I thought. Read access works, and manipulations on the assignedBCCs list seem to work on the objects, but the changes are not written into the DB. Does anyone have experience with this kind of relationship? Here is the corresponding code. I am sorry, it is not a complete working test case, and the tables are only MySQL without the foreign keys (only the output of the dbdesigner...). Nevertheless, up to this point I have implemented working 1:n and n:1 relationships on these tables, so that shouldn't be the reason it is not working. Thank you for any help.
Dennis

-- tables --

CREATE TABLE `pr_PurchaseRequisition` (
  `pr_PurchaseRequisition_ID` INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  PRIMARY KEY(`pr_PurchaseRequisition_ID`)
) TYPE=InnoDB;

CREATE TABLE `pr_PurchaseRequisition_has_CELLS_budget_has_CELLS_costCenter` (
  `pr_PurchaseRequisition_ID` INTEGER UNSIGNED NOT NULL,
  `CELLS_costCenter_ID` INTEGER UNSIGNED NOT NULL,
  `CELLS_budget_ID` INTEGER UNSIGNED NOT NULL,
  `percentage` FLOAT NULL,
  PRIMARY KEY(`pr_PurchaseRequisition_ID`, `CELLS_costCenter_ID`, `CELLS_budget_ID`)
) TYPE=InnoDB;

CREATE TABLE `CELLS_budget_has_CELLS_costCenter` (
  `CELLS_budget_ID` INTEGER UNSIGNED NOT NULL,
  `CELLS_costCenter_ID` INTEGER UNSIGNED NOT NULL,
  PRIMARY KEY(`CELLS_budget_ID`, `CELLS_costCenter_ID`)
) TYPE=InnoDB;

CREATE TABLE `CELLS_budget` (
  `CELLS_budget_ID` INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  `budgetCode` VARCHAR(10) NULL,
  `name` VARCHAR(100) NULL,
  PRIMARY KEY(`CELLS_budget_ID`)
) TYPE=InnoDB;

CREATE TABLE `CELLS_costCenter` (
  `CELLS_costCenter_ID` INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  `CELLS_division_ID` INTEGER UNSIGNED NOT NULL,
  `name` VARCHAR(45) NULL,
  `budgetCode` VARCHAR(10) NULL,
  PRIMARY KEY(`CELLS_costCenter_ID`)
) TYPE=InnoDB;

CREATE TABLE `CELLS_division` (
  `CELLS_division_ID` INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  `name` VARCHAR(45) NULL,
  PRIMARY KEY(`CELLS_division_ID`)
) TYPE=InnoDB;

-- models --

class PRBudgetCostCenterAssociation(object):
    pass

class BudgetCostCenterAssociation(object):
    pass

def create_BCC(BCC):
    new = PRBudgetCostCenterAssociation()
    new.budgetCostCenter = BCC
    return new

class PurchaseRequisition(DomainRecord):
    implements(IPurchaseRequisition)
    assignedBCCs = AssociationProxy('purchaseRequisitionBudgetCostCenter',
                                    'budgetCostCenter', creator=create_BCC)

def create_budget_association(budget):
    ka = KeywordAssociation()
    ka.keyword = keyword
    return ka

class CELLSCostCenter(DomainRecord):
    implements(ICELLSCostCenterTable)
    assignedBudgets = AssociationProxy('associatedBudgets', 'budget',
                                       creator=create_budget_association)

class PRHasBudgetHasCostCenter(DomainRecord):
    implements(IPRHasBudgetHasCostCenterTable)

class CELLSBudget(DomainRecord):
    implements(ICELLSBudgetTable)

class CELLSBudgetHasCostCenter(DomainRecord):
    implements(ICELLSBudgetHasCostCenterTable)

-- mappers --

budget_mapper = bind_mapper(app_model.CELLSBudget, app_schema.CELLSBudgetTable)

cost_center_mapper = bind_mapper(app_model.CELLSCostCenter, app_schema.CELLSCostCenterTable,
    properties={
        'associatedBudgets': relation(app_model.BudgetCostCenterAssociation,
                                      lazy=False, cascade="save-update"),
        'division': relation(app_model.CELLSDivision, lazy=True),
    })

mapper(app_model.BudgetCostCenterAssociation, app_schema.CELLSBudgetHasCostCenterTable, primary_key
[sqlalchemy] Re: Performing a search
On Nov 6, 4:29 am, Alexandre CONRAD [EMAIL PROTECTED] wrote: Hello, what would be the best way to perform a search against columns of a table? I have the following code that works fine:

pattern = "%bla%"
client_list = self.query.select(or_(model.Client.c.name.like(pattern),
                                    model.Client.c.contact.like(pattern),
                                    model.Client.c.email.like(pattern)))

This generates the following SQL:

SELECT clients.name AS clients_name, clients.contact AS clients_contact, clients.email AS clients_email, clients.id_client AS clients_id_client
FROM clients
WHERE clients.name LIKE %s OR clients.contact LIKE %s OR clients.email LIKE %s
ORDER BY clients.name
['%bla%', '%bla%', '%bla%']

Is this the correct way to do it? Isn't there a way I could give a list of columns to search in rather than doing it like my example? Or have a short way to search for the given pattern in all columns?

I suppose you could write a function that provides the list of columns for you. Example:

class MyClass(object):
    def search(self, pattern):
        # pseudo code
        for column in ['a', 'b', 'c', 'd']:
            somequery.append_or_clause(column, pattern)
        return somequery  # or somequery.execute(), etc.

If you were to take that approach, though, the SQL that is generated would be the same as what you already came up with. The only other approach I can think of is to use your database's full text indexing feature (if applicable) and create an FTI on all of the columns you want searched. I'm currently doing that with Postgresql (tsearch2) for a project and it works quite well. -Dennis
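The "list of columns" idea can be written directly with or_() and a comprehension, so the columns become data rather than repeated code. A sketch using the table and column names from the quoted question:

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, or_, select

metadata = MetaData()
clients = Table(
    "clients", metadata,
    Column("id_client", Integer, primary_key=True),
    Column("name", String(100)),
    Column("contact", String(100)),
    Column("email", String(100)),
)

def like_any(pattern, columns):
    # OR together one LIKE test per column in the list
    return or_(*[col.like(pattern) for col in columns])

stmt = select(clients).where(
    like_any("%bla%", [clients.c.name, clients.c.contact, clients.c.email])
)
print(stmt)
```

The generated WHERE clause is the same OR-of-LIKEs as the hand-written version, which is why full text indexing remains the better option when the column list grows large.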