[sqlalchemy] Re: Cannot abort wxPython thread with SQLAlchemy
Thanks, Peter, for your answer.

On 11 June, 16:16, Peter Hansen [EMAIL PROTECTED] wrote:
> Aside from that, you don't have many options. What about changing the
> query so that it returns its results in increments, rather than all at
> once? If it's a long-running query but you can break it up that way,
> then the check-event-flag approach you're using would be able to work.
> -Peter

That's exactly what I am going to do. Many thanks for your help.

Dominique

--
You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To post to this group, send email to sqlalchemy@googlegroups.com. To unsubscribe from this group, send email to [EMAIL PROTECTED]. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en
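The incremental approach Peter describes can be sketched with plain Python threading. Here the chunked query is simulated by a generator (real code would fetch successive batches from the database, e.g. via LIMIT/OFFSET), and the worker checks an abort Event between chunks; all names are hypothetical, not from the original post.

```python
import threading

def fetch_in_chunks(rows, chunk_size):
    """Simulate a long-running query returning its results in increments."""
    for i in range(0, len(rows), chunk_size):
        yield rows[i:i + chunk_size]

def worker(rows, abort_event, results, chunk_size=2):
    """Consume the query chunk by chunk, checking the abort flag in between."""
    for chunk in fetch_in_chunks(rows, chunk_size):
        if abort_event.is_set():
            return  # bail out between increments instead of blocking on one big query
        results.extend(chunk)

abort = threading.Event()
out = []
t = threading.Thread(target=worker, args=(list(range(10)), abort, out))
t.start()
t.join()
print(len(out))  # 10 -- all rows arrive when the abort flag is never set
```

Because the worker only checks the flag between chunks, the thread can still take up to one chunk's worth of time to notice an abort; smaller chunks give faster response at the cost of more round trips.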
[sqlalchemy] Copy of a table with a different name
Hi. I want to have two Table definitions in one MetaData which are the same, except that the name of the second one has an SND_ prefix. To avoid duplicating the schema definition I looked at the Table.tometadata() source and created the following function:

    def _cloneToSND(table, metadata):
        return Table('SND_' + table.name, metadata,
                     *([c.copy() for c in table.columns] +
                       [c.copy() for c in table.constraints]))

But calling metadata.create_all() ends with an error:

    metadata.create_all(e)
      File "/usr/share/python2.5/site-packages/SQLAlchemy-0.4.6dev_r4841-py2.5.egg/sqlalchemy/schema.py", line 1582, in create_all
        bind.create(self, checkfirst=checkfirst, tables=tables)
      File "/usr/share/python2.5/site-packages/SQLAlchemy-0.4.6dev_r4841-py2.5.egg/sqlalchemy/engine/base.py", line 1139, in create
        self._run_visitor(self.dialect.schemagenerator, entity, connection=connection, **kwargs)
      File "/usr/share/python2.5/site-packages/SQLAlchemy-0.4.6dev_r4841-py2.5.egg/sqlalchemy/engine/base.py", line 1168, in _run_visitor
        visitorcallable(self.dialect, conn, **kwargs).traverse(element)
      File "/usr/share/python2.5/site-packages/SQLAlchemy-0.4.6dev_r4841-py2.5.egg/sqlalchemy/sql/visitors.py", line 75, in traverse
        return self._non_cloned_traversal(obj)
      File "/usr/share/python2.5/site-packages/SQLAlchemy-0.4.6dev_r4841-py2.5.egg/sqlalchemy/sql/visitors.py", line 134, in _non_cloned_traversal
        self.traverse_single(target)
      File "/usr/share/python2.5/site-packages/SQLAlchemy-0.4.6dev_r4841-py2.5.egg/sqlalchemy/sql/visitors.py", line 35, in traverse_single
        return meth(obj, **kwargs)
      File "/usr/share/python2.5/site-packages/SQLAlchemy-0.4.6dev_r4841-py2.5.egg/sqlalchemy/sql/compiler.py", line 756, in visit_metadata
        collection = [t for t in metadata.table_iterator(reverse=False, tables=self.tables) if (not self.checkfirst or not self.dialect.has_table(self.connection, t.name, schema=t.schema))]
      File "/usr/share/python2.5/site-packages/SQLAlchemy-0.4.6dev_r4841-py2.5.egg/sqlalchemy/schema.py", line 1456, in table_iterator
        return iter(sort_tables(tables, reverse=reverse))
      File "/usr/share/python2.5/site-packages/SQLAlchemy-0.4.6dev_r4841-py2.5.egg/sqlalchemy/sql/util.py", line 21, in sort_tables
        vis.traverse(table)
      File "/usr/share/python2.5/site-packages/SQLAlchemy-0.4.6dev_r4841-py2.5.egg/sqlalchemy/sql/visitors.py", line 75, in traverse
        return self._non_cloned_traversal(obj)
      File "/usr/share/python2.5/site-packages/SQLAlchemy-0.4.6dev_r4841-py2.5.egg/sqlalchemy/sql/visitors.py", line 134, in _non_cloned_traversal
        self.traverse_single(target)
      File "/usr/share/python2.5/site-packages/SQLAlchemy-0.4.6dev_r4841-py2.5.egg/sqlalchemy/sql/visitors.py", line 35, in traverse_single
        return meth(obj, **kwargs)
      File "/usr/share/python2.5/site-packages/SQLAlchemy-0.4.6dev_r4841-py2.5.egg/sqlalchemy/sql/util.py", line 15, in visit_foreign_key
        parent_table = fkey.column.table
      File "/usr/share/python2.5/site-packages/SQLAlchemy-0.4.6dev_r4841-py2.5.egg/sqlalchemy/schema.py", line 788, in column
        "foreign key" % tname)
    sqlalchemy.exceptions.NoReferencedTableError: Could not find table 'ObjectType' with which to generate a foreign key

The first table (without the SND_ prefix) was created successfully. Any ideas how to achieve my goal?

Regards,
Artur
[sqlalchemy] Re: Insertion order not respecting FK relation
Michael Bayer wrote:
> The most crucial issue, although not the issue in this specific example, is that the relations table is used both as the "secondary" table in a relation(), and is also mapped directly to the Relation class. SQLA does not track this fact, and even in a working mapping will attempt to insert multiple, redundant rows into the table if you had, for example, appended to the "records" collection and also created a Relation object.

Right; this did seem wrong in the first place.

> The next issue, which is the specific cause of the problem here, is that SQLA's topological sort is based off of the relationships between classes and objects, and not directly off the foreign key relationships between tables. Specifically, there is no stated relationship between the Record class and the Soup/Collection classes, yet you append a Record object to the "records" collection, which is only meant to store Soup objects. SQLA sees no dependency between the Collection and Record mappers in this case, and the order of table insertion is undefined. This collection append is only possible due to the enable_typechecks=False setting, which essentially causes SQLA to operate in a slightly broken mode to allow very specific use cases to work (which are not this one, hence SQLA's behavior is still undefined). enable_typechecks, as the initial error message implied when it mentioned polymorphic mapping, is meant to be used only with inheritance scenarios, and only with objects that are subclasses of the collected object. It suggests that a certain degree of typechecking should remain even if enable_typechecks is set to False (something for me to consider in 0.5).

Thank you for clarifying this; at a certain point it was clear to us that SQLA was not equipped to understand what we were doing. I think we somehow expected it to look at the FKs.

> I've considered someday doing a rewrite of UOW that ultimately bases the topological sort off of ForeignKey and the actual rows to be inserted, and that's it. It's nothing that will happen anytime soon, as it's a huge job and our current UOW is extremely stable and has done a spectacular job for almost two years at this point. But even then, while such an approach might prevent this specific symptom with this specific mapping, it seems like a bad idea in any case to support placing arbitrary, unrelated types into collections that have been defined as storing a certain type. I'm not at all sure that such an approach to UOW wouldn't ultimately have all the same constraints as our current approach anyway.

Certainly stable is good; strictly looking at FKs only might ultimately make for a simpler implementation, though.

> Fortunately, the solution here is very simple, as your table setup is a pure classic joined-table inheritance configuration. The attached script (just one script; sorry, all the buildout stuff seemed a little superfluous here) illustrates a straightforward mapping against these tables which only requires that Record and Collection subclass Soup (which is the nature of the joins on those tables). The joins themselves are generated automatically by SQLA, so there's no need to spell those out. The enable_typechecks flag is still in use here in its stated use case: you have a collection which can flush subtypes of Soup, but when queried later, will only return Soup objects. You can improve upon that by using a polymorphic discriminator (see the docs for info on that).

Hmm, this solution hadn't occurred to me, but it makes a lot of sense. This is great. For what it's worth, we do have a polymorphic rebuilder function in place to bring these soup items back to life. With regard to buildout: it's a habit acquired from the Zope community; it really is a lot less overhead than you might think :-)

> The script illustrates using the "secondary" table in the "records" collection; this seems reasonable considering that there is no other meaningful data in the relations table (the surrogate PK in that table is also superfluous). If there are meaningful columns in your actual application's version of the table, then you'd want to do away with "secondary" and use the association object pattern.

We did start out without the secondary table, manually setting up relations, because in fact we're trying to build an ordered list, which requires a ``position`` column. I'll try to adapt all this into our existing package* and see how it works. Your help is much appreciated.

\malthe

*) http://pypi.python.org/pypi/z3c.dobbin
[sqlalchemy] unexpected behaviour of in_
Hi All, I am seeing something I didn't expect when using in_. Here is a simple example, exactly as I expect:

    In [13]: col = Trade.c.TradeId.in_([1,2])
    In [14]: sel = select([col])
    In [15]: print col
    Trade.TradeId IN (?, ?)
    In [16]: print sel
    SELECT Trade.TradeId IN (?, ?) AS anon_1
    FROM Trade

But now, if I use a subselect, I see a problem:

    In [17]: col = Trade.c.TradeId.in_(select([Trade.c.TradeId]))
    In [18]: sel = select([col])
    In [19]: print col
    Trade.TradeId IN (SELECT Trade.TradeId FROM Trade)
    In [20]: print sel
    SELECT Trade.TradeId IN (SELECT Trade.TradeId FROM Trade) AS anon_1
    FROM Trade, (SELECT Trade.TradeId AS TradeId FROM Trade)

The column definition (col) is as expected, but the select definition (sel) is strange. It selects from two things and generates n^2 rows. How can I get the select I expect:

    SELECT Trade.TradeId IN (SELECT Trade.TradeId FROM Trade) AS anon_1
    FROM Trade

thanks, James
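The n^2 behaviour comes from the subselect landing in the FROM clause as a second, uncorrelated table, so every row of Trade is paired with every row of the subquery. The effect can be demonstrated with plain SQL via Python's built-in sqlite3 module (the three-row Trade table here is a stand-in, not the poster's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Trade (TradeId INTEGER)")
conn.executemany("INSERT INTO Trade VALUES (?)", [(1,), (2,), (3,)])

# The intended query: the subselect stays inside IN (...), one result row per Trade row.
good = conn.execute(
    "SELECT TradeId IN (SELECT TradeId FROM Trade) FROM Trade"
).fetchall()

# The generated query: the subselect also appears in FROM, producing a cross join.
bad = conn.execute(
    "SELECT Trade.TradeId IN (SELECT TradeId FROM Trade) "
    "FROM Trade, (SELECT TradeId FROM Trade)"
).fetchall()

print(len(good), len(bad))  # 3 9 -- the cross join yields 3 * 3 rows
```

The IN predicate itself is identical in both statements; only the extra FROM entry differs, which is exactly what the poster observes in the compiled SQL.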
[sqlalchemy] Re: Copy of a table with a different name
On Jun 12, 2008, at 4:05 AM, Artur Siekielski wrote:
> Hi. I want to have two Table definitions in one MetaData which are the same except that the name of the second one has an SND_ prefix. To avoid duplicating the schema definition I looked at the Table.tometadata() source and created the following function:
>
>     def _cloneToSND(table, metadata):
>         return Table('SND_' + table.name, metadata,
>                      *([c.copy() for c in table.columns] +
>                        [c.copy() for c in table.constraints]))
>
> But calling metadata.create_all() ends with an error:
>
>     sqlalchemy.exceptions.NoReferencedTableError: Could not find table 'ObjectType' with which to generate a foreign key
>
> The first table (without the SND_ prefix) was created successfully. Any ideas how to achieve my goal?

Send along a test case that includes whatever ForeignKey references to/from ObjectType might be involved here. My initial guess would be to lose the constraints.copy() section, since each Column.copy() will contain a copied ForeignKey inside of it. copy() has only been used by the tometadata() method up until this point.
[sqlalchemy] Re: Insertion order not respecting FK relation
On Jun 12, 2008, at 4:48 AM, Malthe Borch wrote:
> Certainly stable is good; strictly looking at FKs only might ultimately make for a simpler implementation, though.

It starts out simpler, but that simplicity breaks down almost immediately: the dependency rules, which include rules for populating foreign key columns from source columns, as well as delete/update operations which need to be cascaded, also need to execute in the proper sequence (largely because newly generated PK values are created in tandem with INSERTs in all cases). Those rules are all derived from the actual objects at play, so it would still be quite complex to link the tables/rows for insert/delete/update to the classes/objects they represent.
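The FK-driven ordering the thread discusses — writing a referenced table before the tables whose foreign keys point at it — can be sketched with the standard library's graphlib. The table names and dependency map below are hypothetical, and this deliberately ignores all the object-level rules described above:

```python
from graphlib import TopologicalSorter

# table -> set of tables it references via FOREIGN KEY (hypothetical schema,
# loosely modeled on the Soup/Collection/Record tables in this thread)
fk_deps = {
    "soup": set(),
    "collection": {"soup"},
    "record": {"soup"},
    "relations": {"collection", "record"},
}

# TopologicalSorter treats the mapped values as predecessors, so each
# referenced table comes out before the tables whose FKs point at it.
insert_order = list(TopologicalSorter(fk_deps).static_order())
print(insert_order)  # 'soup' comes first, 'relations' last
```

This is only the easy half of the problem: as the reply notes, a real unit of work must also propagate newly generated primary key values into dependent rows and sequence cascaded deletes/updates, which is where the pure-FK view stops being sufficient.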
[sqlalchemy] Re: unexpected behaviour of in_
This is ticket #1074. A slightly clunky workaround for now is:

    col = t1.c.c1.in_([select([t1.c.c1]).as_scalar()])

On Jun 12, 2008, at 10:04 AM, casbon wrote:
> The column definition (col) is as expected, but the select definition (sel) is strange. It selects from two things and generates n^2 rows. [...]
[sqlalchemy] Re: unexpected behaviour of in_
Ah, thanks. I should have searched the bug reports as well as the list.

On Jun 12, 3:27 pm, Michael Bayer [EMAIL PROTECTED] wrote:
> This is ticket #1074. A slightly clunky workaround for now is:
>
>     col = t1.c.c1.in_([select([t1.c.c1]).as_scalar()])
[sqlalchemy] Re: unexpected behaviour of in_
On Jun 12, 2008, at 10:51 AM, casbon wrote:
> Ah, thanks. I should have searched the bug reports as well as the list.

No no, I just created that ticket :)
[sqlalchemy] Re: Object is already attached to session
On Apr 16, 2008, at 3:55 PM, mg wrote:
> I have a couple of threads that are working on the same objects, passing them back and forth in queues. I have just started testing with the SQLAlchemy parts turned on, and I am getting the "already attached to session" message. Also of note is that I am using the Elixir declarative layer, although I don't think that is causing the problem. Here is a basic example of what I am doing:
>
>     queue = Queue()
>
>     class History(Entity):
>         status_id = Field(Integer)
>         text = Field(Text)
>         using_options(tablename='history', autosetup=True)
>         using_table_options(useexisting=True)
>
>     class Worker(threading.Thread):
>         def run(self):
>             items = History.query.all()
>             for item in items:
>                 queue.put(item)
>             for i in range(pool_size):
>                 consumer = Consumer(self.getName())
>                 consumer.start()
>
>     class Consumer(threading.Thread):
>         def run(self):
>             item = queue.get_nowait()
>             if True:
>                 item.status_id = 1
>             else:
>                 item.status_id = 2
>             item.update()
>             item.flush()
>
> When I try to update I get:
>
>     InvalidRequestError: Object '[EMAIL PROTECTED]' is already attached to session '21222960' (this is '21223440')
>
> Any help would be greatly appreciated.

If you're passing objects between threads with corresponding contextual sessions, you need to expunge those objects from the session in which they're present before using them in another session. Use Session.expunge() for this.
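The expunge-before-handoff discipline can be illustrated without SQLAlchemy at all. The Registry class below is a toy stand-in for a per-thread contextual Session's identity map (it is not the SQLAlchemy API); the point is only the ownership rule: detach from the producing "session" before another thread's "session" may adopt the object.

```python
import queue

class Registry(object):
    """Toy stand-in for a contextual Session's identity map (not SQLAlchemy)."""
    def __init__(self):
        self.objects = set()

    def attach(self, obj):
        owner = getattr(obj, "_owner", None)
        if owner is not None and owner is not self:
            # mirrors "Object ... is already attached to session ..."
            raise RuntimeError("object is already attached to another registry")
        self.objects.add(obj)
        obj._owner = self

    def expunge(self, obj):
        self.objects.discard(obj)
        obj._owner = None

class Item(object):
    pass

producer_session = Registry()
consumer_session = Registry()
q = queue.Queue()

item = Item()
producer_session.attach(item)

# Hand off: expunge from the producing "session" before queueing,
# mirroring the Session.expunge() advice above.
producer_session.expunge(item)
q.put(item)

received = q.get()
consumer_session.attach(received)  # succeeds because the item was expunged first
```

Skipping the expunge step makes the second attach raise, which is the analogue of the InvalidRequestError the poster hit.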
[sqlalchemy] UnboundExecutionError with SQL objects in subclassed Django Context
In the following code, I am using Django templates to render data from a SQLAlchemy-mapped database. I subclass django.template.Context so that I can pass it a unique ID, from which it determines what to pull from the DB. But when it comes time to render the template (that is, when I actually try to access data in the database), I get an UnboundExecutionError. If I instantiate the Context object directly, I don't have any problems. Any idea why this would be, and how I can get my class to work? I'd rather keep the lazy-loading semantics intact, if possible. In the code below, render_1() has no problem, while render_2() raises the error. Below the code, I show the output I'm getting.

CODE

    from django.conf import settings
    settings.configure()
    from django.template import Template, Context
    from sqlalchemy.orm.session import Session
    from cdla.orm import docsouth, docsouth_sohp

    class SohpContext(Context):
        def __init__(self, sohp_id):
            s = Session()
            q = s.query(docsouth_sohp.Interview)
            context_dict = {'interview': q.filter_by(sohp_id=sohp_id).one()}
            super(SohpContext, self).__init__(context_dict)

    def render_1(sohp_id):
        print "render_1"
        template = Template('''
    {% for p in interview.participants %}\
    * {{ p.participant.participant_firstname }}
    {% endfor %}''')
        s = Session()
        c = Context({'interview':
            s.query(docsouth_sohp.Interview).filter_by(sohp_id=sohp_id).one()})
        print template.render(c)

    def render_2(sohp_id):
        print "render_2"
        template = Template('''
    {% for p in interview.participants %}\
    * {{ p.participant.participant_firstname }}
    {% endfor %}''')
        c = SohpContext(sohp_id)
        print template.render(c)

    if __name__ == '__main__':
        render_1('A-0001')
        render_2('A-0001')

END CODE

RESULTS

    $ python error_reduce.py
    /net/docsouth/dev/lib/python/sqlalchemy/logging.py:62: FutureWarning: hex()/oct() of negative int will return a signed string in Python 2.4 and up
      return "%s.%s.0x..%s" % (instance.__class__.__module__,
    render_1
    * Richard
    * Richard
    * Jack
    * Jack
    render_2
    Traceback (most recent call last):
      File "error_reduce.py", line 37, in ?
        render_2('A-0001')
      File "error_reduce.py", line 33, in render_2
        print template.render(c)
      File "/usr/lib/python2.3/site-packages/django/template/__init__.py", line 168, in render
        return self.nodelist.render(context)
      File "/usr/lib/python2.3/site-packages/django/template/__init__.py", line 705, in render
        bits.append(self.render_node(node, context))
      File "/usr/lib/python2.3/site-packages/django/template/__init__.py", line 718, in render_node
        return(node.render(context))
      File "/usr/lib/python2.3/site-packages/django/template/defaulttags.py", line 93, in render
        values = self.sequence.resolve(context, True)
      File "/usr/lib/python2.3/site-packages/django/template/__init__.py", line 563, in resolve
        obj = resolve_variable(self.var, context)
      File "/usr/lib/python2.3/site-packages/django/template/__init__.py", line 650, in resolve_variable
        current = getattr(current, bits[0])
      File "/net/docsouth/dev/lib/python/sqlalchemy/orm/attributes.py", line 44, in __get__
        return self.impl.get(instance._state)
      File "/net/docsouth/dev/lib/python/sqlalchemy/orm/attributes.py", line 279, in get
        value = callable_()
      File "/net/docsouth/dev/lib/python/sqlalchemy/orm/strategies.py", line 432, in __call__
        raise exceptions.UnboundExecutionError("Parent instance %s is not bound to a Session, and no contextual session is established; lazy load operation of attribute '%s' cannot proceed" % (instance.__class__, self.key))
    sqlalchemy.exceptions.UnboundExecutionError: Parent instance <class 'cdla.orm.docsouth_sohp.Interview'> is not bound to a Session, and no contextual session is established; lazy load operation of attribute 'participants' cannot proceed

END RESULTS
[sqlalchemy] Re: UnboundExecutionError with SQL objects in subclassed Django Context
SohpContext creates a Session, then loses it immediately. That's your transactional context getting thrown away, basically. You should have a Session open for the lifespan of all ORM operations, which includes lazy loads. See http://www.sqlalchemy.org/docs/04/session.html#unitofwork_contextual_lifespan for some more ideas on this.

On Jun 12, 2008, at 1:06 PM, J. Cliff Dyer wrote:
> In the following code, I am using Django templates to render data from a SQLAlchemy-mapped database. I subclass django.template.Context so that I can pass it a unique ID, from which it determines what to pull from the DB. But when it comes time to render the template (that is, when I actually try to access data in the database), I get an UnboundExecutionError. If I instantiate the Context object directly, I don't have any problems. [...]
>
>     sqlalchemy.exceptions.UnboundExecutionError: Parent instance <class 'cdla.orm.docsouth_sohp.Interview'> is not bound to a Session, and no contextual session is established; lazy load operation of attribute 'participants' cannot proceed
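The failure mode — handing out objects whose data source has already gone away — has a close stdlib analogue: a sqlite3 cursor raises once its connection is closed, just as a lazy load fails once no Session is available. A hedged illustration (sqlite3, not SQLAlchemy; the table and helper names are made up):

```python
import sqlite3

def load_rows_broken():
    """Opens a connection, loses it immediately, then tries to 'lazy load'."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE interview (name TEXT)")
    conn.execute("INSERT INTO interview VALUES ('A-0001')")
    cur = conn.execute("SELECT name FROM interview")
    conn.close()  # the "session" ends before the data is actually read
    return cur.fetchall()  # raises sqlite3.ProgrammingError

def load_rows_ok():
    """Keeps the connection open for the lifespan of the read."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE interview (name TEXT)")
    conn.execute("INSERT INTO interview VALUES ('A-0001')")
    rows = conn.execute("SELECT name FROM interview").fetchall()
    conn.close()
    return rows

print(load_rows_ok())  # [('A-0001',)]
```

The fix in both worlds is the same shape: keep the connection/Session alive until every deferred read (here fetchall(), there the template's lazy attribute access) has actually run.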
[sqlalchemy] Comparable ColumnDefaults for schema diffing
Greetings, Alchemists. In order to implement schema diffing, it would be nice if two similar ColumnDefault objects were comparable as such. I attach a patch to implement such a test. Would it make sense to add this support in Alchemy's core, or should a schema-diffing library add it through monkey patching?

-- Yannick Gingras

Index: lib/sqlalchemy/schema.py
===================================================================
--- lib/sqlalchemy/schema.py    (revision 4842)
+++ lib/sqlalchemy/schema.py    (working copy)
@@ -970,2 +970,2 @@
         return "column_default"
     __visit_name__ = property(_visit_name)
 
+    def __eq__(self, other):
+        if self.__class__ != other.__class__:
+            return NotImplemented
+        if callable(self.arg) and callable(other.arg):
+            return NotImplemented
+        return self.arg == other.arg
+
+    def __ne__(self, other):
+        return not self.__eq__(other)
+
     def __repr__(self):
         return "ColumnDefault(%s)" % repr(self.arg)
 
 class Sequence(DefaultGenerator):
     """Represents a named database sequence."""
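Extracted from the patch, the proposed comparison can be exercised standalone with a minimal stand-in class (ColumnDefaultStub here is hypothetical; the real ColumnDefault carries more state than just .arg):

```python
class ColumnDefaultStub(object):
    """Minimal stand-in for schema.ColumnDefault, carrying only .arg."""
    def __init__(self, arg):
        self.arg = arg

    def __eq__(self, other):
        if self.__class__ != other.__class__:
            return NotImplemented
        if callable(self.arg) and callable(other.arg):
            # two callable defaults can't be meaningfully compared for equality
            return NotImplemented
        return self.arg == other.arg

    def __ne__(self, other):
        return not self.__eq__(other)

print(ColumnDefaultStub(20) == ColumnDefaultStub(20))  # True
print(ColumnDefaultStub(20) == ColumnDefaultStub(30))  # False
```

Note that defining __eq__ without __hash__ changes how instances behave in sets and dicts, which is part of the objection raised in the reply below the patch.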
[sqlalchemy] Re: Comparable ColumnDefaults for schema diffing
Michael Bayer [EMAIL PROTECTED] writes:
> Can't the schema-diff utility include a function such as compare_defaults(a, b)? A ColumnDefault isn't really like a SQL expression object, so the __eq__()/__ne__() seems inappropriate (in general, overriding __eq__() is an endeavor to be undertaken carefully, since it heavily changes the behavior of that object when used in lists and such).

You are right that defining __eq__() can have nasty side effects, but it seems strange to me that ColumnDefault(20) == ColumnDefault(20) is False. If you think there might be other side effects that I didn't foresee, I will implement the comparator in the diffing library. In the same way, what do you think about __eq__() for types? This is False:

    types.Integer(10) == types.Integer(10)

which was unexpected, to say the least, but there might be a good reason for it.

-- Yannick Gingras
[sqlalchemy] SQLAlchemy 0.5 beta1 released
The first release of 0.5 is now available. For those who have been slow to move to the trunk, 0.5 represents a great leap in refinement to the paradigms that became standard in 0.4, with a good degree of API lockdown that removes all the clutter that was lingering from 0.3. For the typical 0.4 application that didn't rely on any deprecated features, migration to 0.5 should be pretty straightforward. The migration guide at http://www.sqlalchemy.org/trac/wiki/05Migration should be the reference, and everyone should feel free to amend that document with notes and changes.

The best place to get familiar with the ORM is the new tutorial, which is hands down the best SA tutorial we've ever had and which I'd recommend everyone read: http://www.sqlalchemy.org/docs/05/ormtutorial.html . We're standardizing on the declarative extension (for those who don't use Elixir), sessions that hug along transactions, and Query as the single point of SQL generation (including as an ORM-enabled replacement for select()). As these features are all quite new, I'd like to start piling up some user experience data so we can get all the bugs out by the first final release.

Download SQLA 0.5 beta1 at: http://www.sqlalchemy.org/download.html
[sqlalchemy] Re: Comparable ColumnDefaults for schema diffing
On Jun 12, 2008, at 5:01 PM, Yannick Gingras wrote:

> Michael Bayer [EMAIL PROTECTED] writes:
>> can't the schema diff utility include a function such as compare_defaults(a, b)? a ColumnDefault isn't really like a SQL expression object so the __eq__()/__ne__() seems inappropriate (in general, overriding __eq__() is an endeavor to be taken on carefully, since it heavily changes the behavior of that object when used in lists and such).
>
> You are right that defining __eq__() can have nasty side effects but it seems strange to me that ColumnDefault(20) == ColumnDefault(20) is False. If you think that there might be other side effects that I didn't foresee, I will implement the comparator in the diffing library. In the same way, what do you think about __eq__() for types? This is False: types.Integer(10) == types.Integer(10), which was unexpected to say the least, but there might be a good reason for it.

it's just that these objects weren't intended for direct expression usage. I don't override __eq__() lightly, and it does create a lot of burden within the source base (such as, we need to use the non-native OrderedSet to represent ordered collections of Columns, not plain lists; painful). At the moment it seems like the comparison of schema elements is specific to schema tools which are doing some kind of coarse-grained compare() operation. I don't yet see the comparison of granular schema elements as being of general use. I'm sure you've already written a compare_columns() function, since Column cannot be compared via __eq__() either.
[sqlalchemy] Re: sqlite PK autoincrement not working when you do a composite PK?
> judging by the slapdown in this ticket, it looks safe to say that this behavior in SQLite will never change: http://www.sqlite.org/cvstrac/tktview?tn=2553

Yow - that is a pretty terse slapdown! It doesn't seem like sqlite will ever support it. I keep hoping that sqlalchemy can be abstract enough that it will enable the use of any database backend. Stuff like this sqlite composite PK hiccup is discouraging, but I'm convinced there is a workaround to make this work! I don't care whether it is the fastest way; it just has to work.

I'm trying to get this to work with logic along these lines:

1. Identify in SQLiteCompiler.visit_insert when a primary key is missing from the insert (insert_stmt).
2. If the missing key is tagged as autoincrement, auto-add it to the insert object with a bogus value before the insert is processed (and flag it for use later when working with the execution context).
3. When subbing the variables into the INSERT statement later, replace the bogus value with something like: (SELECT max(id) FROM user)+1

It seems somewhat reasonable in principle to me, but the problems I'm having in reality are:

1. How do I override SQLiteCompiler.visit_insert without modifying SQLA's sqlite.py? I of course want to avoid trashing the base SQLA install, but can't find an override location in the object tree from my session or engine or anything.
2. Even if I could find a way to override visit_insert, I'm having trouble locating a place to stuff the select-max code in. Tweaking the statement by creating an SQLiteDialect.do_execute implementation seems like it might work, but it also doesn't seem like the cleanest way.
3. What internal SQLA structures can I count on staying fixed through revisions? E.g. in visit_insert I can use self._get_colparams to figure out what columns have been requested, and I can use insert_stmt.table._columns to figure out what primary key is missing (and whether it is supposed to be autoincrement).
But I don't know which of those I can actually count on being there in the future! Plus, crawling around internal objects like this just seems like a bad idea. Any help is appreciated. I expect I'm in over my head trying to mess with a dialect implementation. I'm also worried that this will just be the first of many things like this I'll be trying to overcome to get SQLA to truly abstract the database implementations away...

And a related question: what is the general feeling on how well SQLA abstracts the underlying database away? Am I expecting too much to be able to write my application using SQLA-only from the beginning and have it work on any of the popular databases without much tweaking?
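The core of step 3 above, substituting a subquery for the missing autoincrement value, can be demonstrated without touching SQLAlchemy's dialect internals at all by issuing the INSERT by hand. A minimal sketch using the stdlib sqlite3 module; the `user` table and its columns are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE user (
        id INTEGER NOT NULL,
        domain TEXT NOT NULL,
        name TEXT,
        PRIMARY KEY (id, domain)  -- composite PK: no rowid autoincrement
    )
""")

def insert_user(conn, domain, name):
    # Emulate autoincrement for the composite key by computing the next id
    # inside the INSERT itself, so it happens in a single statement.
    # Note: this is only safe when writers are serialized (as they are
    # within one SQLite transaction), not under concurrent connections.
    conn.execute(
        "INSERT INTO user (id, domain, name) "
        "VALUES ((SELECT COALESCE(MAX(id), 0) + 1 FROM user), ?, ?)",
        (domain, name),
    )

insert_user(conn, "example.org", "alice")
insert_user(conn, "example.org", "bob")
print(conn.execute("SELECT id, name FROM user ORDER BY id").fetchall())
# -> [(1, 'alice'), (2, 'bob')]
```

Whether this SELECT-MAX pattern can be wired into the compiler as sketched in steps 1 and 2 is a separate question, but it at least confirms SQLite accepts a scalar subquery in the VALUES clause.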