[sqlalchemy] Re: extended inserts
Right, INSERT in general doesn't support bulk inserts, but MySQL specifically does. MySQLdb generates an extended, single INSERT statement when you use executemany(). Anyway, it makes sense that the ORM layer wouldn't know how to generate bulk inserts for just one engine. I'll just have to drop back to raw SQL for that part of my application. Thanks!

David Howell

On Mon, Aug 31, 2009 at 9:12 PM, Mike Conley mconl...@gmail.com wrote:

    people = [Person('Mick Jagger'), Person('Keith Richards')]
    s.add_all(people)
    s.commit()

    INSERT INTO person (name) VALUES (('Mick Jagger'), ('Keith Richards'))

    I could be wrong for some database engine, but INSERT generally does not support a bulk insert mechanism like this. The only bulk insert capability is the INSERT ... SELECT FROM syntax. DBAPIs, such as the Python DBAPI executemany() or SQLAlchemy, accept something like a bulk insert, but actually generate multiple INSERT statements.

    -- Mike Conley

You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To post to this group, send email to sqlalchemy@googlegroups.com. To unsubscribe from this group, send email to sqlalchemy+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en
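The DBAPI-level behavior under discussion can be sketched with the stdlib sqlite3 driver (whose executemany(), unlike MySQLdb's, runs the parameterized statement once per row rather than emitting one extended INSERT; the table here is a hypothetical stand-in for the thread's `person`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (name TEXT)")

# executemany() takes ONE parameterized statement plus a sequence of
# parameter sets; whether that becomes a single extended INSERT
# (MySQLdb against MySQL) or many single-row INSERTs (sqlite3 here)
# is entirely up to the driver.
conn.executemany(
    "INSERT INTO person (name) VALUES (?)",
    [("Mick Jagger",), ("Keith Richards",)],
)
conn.commit()

count = conn.execute("SELECT count(*) FROM person").fetchone()[0]
print(count)  # 2
```

Either way the observable result is the same two rows; only the wire-level statement shape differs per driver.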
[sqlalchemy] dictionary changed size during iteration in check_modified() error
We encounter the following error with SA 0.5.4p2 under Postgres 7.4 (Linux) from within Zope. Anyone seen this before?

Andreas

---
Error Type: exceptions.RuntimeError
Error Value: dictionary changed size during iteration

Traceback:
  File "/projects/HRS2/etch64/HRS_P_0907/HRS/BPExport/Bauplanexport/hrsbp_exp_query_thread.py", line 113, in run_export
    self.exportoptions = self.validate_query()
  File "/projects/HRS2/etch64/HRS_P_0907/HRS/BPExport/Bauplanexport/hrsbp_exp_query_thread.py", line 183, in validate_query
    return self.query.validate(self.threadContext)
  File "/projects/HRS2/etch64/HRS_P_0907/HaufeCMS/parts/products-svn/HaufeCMS/DataLevel/BauplanQuery/BauplanQueryBase.py", line 120, in validate
    return resolver.validate(qd)
  File "/projects/HRS2/etch64/HRS_P_0907/HaufeCMS/parts/products-svn/HaufeCMS/DataLevel/BauplanQuery/ToolboxQueryResolver.py", line 312, in validate
    kategorieNode = hierarchy_by_lidx(toolbox_db, productID, kategorieID)
  File "/projects/HRS2/etch64/HRS_P_0907/HaufeCMS/parts/modules-svn/toolbox/product/util3.py", line 150, in hierarchy_by_lidx
    parent = TB(pid=pid)
  File "/projects/HRS2/etch64/HRS_P_0907/HaufeCMS/parts/modules-svn/toolbox/product/treebuilder3.py", line 40, in __call__
    self._updateNodes(node)
  File "/projects/HRS2/etch64/HRS_P_0907/HaufeCMS/parts/modules-svn/toolbox/product/treebuilder3.py", line 51, in _updateNodes
    self._updateNodes(child, level+1)
  File "/projects/HRS2/etch64/HRS_P_0907/HaufeCMS/parts/modules-svn/toolbox/product/treebuilder3.py", line 51, in _updateNodes
    self._updateNodes(child, level+1)
  File "/projects/HRS2/etch64/HRS_P_0907/HaufeCMS/parts/modules-svn/toolbox/product/treebuilder3.py", line 50, in _updateNodes
    for child in node.children:
  File "/projects/HRS2/etch64/HaufeCMS/parts/modules-svn/toolbox/database/model.py", line 479, in children
    for n in self._children:
  File "/projects/HRS2/etch64/HRS_P_0907/HaufeCMS/eggs/SQLAlchemy-0.5.4p2-py2.4.egg/sqlalchemy/orm/attributes.py", line 158, in __get__
    return self.impl.get(instance_state(instance), instance_dict(instance))
  File "/projects/HRS2/etch64/HRS_P_0907/HaufeCMS/eggs/SQLAlchemy-0.5.4p2-py2.4.egg/sqlalchemy/orm/attributes.py", line 374, in get
    value = callable_()
  File "/projects/HRS2/etch64/HRS_P_0907/HaufeCMS/eggs/SQLAlchemy-0.5.4p2-py2.4.egg/sqlalchemy/orm/strategies.py", line 568, in __call__
    result = q.all()
  File "/projects/HRS2/etch64/HRS_P_0907/HaufeCMS/eggs/SQLAlchemy-0.5.4p2-py2.4.egg/sqlalchemy/orm/query.py", line 1193, in all
    return list(self)
  File "/projects/HRS2/etch64/HRS_P_0907/HaufeCMS/eggs/SQLAlchemy-0.5.4p2-py2.4.egg/sqlalchemy/orm/query.py", line 1286, in __iter__
    self.session._autoflush()
  File "/projects/HRS2/etch64/HRS_P_0907/HaufeCMS/eggs/SQLAlchemy-0.5.4p2-py2.4.egg/sqlalchemy/orm/session.py", line 899, in _autoflush
    self.flush()
  File "/projects/HRS2/etch64/HRS_P_0907/HaufeCMS/eggs/SQLAlchemy-0.5.4p2-py2.4.egg/sqlalchemy/orm/session.py", line 1354, in flush
    self._flush(objects)
  File "/projects/HRS2/etch64/HRS_P_0907/HaufeCMS/eggs/SQLAlchemy-0.5.4p2-py2.4.egg/sqlalchemy/orm/session.py", line 1359, in _flush
    if (not self.identity_map.check_modified() and
  File "/projects/HRS2/etch64/HRS_P_0907/HaufeCMS/eggs/SQLAlchemy-0.5.4p2-py2.4.egg/sqlalchemy/orm/identity.py", line 56, in check_modified
    for state in self._mutable_attrs:
[sqlalchemy] Complicated relations
Hello: I am trying to build a complicated relation, but am having trouble due to the following:

* there are multiple intermediate tables involved
* the relation requires EXISTS clauses as the join condition between several of these tables

My model is shown below. The relation I am trying to build is RunFilter.matching_runs, which is effectively a many-to-many relationship between RunFilter and Run. Currently, I'm using raw SQL to model this relationship, as shown at the end of the example. Ideally, I would like to build this relationship on top of other relationships, so that RunFilter.matching_runs would use RunFilter.var_clauses.any() and VarClause.matching_run_items.any() in its join condition, and VarClause.matching_run_items would use VarClause.values.any(). I realize that SQLAlchemy does not support this because mapper properties are not compiled in any predictable order, but it would be nice.

import sqlalchemy as sa
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.ext.declarative import declarative_base
import sqlalchemy.orm as orm
from sqlalchemy.orm.collections import attribute_mapped_collection

Base = declarative_base()

# Association tables
var_clause_to_value = sa.Table("var_clause_to_value", Base.metadata,
    sa.Column("value_id", sa.Integer, sa.ForeignKey("value.id"), primary_key=True),
    sa.Column("var_clause_id", sa.Integer, sa.ForeignKey("var_clause.id"), primary_key=True))

# Entities
class Variable(Base):
    __tablename__ = "variable"

    # Fields
    id = sa.Column(sa.Integer, primary_key=True)
    # In the real world, we would have a `name` attribute as well, but it's
    # not important for this example.

    # Relations
    values = orm.relation("Value")

class Value(Base):
    """Each variable can be attached to zero or more values."""
    __tablename__ = "value"

    # Fields
    id = sa.Column(sa.Integer, primary_key=True)
    variable_id = sa.Column(sa.Integer, sa.ForeignKey("variable.id"), nullable=False)
    # In the real world, we would have a `name` attribute as well, but it's
    # not important for this example.

    # Relations
    variable = orm.relation(Variable)

class Run(Base):
    """A run consists of a combination of inputs (variable=value pairs)
    and an output (result)."""
    __tablename__ = "run"

    def __str__(self):
        return "run%d" % self.id

    # Fields
    id = sa.Column(sa.Integer, primary_key=True)
    result = sa.Column(sa.Unicode, nullable=False)

    # Relations
    var2item = orm.relation("RunItem",
        collection_class=attribute_mapped_collection("variable"))
    var2value = association_proxy(
        "var2item", "value",
        creator=lambda var, value: RunItem(value=value, variable=var))

class RunItem(Base):
    """A single variable=value pair for a given run."""
    __tablename__ = "run_item"

    # Fields
    run_id = sa.Column(sa.Integer, sa.ForeignKey("run.id"), primary_key=True)
    value_id = sa.Column(sa.Integer, sa.ForeignKey("value.id"), primary_key=True)
    variable_id = sa.Column(sa.Integer, sa.ForeignKey("variable.id"), primary_key=True)

    # Relations
    run = orm.relation(Run)
    value = orm.relation(Value)
    variable = orm.relation(Variable)

class RunFilter(Base):
    """A user-defined query for runs.

    For example, if you want to find all runs where var1=value1 and
    var2=value2, you could use::

        filter = RunFilter(var_clauses=[VarClause(variable=var1, values=[value1]),
        ...                             VarClause(variable=var2, values=[value2])])

    Note that for a run filter to match a run, *all* of its var clauses
    have to be satisfied.
    """
    __tablename__ = "run_filter"

    def __str__(self):
        return "run_filter%d" % self.id

    # Fields
    id = sa.Column(sa.Integer, primary_key=True)

    # Relations
    var_clauses = orm.relation("VarClause")

    # Desired SQL for `matching_runs`:
    # (assume `run` and `run_filter` are correlated to an outer query)
    # NOT EXISTS (SELECT 1 FROM var_clause
    #     WHERE var_clause.run_filter_id = run_filter.id
    #     AND NOT EXISTS (SELECT 1 FROM run_item
    #         WHERE run_item.variable_id = var_clause.variable_id
    #         AND EXISTS (SELECT 1 FROM var_clause_to_value
    #             WHERE var_clause_to_value.var_clause_id = var_clause.id
    #             AND var_clause_to_value.value_id = run_item.value_id)))
    #
    # I would like to define this relation as (using `if False` to effectively
    # comment it out but retain syntax highlighting):
    if False:
        def _primaryjoin():
            return ~RunFilter.var_clauses.any(
                ~VarClause.matching_run_items.any(RunItem.run_id == Run.id))
        matching_runs = orm.relation(Value, primaryjoin=_primaryjoin)
        del _primaryjoin

class VarClause(Base):
    A VarClause matches a RunItem
[sqlalchemy] Re: dictionary changed size during iteration in check_modified() error
zopyxfil...@googlemail.com wrote:

    We encounter the following error with SA 0.5.4p2 under Postgres 7.4 (Linux) from within Zope. Anyone seen this before?

It is a known issue, repaired in the 0.5.6 branch as well as trunk.

    Andreas
    ---
    Error Type: exceptions.RuntimeError
    Error Value: dictionary changed size during iteration
    [...]
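The error class in that traceback is easy to reproduce in isolation: CPython raises RuntimeError whenever a dict is resized while being iterated, which is what happens when something (e.g. a reentrant flush or another thread) touches the identity map's `_mutable_attrs` dict mid-iteration. A minimal sketch:

```python
# Mutating a dict while iterating over it raises RuntimeError -- the
# same failure mode as check_modified() iterating _mutable_attrs while
# it is concurrently modified.
d = {"a": 1, "b": 2}

caught = None
try:
    for key in d:
        d["c"] = 3  # resize the dict during iteration
except RuntimeError as exc:
    caught = str(exc)

print(caught)  # dictionary changed size during iteration
```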
[sqlalchemy] Re: Local disk database caching and Session subclasses
An update on the above: I goofed with subclassing since I had various interesting things in PYTHONPATH which caused issues. My original architecture question still applies, though.

Dimitri.

On Aug 31, 4:43 pm, Dimitri Tcaciuc dtcac...@gmail.com wrote:

    I'm working with SQLite databases located on an NFS mount, which is proving to be too slow. However, since the access is mostly read-only, when local disk space allows for it, I can make a copy of the database in the temp space and work with that instead. I'd like to do it as nonintrusively as possible, yet fairly explicitly. My question is which part of the SQLA framework makes the most sense to extend/subclass for that purpose? I'm having a hard time deciding which object should be responsible for the lifetime of that cache. My best guess is that, conceptually, Session is the closest one; however, I'm also considering Engine objects. My plan was that when an engine gets bound to a session, I perform the database copy and create another engine which actually binds. When the session expires, the temporary database is collected. But at this point, trying to make a simple Session subclass like so:

        from sqlalchemy.orm.session import Session
        from sqlalchemy.orm import sessionmaker

        def CachedSession(Session):
            pass

        MySession = sessionmaker(class_=CachedSession)

    results in a metaclass conflict, so I'm not sure how to proceed here. Does the whole thing even make sense, and maybe there's a better way to do this? My constraint is that I need to stick to SQLite for various other reasons, which are not directly related to the discussion. Thanks, Dimitri.
[sqlalchemy] Re: Complicated relations
Conor wrote:

    Hello: I am trying to build a complicated relation, but am having trouble due to the following: there are multiple intermediate tables involved, and the relation requires EXISTS clauses as the join condition between several of these tables. My model is shown below. The relation I am trying to build is RunFilter.matching_runs, which is effectively a many-to-many relationship between RunFilter and Run. [...] I realize that SQLAlchemy does not support this because mapper properties are not compiled in any predictable order, but it would be nice.

Have you considered the advice at http://www.sqlalchemy.org/docs/05/mappers.html#building-query-enabled-properties ? An embedded EXISTS subquery in a relation() doesn't seem like something that should be emitted implicitly - hand-constructing the appropriate Query on a case-by-case basis would lead to better results.

    [...]
[sqlalchemy] Re: Local disk database caching and Session subclasses
Dimitri Tcaciuc wrote:

    My plan was that when an engine gets bound to a session, I perform the database copy and create another engine which actually binds. When the session expires, the temporary database is collected. But at this point, trying to make a simple Session subclass like so:

        from sqlalchemy.orm.session import Session
        from sqlalchemy.orm import sessionmaker

        def CachedSession(Session):
            pass

        MySession = sessionmaker(class_=CachedSession)

    results in a metaclass conflict,

sessionmaker() and Session don't use any metaclasses, so I'm not sure what the issue is here.
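One thing worth noting about the quoted snippet (and likely the "goof" the poster mentions elsewhere in the thread): it uses `def`, which defines a *function* named CachedSession taking one argument, not a subclass; sessionmaker(class_=...) needs an actual class. A self-contained sketch with a hypothetical stand-in Session:

```python
class Session:
    """Stand-in for sqlalchemy.orm.session.Session (hypothetical)."""
    pass

def BrokenCachedSession(Session):
    # 'def' makes this a plain function whose parameter shadows
    # Session -- not a subclass at all.
    pass

class CachedSession(Session):
    # 'class' is what sessionmaker(class_=CachedSession) expects.
    pass

print(isinstance(BrokenCachedSession, type))   # False
print(issubclass(CachedSession, Session))      # True
```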
[sqlalchemy] Re: Complicated relations
On Sep 1, 12:13 pm, Michael Bayer mike...@zzzcomputing.com wrote:

    [...] have you considered the advice at http://www.sqlalchemy.org/docs/05/mappers.html#building-query-enabled... ? an embedded EXISTS subquery in a relation() doesn't seem like something that should be emitted implicitly - hand-constructing the appropriate Query on a case by case basis would lead to better results.

Thanks for your reply, Michael. I wanted it to be a relation so that I would only have to define it once and be able to use it in different contexts, but it is not absolutely urgent. Also, I did try writing the query without using raw SQL, but I could not get the relations' any() calls to correlate properly. The query I tried, along with the generated SQL, is at http://python.pastebin.com/f5ec43ac9. The highlighted lines show that var_clause and run_item, respectively, are not getting correlated to the outer queries. Is there a way to make the any() method force this correlation?
[sqlalchemy] Re: Complicated relations
Conor wrote:

    [...] I did try writing the query without using raw SQL, but I could not get the relations' any() calls to correlate properly. The query I tried, along with the generated SQL, is at http://python.pastebin.com/f5ec43ac9. The highlighted lines show that var_clause and run_item, respectively, are not getting correlated to the outer queries. Is there a way to make the any() method force this correlation?

any() is hardwired to correlate only to the parent table of the relation(). So for anything more custom than that, you'd probably have to spell out the correlation you want using exists().correlate(tables). If you're super-motivated to build up these comparators (personally I wouldn't be), you can build it up using comparable_property() and your own PropComparator subclass.
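The exists().correlate() suggestion can be sketched with SQLAlchemy Core. The table shapes below are minimal stand-ins for the thread's schema, and the select() call style is the modern one (SQLAlchemy 1.4+), not the 0.5 spelling the thread uses:

```python
from sqlalchemy.sql import column, exists, select, table

# Lightweight table constructs standing in for the real schema.
run = table("run", column("id"))
run_filter = table("run_filter", column("id"))
var_clause = table("var_clause", column("id"), column("run_filter_id"))

# correlate() explicitly names the table the subquery should take from
# the enclosing SELECT, instead of adding it to its own FROM list --
# the control that any() doesn't expose.
clause_exists = (
    exists()
    .where(var_clause.c.run_filter_id == run_filter.c.id)
    .correlate(run_filter)
)

stmt = select(run.c.id, run_filter.c.id).where(~clause_exists)
print(stmt)
```

Compiled, the EXISTS subquery lists only var_clause in its FROM clause and references run_filter from the outer query.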
[sqlalchemy] Inferring joins from table A to table C via table B
Hi, I'm relatively new to SA and unable to find the answer in this group's archives. (Possibly my search-fu is weak, in which case I apologize, but I hope you'll point me in the right direction.)

We're building a generic DB query engine on SA using the ORM and mappers, but we can't seem to get SA to naturally figure out the join required to render a query that filters on data found in two tables that are related through a third, association table. For example, given two tables:

    tbl_people
        idpeople (primary key)
        ...

    tbl_documents
        iddoc (primary key)
        ...

We have a third table that relates the two of them together thusly:

    tbl_people_documents
        idpeople (foreign key to tbl_people.idpeople; primary key)
        iddoc (foreign key to tbl_documents.iddoc; primary key)
        ...

Building a mapper object with the proper relations is no problem. But building a query object that filters on data in just tbl_people and tbl_documents fails because SA can't seem to infer the join between tbl_people and tbl_documents despite the foreign keys present in tbl_people_documents. MUST we explicitly supply the join to such query objects? Or is there some way that SA can figure out that tbl_people_documents is in between tbl_people and tbl_documents on its own? Perhaps there is something we can add to the tbl_people/tbl_documents object definitions that clues SA in?

Silly example: say we want to find all the people records that have last names of SMITH and document body content of SPAM. We would write the SQL like this:

    SELECT p.*
    FROM tbl_people p, tbl_documents d, tbl_people_documents pd
    WHERE p.idpeople = pd.idpeople
      AND pd.iddoc = d.iddoc
      AND p.lastname = 'SMITH'
      AND d.body = 'SPAM';

I understand how to build the equivalent SA query object by explicitly supplying the query object the join criteria. But we want SA to *know* how to infer that join criteria itself. Is this possible?
Thanks in advance, Damon
[sqlalchemy] Re: Inferring joins from table A to table C via table B
Damon wrote:

    MUST we explicitly supply the join to such query objects? Or is there some way that SA can figure out that tbl_people_documents is in between tbl_people and tbl_documents on its own? Perhaps there is something we can add to the tbl_people/tbl_documents object definitions that clues SA in?

Join on the relation:

    query(A).join(A.relation_to_b).filter(B.foo == 'bar')

If you're saying this:

    query(A).join(B)

that isn't really how query.join() is supposed to work - it's not using ORM channels to figure out the join in that case. All documentation and examples use the first format.
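A runnable sketch of the "join on the relation" pattern against the poster's schema, written in the modern SQLAlchemy spelling (relationship()/declarative_base() rather than the thread's 0.5 relation()); column sizes and sample data are made up for illustration:

```python
import sqlalchemy as sa
from sqlalchemy import orm

Base = orm.declarative_base()

# The association table; SA infers both hops of the join from it once
# the relationship names it as 'secondary'.
people_documents = sa.Table(
    "tbl_people_documents", Base.metadata,
    sa.Column("idpeople", sa.ForeignKey("tbl_people.idpeople"), primary_key=True),
    sa.Column("iddoc", sa.ForeignKey("tbl_documents.iddoc"), primary_key=True),
)

class Person(Base):
    __tablename__ = "tbl_people"
    idpeople = sa.Column(sa.Integer, primary_key=True)
    lastname = sa.Column(sa.String(50))
    documents = orm.relationship("Document", secondary=people_documents)

class Document(Base):
    __tablename__ = "tbl_documents"
    iddoc = sa.Column(sa.Integer, primary_key=True)
    body = sa.Column(sa.Text)

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)

session = orm.Session(bind=engine)
session.add(Person(lastname="SMITH", documents=[Document(body="SPAM")]))
session.commit()

# Joining through the relationship attribute lets SA route the join
# through tbl_people_documents on its own.
q = session.query(Person).join(Person.documents).filter(
    Person.lastname == "SMITH", Document.body == "SPAM")
matches = [p.lastname for p in q]
print(matches)
```

The generated SQL contains both joins through tbl_people_documents without them ever being spelled out in the query.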
[sqlalchemy] Curious Problem
Hi all, I just discovered something weird when doing concurrency testing with my program. Before writing a simplified test case for it and really figuring out whether it's a bug in SQLAlchemy (I am using 0.5.5), I wanted to describe the scenario here. Basically, I was getting the infamous "Set size changed during iteration" error during commit. Multiple threads are not accessing the same session (as was the case in some of the posts I have read online).

Here's what I am doing: there is a task dispatcher that queries the database, gets a list of tasks to be done, and dispatches them to separate threads. Each thread creates its own session using the session_maker and updates the status of a task. There are three kinds of objects in question: Task, TaskSchedule and TaskResult. There is a one-to-many relation between Task and TaskSchedule, and also a one-to-many relation between Task and TaskResult.

Pseudo code:

    task_dispatcher:
        for task_schedule in sess.query(TaskSchedule).filter(next_execution_time <= current_time):
            do_task(task_schedule.task)    # using relation
            task_schedule.next_execution_time = some_value
            sess.commit()

    (in a new thread)
    do_task(task):
        sess := get_session()
        sess.add(TaskResult(task.id, "task_started"))
        sess.commit()
        task.perform()
        sess.merge(TaskResult(task.id, "task_finished"))
        sess.commit()
        sess.close()

Basically, I get the mysterious error for the commit in task_dispatcher intermittently (probably 10 times out of 1000 executions). This brings me to my first question: is it bad practice to commit a session from within the sess.query() loop? I wanted each task execution time update to be independent of other tasks. If this is bad practice, should I use sub-transactions for this purpose?

After realizing this possible pitfall, I changed the code to commit only after the loop had finished. This led to another interesting result: none of my TaskResult objects got committed. Instead I saw a lot of ROLLBACKs in the log with no errors associated with them.

I tried yet another variation. This is the one that puzzled me the most: I tried eager loading the task relation in the TaskScheduleItem and it worked like a charm:

    task_dispatcher:
        for task_schedule in sess.query(TaskScheduleItem).options(eagerload('task')).filter(next_execution_time <= current_time):
            do_task(task_schedule.task)    # using relation
            task_schedule.next_execution_time = some_value
            sess.commit()

This time, no errors; the TaskResults appeared in the database with each commit in the do_task function. Does anyone have an insight into this?

Thanks,
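The "each thread creates its own session" discipline described above can be sketched with a stdlib thread-local registry (SQLAlchemy packages this very pattern as scoped_session; the Session class here is a hypothetical stand-in). Note that eagerload helping is consistent with this picture: without it, `task_schedule.task` lazy-loads from the dispatcher's session while worker threads are running.

```python
import threading

class Session:
    """Hypothetical stand-in for a SQLAlchemy Session."""
    pass

_registry = threading.local()

def get_session():
    # Each thread lazily creates and caches its own Session; two
    # threads never observe the same object.
    if not hasattr(_registry, "session"):
        _registry.session = Session()
    return _registry.session

sessions = []

def worker():
    sessions.append(get_session())

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Three threads produced three distinct sessions; within one thread,
# get_session() always returns the same object.
print(len({id(s) for s in sessions}))  # 3
```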
[sqlalchemy] declarative style many to many, possible fix
Is there a way to declaratively create many-to-many relationships where the 'secondary' parameter for the relationship is deferred? I couldn't get this to work, e.g.:

    class User(DeclarativeBase):
        id = Column(Integer, primary_key=True)
        name = Column(String(20))
        groups = relation(Group,
                          primaryjoin=(User.id == GroupMember.user_id),
                          secondaryjoin=(GroupMember.group_id == Group.id),
                          secondary=GroupMember)

(the other classes are defined later). I was able to get around this with the following patch.

    --- a/sqlalchemy0.5/lib/sqlalchemy/orm/properties.py    Mon Aug 31 22:37:21 2009 -0700
    +++ b/sqlalchemy0.5/lib/sqlalchemy/orm/properties.py    Tue Sep 01 22:11:07 2009 -0700
    @@ -736,7 +745,11 @@
             # accept callables for other attributes which may require deferred initialization
             for attr in ('order_by', 'primaryjoin', 'secondaryjoin', 'secondary', '_foreign_keys', 'remote_side'):
                 if util.callable(getattr(self, attr)):
    -                setattr(self, attr, getattr(self, attr)())
    +                called_value = getattr(self, attr)()
    +                # the 'secondary' param requires a table, not a declarative class...
    +                if attr == 'secondary' and hasattr(called_value, '__mapper__'):
    +                    called_value = called_value.__mapper__.mapped_table
    +                setattr(self, attr, called_value)
             # in the case that InstrumentedAttributes were used to construct
             # primaryjoin or secondaryjoin, remove the _orm_adapt annotation so these

    - Jae
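For what it's worth, a patch-free route to the same deferral exists in declarative: pass the relationship arguments as strings, which are only evaluated when mappers are configured, and name the secondary by its *table* name rather than the mapped class. The sketch below uses the modern relationship() spelling (this works on SQLAlchemy 1.x/2.x; I'm not certain 0.5 accepts a string for `secondary`), with class names mirroring the post:

```python
import sqlalchemy as sa
from sqlalchemy import orm

Base = orm.declarative_base()

class User(Base):
    __tablename__ = "user"
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String(20))
    # All forward references are strings: 'secondary' names the TABLE
    # ("group_member", not the GroupMember class), and the join
    # conditions are Python expressions evaluated at configure time,
    # after all classes below exist.
    groups = orm.relationship(
        "Group",
        secondary="group_member",
        primaryjoin="User.id == GroupMember.user_id",
        secondaryjoin="GroupMember.group_id == Group.id",
        viewonly=True,
    )

class Group(Base):
    __tablename__ = "group"
    id = sa.Column(sa.Integer, primary_key=True)

class GroupMember(Base):
    __tablename__ = "group_member"
    user_id = sa.Column(sa.ForeignKey("user.id"), primary_key=True)
    group_id = sa.Column(sa.ForeignKey("group.id"), primary_key=True)

# Forces the deferred string arguments to resolve.
orm.configure_mappers()
```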