Re: [sqlalchemy] Refactoring events / triggers into Mixins
It was an error caused by my hardcoded DDL statement not matching the resulting Class.__tablename__, so the error came from the database; no issue with SQLAlchemy at all. The resulting Mixin does what I expect. Thanks again for your incredible dedication on this listserv and with open source. - Luke

--
SQLAlchemy - The Python SQL Toolkit and Object Relational Mapper
http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable Example. See http://stackoverflow.com/help/mcve for a full description.
---
You received this message because you are subscribed to the Google Groups "sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.
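For anyone landing on this thread later: the piece that removes the hardcoded name is SQLAlchemy's DDL() token substitution (%(table)s, %(fullname)s, %(schema)s). A minimal sketch, with illustrative table and view names not from the thread, using a SQLite view instead of a Postgres trigger so it runs anywhere:

```python
# DDL() substitutes %(table)s with the actual table name when the
# "after_create" event fires, so nothing needs to be hardcoded.
from sqlalchemy import Column, Integer, MetaData, Table, create_engine, event
from sqlalchemy.schema import DDL

metadata = MetaData()
notes = Table("notes", metadata, Column("id", Integer, primary_key=True))

# A stand-in for the trigger DDL: create a view named after the table.
view_ddl = DDL("CREATE VIEW %(table)s_v AS SELECT id FROM %(table)s")
event.listen(notes, "after_create", view_ddl)

engine = create_engine("sqlite://")
metadata.create_all(engine)

with engine.connect() as conn:
    # The view exists under the substituted name "notes_v".
    rows = conn.exec_driver_sql("SELECT id FROM notes_v").fetchall()
```

The same %(table)s token works in a trigger statement guarded with `execute_if(dialect="postgresql")`, which is what resolves the hardcoded-name mismatch described above.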
Re: [sqlalchemy] Refactoring events / triggers into Mixins
It's as simple as cls.__name__, isn't it? Thanks again. - Luke

PS: Some people on this list may find my (very alpha) pytest-pgtap library helpful: https://github.com/lmergner/pytest-pgtap
Re: [sqlalchemy] Refactoring events / triggers into Mixins
> ignore my previous email, this event handler should be failing; there's no "after_create" event for a mapped class. You'd want to put this on the Table for which the class is mapped. You don't need to use __declare_last__, as the Table can take these within the arguments:
>
>     class MyClass:
>         @declared_attr
>         def __table_args__(cls):
>             return (Index(...), {"listeners": [("after_create", my_create_event)]})

That seems to be working. I've got a new error that tells me I need to insert the __tablename__ of the subclass into my DDL statement. Does SQLAlchemy have a way to do this? - Luke
[sqlalchemy] Refactoring events / triggers into Mixins
Michael and others: My goal is to have an ORM Mixin that implements everything for PostgreSQL tsvector full-text search, so that I can import it as a single object into a new project. I haven't been able to get an event.listen( ... trigger DDL ... ) call to work as part of a Mixin. I don't really understand the docs on __declare_last__, so I'm working from some Stack Overflow answers. The code creates the expected Index and Trigger if the statements are defined outside of the class.

```
class Searchable:
    """
    An SQLAlchemy ORM Mixin that creates Postgres TSVECTOR columns

    :example:

    >>> class Text(Searchable, Base):
    >>>     pass
    """
    # Original inspiration:
    # http://shisaa.jp/postset/postgresql-full-text-search-part-1.html

    # __abstract__ = True

    _trigger_ddl = DDL(
        "create trigger ts_update before insert or update on text for "
        "each row execute procedure tsvector_update_trigger(tsvector, "
        "'pg_catalog.english', 'text');"
    )

    # Data Fields
    text = Column(String)

    # PostgreSQL Full Text Search field
    # http://www.postgresql.org/docs/current/static/datatype-textsearch.html
    tsvector = Column(postgresql.TSVECTOR)

    @declared_attr
    def __table_args__(cls):
        # CREATE INDEX tsvector_idx ON tsvector USING gin(to_tsvector('english', message));
        # Note: __table_args__ should return a tuple
        return (Index(
            "tsvector_idx_%s" % cls.__tablename__,
            "tsvector",
            postgresql_using="gin",
        ),)

    @classmethod
    def __declare_last__(cls):
        event.listen(
            cls,
            "after_create",
            cls._trigger_ddl.execute_if(dialect="postgresql"),
        )
```

When __abstract__ = True, the subclass is unmapped and no DDL is emitted on create_all(). I did not expect that. Is it possible to have the Mixin execute the event.listen or otherwise create the trigger? Thanks, Luke
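Pulling the follow-ups together, a hedged sketch of where the mixin can end up: the listener goes on cls.__table__ (not the mapped class), and %(table)s keeps the trigger DDL generic. This assumes SQLAlchemy 1.4+ and has not been verified against a live PostgreSQL server; the trigger wording follows the post.

```python
# Hedged sketch: per-subclass trigger DDL attached in __declare_last__
# against cls.__table__, with %(table)s substituted per subclass.
from sqlalchemy import Column, Integer, String, event
from sqlalchemy.dialects import postgresql
from sqlalchemy.orm import declarative_base, declared_attr
from sqlalchemy.schema import DDL, Index

Base = declarative_base()


class Searchable:
    text = Column(String)
    tsvector = Column(postgresql.TSVECTOR)

    @declared_attr
    def __table_args__(cls):
        # One GIN index per concrete subclass, named after its table.
        return (
            Index(
                "tsvector_idx_%s" % cls.__tablename__,
                "tsvector",
                postgresql_using="gin",
            ),
        )

    @classmethod
    def __declare_last__(cls):
        # %(table)s is filled in per subclass, so nothing is hardcoded.
        trigger = DDL(
            "CREATE TRIGGER ts_update BEFORE INSERT OR UPDATE ON %(table)s "
            "FOR EACH ROW EXECUTE PROCEDURE "
            "tsvector_update_trigger(tsvector, 'pg_catalog.english', 'text')"
        )
        event.listen(
            cls.__table__,
            "after_create",
            trigger.execute_if(dialect="postgresql"),
        )


class Text(Searchable, Base):
    __tablename__ = "text"
    id = Column(Integer, primary_key=True)
```

The DDL only fires under the postgresql dialect, so other backends can still run create_all() without the trigger.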
[sqlalchemy] polymorphic inheritance where an engineer is also a manager
Michael, et al: The excellent documentation (http://docs.sqlalchemy.org/en/latest/orm/inheritance.html) uses the example of an Employee from which Manager and Engineer inherit. What to do when a Manager is also an Engineer? My thought is that I want some kind of Adjacency relationship, as long as all Managers are also Engineers. Is there a simpler way to accomplish this goal, or am I thinking about the problem incorrectly? Thanks in advance, Luke Mergner Glendale, CA
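No reply is archived here, but one straightforward reading of "all Managers are also Engineers" is a three-level joined-table hierarchy, with Manager inheriting from Engineer rather than from Employee directly. A sketch under that assumption (column names are illustrative):

```python
# Three-level joined-table inheritance: a Manager row spans the
# employee, engineer, and manager tables, so every Manager IS an
# Engineer both in Python and in SQL.
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class Employee(Base):
    __tablename__ = "employee"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    type = Column(String)
    __mapper_args__ = {"polymorphic_identity": "employee", "polymorphic_on": type}


class Engineer(Employee):
    __tablename__ = "engineer"
    id = Column(ForeignKey("employee.id"), primary_key=True)
    language = Column(String)
    __mapper_args__ = {"polymorphic_identity": "engineer"}


class Manager(Engineer):
    __tablename__ = "manager"
    id = Column(ForeignKey("engineer.id"), primary_key=True)
    reports = Column(Integer)
    __mapper_args__ = {"polymorphic_identity": "manager"}


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
session.add(Manager(name="Ada", language="Python", reports=3))
session.commit()

# A Manager comes back from an Engineer query, as required.
mgr = session.query(Engineer).filter_by(name="Ada").one()
```

This only fits if the "is-also" relationship is fixed at class-definition time; if an Engineer can be promoted to Manager at runtime, a separate role association is usually the better model.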
[sqlalchemy] Adjacency with Inherited Classes
Hi, I'm looking for specific help with building a self-referential relationship (adjacency?) inside of inherited ORM classes. I'm also looking for a sanity check on the basic design of the classes, since I'm not an expert in SQL. This is in the context of a Flask-SQLAlchemy app resembling a magazine or journal. Here is the logic I'd like to express, briefly: There can be many kinds of Writing. Articles are writing that are associated with Issues (not shown here). Responses are writing that are linked to other writing, somewhat like a conversation or exchange.

```
class Writing(Base):
    # I could have all sorts of different genres
    id = Column(Integer, primary_key=True)
    # title
    # text
    # authors = many to many relation table
    type = Column(String)

    __mapper_args__ = {
        'polymorphic_identity': 'writing',
        'polymorphic_on': type
    }

class Article(Writing):
    id = Column(ForeignKey('writing.id'), primary_key=True)
    # inherits some stuff from Writing
    # Also defined here are columns relating to an Issue, resembling a
    # table of contents
    # issue_id... Each article is associated with an Issue obj
    # position... Each article has a position in the issue

    __mapper_args__ = {
        'polymorphic_identity': 'article',
    }

class Response(Writing):
    # All responses are associated with Writing. Writing/Articles may have
    # several responses
    # __tablename__ = 'response'
    id = Column('id', ForeignKey('writing.id'), primary_key=True)
    respondee_id = Column('respondee', ForeignKey('article.id'), nullable=False)
    # So, No, I don't know what I'm doing here...
    respondee = relationship(
        'Response', backref='respondant',
        primaryjoin=('Writing.respondant' == 'Response.respondee'))

    __mapper_args__ = {
        'polymorphic_identity': 'response',
    }
```

And my exception:

    sqlalchemy.exc.AmbiguousForeignKeysError: Can't determine join between 'writing' and 'response'; tables have more than one foreign key constraint relationship between them. Please specify the 'onclause' of this join explicitly.

Thank you for help and any feedback.
I can throw a more detailed example on gist if I need to. I am hoping the answer is obvious to more experienced eyes. Luke Thomas Mergner Glendale, CA @lmergner on twitter
Re: [sqlalchemy] Adjacency with Inherited Classes
On Monday, February 24, 2014 12:55:16 PM UTC-8, Michael Bayer wrote:

On Feb 24, 2014, at 2:41 PM, Luke <lmer...@gmail.com> wrote:

>     class Response(Writing):
>         # All responses are associated with Writing. Writing/Articles may have
>         # several responses
>         # __tablename__ = 'response'
>         id = Column('id', ForeignKey('writing.id'), primary_key=True)
>         respondee_id = Column('respondee', ForeignKey('article.id'), nullable=False)
>         # So, No, I don't know what I'm doing here...
>         respondee = relationship(
>             'Response', backref='respondant',
>             primaryjoin=('Writing.respondant' == 'Response.respondee'))
>
>         __mapper_args__ = {
>             'polymorphic_identity': 'response',
>         }

1. For the syntax on primaryjoin as a string, it has to be all one string:

    primaryjoin = "Writing.id == Response.respondee_id"

see the examples at:
http://docs.sqlalchemy.org/en/rel_0_9/orm/relationships.html#relationship-primaryjoin
http://docs.sqlalchemy.org/en/rel_0_9/orm/extensions/declarative.html#configuring-relationships

2. The primaryjoin is between columns, which are accessed using the attribute name given, such as Writing.id, not the name of a relationship (respondent).

3. The relationship joins two tables between primary key columns and foreign key columns. In this case, you're saying you'd like "response.respondee" to refer to "article.id", so you'd be setting up the primaryjoin in that way.....but

4. ...the mapping doesn't seem to make sense in that regard: you have Response linked to Response, and not Article.
If Article is just one kind of Writing, and Response belongs to any kind of Writing, it would be:

```
class Response(Writing):
    __tablename__ = 'response'

    id = Column('id', ForeignKey('writing.id'), primary_key=True)
    respondee_id = Column(ForeignKey('writing.id'), nullable=False)

    writing = relationship(Writing, remote_side=Writing.id,
                           backref="responses",
                           primaryjoin=Writing.id == respondee_id)

    __mapper_args__ = {'polymorphic_identity': 'response',
                       'inherit_condition': id == Writing.id}
```

inherit_condition is needed as "response" now has two ways to link directly to "writing".

5. On the topic of "not an expert in SQL": I will say that this kind of design, where "everything inherits from joined-table X", is extremely common for people I see moving from a pure OO background to beginning to mix SQL into their modeling. It's not "wrong" per se and can be a useful design, but in my experience it is overused, and it also leads to queries that are very complicated on the render side and often not very performant. You should consider very carefully how many columns will actually be on "Writing", as well as how many kinds of "sub-tables" will really be needed to import these columns from "writing"; additionally, you should consider how much of a need you will have to query across all kinds of "Writing" objects at the same time. In practice, this use case is rare; your app will either need to see lists of Articles, or it will need to see the list of Responses given a parent Writing. If Writing has only a handful of columns, it would be much more efficient and simple, from both a storage and a querying perspective, to just have a "response" table, an "article" table, and then a few columns on both that just happen to be the same. From the object modeling side, you can keep Response and Article as subclasses of Writing; however, you can have Writing itself be a non-mapped mixin or __abstract__ class.
The mapping becomes extremely simple and from a SQL perspective much more succinct and performant:

```
class Writing(Base):
    __abstract__ = True
    id = Column(Integer, primary_key=True)  # gets copied out to subclasses

class Article(Writing):
    __tablename__ = 'article'

class Response(Writing):
    __tablename__ = 'response'
    respondee_id = Column(ForeignKey('article.id'), nullable=False)
    writing = relationship(Article, backref="responses")
```

Michael, Thank you for the care you put into your support here and on Stack Overflow. I will consider your advice carefully. I had moved to this more complex, inherited design after implementing simple, independent tables (as you suggested). And you have confirmed that it's probably not worth the trouble. I am concerned that the more complex my mapped classes are, the more complex, and thus slow, my queries will be. I'm not even to the point where I have thought about tuning queries or caching. I am not a trained programmer and I have probably bitten off more than I can chew. But, in any case, that's not your concern. Thank you again for the help. Luke
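The simplified __abstract__ design from the reply runs as-is once a primary key and some columns are filled in. A self-contained sketch (the title column and sample data are illustrative, assuming SQLAlchemy 1.4+):

```python
# Writing is an __abstract__ base: its columns are copied into each
# concrete table, and no "writing" table or join is ever created.
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()


class Writing(Base):
    __abstract__ = True
    id = Column(Integer, primary_key=True)  # copied out to subclasses
    title = Column(String)                  # illustrative shared column


class Article(Writing):
    __tablename__ = "article"


class Response(Writing):
    __tablename__ = "response"
    respondee_id = Column(ForeignKey("article.id"), nullable=False)
    writing = relationship(Article, backref="responses")


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
article = Article(title="On Mixins")
session.add(article)
session.add(Response(title="A reply", writing=article))
session.commit()

responses = session.query(Article).filter_by(title="On Mixins").one().responses
```

Each query touches exactly one table (plus the plain FK join for responses), which is the performance point being made above.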
Re: [sqlalchemy] Adjacency with Inherited Classes
Michael, I am continuing to get the same AmbiguousForeignKeysError with your first suggestion. Your simpler
[sqlalchemy] How to use classes derived from declarative_base for default columns
Hey, how may I extend the declarative base to provide default things that are always available to any class/table derived from it (like primary keys, created-timestamp columns, etc.)? I tried something like this:

```
import datetime
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, DateTime, Integer, String

Base = declarative_base()

class NewBase(Base):
    __tablename__ = 'default'
    id = Column(Integer, primary_key=True)
    created = Column(DateTime, nullable=False, default=datetime.datetime.now)

class NewCustomTable(NewBase):
    __tablename__ = 'NewCustomTable'
    awesome = Column(String)
```

but I'm getting either sqlalchemy.exc.NoForeignKeysError or sqlalchemy.exc.InvalidRequestError, depending on whether the __tablename__ attribute is declared or not.

With __tablename__:

    sqlalchemy.exc.NoForeignKeysError: Can't find any foreign key relationships between 'default' and 'NewCustomTable'.

Without:

    sqlalchemy.exc.InvalidRequestError: Class <class 'NewBase'> does not have a __table__ or __tablename__ specified and does not inherit from an existing table-mapped class.

Any recommendation about this?
Re: [sqlalchemy] How to use classes derived from declarative_base for default columns
Hey Werner, thanks for the hint. I got the correct answer via IRC now. I'm using http://docs.sqlalchemy.org/en/rel_0_8/orm/extensions/declarative.html#mixin-and-custom-base-classes to achieve my goal. Luke
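For reference, the mixin approach from that documentation section looks roughly like this; class and column names follow the question, and SQLAlchemy 1.4+ import paths are assumed:

```python
# Default columns live on a non-mapped mixin, not on a mapped "default"
# table, so no inheritance relationship (and no foreign key) is implied.
import datetime

from sqlalchemy import Column, DateTime, Integer, String, create_engine
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class DefaultsMixin:
    # Copied into every table that includes the mixin.
    id = Column(Integer, primary_key=True)
    created = Column(DateTime, nullable=False, default=datetime.datetime.now)


class NewCustomTable(DefaultsMixin, Base):
    __tablename__ = "new_custom_table"
    awesome = Column(String)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
```

The same columns can instead be put on a custom base passed as `declarative_base(cls=...)`; the mixin form composes more freely when only some tables need the defaults.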
[sqlalchemy] Re: ORM Many to Many Across Two Databases
My apologies. I must have misunderstood the behavior of the schema option, and thus its intent. How do I configure a relation across two logical databases using schema? I have tried the following:

```
engine = create_engine(DATABASEURI, echo=True)
meta = MetaData()
meta.bind = engine

left_schema = 'LeftDatabaseName'
right_schema = 'RightDatabaseName'

left_table = Table('LeftTable', meta,
                   Column('id', Integer, primary_key=True),
                   Column('description', String(128)),
                   schema=left_schema)

right_table = Table('RightTable', meta,
                    Column('id', Integer, primary_key=True),
                    Column('description', String(128)),
                    schema=right_schema)

assoc_table = Table('LeftAssoc', meta,
                    Column('left_id', Integer),
                    Column('right_id', Integer),
                    # quote_schema=False,
                    schema=left_schema)

MySession = sessionmaker(bind=engine)

class MyBase(object):
    def __init__(self, description):
        self.description = description

    def __str__(self):
        return str(self.description)

class Left(MyBase):
    pass

class Right(MyBase):
    pass

mapper(Left, left_table)
mapper(Right, right_table, properties={
    'lefts': relation(Left, secondary=assoc_table,
                      primaryjoin=(right_table.c.id == assoc_table.c.right_id),
                      secondaryjoin=(assoc_table.c.left_id == left_table.c.id),
                      foreign_keys=[assoc_table.c.left_id, assoc_table.c.right_id],
                      backref='rights'),
})

if __name__ == '__main__':
    meta.drop_all()
    meta.create_all()

    session = MySession()
    left1 = Left('Left 1')
    left2 = Left('Left 2')
    right1 = Right('Right 1')
    right2 = Right('Right 2')
    left1.rights.extend([right1, right2])
    right1.lefts.extend([left1, left2])
    session.add_all([left1, left2, right1, right2])
    session.commit()

    left1 = session.query(Left).filter_by(description='Left 1').one()
    print left1
    for right in left1.rights:
        print ' ' * 4, right
        for left in right.lefts:
            print ' ' * 8, left
```

The table name in the generated SQL is DBNAME.TABLENAME, which doesn't work. It needs to be DBNAME.SCHEMANAME.TABLENAME or DBNAME..TABLENAME (uses the default schema). I tried using quote_schema=False and adding a "."
to the end of the schema value (schema='DBNAME.'), but this results in the broken SQL mentioned earlier:

    SELECT [LeftTable_1].description AS [DBNAME._LeftTab_1]

Using schema='DBNAME.SCHEMANAME' didn't work either. Please let me know how to do this correctly. I am using: SQL Server 8.0, Hardy Heron, Python 2.5, SQLAlchemy 0.5.6, pymssql 1.0.2. Thanks, again! - Luke

On Wed, Oct 14, 2009 at 6:46 PM, Michael Bayer <mike...@zzzcomputing.com> wrote:

> Luke Arno wrote:
> > So, is there any chance that relations across multiple _logical_ databases on a _single_ physical server will be supported by SQLAlchemy in the future?
>
> that is supported now, via the schema argument to Table, assuming you are connecting with an account that has access to all logical databases. relation() supports spanning physical databases as well, if you either ensure the secondary table is on the same server as the target, or alternatively map the association table explicitly as I said in my initial response.

Thanks. - Luke

On Wed, Oct 14, 2009 at 3:01 PM, Luke Arno <luke.a...@gmail.com> wrote:

> Thanks, Simon. Good suggestion, but these are databases, not schemas within a database. If I use schema='MyDatabase.' (notice the '.') and quote_schema=False, the table names come out like I want, but look at what happens to the label here:
>
>     SELECT [LeftTable_1].description AS [MyDatabase._LeftTab_1]
>     FROM MyDatabase..[LeftTable] AS [LeftTable_1]
>     WHERE [LeftTable_1].id = %(param_1)s
>
> That is really abusing the schema feature, so this is not a bug... I just wish there were a databasename='Foo' option for Table() so I could use these ugly databases the way they are. :(

On Wed, Oct 14, 2009 at 11:51 AM, King Simon-NFHD78 <simon.k...@motorola.com> wrote:

> > It looks like if I put the relation on the obj mapped to the DB where the association table is _not_, it works in one direction. (So, in the example, Right.lefts can work but Left.rights cannot.) When trying to use Left.rights, it looks for the table in the wrong database. It appears that it would be fine if I could just get the table names all qualified with database name in the issued SQL. Is there a way to make that happen, by any chance?
>
> You can do this by using a single engine and metadata, and passing a 'schema' parameter when defining your tables:
> http://www.sqlalchemy.org/docs/05/metadata.html#specifying-the-schema-name
>
> Hope that helps, Simon
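A hedged sketch of the schema= approach Michael describes. Since the thread's SQL Server setup isn't reproducible here, SQLite's ATTACH stands in for a second logical database on the same server; on MSSQL the schema value would name the other database (e.g. "OtherDb.dbo"). All table and schema names are illustrative:

```python
# One engine, one MetaData; the second "database" is addressed purely
# through the schema= argument, so joins render schema-qualified names.
from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine, select

engine = create_engine("sqlite://")
with engine.connect() as conn:
    # Stand-in for a second logical database on the same server.
    conn.exec_driver_sql("ATTACH ':memory:' AS rightdb")

    meta = MetaData()
    left = Table("left_table", meta,
                 Column("id", Integer, primary_key=True),
                 Column("description", String(128)))
    right = Table("right_table", meta,
                  Column("id", Integer, primary_key=True),
                  Column("description", String(128)),
                  schema="rightdb")
    meta.create_all(conn)

    conn.execute(left.insert().values(id=1, description="Left 1"))
    conn.execute(right.insert().values(id=1, description="Right 1"))

    # The join crosses the two "databases" transparently.
    rows = conn.execute(
        select(left.c.description, right.c.description)
        .where(left.c.id == right.c.id)
    ).fetchall()
```

The same shape extends to a many-to-many: put the association Table in whichever schema holds it and pass it as `secondary=`.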
[sqlalchemy] Re: ORM Many to Many Across Two Databases
Thanks, Simon. Good suggestion, but these are databases, not schemas within a database. If I use schema='MyDatabase.' (notice the '.') and quote_schema=False, the table names come out like I want, but look at what happens to the label here:

    SELECT [LeftTable_1].description AS [MyDatabase._LeftTab_1]
    FROM MyDatabase..[LeftTable] AS [LeftTable_1]
    WHERE [LeftTable_1].id = %(param_1)s

That is really abusing the schema feature, so this is not a bug... I just wish there were a databasename='Foo' option for Table() so I could use these ugly databases the way they are. :(

On Wed, Oct 14, 2009 at 11:51 AM, King Simon-NFHD78 <simon.k...@motorola.com> wrote:

> > It looks like if I put the relation on the obj mapped to the DB where the association table is _not_, it works in one direction. (So, in the example, Right.lefts can work but Left.rights cannot.) When trying to use Left.rights, it looks for the table in the wrong database. It appears that it would be fine if I could just get the table names all qualified with database name in the issued SQL. Is there a way to make that happen, by any chance?
>
> You can do this by using a single engine and metadata, and passing a 'schema' parameter when defining your tables:
> http://www.sqlalchemy.org/docs/05/metadata.html#specifying-the-schema-name
>
> Hope that helps, Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups "sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---
[sqlalchemy] Re: ORM Many to Many Across Two Databases
So, is there any chance that relations across multiple _logical_ databases on a _single_ physical server will be supported by SQLAlchemy in the future? As I mentioned before, this could work (at least for MSSQL) if we only had a databasename='foo' option on Table(). I am not really sure how much work it would be, but it _sounds_ easy. :) Thanks. - Luke

On Wed, Oct 14, 2009 at 3:01 PM, Luke Arno <luke.a...@gmail.com> wrote:

> Thanks, Simon. Good suggestion, but these are databases, not schemas within a database. If I use schema='MyDatabase.' (notice the '.') and quote_schema=False, the table names come out like I want, but look at what happens to the label here:
>
>     SELECT [LeftTable_1].description AS [MyDatabase._LeftTab_1]
>     FROM MyDatabase..[LeftTable] AS [LeftTable_1]
>     WHERE [LeftTable_1].id = %(param_1)s
>
> That is really abusing the schema feature, so this is not a bug... I just wish there were a databasename='Foo' option for Table() so I could use these ugly databases the way they are. :(

On Wed, Oct 14, 2009 at 11:51 AM, King Simon-NFHD78 <simon.k...@motorola.com> wrote:

> > It looks like if I put the relation on the obj mapped to the DB where the association table is _not_, it works in one direction. (So, in the example, Right.lefts can work but Left.rights cannot.) When trying to use Left.rights, it looks for the table in the wrong database. It appears that it would be fine if I could just get the table names all qualified with database name in the issued SQL. Is there a way to make that happen, by any chance?
>
> You can do this by using a single engine and metadata, and passing a 'schema' parameter when defining your tables:
> http://www.sqlalchemy.org/docs/05/metadata.html#specifying-the-schema-name
>
> Hope that helps, Simon
[sqlalchemy] ORM Many to Many Across Two Databases
I have an MSSQL server with two logical databases. (I inherited this situation, of course.) There is a table in each database and an association table in one of them. What is the right way to configure this? Here is what I have; it complains about the values in foreign_keys. I've tried a lot of permutations and can't seem to hit on the right one. Thanks much!

```
left_engine = create_engine(SERVER_A_DB_ONE)
left_meta = MetaData()
left_meta.bind = left_engine

right_engine = create_engine(SERVER_A_DB_TWO)
right_meta = MetaData()
right_meta.bind = right_engine

left_table = Table('LeftTable', left_meta,
                   Column('id', Integer, primary_key=True),
                   Column('description', String(128)))

right_table = Table('RightTable', right_meta,
                    Column('id', Integer, primary_key=True),
                    Column('description', String(128)))

assoc_table = Table('LeftAssoc', left_meta,
                    Column('left_id', Integer),
                    Column('right_id', Integer))

MySession = sessionmaker(binds={
    left_table: left_engine,
    right_table: right_engine,
    assoc_table: left_engine
})

class Left(object):
    pass

class Right(object):
    pass

mapper(Left, left_table)
mapper(Right, right_table, properties={
    'lefts': relation(Left, secondary=assoc_table,
                      primaryjoin=right_table.c.id == assoc_table.c.right_id,
                      secondaryjoin=assoc_table.c.left_id == left_table.c.id,
                      foreign_keys=[right_table.c.id, left_table.c.id],
                      backref='rights'),
})
```

- Luke
[sqlalchemy] dynamic creation of tables
As long as we're on the topic of dynamic creation of where clauses ... maybe this is helpful? I have been working on a script that involves creating a table from a dynamic set of requirements, and got it to work using eval(). This probably isn't best practice, but hey ... it makes me my tables from a list of fields with some details about each field (the max string length, whether it is unique or nullable):

    rollingfields = "Table('%s', meta" % filename
    for row in self.fields:
        rollingfields += ", Column('" + row['fieldname'] + "', String(" + str(row['fieldlength']) + ")"
        if row['unique'] == True:
            rollingfields += ", unique=True"
        if row['notnull'] == True:
            rollingfields += ", nullable=False"
        rollingfields += ")"
    rollingfields += ")"
    eval(rollingfields)
    meta.create_all(execengine)

- Luke Peterson
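For the archives: the same thing can be done without string building or eval() at all, since Table() accepts Column objects built in a plain loop. A sketch under the same assumed field-description format (the `build_table` helper and the sample field dicts are illustrative, not from the original post):

```python
from sqlalchemy import Column, MetaData, String, Table, create_engine

def build_table(name, fields, metadata):
    """Build a Table from a list of field-description dicts,
    with no string concatenation or eval() involved."""
    columns = []
    for row in fields:
        columns.append(
            Column(
                row["fieldname"],
                String(row["fieldlength"]),
                unique=row.get("unique", False),
                nullable=not row.get("notnull", False),
            )
        )
    # *columns unpacks the list as positional Column arguments
    return Table(name, metadata, *columns)

metadata = MetaData()
table = build_table(
    "people",
    [{"fieldname": "name", "fieldlength": 64, "unique": True, "notnull": True}],
    metadata,
)

engine = create_engine("sqlite://")  # in-memory stand-in for the real engine
metadata.create_all(engine)
```

Building Column objects directly also sidesteps the quoting and injection hazards that come with eval()-ing assembled source text.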
[sqlalchemy] Re: generate_series?
On Apr 25, 8:46 am, Michael Bayer [EMAIL PROTECTED] wrote:
> On Apr 25, 2008, at 9:04 AM, Luke Iannini wrote:
> > Hi all, Is there a way to use generate_series with SQLAlchemy?
> > http://www.postgresql.org/docs/8.2/interactive/functions-srf.html
> >
> >     generate_series = select([column('i')],
> >         from_obj=[func.generate_series(bindparam('start'), bindparam('end'))])
> >     zrows = select([generate_series.params(start=1, end=20).c.i]).label('series')
> >
> > is the best stab I've made at it, but I don't actually understand the
> > syntax in the PGSQL docs:
> >
> >     select current_date + s.a as dates from generate_series(0,14,7) as s(a);
>
> that particular syntax is an aliasing syntax that we plan on supporting
> in the near future, so the above would look possibly like
>
>     s = func.generate_series(4, 5, 6).alias(cols=['a'])
>     select([func.current_date() + s.c.a])
>
> you can hardwire this stuff using text right now:
>
>     select([(func.current_date() + literal_column("s.a")).label("dates")
>         ]).select_from(text("generate_series(0, 14, 7) as s(a)"))

Thanks so much Michael! That worked perfectly.

> I just noticed that the text() construct, which would allow the
> bindparams to happen, is not being accepted into select_from(), so I've
> added ticket #1014 for that.

> > As a side question, how would I add static columns to a select
> > statement? e.g. based on the pseudocode above:
> >
> >     zrows = select([generate_series.params(start=1, end=20).c.i], 0, 0, 0).label('series')
> >
> > to add 3 columns of 0 value to each row generated.
>
> you should be able to put plain strings in the columns clause:
>
>     select(["foo", "bar", "bat"])
>
> this is shorthand for using the literal_column() construct, which you
> can use anywhere in a column expression to produce textual SQL
> expressions.

Got it. Thanks again.

Cheers
Luke
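The aliasing support Michael describes as planned did eventually land: SQLAlchemy 1.4 and later can express `generate_series(...) AS s(a)` directly through `FunctionElement.table_valued()`, with no text() workaround. A sketch assuming a current SQLAlchemy install:

```python
from sqlalchemy import func, select

# generate_series(0, 14, 7) AS anon(a): table_valued() names the derived column
s = func.generate_series(0, 14, 7).table_valued("a")

# roughly: SELECT current_date + s.a FROM generate_series(0, 14, 7) AS s(a)
stmt = select(func.current_date() + s.c.a)
```

Because the function is a real FROM element here, it also composes with joins and unions, avoiding the `oid_column` trouble reported further down the thread with `_TextFromClause`.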
[sqlalchemy] Re: generate_series?
Hm, yes, so is:

    Traceback (most recent call last):
      File "/Users/LukeIannini/Checkout/trunk/adpinion_web/HistoryAlchemy.py", line 46, in <module>
        allstats = union_all(stats, zeros)
      File "/Library/Python/2.5/site-packages/SQLAlchemy-0.4.1-py2.5.egg/sqlalchemy/sql/expression.py", line 498, in union_all
        return _compound_select('UNION ALL', *selects, **kwargs)
      File "/Library/Python/2.5/site-packages/SQLAlchemy-0.4.1-py2.5.egg/sqlalchemy/sql/expression.py", line 780, in _compound_select
        return CompoundSelect(keyword, *selects, **kwargs)
      File "/Library/Python/2.5/site-packages/SQLAlchemy-0.4.1-py2.5.egg/sqlalchemy/sql/expression.py", line 2895, in __init__
        self.oid_column = self._proxy_column(s.oid_column)
      File "/Library/Python/2.5/site-packages/SQLAlchemy-0.4.1-py2.5.egg/sqlalchemy/sql/expression.py", line 3323, in oid_column
        oid = f.oid_column
    AttributeError: '_TextFromClause' object has no attribute 'oid_column'

when trying to union_all the result of a (on its own, seemingly working) generate_series select a manifestation of the bug you mentioned, or am I doing something else wrong?

Cheers
Luke
[sqlalchemy] Learning SQLAlchemy - some questions on mapped selects and set-based update/delete in sessions
different...

    session.update(Data, ..., values=dict(is_live=False))
    # ...and I can't construct the restriction without referring to objects
    # on the wrong side of the mapper anyway
    session.execute(update(Data, ...))
    # ...and update objects don't have filter_by, and Data objects don't have .update

So if I give up criterion b. above (a sizeable blow), this works:

    session.execute(data_table.update(data_table.c.is_live==False, values=dict(is_live=True)))

However, suppose I say "ok, the free-to-change-schema primary goal just means I have to leave the table object names alone": if data is no longer a base table but a unioned select, and I still call it data_table, then the above code stops working (Alias objects have no attribute 'update'), and even this fails:

    # data_query name still used for clarity
    session.execute(update(data_query, data_query.c.is_live==False, values=dict(is_live=True)))

because it still tries to update the data table (the alias name) rather than deal with the select (unlike the attempted intelligent update that occurs with session.save()/flush() as in 1. above).

---

Ok, enough for tonight. Am I way off base? It seems that many/all of the problems for 1. must be handled internally in some fashion for the polymorphic behaviors of mapper, but I haven't found my way into that code yet. Does it seem sensible/possible to try to generalize that to relationships other than subclass/superclass, or to db models that don't have a string type column? For the iterative problem and the syntax, perhaps the conversion dict's values could be SQL expressions, although they'd pretty much have to be selects (although context is unclear), and now inserts would be different from update/delete. For 2, another consideration that might have kept this off the table so far would be managing session/uow state when updating objects that may or may not be loaded yet.
Appreciate any discussion that comes out of this,

- Luke
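The set-based UPDATE being reached for here did later become first-class in the ORM: an `update()` construct against a mapped class runs as a single criteria-driven statement through `Session.execute()`, with no per-object loading. A minimal sketch assuming SQLAlchemy 1.4+, with a hypothetical `Data` class and `is_live` flag standing in for the post's mapped select:

```python
from sqlalchemy import Boolean, Column, Integer, create_engine, update
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Data(Base):
    __tablename__ = "data"
    id = Column(Integer, primary_key=True)
    is_live = Column(Boolean, default=False)

engine = create_engine("sqlite://")  # in-memory stand-in for a real database
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([Data(), Data()])
    session.commit()

    # one UPDATE ... WHERE statement, expressed against the mapped class
    session.execute(
        update(Data).where(Data.is_live == False).values(is_live=True)
    )
    session.commit()
```

The session also synchronizes already-loaded instances with the statement's effects, which addresses the state-management concern raised at the end of the post.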
[sqlalchemy] suggestion to use deferrable foreign key constraints
Are there any plans to handle circular dependencies by using deferrable foreign key constraints when available? In my case, I had made the foreign key constraints deferred, but SQLAlchemy didn't pick up on that when I reflected the database metadata. I eliminated the circular dependency by using post_update=True, but that meant dropping a NOT NULL constraint since postgres can't defer those (sigh).
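For reference, SQLAlchemy does let you declare the deferrable behavior yourself when defining the schema, via the `deferrable` and `initially` parameters of ForeignKey; the issue in the post is that reflection didn't round-trip it. A sketch with illustrative table names:

```python
from sqlalchemy import Column, ForeignKey, Integer, MetaData, Table

metadata = MetaData()

parent = Table("parent", metadata, Column("id", Integer, primary_key=True))

child = Table(
    "child", metadata,
    Column("id", Integer, primary_key=True),
    Column(
        "parent_id",
        Integer,
        # emits "... REFERENCES parent (id) DEFERRABLE INITIALLY DEFERRED"
        # in the generated DDL, on backends that support it (e.g. PostgreSQL)
        ForeignKey("parent.id", deferrable=True, initially="DEFERRED"),
    ),
)
```

With the constraint declared deferred, circular inserts can be committed in one transaction without resorting to post_update or dropping NOT NULL.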
[sqlalchemy] Re: Cascade-Delete causes AssertionError (Tries to blank-out primary key ...)
Right, delete-orphan is what adds the lifecycle relationship between parent and child. It means that the child can't exist without a parent. That lets SA know that it should eliminate the child rather than trying to null out the relationship. You probably want "all" so that all actions performed on the parent will propagate to the child, which will remove the need to save children directly. See here also: http://www.sqlalchemy.org/docs/unitofwork.myt#unitofwork_cascade

Cheers,
Luke

On Feb 11, 1:49 pm, Nebur [EMAIL PROTECTED] wrote:
> > The minimum correction of my above code seems to be 2 lines:
> > 1. The cascade rule changed from "delete" into "all, delete-orphan"
> No, it turned out there is a still smaller change: changing the cascade rule from "delete" into "delete, delete-orphan" will work, too. The "delete-orphan" makes up the difference.
> Ruben
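Putting the whole thread together, here is a self-contained sketch of the working pattern in modern declarative style, with in-memory SQLite standing in for the original MySQL setup:

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class User(Base):
    __tablename__ = "user"
    id = Column(Integer, primary_key=True)
    # "all, delete-orphan" ties the child's lifecycle to the parent:
    # saves cascade down, and deleting the User deletes its Userdata
    userdata = relationship(
        "Userdata", uselist=False, cascade="all, delete-orphan"
    )

class Userdata(Base):
    __tablename__ = "userdata"
    user_id = Column(Integer, ForeignKey("user.id"), primary_key=True)

engine = create_engine("sqlite://")  # in-memory stand-in for the real database
Base.metadata.create_all(engine)

with Session(engine) as session:
    user = User(userdata=Userdata())
    session.add(user)     # cascades to the Userdata; no separate add needed
    session.commit()
    session.delete(user)  # cascades the delete instead of blanking the FK
    session.commit()
```

With the orphan cascade in place, the delete removes both rows rather than raising the "Dependency rule tried to blank-out primary key" assertion from the original report.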
[sqlalchemy] Re: Cascade-Delete causes AssertionError (Tries to blank-out primary key ...)
See http://www.sqlalchemy.org/docs/datamapping.myt#datamapping_relations_lifecycle

I think you want a parent-child relationship between User and Userdata. Here's how I would change it (disclaimer: I'm new to SA myself):

    class User(object): pass
    class Userdata(object): pass
    ### we'll let SQLAlchemy update the relationship automatically
    ### instead of setting it manually

    if __name__ == "__main__":
        db = create_engine("mysql://[EMAIL PROTECTED]/test_cascade")
        session = create_session()
        metadata = BoundMetaData(db)
        t_user = Table("user", metadata,
            Column("id", Integer, primary_key=True),
        )
        t_userdata = Table("userdata", metadata,
            Column("user_id", Integer, ForeignKey("user.id"), primary_key=True),
        )
        metadata.create_all()

        ### we create the relationship here instead, for clarity.
        ### uselist=False on the relation and the backref makes it one-to-one.
        ### Compare to the example in the link I gave
        mapper(User, t_user, properties={
            "userdata": relation(Userdata, uselist=False,
                                 cascade="all, delete-orphan",
                                 backref=backref("myuser", uselist=False)),
        })
        mapper(Userdata, t_userdata)

        # create 1 instance of each object:
        user1 = User()
        session.save(user1)
        session.flush()
        ### add userdata to user; the relationship is created automatically
        user1.userdata = Userdata()
        session.flush()

        # now delete the user
        session.delete(user1)
        session.flush()

Cheers,
Luke

On Feb 11, 9:44 am, Nebur [EMAIL PROTECTED] wrote:
> The example below raises an:
> sqlalchemy.exceptions.AssertionError: Dependency rule tried to blank-out primary key column 'userdata.user_id' on instance '[EMAIL PROTECTED]'
> The code creates 2 objects having a 1:1 relation with cascade-delete. The ForeignKey is declared as a primary key. This seems to cause the error.
> Versions: Python 2.4, SA 0.3.1, SA 0.3.4
>
>     class User(object): pass
>     class Userdata(object):
>         def __init__(self, user):
>             self.user_id = user.id
>
>     if __name__ == "__main__":
>         db = create_engine("mysql://[EMAIL PROTECTED]/test_cascade")
>         session = create_session()
>         metadata = BoundMetaData(db)
>         t_user = Table("user", metadata,
>             Column("id", Integer, primary_key=True),
>         )
>         t_userdata = Table("userdata", metadata,
>             Column("user_id", Integer, ForeignKey("user.id"), primary_key=True),
>         )
>         metadata.create_all()
>         mapper(User, t_user)
>         mapper(Userdata, t_userdata, properties={
>             "myuser": relation(User, backref=backref("meta", cascade="delete"))
>         })
>         # create 1 instance of each object:
>         user1 = User()
>         session.save(user1)
>         session.flush()
>         data1 = Userdata(user1)
>         session.save(data1)
>         session.flush()
>         # now delete the user, expecting the cascade to delete userdata, too:
>         session.delete(user1)
>         session.flush()  # AssertionError: Dependency rule tried to blank-out ...
>
> I can work around this error by using a separate primary key in table userdata:
>
>     t_userdata = Table("userdata", metadata,
>         Column("id", Integer, primary_key=True),
>         Column("user_id", Integer, ForeignKey("user.id")),
>     )
>
> and everything works fine. I'm wondering whether this is an SA bug, or a bad table design?
> Thanks and regards,
> Ruben