Re: [sqlalchemy] Copying Between Databases - Can I modify part of a Composite Key?
> the most idiomatic way to handle this is to merge the objects in:
>
>     obj = session.merge(existing_object)
>
> this will emit a SELECT for the existing row, then copy the state of
> existing_object to an object located for that primary key, if found.
> It ensures that the correct choice of "pending" or "persistent" is made
> depending on whether the row already exists.

Thanks for your response, Michael. It wasn't clear from my original post, but I am using merge to copy from PROD to DEV. My merge function looks something like this (simplified; in reality I'm copying multiple entities):

    session_dest.merge(entity)
    session_dest.commit()
    # large object graphs were causing me to run low on memory,
    # so I merge entities one at a time and then clear the local cache
    session_dest.expunge_all()

So, assuming DEV has a single record {acct_id: 1, env_id: 1} and I'm copying a record {acct_id: 1, env_id: 4} from PROD, SQLAlchemy incorrectly decides that this record should be INSERTed, when in fact there is a constraint (acct_id must be unique) that prevents this. The more I evaluate this, the more I think that correctly modeling the unique constraint will fix my problem. Then my before_update handler would fire and properly UPDATE the record.

Shawn

--
You received this message because you are subscribed to the Google Groups "sqlalchemy" group.
To view this discussion on the web visit https://groups.google.com/d/msg/sqlalchemy/-/9azvATgVSsoJ.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
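The copy loop described above can be sketched end to end. This is a minimal, hypothetical reconstruction, not the poster's actual code: the `Account` model, the in-memory SQLite engines standing in for PROD and DEV, and the session names are all stand-ins.

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Account(Base):  # hypothetical stand-in for the real mapped classes
    __tablename__ = 'account'
    acct_id = Column(Integer, primary_key=True)
    name = Column(String(50))

# two in-memory SQLite databases standing in for PROD and DEV
engine_src = create_engine('sqlite://')
engine_dest = create_engine('sqlite://')
Base.metadata.create_all(engine_src)
Base.metadata.create_all(engine_dest)

session_src = Session(engine_src)
session_dest = Session(engine_dest)

session_src.add_all([Account(acct_id=1, name='a'),
                     Account(acct_id=2, name='b')])
session_src.commit()

# merge entities one at a time, committing and clearing the destination
# identity map so large object graphs don't accumulate in memory
for entity in session_src.query(Account).all():
    session_dest.merge(entity)
    session_dest.commit()
    session_dest.expunge_all()

print(session_dest.query(Account).count())  # prints 2
```

Note that merge() here decides INSERT vs UPDATE purely by a primary-key lookup in the destination, which is exactly the behavior at issue in the rest of the thread.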
Re: [sqlalchemy] Copying Between Databases - Can I modify part of a Composite Key?
On Thursday, September 27, 2012 9:21:57 AM UTC-4, Shawn Wheatley wrote:
> > the most idiomatic way to handle this is to merge the objects in:
> >
> >     obj = session.merge(existing_object)
> >
> > this will emit a SELECT for the existing row, then copy the state of
> > existing_object to an object located for that primary key, if found.
> > It ensures that the correct choice of "pending" or "persistent" is made
> > depending on whether the row already exists.
>
> Thanks for your response, Michael. It wasn't clear from my original post, but I am using merge to copy from PROD to DEV. My merge function looks something like this (simplified; in reality I'm copying multiple entities):
>
>     session_dest.merge(entity)
>     session_dest.commit()
>     # large object graphs were causing me to run low on memory,
>     # so I merge entities one at a time and then clear the local cache
>     session_dest.expunge_all()
>
> So, assuming DEV has a single record {acct_id: 1, env_id: 1} and I'm copying a record {acct_id: 1, env_id: 4} from PROD, SQLAlchemy incorrectly decides that this record should be INSERTed, when in fact there is a constraint (acct_id must be unique) that prevents this. The more I evaluate this, the more I think that correctly modeling the unique constraint will fix my problem. Then my before_update handler would fire and properly UPDATE the record.

I updated my class to include a UniqueConstraint on acct_id:

    class CiAcctK(DeclarativeBase):
        __tablename__ = 'ci_acct_k'
        __table_args__ = {}

        #column definitions
        acct_id = Column(u'acct_id', CHAR(length=10),
                         ForeignKey('ci_acct.acct_id'),
                         primary_key=True, nullable=False)
        env_id = Column(u'env_id', NUMERIC(precision=6, scale=0),
                        primary_key=True, nullable=False)
        uix_1 = UniqueConstraint(u'acct_id')

I also separated out my event handlers:

    @event.listens_for(CiAcctK, 'before_update')
    def CiAcctK_gen_default_upd(mapper, connection, instance):
        print "got here! update"
        instance.env_id = ENV_ID

    @event.listens_for(CiAcctK, 'before_insert')
    def CiAcctK_gen_default_ins(mapper, connection, instance):
        print "got here! insert"
        instance.env_id = ENV_ID

Sure enough, when executing the session.merge, SA is still trying to INSERT the record {acct_id: 1, env_id: 4} even though acct_id: 1 already exists. Any thoughts on what I'm doing wrong?

Shawn
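One likely problem in the mapping above, independent of the merge question: declarative ignores a UniqueConstraint assigned as a plain class attribute, so `uix_1` never actually attaches to the Table. The constraint has to be passed through `__table_args__`. A sketch of the corrected declaration (the ForeignKey to ci_acct is omitted here so the example stands alone):

```python
from sqlalchemy import CHAR, Column, NUMERIC, UniqueConstraint
from sqlalchemy.orm import declarative_base

DeclarativeBase = declarative_base()

class CiAcctK(DeclarativeBase):
    __tablename__ = 'ci_acct_k'
    # a UniqueConstraint assigned as a class attribute is silently ignored;
    # it must go through __table_args__ to reach the Table object
    __table_args__ = (UniqueConstraint('acct_id', name='uix_1'),)

    acct_id = Column('acct_id', CHAR(length=10),
                     primary_key=True, nullable=False)
    env_id = Column('env_id', NUMERIC(precision=6, scale=0),
                    primary_key=True, nullable=False)
```

Even with the constraint modeled correctly, though, merge() keys on the mapped primary key alone, so this by itself does not change the INSERT-vs-UPDATE decision.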
Re: [sqlalchemy] Copying Between Databases - Can I modify part of a Composite Key?
> session.merge only looks at the primary key of the instance you are
> merging in (it pays no attention to unique constraints). In your example,
> the table contains a single row with PK (1, 1), and you are merging an
> instance with PK (1, 4). SA sees these as different, so it tries to
> INSERT the new row.
>
> If (1, 4) is actually the new version of (1, 1), then I would say that
> your primary key should just be the acct_id column, not the pair of
> (acct_id, env_id). SA will then see them as the same row, and UPDATE
> rather than INSERT.
>
> Simon

While I agree with you philosophically, Simon, this utility is for an off-the-shelf application, so I can't make changes to the underlying data structure. I could, however, stop mapping env_id as part of the primary key:

    class CiAcctK(DeclarativeBase):
        __tablename__ = 'ci_acct_k'
        __table_args__ = {}

        #column definitions
        acct_id = Column(u'acct_id', CHAR(length=10),
                         ForeignKey('ci_acct.acct_id'),
                         primary_key=True, unique=True, nullable=False)
        env_id = Column(u'env_id', NUMERIC(precision=6, scale=0),
                        primary_key=False, nullable=False)

Will this mapping cause any issues, given that the acct_id/env_id pair really *is* the primary key?

Shawn
Re: [sqlalchemy] Copying Between Databases - Can I modify part of a Composite Key?
Clarification: I can't make changes to the underlying *database* (i.e. change the PK), but I can change my SA mapping. I did make the mapping change and it seems to run like a champ now, updating the env_id before either inserting or updating the row. Thanks everyone for your help!

Shawn

On Thursday, September 27, 2012 1:06:14 PM UTC-4, Shawn Wheatley wrote:
> > session.merge only looks at the primary key of the instance you are
> > merging in (it pays no attention to unique constraints). In your
> > example, the table contains a single row with PK (1, 1), and you are
> > merging an instance with PK (1, 4). SA sees these as different, so it
> > tries to INSERT the new row.
> >
> > If (1, 4) is actually the new version of (1, 1), then I would say that
> > your primary key should just be the acct_id column, not the pair of
> > (acct_id, env_id). SA will then see them as the same row, and UPDATE
> > rather than INSERT.
> >
> > Simon
>
> While I agree with you philosophically, Simon, this utility is for an
> off-the-shelf application, so I can't make changes to the underlying
> data structure. I could, however, stop mapping env_id as part of the
> primary key:
>
>     class CiAcctK(DeclarativeBase):
>         __tablename__ = 'ci_acct_k'
>         __table_args__ = {}
>
>         #column definitions
>         acct_id = Column(u'acct_id', CHAR(length=10),
>                          ForeignKey('ci_acct.acct_id'),
>                          primary_key=True, unique=True, nullable=False)
>         env_id = Column(u'env_id', NUMERIC(precision=6, scale=0),
>                         primary_key=False, nullable=False)
>
> Will this mapping cause any issues, given that the acct_id/env_id pair
> really *is* the primary key?
>
> Shawn
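The final arrangement can be reduced to a self-contained sketch. SQLite and an Integer env_id stand in for the real database and its NUMERIC(6,0) column, and ENV_ID is a placeholder constant; the point is that with acct_id as the sole mapped primary key, merge() matches the existing row and the event listener stamps env_id on the way in:

```python
from sqlalchemy import CHAR, Column, Integer, create_engine, event
from sqlalchemy.orm import Session, declarative_base

DeclarativeBase = declarative_base()
ENV_ID = 4  # placeholder for the destination environment id

class CiAcctK(DeclarativeBase):
    __tablename__ = 'ci_acct_k'
    # only acct_id is mapped as primary key, even though the real
    # database PK is (acct_id, env_id); merge() now matches on acct_id
    acct_id = Column(CHAR(length=10), primary_key=True, nullable=False)
    env_id = Column(Integer, nullable=False)

@event.listens_for(CiAcctK, 'before_insert')
@event.listens_for(CiAcctK, 'before_update')
def set_env_id(mapper, connection, instance):
    # stamp the destination environment id on every write
    instance.env_id = ENV_ID

engine = create_engine('sqlite://')
DeclarativeBase.metadata.create_all(engine)
session = Session(engine)

session.add(CiAcctK(acct_id='1', env_id=1))  # existing DEV row
session.commit()                             # before_insert stamps env_id=4

incoming = CiAcctK(acct_id='1', env_id=9)    # the row coming from PROD
session.merge(incoming)                      # same acct_id -> UPDATE, not INSERT
session.commit()

rows = session.query(CiAcctK).all()
print(len(rows), rows[0].env_id)  # prints: 1 4
```

The trade-off discussed above still applies: the mapping now disagrees with the database's true composite key, which is fine for this one-way copy utility but would matter if two rows ever shared an acct_id.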
[sqlalchemy] Re: Declarative Models: Can they be used with two databases and two schema names?
Fantastic, that event hack was just what I needed. For any future readers: I did have to change sub to replace on the statement, but that was a trivial change. Thanks Michael!

Shawn

On Wednesday, June 6, 2012 3:51:55 PM UTC-4, Shawn Wheatley wrote:
> Hi, I'm trying to use my declarative models to copy data from an Oracle
> database with a non-default schema name to a SQLite database (which has
> no schema name, or at least a default name that can't be changed).
> Copying from Oracle to Oracle has not been a problem for me, but Oracle
> to SQLite will not work. The problem is that the schema name used for
> SQL generation lives on the Table.
>
> I went through a fruitless exercise of calling tometadata on every table
> in the metadata created by the generated declarative base class, copying
> into a new MetaData object. I then swapped the metadata on the
> declarative base and ran my query, with the intention of swapping it
> back after. No luck.
>
> The purpose of my project is to surgically extract related data for a
> small subset of accounts from our production database and bring it down
> to a local SQLite database. Does anybody have experience doing this? Am
> I going about this the wrong way?
>
> Thanks for any help,
> Shawn
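The event hack itself isn't reproduced in the thread; the general shape is a before_cursor_execute listener on the SQLite engine that strips the Oracle owner prefix out of each statement before it runs. A sketch under that assumption ('MYSCHEMA' is a placeholder for the real owner name, and str.replace is used per the fix mentioned above):

```python
from sqlalchemy import create_engine, event

sqlite_engine = create_engine('sqlite://')

@event.listens_for(sqlite_engine, 'before_cursor_execute', retval=True)
def strip_schema(conn, cursor, statement, parameters, context, executemany):
    # rewrite '"MYSCHEMA".tablename' to plain 'tablename' before SQLite sees it
    statement = statement.replace('"MYSCHEMA".', '')
    return statement, parameters
```

On later SQLAlchemy versions (1.1+) the supported way to do this is the schema_translate_map execution option, e.g. `engine.execution_options(schema_translate_map={'MYSCHEMA': None})`, which avoids string surgery on the SQL entirely.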
[sqlalchemy] Declarative Models: Can they be used with two databases and two schema names?
Hi, I'm trying to use my declarative models to copy data from an Oracle database with a non-default schema name to a SQLite database (which has no schema name, or at least a default name that can't be changed). Copying from Oracle to Oracle has not been a problem for me, but Oracle to SQLite will not work. The problem is that the schema name used for SQL generation lives on the Table.

I went through a fruitless exercise of calling tometadata on every table in the metadata created by the generated declarative base class, copying into a new MetaData object. I then swapped the metadata on the declarative base and ran my query, with the intention of swapping it back after. No luck.

The purpose of my project is to surgically extract related data for a small subset of accounts from our production database and bring it down to a local SQLite database. Does anybody have experience doing this? Am I going about this the wrong way?

Thanks for any help,
Shawn