SQLAlchemy will wrap that psycopg2 exception underneath IntegrityError (or at
least DBAPIError at the bottommost), so you need to catch the
SQLAlchemy-specific exception. Of course, you should also test it with an
existing row to make sure it actually catches and works!
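A minimal sketch of that wrapping behavior (SQLite is used here so it runs standalone; on PostgreSQL the `.orig` attribute would hold a `psycopg2.errors.UniqueViolation` instead):

```python
from sqlalchemy import create_engine, text
from sqlalchemy.exc import IntegrityError

engine = create_engine("sqlite://")
caught = None
with engine.connect() as conn:
    conn.execute(text("CREATE TABLE t (x INTEGER PRIMARY KEY)"))
    conn.execute(text("INSERT INTO t (x) VALUES (1)"))
    try:
        # duplicate primary key -> driver-level error
        conn.execute(text("INSERT INTO t (x) VALUES (1)"))
    except IntegrityError as err:
        # err.orig is the raw DBAPI exception SQLAlchemy wrapped:
        # sqlite3.IntegrityError here, psycopg2.errors.UniqueViolation
        # on PostgreSQL
        caught = err.orig
        print(type(caught).__name__)  # prints: IntegrityError
```

The point is that the `except` clause names `sqlalchemy.exc.IntegrityError`, not the driver exception; the driver exception is still reachable via `.orig` if you need to inspect it.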
On Thu, Feb 4, 2021,
Additional background: in the proposed code snippet, I placed the
try/except around the `begin_nested()` block instead of just the
`orm_session.add()` statement, because the `add()` might not get flushed to
the database until the savepoint is released, in the event the session is
not auto-flushing.
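That flush-timing point can be seen in a standalone sketch (SQLite and an illustrative `Item` model, not code from the thread): `add()` only makes the object pending, and the INSERT is emitted at flush time, which is what releasing the savepoint forces.

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Item(Base):
    __tablename__ = "item"
    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

# autoflush disabled to make the flush boundary explicit
session = Session(engine, autoflush=False)
item = Item(name="widget")
session.add(item)
assert item in session.new   # pending only; no SQL has been emitted yet
session.flush()              # the INSERT happens here (still uncommitted)
assert item not in session.new
session.commit()
```

So if the try/except wrapped only the `add()`, a non-autoflushing session could raise the integrity error later, outside the handler.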
Thanks Mike! I followed up on your suggestion concerning savepoints and
came up with the following code snippet. Does this look right to you?
```
try:
    with orm_session.begin_nested():
        orm_session.add(record)
except psycopg2.errors.UniqueViolation as error:  # or is it
```
For inserts, not really. If you want to guard against duplicates, the pattern
is: use a savepoint (`begin_nested()`), catch the integrity error, then use
the existing row if someone else inserted it. This is the more performant
option as well, as opposed to pessimistic locking.
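A runnable sketch of that pattern (the `Tag` model and `get_or_create` name are illustrative, not from the thread; SQLite stands in for Postgres, and the two `event` hooks are the documented pysqlite workaround needed for SAVEPOINT to behave correctly — they are not needed with psycopg2):

```python
from sqlalchemy import create_engine, event, Column, Integer, String
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Tag(Base):
    __tablename__ = "tag"
    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True, nullable=False)

engine = create_engine("sqlite://")

# pysqlite-only workaround so SAVEPOINT works; not needed on PostgreSQL
@event.listens_for(engine, "connect")
def _do_connect(dbapi_connection, connection_record):
    dbapi_connection.isolation_level = None

@event.listens_for(engine, "begin")
def _do_begin(conn):
    conn.exec_driver_sql("BEGIN")

Base.metadata.create_all(engine)

def get_or_create(session, name):
    try:
        # the savepoint confines the possibly-failing INSERT, so the
        # enclosing transaction survives an IntegrityError
        with session.begin_nested():
            tag = Tag(name=name)
            session.add(tag)
        return tag
    except IntegrityError:
        # someone else already inserted it; the savepoint was rolled
        # back automatically, so just fetch and reuse the existing row
        return session.query(Tag).filter_by(name=name).one()

session = Session(engine)
a = get_or_create(session, "python")  # inserts
b = get_or_create(session, "python")  # IntegrityError path, reuses the row
session.commit()
```

Only the failed INSERT is rolled back (to the savepoint); everything else done in the transaction so far is preserved, which is what makes this cheaper than taking a pessimistic lock up front.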
How can the query/merge or query/add be performed atomically? What would
happen if, between the two calls, another process inserted a row with the
same unique key? Wouldn't the `s.merge()` then trigger a
`psycopg2.errors.UniqueViolation` exception (in the Postgres case) when the
new row insert is
On Jun 18, 2011, at 4:32 PM, Eric Ongerth wrote:
Just curious and learning here -- There are two separate issues here,
aren't there? (1.) Atomicity of the transaction, taken care of by the
above discussion, and (2.) what if there was a need to have it be not
only atomic but consume as little time as possible between the read
and write, let's