On 04/20/2016 07:42 AM, Ben Sizer wrote:
I had an unnamed UniqueConstraint, and autogenerated upgrade scripts
that passed in None as the name - these worked perfectly. Now I am
discovering that the downgrade doesn't work, as there is no name by
which to refer to the constraint (i.e. "sqlalchemy.exc.CompileError:
Can't emit DROP CONSTRAINT for constraint UniqueConstraint(); it has no
name").

So, I'm trying to add a name in the model, and edit the migration
scripts accordingly. My strategy was this:

  - the upgrade function would add the constraint with a name in the future
  - the downgrade function needs to be able to delete the constraint,
whatever it was called

The first is easy enough and seems to have no problems.

The second is tricky. If the name passed to drop_constraint is wrong, it
raises the expected error (e.g. sqlalchemy.exc.ProgrammingError:
(psycopg2.ProgrammingError) constraint "provider_name_user_id_key" of
relation "user_authentication_method" does not exist).

What I'd like to do is catch that exception and try another name (e.g.
the name that I know was autogenerated, in this case
'user_authentication_method_pkey'). However, attempting to catch the
error and perform a further operation raises a Postgres internal error
suggesting that I need to end the current transaction and start a new
one (i.e. sqlalchemy.exc.InternalError: (psycopg2.InternalError) current
transaction is aborted, commands ignored until end of transaction
block), and I can't work out how to do that. I tried calling rollback()
on a session created around the return value of get_bind(), but the
error was the same.

You should be able to use a savepoint around that operation to prevent this issue. Savepoints work for simple things like INSERTs, and hopefully for DDL as well (Postgres does support transactional DDL). Either with an ORM Session or with a Core Connection, the begin_nested() method gives you a savepoint transaction.
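Under the hood, begin_nested() just emits the SQL-standard SAVEPOINT / ROLLBACK TO SAVEPOINT commands, so the retry pattern looks roughly like this. A minimal sketch (all object names here are made up, and it uses the stdlib sqlite3 module purely so it runs standalone - on Postgres the savepoint commands are the same ones begin_nested() issues for you):

```python
import sqlite3

# Stand-in for the real Postgres connection; SAVEPOINT semantics are the same.
conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # take over transaction management ourselves
cur = conn.cursor()
cur.execute("BEGIN")
cur.execute("CREATE TABLE t (x INTEGER)")
cur.execute("CREATE INDEX real_index ON t (x)")

# Try each candidate name inside a savepoint; a failed attempt rolls back
# only to the savepoint, leaving the enclosing transaction usable.
dropped = None
for name in ("guessed_name", "real_index"):  # first guess is wrong on purpose
    cur.execute("SAVEPOINT drop_attempt")
    try:
        cur.execute("DROP INDEX %s" % name)
        cur.execute("RELEASE SAVEPOINT drop_attempt")
        dropped = name
        break
    except sqlite3.OperationalError:
        cur.execute("ROLLBACK TO SAVEPOINT drop_attempt")
        cur.execute("RELEASE SAVEPOINT drop_attempt")

cur.execute("COMMIT")
print(dropped)  # real_index
```

In the migration itself the equivalent would be something like conn = op.get_bind(), then wrapping each op.drop_constraint() attempt in conn.begin_nested() and catching the database error before trying the next candidate name.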


To complicate matters, one of the possible DBs is SQLite, so I have to
do this via op.batch_alter_table(). Thankfully that data is not
essential, so deleting the whole DB is an option there - not so much for
the Postgres data.

For SQLite I've always been a proponent of dropping the whole thing and recreating it from scratch. The existence of batch mode is due to all the pressure I get from users who don't want to do it that way, but I still favor not trying to make SQLite do real migrations - its creators disagree with this use case, hence full ALTER TABLE support has never been implemented.


So my questions are:

1) How can I handle this drop_constraint call when we didn't specify a
constraint name in the first place?
2) If the best way is for me to just attempt to drop constraints by
name, trying several names, how can I catch or avoid the first error and
try subsequent constraint names?
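For (1), an alternative to guessing names at all is to reflect the constraint's actual name from the live database with SQLAlchemy's Inspector and hand that to drop_constraint. A rough sketch (the table and constraint names are illustrative, and it runs against an in-memory SQLite just so it's self-contained - in a migration you'd inspect op.get_bind() instead of a fresh engine):

```python
import sqlalchemy as sa

# Illustrative schema: a unique constraint whose name the model never
# specified, but which exists under some name in the database.
engine = sa.create_engine("sqlite:///:memory:")
with engine.begin() as conn:
    conn.execute(sa.text(
        "CREATE TABLE user_authentication_method ("
        "  id INTEGER PRIMARY KEY,"
        "  provider_name TEXT,"
        "  user_id INTEGER,"
        "  CONSTRAINT uq_provider_user UNIQUE (provider_name, user_id))"
    ))

# Reflect the unique constraints actually present and pick out the one
# covering our columns; its 'name' is what drop_constraint needs.
insp = sa.inspect(engine)
target_cols = {"provider_name", "user_id"}
name = next(
    uc["name"]
    for uc in insp.get_unique_constraints("user_authentication_method")
    if set(uc["column_names"]) == target_cols
)
print(name)  # 'uq_provider_user' here; whatever the DB generated in your case
```

Going forward, setting a naming_convention on your MetaData makes constraint names deterministic, so both upgrade and downgrade can refer to them explicitly and this problem never comes up again.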

Thanks,
Ben

--
You received this message because you are subscribed to the Google
Groups "sqlalchemy-alembic" group.
To unsubscribe from this group and stop receiving emails from it, send
an email to sqlalchemy-alembic+unsubscr...@googlegroups.com
<mailto:sqlalchemy-alembic+unsubscr...@googlegroups.com>.
For more options, visit https://groups.google.com/d/optout.
