On 02/22/2017 01:28 PM, Ashish Patel wrote:
Hi,

Not sure what is wrong with the way we are doing migrations for a trigger
on a large table (the table is about 600 GB; we tried to modify the
trigger on it). The DDL for the trigger is attached, along with the
migration script.

At a high level, we do a CREATE OR REPLACE of the trigger in both the
upgrade and downgrade scripts.

What is odd is that when we run the trigger statement directly on the DB
[the same trigger-creation SQL present in ddl.py], it modifies the
trigger within a few seconds - 2 or 3 seconds - but when we run it via
the migrations, the whole DB just hangs and we have to kill the migration.

We get a bunch of errors like the one shown below. We have a decent
amount of load on the DB hitting the dr table, but direct execution of
the SQL query works, whereas running the migration hangs the system.

    process 10775 still waiting for AccessShareLock on relation 16538 of
    database 16385 after 1000.160 ms

The psycopg2 database driver runs operations inside a transaction by default. The difference from your command line is that psql does not, unless you issue "BEGIN" first (which you can try, to reproduce the hang).
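To reproduce it in psql, something like the following sketch should show the same blocking behavior (the trigger, function, and table names here are placeholders, not taken from your attachment):

```sql
-- Inside an explicit transaction, the lock that CREATE TRIGGER takes
-- on the table is held until COMMIT; until then, other sessions
-- touching "dr" queue up behind it -- the same situation the
-- migration's transaction creates.
BEGIN;
DROP TRIGGER IF EXISTS dr_trigger ON dr;
CREATE TRIGGER dr_trigger
    BEFORE INSERT ON dr
    FOR EACH ROW EXECUTE PROCEDURE dr_trigger_fn();
-- ... lock on "dr" is held here until the transaction ends ...
COMMIT;
```

Without the BEGIN, each statement commits (and releases its locks) immediately, which is why the direct run finishes in seconds.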

This seems likely to be an issue of transactional scope. By default, Alembic runs all the migrations of a single operation in one transaction. To limit this to one transaction per migration file rather than one for all of them, set transaction_per_migration=True in EnvironmentContext.configure. To turn off transactions at the SQLAlchemy level (this still does not turn off transactions completely), set transactional_ddl=False in EnvironmentContext.configure. To totally turn off transactions at the driver level, set the isolation level to AUTOCOMMIT in your env.py when you get the connection: connection = engine.connect().execution_options(isolation_level='AUTOCOMMIT').
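A sketch of how those options fit into a typical env.py's run_migrations_online() - this follows the stock Alembic online-migration template, so engine setup and target_metadata are assumed to come from your existing file:

```python
# env.py (fragment) -- assumes the standard Alembic template; the engine
# and target_metadata are whatever your env.py already defines.
from alembic import context

def run_migrations_online():
    # Driver-level autocommit: no transaction is opened at all, so each
    # DDL statement commits (and releases its locks) as soon as it runs.
    connection = engine.connect().execution_options(
        isolation_level="AUTOCOMMIT")

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        # One transaction per migration file instead of one for the
        # whole run (moot when AUTOCOMMIT is in effect):
        transaction_per_migration=True,
        # Or: turn off SQLAlchemy-level transactional DDL instead:
        # transactional_ddl=False,
    )

    with context.begin_transaction():
        context.run_migrations()
```

Note that with AUTOCOMMIT a failed migration can leave the schema half-applied, so use it deliberately for the long-running DDL case.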

When the database does hang, try looking in pg_stat_activity to see which statements are blocked waiting on locks. (https://www.postgresql.org/docs/9.2/static/monitoring-stats.html#PG-STAT-ACTIVITY-VIEW)
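On 9.3 a query along these lines does it (9.x exposes a boolean "waiting" column; the wait_event columns only arrived in later releases):

```sql
-- Sessions currently waiting on a lock, with the statement they are
-- running; on PostgreSQL 9.3 the "waiting" boolean flags lock waits.
SELECT pid, state, waiting, query_start, query
FROM pg_stat_activity
WHERE waiting
ORDER BY query_start;
```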

PostgreSQL 9.3 running on Linux.
alembic==0.7.6
SQLAlchemy==1.0.6

I am starting to go over the documentation to see what the best practice
is for running trigger changes or FK changes on large tables. If there
is a better approach than the one I am using, I would like to learn
about it.


Thanks
Ashish

--
You received this message because you are subscribed to the Google
Groups "sqlalchemy-alembic" group.
To unsubscribe from this group and stop receiving emails from it, send
an email to sqlalchemy-alembic+unsubscr...@googlegroups.com
<mailto:sqlalchemy-alembic+unsubscr...@googlegroups.com>.
For more options, visit https://groups.google.com/d/optout.
