Python 3.10.6
psycopg library 3.1.8

I'm running consecutive inserts sourced from files.
All the inserts have the same shape:

INSERT INTO _____ (field1, field2, field3)
SELECT field1, field2, field3 FROM ____ JOIN ___ JOIN ___ etc...

The code I've written is this:

for qi in range(qlen):
    query = queries[qi]
    qparams = params[qi]
    with self.connection.cursor() as cur:
        cur.execute(query, qparams)
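In case it matters, here is a variant I could try that commits after every statement, so an earlier INSERT can't hold locks in an open transaction while a later one runs. This is just a sketch: `run_queries` is my own name, and `connection` is assumed to be a psycopg 3 Connection (by default psycopg runs everything in one transaction until commit).

```python
def run_queries(connection, queries, params):
    """Execute each INSERT in its own transaction.

    `connection` is assumed to be a psycopg (v3) Connection; committing
    after each statement releases any row/table locks promptly instead
    of holding them all until the loop finishes.
    """
    for query, qparams in zip(queries, params):
        with connection.cursor() as cur:  # a cursor, not a second connection
            cur.execute(query, qparams)
        connection.commit()  # end the implicit transaction per statement
```

(The same effect could be had with `connection.autocommit = True` before the loop.)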

When I run the queries in DBeaver, the first query takes 120s (it's 1.9M
rows) and the second query takes 2s (7,000 rows).
When I run the queries in Python, it freezes on the second query.

Any guidance on how to attack this would be awesome, as I have rewritten my
code a dozen times and am just slinging mud to see what sticks.
