Re: [sqlalchemy] Speed issue with bulk inserts

2016-12-23 Thread Jonathan Vanasco
Does this issue consistently repeat within a transaction block? Does it still happen if you reverse the tests? I've run into similar issues in the past, and the problem often came from PostgreSQL checking indexes -- the first test would stall because indexes needed to be read into memory, the …
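Jonathan's suggestion to reverse the tests can be sketched as a small timing harness. The thread's actual benchmark code isn't shown, so this is a hypothetical stand-in using stdlib sqlite3 (table and row names are made up); the point is the methodology: give each strategy a fresh database and run both orders, so a warm-up effect (indexes or pages being read into memory) shows up as "whichever test runs first is slow" rather than as a real difference between strategies.

```python
import sqlite3
import time

# 223 rows, matching the count mentioned later in the thread.
ROWS = [(i, f"location-{i}") for i in range(223)]

def insert_per_row(conn):
    # One INSERT statement per row (what an ORM flush can degrade to).
    for row in ROWS:
        conn.execute("INSERT INTO trial_locations (id, name) VALUES (?, ?)", row)
    conn.commit()

def insert_executemany(conn):
    # A single executemany() call -- the driver repeats one prepared statement.
    conn.executemany("INSERT INTO trial_locations (id, name) VALUES (?, ?)", ROWS)
    conn.commit()

def timed(test_fn):
    # Fresh in-memory database per run, so neither test benefits
    # from the other's warm-up.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE trial_locations (id INTEGER PRIMARY KEY, name TEXT)")
    start = time.perf_counter()
    test_fn(conn)
    elapsed = time.perf_counter() - start
    count = conn.execute("SELECT COUNT(*) FROM trial_locations").fetchone()[0]
    conn.close()
    return elapsed, count

# Run both orders; if a test is only slow when it goes first,
# suspect cache/index warm-up rather than the insert strategy itself.
for order in ([insert_per_row, insert_executemany],
              [insert_executemany, insert_per_row]):
    for fn in order:
        elapsed, count = timed(fn)
        print(f"{fn.__name__}: {count} rows in {elapsed:.4f}s")
```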

Re: [sqlalchemy] Speed issue with bulk inserts

2016-12-23 Thread Mike Bayer
Does this table have triggers of some kind on it? I've been asking on the psycopg2 list about this, as this is not the first time it has come up. On Dec 23, 2016 8:46 AM, "mike bayer" wrote: > those are two different kinds of INSERT statements. To compare to Core > you need to run like this …
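Mike's trigger question can be answered from the PostgreSQL system catalogs. A minimal sketch, assuming the table is named `trial_locations` (a guess from the thread); the query builder below only constructs the SQL -- running it requires a live PostgreSQL connection, e.g. via psycopg2:

```python
def trigger_query(table_name: str) -> str:
    """Build a query listing user-defined triggers on a table.

    NOT tgisinternal filters out the triggers PostgreSQL creates for
    itself (e.g. to enforce foreign-key constraints).  In real code,
    pass the table name as a bound parameter instead of interpolating.
    """
    return (
        "SELECT tgname FROM pg_trigger "
        f"WHERE tgrelid = '{table_name}'::regclass AND NOT tgisinternal"
    )

print(trigger_query("trial_locations"))
```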

Re: [sqlalchemy] Speed issue with bulk inserts

2016-12-23 Thread mike bayer
those are two different kinds of INSERT statements. To compare to Core you need to run it like this: engine.execute( TrialLocations.__table__.insert(), trial_location_core_inserts ) That will run executemany() on the psycopg2 side, which internally will run 223 INSERT statements …
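Mike's suggestion, reconstructed as a runnable sketch. The column layout of `TrialLocations.__table__` is a guess, and an in-memory SQLite database stands in for the thread's PostgreSQL 9.4.8; the mechanism being demonstrated -- one statement plus a list of parameter dicts taking the DBAPI executemany() path -- is the same:

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine

metadata = MetaData()

# Stand-in for TrialLocations.__table__ from the thread (columns are guesses).
trial_locations = Table(
    "trial_locations", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
)

engine = create_engine("sqlite://")  # SQLite stands in for PostgreSQL here
metadata.create_all(engine)

# One plain dict per row -- the shape Core expects for an
# executemany-style insert ("trial_location_core_inserts" in the thread).
trial_location_core_inserts = [{"id": i, "name": f"loc-{i}"} for i in range(223)]

with engine.begin() as conn:
    # A single statement plus a LIST of parameter dicts: SQLAlchemy hands
    # this to the driver as executemany() -- one prepared INSERT reused for
    # 223 rows -- instead of compiling and executing 223 separate statements.
    conn.execute(trial_locations.insert(), trial_location_core_inserts)

with engine.begin() as conn:
    rows = conn.execute(trial_locations.select()).fetchall()
print(len(rows))  # 223
```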

[sqlalchemy] Speed issue with bulk inserts

2016-12-23 Thread Brian Clark
So I'm having an issue with a very slow insert: inserting 223 items takes 20+ seconds to execute. Any advice on what I'm doing wrong and why it would be so slow? Using PostgreSQL 9.4.8. The line of code: LOG_OUTPUT('==PRE BULK==', True) db_session.bulk_save_obje…
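Brian's code is cut off at `bulk_save_obje`, so the following is a hypothetical reconstruction of the ORM pattern being discussed (model name from the thread, columns guessed, SQLite standing in for PostgreSQL). One detail worth checking in cases like this: `bulk_save_objects` can only batch rows into executemany() when it does not need generated keys back -- passing `return_defaults=True` forces one INSERT per row:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class TrialLocations(Base):
    __tablename__ = "trial_locations"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")  # stand-in for the PostgreSQL 9.4.8 database
Base.metadata.create_all(engine)

objects = [TrialLocations(name=f"loc-{i}") for i in range(223)]

with Session(engine) as session:
    # With return_defaults=False (the default), rows sharing the same set of
    # populated columns can be batched into executemany(); return_defaults=True
    # would force one INSERT per row so generated keys can be fetched back.
    session.bulk_save_objects(objects)
    session.commit()

with Session(engine) as session:
    print(session.query(TrialLocations).count())  # 223
```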