On Friday, November 15, 2019 at 4:03:44 AM UTC-5, Elmer de Looff wrote: 
>
> That said, I've run into the same problem with a little toy project, which 
> works around this with a 'bulk save' interface. 
>

FWIW, on something related... In my experience, if you need to focus on 
speed for bulk inserts / migrations, one of the fastest methods I've used 
is to have several Python scripts working in parallel. You can use a shared 
resource like a sqlite3 file or Redis to coordinate the workers. Usually 
I'll have workers "claim" a range of 10k items with Redis (and if we're 
doing a migration, I'll use the Windowed Range Query recipe - 
https://github.com/sqlalchemy/sqlalchemy/wiki/WindowedRangeQuery). A rough 
sketch of the claiming idea is below.
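
To give a concrete (and very rough) sketch of what I mean by "claiming" a 
range with Redis - this is just illustrative, not my actual code; the key 
name "migration:cursor", the TARGET_DB_URL env var, the target table, and 
load_source_rows() are all made up for the example:

import os

import redis
from sqlalchemy import create_engine, text

BATCH = 10_000  # each worker claims 10k ids at a time

r = redis.Redis()
engine = create_engine(os.environ["TARGET_DB_URL"])  # hypothetical env var


def claim_range():
    # INCRBY is atomic, so concurrent workers never claim overlapping ranges
    end = r.incrby("migration:cursor", BATCH)
    return end - BATCH, end


def run_worker(max_id):
    while True:
        lo, hi = claim_range()
        if lo >= max_id:
            break
        # load_source_rows() is a stand-in for however you read ids in [lo, hi)
        rows = load_source_rows(lo, hi)
        if rows:
            with engine.begin() as conn:
                conn.execute(
                    text("INSERT INTO target (id, payload) VALUES (:id, :payload)"),
                    rows,  # list of dicts -> executemany-style insert
                )

The nice part is that the atomic INCRBY means you never have to lock or 
negotiate between workers - each one just grabs the next untouched range.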

It takes a little bit of extra work to determine the right number of 
workers to spin up, but parallel workers will usually make a migration or 
bulk insert task 5-15x faster for me.
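
Spinning up the workers can be as simple as something like this (again, 
just a sketch - NUM_WORKERS is a knob you'd have to benchmark against your 
database server, and run_worker is the hypothetical function from the 
sketch above):

from multiprocessing import Process

NUM_WORKERS = 8  # tune empirically; depends on the DB server and I/O


def main(max_id):
    procs = [Process(target=run_worker, args=(max_id,)) for _ in range(NUM_WORKERS)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()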
