On Friday, November 15, 2019 at 4:03:44 AM UTC-5, Elmer de Looff wrote:
>
> That said, I've run into the same problem with a little toy project, which
> works around this with a 'bulk save' interface.
>
FWIW, on something related: in my experience, if you need to focus on
speed for bulk inserts…
I'm not even sure the problem is with the batch insert function itself;
creating half a million dicts in Python is going to cause you a bit of a
bad time. That said, I've run into the same problem with a little toy
project, which works around this with a 'bulk save' interface. With a
minimal change…
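To make the "don't build half a million dicts at once" point concrete, here is a minimal stdlib-only sketch of chunked bulk inserting. The table, the chunk size, and the `row_chunks` helper are all my own illustration, not code from this thread:

```python
import sqlite3

def row_chunks(n_rows, chunk_size):
    """Yield lists of row tuples, never holding all rows in memory at once."""
    chunk = []
    for i in range(n_rows):
        chunk.append((i, f"user-{i}"))
        if len(chunk) == chunk_size:
            yield chunk
            chunk = []
    if chunk:  # trailing partial chunk
        yield chunk

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# Insert 50_000 rows in 1_000-row batches: peak memory is one chunk,
# not the entire row set.
for chunk in row_chunks(50_000, 1_000):
    conn.executemany("INSERT INTO users (id, name) VALUES (?, ?)", chunk)
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM users").fetchone()[0])  # 50000
```

The same generator shape works in front of any bulk-insert API: only one chunk of rows is ever alive at a time, so memory stays roughly flat regardless of the total row count.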
Because the memory spike was so bad (the application usually runs at 250 MB
of RAM, and it went up to a gigabyte during this process), I was able to find
the problem by running htop and using print statements to discover where the
Python code was in its execution when the RAM spike happened.
I unfortunately…
What did you use to profile memory usage? I've recently been investigating
memory usage when loading data, using memory_profiler, and would be
interested to hear about the best approach.
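Besides memory_profiler, the standard library's tracemalloc can localize a spike without a third-party dependency. A hedged sketch of the pattern; the workload here is made up purely to simulate the "big list of dicts" problem described above:

```python
import tracemalloc

tracemalloc.start()

# Simulate the problematic pattern: materializing a large list of
# row dicts up front before handing them to a bulk-insert call.
rows = [{"id": i, "name": f"user-{i}"} for i in range(100_000)]

# peak is the high-water mark since tracemalloc.start()
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"current={current} bytes, peak={peak} bytes")
```

Taking snapshots before and after each suspect phase (see `tracemalloc.take_snapshot()` and `Snapshot.compare_to`) gives per-line attribution, which is essentially an automated version of the htop-plus-print-statements approach described above.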
On Thu, 14 Nov 2019 at 17:16, James Fennell wrote:
> Hi all,
>
> Just sharing some perf insights into the bulk operation function
> bulk_insert_mappings.
Hi all,
Just sharing some perf insights into the bulk operation function
bulk_insert_mappings.
I was recently debugging a SQLAlchemy-powered web app that was crashing
due to out-of-memory issues on a small Kubernetes node. It turned out to be
"caused" by an over-optimistic invocation of bulk_insert_mappings.