Thank you very much for your feedback!

2015-09-12 22:34 GMT+02:00 Aner Perez <[email protected]>:
> I had dismissed your recommendation for bulk and batch sizes because I was
> confusing them with the commit size. I tried using bulkAll() but it did
> not make any difference.

That's very interesting. Perhaps, at some point, if there are too many bind variables in a single statement, PostgreSQL switches execution modes to a static statement, or the parser requires excessive memory.

I've also found this post, which explains how PostgreSQL's JDBC driver behaves when it comes to batching. Batching doesn't really seem to help much:
http://dba.stackexchange.com/a/58179/2512

There's also:
http://stackoverflow.com/a/12207237/521799

Quoting Craig Ringer:

    If you can't use COPY consider using multi-valued INSERTs if practical.
    You seem to be doing this already. Don't try to list *too* many values
    in a single VALUES though; those values have to fit in memory a couple
    of times over, so keep it to a few hundred per statement.

> Then I turned debug logging back on and noticed that with bulkAll() it
> was still logging 80k individual debug inserts.

Are you sure? bulkAll() should generate just one huge INSERT statement. Do you still have the source code to reproduce this?

> I changed the call to use bulkAfter(1000), which took the load time down
> to 18 seconds! This is still 35 times longer than the psql \copy command,
> but a perfectly usable number for our use case!

You'll never beat COPY :)

> Any chance that jOOQ will support the PostgreSQL CopyManager class (see
> http://stackoverflow.com/questions/6958965/how-to-copy-a-data-from-file-to-postgresql-using-jdbc
> ) in some future release? Sounds like it can lead to some worthwhile speed
> improvements for people who need every last drop.

We have a pending feature request for COPY:
https://github.com/jOOQ/jOOQ/issues/2747

We could certainly look into using COPY within the Loader API, if the Loader configuration permits it...
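By the way, Craig Ringer's advice (keep multi-valued INSERTs to a few hundred rows per statement) can be sketched independently of jOOQ's Loader. The helper below is purely illustrative — it is not jOOQ's implementation, and the table name is made up — it just shows the chunking idea: split N rows into multi-valued INSERT statements of at most a few hundred rows each, using bind-variable placeholders rather than inlined values:

```java
import java.util.ArrayList;
import java.util.List;

public class MultiValuedInsert {

    /**
     * Builds multi-valued INSERT statements, limiting each statement to
     * maxRowsPerStatement rows so the bind-variable count stays small.
     * Uses ? placeholders; values would be bound via PreparedStatement.
     */
    static List<String> buildInserts(
            String table, int columns, int rows, int maxRowsPerStatement) {

        List<String> statements = new ArrayList<>();

        // One "(?, ?, ..., ?)" group per row
        String rowPlaceholder = "(" + "?, ".repeat(columns - 1) + "?)";

        for (int offset = 0; offset < rows; offset += maxRowsPerStatement) {
            int chunk = Math.min(maxRowsPerStatement, rows - offset);
            StringBuilder sql =
                new StringBuilder("INSERT INTO " + table + " VALUES ");

            for (int i = 0; i < chunk; i++) {
                if (i > 0)
                    sql.append(", ");
                sql.append(rowPlaceholder);
            }

            statements.add(sql.toString());
        }

        return statements;
    }

    public static void main(String[] args) {
        // 80,000 rows of 3 columns, at most 500 rows
        // (i.e. 1,500 bind variables) per statement
        List<String> statements =
            MultiValuedInsert.buildInserts("my_table", 3, 80_000, 500);

        System.out.println(statements.size());
        System.out.println(statements.get(0).substring(0, 37));
    }
}
```

This is roughly what bulkAfter(500) asks the Loader to do for you, and it keeps each statement's memory footprint bounded, as per the quote above.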
Best Regards,
Lukas
