I was thinking about that (as per your presentation last week), but my
problem is that when I'm building up a series of inserts, if one of them
fails (very likely in this case due to a unique_violation) I have to
roll back the entire transaction. I asked about this in the novice forum
(http://postgresql.1045698.n5.nabble.com/execute-many-for-each-commit-td5494218.html)
and was advised to use SAVEPOINTs. That seems a little clunky to me, but
it may be the best way. Would it be realistic to expect batching to
increase performance ten-fold?
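
For concreteness, here is a rough sketch of what the SAVEPOINT route might
look like with psycopg2 (assuming a made-up "events" table with a unique key
on id; the table, columns, and connection string are just illustrative):

    # Each insert gets its own savepoint, so a unique_violation rolls back
    # only that one row instead of aborting the whole batch.
    import psycopg2

    conn = psycopg2.connect("dbname=mydb")  # hypothetical connection string
    cur = conn.cursor()

    rows = [(1, 'a'), (2, 'b'), (1, 'dupe')]  # sample batch; (1, 'dupe') collides

    for row in rows:
        cur.execute("SAVEPOINT batch_row")
        try:
            cur.execute("INSERT INTO events (id, payload) VALUES (%s, %s)", row)
            cur.execute("RELEASE SAVEPOINT batch_row")
        except psycopg2.IntegrityError:
            # unique_violation: undo just this row and keep going
            cur.execute("ROLLBACK TO SAVEPOINT batch_row")

    conn.commit()  # a single commit for everything that survived

The per-row SAVEPOINT/RELEASE round trips are exactly the clunkiness I was
referring to, which is why I'm wondering how much of the ten-fold hope
survives that overhead.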

On Mon, Feb 20, 2012 at 3:30 PM, Josh Berkus <j...@agliodbs.com> wrote:

> On 2/20/12 2:06 PM, Alessandro Gagliardi wrote:
> > [...] But first I just want to know if people
> > think that this might be a viable solution or if I'm barking up the wrong
> > tree.
>
> Batching is usually helpful for inserts, especially if there's a unique
> key on a very large table involved.
>
> I suggest also making the buffer table UNLOGGED, if you can afford to.
>
> --
> Josh Berkus
> PostgreSQL Experts Inc.
> http://pgexperts.com
>
> --
> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-performance
>
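
For reference, a rough sketch of the UNLOGGED buffer-table idea suggested
above (again assuming psycopg2 and an existing "events" table with a unique
key on id; the buffer table and names are illustrative, and this assumes a
single loader process):

    # Bulk-load everything into an unlogged staging table, then merge the
    # non-duplicate rows into the real table in one statement, so individual
    # unique_violations never abort the batch.
    import psycopg2

    conn = psycopg2.connect("dbname=mydb")  # hypothetical connection string
    cur = conn.cursor()

    # Unlogged buffer table with the same columns as the main table.
    cur.execute("""
        CREATE UNLOGGED TABLE IF NOT EXISTS events_buffer
            (LIKE events INCLUDING DEFAULTS)
    """)

    rows = [(1, 'a'), (2, 'b'), (1, 'dupe')]  # sample batch; (1, 'dupe') collides
    cur.executemany(
        "INSERT INTO events_buffer (id, payload) VALUES (%s, %s)", rows)

    # Merge only keys not already present in the main table, deduplicating
    # within the buffer as well.
    cur.execute("""
        INSERT INTO events (id, payload)
        SELECT DISTINCT ON (id) id, payload
        FROM events_buffer b
        WHERE NOT EXISTS (SELECT 1 FROM events e WHERE e.id = b.id)
    """)
    cur.execute("TRUNCATE events_buffer")
    conn.commit()

With concurrent loaders there would still be a race between the NOT EXISTS
check and the insert, so that case would need extra handling.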
