On Sat, Jun 1, 2013 at 9:41 AM, Simon Riggs <si...@2ndquadrant.com> wrote:
> COMMIT;
> The inserts into order_line repeatedly execute checks against the same
> ordid. Deferring and then de-duplicating the checks would optimise the
> transaction.
>
> Proposal: De-duplicate multiple checks against same value. This would
> be implemented by keeping a hash of rows that we had already either
> inserted and/or locked as the transaction progresses, so we can use
> the hash to avoid queuing up after triggers.
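To make the pattern concrete, the workload being described is presumably
something like this (the orders parent table and the lineno column are made
up for illustration; only order_line and ordid appear in Simon's mail):

    BEGIN;
    INSERT INTO orders (ordid) VALUES (1);
    -- each child insert queues an after-row RI check against the same ordid
    INSERT INTO order_line (ordid, lineno) VALUES (1, 1);
    INSERT INTO order_line (ordid, lineno) VALUES (1, 2);
    INSERT INTO order_line (ordid, lineno) VALUES (1, 3);
    COMMIT;

All of the order_line inserts verify the same referenced key, so in principle
only the first check needs to run.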
Fwiw, the reason we don't do that now is that the rows might later be deleted
within the same transaction (or even within the same statement, I think). If
they are, the trigger needs to be skipped for that row but still needs to fire
for the other rows (a sketch of this case follows below). So you need some
kind of book-keeping to keep track of that; the easiest way was just to do the
check independently for each row. I think there's a comment about this in the
code.

I think you're right that this should be optimized, because in the vast
majority of cases you don't end up deleting rows and we're currently doing
lots of redundant checks. But you need to make sure you don't break the
unusual case entirely.

--
greg
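A sketch of the hazard, using the same made-up schema as above and assuming
the existing behaviour of skipping a queued check once its row has been
deleted: with naive de-duplication, the second insert's check would be folded
into the first one's and then lost when the first row goes away.

    BEGIN;
    INSERT INTO order_line (ordid, lineno) VALUES (1, 1);   -- queues an RI check for ordid = 1
    INSERT INTO order_line (ordid, lineno) VALUES (1, 2);   -- naively de-duplicated against the first check
    DELETE FROM order_line WHERE ordid = 1 AND lineno = 1;  -- the queued check is skipped: its row is gone
    COMMIT;                                                 -- row (1, 2) would never be checked

So the book-keeping would have to remember which surviving rows each
de-duplicated check stands in for.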