On 05/15/13 16:10, Tom Lane wrote:
> "Todd A. Cook" <tc...@blackducksoftware.com> writes:
>> On 05/15/13 13:27, tc...@blackducksoftware.com wrote:
>>> When nearly identical update queries arrive simultaneously, the first one to
>>> execute runs normally, but subsequent executions run _extremely_ slowly.
>>> We've seen this behaviour in production, and the contrived test case below
>>> reproduces the issue.
>
>> I've repeated the test below on a 9.1.9 installation, and it works fine there.
>
> Given the reference to EvalPlanQual in your stack trace, I'm thinking
> the explanation is this 9.0 fix:

Thanks for the explanation.  Is there any chance of that fix being backpatched
into 8.4?

-- todd


> Author: Tom Lane <t...@sss.pgh.pa.us>
> Branch: master Release: REL9_0_BR [9f2ee8f28] 2009-10-26 02:26:45 +0000
>
>     Re-implement EvalPlanQual processing to improve its performance and eliminate
>     a lot of strange behaviors that occurred in join cases.  We now identify the
>     "current" row for every joined relation in UPDATE, DELETE, and SELECT FOR
>     UPDATE/SHARE queries.  If an EvalPlanQual recheck is necessary, we jam the
>     appropriate row into each scan node in the rechecking plan, forcing it to emit
>     only that one row.  The former behavior could rescan the whole of each joined
>     relation for each recheck, which was terrible for performance, and what's much
>     worse could result in duplicated output tuples.
>
>     Also, the original implementation of EvalPlanQual could not re-use the recheck
>     execution tree --- it had to go through a full executor init and shutdown for
>     every row to be tested.  To avoid this overhead, I've associated a special
>     runtime Param with each LockRows or ModifyTable plan node, and arranged to
>     make every scan node below such a node depend on that Param.  Thus, by
>     signaling a change in that Param, the EPQ machinery can just rescan the
>     already-built test plan.
>
>     This patch also adds a prohibition on set-returning functions in the
>     targetlist of SELECT FOR UPDATE/SHARE.  This is needed to avoid the
>     duplicate-output-tuple problem.  It seems fairly reasonable since the
>     other restrictions on SELECT FOR UPDATE are meant to ensure that there
>     is a unique correspondence between source tuples and result tuples,
>     which an output SRF destroys as much as anything else does.
>
>                         regards, tom lane
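
For anyone finding this thread in the archives, here is a minimal sketch of the
kind of workload described above: two nearly identical UPDATEs arriving at
almost the same time.  It is not the contrived test case from the original
report (that script is not part of this excerpt); the table names, row counts,
and statements below are invented purely for illustration.

-- Invented schema, for illustration only.
CREATE TABLE target (id int PRIMARY KEY, val int);
CREATE TABLE source (id int PRIMARY KEY, val int);
INSERT INTO target SELECT g, 0 FROM generate_series(1, 100000) g;
INSERT INTO source SELECT g, g FROM generate_series(1, 100000) g;

-- Session 1: takes the row locks first and runs at normal speed.
UPDATE target t SET val = s.val
  FROM source s
 WHERE s.id = t.id;

-- Session 2: issues a nearly identical statement while session 1 is still
-- running.  It blocks on the locked rows; once session 1 commits, the rows
-- session 1 modified go through EvalPlanQual rechecks.  Before the 9.0
-- rewrite quoted above, each recheck could rescan the whole joined relation,
-- which is the sort of extreme slowdown described in the report; from 9.0 on,
-- only the one "current" row per relation is rechecked.
UPDATE target t SET val = s.val + 1
  FROM source s
 WHERE s.id = t.id;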
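
Likewise, a quick sketch of the restriction described in the commit message's
last paragraph, reusing the invented tables above: a set-returning function in
the target list of SELECT FOR UPDATE was accepted before this patch but is
rejected from 9.0 on.

-- Accepted on 8.4; rejected with an error on 9.0 and later, since (per the
-- commit message) an output SRF destroys the one-to-one correspondence
-- between source tuples and result tuples that FOR UPDATE depends on.
SELECT generate_series(1, 3) AS n, id
  FROM target
 WHERE id <= 2
   FOR UPDATE;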





