"vinayak" <[EMAIL PROTECTED]> writes:
> A single run of this update works as expected. Concurrent runs cause one to
> succeed and the other to be blocked indefinitely. 

It's not blocked, it's just doing EvalPlanQual over and over, and that's
quite inefficient in this example.  (It looks like it's using a mergejoin,
so the "s" relation has to be sorted over again for each updated "d"
tuple :-(.)  I don't think anyone's ever tried to optimize EvalPlanQual,
because concurrent updates of the same tuple are not common.
But there's definitely room for improvement there.  The code comments
talk about trying to avoid a full restart of the sub-plan, but I wonder
whether it would be worth generating a completely different plan using
the knowledge that we have exactly one row coming from the target
table...
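
For concreteness, the contended statement is presumably something
like this (the column names here are guesses; only the relation
names "d" and "s" are visible in the plan):

    -- run simultaneously in two sessions
    UPDATE d SET val = s.val FROM s WHERE d.id = s.id;

The second session waits on each row the first one modified, and for
every such row EvalPlanQual re-executes the mergejoin subplan,
re-sorting "s" each time.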

Anyway, don't hold your breath waiting for a performance improvement
here.  You'll need to revise your application to avoid having quite so
many concurrent updates of the same tuples.  Maybe you could use
table-level locks to serialize your full-table update operations
(see the sketch below)?
It's not too clear what the real-world application underlying this
example might have been.
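
Something along these lines would do it (again with hypothetical
column names):

    BEGIN;
    -- SHARE ROW EXCLUSIVE conflicts with itself, so a second copy of
    -- this transaction waits here for the whole-table lock instead of
    -- grinding through EvalPlanQual one tuple at a time.
    LOCK TABLE d IN SHARE ROW EXCLUSIVE MODE;
    UPDATE d SET val = s.val FROM s WHERE d.id = s.id;
    COMMIT;

Plain reads of "d" are unaffected by that lock level, but other
writers will queue behind it until COMMIT.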

                        regards, tom lane
