On 2014-10-31 08:54:54 -0400, Robert Haas wrote:
> On Fri, Oct 31, 2014 at 6:41 AM, Simon Riggs <si...@2ndquadrant.com> wrote:
> > Is it genuinely required for most parallel operations? I think it's
> > clear that none of us knows the answer. Sure, the general case needs
> > it, but is the general case the same thing as the reasonably common
> > case?
> 
> Well, I think that the answer is pretty clear.  Most of the time,
> perhaps in 99.9% of cases, group locking will make no difference as to
> whether a parallel operation succeeds or fails.  Occasionally,
> however, it will cause an undetected deadlock.  I don't hear anyone
> arguing that that's OK.  Does anyone wish to make that argument?

Maybe we can, as a first step, make those edges in the lock graph
visible to the deadlock detector? It's pretty clear that undetected
deadlocks aren't OK, but detectable deadlocks in a couple of corner
cases might be acceptable.
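
To make that concrete, here is a toy sketch of the idea -- plain
standalone C, not the real deadlock.c, and every name in it is made up:
project each waits-for edge onto the lock group leader, so that waits
between members of the same group vanish while waits crossing a group
boundary stay visible to the cycle search.

#include <stdbool.h>
#include <stdio.h>

#define NPROCS 8

/* leader[i] == i means process i leads its own (possibly trivial) group */
static int  leader[NPROCS];
/* waits[i][j]: group led by i waits for a lock held by the group led by j */
static bool waits[NPROCS][NPROCS];

static void
add_wait_edge(int waiter, int holder)
{
    int     lw = leader[waiter];
    int     lh = leader[holder];

    /* waits within one lock group are not deadlocks; drop them */
    if (lw != lh)
        waits[lw][lh] = true;
}

/* depth-first search: is there a path from "node" back to "start"? */
static bool
has_cycle_from(int node, int start, bool *visited)
{
    for (int next = 0; next < NPROCS; next++)
    {
        if (!waits[node][next])
            continue;
        if (next == start)
            return true;
        if (!visited[next])
        {
            visited[next] = true;
            if (has_cycle_from(next, start, visited))
                return true;
        }
    }
    return false;
}

int
main(void)
{
    bool    visited[NPROCS] = {false};

    for (int i = 0; i < NPROCS; i++)
        leader[i] = i;

    leader[2] = 1;              /* processes 1 and 2 form a group led by 1 */

    add_wait_edge(2, 1);        /* intra-group wait: ignored */
    add_wait_edge(1, 3);        /* the group waits for backend 3 */
    add_wait_edge(3, 1);        /* backend 3 waits for the group: a cycle */

    printf("deadlock: %s\n",
           has_cycle_from(1, 1, visited) ? "yes" : "no");
    return 0;
}

The real thing would of course have to work against the existing lock
manager structures; this only shows the graph projection.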

> If not, then we must prevent it.  The only design, other than what
> I've proposed here, that seems like it will do that consistently in
> all cases is to have the user backend lock every table that the child
> backend might possibly want to lock and retain those locks throughout
> the entire duration of the computation whether the child would
> actually need those locks or not.  I think that could be made to work,
> but there are two problems:
> 
> 1. Turing's theorem being what it is, predicting what catalog tables
> the child might lock is not necessarily simple.

Well, there's really no need to be absolutely general here. We're only
going to support a subset of functionality as parallelizable, and for
that subset figuring out which locks to acquire up front doesn't
require anything complicated.

It seems quite possible to combine a heuristic approach with improved
deadlock detection.

> 2. It might end up taking many more locks than necessary and holding
> them for much longer than necessary.  Right now, for example, a
> syscache lookup locks the table only if we actually need to read from
> it and releases the lock as soon as the actual read is complete.
> Under this design, every syscache that the parallel worker might
> conceivably consult needs to be locked for the entire duration of the
> parallel computation.  I would expect this to provoke a violent
> negative reaction from at least one prominent community member (and I
> bet users wouldn't like it much, either).

I see little reason to do this for system-level relations.
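
To spell out the contrast from the quoted paragraph above, roughly (the
helpers here are hypothetical stand-ins, not the actual syscache or
lock-manager API):

#include <stdio.h>

static void lock_rel(const char *rel)   { printf("lock   %s\n", rel); }
static void unlock_rel(const char *rel) { printf("unlock %s\n", rel); }

/* current behaviour: the lock is scoped to the actual catalog read */
static void
syscache_style_lookup(const char *catalog)
{
    lock_rel(catalog);
    /* ... read the one tuple we need ... */
    unlock_rel(catalog);
}

/*
 * preemptive scheme: everything the workers *might* consult is locked
 * up front and held until the computation finishes, needed or not
 */
static void
preemptive_parallel_run(const char *maybe_needed[], int n)
{
    for (int i = 0; i < n; i++)
        lock_rel(maybe_needed[i]);

    /* ... the entire parallel computation runs here ... */

    for (int i = 0; i < n; i++)
        unlock_rel(maybe_needed[i]);
}

int
main(void)
{
    const char *catalogs[] = {"pg_class", "pg_attribute", "pg_proc"};

    syscache_style_lookup("pg_class");
    preemptive_parallel_run(catalogs, 3);
    return 0;
}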

> So, I am still of the opinion that group locking makes sense.   While
> I appreciate the urge to avoid solving difficult problems where it's
> reasonably avoidable, I think we're in danger of spending more effort
> trying to avoid solving this particular problem than it would take to
> actually solve it.  Based on what I've done so far, I'm guessing that
> a complete group locking patch will be between 1000 and 1500 lines of
> code and that nearly all of the new logic will be skipped when it's
> not in use (i.e. no parallelism).  That sounds to me like a hell of a
> deal compared to trying to predict what locks the child might
> conceivably take and preemptively acquire them all, which sounds
> annoyingly tedious even for simple things (like equality operators)
> and nearly impossible for anything more complicated.

What I'm worried about is the performance impact of group locking when
it's not in use. The heavyweight locking code is already quite complex
and often a noticeable bottleneck...
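
For what it's worth, the kind of test that would let the new logic be
skipped when not in use presumably looks something like the sketch
below (names invented for illustration, not taken from the actual
patch); the question is whether even that stays cheap enough in paths
that are already contended:

#include <stdbool.h>
#include <stdio.h>

typedef struct ToyProc
{
    struct ToyProc *lockGroupLeader;    /* NULL when not in a parallel group */
} ToyProc;

/*
 * Would a lock held by "holder" block a conflicting-mode request from
 * "waiter"?  The group-locking exception costs the non-parallel case a
 * single NULL test.
 */
static bool
locks_conflict(const ToyProc *waiter, const ToyProc *holder,
               bool modes_conflict)
{
    if (!modes_conflict)
        return false;

    if (waiter->lockGroupLeader != NULL &&
        waiter->lockGroupLeader == holder->lockGroupLeader)
        return false;           /* same lock group: never conflicts */

    return true;
}

int
main(void)
{
    ToyProc leader = {NULL};
    ToyProc worker = {NULL};
    ToyProc other  = {NULL};

    leader.lockGroupLeader = &leader;   /* the leader points at itself */
    worker.lockGroupLeader = &leader;   /* the worker joins that group */

    printf("worker vs leader: %d\n", locks_conflict(&worker, &leader, true));
    printf("other  vs leader: %d\n", locks_conflict(&other, &leader, true));
    return 0;
}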

Greetings,

Andres Freund

-- 
 Andres Freund                     http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

