On Mon, Mar 16, 2015 at 02:38:34PM -0400, Robert Haas wrote:
> On Sun, Mar 15, 2015 at 2:39 AM, Noah Misch <n...@leadboat.com> wrote:
> > On Thu, Mar 12, 2015 at 11:21:37AM -0400, Robert Haas wrote:
> >> On Thu, Feb 19, 2015 at 1:19 AM, Noah Misch <n...@leadboat.com> wrote:
> >> > Rereading my previous message, I failed to make the bottom line clear: I
> >> > recommend marking eqsel etc. PROPARALLEL_UNSAFE but *not* checking an
> >> > estimator's proparallel before calling it in the planner.
> >>
> >> But what do these functions do that is actually unsafe?
> >
> > They call the oprcode function of any operator naming them as an estimator.
> > Users can add operators that use eqsel() as an estimator, and we have no
> > bound on what those operators' oprcode can do.  (In practice, parallel-unsafe
> > operators using eqsel() as an estimator will be rare.)
>
> Is there a reason not to make a rule that opclass members must be
> parallel-safe?  I ask because I think it's important that the process
> of planning a query be categorically parallel-safe.  If we can't count
> on that, life gets a lot more difficult - what happens when we're in a
> parallel worker and call a SQL or PL/pgsql function?
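To illustrate the estimator point above (a hypothetical sketch; `my_eq` and `===` are invented names): a user can attach eqsel to any operator, and eqsel may then apply that operator's own function (its oprcode) to the column's most-common-values statistics at plan time, so planning can end up running arbitrary user code.

```sql
-- Hypothetical user-defined operator naming eqsel as its estimator.
-- Nothing bounds what my_eq() itself does; it could be parallel-unsafe.
CREATE FUNCTION my_eq(int, int) RETURNS boolean
    AS 'SELECT $1 = $2' LANGUAGE sql;

CREATE OPERATOR === (
    LEFTARG   = int,
    RIGHTARG  = int,
    PROCEDURE = my_eq,
    RESTRICT  = eqsel   -- eqsel may call my_eq on MCV entries at plan time
);
```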
Neither that rule, nor its variant downthread, would hurt operator authors too
much.  To make the planner categorically parallel-safe, though, means limiting
evaluate_function() to parallel-safe functions.  That would dramatically slow
selected queries.  It's enough for the PL scenario if planning a parallel-safe
query is itself parallel-safe.  If the planner is parallel-unsafe when planning
a parallel-unsafe query, what would suffer?

> > RecordTransactionAbort() skips this for subtransaction aborts.  I would omit
> > it here, because a parallel worker abort is, in this respect, more like a
> > subtransaction abort than like a top-level transaction abort.
>
> No, I don't think so.  A subtransaction abort will be followed by
> either a toplevel commit or a toplevel abort, so any xlog written by
> the subtransaction will be flushed either synchronously or
> asynchronously at that time.  But for an aborting worker, that's not
> true: there's nothing to force the worker's xlog out to disk if it's
> ahead of the master's XactLastRecEnd.  If our XactLastRecEnd is behind
> the master's, then it doesn't matter what we do: an extra flush
> attempt is a no-op anyway.  If it's ahead, then we need it to be sure
> of getting the same behavior that we would have gotten in the
> non-parallel case.

Fair enough.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers