On 2014-01-21 16:34:45 -0800, Peter Geoghegan wrote:
> On Tue, Jan 21, 2014 at 3:43 PM, Andres Freund <and...@2ndquadrant.com> wrote:
> > I personally think this isn't worth complicating the code for.
> 
> You're probably right. However, I don't see why the bar has to be very
> high when we're considering the trade-off between taking some
> emergency precaution against having a PANIC shutdown, and an assured
> PANIC shutdown.

Well, the problem is that the tradeoff would very likely mean making
already complex code even more complex. None of the proposals, even the
one that just decreases the likelihood of a PANIC, look like they'd end
up being simple implementation-wise. And that additional complexity
would hurt robustness and stand in the way of things I find much more
important than this.

> Heikki said somewhere upthread that he'd be happy with
> a solution that only catches 90% of the cases. That is probably a
> conservative estimate. The schemes discussed here would probably be
> much more effective than that in practice. Sure, you can still poke
> holes in them. For example, there has been some discussion of
> arbitrarily large commit records. However, this is the kind of thing that
> just isn't that relevant in the real world. I believe that in practice
> the majority of commit records are all about the same size.
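
That last claim is plausible: a commit record is essentially a small
fixed-size header plus variable-length arrays (subtransaction XIDs,
dropped relation IDs) that are empty in the common case, so sizes
cluster tightly. A simplified, illustrative sketch of that shape (not
the actual xl_xact_commit layout):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative only, not PostgreSQL's real struct: a fixed header,
 * followed by variable-length arrays when the counts are nonzero.
 */
typedef struct CommitRecordSketch
{
    int64_t  commit_time;   /* timestamp of the commit */
    uint32_t nsubxacts;     /* usually 0: subtransaction XIDs follow */
    uint32_t nrels;         /* usually 0: dropped relation IDs follow */
} CommitRecordSketch;

static size_t
commit_record_size(uint32_t nsubxacts, uint32_t nrels)
{
    return sizeof(CommitRecordSketch)
        + nsubxacts * sizeof(uint32_t)    /* subtransaction XIDs */
        + nrels * sizeof(uint64_t);       /* dropped relation IDs */
}

int
main(void)
{
    /* The common case: a plain commit, no subxacts, no dropped rels. */
    printf("typical commit record: %zu bytes\n", commit_record_size(0, 0));
    /* The rare case that drives the worst-case bound. */
    printf("1000 subxacts:         %zu bytes\n", commit_record_size(1000, 0));
    return 0;
}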

Yes, realistically the boundary will be relatively low, but I don't
think that means we can disregard issues like the possibility that a
record might be bigger than wal_buffers. Not because ignoring it would
merely invite theoretical problems, but because such records rule out
several tempting approaches, e.g. extending the in-memory reservation
scheme from Heikki's scalability work to handle this.

Greetings,

Andres Freund

-- 
 Andres Freund                     http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

