On Wed, Sep 7, 2016 at 12:12 PM, Greg Stark <st...@mit.edu> wrote:
> On Wed, Sep 7, 2016 at 1:45 PM, Simon Riggs <si...@2ndquadrant.com> wrote:
>> On 6 September 2016 at 19:59, Tom Lane <t...@sss.pgh.pa.us> wrote:
>>
>>> The idea of looking to the stats to *guess* about how many tuples are
>>> removable doesn't seem bad at all. But imagining that that's going to be
>>> exact is folly of the first magnitude.
>>
>> Yes. Bear in mind I had already referred to allowing +10% to be safe,
>> so I think we agree that a reasonably accurate, yet imprecise
>> calculation is possible in most cases.
>
> That would all be well and good if it weren't trivial to do what
> Robert suggested. This is just a large unsorted list that we need to
> iterate through. Just allocate chunks of a few megabytes and when
> it's full allocate a new chunk and keep going. There's no need to get
> tricky with estimates and resizing and whatever.
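For illustration, the chunked-list approach Greg describes might look something like this. This is only a minimal sketch, not actual PostgreSQL code; the names (Chunk, DeadTupleList, dtlist_append) and the chunk size are made up, and a real implementation would use palloc in a suitable memory context rather than malloc:

```c
/* Sketch of an append-only list stored in fixed-size chunks: no upfront
 * size estimate is needed, and nothing is ever resized or copied.
 * All names here are hypothetical, not from the PostgreSQL tree. */
#include <stdlib.h>

#define CHUNK_ENTRIES (1024 * 1024)     /* entries per chunk; ~8 MB each */

typedef struct Chunk
{
    struct Chunk *next;                 /* next chunk in the list, or NULL */
    size_t        used;                 /* entries filled in this chunk */
    unsigned long items[CHUNK_ENTRIES]; /* stand-in for ItemPointerData */
} Chunk;

typedef struct
{
    Chunk  *head;                       /* first chunk, for iteration */
    Chunk  *tail;                       /* current chunk being filled */
    size_t  total;                      /* total entries across all chunks */
} DeadTupleList;

static void
dtlist_append(DeadTupleList *list, unsigned long item)
{
    if (list->tail == NULL || list->tail->used == CHUNK_ENTRIES)
    {
        /* current chunk is full (or no chunk yet): allocate a new one
         * and link it at the tail; earlier chunks are left untouched */
        Chunk *c = malloc(sizeof(Chunk));

        c->next = NULL;
        c->used = 0;
        if (list->tail)
            list->tail->next = c;
        else
            list->head = c;
        list->tail = c;
    }
    list->tail->items[list->tail->used++] = item;
    list->total++;
}
```

Iterating the whole list is just walking the chunk chain from head, which suits VACUUM's access pattern since the array is only ever appended to and then scanned.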
I agree. While the idea of estimating the right size sounds promising a priori, the estimate can go badly wrong and over- or under-allocate quite severely, so the risks outweigh the benefits once you consider the alternative of a dynamic allocation strategy. Unless the dynamic strategy has a bigger CPU impact than expected, I believe it's the superior approach.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers