On Wed, May 2, 2018 at 11:06 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> -1 from me. What about the case where only some tuples are massive?
>
> Well, what about it?  If there are just a few wide tuples, then the peak
> memory consumption will depend on how many of those happen to be in memory
> at the same time ... but we have zero control over that in the merge
> phase, so why sweat about it here?  I think Heikki's got a good idea about
> setting a lower bound on the number of tuples we'll hold in memory during
> run creation.

We don't have control over it, but I'm not excited about specifically
going out of our way to always use more memory in dumptuples() because
it's no worse than the worst case for merging. I am supportive of the
idea of making sure that the amount of memory left over for tuples is
reasonably in line with memtupsize at the point that the sort starts,
though.

-- 
Peter Geoghegan
