On Wed, May 2, 2018 at 11:06 AM, Tom Lane wrote:
>> -1 from me. What about the case where only some tuples are massive?
>
> Well, what about it? If there are just a few wide tuples, then the peak
> memory consumption will depend on how many of those happen to be in memory.
>
On Wed, May 2, 2018 at 10:43 AM, Heikki Linnakangas wrote:
> I'm not sure what you could derive that from, to make it less arbitrary. At
> the moment, I'm thinking of just doing something like this:
>
> /*
> * Minimum amount of memory reserved to hold the sorted tuples in
> *
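
The quoted snippet is cut off here. Purely as an illustration of the idea
(reserving a fixed floor of memory for the tuples themselves, so the tape
buffers cannot eat the entire work_mem budget), such a definition might look
roughly like this; the name and the 32 kB figure are assumptions drawn from
the rest of the thread, not the actual patch:

    /*
     * Minimum amount of memory reserved to hold the sorted tuples themselves,
     * so that the tape buffers can never consume the whole work_mem budget.
     * (Illustrative name and value only.)
     */
    #define MIN_TUPLE_MEMORY    (32 * 1024L)

Where such a floor would be enforced is sketched after the last quote below.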
On Wed, May 2, 2018 at 10:43 AM, Heikki Linnakangas wrote:
> Independently of this, perhaps we should put in a special case in
> dumptuples(), so that it would never create a run with fewer than maxTapes
> tuples. The rationale is that you'll need to hold that many tuples in memory.
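
A rough sketch of that special case, with made-up names (this is not the
actual dumptuples() code; memtupcount stands in for the count of tuples
currently held in memory):

    #include <stdbool.h>

    /*
     * Sketch: refuse to dump a new run until at least maxTapes tuples have
     * accumulated, because the merge phase has to hold one tuple per input
     * tape in memory anyway.  "lackmem" stands in for the usual over-budget
     * check.
     */
    static bool
    should_dump_run(int memtupcount, int maxTapes, bool lackmem)
    {
        if (memtupcount < maxTapes)
            return false;       /* keep accumulating, even if over budget */
        return lackmem;         /* otherwise dump a run once memory runs short */
    }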
On Wed, May 2, 2018 at 8:38 AM, Heikki Linnakangas wrote:
> With small work_mem values, maxTapes is always 6, so tapeSpace is 48 kB.
> With a small enough work_mem, 84 kB or below in this test case, there is not
> enough memory left at this point, so we don't subtract tapeSpace.
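
For what it's worth, the arithmetic can be replayed in a few lines. This
standalone snippet just models the numbers quoted above; the 8 kB per-tape
figure is inferred from 48 kB / 6, and the memtuples size is a made-up
stand-in for the space already consumed by the in-memory tuple array:

    #include <stdio.h>
    #include <stdint.h>

    int
    main(void)
    {
        int64_t allowedMem     = 84 * 1024;  /* work_mem at the reported threshold */
        int64_t availMem       = allowedMem;
        int64_t memtuplesSpace = 40 * 1024;  /* stand-in for the memtuples array */
        int     maxTapes       = 6;
        int64_t tapeSpace      = maxTapes * 8 * 1024;  /* 48 kB, as quoted above */

        /* The pattern being described: charge the tape buffers only if they "fit". */
        if (tapeSpace + memtuplesSpace < allowedMem)
            availMem -= tapeSpace;
        else
            printf("tape buffers never charged; the accounting and the real "
                   "allocations no longer match\n");

        printf("availMem = %lld of %lld\n",
               (long long) availMem, (long long) allowedMem);
        return 0;
    }

With 48 kB + 40 kB >= 84 kB the subtraction is skipped, so 48 kB of tape
buffers get allocated without ever being charged against availMem.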
On Wed, May 2, 2018 at 11:38 AM, Heikki Linnakangas wrote:
> To fix, I propose that we change the above so that we always subtract
> tapeSpace, but if there is less than e.g. 32 kB of memory left after that
> (including if it went below 0), then we bump availMem back up to 32 kB.
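
Read literally, the proposal boils down to something like this (a sketch, not
the committed change; availMem and tapeSpace follow the names used above, and
32 kB is the example floor from the quote):

    #include <stdint.h>

    /* Always charge the tape buffers, then clamp to a minimum tuple budget. */
    static int64_t
    charge_tape_buffers(int64_t availMem, int64_t tapeSpace)
    {
        availMem -= tapeSpace;       /* always subtract, even if it goes negative */
        if (availMem < 32 * 1024)    /* example 32 kB floor from the proposal */
            availMem = 32 * 1024;
        return availMem;
    }

In the 84 kB case above, that would leave 36 kB accounted for the tuples
instead of leaving 48 kB of tape buffers completely unaccounted for.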