On 3/7/15 4:49 PM, Andres Freund wrote:
> On 2015-03-05 15:28:12 -0600, Jim Nasby wrote:
>> I was thinking the simpler route of just repalloc'ing... the memcpy would
>> suck, but much less so than the extra index pass. 64M gets us 11M tuples,
>> which probably isn't very common.
>
> That has the chance of considerably increasing the peak memory usage
> though, as you obviously need both the old and new allocation during the
> repalloc().
>
> And in contrast to the unused memory at the tail of the array, which
> will usually not be actually allocated by the OS at all, this is memory
> that's actually read/written respectively.
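To make the tradeoff concrete, here's a minimal sketch (plain C, not the actual
vacuum code; the names and the 6-byte tuple-id struct are just stand-ins).
Strategy A grows the dead-tuple array by doubling, so the old and new chunks
are both live while the contents are copied. Strategy B sizes the array for
the whole budget once; with lazy allocation the pages past the last entry
actually stored are never written, so the OS never has to back them:

#include <stdlib.h>

typedef struct { unsigned short bi_hi, bi_lo, offnum; } tuple_id; /* 6 bytes, like an item pointer */

/* Strategy A: double the array whenever it fills up. */
static tuple_id *
grow_dead_tuples(tuple_id *arr, size_t *allocated, size_t needed)
{
    if (needed > *allocated)
    {
        size_t    newsize = *allocated ? *allocated * 2 : 1024;
        tuple_id *newarr;

        while (newsize < needed)
            newsize *= 2;
        /* old and new chunks are both live while realloc copies the data */
        newarr = realloc(arr, newsize * sizeof(tuple_id));
        if (newarr == NULL)
            abort();            /* a real implementation would report an error */
        *allocated = newsize;
        arr = newarr;
    }
    return arr;
}

/* Strategy B: size the array for the full budget once, before the scan. */
static tuple_id *
alloc_dead_tuples_upfront(size_t budget_bytes, size_t *allocated)
{
    *allocated = budget_bytes / sizeof(tuple_id);   /* 64MB -> ~11M entries */
    return malloc(*allocated * sizeof(tuple_id));
}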

That leaves me wondering why we bother with dynamic resizing in other areas (sorts, for example) then. Why not just palloc work_mem up front and be done with it? What makes those cases different?
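Roughly, the dynamic-resizing pattern in question looks like this (a sketch
with invented names such as sort_state, not the actual tuplesort.c code):
the in-memory tuple array grows as tuples arrive, and actual usage is
tracked against the work_mem budget, spilling to disk once it's exceeded.

#include <stdlib.h>

typedef struct
{
    void   **tuples;            /* growable array of in-memory tuples */
    size_t   ntuples;
    size_t   capacity;
    size_t   mem_used;          /* bytes accounted so far */
    size_t   work_mem_bytes;    /* budget, analogous to work_mem */
} sort_state;

/* Returns 0 on success, 1 when the caller should switch to an external sort. */
static int
sort_add_tuple(sort_state *st, void *tuple, size_t tuple_size)
{
    if (st->ntuples == st->capacity)
    {
        size_t  newcap = st->capacity ? st->capacity * 2 : 1024;
        void  **newarr = realloc(st->tuples, newcap * sizeof(void *));

        if (newarr == NULL)
            return 1;
        st->mem_used += (newcap - st->capacity) * sizeof(void *);
        st->tuples = newarr;
        st->capacity = newcap;
    }

    st->mem_used += tuple_size;
    if (st->mem_used > st->work_mem_bytes)
        return 1;               /* over budget: start spilling to disk */

    st->tuples[st->ntuples++] = tuple;
    return 0;
}

In this sketch the per-tuple memory use is variable, so usage has to be
tracked as the sort runs rather than being converted into a fixed element
count up front.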

> I have to say, I'm rather unconvinced that it's worth changing stuff
> around here. If overcommit is enabled, vacuum won't fail unless the
> memory is actually used (=> no problem). If overcommit is disabled and
> you get memory allocation failures, you're probably already running awfully
> close to the maximum of your configuration and you're better off
> adjusting it. I'm not aware of any field complaints about this and thus
> I'm not sure it's worth fiddling with this.

Perhaps; Noah seems to be the only one who's seen this.
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com

