On 4/28/15 7:11 AM, Robert Haas wrote:
On Fri, Apr 24, 2015 at 4:09 PM, Jim Nasby <jim.na...@bluetreble.com> wrote:
>>> When I read that I think about something configurable at
>>> relation-level. There are cases where you may want to have more
>>> granularity of this information at block level by having the VM slots
>>> track fewer blocks than 32, and vice-versa.
>>
>> What are those cases?  To me that sounds like making things
>> complicated to no obvious benefit.
>
> Tables that get few/no dead tuples, like bulk insert tables. You'll have
> large sections of blocks with the same visibility.
I don't see any reason why that would require different granularity.

Because in those cases it would be trivial to drop XMIN out of the tuple headers. For a warehouse with narrow rows that could be a significant win. Moreover, we could also move XMAX to the page level, if we accept that invalidating any one tuple on a page means moving all of them. In a warehouse situation that's probably OK as well.
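
To make the size argument concrete, here's a rough back-of-the-envelope sketch (not anything proposed on this thread; the struct name, layout, and tuple count are hypothetical) comparing today's per-tuple xmin/xmax against a single page-level pair for pages whose tuples all share visibility:

    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t TransactionId;

    /* Hypothetical: visibility stored once per page instead of per tuple. */
    typedef struct PageVisibility
    {
        TransactionId xmin;  /* inserting transaction for every tuple on the page */
        TransactionId xmax;  /* deleting transaction; invalidating one tuple would
                              * mean moving the rest off the page first */
    } PageVisibility;

    int
    main(void)
    {
        /* Rough numbers only: 8 bytes of xmin+xmax per tuple today,
         * versus 8 bytes per page in this sketch. */
        const int per_tuple_xid_bytes = 2 * sizeof(TransactionId);
        const int tuples_per_page = 200;  /* plausible for narrow rows on an 8K page */

        int per_tuple_total = per_tuple_xid_bytes * tuples_per_page;
        int page_level_total = sizeof(PageVisibility);

        printf("per-tuple xmin/xmax:  %d bytes/page\n", per_tuple_total);   /* 1600 */
        printf("page-level xmin/xmax: %d bytes/page\n", page_level_total);  /*    8 */
        return 0;
    }

For narrow warehouse rows that's on the order of a 20% reduction in heap size before you even touch the rest of the tuple header.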

That said, I don't think this is the first place to focus for shrinking our on-disk format; reducing cleanup bloat would probably be a lot more useful.

Did you or Jan have more detailed info from the test he ran about where our 80% overhead was ending up? That would remove a lot of speculation here...
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com

