For reference, what I'm picturing is this: when a table is compressed it's first marked read-only, which bars any new tuples from being inserted and any existing tuples from being deleted. Then it's frozen, and any pages which contain tuples which can't yet be frozen are waited on until they can be. When that step finishes, every tuple is guaranteed to be fully frozen.
Then the relation is rewritten in compressed form. Each block is compressed one by one and written one after the other to disk. At the same time a new fork is written which contains a pointer to each block; it could just be a directly addressed array of offsets and lengths. Every block lookup then has to first load the relevant page of the indirection map, then read the appropriate section of the original file and decompress it into shared buffers (a rough sketch of what I mean is at the end of this mail).

From a programming point of view this is nice and simple. From a user's point of view it's a bit of a pain, since it means you have to rewrite your whole table when you want to compress it, and you have to rewrite it all again if you decide you want to set it back to read-write. My experience with people who have very large tables is that they design their whole process around the goal of avoiding having to move the data once it's written.
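To make the lookup path concrete, here's a minimal sketch of what I mean, written as ordinary standalone C rather than backend code. The entry layout, the names (CompressedBlockEntry, read_compressed_block), the use of stdio instead of the smgr/bufmgr layer, and zlib as the compression method are all just assumptions for the sake of illustration; the sketch also assumes a compressed block never ends up larger than BLCKSZ.

#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>

#define BLCKSZ 8192

/* One directly addressed entry per block in the indirection fork:
 * where the compressed block starts in the main file and how long it is. */
typedef struct CompressedBlockEntry
{
    long    offset;     /* byte offset of the compressed block */
    size_t  length;     /* compressed length in bytes */
} CompressedBlockEntry;

/*
 * Read block 'blkno' of a compressed relation into 'buffer' (BLCKSZ bytes).
 * 'mapfile' is the indirection fork, 'datafile' the compressed main fork.
 * Returns 0 on success, -1 on error.
 */
static int
read_compressed_block(FILE *mapfile, FILE *datafile, long blkno, char *buffer)
{
    CompressedBlockEntry entry;
    unsigned char   compressed[BLCKSZ];  /* assume no expansion past BLCKSZ */
    uLongf          destlen = BLCKSZ;

    /* 1. Look up the block's offset and length in the indirection map. */
    if (fseek(mapfile, blkno * (long) sizeof(entry), SEEK_SET) != 0)
        return -1;
    if (fread(&entry, sizeof(entry), 1, mapfile) != 1)
        return -1;

    /* 2. Read the compressed section of the original file. */
    if (entry.length > sizeof(compressed))
        return -1;
    if (fseek(datafile, entry.offset, SEEK_SET) != 0)
        return -1;
    if (fread(compressed, 1, entry.length, datafile) != entry.length)
        return -1;

    /* 3. Decompress it into the caller's buffer (shared buffers, in reality). */
    if (uncompress((Bytef *) buffer, &destlen, compressed, (uLong) entry.length) != Z_OK)
        return -1;

    return (destlen == BLCKSZ) ? 0 : -1;
}

The write side would just be the mirror image: compress each block, append it to the data file, and append its offset and length to the indirection fork.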