On Mar 10, 2006, at 11:54 AM, Tom Lane wrote:
Stefan Kaltenbrunner <[EMAIL PROTECTED]> writes:
3. vacuuming this table - it turned out that VACUUM FULL is completely
unusable on a table of this size (which I actually expected beforehand),
not only due to the locking involved but rather due to a gigantic memory
requirement and unbelievable slowness.
Sure, that was mostly meant as an experiment; if I had to do this on a
production database I would most likely use CLUSTER to get the desired
effect (which in my case was purely getting back the disk space wasted
by dead tuples).
Yeah, the VACUUM FULL algorithm is really designed for situations where
just a fraction of the rows have to be moved to re-compact the table.
It might be interesting to teach it to abandon that plan and go to a
CLUSTER-like table rewrite once the percentage of dead space is seen to
reach some suitable level. CLUSTER has its own disadvantages though
(2X peak disk space usage, doesn't work on core catalogs, etc).
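For reference, the rewrite-based approach mentioned above is already available by hand. A minimal sketch, assuming a hypothetical table `big_table` with a primary-key index `big_table_pkey` (names made up for illustration); note the syntax differs by release - `CLUSTER indexname ON tablename` in the 8.1-era servers under discussion, `CLUSTER tablename USING indexname` in later releases:

```sql
-- Rewrite the table in index order. This reclaims all dead space,
-- but needs roughly 2x the table's size free on disk and holds an
-- ACCESS EXCLUSIVE lock for the duration of the rewrite.
CLUSTER big_table_pkey ON big_table;

-- CLUSTER does not update planner statistics, so re-analyze afterwards.
ANALYZE big_table;
```

The dump-and-restore workaround mentioned below trades the 2X disk headroom for downtime, since the table is unavailable between the dump and the completed restore.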
I get bitten by this quite often (customer machines, one giant table,
purge out a lot of old data).
CLUSTER is great for that, given the headroom, though I've often
resorted to a dump and restore because I haven't had the headroom
for CLUSTER, and it's a lot less downtime than a full vacuum.
While the right fix there is to redo the application engine side to use
table partitioning, I keep wondering whether it would be possible to
move rows near the end of the table to the beginning in one non-locking
phase (vacuum to populate the FSM with free space near the beginning of
the table, touch rows starting at the end of the table, repeat) and then
finish off with a VACUUM FULL to tidy up the remainder and truncate the
files (or a simpler "lock the table and truncate anything unused at the
end").
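The repeat-until-compact phase described above can be sketched roughly as follows. This is only an illustration of the idea, not a tested procedure; the table name, column name, and block number are hypothetical, and selecting "rows near the end" by `ctid` range as shown assumes tid comparison operators are available (older servers may need another way to pick out the tail rows):

```sql
-- Phase 1 (non-blocking, repeatable): a plain VACUUM records free
-- space near the start of the table in the free space map.
VACUUM big_table;

-- Phase 2: touch rows stored near the end of the file. Under MVCC,
-- a no-op UPDATE writes new row versions, which should land in the
-- free space recorded above, leaving the tail pages dead.
UPDATE big_table
   SET some_col = some_col            -- no-op update, forces a new tuple
 WHERE ctid >= '(100000,0)';          -- block number chosen near the end

-- Repeat phases 1-2 until the tail of the file holds only dead
-- tuples, then let a final (much cheaper) VACUUM FULL - or a
-- truncate-only step - return the empty pages to the OS.
```

Note that each UPDATE pass still bloats indexes and generates WAL, so this trades a long exclusive lock for extra I/O spread over time.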
Cheers,
Steve