2) Through no fault of its own, autovacuum's granularity is the table
level. For people dealing with non-trivial amounts of data (and we're not
talking gigabytes or terabytes here), this is a serious drawback. Vacuuming
at peak times can cause very intense IO bursts -- even with the
enhancements in 8.0. I don't think the solution to the problem is to give
users the impression that it is solved and then vacuum their tables during
peak periods. I cannot stress this enough.
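(For readers who did not follow the 8.0 cycle: the enhancements referred to here are, I assume, the cost-based vacuum delay settings. A minimal sketch of the throttling idea, with invented names rather than the actual backend code:

    #include <unistd.h>

    static int cost_balance = 0;          /* IO cost accumulated so far */
    static const int cost_limit = 200;    /* throttle threshold */
    static const int cost_delay_ms = 10;  /* nap once the limit is hit */

    /* Hypothetical hook, called after each page vacuum touches. */
    static void
    vacuum_throttle(int page_cost)
    {
        cost_balance += page_cost;
        if (cost_balance >= cost_limit)
        {
            /* Spread the IO over time instead of one solid burst. */
            usleep(cost_delay_ms * 1000);
            cost_balance = 0;
        }
    }

This smooths the bursts, but vacuum still visits every page of the relation, so it does not change the table-level granularity problem described above.)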


I completely agree with Gavin - integrating this kind of thing into the background writer, or integrating it with the FSM, would be the ideal solution.

I guess everybody who has already vacuumed a 2 TB relation will agree here. VACUUM is not a problem for small "my cat Minka" databases, but it has been a real problem on large, heavily loaded databases. I have even seen people split large tables into pieces and join them back together with a view, just to avoid long vacuums and long CREATE INDEX operations (I am not joking - this is serious).

PostgreSQL is used more and more on really large boxes, so this is an increasing problem. Gavin's approach using a vacuum bitmap seems to be a good one. An alternative would be some sort of vacuum queue containing the set of pages reported by the writing processes (the background writer or the backends themselves).
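
To make the bitmap idea concrete, something along these lines, purely illustrative and with made-up names (the real thing would have to live in shared memory and be crash-safe):

    #include <stdint.h>

    #define REL_MAX_BLOCKS 65536    /* toy fixed-size relation */

    typedef struct VacuumBitmap
    {
        /* one bit per heap page; set = page may hold dead tuples */
        uint8_t dirty[REL_MAX_BLOCKS / 8];
    } VacuumBitmap;

    /* Writing processes flag each page they dirty. */
    static void
    mark_page_dirty(VacuumBitmap *bm, uint32_t blkno)
    {
        bm->dirty[blkno / 8] |= (uint8_t) (1 << (blkno % 8));
    }

    /* Vacuum asks before reading a page, and clears the bit after. */
    static int
    page_needs_vacuum(const VacuumBitmap *bm, uint32_t blkno)
    {
        return (bm->dirty[blkno / 8] >> (blkno % 8)) & 1;
    }

A vacuum pass would then touch only the pages whose bits are set and skip the rest of the relation entirely, which is exactly what you want on a 2 TB table where only a small fraction of the pages has changed since the last run.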

        best regards,

                hans

--
Cybertec Geschwinde u Schoenig
Schoengrabern 134, A-2020 Hollabrunn, Austria
Tel: +43/664/393 39 74
www.cybertec.at, www.postgresql.at

