On Thu, 16 Jun 2005, Hans-Jürgen Schönig wrote:

> > 2) By no fault of its own, autovacuum's level of granularity is the table
> > level. For people dealing with non-trivial amounts of data (and we're not
> > talking gigabytes or terabytes here), this is a serious drawback. Vacuum
> > at peak times can cause very intense IO bursts -- even with the
> > enhancements in 8.0. I don't think the solution to the problem is to give
> > users the impression that it is solved and then vacuum their tables during
> > peak periods. I cannot stress this enough.
>
>
> I completely agree with Gavin - integrating this kind of thing into the
> background writer or integrating it with the FSM would be the ideal solution.
>
> I guess everybody who has already vacuumed a 2 TB relation will agree
> here. VACUUM is not a problem for small "my cat Minka" databases.
> However, it has been a real problem on large, heavy-load databases. I
> have even seen people splitting large tables and joining them back
> together with a view to avoid long vacuums and long CREATE INDEX
> operations (I am not joking - this is serious).
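The manual workaround described above - splitting one big table into several smaller ones and stitching them back together with a UNION ALL view, so each piece can be vacuumed or reindexed on its own - can be sketched as generated DDL. This is only an illustration of the pattern, not anyone's actual schema; the table and column names ("orders", "id", "payload") and the helper function are invented for the example.

```python
def partition_ddl(table: str, columns: str, n_parts: int) -> str:
    """Generate DDL that splits `table` into n_parts child tables and
    recreates the original name as a UNION ALL view over them, so each
    child can be vacuumed or reindexed independently."""
    stmts = []
    for i in range(n_parts):
        stmts.append(f"CREATE TABLE {table}_p{i} ({columns});")
    union = "\nUNION ALL\n".join(
        f"SELECT * FROM {table}_p{i}" for i in range(n_parts)
    )
    stmts.append(f"CREATE VIEW {table} AS\n{union};")
    return "\n".join(stmts)

print(partition_ddl("orders", "id integer, payload text", 4))
```

Running this prints four CREATE TABLE statements plus one CREATE VIEW; the cost is that the application (or triggers/rules) must route inserts to the right child table, which is exactly the kind of hand-rolled complexity being lamented here.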

I think this gets away from my point a little. People with 2 TB tables can
take care of themselves, as can people who've taken the time to partition
their tables to speed up vacuum. I'm more concerned about the majority of
people who fall in the middle -- between the hobbyist and the high end
data centre.

Thanks,

Gavin

---------------------------(end of broadcast)---------------------------
TIP 8: explain analyze is your friend