Re: [PERFORM] How to avoid vacuuming a huge logging table

2007-02-21 Thread D'Arcy J.M. Cain
On Wed, 21 Feb 2007 21:58:33 - "Greg Sabino Mullane" <[EMAIL PROTECTED]> wrote:
> SELECT 'vacuum verbose analyze '||quote_ident(nspname)||'.'||quote_ident(relname)||';'
> FROM pg_class c, pg_namespace n
> WHERE relkind = 'r'
> AND relnamespace = n.oid
> AND nspname = 'novac'
> ORD
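The archive truncates the quoted query above. A sketch of what a generator query of this shape looks like in full, assuming the cut-off "ORD" is the start of an ORDER BY clause (the schema name 'novac' is from the post itself):

```sql
-- Emit one per-table VACUUM command for every ordinary table
-- in the schema 'novac'; the output lines can then be fed back to psql.
SELECT 'vacuum verbose analyze '
       || quote_ident(nspname) || '.' || quote_ident(relname) || ';'
FROM pg_class c, pg_namespace n
WHERE c.relkind = 'r'            -- ordinary tables only
  AND c.relnamespace = n.oid
  AND n.nspname = 'novac'
ORDER BY relname;                -- assumed completion of the truncated tail
```

Running this in psql with `\t` (tuples-only) output produces a script of individual VACUUM statements, which is the usual way to vacuum a set of tables while skipping others.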

Re: [PERFORM] How to avoid vacuuming a huge logging table

2007-02-21 Thread Greg Sabino Mullane
-----BEGIN PGP SIGNED MESSAGE-----
Hash: RIPEMD160

A minor correction to my earlier post: I should have specified the schema as well in the vacuum command for tables with the same name in different schemas:
SET search_path = 'pg_catalog'; SELECT set_config('search_path', current_setting('se
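The preview cuts off mid-call, but the point of the correction is that VACUUM commands should be schema-qualified so two tables with the same name in different schemas are not confused. A sketch of one way to emit schema-qualified commands (this uses `format()`, which appeared in later PostgreSQL releases than the 2007-era version under discussion, so it is an illustration of the idea rather than the poster's exact SQL):

```sql
-- Generate fully schema-qualified VACUUM commands for all user tables,
-- so identically named tables in different schemas stay distinct.
SELECT format('VACUUM VERBOSE ANALYZE %I.%I;', n.nspname, c.relname)
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');
```

`%I` quotes each identifier the same way `quote_ident()` does, which keeps the generated commands safe for unusual table or schema names.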

Re: [PERFORM] How to avoid vacuuming a huge logging table

2007-02-21 Thread Greg Sabino Mullane
-----BEGIN PGP SIGNED MESSAGE-----
Hash: RIPEMD160

> Take a really different approach. Log in CSV format to text files
> instead, and only import the date ranges we need "on demand" if a report
> is requested on the data.
Seems like more work than a separate database to me. :) > 2. We could fi
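The CSV-on-demand idea quoted above can be sketched as follows; the file path and column layout here are hypothetical, not from the thread:

```sql
-- Log lines land in date-stamped CSV files on disk. When a report is
-- requested, only the needed date range is loaded into a scratch table.
CREATE TEMP TABLE log_import (
    logged_at  timestamptz,
    level      text,
    message    text
);

-- Load a single day's file (hypothetical path); repeat per day in range.
COPY log_import FROM '/var/log/app/2007-02-21.csv' WITH CSV;

-- Report queries run against log_import; the temp table vanishes at
-- session end, so nothing accumulates that would ever need vacuuming.
```

The trade-off the reply points at: this avoids vacuum entirely, but moves the bookkeeping (file naming, range selection, import scripting) into the application.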

[PERFORM] How to avoid vacuuming a huge logging table

2007-02-21 Thread Mark Stosberg
Our application has a table that is only logged to, and infrequently used for reporting. There are generally no deletes or updates. Recently, the sheer size (an estimated 36 million rows) caused a serious problem because it prevented a "vacuum analyze" on the whole database from finishing in a timely
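The usual workaround for this situation, and the one the replies in this thread converge on, is to stop running a single database-wide VACUUM and instead vacuum tables individually so the huge log table can be handled on its own schedule. A minimal sketch, with hypothetical table names:

```sql
-- Vacuum the ordinary tables one at a time during normal maintenance.
VACUUM ANALYZE orders;
VACUUM ANALYZE customers;

-- The append-only log table sees no updates or deletes, so it produces
-- almost no dead rows; vacuum it separately and far less often,
-- during an off-peak window.
VACUUM ANALYZE huge_log_table;
```

Because the table is insert-only, the occasional vacuum is needed mainly for planner statistics and transaction-ID maintenance, not for reclaiming dead space.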