PS: The PGSQL version is 8.2.7. (BTW, which catalog view contains the back-end version number?)
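(For what it's worth, the closest thing to a catalog view holding the version is `pg_settings`; the `version()` function and `SHOW` give it as well:)

```sql
-- version() reports the full back-end version string.
SELECT version();

-- SHOW, or the pg_settings system view, exposes just the version number.
SHOW server_version;
SELECT setting FROM pg_settings WHERE name = 'server_version';
```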
On Mon, Sep 29, 2008 at 11:37 AM, Peter Kovacs <[EMAIL PROTECTED]> wrote:
> Hi,
>
> We have a number of automated performance tests (to test our own code)
> involving PostgreSQL. Test cases are supposed to drop and recreate
> tables each time they run.
>
> The problem is that some of the tests show a linear performance
> degradation over time. (We have data going back three months.) We have
> established that some element(s) of our test environment must be the
> culprit for the degradation. Since rebooting the test machine did not
> revert speeds to the baselines recorded three months ago, we turned our
> attention to the database as the only element of the environment that
> persists across reboots. Recreating the entire PGSQL cluster did cause
> speeds to revert to the baselines.
>
> I understand that vacuuming solves performance problems related to
> "holes" left in data files when tables are updated. Do I understand
> correctly that if tables are dropped and recreated at the beginning of
> each test case, the holes in the data files are reclaimed, so there is
> no need for vacuuming from a performance perspective?
>
> I will double-check whether the problematic test cases do indeed
> always drop their tables, but assuming they do, are there any factors
> in the database (apart from table updates) that can cause a linear
> slow-down with repetitive tasks?
>
> Thanks
> Peter

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin
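One thing that may be worth checking in this scenario (a sketch of a diagnostic, not a confirmed diagnosis): repeated DROP/CREATE TABLE inserts and deletes rows in the system catalogs themselves (pg_class, pg_attribute, pg_depend, ...), and those dead rows are only reclaimed by vacuuming. So even if every user table is fresh on each run, the catalogs can grow steadily on an 8.2 cluster that is never vacuumed. Something like this shows whether the catalogs have bloated:

```sql
-- Check the on-disk size of the system catalogs most affected by
-- repeated DROP/CREATE TABLE. Steadily growing relpages over the
-- test history would point at catalog bloat.
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname IN ('pg_class', 'pg_attribute', 'pg_type', 'pg_depend')
ORDER BY relpages DESC;

-- Reclaim dead catalog rows (run as a superuser; VACUUM FULL takes
-- exclusive locks, so do this while the tests are idle).
VACUUM FULL pg_catalog.pg_class;
VACUUM FULL pg_catalog.pg_attribute;
```

If the relpages figures shrink substantially after the vacuum and the test timings recover, catalog bloat rather than user-table holes would be the culprit.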