We have tables which we archive and trim every day. That is, the main table, 
which receives the daily inserts and updates, is kept small, and there is a 
parallel archive table holding all the older data, going back up to a year.
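For context, the layout is roughly like this (table and column names are made
up for illustration, not our real schema):

    -- main table: receives the daily inserts/updates, kept small
    CREATE TABLE main_table (
        id          bigint PRIMARY KEY,
        created_at  timestamptz NOT NULL,
        payload     text
    );

    -- archive table: same structure, holds up to a year of older rows
    CREATE TABLE archive_table (LIKE main_table INCLUDING DEFAULTS);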

In the past we noticed that the bulk transfer from the main table to the 
archive table takes a very long time, so we decided to do it in three steps: 
(1) drop the indexes on the archive table, (2) insert a week's worth of data 
into the archive table, and (3) recreate the indexes. This proved to take much 
less time than having every inserted row update the indexes.
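In SQL terms the archiving job does roughly the following (index names, table
names and the date column are placeholders; in practice this is scripted):

    -- (1) drop the indexes on the archive table
    DROP INDEX archive_idx_1;
    DROP INDEX archive_idx_2;
    -- ... and so on for all 8 indexes

    -- (2) move a week's worth of rows from the main table to the archive
    BEGIN;
    INSERT INTO archive_table
        SELECT * FROM main_table
        WHERE created_at < now() - interval '7 days';
    DELETE FROM main_table
        WHERE created_at < now() - interval '7 days';
    COMMIT;

    -- (3) recreate the indexes
    CREATE INDEX archive_idx_1 ON archive_table (created_at);
    CREATE INDEX archive_idx_2 ON archive_table (id);
    -- ... and so on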

However, this week we finally upgraded from PG 8.3 to 9.1, and suddenly the 
archiving process takes far longer than it used to: 14:30 hours for the most 
important table, spent on index creation alone.

The same work running on the same data in 8.3 on a much weaker PC took merely 
4:30 hours.

There are 8 indexes on the archive table.

The main table currently holds (after archiving) 7,805,009 records.
The archive table currently holds 177,328,412 records.

Has there been a major change in index creation that would cause 9.1 to do it 
this much slower? Should I go back to simply copying the data over, or is the 
whole approach breaking down?


TIA,
Herouth
