I got a report today on the IRC channel about a pg_upgrade problem when upgrading clusters that contain indexes which exist but are invalid.
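(For context, an invalid index is simply one whose pg_index.indisvalid flag is false. As a rough, untested sketch, with throwaway object names, this is one way to end up with such an index and then find it in the catalogs:)

    -- CREATE INDEX CONCURRENTLY fails on the duplicate data below, but
    -- leaves the half-built index behind, marked invalid.
    CREATE TABLE tab (col integer);
    INSERT INTO tab VALUES (1), (1);
    CREATE UNIQUE INDEX CONCURRENTLY idx ON tab (col);

    -- List every invalid index in the current database.
    SELECT n.nspname, c.relname
    FROM pg_catalog.pg_index i
         JOIN pg_catalog.pg_class c ON c.oid = i.indexrelid
         JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
    WHERE NOT i.indisvalid;

Running something like that last query against the old cluster before an upgrade would at least show whether it is affected.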
For example, if you run CREATE INDEX CONCURRENTLY and shut down the server while it is still building, the index is left behind as INVALID. From our CREATE INDEX docs:

    If a problem arises while scanning the table, such as a uniqueness
    violation in a unique index, the CREATE INDEX command will fail but
    leave behind an 'invalid' index. This index will be ignored for
    querying purposes because it might be incomplete; however it will
    still consume update overhead. The psql \d command will report such
    an index as INVALID:

        postgres=# \d tab
               Table "public.tab"
         Column |  Type   | Modifiers
        --------+---------+-----------
         col    | integer |
        Indexes:
            "idx" btree (col) INVALID

    The recommended recovery method in such cases is to drop the index
    and try again to perform CREATE INDEX CONCURRENTLY.  (Another
    possibility is to rebuild the index with REINDEX.  However, since
    REINDEX does not support concurrent builds, this option is unlikely
    to seem attractive.)

The problem is that this invalid state is not dumped by pg_dump, meaning pg_upgrade will restore the index as valid.

There are a few possible fixes. The first would be to have pg_upgrade throw an error on any invalid index in the old cluster. Another option would be to preserve the invalid state in pg_dump --binary-upgrade.

I also need help figuring out how to communicate this to users, since our next minor release is still some way off.

--
  Bruce Momjian  <br...@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + It's impossible for everything to be true. +