On Wednesday, September 26, 2012 02:39:36 PM Michael Paquier wrote:
> Do you think it is acceptable to consider that the user has to do the
> cleanup of the old or new index himself if there is a failure?

The problem I see is that if you want the thing to be efficient, you might end up doing step 1) for all (or a bunch of) indexes, then step 2), and so on. In that case you can end up with loads of invalid indexes lying around.
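For context, such leftovers are visible in the catalogs today; a minimal sketch (plain SQL against the existing pg_index/pg_class catalogs, not part of the proposal — indisvalid is the flag that stays false until an index is fully built and validated):

```sql
-- List indexes left invalid, e.g. by a failed concurrent build/rebuild.
SELECT c.relname AS invalid_index
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
WHERE NOT i.indisvalid;
```

With the batched approach above, a mid-sequence failure could leave one such row per index touched, with nothing tying the new invalid index back to the one it was meant to replace.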
> You could also reissue the reindex command and avoid an additional
> command. When launching a concurrent reindex, it could be possible to
> check whether there is already an index that has been created to replace
> the old one that failed previously. In order to control that, why not add
> an additional field in pg_index?
> When creating a new index concurrently, we register in its pg_index entry
> the OID of the index that it has to replace. When reissuing the command
> after a failure, it is then possible to check whether there is already an
> index that was created by a previous REINDEX CONCURRENT command, and based
> on the flag values of the old and new indexes it is then possible to
> resume the command from the step where it previously failed.

I don't really like this idea, but we might end up there anyway, because we probably need to keep track of whether an index is actually only a "replacement" index that shouldn't exist on its own. Otherwise it's hard to know which indexes to drop if the command failed halfway through.

Greetings,

Andres
--
Andres Freund                      http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers