Jeff Janes <jeff.ja...@gmail.com> writes:
> I am getting corrupted Bloom indexes in which a tuple in the table
> heap is not in the index.
Hmm.  I can trivially reproduce a problem, but I'm not entirely sure
whether it matches yours.  Same basic test case as the bloom regression
test:

regression=# CREATE TABLE tst ( i int4, t text );
CREATE TABLE
regression=# CREATE INDEX bloomidx ON tst USING bloom (i, t) WITH (col1 = 3);
CREATE INDEX
regression=# INSERT INTO tst SELECT i%10, substr(md5(i::text), 1, 1)
  FROM generate_series(1,2000) i;
INSERT 0 2000
regression=# vacuum verbose tst;
...
INFO:  index "bloomidx" now contains 2000 row versions in 5 pages
...
regression=# delete from tst;
DELETE 2000
regression=# vacuum verbose tst;
...
INFO:  index "bloomidx" now contains 0 row versions in 5 pages
DETAIL:  2000 index row versions were removed.
...
regression=# INSERT INTO tst SELECT i%10, substr(md5(i::text), 1, 1)
  FROM generate_series(1,2000) i;
INSERT 0 2000
regression=# vacuum verbose tst;
...
INFO:  index "bloomidx" now contains 1490 row versions in 5 pages
...

Ooops.

(Note: this is done with some fixes already in place to make blvacuum.c
return correct tuple counts for VACUUM VERBOSE; right now it tends to
double-count during a VACUUM.)

The problem seems to be that:

(1) blvacuum marks all the index pages as BLOOM_DELETED, but doesn't
bother to clear the notFullPage array on the metapage;

(2) blinsert uses a page from notFullPage, failing to notice that it's
inserting data into a BLOOM_DELETED page;

(3) once we ask the FSM for a page, it returns a BLOOM_DELETED page
that we've already put tuples into, which we happily blow away by
reinit'ing the page.

A race-condition variant of this could be that after an autovacuum has
marked a page BLOOM_DELETED, but before it's reached the point of
updating the metapage, blinsert could stick data into the deleted page.
That would make it possible to reach the problem without requiring the
extreme edge case that blvacuum finds no partly-full pages to put into
the metapage.  If this does explain your problem, it's probably that
variant.

Will push a fix in a bit.
			regards, tom lane


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers