That is very interesting indeed; these indexes are quite large! I will apply that patch and try it out this evening and let you know.
Thank you very much, everyone, for your time; the support has been amazing.

PS: Just looked at this thread on the archives page and realised I don't have my name in the FROM: field. That is a misconfiguration of my email client, but I figured I would leave it as-is to prevent confusion; sorry about that.

All the best,

Philip Scott

-----Original Message-----
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: 05 December 2012 18:05
To: Jeff Janes
Cc: postgre...@foo.me.uk; postgres performance list
Subject: Re: [PERFORM] Slow query: bitmap scan troubles

Jeff Janes <jeff.ja...@gmail.com> writes:
> I now see where the cost is coming from. In commit 21a39de5809 (first
> appearing in 9.2) the "fudge factor" cost estimate for large indexes
> was increased by about 10 fold, which really hits this index hard.
> This was fixed in commit bf01e34b556 "Tweak genericcostestimate's
> fudge factor for index size", by changing it to use the log of the
> index size. But that commit probably won't be shipped until 9.3.

Hm. To tell you the truth, in October I'd completely forgotten about the January patch, and was thinking that the 1/10000 cost had a lot of history behind it. But if we never shipped it before 9.2 then of course that idea is false. Perhaps we should backpatch the log curve into 9.2 --- that would reduce the amount of differential between what 9.2 does and what previous branches do for large indexes.

It would definitely be interesting to know if applying bf01e34b556 helps the OP's example.

			regards, tom lane
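For reference, a minimal sketch of the shape of the change discussed above: the 9.2 fudge factor grows linearly with index size (roughly pages / 10000), while the bf01e34b556 approach grows with the log of the index size, so very large indexes are penalised far less. The exact log formula below is assumed for illustration only and is not the committed PostgreSQL code.

/*
 * Illustrative sketch only (not the actual genericcostestimate() code):
 * compare a fudge factor that grows linearly with index size, roughly
 * pages / 10000 as discussed above, with one that grows with the log of
 * the index size.  The log formula here is an assumed shape for
 * demonstration purposes.
 */
#include <math.h>
#include <stdio.h>

/* linear penalty: keeps growing in proportion to index size */
static double linear_fudge(double index_pages)
{
    return index_pages / 10000.0;
}

/* log-based penalty: flattens out for very large indexes (assumed shape) */
static double log_fudge(double index_pages)
{
    return log(1.0 + index_pages / 10000.0);
}

int main(void)
{
    double sizes[] = {1000.0, 100000.0, 10000000.0};  /* index size in pages */

    for (int i = 0; i < 3; i++)
        printf("pages=%10.0f  linear=%10.4f  log=%8.4f\n",
               sizes[i], linear_fudge(sizes[i]), log_fudge(sizes[i]));
    return 0;
}

With these assumed formulas, a 10-million-page index picks up a linear penalty of 1000 but a log penalty of only about 7, which matches the pattern described above of very large indexes being hit hard before the fix.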