Alvaro Herrera <alvhe...@commandprompt.com> writes:
> Excerpts from Robert Haas's message of lun mar 14 11:18:24 -0300 2011:
>> Does it really matter?  What Tom was describing sounded embarrassingly cheap.
That was my thought exactly.  If you could even measure the added cost of
doing that, I'd be astonished.  It'd be adding one
comparison-and-possible-assignment to a loop that also has to invoke a
binary search of a TID array --- a very large array, in the cases we're
worried about.  I'd put the actual update of pg_statistic somewhere where
it only happens once, but I don't especially care if the stat gets
computed on each index scan.

> As Heikki says, maybe this wouldn't be an issue at all if we can do it
> during ANALYZE instead, but I don't know if that works.

I'm not convinced you can get a sufficiently good estimate from a small
subset of pages.  I actually started with the idea of having ANALYZE try
to calculate correlation for multi-column indexes the same way it now
calculates it for individual data columns, but when this idea occurred to
me it just seemed a whole lot better.  Note that we could remove the
correlation calculations from ANALYZE altogether.
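To make that concrete, here's a throwaway sketch of the sort of loop I
have in mind.  This is not backend code --- every name and data structure
in it is invented for illustration --- but it shows that the extra
per-entry work is just a comparison plus some trivial bookkeeping, riding
along on a loop that already pays for a binary search of the dead-TID
array:

/*
 * Standalone sketch only --- not PostgreSQL code; all names are made up.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <stdint.h>

/* stand-in for the real heap item pointer type */
typedef struct
{
    uint32_t    block;
    uint16_t    offset;
} Tid;

/* order TIDs by (block, offset) */
static int
tid_cmp(const void *a, const void *b)
{
    const Tid  *x = a;
    const Tid  *y = b;

    if (x->block != y->block)
        return (x->block < y->block) ? -1 : 1;
    if (x->offset != y->offset)
        return (x->offset < y->offset) ? -1 : 1;
    return 0;
}

/* the work the loop already does: binary search of the dead-TID array */
static bool
tid_is_dead(const Tid *tid, const Tid *dead, size_t ndead)
{
    return bsearch(tid, dead, ndead, sizeof(Tid), tid_cmp) != NULL;
}

int
main(void)
{
    /* index entries visited in index order, pointing at heap TIDs */
    Tid         entries[] = {
        {1, 1}, {1, 3}, {2, 5}, {7, 2}, {3, 1}, {3, 4}, {9, 9}, {8, 1}
    };
    /* sorted dead-TID array handed over from the heap scan */
    Tid         dead[] = {{2, 5}, {9, 9}};
    size_t      nentries = sizeof(entries) / sizeof(entries[0]);
    size_t      ndead = sizeof(dead) / sizeof(dead[0]);
    Tid         prev = {0, 0};
    size_t      nlive = 0;
    size_t      in_order = 0;
    size_t      ndeleted = 0;
    size_t      i;

    for (i = 0; i < nentries; i++)
    {
        /* existing cost: bsearch of a possibly very large array */
        if (tid_is_dead(&entries[i], dead, ndead))
        {
            ndeleted++;
            continue;
        }

        /* added cost: one comparison plus a possible assignment */
        if (tid_cmp(&prev, &entries[i]) <= 0)
            in_order++;
        prev = entries[i];
        nlive++;
    }

    /* fraction of surviving entries that didn't move backwards in the heap */
    printf("deleted %zu, ordered fraction %.2f\n",
           ndeleted, nlive ? (double) in_order / nlive : 0.0);
    return 0;
}

The ordered fraction computed at the end (or some number derived from it)
is the sort of thing I'd stash and then write to pg_statistic in just one
place after the scan finishes.

			regards, tom lane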