On Thu, Jun 5, 2014 at 3:54 PM, Linos <i...@linos.es> wrote:
> What I don't understand is why the statistics have this bad information,
> all my tests are done on a database just restored and analyzed. Can I do
> something to improve the quality of my database statistics and let the
> planner do better choices? Maybe increase the statistics target of the
> columns involved?
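To take the last question first: raising the statistics target is cheap to
try. A sketch -- the table and column names here are made up, substitute the
join columns from your query (the default per-column target is 100):

    -- Hypothetical names; raise the sample size for one join column,
    -- then re-gather statistics for the table.
    ALTER TABLE some_table ALTER COLUMN join_col SET STATISTICS 1000;
    ANALYZE some_table;

That said, I doubt it would change anything here: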
The statistics don't seem different at all in this case. The planner is
predicting more or less the same results right up to the top-level join,
where it thinks it'll be joining 200 rows to 92,000 rows. In 8.4 it predicted
the join would produce 200 rows, but in 9.4 it's predicting the join will
produce 42 million rows. That's a pretty big difference. The actual number of
rows it's seeing is about 2000 x 68 in both versions.

I think part of the answer here is just that if your estimates are wrong, the
planner will make bad deductions, and it's a matter of luck whether one set
of bad deductions produces a better or worse plan than another. The
particular difference here is that 9.4 is better able to deduce the ordering
of the aggregates and so avoid the extra sort; 8.4 probably wasn't aware of
any plans that would produce rows in the right order.

But why is it guessing the join will produce 42 million rows in 9.4 and only
200 in 8.4?

--
greg
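P.S. A quick way to see where the two planners diverge is to compare the
estimated "rows=" against the actual "rows=" at the top-level join node in
EXPLAIN ANALYZE output from both versions. A self-contained sketch, with
made-up tables mimicking the 200 x 92,000 shape of your join:

    -- Hypothetical tables, roughly matching the row counts in your plans.
    CREATE TABLE a (id int);
    CREATE TABLE b (a_id int);
    INSERT INTO a SELECT generate_series(1, 200);
    INSERT INTO b SELECT (random() * 200)::int FROM generate_series(1, 92000);
    ANALYZE a;
    ANALYZE b;

    -- Each plan node prints the planner's estimate (rows=...) next to
    -- the measured result (actual ... rows=...).
    EXPLAIN ANALYZE SELECT * FROM a JOIN b ON b.a_id = a.id;

This works on both 8.4 and 9.4.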