On Wed, Jun 3, 2009 at 3:18 PM, Kevin Grittner <kevin.gritt...@wicourts.gov> wrote:
> Gregory Stark <st...@enterprisedb.com> wrote:
>
>> My money's still on very large statistics targets.  If you have a lot
>> of columns and 1,000-element arrays for each column that can get big
>> pretty quickly.
>
> I'm finding that even the ones that had a plan time in the range of
> 260 ms go down to 15 ms to 85 ms once the statistics are cached.  I
> wonder if the long run time is because it's having to read statistics
> multiple times because they don't fit in cache?  Like with really wide
> values?  Would the wider bitmap type help with that situation in any
> way?
>
> -Kevin
I had some performance results back when we were last looking at
default_statistics_target that indicated that the time to repeatedly
decompress a toasted statistics array contributed significantly to the
total planning time, but my suggestion to disable compression for
pg_statistic was summarily poo-poohed for reasons that still aren't
quite clear to me.

When you say "don't fit in cache", exactly what cache are you talking
about?  It seems to me that the statistics should be far smaller than
the underlying tables, so if even your statistics don't fit in shared
buffers (let alone main memory), it doesn't really matter how long your
query takes to plan, because it will probably take literally forever to
execute.  How many tables would you have to be joining to get a GB of
statistics, even with dst = 1000?  A few hundred?

...Robert

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
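The back-of-envelope arithmetic behind that last question can be sketched as
follows. This is only a rough model: the per-column layout (one MCV list with
float4 frequencies plus one histogram, each with dst entries), the 32-byte
average datum width, and the 20-analyzed-columns-per-table figure are all
illustrative assumptions, not measurements of pg_statistic.

```python
# Back-of-envelope estimate of per-column statistics size with a given
# default_statistics_target (dst).  Assumes one MCV list (dst values plus
# dst float4 frequencies) and one histogram (dst bucket boundaries); real
# pg_statistic rows carry more slots and per-row overhead, so this is a
# lower-bound sketch.

def stats_bytes_per_column(dst, avg_value_width):
    mcv = dst * (avg_value_width + 4)   # MCV datums + float4 frequencies
    histogram = dst * avg_value_width   # histogram bucket boundaries
    return mcv + histogram

dst = 1000
width = 32                              # assumed average datum width, bytes
per_column = stats_bytes_per_column(dst, width)   # 68,000 bytes
per_table = per_column * 20             # assume 20 analyzed columns/table

tables_for_1gb = (1 << 30) // per_table
print(per_column, per_table, tables_for_1gb)      # ~789 tables for 1 GB
```

Even with these generous widths, reaching a gigabyte of statistics takes on
the order of several hundred joined tables, which matches the "a few hundred"
guess above.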