Adam Gundy wrote:
> I doubt we could get stats stable enough for this. The number of groups will hopefully be much larger at some point.

The pg_stats view should be recording the n most-common values, so even if you have 1 million groups you still track details of the 1000 most common. That gives you an upper bound on how common any value not in the stats can be.
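
You can see what has actually been gathered with something like the query below. I'm guessing at the table and column names, so substitute your real ones:

  SELECT n_distinct,          -- planner's estimate of distinct group-ids
         most_common_vals,    -- the sampled most-common group-ids
         most_common_freqs    -- and their estimated frequencies
  FROM pg_stats
  WHERE tablename = 'mytable'    -- placeholder: your table
    AND attname   = 'group_id';  -- placeholder: your group-id column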

>> No, but let's see what's in pg_stats.

> No real help there. Either it hits the group being read, and does a good plan, or it doesn't, and tries a seqscan (unless I disable it). Even forcing the statistics target to 1000 only band-aids the situation, given that the number of groups will eventually exceed that.

Like I say, that's not the point of gathering the stats. If one group represents 95% of your rows, then its group-id should be almost certain to occur in the stats. Are you saying that's not happening with your data?
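
If the big group really isn't showing up there, it might be worth bumping the statistics target for just that column, re-analysing, and checking the row estimate again. A rough sketch, with placeholder names for your table, column and group-id:

  ALTER TABLE mytable ALTER COLUMN group_id SET STATISTICS 1000;  -- per-column override
  ANALYZE mytable;                                                -- rebuild the sample
  EXPLAIN SELECT * FROM mytable
   WHERE group_id = 'the-big-group';                              -- does the estimate look sane now?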

--
  Richard Huxton
  Archonet Ltd

