* Mark Kirkwood ([EMAIL PROTECTED]) wrote:
> I can see your point; however, I wonder if the issue is that the default
> stats setting of '10' (3000 rows, 10 histogram buckets) is too low, and
> maybe we should consider making a higher value (say '100') the default.

Personally, I think that'd be reasonable.
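
For concreteness, the knobs for that already exist today; the table and
column names below are made up, just to show the shape of it:

    -- raise the per-column sampling target above the default of 10
    ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 100;
    ANALYZE orders;

    -- or raise the default for the current session
    SET default_statistics_target = 100;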

> The idea of either automatically increasing sample size for large
> tables, or doing a few more samplings with different sizes and examining
> the stability of the estimates is rather nice, provided we can keep the
> runtime for ANALYZE to reasonable limits, I guess :-)

I also agree with this, and personally I don't mind *too* much if ANALYZE
takes a little while on a large table to get decent statistics for it.
One thing I was wondering about, though, is whether we could use the index
to get some of the statistics information.  Would that be possible, or
reasonable?  Do we already do it?  I dunno, just some thoughts; I keep
hearing about the number of rows that are sampled, and I would have
thought it'd make sense to scan an index for things like the number of
distinct values...
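
In the meantime, the "sample at a couple of sizes and compare" idea can be
approximated by hand.  A rough sketch, again with made-up table and column
names:

    -- estimate n_distinct with the default target
    ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 10;
    ANALYZE orders;
    SELECT n_distinct FROM pg_stats
     WHERE tablename = 'orders' AND attname = 'customer_id';

    -- estimate again with a ten-times-larger sample
    ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 100;
    ANALYZE orders;
    SELECT n_distinct FROM pg_stats
     WHERE tablename = 'orders' AND attname = 'customer_id';

    -- exact (full-scan) answer, for comparison
    -- (a negative n_distinct above means a fraction of the row count)
    SELECT count(DISTINCT customer_id) FROM orders;

If the two estimates disagree badly with each other, or with the exact
count, the default sample was too small for that column.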

        Stephen
