On 13.12.2010 03:00, Robert Haas wrote:
> Well, the question is what data you are actually storing.  It's
> appealing to store a measure of the extent to which a constraint on
> column X constrains column Y, because you'd only need to store
> O(ncolumns^2) values, which would be reasonably compact and would
> potentially handle the zip code problem - a classic "hard case" rather
> neatly.  But that wouldn't be sufficient to use the above equation,
> because there A and B need to be things like "column X has value x",
> and it's not going to be practical to store a complete set of MCVs for
> column X for each possible value that could appear in column Y.

O(ncolumns^2) values? You mean collecting such stats for each possible
pair of columns? Well, I meant something different.

The proposed solution is based on contingency tables, built only for
selected groups of columns (not for every possible group). A contingency
table then lets you estimate the joint probabilities needed to compute
the selectivity. Or am I missing something?

regards
Tomas

