"Josh Berkus" <[EMAIL PROTECTED]> writes: > Where analyze does systematically fall down is with databases over 500GB in > size, but that's not a function of d_s_t but rather of our tiny sample size.
Speak to the statisticians. Our sample size is calculated using the same theory behind polls that sample 600 people to learn what 250 million people are going to do on election day. You do NOT need (significantly) larger samples for larger populations.

In fact, where those polls run into difficulty is the same place we have problems: for *smaller* populations, like individual congressional races, you need nearly the same 600-person sample for each of those small races, and that adds up to a lot more than 600 total. In our case it means that when a query covers a range much smaller than a whole histogram bucket, the confidence interval widens too.

Also, our estimates for n_distinct are very unreliable. The math behind sampling for statistics just doesn't work the same way for a property like n_distinct. For that, Josh is right: we *would* need a sample size proportional to the size of the whole data set, which in practice would require scanning the whole table (and having a technique for summarizing the results in a nearly constant-sized data structure).

Rough back-of-the-envelope sketches of both points follow below my sig.

--
Gregory Stark
EnterpriseDB   http://www.enterprisedb.com
Ask me about EnterpriseDB's 24x7 Postgres support!
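First, the poll analogy. This is only a toy Python calculation, not anything ANALYZE actually runs, and the 0.5 proportion and the table sizes are made-up numbers; it just shows how the population size drops out of the error estimate once the sample is a small fraction of the table:

import math

def stderr_proportion(p, n, N):
    """Standard error of a sample proportion, n rows sampled out of N."""
    fpc = math.sqrt((N - n) / (N - 1))      # finite population correction
    return math.sqrt(p * (1 - p) / n) * fpc

n = 600                                     # the "poll" sample size
for N in (10_000, 250_000_000, 10**12):     # population / table sizes
    moe = 1.96 * stderr_proportion(0.5, n, N)
    print(f"N={N:>16,}  95% margin of error ~ +/- {moe:.3f}")

# Prints roughly +/- 0.039 for N=10,000 and +/- 0.040 for the other two:
# once n << N the table size is essentially irrelevant to the error.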
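Second, n_distinct. Again a toy simulation, not the estimator ANALYZE uses: two columns with the same true number of distinct values look completely different in a fixed-size sample, which is why no fixed sample size can pin down n_distinct the way it pins down a proportion or a bucket boundary:

import random

random.seed(0)
N, true_ndistinct, sample_size = 1_000_000, 100_000, 3_000

# Column A: every value appears exactly 10 times.
uniform = [v for v in range(true_ndistinct)
             for _ in range(N // true_ndistinct)]

# Column B: one very common value plus many values that appear only once.
skewed = [0] * (N - true_ndistinct + 1) + list(range(1, true_ndistinct))

for name, col in (("uniform", uniform), ("skewed", skewed)):
    sample = random.sample(col, sample_size)      # sample without replacement
    print(f"{name:>8}: true n_distinct={true_ndistinct:,}, "
          f"distinct values seen in sample={len(set(sample)):,}")

# The uniform column shows roughly 2,900+ distinct values in the sample, the
# skewed one only a few hundred, even though the true n_distinct is identical.
# A value seen once in the sample could be unique in the table or merely rare,
# and the sample alone can't tell those cases apart.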