On Tue, Dec 28, 2010 at 1:39 AM, Josh Berkus <j...@agliodbs.com> wrote:
> While I don't want to discourage you from working on stream-based
> estimators ... I'd love to see you implement a proof-of-concept for
> PostgreSQL, and test it ... the above is a non-argument.  It requires us
> to accept that sample-based estimates cannot ever be made to work,
> simply because you say so.
This argument has been made on this mailing list and on pgsql-performance
many times, so I have been assuming that this is settled mathematics.
Admittedly, I don't have a citation for this...

> I would agree that it's impossible to get a decent estimate of
> n-distinct from a 1% sample.  But there's a huge difference between 5%
> or 10% and "a majority of the table".

Considering the way that disks work, it's not that huge.  If the table is
small enough to fit in memory conveniently, it probably wouldn't be
unacceptably costly even to read the whole thing.  But if it's a terabyte
on disk, reading every tenth block is probably going to carry most of the
I/O cost of reading every block.  Even reading every twentieth or
hundredth block is going to be significantly more expensive than what we
do now.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
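
P.S. For concreteness, here is a rough back-of-envelope sketch in
Python.  All the numbers are assumptions, not measurements: a 1 TB
table in 8 kB blocks, ~100 MB/s sequential throughput, ~8 ms per
random block read, ~4 ms per short hop when striding through the
file, and "tens of thousands of blocks" as an order-of-magnitude
stand-in for what ANALYZE reads today.  Treat it as an illustration
of the shape of the problem, not a claim about any particular disk.

    # Back-of-envelope model of ANALYZE I/O cost on a large table.
    # All constants below are assumed, plausible-but-made-up numbers.

    TABLE_BYTES = 1 << 40          # 1 TB table
    BLOCK_BYTES = 8192             # PostgreSQL block size
    SEQ_MBPS = 100.0               # assumed sequential transfer rate
    RANDOM_READ_S = 0.008          # assumed full seek + rotation
    SHORT_HOP_S = 0.004            # assumed short seek when striding

    n_blocks = TABLE_BYTES // BLOCK_BYTES
    full_scan_s = TABLE_BYTES / (SEQ_MBPS * 1024 * 1024)

    def stride_scan_s(k):
        """Time to read every k-th block: either hop between sampled
        blocks, or just scan sequentially, whichever is cheaper."""
        sampled = n_blocks // k
        return min(sampled * SHORT_HOP_S, full_scan_s)

    # Rough stand-in for today's ANALYZE: a fixed-size sample on the
    # order of 30,000 blocks, fetched as random reads.  (Order of
    # magnitude only; the real number depends on statistics targets.)
    current_analyze_s = 30000 * RANDOM_READ_S

    print(f"full scan            : {full_scan_s / 3600:6.1f} h")
    print(f"every 10th block     : {stride_scan_s(10) / 3600:6.1f} h")
    print(f"every 100th block    : {stride_scan_s(100) / 3600:6.1f} h")
    print(f"current-style sample : {current_analyze_s / 60:6.1f} min")

Under those assumptions, reading every tenth block costs the same as
scanning the whole table (the stride never beats a straight sequential
read), every hundredth block still costs about half a full scan, and
the fixed-size sample finishes in a few minutes.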