Heikki Linnakangas <heikki.linnakan...@enterprisedb.com> wrote:

> where exactly is the extra overhead coming from?

Keep in mind that this is a sort of worst-case scenario. The data is fully cached in shared memory and we're doing a sequential pass just counting the rows. In an earlier benchmark (which I should re-do after all this refactoring), random access queries against a fully cached data set only increased run time by 1.8%. Throw some disk access into the mix, and the overhead is likely to get lost in the noise.

But, as I said, count(*) seems to be the first thing many people try as a benchmark, and this is a symptom of a more general issue, so I'd like to find a good solution.

-Kevin
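
[Editor's note: a minimal sketch of the kind of worst-case comparison described above, assuming the overhead in question comes from running the same scan at a stricter isolation level. The table name, row count, and isolation-level comparison are illustrative, not the exact benchmark from the thread.]

    -- Build a table small enough to stay fully cached in shared memory
    -- (size is illustrative).
    CREATE TABLE t AS SELECT g AS id FROM generate_series(1, 1000000) g;

    -- Warm the cache with a first pass, then time repeated sequential scans
    -- (\timing is a psql meta-command).
    \timing on
    SELECT count(*) FROM t;

    -- Time the same scan under the default-equivalent isolation level ...
    BEGIN ISOLATION LEVEL REPEATABLE READ;
    SELECT count(*) FROM t;
    COMMIT;

    -- ... and again under SERIALIZABLE to see the extra per-row cost.
    BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT count(*) FROM t;
    COMMIT;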