On Tue, May 10, 2011 at 6:25 PM, Kevin Grittner <kevin.gritt...@wicourts.gov> wrote:
>> ... but I share Simon's desire to see some proof before anything
>> gets committed.
>
> And we agree there. In fact, I can't think of anyone in the
> community who doesn't want to see that for *any* purported
> performance enhancement.

I'm not talking about eventual commit; I'm talking about the whole
process of development. We should be focusing on improving a measurable
performance issue, not on implementing one exact design that someone
thought might help. How will we review the patch except by measuring it
against the declared performance goal? Otherwise all the various options
along the way will just be matters of opinion, instead of measurement.

From what has been said so far, the use case for this is related to the
practice of using "covered indexes", which makes me nervous because that
is an expert-level tuning task on other DBMSs, limiting the range of
people who get benefit.

The typical speed-up for non-covered indexes will come when we access a
very large table (not in cache) via an index scan that is smaller than a
bitmap index scan. Will we be able to gauge selectivities sufficiently
accurately to be able to pinpoint that during optimization? How will we
know that the table is not in cache? Or is this an optimisation in the
executor for a bitmap heap scan?

I'm not being negative; I'm trying to avoid arguments, blind alleys and
much wasted development if we focus on the wrong things or go to design
too early.

-- 
Simon Riggs                   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
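
[Editor's note: a minimal SQL sketch of what "covered index" means in the discussion above; the table and index names are hypothetical, not from the thread.]

```sql
-- Hypothetical example table.
CREATE TABLE orders (order_id int, customer_id int, amount numeric);

-- An index that "covers" queries touching only customer_id and amount.
CREATE INDEX orders_cust_amount ON orders (customer_id, amount);

-- Every column this query references is present in the index, so it is
-- "covered": in principle the heap need not be visited at all (assuming
-- tuple visibility can be established without it, which is the hard part
-- being debated in this thread).
SELECT customer_id, sum(amount) FROM orders GROUP BY customer_id;

-- This query is NOT covered: order_id is absent from the index, so the
-- heap must be visited for each match regardless.
SELECT order_id FROM orders WHERE customer_id = 42;
```

The tuning burden Simon alludes to is that the second, wider index only pays off for the specific queries whose column set it anticipates.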