On Wed, Feb 8, 2012 at 6:28 PM, Scott Marlowe <scott.marl...@gmail.com> wrote:
> On Wed, Feb 8, 2012 at 6:45 PM, Peter van Hardenberg <p...@pvh.ca> wrote:
>> That said, I have access to a very large fleet in which I can collect
>> data, so I'm all ears for suggestions about how to measure and would
>> gladly share the results with the list.
>
> I wonder if some kind of script that grabbed random queries and ran
> them with explain analyze under various random_page_cost settings, to
> see when the plans switch and which are faster, would work?

We aren't exactly in a position where we can adjust random_page_cost
on our users' databases arbitrarily to see what breaks. That would
be... irresponsible of us.

How would one design a meta-analyzer that we could run across many
databases to collect data? Could we perhaps gather useful
information from pg_stat_user_indexes, for example?
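As a starting point (a sketch, not a tested diagnostic), one could read the per-table scan counters from pg_stat_user_tables, which sit alongside pg_stat_user_indexes in the statistics views; a low share of index scans on large, heavily seq-scanned tables might hint that the planner is pricing random I/O too high:

```sql
-- Rough ratio of index scans to sequential scans per table.
-- Heavily seq-scanned tables with usable indexes may suggest the
-- planner's random_page_cost doesn't match the actual hardware.
SELECT relname,
       seq_scan,
       idx_scan,
       CASE WHEN seq_scan + idx_scan > 0
            THEN round(100.0 * idx_scan / (seq_scan + idx_scan), 1)
       END AS idx_scan_pct
FROM pg_stat_user_tables
ORDER BY seq_scan DESC
LIMIT 20;
```

Since this is read-only and cheap, it could safely be run fleet-wide and aggregated without touching any user's settings.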

-p

-- 
Peter van Hardenberg
San Francisco, California
"Everything was beautiful, and nothing hurt." -- Kurt Vonnegut

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
