On Wed, Feb 8, 2012 at 7:54 PM, Peter van Hardenberg <p...@pvh.ca> wrote:
> On Wed, Feb 8, 2012 at 6:47 PM, Peter van Hardenberg <p...@pvh.ca> wrote:
>> On Wed, Feb 8, 2012 at 6:28 PM, Scott Marlowe <scott.marl...@gmail.com> 
>> wrote:
>>> On Wed, Feb 8, 2012 at 6:45 PM, Peter van Hardenberg <p...@pvh.ca> wrote:
>>>> That said, I have access to a very large fleet from which I can
>>>> collect data, so I'm all ears for suggestions about how to measure
>>>> and would gladly share the results with the list.
>>>
>>> I wonder if some kind of script that grabbed random queries and ran
>>> them with explain analyze and various random_page_cost to see when
>>> they switched and which plans are faster would work?
>>
>> We aren't exactly in a position where we can adjust random_page_cost
>> on our users' databases arbitrarily to see what breaks. That would
>> be... irresponsible of us.
>>
>
> Oh, of course we could do this on the session, but executing
> potentially expensive queries would still be unneighborly.
>
> Perhaps another way to think of this problem would be that we want to
> find queries where the cost estimate is inaccurate.

Yeah, have a script the user runs for you Heroku guys in their spare
time to find which queries are taking the most time, then vary
random_page_cost while re-running them to get an idea of what's faster
and why.
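The session-local experiment described above might be sketched roughly as follows. This is only an illustration, not anyone's actual script: the table name and the candidate cost value are hypothetical, and `SET LOCAL` confines the change to the current transaction so nothing leaks to other connections.

```sql
-- Hypothetical sketch: test one candidate random_page_cost against one
-- query, without disturbing any other session on the database.
BEGIN;
SET LOCAL random_page_cost = 2.0;   -- candidate value under test
EXPLAIN (ANALYZE, BUFFERS)          -- shows the chosen plan plus actual timings
    SELECT *
    FROM events                     -- hypothetical table
    WHERE created_at > now() - interval '1 day';
ROLLBACK;                           -- SET LOCAL is discarded with the transaction
```

Repeating this for a few values (e.g. 1.1, 2.0, 4.0) would show where the planner flips between index and sequential scans and which plan is actually faster; a large gap between the planner's estimated `rows=` and the `actual ... rows=` in the EXPLAIN ANALYZE output is also one way to spot the inaccurate cost estimates mentioned earlier in the thread.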

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance