2015-06-02 9:07 GMT+02:00 Craig Ringer <cr...@2ndquadrant.com>:

> On 29 May 2015 at 11:35, Tom Lane <t...@sss.pgh.pa.us> wrote:
>
>> Craig Ringer <cr...@2ndquadrant.com> writes:
>> > It's sometimes desirable to collect auto_explain data with ANALYZE in
>> order
>> > to track down hard-to-reproduce issues, but the performance impacts can
>> be
>> > pretty hefty on the DB.
>>
>> > I'm inclined to add a sample rate to auto_explain so that it can trigger
>> > only on x percent of queries,
>>
>> That sounds reasonable ...
>>
>
> Cool, I'll cook that up then. Thanks for the sanity check.
>
>
>> > and also add a sample test hook that can be
>> > used to target statements of interest more narrowly (using a C hook
>> > function).
>>
>> You'd have to be pretty desperate, *and* knowledgeable, to write a
>> C function for that.  Can't we invent something a bit more user-friendly
>> for the purpose?  No idea what it should look like though.
>>
>
> I've been that desperate.
>
> For the majority of users I'm sure it's sufficient to just have a sample
> rate.
>
> Anything that's trying to match individual queries could be interested in
> all sorts of different things. Queries that touch a particular table being
> one of the more obvious things, or queries that mention a particular
> literal. Rather than try to design something complicated in advance that
> anticipates all needs, I'm thinking it makes sense to just throw a hook in
> there. If some patterns start to emerge in terms of useful real world
> filtering criteria then that'd better inform any more user accessible
> design down the track.
>

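For illustration, here is a rough sketch of how the sampling check and a
C-level filter hook could sit in auto_explain's ExecutorStart hook. The GUC
name, the hook signature and the variable names are only assumptions made
for the sketch, not anything from an actual patch:

/*
 * Sketch only: sample a fraction of top-level statements and let an
 * optional C hook veto instrumentation of individual queries.
 */
#include "postgres.h"

#include "executor/executor.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

/* assumed GUC: fraction of top-level queries to instrument, 0.0 .. 1.0 */
static double auto_explain_sample_rate = 1.0;

/* hypothetical filter hook: return true if this query should be explained */
typedef bool (*explain_sample_hook_type) (QueryDesc *queryDesc);
static explain_sample_hook_type explain_sample_hook = NULL;

static ExecutorStart_hook_type prev_ExecutorStart = NULL;
static bool current_query_sampled = true;
static int	nesting_level = 0;	/* maintained around ExecutorRun in the
								 * real module; bookkeeping omitted here */

static void
explain_ExecutorStart(QueryDesc *queryDesc, int eflags)
{
	if (nesting_level == 0)
	{
		/* roll the dice once per top-level statement; random() is 0..2^31-1 */
		current_query_sampled =
			((double) random() / 2147483648.0) < auto_explain_sample_rate;

		/* optional C-level filter can narrow the selection further */
		if (current_query_sampled && explain_sample_hook != NULL)
			current_query_sampled = explain_sample_hook(queryDesc);
	}

	if (current_query_sampled)
	{
		/* ... existing auto_explain logic: enable instrumentation ... */
	}

	if (prev_ExecutorStart)
		prev_ExecutorStart(queryDesc, eflags);
	else
		standard_ExecutorStart(queryDesc, eflags);
}

void		_PG_init(void);

void
_PG_init(void)
{
	/* GUC registration via DefineCustomRealVariable omitted for brevity */
	prev_ExecutorStart = ExecutorStart_hook;
	ExecutorStart_hook = explain_ExecutorStart;
}
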
The same method could be interesting for interactive EXPLAIN ANALYZE too.
TIMING has about 20-30% overhead, and usually we don't need perfectly exact
numbers.
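
To make that concrete, a standalone sketch in plain C (nothing to do with
the real Instrumentation code; the workload and sampling interval are
arbitrary) that calls the clock for only every Nth iteration and scales the
result, trading a little precision for far fewer timing calls:

#include <stdio.h>
#include <time.h>

#define SAMPLE_EVERY 16			/* time 1 of every 16 iterations */

static double
ts_diff(struct timespec a, struct timespec b)
{
	return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

static volatile long sink;		/* keeps the dummy work from being optimized away */

int
main(void)
{
	const long	iterations = 2000000;
	double		sampled = 0.0;
	long		timed = 0;

	for (long i = 0; i < iterations; i++)
	{
		struct timespec t0, t1;
		int			do_time = (i % SAMPLE_EVERY) == 0;

		if (do_time)
			clock_gettime(CLOCK_MONOTONIC, &t0);

		/* stand-in for the per-tuple work of an executor node */
		for (int j = 0; j < 50; j++)
			sink += j;

		if (do_time)
		{
			clock_gettime(CLOCK_MONOTONIC, &t1);
			sampled += ts_diff(t0, t1);
			timed++;
		}
	}

	/* scale the sampled time up to an estimate for all iterations */
	printf("estimated total: %.3f s (from %ld of %ld timed iterations)\n",
		   sampled * ((double) iterations / timed), timed, iterations);
	return 0;
}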

Regards

Pavel


>
>
> --
>  Craig Ringer                   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
>
