Re: [PERFORM] Yet another abort-early plan disaster on 9.3

2014-09-30 Thread Simon Riggs
On 30 September 2014 18:28, Jeff Janes wrote:
>> Anyway, in the particular case I posted fixing n_distinct to realistic
>> numbers (%) fixed the query plan.
>
> But wouldn't fixing the absolute number also have fixed the plan?

There are two causes of this issue.

1. Poor estimates of n_distinct...
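For readers skimming the thread: the kind of per-column n_distinct override being compared here (a fraction of the row count versus an absolute number) can be set as a column storage parameter and only takes effect once statistics are rebuilt. A minimal sketch, with hypothetical table and column names:

    -- Negative values are read as a fraction of the row count (here, 5%).
    ALTER TABLE events ALTER COLUMN account_id SET (n_distinct = -0.05);
    -- An absolute count can be supplied instead:
    -- ALTER TABLE events ALTER COLUMN account_id SET (n_distinct = 50000);
    ANALYZE events;   -- the override matters only after the next ANALYZE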

Re: [PERFORM] Yet another abort-early plan disaster on 9.3

2014-09-30 Thread Merlin Moncure
On Tue, Sep 30, 2014 at 11:54 AM, Jeff Janes wrote:
> On Mon, Sep 29, 2014 at 7:12 PM, Gavin Flower wrote:
>>
>> Would it be feasible to get a competent statistician to advise what data
>> to collect, and to analyze it? Maybe it is possible to get a better
>> estimate on how much of a table...

Re: [PERFORM] Yet another abort-early plan disaster on 9.3

2014-09-30 Thread Gavin Flower
On 01/10/14 05:54, Jeff Janes wrote:
  On Mon, Sep 29, 2014 at 7:12 PM, Gavin Flower <gavinflo...@archidevsys.co.nz> wrote:
    Would it be feasible to get a competent statistician to advise what data
    to collect, and to analyze it? Maybe it is possible to get a better
    estimate on how...

Re: [PERFORM] Yet another abort-early plan disaster on 9.3

2014-09-30 Thread Graeme B. Bell
Thanks for your replies everyone.

> You can't run two plans and have them both returning rows to the client,

That wasn't what I had in mind. I can envisage cases where the worst-case
behaviour of one plan results in zero rows by the time the alternative plan
has generated the complete result...
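There is no built-in way to race two strategies against each other, but the "plan A versus plan B" comparison can at least be made by hand with the planner's enable_* switches. A rough sketch of the mechanics, using an invented query:

    -- Plan A: whatever the planner prefers for the LIMIT query.
    EXPLAIN (ANALYZE, BUFFERS)
      SELECT * FROM events WHERE account_id = 42 ORDER BY created_at LIMIT 10;

    -- Plan B: discourage the abort-early index scan and time the alternative.
    SET enable_indexscan = off;
    EXPLAIN (ANALYZE, BUFFERS)
      SELECT * FROM events WHERE account_id = 42 ORDER BY created_at LIMIT 10;
    RESET enable_indexscan;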

Re: [PERFORM] Yet another abort-early plan disaster on 9.3

2014-09-30 Thread Jeff Janes
On Mon, Sep 29, 2014 at 2:54 PM, Josh Berkus wrote:
> On 09/26/2014 01:06 AM, Simon Riggs wrote:
> > On 23 September 2014 00:56, Josh Berkus wrote:
> >
> >> We've hashed that out a bit, but frankly I think it's much more
> >> profitable to pursue fixing the actual problem than providing a
> >> workaround...

Re: [PERFORM] Yet another abort-early plan disaster on 9.3

2014-09-30 Thread Jeff Janes
On Mon, Sep 29, 2014 at 7:12 PM, Gavin Flower wrote:
>
> Would it be feasible to get a competent statistician to advise what data to
> collect, and to analyze it? Maybe it is possible to get a better estimate on
> how much of a table needs to be scanned, based on some fairly simple
> statisti...
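As a practical footnote to this exchange: the statistics ANALYZE already gathers can be inspected, and the per-column sample size raised, without any new machinery. Table and column names below are only illustrative:

    -- What the planner currently believes about the column.
    SELECT n_distinct, null_frac, correlation, most_common_vals
    FROM pg_stats
    WHERE tablename = 'events' AND attname = 'account_id';

    -- Sample more rows and keep more MCV/histogram entries for it.
    ALTER TABLE events ALTER COLUMN account_id SET STATISTICS 1000;
    ANALYZE events;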

Re: [PERFORM] Yet another abort-early plan disaster on 9.3

2014-09-30 Thread Tom Lane
"Graeme B. Bell" writes:
> Every year or two the core count goes up. Can/should/does postgres ever
> attempt two strategies in parallel, in cases where strategy A is generally
> good but strategy B prevents bad worst case behaviour? Kind of like a
> Schrödinger's Cat approach to scheduling.

What...

Re: [PERFORM] Yet another abort-early plan disaster on 9.3

2014-09-30 Thread Claudio Freire
On Tue, Sep 30, 2014 at 8:34 AM, Graeme B. Bell wrote:
>
>>> The existing cost estimation
>>> code effectively assumes that they're perfectly uniformly distributed;
>>> which is a good average-case assumption but can be horribly wrong in
>>> the worst case.
>
> Sorry, just an outsider jumping in...

Re: [PERFORM] Yet another abort-early plan disaster on 9.3

2014-09-30 Thread Graeme B. Bell
>> The existing cost estimation
>> code effectively assumes that they're perfectly uniformly distributed;
>> which is a good average-case assumption but can be horribly wrong in
>> the worst case.

Sorry, just an outsider jumping in with a quick comment. Every year or two
the core count goes up. ...
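One way to see how badly the uniform-distribution assumption fits a particular query is to bucket the table by the ordering column and count how many qualifying rows land in each bucket: if they all sit in the last bucket, an abort-early plan will read almost everything before it finds its rows. All names and constants here are hypothetical:

    -- Distribution of matching rows across deciles of the ordering column.
    SELECT decile,
           sum(CASE WHEN account_id = 42 THEN 1 ELSE 0 END) AS matching_rows
    FROM (SELECT account_id,
                 ntile(10) OVER (ORDER BY created_at) AS decile
          FROM events) s
    GROUP BY decile
    ORDER BY decile;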

Re: [PERFORM] Yet another abort-early plan disaster on 9.3

2014-09-30 Thread Simon Riggs
On 30 September 2014 00:00, Tom Lane wrote:
> Simon Riggs writes:
>> The way I'm seeing it, you can't assume the LIMIT will apply to any
>> IndexScan that doesn't have an index condition. If it has just a
>> filter, or nothing at all, just an ordering then it could easily scan
>> the whole index...
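The query shape being described is roughly the following (names invented for illustration): the index on the ordering column supplies the sort, the other predicate is only a filter, and the LIMIT is costed as if the scan stops after a few matches even though it may have to walk most of the index.

    SELECT *
    FROM events
    WHERE account_id = 42        -- filter only, not an index condition
    ORDER BY created_at          -- satisfied by an index on (created_at)
    LIMIT 10;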

Re: [PERFORM] Yet another abort-early plan disaster on 9.3

2014-09-30 Thread Simon Riggs
On 29 September 2014 22:54, Josh Berkus wrote:
> On 09/26/2014 01:06 AM, Simon Riggs wrote:
>> On 23 September 2014 00:56, Josh Berkus wrote:
>>
>>> We've hashed that out a bit, but frankly I think it's much more
>>> profitable to pursue fixing the actual problem than providing a
>>> workaround l...