Re: [PERFORM] Slow 3 Table Join with v bad row estimate

2015-11-11 Thread David Osborne
Sorry Igor - yes wrong plan. Here's the new one ... (running a wee bit slower this morning - still 20x faster than before however) http://explain.depesz.com/s/64YM QUERY PLAN ---

Re: [PERFORM] Slow 3 Table Join with v bad row estimate

2015-11-10 Thread David Osborne
Ok - wow. Adding that index, I get the same estimate of 1 row, but a runtime of ~450ms. A 23000ms improvement. http://explain.depesz.com/s/TzF8h This is great. So as a general rule of thumb, if I see a Join Filter removing an excessive number of rows, I can check if that condition can be added t
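A minimal, self-contained sketch of the rule of thumb described above - promoting a Join Filter column into the index so it can be matched as an Index Cond instead. The table and column names here are made up for illustration; they are not the schema from this thread.

    CREATE TABLE po    (po_id   int PRIMARY KEY, branch text, product_id int);
    CREATE TABLE sales (sale_id int PRIMARY KEY, branch text, product_id int);

    -- With only product_id indexed, the join would apply "branch" as a Join
    -- Filter and potentially discard most candidate rows; including it in the
    -- index lets the planner apply it as an Index Cond instead.
    CREATE INDEX sales_product_branch_idx ON sales (product_id, branch);

    ANALYZE po;
    ANALYZE sales;

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT p.po_id, s.sale_id
    FROM   po p
    JOIN   sales s ON s.product_id = p.product_id
                  AND s.branch     = p.branch;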

Re: [PERFORM] Slow 3 Table Join with v bad row estimate

2015-11-10 Thread David Osborne
Thanks very much Tom. Doesn't seem to quite do the trick. I created both those indexes (or the missing one at least) Then I ran analyse on stocksales_ib and branch_purchase_order. I checked there were stats held in pg_stats for both indexes, which there were. But the query plan still predicts 1 ro
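A sketch of those checks, kept hedged: the table names stocksales_ib and branch_purchase_order come from the message, while the pg_stats lookup assumes expression indexes (which is why statistics can appear under an index name at all).

    ANALYZE stocksales_ib;
    ANALYZE branch_purchase_order;

    -- Per-column statistics; rows keyed by an index name indicate statistics
    -- gathered for an expression index.
    SELECT tablename, attname, n_distinct, null_frac
    FROM   pg_stats
    WHERE  tablename IN ('stocksales_ib', 'branch_purchase_order')
       OR  tablename LIKE '%_idx';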

[PERFORM] Slow 3 Table Join with v bad row estimate

2015-11-10 Thread David Osborne
We're hoping to get some suggestions as to improving the performance of a 3 table join we're carrying out. (I've stripped out some schema info to try to keep this post from getting too convoluted - if something doesn't make sense, it may be that I've erroneously taken out something significant) The 3 tab

Re: [PERFORM] GroupAggregate and Integer Arrays

2015-10-26 Thread David Osborne
Physical memory is 61GB at the moment. work_mem is 1,249,104kB. > What's physical memory and setting of work_mem? > merlin -- David Osborne Qcode Software Limited http://www.qcode.co.uk T: +44 (0)1463 896484
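For reference, a sketch of how the settings quoted above can be inspected and tried per session; the value mirrors the one reported in the message and is an example, not a recommendation.

    SHOW work_mem;

    -- Try a value for the current session only, before touching postgresql.conf:
    SET work_mem = '1249104kB';

    -- Confirm, then re-run the aggregate under EXPLAIN ANALYZE to see whether
    -- the plan changes (e.g. GroupAggregate vs HashAggregate).
    SELECT name, setting, unit FROM pg_settings WHERE name = 'work_mem';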

Re: [PERFORM] GroupAggregate and Integer Arrays

2015-10-23 Thread David Osborne
23 October 2015 at 17:15, Jeff Janes wrote: > On Fri, Oct 23, 2015 at 7:29 AM, David Osborne wrote: > > >> Hi, >> >> Wondering if anyone could suggest how we could improve the performance of >> this type of query? >> The intensive part is the summing of inte

[PERFORM] GroupAggregate and Integer Arrays

2015-10-23 Thread David Osborne
Hi, Wondering if anyone could suggest how we could improve the performance of this type of query? The intensive part is the summing of integer arrays as far as I can see. We're thinking there's not much we can do to improve performance apart from throwing more CPU at it... would love to be proven wro
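Since the query itself is truncated in this preview, here is a minimal sketch of element-wise summing of integer arrays per group using unnest ... WITH ORDINALITY; the table and column names (daily_sales, product_code, qty_by_hour) are invented for illustration.

    CREATE TABLE daily_sales (
        product_code text,
        qty_by_hour  int[]    -- e.g. one bucket per hour
    );

    INSERT INTO daily_sales VALUES
        ('A', ARRAY[1,2,3]),
        ('A', ARRAY[4,5,6]),
        ('B', ARRAY[7,8,9]);

    -- WITH ORDINALITY keeps each element's position, so the arrays can be
    -- summed position-by-position and reassembled with array_agg.
    SELECT product_code,
           array_agg(elem_sum ORDER BY pos) AS qty_by_hour_sum
    FROM (
        SELECT product_code, pos, sum(elem) AS elem_sum
        FROM   daily_sales,
               unnest(qty_by_hour) WITH ORDINALITY AS u(elem, pos)
        GROUP  BY product_code, pos
    ) s
    GROUP BY product_code;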

Re: [PERFORM] Index Scan Backward Slow

2015-05-01 Thread David Osborne
Shishkin wrote: > > > On 01 May 2015, at 13:54, David Osborne wrote: > > > > Hi, > > > > We have a query which finds the latest row_id for a particular code. > > > > We've found a backwards index scan is much slower than a forward one, to >

[PERFORM] Index Scan Backward Slow

2015-05-01 Thread David Osborne
'::bpchar) Buffers: shared hit=14723 -> Bitmap Index Scan on table_code_idx (cost=0.00..5206.91 rows=278731 width=0) (actual time=23.298..23.298 rows=275909 loops=1) Index Cond: (code = 'XX'::bpchar) Buffers: shared hit=765 Total runtime: 184.043 ms (13 rows) http://explain.depesz.com/s/E9VE Thanks in advance for any help. Regards, -- David Osborne Qcode Software Limited http://www.qcode.co.uk
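As a closing sketch of the usual remedy for this pattern (the real table name is anonymised in the plan above, so the_table stands in for it; code and row_id are the columns mentioned in the thread): a composite index lets the latest row_id for one code be found with a single index probe instead of a long backward scan filtered on code.

    CREATE INDEX the_table_code_row_id_idx ON the_table (code, row_id);

    -- With (code, row_id) indexed, this can be answered from the index
    -- directly rather than by scanning an index on row_id backwards and
    -- filtering on code.
    SELECT max(row_id)
    FROM   the_table
    WHERE  code = 'XX';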