Re: [PERFORM] merge join killing performance

2010-05-27 Thread Tom Lane
Scott Marlowe scott.marl...@gmail.com writes:
 So, Tom, do you think it's possible that the planner isn't noticing
 all those nulls and thinks it'll just take a row or two to get to the
 value it needs to join on?

I've committed a patch for this, if you're interested in testing that
it fixes your situation.

regards, tom lane


Re: [PERFORM] merge join killing performance

2010-05-27 Thread Scott Marlowe
On Thu, May 27, 2010 at 7:16 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Scott Marlowe scott.marl...@gmail.com writes:
 So, Tom, do you think it's possible that the planner isn't noticing
 all those nulls and thinks it'll just take a row or two to get to the
 value it needs to join on?

 I've committed a patch for this, if you're interested in testing that
 it fixes your situation.

Cool, do we have a snapshot build somewhere or do I need to get all
the extra build bits like flex or yacc or bison or whatnot?


Re: [PERFORM] merge join killing performance

2010-05-27 Thread Tom Lane
Scott Marlowe scott.marl...@gmail.com writes:
 On Thu, May 27, 2010 at 7:16 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 I've committed a patch for this, if you're interested in testing that
 it fixes your situation.

 Cool, do we have a snapshot build somewhere or do I need to get all
 the extra build bits like flex or yacc or bison or whatnot?

There's a nightly snapshot tarball of HEAD on the ftp server.
I don't believe there are any snapshots for back branches, though.

Alternatively, you could grab the latest release tarball for whichever
branch you want and just apply that patch --- it should apply cleanly.

regards, tom lane


Re: [PERFORM] merge join killing performance

2010-05-20 Thread Tom Lane
Scott Marlowe scott.marl...@gmail.com writes:
 So, Tom, do you think it's possible that the planner isn't noticing
 all those nulls and thinks it'll just take a row or two to get to the
 value it needs to join on?

Could be.  I don't have time right now to chase through the code, but
that sounds like a plausible theory.

regards, tom lane


Re: [PERFORM] merge join killing performance

2010-05-20 Thread Scott Marlowe
On Thu, May 20, 2010 at 8:28 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Scott Marlowe scott.marl...@gmail.com writes:
 So, Tom, do you think it's possible that the planner isn't noticing
 all those nulls and thinks it'll just take a row or two to get to the
 value it needs to join on?

 Could be.  I don't have time right now to chase through the code, but
 that sounds like a plausible theory.

OK.  I think I'll try a partial index on that field where it's not null
and see if that helps.
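
That is, something like the following (a minimal sketch; the index name
is made up):

  -- Index only the ~533 non-null eids instead of all ~3.8M rows, so
  -- eid lookups never have to wade through the mass of NULLs.
  CREATE INDEX files_eid_notnull_idx ON files (eid) WHERE eid IS NOT NULL;
  ANALYZE files;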


Re: [PERFORM] merge join killing performance

2010-05-19 Thread Tom Lane
Matthew Wakeling matt...@flymine.org writes:
 On Tue, 18 May 2010, Scott Marlowe wrote:
 Aggregate  (cost=902.41..902.42 rows=1 width=4)
   ->  Merge Join  (cost=869.97..902.40 rows=1 width=4)
         Merge Cond: (f.eid = ev.eid)
         ->  Index Scan using files_eid_idx on files f
               (cost=0.00..157830.39 rows=3769434 width=8)

 Okay, that's weird. How is the cost of the merge join only 902, when the
 cost of one of the branches is 157830 and there is no LIMIT?

It's apparently estimating (wrongly) that the merge join won't have to
scan very much of files before it can stop because it finds an eid
value larger than any eid in the other table.  So the issue here is an
inexact stats value for the max eid.

regards, tom lane


Re: [PERFORM] merge join killing performance

2010-05-19 Thread Scott Marlowe
On Wed, May 19, 2010 at 10:53 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Matthew Wakeling matt...@flymine.org writes:
 On Tue, 18 May 2010, Scott Marlowe wrote:
 Aggregate  (cost=902.41..902.42 rows=1 width=4)
   ->  Merge Join  (cost=869.97..902.40 rows=1 width=4)
         Merge Cond: (f.eid = ev.eid)
         ->  Index Scan using files_eid_idx on files f
               (cost=0.00..157830.39 rows=3769434 width=8)

 Okay, that's weird. How is the cost of the merge join only 902, when the
 cost of one of the branches is 157830 and there is no LIMIT?

 It's apparently estimating (wrongly) that the merge join won't have to
 scan very much of files before it can stop because it finds an eid
 value larger than any eid in the other table.  So the issue here is an
 inexact stats value for the max eid.

That's a big table.  I'll try cranking up the stats target for that
column and see what happens.  Thanks!
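
For reference, raising a single column's target is typically done along
these lines (a sketch using the values from the thread):

  ALTER TABLE files ALTER COLUMN eid SET STATISTICS 1000;
  ANALYZE files;   -- re-gather stats so the planner sees the new histogram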


Re: [PERFORM] merge join killing performance

2010-05-19 Thread Scott Marlowe
On Wed, May 19, 2010 at 10:53 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Matthew Wakeling matt...@flymine.org writes:
 On Tue, 18 May 2010, Scott Marlowe wrote:
 Aggregate  (cost=902.41..902.42 rows=1 width=4)
   ->  Merge Join  (cost=869.97..902.40 rows=1 width=4)
         Merge Cond: (f.eid = ev.eid)
         ->  Index Scan using files_eid_idx on files f
               (cost=0.00..157830.39 rows=3769434 width=8)

 Okay, that's weird. How is the cost of the merge join only 902, when the
 cost of one of the branches is 157830 and there is no LIMIT?

 It's apparently estimating (wrongly) that the merge join won't have to
 scan very much of files before it can stop because it finds an eid
 value larger than any eid in the other table.  So the issue here is an
 inexact stats value for the max eid.

I changed stats target to 1000 for that field and still get the bad plan.


Re: [PERFORM] merge join killing performance

2010-05-19 Thread Scott Marlowe
On Wed, May 19, 2010 at 2:27 PM, Scott Marlowe scott.marl...@gmail.com wrote:
 On Wed, May 19, 2010 at 10:53 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Matthew Wakeling matt...@flymine.org writes:
 On Tue, 18 May 2010, Scott Marlowe wrote:
 Aggregate  (cost=902.41..902.42 rows=1 width=4)
   ->  Merge Join  (cost=869.97..902.40 rows=1 width=4)
         Merge Cond: (f.eid = ev.eid)
         ->  Index Scan using files_eid_idx on files f
               (cost=0.00..157830.39 rows=3769434 width=8)

 Okay, that's weird. How is the cost of the merge join only 902, when the
 cost of one of the branches is 157830 and there is no LIMIT?

 It's apparently estimating (wrongly) that the merge join won't have to
 scan very much of files before it can stop because it finds an eid
 value larger than any eid in the other table.  So the issue here is an
 inexact stats value for the max eid.

 I changed stats target to 1000 for that field and still get the bad plan.

And of course I ran ANALYZE across the table...


Re: [PERFORM] merge join killing performance

2010-05-19 Thread Matthew Wakeling

On Wed, 19 May 2010, Scott Marlowe wrote:

>> It's apparently estimating (wrongly) that the merge join won't have to
>> scan very much of files before it can stop because it finds an eid
>> value larger than any eid in the other table.  So the issue here is an
>> inexact stats value for the max eid.

I wondered if it could be something like that, but I rejected that idea,
as it obviously wasn't the real-world case, and statistics should at
least get that right, if they are up to date.

> I changed stats target to 1000 for that field and still get the bad plan.

What do the stats say the max values are?
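
For reference, one way to check is to compare the planner's histogram
for the column against the actual maximum; a minimal sketch:

  -- The last histogram_bounds entry is the planner's idea of the max eid.
  SELECT histogram_bounds FROM pg_stats
   WHERE tablename = 'files' AND attname = 'eid';

  SELECT max(eid) FROM files;   -- the real maximum, for comparison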

Matthew

--
Nog: Look! They've made me into an ensign!
O'Brien: I didn't know things were going so badly.
Nog: Frightening, isn't it?


Re: [PERFORM] merge join killing performance

2010-05-19 Thread Scott Marlowe
On Wed, May 19, 2010 at 7:46 PM, Matthew Wakeling matt...@flymine.org wrote:
 On Wed, 19 May 2010, Scott Marlowe wrote:

 It's apparently estimating (wrongly) that the merge join won't have to
 scan very much of files before it can stop because it finds an eid
 value larger than any eid in the other table.  So the issue here is an
 inexact stats value for the max eid.

 I wondered if it could be something like that, but I rejected that idea, as
 it obviously wasn't the real-world case, and statistics should at least get
 that right, if they are up to date.

 I changed stats target to 1000 for that field and still get the bad plan.

 What do the stats say the max values are?

5277063,5423043,13843899 (I think).

# select count(distinct eid) from files;
 count
-------
   365
(1 row)

# select count(*) from files;
  count
---------
 3793748


Re: [PERFORM] merge join killing performance

2010-05-19 Thread Scott Marlowe
On Wed, May 19, 2010 at 8:04 PM, Scott Marlowe scott.marl...@gmail.com wrote:
 On Wed, May 19, 2010 at 7:46 PM, Matthew Wakeling matt...@flymine.org wrote:
 On Wed, 19 May 2010, Scott Marlowe wrote:

 It's apparently estimating (wrongly) that the merge join won't have to
 scan very much of files before it can stop because it finds an eid
 value larger than any eid in the other table.  So the issue here is an
 inexact stats value for the max eid.

 I wondered if it could be something like that, but I rejected that idea, as
 it obviously wasn't the real-world case, and statistics should at least get
 that right, if they are up to date.

 I changed stats target to 1000 for that field and still get the bad plan.

 What do the stats say the max values are?

 5277063,5423043,13843899 (I think).

 # select count(distinct eid) from files;
  count
 -------
    365
 (1 row)

 # select count(*) from files;
   count
 ---------
  3793748

A followup: of those rows,

select count(*) from files where eid is null;
  count
---------
 3793215

are null.


Re: [PERFORM] merge join killing performance

2010-05-19 Thread Scott Marlowe
On Wed, May 19, 2010 at 8:06 PM, Scott Marlowe scott.marl...@gmail.com wrote:
 On Wed, May 19, 2010 at 8:04 PM, Scott Marlowe scott.marl...@gmail.com 
 wrote:
 On Wed, May 19, 2010 at 7:46 PM, Matthew Wakeling matt...@flymine.org 
 wrote:
 On Wed, 19 May 2010, Scott Marlowe wrote:

 It's apparently estimating (wrongly) that the merge join won't have to
 scan very much of files before it can stop because it finds an eid
 value larger than any eid in the other table.  So the issue here is an
 inexact stats value for the max eid.

 I wondered if it could be something like that, but I rejected that idea, as
 it obviously wasn't the real-world case, and statistics should at least get
 that right, if they are up to date.

 I changed stats target to 1000 for that field and still get the bad plan.

 What do the stats say the max values are?

 5277063,5423043,13843899 (I think).

 # select count(distinct eid) from files;
  count
 -------
    365
 (1 row)

 # select count(*) from files;
   count
 ---------
  3793748

 A followup: of those rows,

 select count(*) from files where eid is null;
   count
 ---------
  3793215

 are null.

So, Tom, do you think it's possible that the planner isn't noticing
all those nulls and thinks it'll just take a row or two to get to the
value it needs to join on?
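
For what it's worth, the planner does keep a per-column null fraction;
whether it reflects the ~99.99% of NULL eids here can be checked with
something like:

  -- null_frac should be close to 1.0 (3793215 of 3793748 rows are NULL)
  SELECT null_frac, n_distinct FROM pg_stats
   WHERE tablename = 'files' AND attname = 'eid';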


[PERFORM] merge join killing performance

2010-05-18 Thread Scott Marlowe
Machine: 8-core AMD Opteron 2.1GHz, 12-disk RAID-10, 2-disk pg_xlog,
RHEL 5.4, PostgreSQL 8.3.9 (upgrading soon to 8.3.11 or so)

This query:
SELECT sum(f.bytes) AS sum FROM files f INNER JOIN events ev ON f.eid
= ev.eid WHERE ev.orgid = 969677;

is choosing a merge join, which never returns from EXPLAIN ANALYZE (it
might after 10 or so minutes, but I'm not beating up my production
server over it):

 Aggregate  (cost=902.41..902.42 rows=1 width=4)
   ->  Merge Join  (cost=869.97..902.40 rows=1 width=4)
         Merge Cond: (f.eid = ev.eid)
         ->  Index Scan using files_eid_idx on files f
               (cost=0.00..157830.39 rows=3769434 width=8)
         ->  Sort  (cost=869.52..872.02 rows=1002 width=4)
               Sort Key: ev.eid
               ->  Index Scan using events_orgid_idx on events ev
                     (cost=0.00..819.57 rows=1002 width=4)
                     Index Cond: (orgid = 969677)


If I turn off mergejoin it's fast:

explain analyze SELECT sum(f.bytes) AS sum FROM files f INNER JOIN
events ev ON f.eid = ev.eid WHERE ev.orgid = 969677;

                                   QUERY PLAN
------------------------------------------------------------------------------
 Aggregate  (cost=3653.28..3653.29 rows=1 width=4) (actual time=1.541..1.541 rows=1 loops=1)
   ->  Nested Loop  (cost=0.00..3653.28 rows=1 width=4) (actual time=1.537..1.537 rows=0 loops=1)
         ->  Index Scan using events_orgid_idx on events ev  (cost=0.00..819.57 rows=1002 width=4) (actual time=0.041..0.453 rows=185 loops=1)
               Index Cond: (orgid = 969677)
         ->  Index Scan using files_eid_idx on files f  (cost=0.00..2.82 rows=1 width=8) (actual time=0.005..0.005 rows=0 loops=185)
               Index Cond: (f.eid = ev.eid)
 Total runtime: 1.637 ms
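
Turning off mergejoin above presumably means the planner GUC
enable_mergejoin; a session-local sketch:

  SET enable_mergejoin = off;   -- discourage merge joins for this session only
  EXPLAIN ANALYZE SELECT sum(f.bytes) AS sum FROM files f
    INNER JOIN events ev ON f.eid = ev.eid WHERE ev.orgid = 969677;
  RESET enable_mergejoin;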

I've played around with random_page_cost.  All the other things you'd
expect, like effective_cache_size, are set rather large (it's a server
with 32GB of RAM and a 12-disk RAID-10), and no setting of
random_page_cost forces it to choose the non-merge-join plan.
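
Such experiments are usually session-local as well; a sketch (the value
is purely illustrative):

  SET random_page_cost = 2.0;   -- default is 4.0; lower makes index scans look cheaper
  -- re-run the EXPLAIN here, then:
  RESET random_page_cost;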

Anybody with any ideas, I'm all ears.


Re: [PERFORM] merge join killing performance

2010-05-18 Thread Matthew Wakeling

On Tue, 18 May 2010, Scott Marlowe wrote:

> Aggregate  (cost=902.41..902.42 rows=1 width=4)
>   ->  Merge Join  (cost=869.97..902.40 rows=1 width=4)
>         Merge Cond: (f.eid = ev.eid)
>         ->  Index Scan using files_eid_idx on files f
>               (cost=0.00..157830.39 rows=3769434 width=8)

Okay, that's weird. How is the cost of the merge join only 902, when the
cost of one of the branches is 157830 and there is no LIMIT?

Are the statistics up to date?

Matthew

--
As you approach the airport, you see a sign saying Beware - low
flying airplanes. There's not a lot you can do about that. Take 
your hat off?  -- Michael Flanders


Re: [PERFORM] merge join killing performance

2010-05-18 Thread Scott Marlowe
On Tue, May 18, 2010 at 9:00 PM, Matthew Wakeling matt...@flymine.org wrote:
 On Tue, 18 May 2010, Scott Marlowe wrote:

 Aggregate  (cost=902.41..902.42 rows=1 width=4)
   ->  Merge Join  (cost=869.97..902.40 rows=1 width=4)
         Merge Cond: (f.eid = ev.eid)
         ->  Index Scan using files_eid_idx on files f
               (cost=0.00..157830.39 rows=3769434 width=8)

 Okay, that's weird. How is the cost of the merge join only 902, when the
 cost of one of the branches is 157830 and there is no LIMIT?

 Are the statistics up to date?

Yep.  The EXPLAIN ANALYZE shows it being close enough that it should
guess right (I think).  We have the default stats target set to 200,
and the table is regularly analyzed by autovacuum, which now has much
smaller settings for threshold and scale factor than the defaults, to
handle these big tables.
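
On 8.3 the per-table autovacuum overrides live in the pg_autovacuum
system catalog; from 8.4 on they are storage parameters.  A sketch of
the latter form (the values are made up):

  ALTER TABLE files SET (
      autovacuum_analyze_scale_factor = 0.02,  -- analyze after ~2% of rows change
      autovacuum_analyze_threshold    = 1000
  );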
