On Wed, Aug 28, 2024 at 9:01 PM Robert Haas <robertmh...@gmail.com> wrote:
> On Tue, Aug 27, 2024 at 11:57 PM Tender Wang <tndrw...@gmail.com> wrote:
> > I haven't looked at all of them. I just picked a few simple plan tests
> > (e.g. 19.sql, 45.sql).
> > For example, on 19.sql, eager agg pushdown doesn't get a large gain, but
> > shows a little performance regression.
>
> Yeah, this is one of the things I was worried about in my previous
> reply to Richard. It would be worth Richard, or someone, probing into
> exactly why that's happening. My fear is that we just don't have good
> enough estimates to make good decisions, but there might well be
> another explanation.

Sorry it took me some time to switch back to this thread.

I revisited the part about cost estimates for grouped paths in this
patch, and I found a big issue: the row estimate for a join path could
be significantly inaccurate if there is a grouped join path beneath
it.

The reason is that it is very tricky to set the size estimates for a
grouped join relation.  For a non-grouped join relation, we know that
all its paths have the same rowcount estimate (well, in theory).  But
this is not true for a grouped join relation.  Suppose we have a
grouped join relation for the t1/t2 join.  There might be two paths
for it:

Aggregate
    -> Join
        -> Scan on t1
        -> Scan on t2

Or

Join
    -> Scan on t1
    -> Aggregate
        -> Scan on t2

These two paths can have very different rowcount estimates, and we
have no way of knowing which one to set for this grouped join
relation, because we do not know which path would be picked in the
final plan.  This issue can be illustrated with the query below.
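
(To put rough numbers on it using the example below: the first shape
ends up estimated at about 10 rows, i.e. the number of t2.b groups,
while the second is sized as a join whose inner side has already been
aggregated, which can easily differ from that by orders of magnitude.)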

create table t (a int, b int, c int);
insert into t select i%10, i%10, i%10 from generate_series(1,1000)i;
analyze t;

set enable_eager_aggregate to on;

explain (costs on)
select sum(t2.c) from t t1 join t t2 on t1.a = t2.a
  join t t3 on t2.b = t3.b group by t3.a;
                                      QUERY PLAN
---------------------------------------------------------------------------------------
 Finalize HashAggregate  (cost=6840.60..6840.70 rows=10 width=12)
   Group Key: t3.a
   ->  Nested Loop  (cost=1672.00..1840.60 rows=1000000 width=12)
         Join Filter: (t2.b = t3.b)
         ->  Partial HashAggregate  (cost=1672.00..1672.10 rows=10 width=12)
               Group Key: t2.b
               ->  Hash Join  (cost=28.50..1172.00 rows=100000 width=8)
                     Hash Cond: (t1.a = t2.a)
                     ->  Seq Scan on t t1  (cost=0.00..16.00 rows=1000 width=4)
                     ->  Hash  (cost=16.00..16.00 rows=1000 width=12)
                      ->  Seq Scan on t t2  (cost=0.00..16.00 rows=1000 width=12)
         ->  Materialize  (cost=0.00..21.00 rows=1000 width=8)
               ->  Seq Scan on t t3  (cost=0.00..16.00 rows=1000 width=8)
(13 rows)

Look at the Nested Loop node:

   ->  Nested Loop  (cost=1672.00..1840.60 rows=1000000 width=12)

How can a 10-row outer path joined to a 1000-row inner path generate
1000000 rows?  (Given the roughly 0.1 selectivity of the t2.b = t3.b
join filter, one would expect an estimate of about 1000 rows.)  This
happens because we are using the plan shape of the first path
described above together with the rowcount estimate of the second
path.  What a kluge!

To address this issue, one solution I'm considering is to recalculate
the rowcount estimate for a grouped join path from its own outer and
inner paths.  While this may seem expensive, it might not be that bad,
since we cache the results of the selectivity calculations.  In fact,
this is already the approach we take for parameterized join paths (see
get_parameterized_joinrel_size).
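
Very roughly, I have in mind something like the sketch below.
(get_grouped_joinrel_size is a made-up name; it would live in
costsize.c next to get_parameterized_joinrel_size so that it can
reuse the static calc_joinrel_size_estimate.)

double
get_grouped_joinrel_size(PlannerInfo *root, RelOptInfo *rel,
                         Path *outer_path,
                         Path *inner_path,
                         SpecialJoinInfo *sjinfo,
                         List *restrict_clauses)
{
    /*
     * Estimate the join size from the rowcounts of the given input
     * paths, rather than trusting the single rows field stored in the
     * grouped joinrel.  Selectivity results are cached in the
     * RestrictInfo nodes, so calling this once per grouped join path
     * should not be too expensive.
     */
    return calc_joinrel_size_estimate(root, rel,
                                      outer_path->parent,
                                      inner_path->parent,
                                      outer_path->rows,
                                      inner_path->rows,
                                      sjinfo,
                                      restrict_clauses);
}

The join path's rows field would then be set from this value whenever
either input is grouped, instead of being copied from the joinrel.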

Any thoughts on this?

Thanks
Richard

