On 2020-Aug-19, David Rowley wrote:

> Andres' suggestion:
> 
> regression=# explain (analyze, costs off, timing off, summary off)
> select count(*) from tenk1 t1 inner join tenk1 t2 on
> t1.twenty=t2.unique1;
>                                       QUERY PLAN
> ---------------------------------------------------------------------------------------
>  Aggregate (actual rows=1 loops=1)
>    ->  Nested Loop (actual rows=10000 loops=1)
>          Cache Key: t1.twenty  Hits: 9980  Misses: 20  Evictions: 0  Overflows: 0
>          ->  Seq Scan on tenk1 t1 (actual rows=10000 loops=1)
>          ->  Index Scan using tenk1_unique1 on tenk1 t2 (actual rows=1 loops=20)
>                Index Cond: (unique1 = t1.twenty)
> (6 rows)

I think it doesn't look terrible in the SubPlan case -- it kinda makes
sense there -- but for a nested loop it looks really strange.
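
For contrast, here's a sketch of how the separate-node form would read
(node name and counter layout are my approximation, not verbatim output
from the patch):

 Aggregate (actual rows=1 loops=1)
   ->  Nested Loop (actual rows=10000 loops=1)
         ->  Seq Scan on tenk1 t1 (actual rows=10000 loops=1)
         ->  Result Cache (actual rows=1 loops=10000)
               Cache Key: t1.twenty
               Hits: 9980  Misses: 20  Evictions: 0  Overflows: 0
               ->  Index Scan using tenk1_unique1 on tenk1 t2 (actual rows=1 loops=20)
                     Index Cond: (unique1 = t1.twenty)

There the counters hang off the node that owns the cache, rather than
being wedged into the join node's lines.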

On the performance aspect, I wonder what the overhead is, particularly
considering Tom's point that this makes these nodes more expensive even
in cases where no caching happens.  And also, as the JIT saga continues,
aren't we going to get plan trees compiled too, at which point the
overhead won't matter much?
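
If we want numbers for that, a crude first measurement (this assumes
the patch's enable_resultcache GUC -- I may have the name wrong) is to
time the same query with and without the cache:

  SET enable_resultcache = off;
  EXPLAIN (ANALYZE, TIMING ON)
    SELECT count(*) FROM tenk1 t1
    INNER JOIN tenk1 t2 ON t1.twenty = t2.unique1;

  SET enable_resultcache = on;
  EXPLAIN (ANALYZE, TIMING ON)
    SELECT count(*) FROM tenk1 t1
    INNER JOIN tenk1 t2 ON t1.twenty = t2.unique1;

Though for Tom's concern specifically, a GUC comparison isn't enough:
the cost he's worried about would be paid even when caching is disabled,
so that really calls for comparing patched and unpatched builds on a
plan that does no caching at all.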

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

