On 26/2/26 17:22, Attila Soki wrote:
On 24 Feb 2026, at 20:20, Andrei Lepikhov <[email protected]> wrote:
On 24/2/26 17:48, Attila Soki wrote:
On 24 Feb 2026, at 16:57, Andrei Lepikhov <[email protected]> wrote:
On 24/2/26 16:50, Attila Soki wrote:
I can provide more details on other parts of the query too, if that helps.
At the moment, only the query and a reproduction would let me dig
deeper.
It looks like we have managed to reproduce the potential 'Hash over
parameterised subtree' issue. Please check the attachment: there are two
plans. One plan has a longer execution time and more blocks hit, but its
cost estimate is four times lower. The EXPLAIN output does not show any
obvious estimation errors. This suggests there may be a bug in the cost
model.
I looked at your repro and tried to find the corresponding part in my query.
If that is not the right place, please point me to the relevant part of the
explain output, so I can compare your repro with that part of my query.
No problem. The issue in your query plan starts at exactly the following
line:
-> Hash Right Join (cost=210369.25..210370.30 rows=8 width=99)
(actual time=150.790..150.853 rows=44.56 loops=21798)
Schema of this part of the query tree is as the following:
Hash Right Join (loops=21798)
│
├─ [Left/Probe] GroupAggregate (loops=14426)
│ └─ Merge Right Anti Join
│ └─ Merge Join
│ └─ Index Only Scan on table_k gkal_2 (loops=14426)
│
└─ [Right/Build = Hash] Nested Loop (loops=21798)
├─ Index Scan on table_o goftr_1 (loops=21798)
│ Index Cond: goftr_1.au_id = gauf_1.id
└─ Index Scan on table_k gkal_1
Index Cond: gkal_1.oo_id = goftr_1.id
So, the hash table is rebuilt on each rescan because the external
parameter 'gauf_1.id' changes.
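For anyone following along, the problematic shape can be sketched with a
minimal query. The table and column names below are illustrative only
(they mimic the aliases in the plan above, not the real schema), and
this is a rough sketch of the pattern, not the original query:

```sql
-- Hypothetical sketch of the 'Hash over parameterised subtree' shape.
-- The RIGHT JOIN's build side depends on the outer parameter (o.id),
-- so for every rescan (every outer row) the hash table must be
-- rebuilt instead of being built once and reused.
SELECT sub.*
FROM table_a o,                               -- outer side, supplies o.id
     LATERAL (
        SELECT agg.grp, par.val
        FROM (SELECT k.grp, count(*) AS cnt   -- aggregate (probe side)
              FROM table_k k
              GROUP BY k.grp) agg
        RIGHT JOIN (
              SELECT k2.grp AS val
              FROM table_o t
              JOIN table_k k2 ON k2.oo_id = t.id
              WHERE t.au_id = o.id            -- external parameter
        ) par ON par.val = agg.grp            -- hash built over par
     ) sub;
```

If the planner hashes the parameterised side here, the hash build cost
is paid once per outer loop, which can make the plan far more expensive
than its estimate suggests.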
Without the query itself, it is hard to say exactly what triggers this
problem. With a reproduction, we could use planner-advising extensions
and see how additional knowledge of the true cardinalities rebuilds the
query plan. Sometimes an additional LATERAL restriction, added by the
planner when pulling up a subplan, badly restricts the join search
scope, but I doubt we have that type of problem here.
--
regards, Andrei Lepikhov,
pgEdge