Tom Lane-2 wrote
> Jake Magner <jakemagner90@...> writes:
>> I tried without doing an INSERT at all, just running the SELECT queries
>> and the result is the same. Nested loop is chosen but is much slower.
>
> FWIW, I just noticed that the compariso
I tried without doing an INSERT at all, just running the SELECT queries and
the result is the same. Nested loop is chosen but is much slower.
I am trying to insert rows that don't already exist from a temp table into
another table. I am using a LEFT JOIN on all the columns and checking for
nulls in the base table to know which rows to insert. The problem is that
the planner is choosing a nested loop plan which is very slow over the much
faster hash join.
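A minimal sketch of that pattern, with made-up table and column names:

    -- Insert only the temp-table rows that have no match in the base table.
    -- The LEFT JOIN is on every column; a NULL on the base side marks a
    -- missing row (assumes col_a is NOT NULL in base_table).
    INSERT INTO base_table (col_a, col_b)
    SELECT t.col_a, t.col_b
    FROM temp_table t
    LEFT JOIN base_table b
      ON b.col_a = t.col_a AND b.col_b = t.col_b
    WHERE b.col_a IS NULL;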
I think I understand now after reading the notes here on the merge join
algorithm:
https://github.com/postgres/postgres/blob/4ea51cdfe85ceef8afabceb03c446574daa0ac23/src/backend/executor/nodeMergejoin.c
The index scanning node doesn't know the max id of the vehicle table and so
can't know when to stop.
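If that is what happens, it should be visible in the actual row and buffer
counts: the index scan under the merge join keeps reading past the last
possible match. A sketch of how to check (the query shape and the staging
table name are assumptions):

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT ...
    FROM staging s
    LEFT JOIN vehicle v ON v.id = s.id;
    -- Compare the index scan's actual rows and buffers against the number
    -- of matching rows; a large gap means the scan ran past the end.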
Thanks Tom, that sounds like what is happening. Some additional
comments/questions inline.
Tom Lane-2 wrote
> I think what must be happening is that the planner notes the maximum
> possible value of v.id and supposes that the mergejoin will stop far short
> of completion because v.id spans just a
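For reference, the statistics the planner reasons from can be inspected
directly; a sketch, assuming the vehicle table from earlier in the thread:

    -- The histogram's upper bound is what the planner takes as the largest
    -- likely value of vehicle.id (recent versions also probe the index for
    -- the actual extreme value when costing a mergejoin).
    SELECT histogram_bounds
    FROM pg_stats
    WHERE tablename = 'vehicle' AND attname = 'id';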
random_page_cost = 4
seq_page_cost = 1
Regardless of the choice to use the index scan and random access to the
rows, how come in the second query with the freq > -1 condition, it accesses
far fewer pages with the same index scan even though no rows are filtered
out?
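One quick way to see how much the plan choice hinges on those settings is to
change them for the session only; a sketch:

    -- Lower random_page_cost toward seq_page_cost (reasonable when random
    -- reads are cheap) and re-plan the same query.
    SET random_page_cost = 1.1;
    EXPLAIN ANALYZE ... ;  -- the query under test
    RESET random_page_cost;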
Thanks
I am having problems with a join where the planner picks a merge join and an
index scan on one of the tables. Manually disabling merge joins and running
the query both ways shows the merge join takes over 10 seconds while a hash
join takes less than 100ms. The planner total cost estimate favors the
merge join.
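For anyone reproducing this, the disabling is per-session; a sketch of the
comparison:

    SET enable_mergejoin = off;   -- planner falls back to the hash join
    EXPLAIN ANALYZE ... ;         -- the join under test
    RESET enable_mergejoin;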