Externalizing the LIMIT has improved the speed a lot: DISTINCT is half a
second faster than GROUP BY.
http://explain.depesz.com/s/vP1
with tmp as (
select distinct tr.nr as tnr
, tr.time_end as tend
, c.id_board as cb
, c.id_board_mini as cbm
, ti.id_test_result as itr
from test_item ti
, test_r
But why? Is there a way to force the planner into this?
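For comparison, the GROUP BY form the timing refers to would group on every
selected column. This is a sketch only, reusing the aliases from the
truncated query above (the remaining joins are elided, as in the original
message):

```sql
-- Equivalent GROUP BY form of the DISTINCT CTE above (sketch;
-- same tables/aliases as the original, joins elided):
with tmp as (
    select tr.nr             as tnr,
           tr.time_end       as tend,
           c.id_board        as cb,
           c.id_board_mini   as cbm,
           ti.id_test_result as itr
    from test_item ti
    -- ... remaining from/join clauses as in the original query ...
    group by tr.nr, tr.time_end, c.id_board, c.id_board_mini,
             ti.id_test_result
)
select * from tmp;
```

As for steering the planner: PostgreSQL has no direct hints, but the
aggregation strategy can be compared by toggling planner GUCs for a single
session, e.g. `set enable_hashagg = off;` before re-running EXPLAIN ANALYZE,
to see whether a sort-based plan was the source of the difference.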
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/Slow-query-where-am-I-going-wrong-tp5730015p5730151.html
Sent from the PostgreSQL - performance mailing list archive at Nabble.com.
--
Sent via pgsql-performance mailing list
Much better...
http://explain.depesz.com/s/uFi
Albe Laurenz *EXTERN* wrote:
> Increasing the statistics for test_result.id_recipe_version
> had no effect?
>
>> I am going to normalize the table some more before partitioning.
>
> How do you think that partitioning will help?
I increased the statistics in steps up to 5000 (running VACUUM ANALYZE each
time), with only a marginal improvement.
http://explain.depesz.com/s/y63
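For reference, raising the per-column statistics target and re-collecting
stats looks like this (column name taken from the quoted suggestion; the
table name is assumed to be test_result):

```sql
-- Raise the per-column statistics target (default 100) and
-- re-gather statistics so the planner sees the new histogram:
alter table test_result
    alter column id_recipe_version set statistics 5000;
vacuum analyze test_result;
```

Note that a larger target makes ANALYZE slower and the pg_statistic rows
bigger, so it is worth stepping back down once it is clear it does not help.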
I am going to normalize the table some more before partitioning.
Thanks very much Laurenz.
I'll put your suggestions into motion right away and let you know the
results.
Albe Laurenz *EXTERN* wrote:
> BTW, you seem to have an awful lot of indexes defined, some
> of which seem redundant.
I am in the process of pruning unused/useless indexes on this database -
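A common starting point for that pruning is the pg_stat_user_indexes view,
which shows how often each index has actually been scanned since the last
statistics reset. A sketch (be sure the stats cover a representative
workload period, and keep indexes that back unique or primary-key
constraints even if idx_scan is 0):

```sql
-- Candidate unused indexes: never scanned since the last stats reset.
-- Cross-check against constraint-backing indexes before dropping any.
select schemaname, relname, indexrelname, idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) as index_size
from pg_stat_user_indexes
where idx_scan = 0
order by pg_relation_size(indexrelid) desc;
```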