"Guillaume Smet" <[EMAIL PROTECTED]> writes:
> While testing RC2 on the new servers of one of our customers, I found
> a query really slow on the new server which is quite fast on the old
> box currently in production (production is 8.1 at the moment).

Have you ANALYZEd these tables lately?  It looks like 8.3 likes this plan
because it thinks that the ranges of el.numasso and a.numasso are largely
non-overlapping:

>                ->  Merge Join  (cost=7.55..2905.50 rows=65 width=68) (actual time=5138.556..8106.465 rows=36 loops=1)
>                      Merge Cond: (el.numasso = a.numasso)
>                      ->  Nested Loop  (cost=0.00..254537.64 rows=90 width=37) (actual time=5137.405..8104.863 rows=36 loops=1)
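
If the tables haven't been ANALYZEd recently, re-running it on both is
the first thing to try; the table names here are only guesses from the
aliases el and a:

    ANALYZE element;    -- whatever table the "el" alias refers to
    ANALYZE asso;       -- whatever table the "a" alias refers to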

The only way the merge join could have an estimated cost that's barely
1% of the estimate for one of its inputs is if the planner thinks the
merge will stop after reading only 1% of that input, i.e., the largest
a.numasso value is only about 1% of the way through the range of
el.numasso.  If the a.numasso distribution has a long tail, you might
need to raise the statistics target to fix this estimate.
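
For instance, something along these lines would raise the per-column
target well above 8.3's default of 10 and refresh the stats (table name
assumed, as above):

    ALTER TABLE asso ALTER COLUMN numasso SET STATISTICS 1000;
    ANALYZE asso;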

I'd expect 8.1 to make about the same estimate given the same stats,
so I think it's not looking at the same stats.
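
A quick way to check is to compare what pg_stats reports for numasso on
each server (again assuming the table name):

    SELECT n_distinct, histogram_bounds
      FROM pg_stats
     WHERE tablename = 'asso' AND attname = 'numasso';

If the two servers show materially different histograms, that would
explain the different plans.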

                        regards, tom lane
