On Thu, 21 Apr 2005 10:44 pm, Shoaib Burq (VPAC) wrote:
> ->  Nested Loop  (cost=2.19..1069345.29 rows=16 width=58) (actual
> time=135.390..366902.373 rows=13276368 loops=1)
>                      ->  Nested Loop  (cost=2.19..1067304.07 rows=44
> width=68) (actual time=107.627..186390.137 rows=13276368 loops=1)
>                          ->  Nested Loop  (cost=2.19..1067038.94 rows=44
> width=52) (actual time=87.255..49743.796 rows=13276368 loops=1)
OUCH, OUCH, OUCH.
Most if not all of the time is going on nested loop joins. The tuple estimates
are off by a factor of 10^6, which means it's choosing the wrong
join type.
You could set enable_seqscan to off to test what the performance is like with
a different plan, and then set it back on.
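Something along these lines in a psql session; the SELECT here is just a
stand-in for your real query:

  -- disable sequential scans for this session only, to see what the
  -- planner does without them
  SET enable_seqscan = off;
  EXPLAIN ANALYZE SELECT ... ;  -- your original query goes here
  -- restore the default once you've compared the plans
  SET enable_seqscan = on;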
However, you really need to get the row count estimates up to something
comparable, within a factor of 10 at least.
A number of the other row estimates seem to be off by a fair amount too.
You may want to bump up the statistics on the relevant
columns. I can't tell which they are from looking at that; I probably should be
able to, but it's late.
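Roughly like this, where the table and column names are only placeholders for
whichever columns actually appear in the join conditions:

  -- raise the per-column statistics target above the default
  ALTER TABLE some_table ALTER COLUMN some_column SET STATISTICS 100;
  -- re-gather stats so the planner sees the more detailed histogram
  ANALYZE some_table;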
If you get the stats up to something near the real values, then the planner
will choose a different plan, which should give a huge performance
increase.
Regards
Russell Smith.