Analyzing did not help. Here is the output of EXPLAIN ANALYZE for the update query:

"Seq Scan on dummy  (cost=0.00..56739774.24 rows=23441 width=275) (actual
time=18.927..577929.014 rows=22712 loops=1)"
"  SubPlan"
"    ->  Aggregate  (cost=2420.41..2420.43 rows=1 width=19) (actual time=
25.423..25.425 rows=1 loops=22712)"
"          ->  Seq Scan on dummy "temp"  (cost=0.00..2416.01 rows=586
width=19) (actual time=0.049..17.834 rows=2414 loops=22712)"
"                Filter: ((product)::text = ($0)::text)"
"Total runtime: 578968.885 ms"


On Feb 11, 2008 9:59 PM, Tom Lane <[EMAIL PROTECTED]> wrote:

> "Linux Guru" <[EMAIL PROTECTED]> writes:
> > We have a large datawarehouse stored in postgres and temp tables are
> created
> > based on user query. The process of temp table creation involves
> selecting
> > data from main fact table, this includes several select and update
> > statements and one of the following update statement is having
> performance
> > issues.
>
> Try ANALYZEing the temp table before the step that's too slow.
>
> If that doesn't help, let's see EXPLAIN ANALYZE (not just EXPLAIN)
> output.
>
>                        regards, tom lane
>