Pallav Kalva <[EMAIL PROTECTED]> writes:
>    I am having a problem optimizing this query,

Get rid of the un-optimizable function inside the view.  You've
converted something that should be a join into an unreasonably large
number of function calls.
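
For example (a sketch only; the real view definition, function signature,
and parameter table weren't posted, so the names below are guesses): if
get_parametervalue(si.id, 'state') just fetches one value from a parameter
table, the view can express that lookup as a join the planner can actually
work with:

    -- per-row function calls the planner cannot see into:
    CREATE VIEW serviceinstance_v AS
    SELECT si.id,
           get_parametervalue(si.id, 'state')    AS state,
           get_parametervalue(si.id, 'areacode') AS areacode
    FROM serviceinstance si;

    -- the same thing as ordinary joins, which can be planned and indexed:
    CREATE VIEW serviceinstance_v AS
    SELECT si.id,
           p1.value AS state,
           p2.value AS areacode
    FROM serviceinstance si
    LEFT JOIN parametervalue p1 ON p1.instanceid = si.id AND p1.name = 'state'
    LEFT JOIN parametervalue p2 ON p2.instanceid = si.id AND p2.name = 'areacode';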

>                     ->  Seq Scan on serviceinstance si  (cost=0.00..15831.52 rows=7 width=78) (actual time=174.430..18948.210 rows=358 loops=1)
>                           Filter: (((subplan) = 'FL'::text) AND ((subplan) = '099'::text))
>                           SubPlan
>                             ->  Result  (cost=0.00..0.01 rows=1 width=0) (actual time=0.090..0.093 rows=1 loops=3923)
>                             ->  Result  (cost=0.00..0.01 rows=1 width=0) (actual time=0.058..0.061 rows=1 loops=265617)

The bulk of the cost here is in the second subplan (0.061 * 265617 =
16202.637 msec total runtime), and there's not a darn thing Postgres
can do to improve this because the work is all down inside a "black box"
function.  In fact the planner does not even know that the function call
is expensive, else it would have preferred a plan that requires fewer
evaluations of the function.  The alternative plan you show is *not*
faster "because it's an indexscan"; it's faster because get_parametervalue
is evaluated fewer times.

The useless sub-SELECTs atop the function calls are adding their own
little increment of wasted time, too.  I'm not sure how bad that is
relative to the function calls, but it's certainly not helping.
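
For example (again guessing at the view text, which wasn't posted), a
column defined as

    (SELECT get_parametervalue(si.id, 'state'))

is what produces the extra Result/SubPlan nodes shown above, while writing
it as a plain function call

    get_parametervalue(si.id, 'state')

drops that extra layer of evaluation on every row.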

                        regards, tom lane
