On 5/5/07, Tom Lane <[EMAIL PROTECTED]> wrote:

"Aaron Bono" <[EMAIL PROTECTED]> writes:
> 9.                           ->  Seq Scan on branch  (cost=0.00..4.72rows=1
> width=1281) (actual time=130129.988..157492.057 rows=1 loops=1)
> 10.                                Filter: ((start_day <= now()) AND
> ((end_day IS NULL) OR (end_day >= now())) AND (branch_id =
> get_branch_for_zip('22151'::character varying)))

There is something *awfully* wacko about that entry --- the fact that
the cost estimate is less than 5 units means that the planner thinks
there's 4 or fewer pages; either that's way wrong or the
get_branch_for_zip function is taking enormous amounts of time per row.
Have you tried timing that function on its own?
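For example, one quick way to time it in isolation (a sketch; \timing is
psql's client-side timer, and the argument just matches the call shown in
the plan above):

    \timing
    SELECT get_branch_for_zip('22151'::character varying);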

One possible reason for the performance difference is if you have
get_branch_for_zip marked as stable in one database and volatile in the
other --- volatile would prevent it from being used in an indexqual as
you'd like.
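For reference, a minimal sketch of the two ways to mark it (the placeholder
body and integer return type are assumptions here; ALTER FUNCTION ... STABLE
needs PostgreSQL 8.1 or later):

    -- declare the volatility at creation time
    CREATE OR REPLACE FUNCTION get_branch_for_zip(p_zip varchar)
    RETURNS integer AS $$
        SELECT 1;  -- placeholder; the real zip-to-branch lookup goes here
    $$ LANGUAGE sql STABLE;

    -- or change it on an existing function without recreating it
    ALTER FUNCTION get_branch_for_zip(varchar) STABLE;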


I verified the call counts by putting a RAISE NOTICE in the function.  The
fast schema runs the function twice (odd; I would have thought it would run
only once).  The slow schema runs it 30 times (the number of records
returned, plus one).  I created the function as STABLE in both schemas and
even dropped and recreated it.  I also checked with EMS Manager, and it
shows the function's DDL in the database as STABLE.  Is there something
more I can do to tell PostgreSQL that I really did mean STABLE?
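One way to see exactly what each schema's copy is marked as, straight from
the system catalogs (a sketch; provolatile is 'i', 's', or 'v' for
immutable, stable, or volatile):

    SELECT n.nspname AS schema, p.proname, p.provolatile
    FROM pg_proc p
    JOIN pg_namespace n ON n.oid = p.pronamespace
    WHERE p.proname = 'get_branch_for_zip';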

Thanks for all the help,
Aaron

--
==================================================================
  Aaron Bono
  Aranya Software Technologies, Inc.
  http://www.aranya.com
  http://codeelixir.com
==================================================================
