On 10/27/2011 11:10 PM, Tom Lane wrote:
CS DBA <cs_...@consistentstate.com> writes:
I have code that drops a table, re-creates it (based on a long set of
joins), and then re-creates the indexes.
It runs via psql in about 10 seconds. I took the code and simply
wrapped it in a plpgsql function, and the function version takes almost
60 seconds to run.
I always thought that functions should run faster than psql... am I wrong?
Did you really just put the identical queries into a function, or did
you parameterize them with values passed to the function?

Parameterized queries are often slower due to the planner not knowing
the specific constant values that are used in the actual calls.  There's
some work done for 9.2 to improve that, but in existing releases you
typically have to construct dynamic queries and EXECUTE them if you run
into this problem.

                        regards, tom lane
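
A minimal sketch of the dynamic-query workaround Tom describes, assuming a
hypothetical helper count_active() against the tab1 table shown below;
because the value is interpolated into the statement text, the planner sees
the actual constant when the statement is planned at EXECUTE time:

CREATE OR REPLACE FUNCTION count_active(p_active boolean) RETURNS bigint AS $$
DECLARE
    result bigint;
BEGIN
    -- Sketch only: build the statement as text so the planner sees the
    -- actual constant rather than a generic parameter placeholder
    EXECUTE 'SELECT count(*) FROM tab1 WHERE active = '
            || quote_literal(p_active)
        INTO result;
    RETURN result;
END;
$$ LANGUAGE plpgsql;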


No parameters; one of them looks like this:


CREATE OR REPLACE FUNCTION refresh_xyz_view_m() RETURNS TRIGGER AS $$
BEGIN
    -- Rebuild the summary table from scratch
    DROP TABLE xyz_view_m;
    CREATE TABLE xyz_view_m AS
    SELECT
        pp.id, pp.name, pp.description, pp.tariff_url,
        ppe.account_id, pp.active, ppe.time_zone
    FROM
        tab1 pp, enrollment ppe
    WHERE
        pp.id = ppe.pp_id AND pp.active;

    -- Recreate the index on the freshly built table
    CREATE INDEX xyz_m_id_idx ON xyz_view_m (id);

    -- Refresh planner statistics for the new table
    ANALYZE xyz_view_m;

    RETURN NULL;
END;
$$ LANGUAGE plpgsql;
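
For completeness, a sketch of how a trigger function like this might be
wired up; the trigger name, target table (enrollment), and firing events
are assumptions:

CREATE TRIGGER refresh_xyz_view_m_trg
    AFTER INSERT OR UPDATE OR DELETE ON enrollment
    FOR EACH STATEMENT
    EXECUTE PROCEDURE refresh_xyz_view_m();

A statement-level AFTER trigger fires once per statement, so the table is
rebuilt at most once per modifying statement rather than once per row.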

--
---------------------------------------------
Kevin Kempter       -       Consistent State
A PostgreSQL Professional Services Company
          www.consistentstate.com
---------------------------------------------
