Hi Fernando,
I ran into something similar ... with hard-coded queries in a function
that ends up getting cached. My solution was to store the referenced
table in a variable and, as you suggested, use EXECUTE to dynamically
build up and run the query.
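For what it's worth, a minimal sketch of that approach might look like the following (the table, column, and function names here are my own invention, not from your setup):

```sql
-- Sketch of the EXECUTE workaround: the referenced table is kept in a
-- variable and the statement is assembled as text, so it is planned
-- fresh on each call under the session's current search_path.
CREATE OR REPLACE FUNCTION add_sale(p_product integer, p_qty integer)
RETURNS void AS $$
DECLARE
    tbl text := 'sales';  -- unqualified name, resolved via search_path
BEGIN
    EXECUTE 'INSERT INTO ' || quote_ident(tbl)
         || ' (product_id, qty) VALUES ('
         || p_product || ', ' || p_qty || ')';
END;
$$ LANGUAGE plpgsql;
```

quote_ident() protects the identifier; if you interpolate text values rather than integers, run them through quote_literal() as well.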
Cheers,
Kevin
Fernando Moreno wrote:
2008/5/26 Gurjeet Singh <[EMAIL PROTECTED]>:
On Mon, May 26, 2008 at 9:49 PM, Fernando Moreno
<[EMAIL PROTECTED]> wrote:
Hi everyone. A few months ago I was still using PostgreSQL 8.2
and had the problem described here:
http://www.postgresql.org/docs/faqs.FAQ.html#item4.19 . At the
time I solved it by using EXECUTE for every statement that
accessed temporary tables. Now I'm on 8.3; the scenario is a
little different, but the problem is the same. I have many
schemas with identical structure (tables, views and one
trigger), plus two functions in the public schema that insert
and delete data from them; the INSERT and DELETE statements are
hard-coded. Every schema represents a store of the same company.
The idea is that just by changing search_path to something like
"schema1,public", the functions can be executed and data
processed for any schema (one at a time). But here is the
problem: through the client app, a user invokes one of these
functions on a given schema (schema1), then requests a "store
change", which actually sets search_path to another schema
(schema2), and executes one of the functions again. BUT the
function still appears to be bound to the first schema, so new
records are added to the wrong schema and delete operations
don't find the right rows. EXECUTE will save the day again, but
I'd like to know whether this is considered a known bug, given
that it was apparently fixed.
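As an illustration of the behaviour being described, a cut-down reproduction might look like this (all object names invented for the example):

```sql
-- Two "store" schemas with the same structure:
CREATE SCHEMA schema1; CREATE TABLE schema1.items (id integer);
CREATE SCHEMA schema2; CREATE TABLE schema2.items (id integer);

-- Shared function in public with a hard-coded, unqualified INSERT:
CREATE OR REPLACE FUNCTION public.add_item(p_id integer)
RETURNS void AS $$
BEGIN
    INSERT INTO items (id) VALUES (p_id);
END;
$$ LANGUAGE plpgsql;

SET search_path = schema1, public;
SELECT add_item(1);                 -- row goes to schema1.items

SET search_path = schema2, public;  -- the "store change"
SELECT add_item(2);                 -- as reported: in the same session the
                                    -- cached plan still targets schema1.items
```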
I don't think it can be categorized as a bug! This is happening
because all the DML queries in the function are prepared upon
first execution, and the plans store the unique identifiers
(OIDs) of the objects, not their names. When search_path
changes, the function's plan cache is not flushed, so the cached
plans keep operating on the original objects.
I see two possibilities:
i) Flush function cache (only the query plans, if possible) when
changing search_path.
ii) Give users the ability to flush the function cache at will.
I don't think (ii) will have much backing, but (i) does make some
sense.
Best regards,
--
EnterpriseDB http://www.enterprisedb.com
Mail sent from my BlackLaptop device
Thanks for your reply. I've been digging through the list archive, and
I think EXECUTE is the best workaround, certainly better than
restarting the connection, re-creating the function, or restarting the
server (!!). By the way, this flushing-the-function-cache business
seems to be an almost esoteric topic: I wasn't able to find anything
clear about it, unless you meant it more as a consequence than as an
action in itself.
On the other hand, perhaps this problem could have been avoided by
creating the same function in every schema. That way the cached query
plans in each copy would always point at the right tables. Am I right?
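If that's the idea, it might be sketched like this (names invented): each store schema carries its own copy of the function, so each copy's cached plans can only ever reference that schema's tables, and search_path picks which copy an unqualified call resolves to.

```sql
-- One copy per schema, schema-qualified internally:
CREATE OR REPLACE FUNCTION schema1.add_item(p_id integer)
RETURNS void AS $$
BEGIN
    INSERT INTO schema1.items (id) VALUES (p_id);
END;
$$ LANGUAGE plpgsql;

-- With SET search_path = schema1, public; an unqualified
-- SELECT add_item(1); resolves to schema1's copy.
```

The trade-off is maintenance: every change to the function has to be rolled out to every schema.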
Cheers.
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general