Stephen Frost <sfr...@snowman.net> writes:
> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>> However, it could still come annoyingly
>> close to "any DDL invalidates all cached plans", at least for apps that
>> keep most of their objects in one schema.  Not entirely sure it's worth
>> the implementation hassle and extra sinval traffic.

> I'm really on the fence about this myself.  I can certainly see value in
> doing the invalidations to provide an easy way for individuals to do
> database updates which include DDL changes without downtime or even
> having to pause all concurrent activity (consider a create-table,
> rename-old-table, rename-new-into-place, drop-old-table approach).  That said,
> that use-case may end up being vanishingly small due to the planned
> queries themselves (or plpgsql functions) needing to be updated anyway.
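
For concreteness, the swap pattern being described is roughly the
following (table names hypothetical; foo is the existing table):

    CREATE TABLE foo_new (LIKE foo INCLUDING ALL);
    -- ... apply the desired schema changes to foo_new, copy data over ...
    ALTER TABLE foo RENAME TO foo_old;
    ALTER TABLE foo_new RENAME TO foo;
    DROP TABLE foo_old;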

Even more to the point, in most scenarios like that the inval on the
object previously referenced in the query would be enough to force
replan.  AFAICS it's only the interpose-something-new-earlier-in-the-
search-path case that is problematic, and that's got to be a corner
case (or even an application bug) for most people.
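
To illustrate, suppose the application had done something like (names
hypothetical)

    PREPARE get_foo AS SELECT * FROM foo WHERE id = $1;

before the swap above.  Renaming foo away and then dropping it
invalidates the cached plan, because the plan references that specific
relation, so the next EXECUTE get_foo replans and picks up whatever
"foo" now resolves to; no extra invalidation machinery is needed for
that.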

I guess one example where it might happen routinely is if you like to
create temp tables that mask regular tables, and then reuse prepared
queries that originally referenced the regular tables with the
expectation that they now reference the temp tables ... but that doesn't
seem like great programming practice from here.
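
Something along these lines, say (hypothetical names and columns):

    CREATE TABLE customers (id int, name text);        -- the regular table
    PREPARE get_cust AS SELECT * FROM customers WHERE id = $1;
    CREATE TEMP TABLE customers (id int, name text);    -- masks the regular one
    EXECUTE get_cust(42);    -- caller expects the temp table to be consulted

Nothing the cached plan references has changed there; a new relation has
simply appeared ahead of the original in the effective search path
(pg_temp is searched first by default), which is exactly the problematic
case.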

I'm thinking that the main argument for trying to do this is so that we
could say "plan caching is transparent", full stop, with no caveats or
corner cases.  But removing those caveats is going to cost a fair
amount, and it looks like that cost will be wasted for most usage
patterns.

                        regards, tom lane

