On Sat, Apr 24, 2010 at 12:11 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> "Greg Sabino Mullane" <g...@turnstep.com> writes:
>>> surprised to find my clone unaffected?  If it modifies both, how do we
>>> avoid complete havoc if the original has since been modified (perhaps
>>> incompatibly, perhaps not) by some other backend doing its own ALTER
>>> TABLE?
>
>> Since this is such a thorny problem, and this is a temporary table, why
>> not just disallow ALTER completely for the first pass?
>
> Usually the way we approach these kinds of problems is that we want
> to see some plausible outline for how they might be fixed before we
> move forward with the base feature.  IOW, I wouldn't object to not
> having ALTER in the first release, but if we have no idea how to do
> ALTER at all I'd be too worried that we were painting ourselves into
> a corner.
>
> Or maybe you can make a case that there's no need to allow ALTER at
> all, ever.  But surely DROP needs to be possible, and that seems to
> already introduce some of the same issues.

I had the same thought as GSM this morning.  More specifically, it
seems to me that the problematic cases are precisely those in which
you might feel an urge to touch somebody else's local buffers, so I
think we should disallow, approximately, those ALTER TABLE cases which
require a full-table rewrite.  I don't see the problem with DROP.
Under the proposed design, it's approximately equivalent to dropping a
table that someone else has truncated.  You just wait for the
necessary lock and then do it.

At least AIUI, the use case for this feature is that you want to avoid
creating "the same" temporary table over and over again.  The schema
is fixed and doesn't change much, but you're creating it lots and lots
of times in lots and lots of different backends, leading to both
management and performance difficulties.  If you want to be able to
change the schema frequently or in a backend-local way, use the
existing temporary table feature.
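
To make that use case concrete, the pattern the feature would replace
looks roughly like this (a sketch; the GLOBAL TEMPORARY syntax follows
the SQL standard and is hypothetical for PostgreSQL at this point):

```sql
-- Today: every backend re-creates "the same" table, generating
-- pg_class/pg_attribute churn in each session.
CREATE TEMPORARY TABLE scratch (id int, payload text);

-- Proposed: the catalog entries are created once and shared; only
-- the storage is backend-private.
CREATE GLOBAL TEMPORARY TABLE scratch (id int, payload text);
```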

Now, there is ONE problem with DROP, which is that you might orphan
some heaps.  Of course, that can also happen due to a backend crash.
Currently, autovacuum arranges to drop any orphaned temp tables that
have passed the wraparound threshold, but even if we were happy with
waiting 2 billion transactions to get things cleaned up, the mechanism
can't work here because it relies on being able to examine the
pg_class row and determine which backend owns it, and where the
storage is located.
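
For reference, the reason the existing mechanism works is that a
regular temp table's pg_class row places it in a per-backend schema,
so ownership can be inferred from catalog contents alone.  Something
like this (a sketch, not the actual autovacuum code):

```sql
-- Relations in per-backend temp schemas; the schema name (pg_temp_N)
-- identifies the owning backend.  A backend-private relfilenode for a
-- global temp table would have no such catalog trail to follow.
SELECT c.oid, c.relname, n.nspname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname LIKE 'pg_temp_%';
```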

We could possibly set things up so that a running backend will notice
if a global temporary table for which it's created a private
relfilenode gets dropped, and blow away the backing file.  But that
doesn't protect against crashes, so I think we're going to need some
other garbage collection mechanism, either instead of or in addition
to asking backends to clean up after themselves.  I'm not quite sure
what the design of that should look like, though.

...Robert

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers