On Thu, Aug 07, 2008 at 01:30:27PM +0100, Gregory Stark wrote:
> "Simon Riggs" <[EMAIL PROTECTED]> writes:
>
> > Currently, we calculate a single OldestXmin across all snapshots on the
> > assumption that any transaction might access any table.
> >
> > I propose creating "Visibility Groups" that *explicitly* limit the
> > ability of a transaction to access data outside its visibility group(s).
> > By default, visibility_groups would be NULL, implying potential access
> > to all tables.
> >
> > Once set, any attempt to lock an object outside of a transaction's
> > defined visibility_groups will result in an error:
> >   ERROR attempt to lock table outside of visibility group(s): foo
> >   HINT you need to set a different value for visibility_groups
> > A transaction can only ever reduce or restrict its visibility_groups, it
> > cannot reset or add visibility groups.
>
> Hm, so backing up a bit from the specific proposed interface, the key here is
> being able to explicitly mark which tables your transaction will need in the
> future?
>
> Is it always just a handful of heavily updated tables that you want to
> protect? In that case we could have a lock type which means "I'll never need
> to lock this object". Then a session could issue "LOCK TABLE foo IN
> INACCESSIBLE MODE" or something like that. That requires people to hack up
> their pg_dump or replication script though which might be awkward.
>
> Perhaps the way to do that would be to preemptively take locks on all the
> objects that you'll need, then have a command to indicate you won't need any
> further objects beyond those.
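For concreteness, here is a rough sketch of what a session might look like
under each interface. All of this syntax is hypothetical: visibility_groups,
INACCESSIBLE MODE, and the "no further locks" command exist only as proposals
in this thread, and the table and group names are made up.

    -- Interface 1: Simon's visibility_groups setting. The transaction
    -- declares which group(s) of tables it may touch; touching anything
    -- outside the group errors out instead of silently holding back
    -- OldestXmin for unrelated tables.
    BEGIN;
    SET LOCAL visibility_groups = 'queue_group';    -- hypothetical setting; can only be narrowed
    UPDATE queue SET consumed = true WHERE id = 1;  -- ok: queue is in queue_group
    SELECT count(*) FROM big_archive;               -- ERROR attempt to lock table outside of
                                                    --   visibility group(s): big_archive
    COMMIT;

    -- Interface 2a: Greg's lock-based variant, marking the hot table as
    -- off-limits so vacuum on it can ignore this long-running transaction.
    BEGIN;
    LOCK TABLE queue IN INACCESSIBLE MODE;          -- hypothetical lock mode: "I will never touch queue"
    -- ... long-running reporting queries on other tables ...
    COMMIT;

    -- Interface 2b: pre-lock everything the transaction will need, then
    -- declare the set closed (the closing command is left unspecified in
    -- the thread).
    BEGIN;
    LOCK TABLE reports, big_archive IN ACCESS SHARE MODE;
    -- <some command meaning "no further objects will be locked">
    -- ... long-running work on reports and big_archive ...
    COMMIT;

Either way the effect is the same: the backend states up front which tables it
will and will not touch, so vacuum on the excluded tables need not honour that
backend's snapshot when computing OldestXmin.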
+1

-dg

--
David Gould   [EMAIL PROTECTED]   510 536 1443   510 282 0869
If simplicity worked, the world would be overrun with insects.