On Tue, Jul 7, 2009 at 6:22 PM, Tom Lane<t...@sss.pgh.pa.us> wrote:
> This seems a bit pointless.  There is certainly not any use case for a
> constraint without an enforcement mechanism (or at least none the PG
> community is likely to consider legitimate ;-)).  And it's not very
> realistic to suppose that you'd check a constraint by doing a seqscan
> every time.  Therefore there has to be an index underlying the
> constraint somehow.
I'm not entirely convinced that running a full scan to enforce
constraints is necessarily such a crazy idea. It may well be the most
efficient approach after a major bulk load. And consider a read-only
database where the only purpose of the constraint is to inform the
optimizer that it can trust the property to hold. That said, this seems
like an orthogonal issue to me.

> Jeff's complaint about total order is not an
> argument against having an index, it's just pointing out that btree is
> not the only possible type of index.  It's perfectly legitimate to
> imagine using a hash index to enforce uniqueness, for example.  If hash
> indexes had better performance we'd probably already have been looking
> for a way to do that, and wanting some outside-the-AM mechanism for it
> so we didn't have to duplicate code from btree.

I'm a bit at a loss as to why we need this extra data structure,
though. The duplicated-code issue seems to me to be largely one of code
structure. If we hoisted the heap-value rechecking code out of the
btree AM then the hash AM could reuse it just fine.

Both the hash and btree AMs would have to implement some kind of
"insert-unique-key" operation which would hold some kind of lock
preventing duplicate unique keys from being inserted, but both btree
and hash could implement that efficiently by locking one page or one
hash value.

GIST would need something like this "store the key value or tid in
shared memory" mechanism. But that could be implemented as an external
facility which GIST then made use of -- just the way every part of the
system makes use of other parts. It doesn't mean we have to take
"prevent concurrent unique inserts" away from the AM, which knows best
how to handle it.

--
greg
http://mit.edu/~gsstark/resume.pdf

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers