Currently there are three mechanisms for assigning OIDs to system objects during initdb:
1. Manual assignment of OIDs in the include/catalog/*.h files.  (We
   need to do this for objects that are cross-referenced in other DATA
   entries, or that we have or might want #define macros for.  So,
   though it's a PITA to do manual assignment of pg_proc and
   pg_operator OIDs, I don't really foresee getting rid of it.)

2. Automatic assignment of OIDs by genbki.sh during preparation of the
   postgres.bki file.  This is triggered by an explicit "OID = 0" in a
   DATA entry, and the OID range 10000-16383 is reserved for the
   purpose.  I was a bit surprised earlier today to realize that this
   mechanism has been unused since 7.2.

3. Automatic assignment of an OID by heap_insert when inserting a row
   with no OID into a table that has OIDs.  This happens e.g. when
   creating an index's pg_class row.  Since the OID counter starts at
   16384 (BootstrapObjectIdData), all such OIDs are above 16k.

It strikes me that mechanism #2 is redundant and may as well be
removed.  I made pg_cast use it earlier today, but am thinking I
should revert that change.

What we should do instead is start the OID counter at 10000, and then
boost it up to 16k at the completion of initdb.  Currently,
GetNewObjectId() has hardwired logic to prevent generation of OIDs
less than 16k, but we could modify that code so that the limit is 10k
during bootstrap or standalone operation, and 16k in normal multiuser
operation.  This would have the benefit that the wraparound skip would
really manage to skip over every OID assigned during initdb ---
currently there are several hundred OIDs just above 16k that could
conflict right after a wraparound.
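To be concrete, the GetNewObjectId() change I have in mind is roughly
the sketch below.  This is only an illustration, not a patch: the
constant name InitdbObjectIdData and the under_postmaster flag are
made up here (presumably the real test would be IsUnderPostmaster or
some such), and the real function of course works against the shared
OID counter rather than a file-local static.

    #include <stdbool.h>

    typedef unsigned int Oid;            /* stand-in for the real Oid typedef */

    #define BootstrapObjectIdData 16384  /* current start of the OID counter */
    #define InitdbObjectIdData    10000  /* proposed start during initdb (made-up name) */

    /* stand-in for however we detect normal multiuser operation */
    static bool under_postmaster = false;

    static Oid NextObjectId = InitdbObjectIdData;

    static Oid
    GetNewObjectId_sketch(void)
    {
        /* floor is 10k during bootstrap/standalone, 16k in multiuser mode */
        Oid     floorOid = under_postmaster ? (Oid) BootstrapObjectIdData
                                            : (Oid) InitdbObjectIdData;

        if (NextObjectId < floorOid)
            NextObjectId = floorOid;    /* also where a wraparound skips to */

        return NextObjectId++;
    }

With something like that in place, "boost it up to 16k at the
completion of initdb" needs no explicit action: the first request made
in multiuser mode simply sees the higher floor.

Comments, better ideas?

			regards, tom lane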