So I'm finally wrapping my head around this new code. There's something here that surprised me; perhaps I'm misreading it, or perhaps I shouldn't be surprised, I'm not sure.
Is it true that the shared memory allocation contains the hash table entry and body of every object in every database? I had assumed I would find some kind of LRU cache that loads data from disk on demand. But afaict it loads everything at startup and never reads from disk again; the on-disk copy exists purely to recover state after a restart.

On the one hand, the rest of Postgres is designed on the assumption that the number of tables and other database objects is limited only by disk space: the catalogs live in relational storage and are read through the buffer cache. On the other hand, it's true that the syscaches don't expire entries either (though I think the assumption there is that no single backend touches very many objects).

It seems like, if we really believe the total number of database objects is limited to scales that fit in RAM, there would be a much simpler overall database design: store the catalog tables in plain in-memory data structures and map them all at startup, without doing all the work Postgres does to make relational storage scale.
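For what it's worth, the kind of on-demand design I was expecting is roughly the following (a toy sketch, not Postgres code; the class name, capacity, and loader callback are all made up for illustration): entries are read from disk only on first access, and the least recently used entry is evicted once the cache is full, so resident size is bounded regardless of how many objects exist on disk.

```python
from collections import OrderedDict

class CatalogLRU:
    """Toy sketch of an on-demand catalog cache: load on first access,
    evict the least recently used entry when over capacity."""

    def __init__(self, capacity, load_from_disk):
        self.capacity = capacity
        self.load_from_disk = load_from_disk  # hypothetical loader callback
        self.cache = OrderedDict()            # insertion order tracks recency
        self.disk_reads = 0                   # count of cache misses

    def get(self, oid):
        if oid in self.cache:
            self.cache.move_to_end(oid)       # hit: mark as most recently used
            return self.cache[oid]
        self.disk_reads += 1                  # miss: go to "disk"
        entry = self.load_from_disk(oid)
        self.cache[oid] = entry
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used
        return entry
```

The point of the sketch is just the contrast: with this shape, memory use is capped by `capacity`, whereas the load-everything-at-startup design ties shared memory directly to the total number of objects across all databases.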