On Fri, Jan 13, 2017 at 8:58 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Michael Paquier <michael.paqu...@gmail.com> writes:
>> Have there ever been discussions about having catcache entries in a
>> shared memory area? This is not so much a performance question; I am
>> just wondering about the concept, and I cannot find references to such
>> discussions.
>
> I'm sure it's been discussed.  Offhand I remember the following issues:
>
> * A shared cache would create locking and contention overhead.
>
> * A shared cache would have a very hard size limit, at least if it's
> in SysV-style shared memory (perhaps DSM would let us relax that).
>
> * Transactions that are doing DDL have a requirement for the catcache
> to reflect changes that they've made locally but not yet committed,
> so said changes mustn't be visible globally.
>
> You could possibly get around the third point with a local catcache that's
> searched before the shared one, but tuning that to be performant sounds
> like a mess.  Also, I'm not sure how such a structure could cope with
> uncommitted deletions: delete A -> remove A from local catcache, but not
> the shared one -> search for A in local catcache -> not found -> search
> for A in shared catcache -> found -> oops.
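
To make that last hazard concrete: the local cache would have to
remember the deletion explicitly instead of just dropping the entry, so
that a locally deleted row shadows the shared copy.  A toy sketch of
that tombstone idea (every type and name below is made up for
illustration; this is not the real catcache code):

#include <stdio.h>
#include <string.h>

/*
 * Illustrative sketch only: a two-level lookup in which the
 * backend-local level can hold "tombstones" for rows the current
 * transaction has deleted but not yet committed.  The types, helpers,
 * and toy fixed-size tables are all hypothetical.
 */
typedef unsigned int Oid;

typedef struct
{
    Oid         key;
    int         valid;          /* slot in use? */
    int         deleted;        /* tombstone: deleted locally, uncommitted */
    char        payload[64];    /* stand-in for the cached catalog tuple */
} CacheEntry;

#define NSLOTS 16
static CacheEntry local_cache[NSLOTS];   /* backend-private level */
static CacheEntry shared_cache[NSLOTS];  /* stand-in for the shared level */

static CacheEntry *
cache_find(CacheEntry *table, Oid key)
{
    for (int i = 0; i < NSLOTS; i++)
        if (table[i].valid && table[i].key == key)
            return &table[i];
    return NULL;
}

/*
 * Search the local level first, the shared level second.  A local
 * tombstone makes the lookup stop with "not found", which is exactly
 * what avoids the search-shared -> found -> oops sequence above.
 */
static CacheEntry *
catcache_lookup(Oid key)
{
    CacheEntry *e = cache_find(local_cache, key);

    if (e != NULL)
        return e->deleted ? NULL : e;
    return cache_find(shared_cache, key);
}

int
main(void)
{
    /* Entry A exists in the shared cache... */
    shared_cache[0] = (CacheEntry){ .key = 42, .valid = 1 };
    strcpy(shared_cache[0].payload, "tuple for A");

    /* ...but this transaction has deleted A locally (uncommitted). */
    local_cache[0] = (CacheEntry){ .key = 42, .valid = 1, .deleted = 1 };

    printf("lookup(42): %s\n",
           catcache_lookup(42) ? "found (wrong!)" : "not found (correct)");
    return 0;
}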

I think the first of those concerns is the key one.  If searching the
system catalogs costs $100 and searching the private catcache costs
$1, what's the cost of searching a hypothetical shared catcache?  If
the answer is $80, it's not worth doing.  If the answer is $5, it's
probably still not worth doing.  If the answer is $1.25, then it's
probably worth investing some energy into trying to solve the other
problems you list.  For some users, the memory cost of catcache and
syscache entries multiplied by N backends is a very serious problem,
so it would be nice to have some other options.  But we do so many
syscache lookups that a shared cache won't be viable unless it's
almost as fast as a backend-private cache, or at least that's my
hunch.
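
To put illustrative numbers on the memory point: if each backend's
catcache and syscache hold, say, 10 MB for a catalog-heavy workload,
then 500 backends pay about 5 GB in aggregate, whereas one shared copy
of the same working set would cost roughly 10 MB once.  (Made-up
figures, but that's the shape of the multiplication.)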

I think it would be interesting for somebody to build a prototype here
that ignores all the problems but the first and uses some
straightforward, relatively unoptimized locking strategy for the first
problem.  Then benchmark it.  If the results show that the idea has
legs, then we can try to figure out what a real implementation would
look like.

(One possible approach: use Thomas Munro's DHT stuff to build the shared cache.)
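
To sketch what "straightforward, relatively unoptimized locking" might
look like: one reader/writer lock per hash partition, taken shared for
lookups and exclusive for inserts.  The sketch below uses plain
pthreads and malloc so it stands alone; a real prototype would live in
DSM and use LWLocks or the DHT code, and every name here is
hypothetical:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NPARTITIONS 16
#define NBUCKETS    1024

typedef struct Entry
{
    unsigned int key;
    char         payload[64];   /* stand-in for a cached catalog tuple */
    struct Entry *next;
} Entry;

static Entry *buckets[NBUCKETS];
static pthread_rwlock_t partition_lock[NPARTITIONS];

static void
shcache_init(void)
{
    for (int i = 0; i < NPARTITIONS; i++)
        pthread_rwlock_init(&partition_lock[i], NULL);
}

static unsigned int
partition_for(unsigned int key)
{
    return (key % NBUCKETS) % NPARTITIONS;
}

/* Look up an entry, copying the payload out under a shared lock. */
static int
shcache_lookup(unsigned int key, char *out, size_t outlen)
{
    unsigned int p = partition_for(key);
    int          found = 0;

    pthread_rwlock_rdlock(&partition_lock[p]);
    for (Entry *e = buckets[key % NBUCKETS]; e != NULL; e = e->next)
    {
        if (e->key == key)
        {
            strncpy(out, e->payload, outlen - 1);
            out[outlen - 1] = '\0';
            found = 1;
            break;
        }
    }
    pthread_rwlock_unlock(&partition_lock[p]);
    return found;
}

/* Insert under an exclusive lock (no duplicate check, for brevity). */
static void
shcache_insert(unsigned int key, const char *payload)
{
    unsigned int p = partition_for(key);
    Entry       *e = malloc(sizeof(Entry));    /* error handling omitted */

    e->key = key;
    strncpy(e->payload, payload, sizeof(e->payload) - 1);
    e->payload[sizeof(e->payload) - 1] = '\0';

    pthread_rwlock_wrlock(&partition_lock[p]);
    e->next = buckets[key % NBUCKETS];
    buckets[key % NBUCKETS] = e;
    pthread_rwlock_unlock(&partition_lock[p]);
}

int
main(void)
{
    char buf[64];

    shcache_init();
    shcache_insert(42, "tuple for A");
    printf("lookup(42): %s\n",
           shcache_lookup(42, buf, sizeof(buf)) ? buf : "miss");
    return 0;
}

The benchmark is then many concurrent backends hammering
shcache_lookup() versus the same loop over an unlocked private table --
that's the $1-versus-$1.25 question in the first paragraph.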

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

