On Thu, Jun 30, 2011 at 12:31 AM, Jim Nasby <j...@nasby.net> wrote:
> Would it be reasonable to keep a second-level cache that stores individual 
> XIDs instead of blocks? That would provide protection for XIDs that are 
> extremely common but don't fit well with the pattern of XID ranges 
> that we're caching. I would expect this to happen if you had a transaction 
> that touched a bunch of data (i.e., a bulk load or update) some time ago (so 
> the other XIDs around it are less likely to be interesting) but is not old 
> enough to have been frozen yet. Obviously you couldn't keep too many XIDs in 
> this secondary cache, but if you're just trying to prevent certain 
> pathological cases then hopefully you wouldn't need to keep that many.

Maybe, but I think that's probably still papering over the problem.
I'd really like to find an algorithm that bounds how often we can
flush a page out of the cache to some number of tuples significantly
greater than 100.  The one I suggested yesterday has that property,
for example, although it may have other problems I'm not thinking of.
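For concreteness, the two-level scheme Jim describes might look roughly like
the sketch below: a small primary cache of XID ranges plus a tiny secondary
cache of individual hot XIDs that fall outside the cached ranges. All names,
sizes, and the round-robin eviction policy here are illustrative assumptions,
not anything from an actual patch; real backend code would consult pg_clog on
a miss rather than the caller supplying the status.

```c
#include <stdint.h>

typedef uint32_t TransactionId;   /* XID 0 is invalid, as in PostgreSQL */

#define BLOCK_XIDS   100   /* XIDs covered per cached block (hypothetical) */
#define NBLOCKS        4   /* primary cache: block-granularity entries */
#define NSINGLES       8   /* secondary cache: individual hot XIDs */

/* Primary cache entry: a contiguous range of XIDs with their commit bits. */
typedef struct
{
    TransactionId start;            /* first XID in block; 0 = slot unused */
    uint8_t       bits[BLOCK_XIDS]; /* 1 = committed, 0 = not committed */
} BlockEntry;

typedef struct
{
    BlockEntry    blocks[NBLOCKS];
    TransactionId single_xid[NSINGLES]; /* hot outlier XIDs, kept singly */
    uint8_t       single_bit[NSINGLES];
    int           single_next;          /* round-robin eviction cursor */
} XidCache;

/* Look up an XID: on hit, set *status to its commit bit and return 0;
 * return -1 on a miss in both levels. */
static int
cache_lookup(XidCache *c, TransactionId xid, int *status)
{
    int i;

    for (i = 0; i < NBLOCKS; i++)
    {
        BlockEntry *b = &c->blocks[i];

        if (b->start != 0 && xid >= b->start && xid < b->start + BLOCK_XIDS)
        {
            *status = b->bits[xid - b->start];
            return 0;
        }
    }
    for (i = 0; i < NSINGLES; i++)
    {
        if (c->single_xid[i] == xid)
        {
            *status = c->single_bit[i];
            return 0;
        }
    }
    return -1;
}

/* After resolving a miss (e.g. from pg_clog), remember the lone XID in the
 * secondary cache so one hot outlier doesn't keep evicting whole blocks. */
static void
cache_remember_single(XidCache *c, TransactionId xid, int committed)
{
    c->single_xid[c->single_next] = xid;
    c->single_bit[c->single_next] = (uint8_t) committed;
    c->single_next = (c->single_next + 1) % NSINGLES;
}
```

The point of the second level is exactly the scenario in the quoted mail: a
single old bulk-load XID whose neighbors are uninteresting costs one of the
NSINGLES slots instead of forcing a 100-XID block out of the primary cache.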

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
