On Wed, Dec 21, 2011 at 12:33 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Oh btw, I haven't looked at that code recently, but I have a nasty
> feeling that there are parts of it that assume that the number of
> buffers it is managing is fairly small.  Cranking up the number
> might require more work than just changing the value.
Oh, you mean like the fact that it tries to do strict LRU page replacement? *rolls eyes* We seem to have named the SLRU system after one of its scalability limitations...

I think there probably are some scalability limits to the current implementation, but I also think we could probably increase the current value modestly with something less than a total rewrite. Linearly scanning the slot array won't scale indefinitely, but I think it will scale to more than 8 elements. The performance results I posted previously make it clear that 8 -> 32 is a net win at least on that system.

One fairly low-impact option might be to make the cache less than fully associative - e.g. given N buffers, a page with pageno % 4 == X is only allowed to be in a slot numbered between (N/4)*X and (N/4)*(X+1)-1. That would likely be counterproductive at N = 8 but might be OK at larger values; I've sketched the arithmetic below. We could also switch to using a hash table, but that seems awfully heavyweight.

The real question is how to decide how many buffers to create. You suggested a formula based on shared_buffers, but what would that formula be? I mean, a typical large system is going to have 1,048,576 shared buffers (that's 8 GB at the default 8 kB page size), and it probably needs less than 0.1% of that amount for CLOG buffers. My guess is that there's no real reason to skimp: if you are really tight on memory, you might want to crank this down, but otherwise you may as well just go with whatever we decide the best-performing value is.
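To make the partial-associativity idea concrete, here's a rough standalone sketch of the slot-range arithmetic. None of this is actual slru.c code - the names are made up, and it assumes the slot count divides evenly by the number of sets:

#include <stdio.h>

#define NUM_SETS 4

/*
 * With NUM_SETS = 4, a page may only occupy one of num_slots / 4 slots,
 * so the linear victim/lookup scan shrinks by the same factor.
 */
static void
slot_range_for_page(int pageno, int num_slots, int *first, int *last)
{
	int		per_set = num_slots / NUM_SETS;
	int		set = pageno % NUM_SETS;	/* the X in the text */

	*first = per_set * set;				/* (N/4)*X */
	*last = per_set * (set + 1) - 1;	/* (N/4)*(X+1)-1 */
}

int
main(void)
{
	int		pageno;

	/* With 32 slots, each page is confined to an 8-slot set. */
	for (pageno = 0; pageno < 8; pageno++)
	{
		int		first;
		int		last;

		slot_range_for_page(pageno, 32, &first, &last);
		printf("page %d -> slots %d..%d\n", pageno, first, last);
	}
	return 0;
}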
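And purely as an illustration of the shape a shared_buffers-based formula could take - the divisor here is a placeholder I made up, not a proposal - something like "scale linearly, but clamp between the current 8 and the 32 that won in my tests" would look like this:

int
clog_buffers_from_nbuffers(int nbuffers)
{
	int		n = nbuffers / 512;		/* placeholder divisor, not a proposal */

	if (n < 8)
		n = 8;		/* never below the current hard-coded value */
	if (n > 32)
		n = 32;		/* cap at the best-performing value seen so far */
	return n;
}

At 1,048,576 shared buffers that lands on the cap, which is consistent with my point above: on big systems there's little reason to skimp.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company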