Heikki Linnakangas <heikki.linnakan...@enterprisedb.com> wrote:

>> (2) The predicate lock and lock target initialization code was
>> initially copied and modified from the code for heavyweight
>> locks. The heavyweight lock code adds 10% to the calculated
>> maximum size. So I wound up doing that for
>> PredicateLockTargetHash and PredicateLockHash, but didn't do it
>> for SerializableXidHash. Should I eliminate this from the first
>> two, add it to the third, or leave it alone?
>
> I'm inclined to eliminate it from the first two. Even in
> LockShmemSize(), it seems a bit weird to add a safety margin, the
> sizes of the lock and proclock hashes are just rough estimates
> anyway.

I'm fine with that. Trivial patch attached.

> * You missed that RWConflictPool is sized five times as large as
> SerializableXidHash, and
>
> * The allocation for RWConflictPool elements was wrong, while the
> estimate was correct.
>
> With these changes, the estimated and actual sizes match closely,
> so that actual hash table sizes are 50% of the estimated size as
> expected.
>
> I fixed those bugs

Thanks. Sorry for missing them.

> but this doesn't help with the buildfarm members with limited
> shared memory yet.

Well, if dropping the 10% fudge factor on those two HTABs doesn't
bring it down far enough (which seems unlikely), what do we do? We
could, as I said earlier, bring down the multiplier for the number
of transactions we track in SSI based on the maximum allowed
connections, but I would really want a GUC on it if we do that. We
could bring down the default number of predicate locks per
transaction. We could make the default configuration more stingy
about max_connections when memory is this tight. Other ideas?

I do think that anyone using SSI with a heavy workload will need
something like the current values to see decent performance, so it
would be good if there was some way to do this which would tend to
scale up as they increased something.
Wild idea: make the multiplier equivalent to the bytes of shared
memory divided by 100MB, clamped to a minimum of 2 and a maximum of
10?

-Kevin
*** a/src/backend/storage/lmgr/predicate.c
--- b/src/backend/storage/lmgr/predicate.c
***************
*** 1173,1184 ****
  	size = add_size(size, hash_estimate_size(max_table_size,
  											 sizeof(PREDICATELOCK)));
  
- 	/*
- 	 * Since NPREDICATELOCKTARGETENTS is only an estimate, add 10% safety
- 	 * margin.
- 	 */
- 	size = add_size(size, size / 10);
- 
  	/* transaction list */
  	max_table_size = MaxBackends + max_prepared_xacts;
  	max_table_size *= 10;
--- 1173,1178 ----
-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers