On 14.04.2011 23:02, Tom Lane wrote:
> Heikki Linnakangas <heikki.linnakan...@enterprisedb.com> writes:
>> There's one very low-hanging fruit here, though. I profiled the pgbench
>> case with -M prepared and found that, as in Greg Smith's profile,
>> hash_seq_search pops up quite high in the list. Those calls come from
>> LockReleaseAll(), where we scan the local lock hash to find all locks
>> held. We specify the initial size of the local lock hash table as 128,
>> which is unnecessarily large for small queries like this. Reducing it
>> to 8 slashed the time spent in hash_seq_search().
>>
>> I think we should make that hash table smaller. It won't buy much,
>> somewhere between 1% and 5% in this test case, but it's very easy to do
>> and I don't see much downside; it's a local hash table, so it will grow
>> as needed.
>
> 8 sounds awfully small. Can you even get as far as preparing the
> statements you intend to use without causing that to grow?
I added a debug print to the locking code; the pgbench test case uses
up to 6 locks. It needs those 6 locks at backend startup, for
initializing caches I guess. The queries after that need only 3 locks.
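
For context, the scan in question is just a plain dynahash sequential
scan over the backend-local lock table. From memory it looks roughly
like this (simplified from lock.c, so treat the details as a sketch);
the point is that hash_seq_search() has to walk every bucket, so a
table sized for 128 entries costs about the same to scan whether it
holds 3 locks or 100:

    /* Simplified sketch of the loop in LockReleaseAll() (lock.c). */
    HASH_SEQ_STATUS status;
    LOCALLOCK  *locallock;

    hash_seq_init(&status, LockMethodLocalHash);
    while ((locallock = (LOCALLOCK *) hash_seq_search(&status)) != NULL)
    {
        /*
         * Skip or release this lock depending on the lock method and
         * resource owner being released; details omitted here.
         */
    }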
> I agree that 128 may be larger than necessary, but I don't think we
> should pessimize normal usage to gain a small fraction on trivial
> queries. I'd be happier with something like 16 or 32.
I'll change it to 16.
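
From memory, the table is created in InitLocks() along these lines, so
the change only touches the initial-size argument (take the surrounding
details as approximate):

    /* Sketch of the local lock table setup in InitLocks() (lock.c). */
    MemSet(&info, 0, sizeof(info));
    info.keysize = sizeof(LOCALLOCKTAG);
    info.entrysize = sizeof(LOCALLOCK);
    info.hash = tag_hash;

    LockMethodLocalHash = hash_create("LOCALLOCK hash",
                                      16,    /* was 128 */
                                      &info,
                                      HASH_ELEM | HASH_FUNCTION);

dynahash will still grow the table automatically if a backend ends up
holding more locks than that.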
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com