"Merlin Moncure" <[EMAIL PROTECTED]> writes:
> According to postgresql.conf, using these settings the lock table eats
> 64*260*100 bytes, i.e. less than 2MB.  Well, if it's running my server
> out of shared memory, it's eating much, much more shmem than previously
> thought.

Hmm, the 260 is out of date I think.  I was seeing about 184 bytes/lock
in my tests just now.
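
For reference, redoing that arithmetic with 184 bytes/lock (the 64 and
100 are the max_locks_per_transaction and max_connections values from
your calculation above):

    64 * 184 * 100 = 1,177,600 bytes    -- roughly 1.1MB nominal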

> Also, I was able to acquire around 10k locks before the server borked.
> This is obviously a lot more than 64*100.

Sure, because there's about 100K of deliberate slop in the shared memory
size allocation, and you are probably also testing a scenario where the
buffer and FSM hash tables haven't ramped to full size yet, so the lock
table is able to eat more than the nominal amount of space.
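
To make the slop concrete, the total shmem request is assembled roughly
like this (a paraphrased sketch, not a quote of the actual startup code;
the exact set of calls is an assumption on my part):

    Size size = 0;

    size = add_size(size, BufferShmemSize());
    size = add_size(size, LockShmemSize());
    size = add_size(size, FreeSpaceShmemSize());
    /* ... other subsystems ... */
    size = add_size(size, 100000);      /* deliberate slop, ~100kB */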

> As I see it, this means the user-locks (and perhaps all locks...?)
> eat around 6kB of memory each.

They're allocated in groups of 32, which would work out to close to 6k;
maybe you were measuring the incremental cost of allocating the first one?
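
(32 locks/group * 184 bytes/lock = 5,888 bytes, which matches the ~6k
you measured for the first lock.)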

I did some digging, and as far as I can see the only shared memory
allocations that occur after postmaster startup are for the four shmem
hash tables: buffers, FSM relations, locks, and proclocks.  Of these,
the buffer and FSM hashtables have predetermined maximum sizes.  So
arranging for the space in those tables to be fully preallocated should
prevent any continuing problems from lock table overflow.  I've
committed a fix that does this.  I verified that after running the
server out of shared memory by creating a lot of user locks and then
releasing them, I could still run the regression tests.
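
To make the shape of that fix concrete, here is a rough sketch (mine,
not the committed patch) of preallocating one of those shmem hash
tables, using the buffer lookup table as the example; "size" stands for
the entry count the caller computes, as in InitBufTable(int size):

    static HTAB *SharedBufHash;

    void
    InitBufTable(int size)
    {
        HASHCTL info;

        /* entries map a buffer's BufferTag to its buffer slot id */
        info.keysize = sizeof(BufferTag);
        info.entrysize = sizeof(BufferLookupEnt);
        info.hash = tag_hash;

        /*
         * Passing the same value for init_size and max_size makes
         * ShmemInitHash allocate all the entries up front, so later
         * hash_search(..., HASH_ENTER, ...) calls never need to grab
         * additional shared memory at runtime.
         */
        SharedBufHash = ShmemInitHash("Shared Buffer Lookup Table",
                                      size,    /* init_size: all of it */
                                      size,    /* max_size: same bound */
                                      &info,
                                      HASH_ELEM | HASH_FUNCTION);
    }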

                        regards, tom lane
