Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> AFAIK you'd get hard failures, not slowdowns, if you ran out of lock
>> space entirely;

> Well, if there still is shared memory available, the lock hash can
> continue to grow, but it would slow down according to this comment in
> ShmemInitHash:

Right, but there's not an enormous amount of headroom in shared memory
beyond the intended size of the hash tables.  I'd think that you'd start
seeing hard failures not very far beyond the point at which performance
impacts became visible.  Of course this is all speculation; I quite
agree with varying the table-size parameters to see if it makes a
difference.
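
For anyone following along, the user-visible knob that sizes the shared
lock table is max_locks_per_transaction; a minimal sketch of the
experiment (values below are placeholders, adjust to the setup being
tested):

    -- in postgresql.conf (default is 64; takes effect only after a
    -- server restart, since the lock table is sized at startup):
    --   max_locks_per_transaction = 256
    -- while the benchmark runs, check roughly how many locks are held
    -- across the cluster:
    SELECT count(*) AS locks_held FROM pg_locks;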

Josh, what sort of workload is being tested here --- read-mostly,
write-mostly, a mixture?

> However I was talking to Josh Drake yesterday and he told me that
> pg_dump was spending a significant amount of time in LOCK TABLE when
> there are lots of tables (say 300k).

I wouldn't be too surprised if there are some O(N^2) effects when a single
transaction holds that many locks, because of the linked-list proclock
data structures.  This would not be relevant to Josh's case though.
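
If someone wants to reproduce the pg_dump-ish case, here's a rough
sketch (assumes a server new enough to have DO blocks and format();
the table count and names are placeholders):

    -- create a pile of dummy tables:
    DO $$
    BEGIN
      FOR i IN 1..10000 LOOP
        EXECUTE format('CREATE TABLE IF NOT EXISTS locktest_%s (id int)', i);
      END LOOP;
    END $$;

    -- take an ACCESS SHARE lock on every one of them in a single
    -- transaction, roughly what pg_dump does; with \timing on, watch
    -- whether the per-lock cost climbs as the lock count grows.
    -- Note: with default settings this will likely fail with
    -- "out of shared memory" once the lock table fills up, which is
    -- exactly the hard-failure mode discussed above.
    BEGIN;
    DO $$
    BEGIN
      FOR i IN 1..10000 LOOP
        EXECUTE format('LOCK TABLE locktest_%s IN ACCESS SHARE MODE', i);
      END LOOP;
    END $$;
    COMMIT;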

                        regards, tom lane
