On 3/11/09 3:27 PM, "Kevin Grittner" <kevin.gritt...@wicourts.gov> wrote:

> I'm a lot more interested in what's happening between 60 and 180 than
> over 1000, personally.  If there was a RAID involved, I'd put it down
> to better use of the numerous spindles, but when it's all in RAM it
> makes no sense.

If there is enough lock contention and the common case is a short-lived 
shared lock, it makes perfect sense.  Fewer readers are blocked waiting 
on writers at any given time.  Readers can 'cut' in line ahead of writers 
within a certain scope (only up to the number of shared waiters queued at 
the time a shared lock reaches the head of the queue).  Essentially this 
clumps shared and exclusive locks into larger streaks, and allows for 
higher shared lock throughput.  Exclusive locks may be delayed, but will 
NOT be starved: on the next iteration, the streak of exclusive locks at 
the front of the list is processed before any more shared locks can go.
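
To make the policy concrete, here is a minimal sketch in C.  It is a toy 
FIFO model, not PostgreSQL's actual LWLock code; the Waiter struct and 
wake_waiters() are hypothetical names for illustration only:

#include <stdbool.h>
#include <stdio.h>

typedef struct Waiter
{
    bool exclusive;          /* true = exclusive request, false = shared */
    struct Waiter *next;     /* FIFO: head was enqueued first */
} Waiter;

/*
 * On lock release, wake waiters from the head of the FIFO queue.  An
 * exclusive waiter at the head goes alone; a shared waiter at the head
 * is woken together with every shared waiter behind it, stopping at
 * the first exclusive waiter.  Shared requests that arrive later queue
 * up behind that exclusive waiter, so writers are delayed by at most
 * one streak, never starved.  Returns the new head of the queue.
 */
static Waiter *
wake_waiters(Waiter *head)
{
    if (head == NULL)
        return NULL;

    if (head->exclusive)
    {
        printf("woke 1 exclusive waiter\n");
        return head->next;
    }

    int woken = 0;
    while (head != NULL && !head->exclusive)
    {
        woken++;                /* this shared waiter now runs */
        head = head->next;
    }
    printf("woke %d shared waiters as one streak\n", woken);
    return head;                /* first exclusive waiter, or NULL */
}

int
main(void)
{
    /* Queue S S X S: the two leading shared waiters wake together,
     * then the exclusive waiter, then the trailing shared waiter. */
    Waiter w4 = { false, NULL };
    Waiter w3 = { true, &w4 };
    Waiter w2 = { false, &w3 };
    Waiter w1 = { false, &w2 };

    for (Waiter *head = &w1; head != NULL; )
        head = wake_waiters(head);
    return 0;
}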

This will even help on a single-CPU system if the workload is read 
dominated, lowering read latency while slightly increasing write latency.

If you want to make this fairer, instead of freeing all shared locks, limit 
the count to some number, such as the number of CPU cores.  Perhaps rather than 
wake-up-all-waiters=true, the parameter could be an integer representing how many 
shared locks can be freed at once when an exclusive lock is encountered.
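
A sketch of that bounded variant, reusing the Waiter struct from the sketch 
above (max_shared is the hypothetical tunable, e.g. the number of cores; 
again this is illustrative, not actual PostgreSQL code):

#include <limits.h>              /* INT_MAX */

/*
 * Bounded variant: when an exclusive waiter is queued, wake at most
 * max_shared shared waiters per release; when no exclusive waiter is
 * present there is no one to be unfair to, so the whole shared streak
 * is woken as before.
 */
static Waiter *
wake_waiters_bounded(Waiter *head, int max_shared)
{
    if (head == NULL)
        return NULL;

    if (head->exclusive)
        return head->next;       /* exclusive waiters still go alone */

    /* Is any exclusive waiter queued behind the shared streak? */
    bool exclusive_waiting = false;
    for (Waiter *w = head; w != NULL; w = w->next)
    {
        if (w->exclusive)
        {
            exclusive_waiting = true;
            break;
        }
    }

    int limit = exclusive_waiting ? max_shared : INT_MAX;
    int woken = 0;
    while (head != NULL && !head->exclusive && woken < limit)
    {
        woken++;
        head = head->next;
    }
    return head;   /* leftover shared waiters, or the exclusive one */
}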


