On 3/11/09 7:47 PM, "Tom Lane" <t...@sss.pgh.pa.us> wrote:

Scott Carey <sc...@richrelevance.com> writes:
> If there is enough lock contention and a common lock case is a short lived 
> shared lock, it makes perfect sense.  Fewer readers are blocked waiting 
> on writers at any given time.  Readers can 'cut' in line ahead of writers 
> within a certain scope (only up to the number waiting at the time a shared 
> lock is at the head of the queue).  Essentially this clumps up shared and 
> exclusive locks into larger streaks, and allows for higher shared lock 
> throughput.
> Exclusive locks may be delayed, but will NOT be starved, since on the next 
> iteration, a streak of exclusive locks will occur first in the list and they 
> will all process before any more shared locks can go.

That's a lot of sunny assertions without any shred of evidence behind
them...

The current LWLock behavior was arrived at over multiple iterations and
is not lightly to be toyed with IMHO.  Especially not on the basis of
one benchmark that does not reflect mainstream environments.

Note that I'm not saying "no".  I'm saying that I want a lot more
evidence *before* we go to the trouble of making this configurable
and asking users to test it.

                        regards, tom lane


All I'm adding is that the result makes some sense to me, based on my experience 
with CPU/RAM-bound scalability tuning.  It had been suggested that the test itself 
didn't even make sense.

I was wrong in my understanding of what the change did.  If it wakes ALL 
waiters up, there is an indeterminate amount of time a lock may wait.
However, if instead of waking up all of them it only wakes up the shared 
readers and leaves all the exclusive waiters in the queue, there is no 
possibility of starvation, since those exclusives will be at the front of the 
line after the wake-up batch.

As for this being a use case that is important:

*  SSDs will drive the percentage of use cases that are not I/O bound up significantly 
over the next couple of years.  All postgres installations with less than about 
100GB of data TODAY could avoid being I/O bound with current SSD technology, 
and those with less than 2TB can do so as well, though at higher expense or with 
less proven technology such as the ZFS L2ARC flash cache.
*  Intel will have a mainstream CPU that handles 12 threads (6 cores, 2 threads 
each) by the end of this year.  Mainstream two-CPU systems will have access to 
24 threads and be common in 2010.  Higher-end 4-CPU boxes will have access to 48 
hardware threads.  Hardware thread count is only going up.  This is the future.
