On 3/12/09 7:57 AM, "Jignesh K. Shah" <j.k.s...@sun.com> wrote:



On 03/11/09 22:01, Scott Carey wrote:
Re: [PERFORM] Proposal of tunable fix for scalability of 8.4
On 3/11/09 3:27 PM, "Kevin Grittner" <kevin.gritt...@wicourts.gov> wrote:


If you want to make this more fair, instead of freeing all shared locks, limit 
the count to some number, such as the number of CPU cores.  Perhaps rather than 
wake-up-all-waiters=true, the parameter can be an integer representing how many 
shared locks can be freed at once if an exclusive lock is encountered.



Well, I am waking up not just shared waiters but both shared and exclusive ones. However, I like your idea of waking up the next N waiters, where N matches the number of CPUs available. In my case that is 64, so this works well: with all 64 waiters running, one of them will be able to take the next lock immediately, so there are no wasted cycles where nobody holds the lock. That is often what happens when you wake up only one waiter and hope that that process is already on a CPU (in my case there are 64 processes) and able to acquire the lock. The probability of acquiring the lock within the next few cycles is much lower for a single waiter than when 64 such processes get the chance and then fight it out based on who is already on a CPU. That way the period where nobody holds the lock is reduced, which helps cut out the "artifact" idle time on the system.

In that case, there can be some starvation of writers.  If all the shared waiters are woken up but the exclusive waiters are left at the front of the queue, no starvation can occur.
That was a bit of confusion on my part with respect to what the change was doing.  Thanks for the clarification.
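
One possible way to express that fairness constraint in the sketch above (not necessarily exactly what the patch does, and reusing the hypothetical Waiter/WaitMode types from the earlier sketch) is to wake only the shared waiters at the head of the queue and stop at the first exclusive waiter, so that it, and everything queued behind it, keeps its place at the front and writers are not starved:

/*
 * Fairness variant: wake shared waiters up to the limit, but stop as soon
 * as an exclusive waiter reaches the head of the queue, leaving it (and
 * everything queued behind it) in place so writers keep their position.
 * Hypothetical code, reusing the Waiter/WaitMode types from the sketch above.
 */
static Waiter *
wake_shared_until_exclusive(Waiter *queue_head, int limit)
{
    int woken = 0;

    while (queue_head != NULL &&
           woken < limit &&
           queue_head->mode == WAIT_SHARED)
    {
        printf("waking pid %d (shared)\n", queue_head->pid);
        queue_head = queue_head->next;
        woken++;
    }
    return queue_head;          /* an exclusive waiter, if any, stays at the head */
}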



As soon as I get more "cycles" I will try variations of it, but it would help if others could try it out in their own environments to see if it helps their instances.


-Jignesh

