On 03/12/09 11:13, Kevin Grittner wrote:
Scott Carey <sc...@richrelevance.com> wrote:
"Kevin Grittner" <kevin.gritt...@wicourts.gov> wrote:

I'm a lot more interested in what's happening between 60 and 180
than over 1000, personally.  If there was a RAID involved, I'd put
it down to better use of the numerous spindles, but when it's all
in RAM it makes no sense.
If there is enough lock contention and a common lock case is a short-lived
shared lock, it makes perfect sense.  Fewer readers are blocked waiting on
writers at any given time.  Readers can 'cut' in line ahead of writers
within a certain scope (only up to the number waiting at the time a shared
lock is at the head of the queue).  Essentially this clumps shared and
exclusive locks into larger streaks, and allows for higher shared lock
throughput.
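
To make that policy concrete, here is a minimal Python sketch (purely
illustrative, not PostgreSQL's actual lwlock code) of a release routine
that wakes every shared waiter queued at the moment a shared request
reaches the head of the queue, while an exclusive waiter at the head
runs alone:

from collections import deque

SHARED, EXCLUSIVE = "shared", "exclusive"

def wake_on_release(wait_queue: deque) -> list:
    """Return the waiters to wake when the lock is released."""
    if not wait_queue:
        return []
    if wait_queue[0][1] == EXCLUSIVE:
        # An exclusive waiter at the head runs alone.
        return [wait_queue.popleft()]
    # Shared waiter at the head: grant every shared request waiting
    # *right now*, even those queued behind a writer.  Readers that
    # arrive later must queue again, so writers are not starved forever.
    woken = [w for w in wait_queue if w[1] == SHARED]
    remaining = deque(w for w in wait_queue if w[1] == EXCLUSIVE)
    wait_queue.clear()
    wait_queue.extend(remaining)
    return woken

# Example: queue is S1, E1, S2, S3.  The first release wakes the whole
# reader batch; the writer runs alone on the next release.
q = deque([("S1", SHARED), ("E1", EXCLUSIVE),
           ("S2", SHARED), ("S3", SHARED)])
print([w[0] for w in wake_on_release(q)])  # ['S1', 'S2', 'S3']
print([w[0] for w in wake_on_release(q)])  # ['E1']

Under this policy readers are granted in batches, which is exactly the
"streak" effect described above.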
You misunderstood me. I wasn't addressing the effects of his change,
but rather the fact that his test shows a linear improvement in TPS up
to 1000 connections for a 64 thread machine which is dealing entirely
with RAM -- no disk access.  Where's the bottleneck that allows this
to happen?  Without understanding that, his results are meaningless.
-Kevin


Every user has a think time (200 ms) to wait before doing the next transaction, which results in idle time and theoretically allows other users to run in between.
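
To see why that allows near-linear TPS scaling on a 64-thread box, here
is a back-of-the-envelope calculation in Python (the 200 ms think time
is from the test setup; the ~1 ms transaction service time is an
assumed figure for illustration):

think = 0.200    # seconds each client idles between transactions
service = 0.001  # assumed CPU time per transaction

for clients in (60, 180, 1000):
    tps = clients / (think + service)                   # offered load
    busy = clients * service / (think + service)        # avg threads busy
    print(f"{clients:5d} clients: ~{tps:6.0f} TPS, ~{busy:4.1f} threads busy")

At these numbers, 1000 clients keep only about 5 of the 64 hardware
threads busy on average, so throughput is limited by client think time
rather than by the server, and TPS keeps growing linearly with the
client count until the CPUs saturate (roughly 12,800 clients under
these assumptions).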

-Jignesh
