Anders Morken <[EMAIL PROTECTED]> writes:

> The major benefit of reworking the lock manager is scalability - right
> now it's pretty much single threaded, which is fine for a lot of the
> scenarios Derby was written to perform in. However, for "benchmark
> compliance" in a world of multicore desktops and servers, it may be
> beneficial to work on Derby's scalability. 

For those who are not convinced about this, I suggest running the test
client (select or join) from DERBY-1961 with all data in memory on a
multi-CPU/multi-core machine, using 1 and 2 clients. Something is
serializing the load (and introducing a penalty in the process), and so
far everything points to the lock manager.
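
For illustration only, here is a minimal sketch of the shape of that
load (this is not the DERBY-1961 client; the database name, table T and
row count are just placeholders). Each client thread opens its own
connection and repeats a primary-key select for a fixed wall-clock
interval, so running it with 1 and then 2 threads on a multi-core box
shows whether throughput scales:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Random;
import java.util.concurrent.atomic.AtomicLong;

// Minimal multi-client select-load sketch, NOT the DERBY-1961 client.
// Assumes the Derby embedded driver is on the classpath and a database
// "testdb" with a table T(ID INT PRIMARY KEY, VAL VARCHAR(100)) holding
// ROWS rows already exists and fits in the page cache.
public class TpsProbe {
    static final String URL = "jdbc:derby:testdb"; // placeholder DB name
    static final int ROWS = 10000;                 // placeholder row count
    static final long RUN_MILLIS = 100 * 1000L;    // 100 second run

    public static void main(String[] args) throws Exception {
        int clients = args.length > 0 ? Integer.parseInt(args[0]) : 1;
        AtomicLong txCount = new AtomicLong();
        long deadline = System.currentTimeMillis() + RUN_MILLIS;

        Thread[] workers = new Thread[clients];
        for (int i = 0; i < clients; i++) {
            workers[i] = new Thread(() -> {
                try (Connection c = DriverManager.getConnection(URL);
                     PreparedStatement ps = c.prepareStatement(
                             "SELECT VAL FROM T WHERE ID = ?")) {
                    Random rnd = new Random();
                    while (System.currentTimeMillis() < deadline) {
                        ps.setInt(1, rnd.nextInt(ROWS));
                        try (ResultSet rs = ps.executeQuery()) {
                            while (rs.next()) { /* consume row */ }
                        }
                        txCount.incrementAndGet();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }
        System.out.printf("%d client(s): %.3f TPS%n",
                clients, txCount.get() * 1000.0 / RUN_MILLIS);
    }
}

Run it with the client count as the only argument, e.g. "java TpsProbe 2".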

I just ran this experiment on a 2-CPU AMD Opteron:

The average of three 100-second runs with 1 client was
23423.783 TPS.

The average of three 100-second runs with 2 clients was
20282.193 TPS.

That is -13.411 % ((20282.193 - 23423.783) / 23423.783). Not only does it
not scale, it scales negatively.

>
> To maintain Derby's small footprint while expanding it to handle bigger
> iron is one of the challenges here, and we don't want Derby to become
> big fat O-hm-cle, do we? =)

True, but multi-CPU/multi-core hardware is spreading to smaller devices as
well. It used to be a server-room thing, but now you find it in desktops
and even laptops. The segment where you can afford to ignore this (even
with a microscopic footprint) is quickly disappearing.

-- 
dt
