On 11/9/10 4:10 AM, Karsten Thygesen wrote:

The cluster consists of 4 identical nodes - all dedicated to Riak use only
- no other zones or tasks running. We use Riak EE 0.13. The servers are HP
servers with 4 x 146 GB 10K RPM SAS disks. There is a memory cache on the RAID
controller, used for both reads and writes, and the pool is built using
Solaris 10u9 ZFS in the following layout:

  pool: pool01
 state: ONLINE
 scrub: scrub completed after 0h0m with 0 errors on Tue Oct 26 21:25:05 2010
config:

        NAME          STATE     READ WRITE CKSUM
        pool01        ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c0t0d0s7  ONLINE       0     0     0
            c0t1d0s7  ONLINE       0     0     0
          mirror-1    ONLINE       0     0     0
            c0t2d0    ONLINE       0     0     0
            c0t3d0    ONLINE       0     0     0

errors: No known data errors

Metrics during load show about 5% CPU load and about 10% I/O load (iostat
reports 30 IOPS, and the disks should be able to handle 300 IOPS each). So
basically, the servers are unloaded....

One question remains - we use ZFS with the default recordsize of 128 KB - what
is the optimal recordsize with Bitcask?
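(For reference, ZFS recordsize is tuned per dataset and only affects newly written files. A minimal sketch, assuming a hypothetical dataset name pool01/riak for the Bitcask data directory - the actual dataset name is not given in this thread:

```shell
# Set a smaller recordsize on the (assumed) dataset holding Bitcask data;
# recordsize changes apply only to blocks written after the change, so
# existing data must be rewritten to pick it up.
zfs set recordsize=64K pool01/riak

# Verify the property took effect on the dataset.
zfs get recordsize pool01/riak
```

Whether 64K, or something smaller, is optimal here is exactly the open question - it depends on the typical Bitcask object size.)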

But I believe we should look elsewhere for the problem - the hardware is not
significantly loaded, so I suspect we have a faulty data model or usage
pattern...?

How much RAM do you have for filesystem buffering? The difference between a first and a repeated query sounds to me like normal disk head motion when you have to go to disk for all the data. Disk benchmarks tend to use big sequential files, whereas database lookups will seek all over the place for things not in cache.
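The effect above can be seen with plain dd - this is a rough illustration (not Riak-specific), using a temporary file; on a warm cache both passes will be fast, so the gap only shows when the data is not already buffered:

```shell
#!/bin/bash
# Create a 16 MB scratch file to read back two ways.
FILE=$(mktemp)
dd if=/dev/zero of="$FILE" bs=1M count=16 2>/dev/null

# Sequential: one streaming pass, the pattern disk benchmarks measure.
echo "sequential read:"
time dd if="$FILE" of=/dev/null bs=128k 2>/dev/null

# Random: 200 reads of one 4 KB block each at random offsets
# (16 MB / 4 KB = 4096 blocks), the pattern database lookups produce.
echo "random 4K reads:"
time for i in $(seq 1 200); do
  dd if="$FILE" of=/dev/null bs=4k count=1 skip=$((RANDOM % 4096)) 2>/dev/null
done

rm -f "$FILE"
```

On cold spinning disks the random pass pays a head seek per read, which is why a repeated (cached) query is so much faster than the first one.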

--
   Les Mikesell
    [email protected]

_______________________________________________
riak-users mailing list
[email protected]
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
