Hi

Each server has 18GB of memory and 8GB of swap, which is not in use at all...
So there should be plenty of memory. Riak itself is using around 5-6GB of
memory, so there is plenty to spare...
Output from top:
last pid: 11849; load avg: 0.02, 0.02, 0.02; up 14+01:47:26 15:37:53
50 processes: 48 sleeping, 1 running, 1 on cpu
CPU states: 99.8% idle, 0.1% user, 0.2% kernel, 0.0% iowait, 0.0% swap
Memory: 18G phys mem, 697M free mem, 8192M swap, 8192M free swap
PID USERNAME LWP PRI NICE SIZE RES STATE TIME CPU COMMAND
8989 riak 77 59 0 5941M 5923M sleep 26:55 0.49% beam.smp
3724 root 37 59 0 57M 42M sleep 48:53 0.19% splunkd
3735 root 13 59 0 24M 20M sleep 18:05 0.08% python2.6
Karsten
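On the recordsize question quoted below, a smaller value can be tried per
dataset. This is only a sketch - the dataset name pool01/riak is an assumption
(substitute whatever dataset holds the Riak data directory), and the right
value depends on the object sizes bitcask actually writes:

```shell
# Check the current recordsize on the pool (ZFS default is 128K)
zfs get recordsize pool01

# A smaller recordsize can be closer to bitcask's typical write sizes;
# 8K-16K is a common starting point for database-style workloads.
# NOTE: pool01/riak is a hypothetical dataset name for illustration.
zfs set recordsize=16K pool01/riak
```

Note that recordsize only affects newly written blocks, so existing bitcask
files keep the old blocksize until they are rewritten (e.g. during a merge).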
On Nov 9, 2010, at 14:42 , Les Mikesell wrote:
> On 11/9/10 4:10 AM, Karsten Thygesen wrote:
>>
>> The cluster consists of 4 exactly similar nodes - all dedicated to riak use
>> only
>> - no other zones or tasks going on. We use Riak-EE 0.13. The servers are HP
>> servers with 4 x 146GB 10K RPM SAS disks. There is a memory cache on the RAID
>> controller, used during both reads and writes, and the RAID is built
>> using Solaris-10u9 ZFS in a setup like this:
>>
>> pool: pool01
>> state: ONLINE
>> scrub: scrub completed after 0h0m with 0 errors on Tue Oct 26 21:25:05 2010
>> config:
>>
>> NAME STATE READ WRITE CKSUM
>> pool01 ONLINE 0 0 0
>> mirror-0 ONLINE 0 0 0
>> c0t0d0s7 ONLINE 0 0 0
>> c0t1d0s7 ONLINE 0 0 0
>> mirror-1 ONLINE 0 0 0
>> c0t2d0 ONLINE 0 0 0
>> c0t3d0 ONLINE 0 0 0
>>
>> errors: No known data errors
>>
>> Metrics during load show 5% CPU load and about 10% IO load (iostat reports
>> 30 iops, and the disks should be able to handle 300 iops each). So basically,
>> the servers are unloaded...
>>
>> One question remains - we use ZFS with the default blocksize of 128KB - what
>> is the optimal blocksize with bitcask?
>>
>> But I believe that we should look somewhere else for the challenge - the
>> hardware is not significantly loaded, so I suspect that we have a faulty
>> data model or usage pattern...?
>
> How much RAM do you have for filesystem buffering? The difference between a
> first and a repeated query sounds to me like normal disk head motion when you
> have to go to disk for all the data. Disk benchmarks tend to use big files,
> whereas database lookups will seek all over the place for things not in cache.
>
> --
> Les Mikesell
> [email protected]
>
> _______________________________________________
> riak-users mailing list
> [email protected]
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
