Hi there,

just wanted to share some benchmark experience with RBD caching, which I have
just (partially) implemented. These are not nicely formatted results, just raw
numbers to show the difference.

*** INFRASTRUCTURE:
- 3 hosts, each with 12 x 4TB drives; 6 journals on one SSD, 6 journals on a
second SSD
- 10 GbE NICs on both compute and storage nodes
- 10 GbE dedicated replication/private Ceph network
- Libvirt 1.2.3
- Qemu 0.12.1.2
- qemu drive-cache=none (set by CloudStack)
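
If you want to double-check which cache mode Libvirt actually hands to Qemu for a
given guest, something like this works (the domain name "myguest" is just a
placeholder):

virsh dumpxml myguest | grep -i 'cache='
# expect a driver line similar to: <driver name='qemu' type='raw' cache='none'/>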

*** CEPH SETTINGS (ceph.conf on KVM hosts):
[client]
rbd cache = true
rbd cache size = 67108864 # (64MB)
rbd cache max dirty = 50331648 # (48MB)
rbd cache target dirty = 33554432 # (32MB)
rbd cache max dirty age = 2
rbd cache writethrough until flush = true # For safety reasons
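
If you also put an admin socket into the [client] section (not shown above; the
path is just an example, e.g. "admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok"),
you can ask the running librbd client on the KVM host what it actually picked up:

# replace the .asok path with the one created for your qemu process
ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.67890.asok config show | grep rbd_cache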


*** NUMBERS (CentOS 6.6 VM - fio/sysbench tools):

Random write, 16k I/O size (yes, I know this is not "true" IOPS, since IOPS is
usually quoted at a 4k block size, but it is good enough for comparison - see the
example fio command below the numbers):

Random write, NO RBD cache: 170 IOPS !!!!
Random write, RBD cache 64MB:  6500 IOPS.

Sequential writes improved from ~40 MB/s to 800 MB/s.
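
For reference, the 16k random write test can be reproduced inside the guest with a
fio command along these lines (file name, size, runtime and queue depth are
illustrative, not necessarily what I used):

fio --name=randwrite-16k --filename=/root/fio-test --size=4G \
    --rw=randwrite --bs=16k --ioengine=libaio --direct=1 \
    --iodepth=32 --runtime=60 --time_based --group_reporting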

I will check latency as well and let you know.

*** IMPORTANT:
Make sure you have the latest VirtIO drivers (see the quick check below), because:
- CentOS 6.6, kernel 2.6.32.x: *RBD caching does not work* (the 2.6.32 VirtIO
driver does not send flushes properly)
- CentOS 6.6, kernel 3.10 from ELRepo: *RBD caching works fine* (the newer VirtIO
driver sends flushes correctly)
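
A quick sanity check inside the guest for which kernel and VirtIO block driver you
are actually running (assuming virtio_blk is built as a module, as on stock CentOS 6):

uname -r
modinfo virtio_blk | head -n 5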

I don't know about Windows yet, but I will give you "before" and "after" numbers
very soon.

Best,
-- 

Andrija Panić