Hi Mike,

On 21.04.2016 at 15:20, Mike Miller wrote:
> Hi Udo,
>
> thanks. Just to make sure, I further increased the readahead:
>
> $ sudo blockdev --getra /dev/rbd0
> 1048576
>
> $ cat /sys/block/rbd0/queue/read_ahead_kb
> 524288
>
> No difference here. The first one is in sectors (512 bytes), the second one in KB.
Oops, sorry! My fault. Sectors vs. KB makes sense...
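
For the record, the two values above do agree; a quick shell check (nothing assumed beyond standard shell arithmetic):

# blockdev --getra reports 512-byte sectors, read_ahead_kb reports KB:
# 1048576 sectors * 512 bytes / 1024 = 524288 KB, i.e. the same 512 MiB
$ echo $(( 1048576 * 512 / 1024 ))
524288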

> The second read (after dropping the cache) is somewhat faster (10%-20%), but not much.
That's very strange! It looks like there is room for tuning. Do your OSD nodes have enough RAM? Are they very busy?
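
If you want to check that quickly, something like this works (just a sketch; osd1 and osd2 are placeholders for your OSD node hostnames):

for host in osd1 osd2; do
    echo "== $host =="
    ssh "$host" free -m          # enough RAM free/cached?
    ssh "$host" iostat -x 5 2    # are the OSD disks saturated?
done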

If I do a single-threaded read on a test VM, I get the following results (very small test cluster: two nodes with a 10 GbE NIC and one node with a 1 GbE NIC):
support@upgrade-test:~/fio$ dd if=fiojo.0.0 of=/dev/null bs=1M
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 62.0267 s, 69.2 MB/s

### as root: "echo 3 > /proc/sys/vm/drop_caches", and the same on the VM host

support@upgrade-test:~/fio$ dd if=fiojo.0.0 of=/dev/null bs=1M
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 30.0987 s, 143 MB/s

# this is due to cached data on the OSD nodes
# with caches cleared on all nodes (VM, VM host, OSD nodes)
# I get the same value as on the first run:

support@upgrade-test:~/fio$ dd if=fiojo.0.0 of=/dev/null bs=1M
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 61.8995 s, 69.4 MB/s
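
To make runs like this repeatable, the cache clearing can be scripted (a sketch; vm-host, osd1 and osd2 are placeholder hostnames, and I assume ssh access as root):

#!/bin/sh
# clear the page cache in the VM and on all involved nodes,
# then repeat the single-threaded read
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches
for host in vm-host osd1 osd2; do
    ssh root@"$host" 'sync; echo 3 > /proc/sys/vm/drop_caches'
done
dd if=fiojo.0.0 of=/dev/null bs=1M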

I don't know why this shouldn't be the same with krbd.
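
To take the VM layer out of the picture, you could run the same single-threaded read against a mapped krbd device directly (a sketch; testpool/testimage is a placeholder for one of your images):

$ sudo rbd map testpool/testimage    # maps to e.g. /dev/rbd0
$ echo 3 | sudo tee /proc/sys/vm/drop_caches
$ dd if=/dev/rbd0 of=/dev/null bs=1M count=4096
$ sudo rbd unmap /dev/rbd0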


Udo
