Interesting -

I had tried that on the RBD volumes only (via blockdev --setra, though I think the effect is the same as tweaking read_ahead_kb directly), but it made no difference. Unfortunately I didn't think to adjust it on the OSDs too - I'll try that out.
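
For the record, what I ran was along these lines (device names are examples, and note the unit difference - read_ahead_kb is in KiB, while blockdev --setra takes 512-byte sectors, so 8192 sectors = 4096 KiB):

   # on an OSD node, for each data disk:
   echo 4096 > /sys/block/sdb/queue/read_ahead_kb

   # on the client, for a mapped RBD device:
   blockdev --setra 8192 /dev/rbd0

The 4096 figure is just a value to experiment with, not a recommendation.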

One thing that seemed to make a big difference last time I played with this was specifying:

io=native

in the libvirt xml for a QEMU/KVM guest (up to a 50% improvement for sequential reads). However, I did see one case where it seemed to hurt random performance, but that was on atypical hardware (a workstation with WD Black disks and Crucial M4 SSDs), so it could be due to that.
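
For anyone wanting to try it, the disk stanza in the domain xml ends up looking something like this (a sketch only - the pool/image name is a placeholder, and I believe QEMU wants direct I/O, i.e. cache='none' or 'directsync', for io='native' to actually take effect):

   <disk type='network' device='disk'>
     <driver name='qemu' type='raw' cache='none' io='native'/>
     <source protocol='rbd' name='rbd/myimage'/>
     <target dev='vda' bus='virtio'/>
   </disk>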

Regards

Mark
On 05/04/14 01:06, Mark Nelson wrote:
One thing you can try is tuning read_ahead_kb on the OSDs and/or the RBD
volume(s) and see if that helps.  On some hardware we've seen this
improve sequential read performance dramatically.

Another big culprit that can really hurt sequential reads is
fragmentation.  BTRFS is particularly bad with RBD. Small writes to the
objects that store the blocks behind the scenes end up being written to
new areas of the disk due to COW.  XFS probably won't fragment as badly,
but we've sometimes seen lots of extents for some files as well.  It's
something to keep an eye on if you have a big sequential read workload.
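
One way to check (the path here is illustrative - the actual object filenames under the OSD data dir will differ): filefrag reports extent counts on most filesystems, and xfs_bmap gives per-extent detail on XFS:

   filefrag /var/lib/ceph/osd/ceph-0/current/0.6_head/<object>
   xfs_bmap -v /var/lib/ceph/osd/ceph-0/current/0.6_head/<object>

A large extent count on the object files behind an RBD image is a good sign that fragmentation is what's hurting you.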

Mark

On 04/04/2014 05:00 AM, John-Paul Robinson wrote:
I've seen this "fast everything except sequential reads" asymmetry in my
own simple dd tests on RBD images but haven't really understood the
cause.

Could you clarify what's going on that would cause that kind of
asymmetry? I've been assuming that once I get around to turning
on/tuning read caching on my underlying OSD nodes the situation will
improve but haven't dug into that yet.

~jpr

On 04/04/2014 04:46 AM, Mark Kirkwood wrote:
However you may see some asymmetry in this performance - fast random and
sequential writes, fast random reads but considerably slower sequential
reads. The RBD cache may help here, but I need to investigate this
further (and also some of the more fiddly settings to do with virtio
disk config).
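
For reference, the client-side cache is enabled in ceph.conf with something like the following (the sizes shown are, I believe, the defaults rather than tuned recommendations):

   [client]
   rbd cache = true
   rbd cache size = 33554432        # 32 MB
   rbd cache max dirty = 25165824   # 24 MB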

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
