On 01/09/2016 02:34 AM, Wukongming wrote:
Hi, all

    I noticed the sentence "Running GFS or OCFS on top of RBD will not work with 
caching enabled." at http://docs.ceph.com/docs/master/rbd/rbd-config-ref/. Why is 
that? Is there any way to enable the rbd cache when OCFS2 is running on top? I ask 
because a fio test with the qemu-kvm cache setting cache=none gives terrible results, 
with IOPS below 100 ( 
fio --numjobs=16 --iodepth=16 --ioengine=libaio --runtime=300 --direct=1 
--group_reporting --filename=/dev/sdd --name=mytest --rw=randwrite --bs=8k --size=8G)
, while a comparable non-Ceph cluster reaches 1000+ IOPS. Could disabling the 
rbd cache be the cause of this problem?

OCFS, GFS, and similar cluster filesystems assume they are all talking to the same physical disk. RBD caching is client side, so if you access the same rbd image from more than one client, each client has its own independent cache and those caches are not coherent. That means something like OCFS2 could cache data in one rbd client, overwrite it from another rbd client, and still see the original, stale data in the first client. With a regular physical disk this cannot happen, since its
cache is part of the device and shared by everyone using it.
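If you really do need OCFS2 on top of an rbd image mapped by several clients, the only safe setup (per the docs page you linked) is to keep the client-side cache off on every node. A minimal sketch of the relevant ceph.conf fragment on each client host (the section and option names are the standard librbd ones; the layout is just an illustration):

    [client]
        # librbd client-side cache; leave it disabled whenever a cluster
        # fs (OCFS2/GFS2) sits on top of a shared rbd image
        rbd cache = false

Losing the cache is part of why your randwrite numbers drop, but it is the price of coherency once more than one client writes to the same image.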

Your diagram shows you are using qemu - in that case, why not use the rbd support built into
qemu, and avoid a shared fs entirely?
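A rough sketch of what that looks like on the qemu command line (the pool/image names and the id are placeholders; cache=writeback is what turns the librbd cache on for that drive, cache=none turns it off):

    qemu-system-x86_64 \
        -m 2048 \
        -drive format=raw,if=virtio,cache=writeback,file=rbd:rbd/myimage:id=admin

With a single qemu client per image there is only one cache, so the coherency problem above goes away and you can leave caching enabled.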

Josh

