On Fri, Dec 9, 2016 at 9:42 PM, Gregory Farnum <gfar...@redhat.com> wrote:
> On Fri, Dec 9, 2016 at 6:58 AM, plataleas <platal...@gmail.com> wrote:
>> Hi all
>>
>> We enabled CephFS on our Ceph Cluster consisting of:
>> - 3 Monitor servers
>> - 2 Metadata servers
>> - 24 OSDs (3 OSDs per server)
>> - Spinning disks, OSD Journal is on SSD
>> - Public and Cluster Network separated, all 1GB
>> - Release: Jewel 10.2.3
>>
>> With CephFS we reach roughly 1/3 of the write performance of RBD. There have
>> been other discussions about RBD outperforming CephFS on this mailing list,
>> but it would be interesting to have more figures on the topic.
>>
>> Writes on CephFS:
>>
>> # dd if=/dev/zero of=/data_cephfs/testfile.dd bs=50M count=1 oflag=direct
>> 1+0 records in
>> 1+0 records out
>> 52428800 bytes (52 MB) copied, 1.40136 s, 37.4 MB/s
>>
>> # dd if=/dev/zero of=/data_cephfs/testfile.dd bs=500M count=1 oflag=direct
>> 1+0 records in
>> 1+0 records out
>> 524288000 bytes (524 MB) copied, 13.9494 s, 37.6 MB/s
>>
>> # dd if=/dev/zero of=/data_cephfs/testfile.dd bs=1000M count=1 oflag=direct
>> 1+0 records in
>> 1+0 records out
>> 1048576000 bytes (1.0 GB) copied, 27.7233 s, 37.8 MB/s
>>
>> Writes on RBD
>>
>> # dd if=/dev/zero of=/data_rbd/testfile.dd bs=50M count=1 oflag=direct
>> 1+0 records in
>> 1+0 records out
>> 52428800 bytes (52 MB) copied, 0.558617 s, 93.9 MB/s
>>
>> # dd if=/dev/zero of=/data_rbd/testfile.dd bs=500M count=1 oflag=direct
>> 1+0 records in
>> 1+0 records out
>> 524288000 bytes (524 MB) copied, 3.70657 s, 141 MB/s
>>
>> # dd if=/dev/zero of=/data_rbd/testfile.dd bs=1000M count=1 oflag=direct
>> 1+0 records in
>> 1+0 records out
>> 1048576000 bytes (1.0 GB) copied, 7.75926 s, 135 MB/s
>>
>> Are these measurements reproducible by others? Thanks for sharing your
>> experience!
>
> IIRC, the interfaces in use mean these tests are doing very different
> things despite the identical flags. Direct I/O on RBD still goes through
> the RBD cache, but on CephFS it goes straight to the OSDs (if you're
> using the kernel client; if you're on ceph-fuse the flags might get
> dropped at the kernel/FUSE boundary).

A small clarification: if you are using the rbd kernel client, all I/O
(including direct I/O) goes straight to the OSDs.  krbd block devices
are advertised as "write through".
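
You can check that on the client with something like this (rbd0 is just
an example device name, and the queue/write_cache attribute is only
exposed on reasonably recent kernels):

# cat /sys/block/rbd0/queue/write_cache
write through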

Only librbd makes use of the rbd cache.
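
If the /data_rbd test is actually going through librbd (a VM disk or
rbd-nbd, for example) rather than a krbd mapping, you could take the
cache out of the picture with something along these lines in ceph.conf
on the client node, and then reopen the image:

[client]
rbd cache = false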

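As for reproducing the numbers: a comparison that doesn't depend on how
each client handles O_DIRECT would be a buffered write that dd flushes
at the end, something like this (same paths as in the original test,
purely illustrative):

# dd if=/dev/zero of=/data_cephfs/testfile.dd bs=1000M count=1 conv=fdatasync
# dd if=/dev/zero of=/data_rbd/testfile.dd bs=1000M count=1 conv=fdatasync

conv=fdatasync makes dd call fdatasync() before it reports the rate, so
the time to flush dirty pages out to the OSDs is included on both mounts.
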
Thanks,

                Ilya