On Mon, Aug 13, 2018 at 10:01 AM Emmanuel Lacour <elac...@easter-eggs.com>
wrote:

> On 13/08/2018 at 15:55, Jason Dillaman wrote:
>
>
>
>>>
>>> so the problem seems to be located on the "rbd" side ...
>>>
>
>> That's a pretty big apples-to-oranges comparison (4KiB random IO to 4MiB
>> full-object IO). With your RBD workload, the OSDs will be seeking after
>> each 4KiB read, but with your RADOS bench workload, it's reading a full
>> 4MiB object before seeking.
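For reference, the two workloads being compared would look something like
this (a sketch only; the pool and image names below are placeholders, not
the ones actually used in this thread):

    # RADOS bench: writes, then reads back, whole 4MiB objects (the default op size)
    rados bench -p testpool 60 write --no-cleanup
    rados bench -p testpool 60 rand

    # fio via the rbd engine: 4KiB random reads against an RBD image
    fio --name=rbd-randread --ioengine=rbd --clientname=admin \
        --pool=testpool --rbdname=testimage \
        --rw=randread --bs=4k --iodepth=16 --runtime=60 --time_based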
>
>
> Yes, you're right, but if we compare cluster to cluster: on the new
> cluster, rados bench is 2 times faster, while rbd fio is 7 times slower.
>
> That's why I suppose rbd is the problem here, but I really do not
> understand how to fix it. I looked at 3 old Hammer clusters and two new
> Luminous/Bluestore clusters, and those results are consistent. I do not
> think Ceph would have made Bluestore the default over Filestore in
> Luminous if random reads were 7 times slower ;)
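One way to make the two benchmarks directly comparable is to force rados
bench into a small, seek-heavy workload like the fio test. A sketch, again
with a placeholder pool name:

    # write 4KiB objects first so there is something to read back
    rados bench -p testpool 60 write -b 4096 --no-cleanup
    # then benchmark random reads of those 4KiB objects
    rados bench -p testpool 60 rand

If the new cluster's rados bench numbers also drop sharply at 4KiB, the
slowdown is in the OSD/backend path rather than in librbd itself.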
>

For such a small benchmark (2 GiB), I wouldn't be surprised if you are
simply seeing the Filestore-backed OSDs hitting the page cache for the
reads, whereas the Bluestore-backed OSDs need to actually hit the disk. Are
the two clusters similar in terms of the number of HDD-backed OSDs?
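To rule the page cache in or out, you could drop it on every OSD host
between runs and re-test; a sketch (run as root on each OSD host):

    # flush dirty pages, then drop the page cache, dentries and inodes
    sync
    echo 3 > /proc/sys/vm/drop_caches

If the Filestore cluster's read numbers fall back toward the Bluestore
cluster's after this, the earlier results were mostly cache hits (Bluestore
bypasses the page cache and relies on its own, much smaller, in-process
cache).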


>
> (BTW: thanks for helping me, Jason :) ).
>
>

-- 
Jason
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
