Looks like you’ve considered the essential points for BlueStore OSDs, yep.
:)
My concern would just be the surprisingly large block.db requirement for
rgw workloads that has been brought up on this list (300+ GB per OSD, I
believe someone measured or worked out?).
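To put rough numbers on that: the 24 HDD + 4 SSD layout is from Dan's mail below, and the 300 GB figure is the one reported on-list; everything else here is just back-of-the-envelope arithmetic, not a recommendation:

```python
# Back-of-the-envelope sizing sketch; the 24/4/40 numbers come from the
# thread, the ~300 GB per-OSD figure is the one reported on-list.
hdds_per_host = 24
ssds_per_host = 4
db_partition_gb = 40

# 24 HDD OSDs sharing 4 SSDs means 6 block.db partitions per SSD,
# so 40 GB partitions consume ~240 GB of each SSD.
db_partitions_per_ssd = hdds_per_host // ssds_per_host
ssd_gb_consumed = db_partitions_per_ssd * db_partition_gb

# If an rgw-heavy OSD really needs 300+ GB of block.db, a 40 GB
# partition falls short by a wide margin and RocksDB will spill
# the remainder onto the slow (HDD) device.
reported_db_need_gb = 300
shortfall_per_osd_gb = reported_db_need_gb - db_partition_gb

print(f"{db_partitions_per_ssd} db partitions per SSD, "
      f"{ssd_gb_consumed} GB used per SSD, "
      f"{shortfall_per_osd_gb} GB per-OSD shortfall")
```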
-Greg

On Tue, Nov 20, 2018 at 1:35 AM Dan van der Ster <d...@vanderster.com> wrote:

> Hi ceph-users,
>
> Most of our servers have 24 hdds plus 4 ssds.
> Any experience how these should be configured to get the best rgw
> performance?
>
> We have two options:
>    1) All osds the same, with data on the hdd and block.db on a 40GB
> ssd partition
>    2) Two osd device types: hdd-only for the rgw data pool and
> ssd-only for bucket index pool
>
> But all of the bucket index data is in omap, right?
> And all of the omap is stored in RocksDB, right?
>
> After reading the recent threads about bluefs slow_used_bytes, I had
> the thought that as long as we have a large enough block.db, then
> slow_used_bytes will be 0 and all of the bucket indexes will be on
> ssd-only, regardless of option (1) or (2) above.
>
> Any thoughts?
>
> Thanks!
>
> Dan
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
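For what it's worth, you can watch for exactly that spillover on a live OSD via the bluefs counters in `ceph daemon osd.<id> perf dump`. A minimal sketch of reading them (the JSON below is a trimmed, illustrative sample, not real output):

```python
import json

# Trimmed, illustrative sample of `ceph daemon osd.<id> perf dump`
# output; a real dump contains many more sections and counters.
perf_dump = json.loads("""
{
  "bluefs": {
    "db_total_bytes": 42949672960,
    "db_used_bytes": 21474836480,
    "slow_used_bytes": 0
  }
}
""")

bluefs = perf_dump["bluefs"]
spilled = bluefs["slow_used_bytes"]

# slow_used_bytes == 0 means all of RocksDB (including the bucket
# index omap) still fits on the SSD-backed block.db.
if spilled == 0:
    print("no spillover: RocksDB fully on the fast device")
else:
    print(f"spillover: {spilled} bytes of RocksDB on the slow device")
```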