I tried something similar before: a set of 30GB OSDs carved out of the
extra space on my SSDs for the CephFS metadata pool, and my entire
cluster locked up about 3 weeks later. Some metadata operation filled
several of the 30GB OSDs to 100%, and all IO in the cluster was
blocked. I recovered with some trickery: deleting one copy of a few PGs
on each full OSD, always keeping at least 2 copies of every PG, which
freed enough space to backfill the pool back onto my HDDs and restore
cluster functionality. I would say that trying to use that leftover
space is definitely not worth it.
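
If you do go down that road anyway, at the very least keep an eye on
utilization of those small OSDs so you get warned well before the full
ratio. A rough sketch of what I mean (Python; assumes the ceph CLI is
available on the host, and the "nodes"/"utilization" field names in
ceph osd df --format json are from memory, so check them on your
release):

#!/usr/bin/env python3
"""Warn when any OSD gets close to full.

Minimal sketch -- assumes the ceph CLI is available on this host and
that "ceph osd df --format json" reports per-OSD "name" and
"utilization" fields (verify the field names on your version).
"""
import json
import subprocess

WARN_PCT = 75.0  # arbitrary threshold, well below the default full ratio


def check_osd_utilization(warn_pct=WARN_PCT):
    out = subprocess.check_output(
        ["ceph", "osd", "df", "--format", "json"]
    )
    for osd in json.loads(out).get("nodes", []):
        util = float(osd.get("utilization", 0.0))
        if util >= warn_pct:
            print("{} is {:.1f}% full".format(osd["name"], util))


if __name__ == "__main__":
    check_osd_utilization()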

In one of my production clusters I occasionally get a health warning
that an omap object is too large in my buckets.index pool. I could very
easily imagine that stalling the entire cluster if the index pool were
on such small OSDs.
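
When that warning fires, the first thing worth checking is how full the
per-bucket index shards actually are before deciding whether to
reshard. Roughly like this (Python again; assumes radosgw-admin is on
the box, and the "fill_status"/"objects_per_shard" field names in
radosgw-admin bucket limit check --format json are from memory, so
verify against your output):

#!/usr/bin/env python3
"""List buckets whose index shards look over-filled.

Rough sketch -- assumes radosgw-admin is available and that
"radosgw-admin bucket limit check --format json" reports per-bucket
"objects_per_shard" and "fill_status" (verify the field names against
your version's output).
"""
import json
import subprocess


def overfilled_buckets():
    out = subprocess.check_output(
        ["radosgw-admin", "bucket", "limit", "check", "--format", "json"]
    )
    for user in json.loads(out):
        for b in user.get("buckets", []):
            status = b.get("fill_status", "OK")
            if not status.startswith("OK"):
                yield b.get("bucket"), b.get("objects_per_shard"), status


if __name__ == "__main__":
    for name, per_shard, status in overfilled_buckets():
        print("{}: {} objects/shard ({})".format(name, per_shard, status))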

On Tue, Oct 22, 2019, 6:55 PM Frank R <frankaritc...@gmail.com> wrote:

> Hi all,
>
> I have 40 nvme drives with about 20G free space each.
>
> Would creating a 10GB partition/lvm on each of the nvmes for an rgw index
> pool be a bad idea?
>
> RGW has about 5 million objects
>
> I don't think space will be an issue, but I am worried about the 10G size;
> is it just too small for a bluestore OSD?
>
> thx
> Frank
> _______________________________________________
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
