That problem seems to have cleared up.  We are in the middle of a massive
rebalancing effort for a 700 OSD, 10PB cluster that is wildly out of whack
(because it got too full), and we occasionally see strange numbers reported.
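In case it helps anyone else hitting the same warning, checking the fullness
state boils down to something like the following (the 0.92 is only an
illustrative value, not a recommendation):

    # per-OSD utilization, reweight and PG counts
    ceph osd df tree

    # cluster-wide fullness thresholds
    ceph osd dump | grep -E 'full_ratio|backfillfull_ratio|nearfull_ratio'

    # which OSDs are currently flagged nearfull/backfillfull/full
    ceph health detail

    # as a temporary measure during a heavy rebalance the threshold can be
    # raised slightly, e.g.:
    # ceph osd set-backfillfull-ratio 0.92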


________________________________
From: Eugen Block <ebl...@nde.ag>
Sent: Thursday, August 25, 2022 2:56 PM
To: Wyll Ingersoll <wyllys.ingers...@keepertech.com>
Cc: ceph-users@ceph.io <ceph-users@ceph.io>
Subject: Re: [ceph-users] backfillfull osd - but it is only at 68% capacity

Hi,

I’ve seen this many times in older clusters, mostly Nautilus (can’t
say much about Octopus or later). Apparently the root cause hasn’t
been fixed yet, but it should resolve after the recovery has finished.
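(For what it's worth, something like

    ceph health detail | grep -i backfillfull

should show the warning clearing on its own once backfill/recovery has
completed; no manual intervention should be needed.)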

Zitat von Wyll Ingersoll <wyllys.ingers...@keepertech.com>:

> My cluster (ceph pacific) is complaining about one of the OSD being
> backfillfull:
>
> [WRN] OSD_BACKFILLFULL: 1 backfillfull osd(s)
>
>     osd.31 is backfill full
>
> backfillfull ratios:
>
> full_ratio 0.95
>
> backfillfull_ratio 0.9
>
> nearfull_ratio 0.85
>
> ceph osd df shows:
>
>  ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE   VAR   PGS  STATUS
>  31  hdd    5.55899  1.00000   5.6 TiB  3.8 TiB  3.7 TiB  411 MiB  6.7 GiB  1.8 TiB  68.13  0.92   83  up
>
> So, why does the cluster think that osd.31 is backfillfull if it's
> only at 68% capacity?
>



_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
