Can you please send the output of "ceph osd tree"?
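
In a healthy layout the new hosts should appear under the intended CRUSH
bucket with their OSDs up and carrying nonzero weights, along these lines
(illustrative output only; host and OSD names here are made up):

 ID   CLASS  WEIGHT    TYPE NAME            STATUS  REWEIGHT  PRI-AFF
 -1         72.78000  root default
-15         18.19000      host new-node-1
 40   hdd    0.90970          osd.40           up   1.00000  1.00000
 41   hdd    0.90970          osd.41           up   1.00000  1.00000

If the new OSDs show a CRUSH WEIGHT of 0 or a REWEIGHT of 0, CRUSH will not
place any data on them no matter how full the rest of the cluster gets.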

Respectfully,

*Wes Dillingham*
w...@wesdillingham.com
LinkedIn <http://www.linkedin.com/in/wesleydillingham>


On Tue, Aug 23, 2022 at 10:53 AM Wyll Ingersoll <
wyllys.ingers...@keepertech.com> wrote:

>
> We have a large cluster with many OSDs that are at their nearfull or
> full ratio limit and are thus having problems rebalancing.
> We added 2 more storage nodes, each with 20 additional drives, to give the
> cluster room to rebalance. However, for the past few days, the new OSDs
> are NOT being used and the cluster remains stuck and is not improving.
>
> The CRUSH map is correct, and the new hosts and OSDs are at the correct
> locations, but they don't seem to be getting used.
>
> Any idea how we can force the full or backfillfull OSDs to start unloading
> their PGs to the newly added ones?
>
> thanks!
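
On forcing the full/backfillfull OSDs to unload: assuming a reasonably
recent release, the usual sequence is roughly the following (a sketch, not
a prescription; pick ratios appropriate for your cluster):

 ceph osd df tree                      # per-OSD utilization; confirm the new OSDs are empty
 ceph osd set-backfillfull-ratio 0.95  # temporarily allow backfill involving very full OSDs
 ceph balancer mode upmap              # remap PGs toward the underused OSDs
 ceph balancer on
 ceph osd set-backfillfull-ratio 0.90  # restore the default once recovery finishes

But the osd tree output should come first; if the new OSDs have the wrong
CRUSH weight or location, none of the above will move data onto them.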