On 8/28/22 17:30, Wyll Ingersoll wrote:
We have a pacific cluster that is overly filled and is having major trouble
recovering. We are desperate for help in improving recovery speed. We have
modified all of the various recovery throttling parameters.
The full_ratio is 0.95 but we have several osds that continue to grow and are appr
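The message doesn't list which throttling parameters were changed; a sketch of the knobs commonly tuned to speed up recovery in this situation (the option names are standard Ceph settings, the values are illustrative only, not the poster's actual configuration):

```shell
# Illustrative values only -- defaults are deliberately conservative.
# Raise backfill/recovery concurrency per OSD:
ceph config set osd osd_max_backfills 4
ceph config set osd osd_recovery_max_active 8
# Remove the artificial sleep between recovery ops:
ceph config set osd osd_recovery_sleep 0
ceph config set osd osd_recovery_sleep_hdd 0
# Check the ratios currently in force on the cluster:
ceph osd dump | grep -E 'full_ratio|backfillfull_ratio|nearfull_ratio'
```

Note that raising these trades client I/O latency for recovery speed, so they are usually reverted once the cluster is healthy again.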
Isn’t rebalancing onto the empty OSDs default behavior?

From: Wyll Ingersoll
Sent: Sunday, August 28, 2022 10:31 AM
To: ceph-users@ceph.io
Subject: [ceph-users] OSDs growing beyond full ratio

> We have a pacific cluster that is overly filled and is having major trouble
> recovering. We are desperate for
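Rebalancing onto empty OSDs is indeed the default once they are up and in with a nonzero CRUSH weight; a quick way to verify they are actually receiving backfill, and to nudge data toward them if not (standard Ceph commands, offered as a sketch rather than the thread's actual diagnosis):

```shell
# Per-OSD utilization and PG counts -- the empty OSDs should show PGs arriving:
ceph osd df tree
# Check whether the automatic balancer is on and doing work:
ceph balancer status
# If utilization is badly skewed, a cautious manual reweight can help.
# Dry-run first, then apply:
ceph osd test-reweight-by-utilization 110
ceph osd reweight-by-utilization 110
```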
Good morning,
we are trying to migrate a Ceph/Nautilus cluster
into kubernetes/rook/pacific [0]. Due to limitations in kubernetes we
probably need to change the cluster network range, which is currently
set to 2a0a:e5c0::/64.
My question to the list: did anyone already go through this?
My assu
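The cluster network range is controlled by the `cluster_network` option, and OSDs only bind it at startup; a hedged sketch of how such a change is typically rolled out on a classic deployment (the new prefix below is purely hypothetical, and a rook takeover may impose its own procedure):

```shell
# Hypothetical replacement range -- substitute the real prefix.
ceph config set global cluster_network 10.42.0.0/16
# Restart OSDs host by host so they pick up the new network,
# e.g. with systemd on a non-containerized deployment:
systemctl restart ceph-osd.target
# Wait for PGs to return to active+clean before moving to the next host:
ceph -s
```

`cluster_network` only carries OSD replication/heartbeat traffic; monitors and clients stay on `public_network`, which would need a separate (and more delicate) change.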