[ceph-users] Re: Rebuilding data resiliency after adding new OSD's stuck for so long at 5%

2023-09-13 Thread ceph
Hi, as long as you see changes and recovery activity, it is making progress, so I guess you just have to wait... What kind of disks did you add? HTH, Mehmet. On 12 September 2023 20:37:56 CEST, sharathvuthp...@gmail.com wrote: >We have a user-provisioned instance (Bare Metal Installation) of OpenShift
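
For reference, recovery progress can be watched with the standard status commands (a minimal sketch; output and object counts will of course differ per cluster):

  # Overall cluster health plus a recovery/backfill summary
  ceph -s
  # Per-PG state counts and objects still degraded/misplaced
  ceph pg stat
  # Long-running task progress reported by the mgr progress module, if enabled
  ceph progress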

[ceph-users] Re: Rebuilding data resiliency after adding new OSD's stuck for so long at 5%

2023-09-13 Thread sharathvuthpala
Hi, we have HDD disks. Today, after almost 36 hours, Rebuilding Data Resiliency is at 58% and still going. The good thing is that it is no longer stuck at 5%. Does it take this long to complete the resiliency rebuild whenever there is maintenance in the cluster?

[ceph-users] Re: Rebuilding data resiliency after adding new OSD's stuck for so long at 5%

2023-09-13 Thread Sake
Which version do you use? Quincy currently has incorrect values for its new IOPS scheduler; this will be fixed in the next release (hopefully soon). There are workarounds, though; please check the mailing list about this. I'm in a hurry, so I can't point directly to the correct post. Best regards, Sake
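
For readers searching later: the workarounds discussed on the list for the Quincy mClock defaults generally look like the following (a sketch only; check the thread Sake refers to for the values appropriate to your release):

  # Favour recovery/backfill over client I/O while rebalancing (Quincy+ mClock profiles)
  ceph config set osd osd_mclock_profile high_recovery_ops
  # Check which op queue a running OSD is actually using
  ceph config show osd.0 osd_op_queue
  # Some threads instead suggest switching back to the wpq scheduler
  # (requires restarting the OSDs to take effect)
  ceph config set osd osd_op_queue wpq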

[ceph-users] Re: Rebuilding data resiliency after adding new OSD's stuck for so long at 5%

2023-09-14 Thread sharathvuthpala
We are using ceph version 16.2.10-172.el8cp (00a157ecd158911ece116ae43095de793ed9f389) pacific (stable).

[ceph-users] Re: Rebuilding data resiliency after adding new OSD's stuck for so long at 5%

2023-09-17 Thread sharathvuthpala
Hi guys, thanks for your responses. The issue has been resolved. We increased the number of backfill threads from the default value (1) to 5 and noticed some increase in the speed of rebalancing. Anyhow, it took almost three and a half days for the entire rebalancing process, which we believe would no
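
For anyone hitting the same thing on Pacific with the default wpq scheduler, the backfill concurrency change described above is typically applied like this (a sketch; 5 is simply the value used in this thread, and higher values increase the impact on client I/O):

  # Persistently raise backfill concurrency for all OSDs
  ceph config set osd osd_max_backfills 5
  # Push the change to running OSDs without a restart (older injectargs form)
  ceph tell 'osd.*' injectargs '--osd-max-backfills 5'
  # Optionally raise concurrent recovery ops per OSD as well
  ceph config set osd osd_recovery_max_active 5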