Note that this will adjust override reweight values, which will conflict with 
balancer upmaps.  
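
For context, a rough sketch of how to look at both mechanisms before changing anything (command names as in Pacific; <osd-id> is a placeholder, so check against your own cluster first):

    # override reweight values show up in the REWEIGHT column
    ceph osd df tree

    # upmap entries created by the balancer, and its current state
    ceph osd dump | grep pg_upmap_items
    ceph balancer status

    # dry run of the reweight suggestion quoted below; drop "test-" to apply it
    ceph osd test-reweight-by-utilization

    # an override reweight can later be reset per OSD, e.g.
    ceph osd reweight <osd-id> 1.0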

> On Sep 26, 2023, at 3:51 AM, c...@elchaka.de wrote:
> 
> Hi, an idea is to see what
> 
> ceph osd test-reweight-by-utilization
> 
> shows. If it looks useful, you can run the above command without the "test-" prefix.
> 
> Hth
> Mehmet
> 
> On 22 September 2023 at 11:22:39 CEST, b...@sanger.ac.uk wrote:
>> Hi Folks,
>> 
>> We are currently running with one nearfull OSD and 15 nearfull pools. The 
>> most full OSD is about 86% full, while the average is 58% full. However, the 
>> balancer is skipping a pool on which the autoscaler is trying to complete a 
>> pg_num reduction from 131,072 to 32,768 (the default.rgw.buckets.data pool). 
>> The autoscaler has been working on this for the last 20 days: it works 
>> through a list of misplaced objects, but when it gets close to the end, more 
>> objects get added to the list.
>> 
>> This morning I observed the list get down to c. 7,000 misplaced objects with 
>> 2 PGs active+remapped+backfilling; then one PG completed its backfilling and 
>> the list shot up to c. 70,000 misplaced objects with 3 PGs 
>> active+remapped+backfilling.
>> 
>> Has anyone come across this behaviour before? If so, what was your 
>> remediation?
>> 
>> Thanks in advance for sharing.
>> Bruno
>> 
>> Cluster details:
>> 3,068 OSDs when all running, c. 60 per storage node
>> OS: Ubuntu 20.04
>> Ceph: Pacific 16.2.13 from Ubuntu Cloud Archive
>> 
>> Use case:
>> S3 storage and OpenStack backend, all pools three-way replicated
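
Regarding the pg_num reduction described above, a rough sketch of what one could watch while it grinds on (command names as of Pacific; the pool name is taken from the thread, and this is a starting point rather than a recipe):

    # current vs. target PG counts for the pool being shrunk
    ceph osd pool ls detail | grep default.rgw.buckets.data
    ceph osd pool autoscale-status

    # ongoing merge/backfill progress and misplaced object counts
    ceph progress
    ceph pg stat

    # ratio of misplaced objects the mgr allows at a time; it throttles
    # both the balancer and in-flight pg_num changes
    ceph config get mgr target_max_misplaced_ratio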
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
