Hi Stefan,

Thanks for the quick reply. I did some research and got the following output:

~ $ rados list-inconsistent-pg {pool-name1}
[]
~ $ rados list-inconsistent-pg {pool-name2}
[]
~ $ rados list-inconsistent-pg {pool-name3}
[]
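
For completeness, a loop like the following should cover all remaining pools 
in one go (assuming ceph osd pool ls returns every pool name):

~ $ for p in $(ceph osd pool ls); do echo "== $p =="; rados list-inconsistent-pg "$p"; done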

—

~ $ rados list-inconsistent-obj 7.989
{"epoch":3006349,"inconsistents":[]}

~ $ rados list-inconsistent-obj 7.28f
{"epoch":3006337,"inconsistents":[]}

~ $ rados list-inconsistent-obj 7.603
{"epoch":3006329,"inconsistents":[]}


~ $ ceph config dump | grep osd_scrub_auto_repair

No output, so the option is not set in the centralized config database.

~ $ ceph daemon mon.ceph4 config get osd_scrub_auto_repair
{
    "osd_scrub_auto_repair": "true"
}

What does this tell me now? The setting can be changed to false, of course, but 
as list-inconsistent-obj shows something, I would like to find the reason for 
that first.
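
To track down where that "true" comes from, since it is not in the centralized 
config dump, I would probably check the local ceph.conf on the nodes (assuming 
the default /etc/ceph/ location) and ask a daemon which settings differ from 
the built-in defaults, roughly like this (if config diff is available on the 
admin socket):

~ $ grep -r osd_scrub_auto_repair /etc/ceph/
~ $ ceph daemon mon.ceph4 config diff | grep -A 3 osd_scrub_auto_repair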

Regards
Marcus


> On 27.06.2022 at 08:56, Stefan Kooman <ste...@bit.nl> wrote:
> 
> On 6/27/22 08:48, Marcus Müller wrote:
>> Hi all,
>> we recently upgraded from Ceph Luminous (12.x) to Ceph Octopus (15.x) (of 
>> course with Mimic and Nautilus in between). Since this upgrade we see a 
>> constant number of active+clean+scrubbing+deep+repair PGs. We never had this 
>> in the past; now it happens all the time (like 10 or 20 PGs at the same time 
>> with the +repair flag).
>> Does anyone know how to debug this more in detail ?
> 
> ceph daemon mon.$mon-id config get osd_scrub_auto_repair
> 
> ^^ This is disabled by default (Octopus 15.2.16), but you might have this 
> setting changed to true?
> 
> ceph config dump | grep osd_scrub_auto_repair to check if it's a global 
> setting.
> 
> Do the following commands return any info?
> 
> rados list-inconsistent-pg
> rados list-inconsistent-obj
> 
> Gr. Stefan

