>undergo deepscrub and regular scrub cannot be completed in a timely manner. I
>have noticed that these PGs appear to be concentrated on a single OSD. I am
>seeking your guidance on how to address this issue and would appreciate any
>insights or suggestions you may have.
>
Hi,
please share more details about your cluster, such as:
ceph -s
ceph osd df tree
ceph pg ls-by-pool <pool> | head
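To check whether the overdue PGs really cluster on one OSD, the `ceph health detail` output can be tallied. The sketch below runs on a sample of that output so it is self-contained; the PG ids and timestamps are illustrative, not from this thread. On a live cluster, replace the here-doc with `ceph health detail | grep 'not deep-scrubbed since'` and feed each PG id to `ceph pg map <pgid>` to see its acting primary.

```shell
# Sample lines in the format emitted by `ceph health detail` for a
# scrub backlog: "    pg <pgid> not deep-scrubbed since <timestamp>"
health_lines=$(cat <<'EOF'
    pg 2.1a not deep-scrubbed since 2024-01-01T00:00:00
    pg 2.3f not deep-scrubbed since 2024-01-02T00:00:00
EOF
)

# Extract the overdue PG ids; on a live cluster, pipe each id through
# `ceph pg map <pgid>` and tally the primaries to see if one OSD dominates.
echo "$health_lines" | awk '/not deep-scrubbed since/ {print $2}'
```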
If the client load is not too high, you could increase the
osd_max_scrubs config from 1 to 3 and see if anything improves (what
is the current value?). If the client load is high during
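For reference, the current value can be inspected and changed at runtime with `ceph config`; these commands need a running cluster, and the exact syntax assumes a reasonably recent Ceph release:

```shell
# Show the current value of osd_max_scrubs for all OSDs
ceph config get osd osd_max_scrubs

# Raise it cluster-wide; watch client latency afterwards and revert if needed
ceph config set osd osd_max_scrubs 3

# Remove the override to fall back to the default once the backlog drains
ceph config rm osd osd_max_scrubs
```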
Hi Janne,
Thank you for your response.
I used the `ceph pg deep-scrub` command, and every affected PG points to
osd.166.
I checked the SMART data and the syslog on osd.166; the disk looks fine.
The number of PGs with a late deep scrub is lower now, but it has been 5
days since my last post.
I attached the perf dump for