[ceph-users] Scrubbing?

2024-01-22 Thread Jan Marek
Hello, last week I got HEALTH_OK on our CEPH cluster and started upgrading the firmware in our network cards. When I had upgraded the sixth card of nine (one by one), this server didn't start correctly and our ProxMox had problems accessing disk images on CEPH. rbd ls pool was OK, but:
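A first diagnostic pass for this kind of report would be to confirm that all PGs are active and that the cluster still sees the rebooted host. A sketch (the pool name is taken from the message; the image name is a placeholder):

  ceph -s                      # overall health, OSD up/in counts
  ceph health detail           # which PGs are not active+clean
  ceph pg dump_stuck inactive  # inactive PGs are what block client I/O
  # rbd ls only reads the pool's image directory; actual image I/O
  # touches the PGs holding that image's objects:
  rbd info pool/image          # 'image' is a placeholder name

rbd ls succeeding while image access hangs would be consistent with a few PGs stuck inactive on the failed host.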

[ceph-users] scrubbing+deep+repair PGs since Upgrade

2022-06-27 Thread Marcus Müller
Hi all, we recently upgraded from Ceph Luminous (12.x) to Ceph Octopus (15.x) (of course with Mimic and Nautilus in between). Since this upgrade we see a constant number of active+clean+scrubbing+deep+repair PGs. We never had this in the past; now every time (like 10 or 20 PGs at the same
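One commonly cited explanation on this list (hedged; not confirmed for this cluster) is auto-repair on deep scrub: Octopus reports repair as part of the PG state, so a cluster with osd_scrub_auto_repair enabled can routinely show scrubbing+deep+repair even when nothing is wrong. A quick check:

  # Is auto-repair during deep scrub enabled? (upstream default is false)
  ceph config get osd osd_scrub_auto_repair
  # Upper bound on errors auto-repair will fix per scrub
  ceph config get osd osd_scrub_auto_repair_num_errors
  # Verify whether any real inconsistencies are actually being found
  ceph health detail | grep -i inconsistent
  rados list-inconsistent-pg <pool>   # <pool> is a placeholder

If auto-repair is on and no inconsistencies are reported, the state change may be cosmetic rather than a sign of damage.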

[ceph-users] Scrubbing

2022-03-09 Thread Ray Cunningham
Hi Everyone, We have a 900 OSD cluster and our PG scrubs aren't keeping up. We are always behind and have tried to tweak some of the scrub config settings to allow a higher priority and faster scrubbing, but it doesn't seem to make any difference. Does anyone have any suggestions for
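For reference, these are the knobs most often adjusted when scrubs fall behind on a large cluster (values below are illustrative, not recommendations; change one at a time and watch client latency):

  ceph config set osd osd_max_scrubs 2              # concurrent scrubs per OSD, default 1
  ceph config set osd osd_scrub_load_threshold 5.0  # default 0.5 blocks scrubs on any loaded host
  ceph config set osd osd_scrub_sleep 0.0           # pause between scrub chunks; lower = faster
  # Widen the allowed time window if scrubbing is confined to off-hours
  # (begin == end == 0 permits scrubbing around the clock):
  ceph config set osd osd_scrub_begin_hour 0
  ceph config set osd osd_scrub_end_hour 0

With 900 OSDs it is also worth comparing osd_deep_scrub_interval against how long a full deep-scrub cycle actually takes; if the cycle takes longer than the interval, the cluster can never catch up.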

[ceph-users] Scrubbing - osd down

2020-12-11 Thread Miroslav Boháč
Hi, I have a problem with crashing OSD daemons in our Ceph 15.2.6 cluster. The problem was temporarily resolved by disabling scrub and deep-scrub. All PGs are active+clean. After a few days I tried to enable scrubbing again, but the problem persists: OSDs with high latencies, laggy PGs, osd not
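A defensive sequence for investigating scrub-triggered OSD crashes (a sketch; <id> and <pgid> are placeholders):

  # keep the cluster stable while collecting evidence
  ceph osd set noscrub
  ceph osd set nodeep-scrub
  # 15.2.x records daemon crashes in the crash module
  ceph crash ls
  ceph crash info <id>         # stack trace for one crash
  # reproduce in a controlled way: re-enable, then scrub a single PG
  ceph osd unset noscrub
  ceph osd unset nodeep-scrub
  ceph pg deep-scrub <pgid>

Correlating the crash module output with the PG that was being scrubbed at the time usually narrows the problem to specific OSDs, and from there to bad media or an inconsistent object.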