Hello,
last week our Ceph cluster reported HEALTH_OK, so I started
upgrading the firmware on our network cards.
After I had upgraded the sixth card of nine (one by one), that
server did not start correctly and our Proxmox had problems
accessing disk images on Ceph.
rbd ls pool
was OK, but:
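For anyone hitting a similar situation, it helps to know that "rbd ls" only reads the pool's directory object, so it can succeed even when the header or data objects of an individual image live on unavailable PGs. A hedged sketch of commands that probe more deeply (the pool and image names below are placeholders, not from the original post):

```shell
# List image names in the pool (only reads the rbd_directory object).
rbd ls pool

# Read one image's header; this fails if the header object's PG is down.
rbd info pool/vm-disk-1

# Show watchers on the image (e.g. a QEMU/KVM client holding it open).
rbd status pool/vm-disk-1

# Check overall health and look for PGs that are stuck inactive.
ceph health detail
ceph pg dump_stuck inactive
```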
Hi all,
we recently upgraded from Ceph Luminous (12.x) to Ceph Octopus (15.x), of
course with Mimic and Nautilus in between. Since this upgrade we see a
constant number of active+clean+scrubbing+deep+repair PGs. We never had this
in the past, now every time (like 10 or 20 PGs at the same
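Not part of the original post, but a sketch of how to inspect this: the repair state during deep-scrub is usually tied to the osd_scrub_auto_repair option (default false), which makes deep-scrubs repair inconsistencies automatically instead of just reporting them:

```shell
# List PGs currently in a scrubbing/repair state.
ceph pg dump pgs_brief 2>/dev/null | grep repair

# Check whether automatic repair during deep-scrub is enabled.
ceph config get osd osd_scrub_auto_repair

# Look for reported scrub errors and inconsistent PGs.
ceph health detail
```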
Hi Everyone,
We have a 900 OSD cluster and our PG scrubs aren't keeping up. We are always
behind and have tried to tweak some of the scrub config settings to allow a
higher priority and faster scrubbing, but it doesn't seem to make any
difference. Does anyone have any suggestions for
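As a hedged sketch (not from the thread, and the values are illustrative rather than recommendations), these are the scrub-related options most commonly tuned when scrubs fall behind on large clusters:

```shell
# Allow more concurrent scrubs per OSD (default 1).
ceph config set osd osd_max_scrubs 2

# Let scrubs start even under higher host load (default 0.5).
ceph config set osd osd_scrub_load_threshold 2.0

# Reduce the sleep injected between scrub chunks.
ceph config set osd osd_scrub_sleep 0.05

# Widen the allowed scrub time window to the whole day.
ceph config set osd osd_scrub_begin_hour 0
ceph config set osd osd_scrub_end_hour 24

# Verify what is actually applied on one OSD.
ceph config show osd.0 | grep scrub
```

Note that raising osd_max_scrubs trades client latency for scrub throughput, so it is worth watching client I/O while changing it.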
Hi,
I have a problem with crashing OSD daemons in our Ceph 15.2.6 cluster. The
problem was temporarily resolved by disabling scrub and deep-scrub. All PGs
are active+clean. After a few days I tried to enable scrubbing again, but
the problem persists: OSDs with high latencies, laggy PGs, osd not
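For reference (a general sketch, not advice specific to this cluster), the "disabling scrub and deep-scrub" mentioned above is done with cluster-wide OSD flags, which can be toggled while investigating:

```shell
# Disable all scrubbing cluster-wide.
ceph osd set noscrub
ceph osd set nodeep-scrub

# Re-enable once the OSDs have stabilized.
ceph osd unset noscrub
ceph osd unset nodeep-scrub

# Watch the cluster log live for OSDs flapping or slow ops as scrubs resume.
ceph -w
```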