Hi Everyone,
Just an update to this in case anyone has the same issue. This seems
to have been caused by ceph osd reweight-by-utilization. Because we
have two pools that map to two separate sets of disks, and one pool was
fuller than the other, reweight-by-utilization had reduced the weight
of some of the OSDs.
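For anyone who hits the same thing, a rough sketch of how to check and
undo those reweights (the OSD number below is just a placeholder):

  # Show per-OSD utilization and the current override reweight
  ceph osd df tree

  # reweight-by-utilization only changes the 0.0-1.0 override reweight,
  # visible in the REWEIGHT column of ceph osd tree, not the CRUSH weight
  ceph osd tree

  # Restore a reduced OSD back to full weight
  ceph osd reweight 12 1.0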
Perhaps a deep scrub will surface a scrub error, which you could then
try to fix with ceph pg repair?
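Something along these lines (the pg id 1.2f is just an example):

  # Force a deep scrub of the affected pg
  ceph pg deep-scrub 1.2f

  # If the scrub flags an inconsistency, check the details and repair
  ceph health detail
  ceph pg repair 1.2f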
Btw, it seems that you use 2 replicas, which is not recommended except
for dev environments.
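If you want to move to 3 replicas, it would be roughly this (the pool
name rbd is just an example):

  # Check the current replica count
  ceph osd pool get rbd size

  # Keep 3 copies, and require at least 2 to serve I/O
  ceph osd pool set rbd size 3
  ceph osd pool set rbd min_size 2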
On 24 January 2017 22:58:14 CET, Richard Bade wrote:
Hi Everyone,
I've got a strange one. After doing a reweight of some OSDs the other
night, our cluster is showing 1 pg stuck unclean.
2017-01-25 09:48:41 : 1 pgs stuck unclean | recovery 140/71532872
objects degraded (0.000%) | recovery 2553/71532872 objects misplaced
(0.004%)
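A sketch of how to find and inspect the stuck pg (the pg id 1.2f below
is just a placeholder):

  # List the pgs stuck in the unclean state
  ceph pg dump_stuck unclean

  # Dump the pg's state, acting set and recovery info
  ceph pg 1.2f query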
When I query the pg, I