[ceph-users] Re: erasure coded pool PG stuck inconsistent on ceph Pacific 15.2.13

2021-11-19 Thread Wesley Dillingham
You may also be able to use an upmap (or the upmap balancer) to make room on the OSD that is too full.
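As a rough sketch (the PG id and OSD numbers below are hypothetical), an explicit upmap can move one of the PG's shards off the full OSD, or the balancer can compute upmaps for you:

    # remap the shard of PG 2.1f from the too-full osd.12 to osd.7 (IDs hypothetical)
    ceph osd pg-upmap-items 2.1f 12 7

    # or let the balancer generate upmaps cluster-wide
    ceph balancer mode upmap
    ceph balancer on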

[ceph-users] Re: erasure coded pool PG stuck inconsistent on ceph Pacific 15.2.13

2021-11-19 Thread Wesley Dillingham
Okay, now I see your attachment. The PG is in state: "state": "active+undersized+degraded+remapped+inconsistent+backfill_toofull". The reason it can't scrub or repair is that it's degraded, and further, the cluster doesn't have the space to complete that recovery ("backfill_toofull").
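If it's only a matter of a little headroom, one cautious workaround (the ratio below is hypothetical; the default backfillfull ratio is 0.90) is to check utilization and temporarily raise the backfill threshold:

    # see which OSDs are near full
    ceph osd df

    # temporarily raise the backfill threshold so recovery can proceed
    ceph osd set-backfillfull-ratio 0.92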

[ceph-users] Re: erasure coded pool PG stuck inconsistent on ceph Pacific 15.2.13

2021-11-18 Thread Wesley Dillingham
That response is typically indicative of a PG whose OSD set has changed since it was last scrubbed (typically from a disk failing). Are you sure it's actually getting scrubbed when you issue the scrub? For example, you can issue "ceph pg query" and look for "last_deep_scrub_stamp", which will show when the PG was last deep scrubbed.
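For example, against a hypothetical PG 2.1f, the scrub stamps can be pulled straight out of the query output:

    # show when the PG was last scrubbed / deep scrubbed
    ceph pg 2.1f query | grep -E '"last_(deep_)?scrub_stamp"'

If the stamp doesn't advance after you issue the scrub, the scrub never actually ran.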