Hi Robert,
Thanks for looking at this. The explanation is a different one, though.
Today I added disks to the second server, which was in exactly the same state as
the one reported below. I used this opportunity to try a modified reboot and
OSD-adding sequence.
To recall the situation, I added
On Tue, Oct 1, 2019 at 5:25 AM Frank Schilder wrote:
>
> I'm running a ceph fs with an 8+2 EC data pool. Disks are on 10 hosts and
> failure domain is host. Version is mimic 13.2.2. Today I added a few OSDs to
> one of the hosts and observed that a lot of PGs became inactive even though 9
> out