To: Frank Schilder
Cc: Rainer Krienke; Eugen Block; ceph-users@ceph.io
Subject: Re: [ceph-users] Re: ceph Nautilus lost two disk over night everything
hangs
I thought that recovery below min_size for EC pools wasn't expected to work
until Octopus. From the Octopus release notes:
> Regards,
> =
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
>
>
> From: Rainer Krienke
> Sent: 30 March 2021 13:30:00
> To: Frank Schilder; Eugen Block; ceph-users@ceph.io
> Subject: Re: [ceph-users] Re
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Frank Schilder
Sent: 30 March 2021 14:53:18
To: Rainer Krienke; Eugen Block; ceph-users@ceph.io
Subject: Re: [ceph-users] Re: ceph Nautilus lost two disk over night everything
hangs
Dear Rainer,
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Rainer Krienke
Sent: 30 March 2021 13:30:00
To: Frank Schilder; Eugen Block; ceph-users@ceph.io
Subject: Re: [ceph-users] Re: ceph Nautilus lost two disk over night everything
hangs
Hello Frank,
the option is actually set.
: Eugen Block; ceph-users@ceph.io
Subject: [ceph-users] Re: ceph Nautilus lost two disk over night everything
hangs
Hello,
yes, your assumptions are correct: pxa-rbd is the metadata pool for
pxa-ec, which uses an erasure coding 4+2 profile.
In the last hours ceph repaired most of the damage. One
Hello,
in the meantime ceph is running again normally, except for the two osds
that are down because of the failed disks.
What really helped in my situation was to lower min_size from 5 (k+1)
to 4 in my 4+2 erasure code setup. So I am also grateful for the
programmer who put the helping hint in c
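The change Rainer describes can be sketched with the standard pool commands (pool name pxa-ec taken from this thread; adjust for your cluster):

```shell
# Temporarily allow I/O and recovery with only k shards on the 4+2 EC pool:
ceph osd pool set pxa-ec min_size 4

# After recovery completes, restore the safer k+1 setting, since running
# at min_size=k means one more failure can make writes unrecoverable:
ceph osd pool set pxa-ec min_size 5
```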
Hi,
On 30.03.21 13:05, Rainer Krienke wrote:
Hello,
yes, your assumptions are correct: pxa-rbd is the metadata pool for
pxa-ec, which uses an erasure coding 4+2 profile.
In the last hours ceph repaired most of the damage. One inactive PG
remained and in ceph health detail then told me:
-
Hello Frank,
the option is actually set. On one of my monitors:
# ceph daemon /var/run/ceph/ceph-mon.*.asok config show|grep
osd_allow_recovery_below_min_size
"osd_allow_recovery_below_min_size": "true",
Thank you very much
Rainer
On 30.03.21 at 13:20, Frank Schilder wrote:
Hi, this is
Hello,
yes, your assumptions are correct: pxa-rbd is the metadata pool for
pxa-ec, which uses an erasure coding 4+2 profile.
In the last hours ceph repaired most of the damage. One inactive PG
remained and in ceph health detail then told me:
-
HEALTH_WARN Reduced data availability: 1 p
Hi,
from what you've sent my conclusion about the stalled I/O would be
indeed the min_size of the EC pool.
There's only one PG reported as incomplete, I assume that is the EC
pool, not the replicated pxa-rbd, right? Both pools are for rbd so I'm
guessing the rbd headers are in pxa-rbd while
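The min_size explanation for the stalled I/O can be made concrete with a little arithmetic (a sketch; the numbers match the 4+2 profile and two failed disks from this thread):

```shell
# 4+2 erasure coding: k=4 data shards, m=2 coding shards per object
k=4; m=2
size=$((k + m))       # 6 shards, one per OSD
min_size=$((k + 1))   # as configured in this thread: k+1 = 5
alive=$((size - 2))   # two failed disks leave 4 shards

# With fewer than min_size shards available, the PG goes inactive
# and client I/O on it hangs:
if [ "$alive" -lt "$min_size" ]; then
  echo "PG inactive: $alive shards < min_size=$min_size"
fi
```

This prints "PG inactive: 4 shards < min_size=5", which is exactly the situation the thread describes: the data is still recoverable (4 of 6 shards suffice to reconstruct), but the pool refuses I/O until min_size is lowered or the shards recover.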