Thanks, good question.
I discovered the OMAP format conversion since I asked this, which can be
bypassed by setting bluestore_fsck_quick_fix_on_mount to false.
In which case the answer would be, no, it won't take 18 minutes again.
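If I've understood the docs, that would be something like the following
before restarting the OSDs (untested on our end, so treat it as a sketch):

    ceph config set osd bluestore_fsck_quick_fix_on_mount false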
Will test.
-- Trey
On Mon, Feb 14, 2022 at 4:37 PM Marc wrote:

[...] an hour or two to do the next node. The primary RGW data pool is
3+2 EC, so I expect that recovery is a little slower than it would be in
a replicated pool.
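(For anyone following along: the pool's EC profile can be confirmed with
something like the below, where the pool name is just a placeholder for
ours:

    ceph osd pool get default.rgw.buckets.data erasure_code_profile
    ceph osd erasure-code-profile get <profile-name>

With k=3, rebuilding a missing chunk means reading three surviving shards,
so EC recovery lagging a replicated pool is expected.)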
Under Luminous they were only taking a few minutes to connect.
Any ideas what could be happening here?
Thanks,
Trey Palmer
Thank y'all. This metric is exactly what we need. Turns out it was
introduced in 14.2.17 and we have 14.2.9.
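For reference, what's actually running can be double-checked per daemon
type with:

    ceph versions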
On Wed, Feb 9, 2022 at 2:32 AM Konstantin Shalygin wrote:
> Hi,
>
> On 9 Feb 2022, at 09:03, Benoît Knecht wrote:
>
> I don't remember in which Ceph release it was introduced, but it's in
> later versions, as well.
Thanks,
Trey Palmer
[...] not as big, I'm assuming, because the buckets are less transient.
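In case it's useful to anyone, rough per-OSD omap usage is visible in the
OMAP column of:

    ceph osd df tree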
Thanks,
Trey Palmer
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io