snaptrimming on the go.
FYI - the journal is co-located on the drive.
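For reference, one quick way to confirm that layout (assuming the OSDs were
deployed with ceph-volume):

    sudo ceph-volume lvm list

The output lists each OSD's block, DB and WAL devices, so a co-located
journal shows up as the same physical drive.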
Kind regards
Geoff
On Fri, 24 Feb 2023 at 18:30, Anthony D'Atri wrote:
> Are you only doing 2 replicas?
>
> On Feb 24, 2023, at 08:20, Geoffrey Rhodes wrote:
Hello all, I'd really appreciate some input from the more knowledgeable
here.
Is there a way I can access OSD objects if I have a BlueFS replay error?
This error prevents me from starting the OSD and also throws an error if I
try using the bluestore or objectstore tools. I can however run a
ceph-bluestore-tool [...] nothing obvious.
Set debug bluefs = 20 (saw this in another post):
https://pastebin.com/3PkCabdf
https://pastebin.com/BT9bnhSb
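For anyone following along, this is roughly what I'm running (a sketch only;
the OSD id and output paths below are placeholders, not my actual layout):

    sudo ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-1 \
        --log-file /tmp/osd1-fsck.log --log-level 20

    sudo ceph-bluestore-tool bluefs-export --path /var/lib/ceph/osd/ceph-1 \
        --out-dir /mnt/osd1-bluefs

Both commands mount BlueFS, so bluefs-export may well hit the same replay
error, but if it gets through it dumps the RocksDB files somewhere they can
be inspected.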
Kind regards
Geoffrey Rhodes
On Wed, 25 Jan 2023 at 12:44, Geoffrey Rhodes wrote:
> Good day all,
>
> I've an issue with a few OSDs (in two different nodes)
journals for each OSD are co-located on each drive.
Kind regards
Geoffrey Rhodes
node from the cluster.
Then start over, installing the node and adding it to the correct CRUSH
bucket, etc.
This feels like an unnecessary course of action when all I need to do is
replace the OS drive.
OS: Ubuntu 18.04.6 LTS
Ceph version: 15.2.17 - Octopus
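In case it helps anyone searching later, the approach I had in mind is
roughly this (a sketch, assuming the OSDs were deployed with ceph-volume and
the data drives stay untouched):

    # stop the cluster rebalancing while the node is down
    sudo ceph osd set noout

    # reinstall the OS, install matching ceph packages, restore
    # /etc/ceph/ceph.conf and the OSD bootstrap keyring, then bring the
    # existing OSDs back up from their LVM metadata:
    sudo ceph-volume lvm activate --all

    # once all OSDs on the node are up and in again
    sudo ceph osd unset noout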
Kind regards
Geoff
pool with two host failures.
RUN: sudo ceph osd pool set ec32pool min_size 3
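(The reasoning, assuming ec32pool really is a k=3, m=2 profile: two failed
hosts leave only 3 shards, and the default min_size of k+1 = 4 blocks I/O,
so dropping min_size to k = 3 lets the pool keep serving I/O, albeit with no
remaining redundancy while degraded. The profile can be checked with
something like:

    sudo ceph osd pool get ec32pool erasure_code_profile
    sudo ceph osd erasure-code-profile get <profile-name>

where <profile-name> is whatever the first command returns.)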
Kind regards
Geoffrey Rhodes