[ceph-users] Re: MDS_DAMAGE dir_frag

2022-12-12 Thread Dhairya Parmar
Hi there, You might want to look at [1] for this, also I found a relevant thread [2] that could be helpful. [1] https://docs.ceph.com/en/latest/cephfs/disaster-recovery-experts/#disaster-recovery-experts [2] https://www.spinics.net/lists/ceph-users/msg53202.html - Dhairya On Mon, Dec 12, 2022
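The disaster-recovery-experts page linked above starts by insisting on a journal backup before any repair is attempted. A minimal dry-run sketch of that first step, assuming a filesystem named "cephfs" at MDS rank 0 (placeholders, not values from this thread); `echo` is used so the commands are printed for review rather than executed:

```shell
# Assumption: filesystem "cephfs", MDS rank 0 -- adjust for your cluster.
FS_RANK="cephfs:0"

# Export the MDS journal to a file before any repair action,
# as the disaster-recovery-experts guide advises.
echo cephfs-journal-tool --rank="$FS_RANK" journal export backup.bin

# Inspect the journal for damage without modifying anything.
echo cephfs-journal-tool --rank="$FS_RANK" journal inspect
```

Remove the `echo` prefixes only once the backup destination and rank have been double-checked.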

[ceph-users] Re: MDS_DAMAGE dir_frag

2022-12-12 Thread Sascha Lucas
Hi Dhairya, On Mon, 12 Dec 2022, Dhairya Parmar wrote: You might want to look at [1] for this, also I found a relevant thread [2] that could be helpful. Thanks a lot. I already found [1,2], too. But I did not consider it, because I felt I was not having a "disaster"? Nothing seems broken nor cr

[ceph-users] Re: MDS_DAMAGE dir_frag

2022-12-12 Thread Gregory Farnum
On Mon, Dec 12, 2022 at 12:10 PM Sascha Lucas wrote:
> Hi Dhairya,
>
> On Mon, 12 Dec 2022, Dhairya Parmar wrote:
>
> > You might want to look at [1] for this, also I found a relevant thread [2]
> > that could be helpful.
>
> Thanks a lot. I already found [1,2], too. But I did not considere

[ceph-users] Re: MDS_DAMAGE dir_frag

2022-12-12 Thread Sascha Lucas
Hi Greg, On Mon, 12 Dec 2022, Gregory Farnum wrote: On Mon, Dec 12, 2022 at 12:10 PM Sascha Lucas wrote: A follow-up of [2] also mentioned having random meta-data corruption: "We have 4 clusters (all running same version) and have experienced meta-data corruption on the majority of them at

[ceph-users] Re: MDS_DAMAGE dir_frag

2022-12-12 Thread William Edwards
> On 12 Dec 2022 at 22:47, Sascha Lucas wrote the following:
>
> Hi Greg,
>
>> On Mon, 12 Dec 2022, Gregory Farnum wrote:
>>
>> On Mon, Dec 12, 2022 at 12:10 PM Sascha Lucas wrote:
>
>>> A follow-up of [2] also mentioned having random meta-data corruption: "We
>>> have 4 cluste

[ceph-users] Re: MDS_DAMAGE dir_frag

2022-12-13 Thread Sascha Lucas
Hi, On Mon, 12 Dec 2022, Sascha Lucas wrote: On Mon, 12 Dec 2022, Gregory Farnum wrote: Yes, we’d very much like to understand this. What versions of the server and kernel client are you using? What platform stack — I see it looks like you are using CephFS through the volumes interface? The

[ceph-users] Re: MDS_DAMAGE dir_frag

2022-12-13 Thread Sascha Lucas
Hi William, On Mon, 12 Dec 2022, William Edwards wrote: On 12 Dec 2022 at 22:47, Sascha Lucas wrote the following: Ceph "servers" like MONs, OSDs, MDSs etc. are all 17.2.5/cephadm/podman. The filesystem kernel clients are co-located on the same hosts running the "servers". Isn

[ceph-users] Re: MDS_DAMAGE dir_frag

2022-12-14 Thread Venky Shankar
Hi Sascha,

On Tue, Dec 13, 2022 at 6:43 PM Sascha Lucas wrote:
>
> Hi,
>
> On Mon, 12 Dec 2022, Sascha Lucas wrote:
>
> > On Mon, 12 Dec 2022, Gregory Farnum wrote:
>
> >> Yes, we'd very much like to understand this. What versions of the server
> >> and kernel client are you using? What platform

[ceph-users] Re: MDS_DAMAGE dir_frag

2022-12-14 Thread Sascha Lucas
Hi Venky, On Wed, 14 Dec 2022, Venky Shankar wrote: On Tue, Dec 13, 2022 at 6:43 PM Sascha Lucas wrote: Just an update: "scrub / recursive,repair" does not uncover additional errors. But also does not fix the single dirfrag error. File system scrub does not clear entries from the damage l
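The distinction drawn in this reply (scrub can repair metadata but does not clear the damage table) can be summarized as a command sequence. A hedged sketch printed as a dry run via `echo`; the filesystem name, rank, and damage ID are placeholders, not values from the thread:

```shell
# Assumption: filesystem "cephfs", MDS rank 0 -- placeholders only.
FS_RANK="cephfs:0"

# 1. Recursive repair scrub from the root, as attempted in the thread.
echo ceph tell mds."$FS_RANK" scrub start / recursive,repair

# 2. Scrub does not clear the damage table; list what remains.
echo ceph tell mds."$FS_RANK" damage ls

# 3. Only after verifying the metadata is actually repaired,
#    remove the stale entry by the ID shown in "damage ls".
echo ceph tell mds."$FS_RANK" damage rm "<damage-id>"
```

`damage rm` only deletes the bookkeeping entry; it does not repair anything, so running it before the underlying dirfrag is fixed simply hides the problem.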