in the future.
I guess I'll just have to scratch CephFS for other use cases higher up the
chain for now, given this isn't a situation that would be acceptable in
most of our clusters.
Cheers,
Izzy
On Fri, 11 Feb 2022 at 19:58, Gregory Farnum wrote:
> On Fri, Feb 11, 2022 at 10:53 AM Izzy Kulbe wrote:
POOL                  TYPE      USED   AVAIL
cephfs.backupfs.meta  metadata  198G   623G
cephfs.backupfs.data  data      97.9T  30.4T
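The listing above matches the pool section of ceph fs status; assuming the
filesystem is named backupfs (as the pool names suggest), comparable output
can be pulled with:

    ceph fs status backupfs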
On Fri, 11 Feb 2022 at 17:05, Izzy Kulbe wrote:
> Hi,
>
> at the moment no clients should be connected to the MDS (since the MDS
> doesn't come up) and the clust
        "bytes": 0
    },
    "buffer_anon": {
        "items": 63,
        "bytes": 4132815
    },
    "buffer_meta": {
        "items": 3,
        "bytes": 264
    },
    "osd": {
        "items": 0,
        "bytes": 0
    },
    "osd_
be while
> it is rejoining the cluster?
> If the latter, this could be another case of:
> https://tracker.ceph.com/issues/54253
>
> Cheers, Dan
>
>
> On Wed, Feb 9, 2022 at 7:23 PM Izzy Kulbe wrote:
> >
> > Hi,
> >
> > last weekend we upgraded on
nother day or two before resetting/recreating the FS.
Thanks,
Izzy Kulbe
MDS Log:
20 mds.0.cache.dir(0x604.00000*) lookup_exact_snap (head, '1eff917')
10 mds.0.cache.snaprealm(0x1eff917 seq 3398 0x55c541856200) adjust_parent 0 -> 0x55ba9f1e2e00
12 mds.0.cache.dir(0x
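Cache-level messages like these (subsystem levels 10-20) only show up with
MDS debugging turned up; one way to enable that, assuming a recent release
with the central config store, is:

    ceph config set mds debug_mds 20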