Hi,
I'm still unable to get our filesystem back.
I now have this:
fs_cluster - 0 clients
======================
RANK  STATE   MDS       ACTIVITY  DNS    INOS   DIRS   CAPS
 0    rejoin  cephmd4b            90.0k  89.4k  14.7k     0
 1    rejoin  cephmd6b             105k   105k  21.3k     0
 2    failed
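The same state should also be visible in the MDSMap and the cluster health; assuming the filesystem name fs_cluster from the output above, that would be:

% ceph health detail
% ceph fs get fs_cluster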
> > ... does not change an MDS; it manipulates the file system rank which has
> > been marked damaged.
>
> Maybe that could bring it back up? Did you set max_mds to 1 at some point? If
> you do it now (and you currently have only one active MDS), maybe that would
> clean up the failed rank.
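For reference, the commands being suggested would be something like this (assuming the affected filesystem is fs_cluster and rank 2 is the one marked failed):

% ceph fs set fs_cluster max_mds 1
% ceph fs status fs_cluster

and then checking in the status output whether the failed rank goes away.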
Hi,
after our disaster yesterday, it seems that we got our MONs back.
One of the filesystems, however, seems in a strange state:
% ceph fs status
fs_cluster - 782 clients
========================
RANK  STATE   MDS       ACTIVITY  DNS  INOS  DIRS  CAPS
 0    active  cephmd6a  Reqs:
Hi,
we ran into a bigger problem today with our ceph cluster (Quincy,
Alma 8.9).
We have 4 filesystems and a total of 6 MDSs, the largest fs having
two ranks assigned (i.e. one standby).
Since we often have the problem of MDSs lagging behind, we restart
the MDSs occasionally. That usually helps, the