Additional information:

- We have already tried restarting the services and rebooting the whole machine
- Part of the journalctl output:

jan 13 02:40:18 s1.ceph.infra.ufscar.br ceph-bab39b74-c93a-4e34-aae9-a44a5569d52c-mon-s1[6343]: debug 2023-01-13T05:40:18.653+0000 7fc370b64700  0 log_channel(cluster) log [WRN] : Replacing daemon mds.cephfs.s1.nvopyf as rank 1 with standby daemon mds.cephfs.s2.qikxmw
jan 13 02:40:18 s1.ceph.infra.ufscar.br ceph-bab39b74-c93a-4e34-aae9-a44a5569d52c-mon-s1[6343]: debug 2023-01-13T05:40:18.653+0000 7fc370b64700  1 mon.s1@0(leader).mds e653196 fail_mds_gid 107853765 mds.cephfs.s1.nvopyf role 1
jan 13 02:40:18 s1.ceph.infra.ufscar.br ceph-bab39b74-c93a-4e34-aae9-a44a5569d52c-mon-s1[6343]: debug 2023-01-13T05:40:18.653+0000 7fc370b64700  0 log_channel(cluster) log [INF] : MDS daemon mds.cephfs.s1.nvopyf is removed because it is dead or otherwise unavailable.
jan 13 02:40:18 s1.ceph.infra.ufscar.br ceph-bab39b74-c93a-4e34-aae9-a44a5569d52c-mon-s1[6343]: debug 2023-01-13T05:40:18.677+0000 7fc370b64700  0 log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is degraded (FS_DEGRADED)
jan 13 02:40:18 s1.ceph.infra.ufscar.br ceph-bab39b74-c93a-4e34-aae9-a44a5569d52c-mon-s1[6343]: debug 2023-01-13T05:40:18.677+0000 7fc370b64700  0 log_channel(cluster) log [WRN] : Health check failed: insufficient standby MDS daemons available (MDS_INSUFFICIENT_STANDBY)
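
For reference, a minimal sketch of commands one might use to inspect the MDS/standby state behind the FS_DEGRADED and MDS_INSUFFICIENT_STANDBY warnings above (assuming a cephadm-managed cluster, as the container names suggest; the filesystem name "cephfs" is taken from the daemon names, and the placement count of 3 is only an example):

    ceph health detail                         # full text of the failed health checks
    ceph fs status cephfs                      # per-rank MDS state and remaining standbys
    ceph orch ps --daemon-type mds             # where cephadm has scheduled the mds daemons
    ceph orch apply mds cephfs --placement=3   # example: adjust the mds count so a standby exists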