Thanks for the information. I tried creating some new MDS pods, but it seems to be the same issue.
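For reference, the metadataServer section of my filesystem.yaml now looks roughly like this (a minimal sketch following the standard Rook CephFilesystem example manifest; activeCount: 3 is the value I changed, the other fields are assumed from the stock example):

    apiVersion: ceph.rook.io/v1
    kind: CephFilesystem
    metadata:
      name: myfs
      namespace: rook-ceph
    spec:
      metadataServer:
        activeCount: 3
        activeStandby: true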
[root@vm-01 examples]# cat filesystem.yaml | grep activeCount
    activeCount: 3
[root@vm-01 examples]#
[root@vm-01 examples]# kubectl get pod -nrook-ceph | grep mds
rook-ceph-mds-myfs-a-6d46fcfd4c-lxc8m    2/2    Running    0    11m
rook-ceph-mds-myfs-b-755685bcfb-mnfbv    2/2    Running    0    11m
rook-ceph-mds-myfs-c-75c78b68bf-h5m9b    2/2    Running    0    9m13s
rook-ceph-mds-myfs-d-6b595c4c98-tq6rl    2/2    Running    0    9m12s
rook-ceph-mds-myfs-e-5dbfb9445f-4hbrn    2/2    Running    0    117s
rook-ceph-mds-myfs-f-7957c55bc6-xtczr    2/2    Running    0    116s
[root@vm-01 examples]#
[root@vm-01 examples]# kubectl exec -it `kubectl get pod -nrook-ceph | grep tools | awk -F ' ' '{print $1}'` -n rook-ceph bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
bash-4.4$
bash-4.4$ ceph -s
  cluster:
    id:     de9af3fe-d3b1-4a4b-bf61-929a990295f6
    health: HEALTH_ERR
            1 filesystem is offline
            1 filesystem is online with fewer MDS than max_mds

  services:
    mon: 3 daemons, quorum a,b,d (age 74m)
    mgr: a(active, since 5d), standbys: b
    mds: 3/3 daemons up, 3 hot standby
    osd: 3 osds: 3 up (since 84m), 3 in (since 6d)
    rgw: 1 daemon active (1 hosts, 1 zones)

  data:
    volumes: 2/2 healthy
    pools:   14 pools, 233 pgs
    objects: 633 objects, 450 MiB
    usage:   2.0 GiB used, 208 GiB / 210 GiB avail
    pgs:     233 active+clean

  io:
    client:   19 KiB/s rd, 0 B/s wr, 21 op/s rd, 10 op/s wr

bash-4.4$
bash-4.4$ ceph health detail
HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
[ERR] MDS_ALL_DOWN: 1 filesystem is offline
    fs kingcephfs is offline because no MDS is active for it.
[WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
    fs kingcephfs has 0 MDS online, but wants 1
bash-4.4$
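In case it helps, a few standard ceph CLI commands (run from the toolbox pod, as above) that should show which filesystem each MDS daemon has joined; the filesystem name kingcephfs is taken from the health output above:

    # Per-filesystem view: active ranks, standbys, and the daemons serving them
    ceph fs status
    # Full FSMap dump, including standby daemons and the filesystem each belongs to
    ceph fs dump
    # Settings for the offline filesystem (max_mds, standby counts, flags)
    ceph fs get kingcephfs

Since Rook names its MDS deployments after the CephFilesystem they belong to (rook-ceph-mds-<fsname>-...), the pods listed above all appear to serve myfs, which may be why the extra pods do not bring kingcephfs back online.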