Hi Mr. Patrick,

We are in the same situation as Sake: my MDS has crashed, the NFS service is down, and CephFS is not responding. This is my "ceph -s" output:

health: HEALTH_WARN
            3 failed cephadm daemon(s)
            1 filesystem is degraded
            insufficient standby MDS daemons available
 data:
    volumes: 0/1 healthy, 1 recovering
    pools:   15 pools, 1457 pgs
    pgs:     15664126/110662485 objects misplaced (14.155%)
             1110 active+clean
             305  active+remapped+backfill_wait
             17   active+remapped+backfilling
             13   active+remapped+backfill_toofull


Could you help explain the "recovering" status of the volume? What does it mean, and how can we track its progress?
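
In case it is useful, these are the commands I assume can be used to watch the MDS state and the recovery (please correct me if there is a better way):

    ceph fs status                     # per-rank MDS states (up:replay, up:resolve, up:active, ...)
    ceph mds stat                      # short summary of MDS ranks and their states
    ceph health detail                 # details on the 3 failed cephadm daemons
    ceph orch ps --daemon-type mds     # cephadm view of the MDS daemons and their status
    ceph -w                            # live stream of cluster and recovery events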
