Here is my ceph -s output:
  cluster:
    id:     bfe08dcf-aabd-4cac-ac4f-9e56af3df11b
    health: HEALTH_ERR
            1/3 mons down, quorum omicron-m1,omicron-m2
            6 scrub errors
            Possible data damage: 1 pg inconsistent
            Degraded data redundancy: 626702/20558920 objects degraded (3.048%), 32 pgs degraded, 32 pgs undersized
            94 daemons have recently crashed
 
  services:
    mon: 3 daemons, quorum omicron-m1,omicron-m2 (age 8h), out of quorum: omicron-m0
    mgr: omicron-m0(active, since 28h)
    osd: 33 osds: 32 up (since 28h), 32 in (since 28h)
 
  data:
    pools:   8 pools, 736 pgs
    objects: 9.97M objects, 9.9 TiB
    usage:   21 TiB used, 27 TiB / 47 TiB avail
    pgs:     626702/20558920 objects degraded (3.048%)
             702 active+clean
             32  active+undersized+degraded
             1   active+clean+inconsistent
             1   active+clean+scrubbing+deep+repair

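For the down mon and the 94 recent crashes, my plan was to look at the daemon on omicron-m0 and at the crash reports, roughly like this (the systemd unit name is a guess that assumes a package-based install, cephadm names its units differently, and <crash-id> is a placeholder from the crash ls output):

    # on omicron-m0: is the mon process running, and why did it stop?
    systemctl status ceph-mon@omicron-m0
    journalctl -u ceph-mon@omicron-m0 --since yesterday

    # from any admin node: list and inspect the recent crash reports
    ceph crash ls
    ceph crash info <crash-id>
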
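For the scrub errors, a repair already seems to be running (the active+clean+scrubbing+deep+repair PG). If it does not clear them, I assume the usual sequence is something like this, with <pgid> taken from the health detail output:

    ceph health detail                                        # names the inconsistent PG
    rados list-inconsistent-obj <pgid> --format=json-pretty   # which objects/shards disagree
    ceph pg repair <pgid>                                     # ask the primary OSD to repair it
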
Also, ceph fs status prints no output at all.
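
Note that ceph -s shows no mds line either, so I assume either no MDS daemon is up or the mgr is not answering (as far as I know, fs status is served by the mgr status module, and the active mgr sits on omicron-m0 next to the down mon). I was going to check with:

    ceph fs ls          # is a filesystem still defined?
    ceph mds stat       # are any MDS daemons up/in?
    ceph fs dump        # full FSMap, including standbys
    ceph mgr module ls  # is the status module enabled on the mgr?

Does that look like the right order of operations, or should I be worrying about the degraded PGs first?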