This usually means that all of your OSDs stopped running at around the same
time; the monitors will eventually mark them down. You should verify that
the ceph-osd processes are actually still running on each node.
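
A quick way to check is something like the following (a sketch; the exact
service commands depend on your distro and init system, so adjust as needed):

```shell
# Check whether the ceph-osd daemons are actually running on each OSD node:
ps aux | grep '[c]eph-osd'

# Ask the cluster what it thinks about the OSDs and placement groups:
ceph osd tree
ceph pg stat

# If the daemons are not running, restart them. On sysvinit-based
# installs this is typically:
#   /etc/init.d/ceph start osd
# On systemd-based installs, something like:
#   systemctl start ceph-osd@0
```

Once the OSDs are back up and reporting in, the stale PGs should clear on
their own.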
-Greg

On Saturday, April 26, 2014, Srinivasa Rao Ragolu <srag...@mvista.com>
wrote:

> Hi,
>
> My monitor node and osd nodes are running fine. But my cluster health is
> "stale+active+clean"
>
> root@node1:/etc/ceph# ceph status
>     cluster a7f64266-0894-4f1e-a635-d0aeaca0e993
>      health HEALTH_WARN 2856 pgs stale; 2856 pgs stuck stale
>      monmap e1: 1 mons at {mon=192.168.0.102:6789/0}, election epoch 1,
> quorum 0 mon
>      osdmap e48: 2 osds: 2 up, 2 in
>       pgmap v451: 2856 pgs, 11 pools, 1590 bytes data, 49 objects
>             2072 MB used, 9117 MB / 11837 MB avail
>                 2856 stale+active+clean
>
> How can I get it back to the active+clean state?
>
>
> Thanks,
> Srinivas.
>


-- 
Software Engineer #42 @ http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com