> On Feb 10, 2015, at 12:37 PM, B L <super.itera...@gmail.com> wrote:
> 
> Hi Vickie,
> 
> Thanks for your reply!
> 
> You can find the dump at this link:
> 
> https://gist.github.com/anonymous/706d4a1ec81c93fd1eca 
> 
> Thanks!
> B.
> 
> 
>> On Feb 10, 2015, at 12:23 PM, Vickie ch <mika.leaf...@gmail.com> wrote:
>> 
>> Hi Beanos:
>>    Would you post the result of "$ ceph osd dump"?
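>> 
>>    Also, the last warning in your summary ("pool data pg_num 128 >
>> pgp_num 64") usually means pgp_num was not raised together with pg_num.
>> One possible first step, assuming the pool name "data" from the warning,
>> is to bring pgp_num back in line:
>> 
>>     # raise pgp_num to match pg_num on the "data" pool
>>     $ ceph osd pool set data pgp_num 128
>> 
>>    That alone may not clear the incomplete PGs, so the osd dump is still
>> worth a look.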
>> 
>> Best wishes,
>> Vickie
>> 
>> 2015-02-10 16:36 GMT+08:00 B L <super.itera...@gmail.com>:
>> I'm having a problem with my fresh, unhealthy cluster; its status summary
>> shows this:
>> 
>> ceph@ceph-node1:~$ ceph -s
>> 
>>     cluster 17bea68b-1634-4cd1-8b2a-00a60ef4761d
>>      health HEALTH_WARN 256 pgs incomplete; 256 pgs stuck inactive; 256 pgs 
>> stuck unclean; pool data pg_num 128 > pgp_num 64
>>      monmap e1: 1 mons at {ceph-node1=172.31.0.84:6789/0}, election epoch 2, quorum 0 ceph-node1
>>      osdmap e25: 6 osds: 6 up, 6 in
>>       pgmap v82: 256 pgs, 3 pools, 0 bytes data, 0 objects
>>             198 MB used, 18167 MB / 18365 MB avail
>>                  192 incomplete
>>                   64 creating+incomplete
>> 
>> 
>> Where shall I start troubleshooting this?
>> 
>> P.S. I’m new to Ceph.
>> 
>> Thanks!
>> Beanos
>> 
> 

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
