Hi Benjamin,
On 15.12.2014 03:31, Benjamin wrote:
> Hey there,
>
> I've set up a small VirtualBox cluster of Ceph VMs. I have one
> "ceph-admin0" node, and three "ceph0,ceph1,ceph2" nodes for a total of 4.
>
> I've been following this
> guide: http://ceph.com/docs/master/start/quick-ceph-deploy/ to the letter.
>
> At the end of the guide, it calls for you to run "ceph health"... this
> is what happens when I do.
>
> "HEALTH_ERR 64 pgs stale; 64 pgs stuck stale; 2 full osd(s); 2/2 in
> osds are down"
Hmm, why do you have only two OSDs with three OSD nodes?
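(If you want one OSD per node, you can add a third one with ceph-deploy later, e.g. with hostname and directory adapted to your setup:

ceph-deploy osd prepare ceph2:/var/local/osd2     # directory must already exist
ceph-deploy osd activate ceph2:/var/local/osd2

But first the two existing OSDs have to come up.)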

Can you post the output of the following commands?
ceph health detail
ceph osd tree
rados df
ceph osd pool get data size
ceph osd pool get rbd size
df -h # on all OSD-nodes
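If the OSDs are marked down, the OSD logs on the nodes usually tell you why (assuming the default log location):

less /var/log/ceph/ceph-osd.0.log     # on node with osd.0
less /var/log/ceph/ceph-osd.1.log     # on node with osd.1

If the daemons simply aren't running, you can start them with: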

/etc/init.d/ceph start osd.0      # on node with osd.0
/etc/init.d/ceph start osd.1      # on node with osd.1
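Afterwards you can check whether they come back up with:

ceph osd tree
ceph -w      # watch the cluster state live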


Udo

