Miki,

osd crush chooseleaf type defaults to 1 (host), which means CRUSH tries to
place each replica of a placement group on a different node, never on the
same node. On a single-node cluster there is no second node to choose, so
the PGs can never go clean. You would need to set that to 0 (osd) for a
1-node cluster so replicas can land on different OSDs within the same host.
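For reference, a minimal sketch of what that looks like in ceph.conf (this
only affects CRUSH maps generated after the setting is in place; the file
names below are just examples):

    [global]
        # single-host test cluster: spread replicas across OSDs
        # on the same host instead of across separate hosts
        osd crush chooseleaf type = 0

For a cluster that already exists, the usual route is to edit the CRUSH map
with crushtool, roughly along these lines:

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # in the replicated rule, change
    #     step chooseleaf firstn 0 type host
    # to
    #     step chooseleaf firstn 0 type osd
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new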

John


On Sun, Jun 8, 2014 at 10:40 PM, Miki Habryn <dic...@rcpt.to> wrote:

> I set up a single-node, dual-osd cluster following the Quick Start on
> ceph.com with Firefly packages, adding "osd pool default size = 2".
> All of the pgs came up in active+remapped or active+degraded status. I
> read up on tunables and set them to optimal, to no result, so I added
> a third osd instead. About 39 pgs moved to active status, but the rest
> stayed in active+remapped or active+degraded. When I raised the
> replication level to 3 with "ceph osd pool set ... size 3", all the
> pgs went back to degraded or remapped. Just for kicks, I tried to set
> the replication level to 1, and I still only got 39 pgs active. Is
> there something obvious I'm doing wrong?
>
> m.
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 
John Wilkins
Senior Technical Writer
Inktank
john.wilk...@inktank.com
(415) 425-9599
http://inktank.com
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
