On 01/13/2014 12:39 PM, Dietmar Maurer wrote:
> I am still playing around with a small setup of 3 nodes, each running
> 4 OSDs (= 12 OSDs in total).
> 
> When using a pool size of 3, I get the following behavior when one OSD
> fails:
> * the affected PGs get marked active+degraded
> 
> * there is no data movement/backfill

This works as designed if you have the default CRUSH map in place: all
replicas must be placed on DIFFERENT hosts, and with only 3 hosts and a
pool size of 3 there is no spare host left to backfill to, so the PGs
stay active+degraded. You would need to tweak your CRUSH map in this
case (e.g. lower the failure domain from host to OSD), but be aware
that this can have serious consequences (think of all three replicas of
a PG residing on 3 disks in a single host).
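
For reference, a minimal sketch of what that tweak looks like. The
commands are the standard ceph/crushtool ones; the file names are
arbitrary, and the quoted rule line assumes the stock rule from a
default map, so check your own decompiled map first:

  # extract and decompile the current CRUSH map
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt

  # in crushmap.txt, change the failure domain of the rule your pool
  # uses from "host" to "osd":
  #
  #   step chooseleaf firstn 0 type host
  # becomes:
  #   step chooseleaf firstn 0 type osd

  # recompile and inject the modified map
  crushtool -c crushmap.txt -o crushmap-new.bin
  ceph osd setcrushmap -i crushmap-new.bin

With that change CRUSH may place two replicas on OSDs of the same
host. That is exactly the weakened failure domain described above, but
it lets the cluster backfill again when a single OSD fails.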

Wolfgang


-- 
http://www.wogri.com