Yes, you can use a CRUSH rule with two take/emit steps:

step take default
step chooseleaf indep 5 type host
step emit
step take default
step chooseleaf indep 2 type host
step emit

You'll have to adjust the rule when adding a server (the 5 is your
current server count), so it's not a great solution. I'm not sure
there's a way to do this without hardcoding the number of servers
(I don't think there is).
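If it helps, here is the usual workflow to install and sanity-check a
custom rule like this. The file names, the rule id, and the pool/profile
names are placeholders; adjust them for your cluster:

```shell
# Export and decompile the current CRUSH map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# Edit crushmap.txt to add the rule above, then recompile it
crushtool -c crushmap.txt -o crushmap-new.bin

# Dry run: verify that a 7-chunk (k=5 m=2) placement maps to at most
# two OSDs per host before touching the live cluster
crushtool -i crushmap-new.bin --test --rule 2 --num-rep 7 --show-mappings

# Install the new map
ceph osd setcrushmap -i crushmap-new.bin

# Create the pool against your EC profile and the new rule
ceph osd pool create ecpool 64 64 erasure myprofile my-ec-rule
```

The crushtool dry run is the important part: it lets you confirm the
chunk distribution offline instead of watching PGs get stuck in
creating+incomplete.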

Paul


Am Do., 4. Okt. 2018 um 20:28 Uhr schrieb Vladimir Brik
<vladimir.b...@icecube.wisc.edu>:
>
> Hello
>
> I have a 5-server cluster and I am wondering if it's possible to create
> a pool that uses a k=5 m=2 erasure code. In my experiments, I ended up
> with pools whose PGs are stuck in the creating+incomplete state even
> when I created the erasure code profile with --crush-failure-domain=osd.
>
> Assuming that what I want to do is possible, will CRUSH distribute
> chunks evenly among servers, so that if I need to bring one server down
> (e.g. reboot), clients' ability to write or read any object would not be
> disrupted? (I guess something would need to ensure that no server holds
> more than two chunks of an object)
>
> Thanks,
>
> Vlad
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
