Oh, and you'll need to use m>=3 to ensure availability during a node failure.
Paul
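To make the arithmetic behind that concrete (numbers inferred from the k=5 m=2 setup discussed below, not stated in this message): 7 shards spread over 5 hosts means at least one host holds two shards, so a single node failure can take out two shards at once.

```shell
# Worst case for k=5, m=2 on 5 hosts (numbers assumed from the setup
# discussed in this thread): ceil(7/5) = 2 shards end up sharing a host.
shards=7
hosts=5
per_host=$(( (shards + hosts - 1) / hosts ))   # ceiling division
echo "worst-case shards lost with one node: $per_host"
```

Losing that host removes 2 shards, leaving only k=5 survivors, which is typically below the EC pool's min_size of k+1, so those PGs go inactive; with m=3 a spare shard remains.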
On Fri, 5 Oct 2018 at 11:22, Caspar Smit wrote:
Hi Vlad,
You can check this blog:
http://cephnotes.ksperis.com/blog/2017/01/27/erasure-code-on-small-clusters
Note! Be aware that these settings do not automatically cover a node
failure.
Check out this thread why:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-February/024423.html
K
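For concreteness, a minimal sketch of the approach the linked blog describes, assuming its osd-level failure domain; the profile and pool names ("ec52", "ecpool") and PG counts are placeholders, not from the thread:

```shell
# Hedged sketch: with crush-failure-domain=osd, all k+m=7 shards can be
# placed on a 5-host cluster, but several shards may land on one host.
ceph osd erasure-code-profile set ec52 k=5 m=2 crush-failure-domain=osd
ceph osd pool create ecpool 64 64 erasure ec52
```

This is exactly why the caveat above applies: a host can hold more than m=2 shards, so one node failure is not automatically survivable.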
Yes, you can use a crush rule with two take steps:
step take default
step chooseleaf indep 5 type host
step emit
step take default
step chooseleaf indep 2 type host
step emit
You'll have to adjust it when adding a server, so it's not a great
solution. I'm not sure if there's a way to do it without hardcoding
the number of servers (I don't think there
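In a decompiled crush map, a rule along those lines might look roughly like this (rule name, id, and the chooseleaf-tries tunable are my placeholders, not from the thread):

```
rule ec52 {
    id 2
    type erasure
    step set_chooseleaf_tries 5
    step take default
    step chooseleaf indep 5 type host
    step emit
    step take default
    step chooseleaf indep 2 type host
    step emit
}
```

The hardcoded 5 is what has to be bumped whenever a server is added, which is the drawback mentioned above.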
Hello
I have a 5-server cluster and I am wondering if it's possible to create
a pool that uses a k=5 m=2 erasure code. In my experiments, I ended up
with pools whose PGs are stuck in the creating+incomplete state even
when I created the erasure code profile with --crush-failure-domain=osd.
Assuming that