Hi all,

I recently set up a Ceph cluster with 10 nodes and 144 OSDs, and I am
using S3 on it with an erasure-coded pool (EC 3+2).
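
For context, I created the pool along these lines (the profile and
pool names here are just examples):

  # k=3 data chunks, m=2 coding chunks; failure domain = host
  ceph osd erasure-code-profile set ec32 k=3 m=2 crush-failure-domain=host
  ceph osd pool create ecpool erasure ec32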

I have a question: how many OSD nodes can fail with erasure code 3+2
while the cluster keeps working normally (read and write)? And could I
choose a better erasure code, such as EC 7+3 or 8+2?
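
My rough understanding, which I would like to confirm: with
crush-failure-domain=host each of the k+m shards lands on a different
host, so the pool survives up to m host failures without data loss,
and k+m must not exceed the number of hosts (10 in my case):

  # m = shards a PG can lose before data loss:
  #   EC 3+2 -> 2 hosts,  EC 8+2 -> 2 hosts,  EC 7+3 -> 3 hosts
  ceph osd erasure-code-profile get ec32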

As I understand it, the erasure code algorithm only ensures there is
no data loss; it does not guarantee that the cluster keeps operating
normally without blocking IO when OSD nodes are down. Is that right?
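
If it helps frame the question, I suspect this is governed by
min_size, which I believe defaults to k+1 for EC pools:

  # EC 3+2: with 2 hosts down, 3 of 5 shards remain, which is below
  # min_size = k+1 = 4, so IO to those PGs blocks until recovery,
  # even though no data has been lost.
  ceph osd pool get ecpool min_size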

Thanks to the community.