Please ignore. I found the mistake.
On 19-10-2017 21:08, Josy wrote:
Hi,
I created a testprofile, but not able to create a pool using it
==
$ ceph osd erasure-code-profile get testprofile1
crush-device-class=
crush-failure-domain=host
crush-root=default
jerasure-per-chunk-alignment=false
k=10
m=4
plugin=jerasure
technique=reed_sol_van
w=8
$ ceph osd pool cr
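The command above is cut off in the archive. For reference, the general shape of the pool-creation command for an erasure-coded pool in Ceph at that time is shown below; the pool name and PG count here are placeholders, not values from the thread:

```shell
# Hypothetical example: "ecpool" and the PG count 32 are placeholders.
# Syntax: ceph osd pool create <pool-name> <pg_num> [<pgp_num>] erasure [<profile>]
ceph osd pool create ecpool 32 32 erasure testprofile1
```

This is a non-runnable CLI fragment (it needs a live cluster), included only to show where the profile name plugs in.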
Hi,
If you want to split your data into 10 pieces (stripes) and hold 4 extra parity pieces (so your cluster can handle the loss of any 4 OSDs), then you need a minimum of 14 OSDs to hold your data.
Denes.
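A quick sanity check of the arithmetic above (a minimal sketch; k and m are the values from testprofile1, and the one-OSD-per-host mapping matches the poster's one-disk-per-server setup with crush-failure-domain=host):

```python
# Each object is split into k data chunks plus m coding chunks.
# With crush-failure-domain=host, every chunk must land on a different
# host; with one OSD per host, that means one OSD per chunk.
k, m = 10, 4              # from testprofile1
min_osds = k + m          # one OSD (and host) per chunk
tolerated_failures = m    # any m chunks can be lost and data rebuilt
print(min_osds, tolerated_failures)  # 14 4
```

So 8 OSD servers with one disk each cannot place all 14 chunks, which is why the pool creation fails with this profile.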
On 10/19/2017 04:24 PM, Josy wrote:
Hi,
I would like to set up an erasure code profile with k=10 and m=4 settings.
Is there any minimum requirement of OSD nodes and OSDs to achieve this setting?
Can I create a pool with 8 OSD servers, with one disk each?
_______________________________________________
ceph-users mailing list