The way LRC works is that it creates an additional locality parity chunk
for every group of l chunks (i.e. every l OSDs).
So with k=m=l=2, you will have 2 data chunks, 2 parity chunks and 2
locality parity chunks, 6 chunks in total.
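
To make the arithmetic explicit (assuming the simple LRC plugin layout,
where one locality chunk is added per group of l chunks):

  k + m + (k + m) / l  =  2 + 2 + (2 + 2) / 2  =  6 chunks per object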

Your ruleset-failure-domain is set to "host", as well as your
ruleset-locality, so you will need 6 hosts in order to create the
placement groups. With fewer hosts than that, CRUSH cannot map the PGs,
which is why they stay stuck in the creating state.
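
If you are unsure how many hosts CRUSH currently sees, a quick check is
the standard CRUSH hierarchy listing (nothing profile-specific here):

  # list the CRUSH tree (root / host / osd buckets) and count the hosts
  ceph osd tree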

You can edit your EC profile / CRUSH map to set ruleset-failure-domain
and ruleset-locality to "osd".
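
For example, a rough sketch of how that could look; the profile name
lrctest2 and the pool name lrcpool are just placeholders, and this
creates a new profile rather than modifying the one already in use:

  # same k/m/l as before, but with osd-level failure domain and locality
  ceph osd erasure-code-profile set lrctest2 \
      plugin=lrc k=2 m=2 l=2 \
      ruleset-failure-domain=osd ruleset-locality=osd

  # verify the profile, then create an EC pool that uses it
  ceph osd erasure-code-profile get lrctest2
  ceph osd pool create lrcpool 128 128 erasure lrctest2

With ruleset-failure-domain=osd and ruleset-locality=osd the 6 chunks
only need 6 distinct OSDs rather than 6 distinct hosts, so your 6-OSD
cluster can map the PGs. The trade-off is that the pool may no longer
survive the loss of a whole host, since several chunks can land on the
same host.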

Adrien

On Thu, Feb 25, 2016 at 6:51 AM, Sharath Gururaj <sharat...@flipkart.com>
wrote:

> Try using more OSDs.
> I was encountering this scenario when my OSDs were equal to k+m.
> The errors went away when I used k+m+2.
> So in your case, try with 8 or 10 OSDs.
>
> On Thu, Feb 25, 2016 at 11:18 AM, Daleep Singh Bais <daleepb...@gmail.com>
> wrote:
>
>> Hi All,
>>
>> Any help in this regard will be appreciated.
>>
>> Thanks..
>> Daleep Singh Bais
>>
>>
>> -------- Forwarded Message --------
>> Subject: Erasure code Plugins
>> Date: Fri, 19 Feb 2016 12:13:36 +0530
>> From: Daleep Singh Bais <daleepb...@gmail.com>
>> To: ceph-users <ceph-us...@ceph.com>
>>
>> Hi All,
>>
>> I am experimenting with erasure profiles and would like to understand
>> more about them. I created an LRC profile based on
>> http://docs.ceph.com/docs/master/rados/operations/erasure-code-lrc/
>>
>> The LRC profile I created is:
>>
>> ceph osd erasure-code-profile get lrctest1
>> k=2
>> l=2
>> m=2
>> plugin=lrc
>> ruleset-failure-domain=host
>> ruleset-locality=host
>> ruleset-root=default
>>
>> However, when I create a pool based on this profile, I see a health
>> warning in ceph -w (128 pgs stuck inactive and 128 pgs stuck unclean).
>> This is the first pool in the cluster.
>>
>> As I understand it, m is the parity bit and l will create an additional
>> parity bit for the data bit k. Please correct me if I am wrong.
>>
>> Below is the output of ceph -w:
>>
>> health HEALTH_WARN
>>             128 pgs stuck inactive
>>             128 pgs stuck unclean
>>      monmap e7: 1 mons at {node1=192.168.1.111:6789/0}
>>             election epoch 101, quorum 0 node1
>>      osdmap e928: 6 osds: 6 up, 6 in
>>             flags sortbitwise
>>       pgmap v54114: 128 pgs, 1 pools, 0 bytes data, 0 objects
>>             10182 MB used, 5567 GB / 5589 GB avail
>>                  128 creating
>>
>>
>> Any help or guidance in this regard is highly appreciated.
>>
>> Thanks,
>>
>> Daleep Singh Bais
>>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
