This can be changed to a failure domain of OSD, in which case it could
satisfy the criteria. The problem with a failure domain of OSD is that
all of the chunks for a given object could end up on a single host, and
you could then lose access to your data after restarting that one host.
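
For what it's worth, a profile with an OSD failure domain could be created
along these lines (the profile and pool names here are just examples; on
Luminous the option is crush-failure-domain, older releases call it
ruleset-failure-domain):

  ceph osd erasure-code-profile set ec-k10-m4-osd k=10 m=4 crush-failure-domain=osd
  ceph osd pool create ecpool 128 128 erasure ec-k10-m4-osd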

On Mon, Oct 23, 2017 at 3:23 PM LOPEZ Jean-Charles <jelo...@redhat.com>
wrote:

> Hi,
>
> the default failure domain, if not specified on the CLI when you create
> your EC profile, is host. So by default you need 14 OSDs spread across 14
> different nodes, and you only have 8 nodes.
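>
> As a rough check (the profile name below is just an example), you can see
> which failure domain a profile uses and count the host buckets CRUSH knows
> about with:
>
>   ceph osd erasure-code-profile get ec-k10-m4
>   ceph osd tree | grep -c host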
>
> Regards
> JC
>
> On 23 Oct 2017, at 21:13, Karun Josy <karunjo...@gmail.com> wrote:
>
> Thank you for the reply.
>
> There are 8 OSD nodes with 23 OSDs in total. (However, they are not
> distributed equally on all nodes)
>
> So it satisfies that criteria, right?
>
>
>
> Karun Josy
>
> On Tue, Oct 24, 2017 at 12:30 AM, LOPEZ Jean-Charles <jelo...@redhat.com>
> wrote:
>
>> Hi,
>>
>> yes, you need at least as many OSDs as k+m. In your example you need
>> a minimum of 14 OSDs for each PG to become active+clean.
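>>
>> With k=10 and m=4, each object is split into 10 data chunks plus 4 coding
>> chunks, so every PG needs 14 distinct OSDs in its acting set, and with a
>> host failure domain that also means 14 distinct hosts. As a sketch, you
>> can look at the acting set of one of the stuck PGs with (the PG id 1.0 is
>> only a placeholder):
>>
>>   ceph pg map 1.0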
>>
>> Regards
>> JC
>>
>> On 23 Oct 2017, at 20:29, Karun Josy <karunjo...@gmail.com> wrote:
>>
>> Hi,
>>
>> While creating a pool with erasure code profile k=10, m=4, I get PG
>> status as
>> "200 creating+incomplete"
>>
>> While creating a pool with the profile k=5, m=3, it works fine.
>>
>> The cluster has 8 OSD nodes with 23 disks (OSDs) in total.
>>
>> Are there any requirements for using the first profile?
>>
>> Karun
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
