If you use an OSD failure domain and a node goes down, you can lose your
data and the cluster won't be able to keep working.

If you restart the OSDs it might come back, but you could just as easily
lose your data, because the cluster can't rebuild itself.
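
Just for reference, this is the kind of profile that gives you an OSD
failure domain (profile and pool names here are only examples, and the
option is crush-failure-domain on Luminous, ruleset-failure-domain on
older releases):

    ceph osd erasure-code-profile set ec104 k=10 m=4 crush-failure-domain=osd
    ceph osd erasure-code-profile get ec104
    ceph osd pool create ecpool 256 256 erasure ec104

With crush-failure-domain=osd nothing stops several chunks of the same PG
from landing on OSDs of the same host.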

You can try to work out where the CRUSH rule is going to place your data,
but I wouldn't take that much risk.
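
If you want to check anyway, something like this shows where a rule would
put the chunks (the rule id, pool and object names are just placeholders):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    crushtool -i crushmap.bin --test --rule 1 --num-rep 10 --show-mappings
    ceph osd map ecpool someobject

But the mappings change whenever the cluster topology changes, so I
wouldn't rely on it.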

If you have 8 nodes, you could instead use k=8 and m=2 and split the
chunks across the nodes, so you would have 6 nodes with 1 chunk and 2
nodes with 2 chunks. That way, even if you are unlucky and lose one of
the 2-chunk nodes, you can still rebuild the data.
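
CRUSH won't let you pin exactly that 6x1 + 2x2 layout, but you can get
the guarantee that matters (no more than 2 chunks of any PG on a single
host) with a k=8/m=2 profile and a hand-written rule roughly like the one
below. The names, ids and the 5 hosts x 2 OSDs split are only an example,
and every host the rule picks needs at least 2 OSDs:

    ceph osd erasure-code-profile set ec82 k=8 m=2
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # added to crushmap.txt:
    rule ec82_2per_host {
        id 3
        type erasure
        min_size 10
        max_size 10
        step take default
        step choose indep 5 type host       # pick 5 different hosts
        step chooseleaf indep 2 type osd    # take 2 OSDs on each of them
        step emit
    }

    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new
    ceph osd pool create ecpool 256 256 erasure ec82 ec82_2per_host

With that, losing any single node costs you at most 2 chunks, which m=2
can still rebuild.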


On 23/10/2017 at 21:53, David Turner wrote:
> This can be changed to a failure domain of OSD in which case it could
> satisfy the criteria.  The problem with a failure domain of OSD, is
> that all of your data could reside on a single host and you could lose
> access to your data after restarting a single host.
>
> On Mon, Oct 23, 2017 at 3:23 PM LOPEZ Jean-Charles <jelo...@redhat.com
> <mailto:jelo...@redhat.com>> wrote:
>
>     Hi,
>
>     the default failure domain if not specified on the CLI at the
>     moment you create your EC profile is set to HOST. So you need 14
>     OSDs spread across 14 different nodes by default. And you only
>     have 8 different nodes.
>
>     Regards
>     JC
>
>>     On 23 Oct 2017, at 21:13, Karun Josy <karunjo...@gmail.com
>>     <mailto:karunjo...@gmail.com>> wrote:
>>
>>     Thank you for the reply.
>>
>>     There are 8 OSD nodes with 23 OSDs in total. (However, they are
>>     not distributed equally on all nodes)
>>
>>     So it satisfies that criteria, right?
>>
>>
>>
>>     Karun Josy
>>
>>     On Tue, Oct 24, 2017 at 12:30 AM, LOPEZ Jean-Charles
>>     <jelo...@redhat.com <mailto:jelo...@redhat.com>> wrote:
>>
>>         Hi,
>>
>>         yes, you need at least as many OSDs as k+m. In your
>>         example you need a minimum of 14 OSDs for each PG to become
>>         active+clean.
>>
>>         Regards
>>         JC
>>
>>>         On 23 Oct 2017, at 20:29, Karun Josy <karunjo...@gmail.com
>>>         <mailto:karunjo...@gmail.com>> wrote:
>>>
>>>         Hi,
>>>
>>>         While creating a pool with erasure code profile k=10, m=4, I
>>>         get PG status as
>>>         "200 creating+incomplete"
>>>
>>>         While creating pool with profile k=5, m=3 it works fine.
>>>
>>>         Cluster has 8 OSDs with total 23 disks.
>>>
>>>         Are there any requirements for setting the first profile?
>>>
>>>         Karun 
>>
>>
>

-- 
------------------------------------------------------------------------
*Jorge Pinilla López*
jorp...@unizar.es
Computer engineering student
Intern in the systems area (SICUZ)
Universidad de Zaragoza
PGP-KeyID: A34331932EBC715A
<http://pgp.rediris.es:11371/pks/lookup?op=get&search=0xA34331932EBC715A>
------------------------------------------------------------------------
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
