Thank you for the detailed explanation!

I have one more question:

This is the total space available in the cluster:

TOTAL: 23490G
USED:  10170G
AVAIL: 13320G


But the ecpool shows MAX AVAIL as just 3 TB.
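
For reference, a rough back-of-the-envelope check (only an approximation: as far
as I understand, the MAX AVAIL figure in 'ceph df' is derived from the fullest
OSD under the pool's CRUSH rule and the full ratio, not from the summed free
space; the 5/8 factor comes from the k=5, m=3 profile quoted further down):

# Naive expectation if every OSD were evenly filled:
#   usable ~ 13320G * 5 / 8 ~ 8325G   (raw avail * k / (k + m))
# MAX AVAIL shrinks toward the fullest OSD, so check how evenly data is spread:
$ ceph osd df tree
$ ceph df detail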



Karun Josy

On Tue, Dec 5, 2017 at 1:06 AM, David Turner <drakonst...@gmail.com> wrote:

> No, I would only add disks to 1 failure domain at a time.  So in your
> situation where you're adding 2 more disks to each node, I would recommend
> adding the 2 disks into 1 node at a time.  Your failure domain is the
> crush-failure-domain=host.  So you can lose a host and only lose 1 copy of
> the data.  If all of your pools are using the k=5 m=3 profile, then I would
> say it's fine to add the disks into 2 nodes at a time.  If you have any
> replica pools for RGW metadata or anything, then I would stick with the 1
> host at a time.
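
As a quick sanity check on the failure-domain math (my own summary, assuming
profile5by3 is the only EC profile in use on these hosts):

# k=5, m=3, crush-failure-domain=host:
#   each PG stores 8 chunks (5 data + 3 coding), one per host,
#   so data stays available as long as at most 3 hosts are down at once.
# Which rule and profile each pool actually uses can be confirmed with:
$ ceph osd pool ls detail
$ ceph osd crush rule dump
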
>
> On Mon, Dec 4, 2017 at 2:29 PM Karun Josy <karunjo...@gmail.com> wrote:
>
>> Thanks for your reply!
>>
>> I am using an erasure-coded profile with k=5, m=3 settings:
>>
>> $ ceph osd erasure-code-profile get profile5by3
>> crush-device-class=
>> crush-failure-domain=host
>> crush-root=default
>> jerasure-per-chunk-alignment=false
>> k=5
>> m=3
>> plugin=jerasure
>> technique=reed_sol_van
>> w=8
>>
>>
>> The cluster has 8 nodes with 3 disks each. We are planning to add 2 more
>> disks to each node.
>>
>> If I understand correctly, I can add 3 disks at once, right, assuming up to
>> 3 disks can fail at a time as per the EC profile (m=3)?
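
A rough way to reason about this (my assumption: the pool above is the main
pool on these hosts): the m=3 budget is counted in failure domains, i.e. hosts,
not individual disks.

$ ceph osd tree   # shows which host (failure domain) each OSD sits under
# 3 new disks in 1 host  -> a DoA disk risks chunks in only 1 failure domain
# 3 new disks in 3 hosts -> DoA disks could hit 3 failure domains at once
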
>>
>> Karun Josy
>>
>> On Tue, Dec 5, 2017 at 12:06 AM, David Turner <drakonst...@gmail.com>
>> wrote:
>>
>>> Depending on how well you burn-in/test your new disks, I like to only
>>> add 1 failure domain of disks at a time in case you have bad disks that
>>> you're adding.  If you are confident that your disks aren't likely to fail
>>> during the backfilling, then you can go with more.  I just added 8 servers
>>> (16 OSDs each) to a cluster with 15 servers (16 OSDs each) all at the same
>>> time, but we spent 2 weeks testing the hardware before adding the new nodes
>>> to the cluster.
>>>
>>> If you add 1 failure domain at a time, then any DoA disks in the new
>>> nodes will only be able to fail with 1 copy of your data instead of across
>>> multiple nodes.
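
For what it's worth, a sketch of one common way to limit the impact while new
OSDs backfill (the values are illustrative, not a recommendation tuned for this
specific cluster):

# Keep backfill gentle so client I/O is not starved:
$ ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
# Optionally hold off data movement until all new OSDs in one host are created:
$ ceph osd set norebalance
# ... create the new OSDs on that host ...
$ ceph osd unset norebalance
# Then watch recovery:
$ ceph -s
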
>>>
>>> On Mon, Dec 4, 2017 at 12:54 PM Karun Josy <karunjo...@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> Is it recommended to add OSD disks one by one, or can I add a couple of
>>>> disks at a time?
>>>>
>>>> Current cluster size is about 4 TB.
>>>>
>>>>
>>>>
>>>> Karun
>>>>
>>>
>>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
