We have a parameter in the Reaper yaml file called
repairManagerSchedulingIntervalSeconds; the default is 10 seconds. I tested
with 8, 6, and 5 seconds and found 5 seconds optimal for my environment. You
can go lower, but it will have cascading effects on CPU and memory
consumption.
So test well.
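
In case it helps, this is roughly where that setting lives in the Reaper yaml
file (cassandra-reaper.yaml in our setup); only the one parameter I changed is
shown here:

    # cassandra-reaper.yaml (excerpt)
    repairManagerSchedulingIntervalSeconds: 5   # default is 10; 5 was the sweet spot for us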

On Monday, May 21, 2018, Surbhi Gupta <surbhi.gupt...@gmail.com> wrote:

> Thanks a lot for your inputs.
> Abdul, how did you tune Reaper?
>
> On Sun, May 20, 2018 at 10:10 AM Jonathan Haddad <j...@jonhaddad.com>
> wrote:
>
>> FWIW the largest deployment I know about is a single reaper instance
>> managing 50 clusters and over 2000 nodes.
>>
>> There might be bigger, but I either don’t know about it or can’t
>> remember.
>>
>> On Sun, May 20, 2018 at 10:04 AM Abdul Patel <abd786...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I recently tested Reaper and it actually helped us a lot. Even with our
>>> small 18-node footprint, a Reaper run takes close to 6 hrs (it initially
>>> took 13 hrs; I was able to tune that down by 50%). But it really depends
>>> on the number of nodes. For example, if you have 4 nodes it runs on
>>> 4 * 256 (vnodes) = 1024 segments, so for your env. it will be 256 * 144,
>>> close to 36k segments (36,864).
>>> Better to test on a POC box how much time it takes and then proceed
>>> further. I have tested in 1 DC only so far; we could actually have a
>>> separate Reaper instance handling each DC, but I haven't tested that yet.
>>>
>>>
>>> On Sunday, May 20, 2018, Surbhi Gupta <surbhi.gupt...@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> We have a cluster with 144 nodes (3 datacenters) with 256 vnodes.
>>>> When we tried to start repairs from OpsCenter, it showed 1.9 million
>>>> ranges to repair.
>>>> And even after doing compaction and setting stream throughput to 0,
>>>> OpsCenter is not able to help us finish the repair in a 9-day timeframe.
>>>>
>>>> What are your thoughts on Reaper?
>>>> Do you think Reaper might be able to help us in this scenario?
>>>>
>>>> Thanks
>>>> Surbhi
>>>>
>>>>
>> --
>> Jon Haddad
>> http://www.rustyrazorblade.com
>> twitter: rustyrazorblade
>>
>>
>>
