jtuglu-netflix commented on PR #17903:
URL: https://github.com/apache/druid/pull/17903#issuecomment-2848518480

   > Do you feel that the current scale up logic adds too many tasks or too few?
   
   Depending on the scenario, it could be either. What I was aiming for was to
minimize the "saw-toothing" and late reactions to spikes (e.g., when the minimum
time between scaling actions is long, ~10+ minutes).
   
   > In this patch, what is the formula to compute the "proportional" task 
count for a given value of lag?
   
   I take the average (call it A) of the samples of the desired aggregate
function across partitions and compare it to the scaling threshold T (up or
down). I then calculate newTaskCount = ceil(currentTaskCount * min(scaling step,
A/T)) and use the "ratio" config params to skip the scaling action if
newTaskCount falls within some % of currentTaskCount.
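   For illustration, the math above could be sketched roughly like this (a
minimal sketch only; the class, method, and parameter names here are
hypothetical and not Druid's actual autoscaler API):

   ```java
   // Hypothetical sketch of the proportional scaling math described above.
   public class ProportionalScaleSketch
   {
     /**
      * @param currentTaskCount current number of ingestion tasks
      * @param avgMetric        A: average of the sampled aggregate (e.g. lag) across partitions
      * @param threshold        T: the scale-up (or scale-down) threshold being compared against
      * @param scaleStep        cap on how far a single scaling action may move the count
      * @param skipRatio        skip the action if the change is within this fraction of current
      * @return the new task count, or currentTaskCount unchanged if the change is too small
      */
     static int computeNewTaskCount(
         int currentTaskCount,
         double avgMetric,
         double threshold,
         double scaleStep,
         double skipRatio
     )
     {
       // newTaskCount = ceil(currentTaskCount * min(scaleStep, A / T))
       double factor = Math.min(scaleStep, avgMetric / threshold);
       int newTaskCount = (int) Math.ceil(currentTaskCount * factor);

       // "Ratio" check: skip if the proposed count is within skipRatio of the current count.
       if (Math.abs(newTaskCount - currentTaskCount) <= skipRatio * currentTaskCount) {
         return currentTaskCount;
       }
       return newTaskCount;
     }

     public static void main(String[] args)
     {
       // A/T = 2.0, capped at scaleStep 1.5: ceil(4 * 1.5) = 6
       System.out.println(computeNewTaskCount(4, 200.0, 100.0, 1.5, 0.1));
       // A/T = 1.05: ceil(10 * 1.05) = 11, but |11 - 10| <= 0.1 * 10, so skip
       System.out.println(computeNewTaskCount(10, 105.0, 100.0, 1.5, 0.1));
     }
   }
   ```

   The intent of the cap is that a large spike in A/T can't more than
`scaleStep` the task count in one action, while the ratio check suppresses
tiny oscillations that would otherwise saw-tooth between adjacent counts.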


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

