huyuanfeng2018 commented on code in PR #921:
URL: https://github.com/apache/flink-kubernetes-operator/pull/921#discussion_r1874712096
##########
flink-autoscaler/src/main/java/org/apache/flink/autoscaler/ScalingMetricEvaluator.java:
##########
@@ -296,8 +297,14 @@ protected static void computeProcessingRateThresholds(
upperUtilization = 1.0;
lowerUtilization = 0.0;
} else {
- upperUtilization = targetUtilization + utilizationBoundary;
- lowerUtilization = targetUtilization - utilizationBoundary;
+ upperUtilization =
+ conf.getOptional(TARGET_UTILIZATION_BOUNDARY)
+ .map(boundary -> targetUtilization + boundary)
+ .orElseGet(() -> conf.get(UTILIZATION_MAX));
+ lowerUtilization =
+ conf.getOptional(TARGET_UTILIZATION_BOUNDARY)
+ .map(boundary -> targetUtilization - boundary)
+ .orElseGet(() -> conf.get(UTILIZATION_MIN));
}
double scaleUpThreshold =
Review Comment:
Thanks for the review.
The last commit was not fully checked and contained errors in a few places;
these have been fixed.
> In `AutoScalerUtils.getTargetProcessingCapacity`, we adjust
`targetUtilization` to lie between 0.0 and 1.0, which is reasonable.
>
> After introducing `UTILIZATION_MAX` and `UTILIZATION_MIN`, it's better to
ensure:
>
> * 1.0 >= UTILIZATION_MAX >= UTILIZATION_TARGET
> * UTILIZATION_TARGET >= UTILIZATION_MIN >= 0.0
>
> And some tests should be added to check these constraints.
>
Added `DefaultValidator#validateNumberOrder` to verify that a group of
parameters follows the required ordering, and added test cases.
In addition, `validateNumber` can now also enforce a maximum value.
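
For illustration, here is a minimal standalone sketch of the kind of ordering check described above. The name `validateNumberOrder` comes from this PR, but the signature and the `OrderingValidation` wrapper class below are assumptions made for the example, not the operator's actual code:

```java
import java.util.Optional;

final class OrderingValidation {
    /**
     * Hypothetical sketch: verifies the given values are in non-increasing order,
     * e.g. 1.0 >= UTILIZATION_MAX >= UTILIZATION_TARGET >= UTILIZATION_MIN >= 0.0.
     * Returns an error message if the order is violated, empty otherwise.
     */
    static Optional<String> validateNumberOrder(String description, double... values) {
        for (int i = 1; i < values.length; i++) {
            if (values[i - 1] < values[i]) {
                return Optional.of(
                        String.format(
                                "%s: expected value at position %d (%f) >= value at position %d (%f)",
                                description, i - 1, values[i - 1], i, values[i]));
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        // Valid: 1.0 >= max (0.9) >= target (0.7) >= min (0.3) >= 0.0
        System.out.println(validateNumberOrder("utilization", 1.0, 0.9, 0.7, 0.3, 0.0));
        // Invalid: min (0.8) above target (0.7) -> error message
        System.out.println(validateNumberOrder("utilization", 1.0, 0.9, 0.7, 0.8, 0.0));
    }
}
```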
> Also, could we unify the logic in this method instead of
`AutoScalerUtils.getTargetProcessingCapacity`?

I think it is reasonable to keep `AutoScalerUtils.getTargetProcessingCapacity`
as it is in this PR, because it is used in many places.
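
For context, the diff above makes `TARGET_UTILIZATION_BOUNDARY` optional: when it is set, the bounds are derived as `target +/- boundary`; when it is unset, `UTILIZATION_MAX` / `UTILIZATION_MIN` are used directly. A self-contained sketch of that resolution logic, using plain `Optional` stand-ins rather than the operator's `Configuration` API:

```java
import java.util.Optional;

final class UtilizationFallbackDemo {
    // Hypothetical stand-ins for the config lookups in the diff.
    static double resolveUpper(Optional<Double> boundary, double target, double utilizationMax) {
        // If TARGET_UTILIZATION_BOUNDARY is set, derive the bound from it;
        // otherwise fall back to UTILIZATION_MAX.
        return boundary.map(b -> target + b).orElse(utilizationMax);
    }

    static double resolveLower(Optional<Double> boundary, double target, double utilizationMin) {
        return boundary.map(b -> target - b).orElse(utilizationMin);
    }

    public static void main(String[] args) {
        double target = 0.7;
        // Legacy path: boundary set -> bounds derived from target +/- boundary.
        System.out.println(resolveUpper(Optional.of(0.1), target, 0.9)); // 0.8
        System.out.println(resolveLower(Optional.of(0.1), target, 0.3)); // 0.6
        // New path: boundary unset -> UTILIZATION_MAX / UTILIZATION_MIN used directly.
        System.out.println(resolveUpper(Optional.empty(), target, 0.9)); // 0.9
        System.out.println(resolveLower(Optional.empty(), target, 0.3)); // 0.3
    }
}
```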