mxm commented on code in PR #762:
URL: https://github.com/apache/flink-kubernetes-operator/pull/762#discussion_r1478450912


##########
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/controller/FlinkResourceContext.java:
##########
@@ -65,7 +65,7 @@ public KubernetesJobAutoScalerContext getJobAutoScalerContext() {
     }
 
     private KubernetesJobAutoScalerContext createJobAutoScalerContext() {
-        Configuration conf = new Configuration(getObserveConfig());
+        Configuration conf = new Configuration(getDeployConfig(resource.getSpec()));

Review Comment:
   @gyfora @1996fanrui I changed this logic because I believe it is the right config to return in the context of the autoscaler. We want to run autoscaling based on the current user-provided configuration, not based on what is currently deployed. The former is the user's intent; the latter is the result of a past reconciliation decision. This also ensures that overrides are cleared immediately, not only after the full reconciliation has completed.
   
   For the tuning, the current logic only works correctly if we operate on the "deploy config", because that allows us to correctly determine the original memory configuration, which is required for dynamically increasing/decreasing total memory and managed memory without exceeding the original user-provided limits.
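
   To illustrate the distinction (a minimal sketch with illustrative names, not the operator's actual API): the "deploy config" is rebuilt from the user's spec, while the "observe config" reflects what was last deployed, including any overrides merged in by a previous reconciliation. Tuning against the observe config would mistake a previously applied override for the user's original limit.

```java
import java.util.HashMap;
import java.util.Map;

public class ConfigSketch {

    // User intent: a fresh copy of the user-provided spec.
    static Map<String, String> deployConfig(Map<String, String> spec) {
        return new HashMap<>(spec);
    }

    // Currently deployed state: the spec plus overrides applied by a
    // past reconciliation decision.
    static Map<String, String> observeConfig(Map<String, String> spec,
                                             Map<String, String> appliedOverrides) {
        Map<String, String> conf = new HashMap<>(spec);
        conf.putAll(appliedOverrides);
        return conf;
    }

    public static void main(String[] args) {
        Map<String, String> spec =
                Map.of("taskmanager.memory.process.size", "4g");
        Map<String, String> overrides =
                Map.of("taskmanager.memory.process.size", "6g");

        // The deploy config preserves the user's 4g baseline; the observe
        // config would report the tuned 6g as if it were the original limit.
        System.out.println(
                deployConfig(spec).get("taskmanager.memory.process.size"));      // 4g
        System.out.println(
                observeConfig(spec, overrides)
                        .get("taskmanager.memory.process.size"));                // 6g
    }
}
```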



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
