mxm commented on code in PR #710:
URL: https://github.com/apache/flink-kubernetes-operator/pull/710#discussion_r1391538017


##########
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/autoscaler/ConfigMapStore.java:
##########
@@ -67,19 +67,14 @@ protected Optional<String> getSerializedState(
     }
 
     protected void removeSerializedState(KubernetesJobAutoScalerContext jobContext, String key) {
-        getConfigMap(jobContext)
-                .ifPresentOrElse(
-                        configMap -> configMap.getData().remove(key),
-                        () -> {
-                            throw new IllegalStateException(
-                                    "The configMap isn't created, so the remove is unavailable.");
-                        });
+        getConfigMap(jobContext).ifPresent(configMap -> configMap.getData().remove(key));
     }
 
     public void flush(KubernetesJobAutoScalerContext jobContext) {
         Optional<ConfigMap> configMapOpt = cache.get(jobContext.getJobKey());
         if (configMapOpt == null || configMapOpt.isEmpty()) {
             LOG.debug("The configMap isn't updated, so skip the flush.");
+            // Do not flush if there are no updates.

Review Comment:
   Thanks for adding the tests. You are right that the retrieval will always go to Kubernetes. The current implementation has some bugs in the bookkeeping that is supposed to minimize the number of API calls, so it makes more requests than necessary.
   
   I've rewritten the ConfigMapStore to minimize the number of times we go to Kubernetes. We now only retrieve once, on lookup. If the ConfigMap does not exist, we only create it once we flush back any changes.
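
   To illustrate, here is a rough, simplified sketch of the bookkeeping I have in mind, not the actual code in this PR: cache the result of the single Kubernetes lookup (including the "not found" case) and only create or update the ConfigMap on flush. The class name, the plain `String` job key, and the namespace/name parameters are placeholders.
   
   ```java
   import io.fabric8.kubernetes.api.model.ConfigMap;
   import io.fabric8.kubernetes.client.KubernetesClient;

   import java.util.Map;
   import java.util.Optional;
   import java.util.concurrent.ConcurrentHashMap;

   /** Sketch only: simplified ConfigMapStore bookkeeping, not the PR implementation. */
   class ConfigMapStoreSketch {

       private final KubernetesClient kubernetesClient;

       // Optional.empty() records that Kubernetes was already queried and no
       // ConfigMap exists yet, so the lookup is never repeated.
       private final Map<String, Optional<ConfigMap>> cache = new ConcurrentHashMap<>();

       ConfigMapStoreSketch(KubernetesClient kubernetesClient) {
           this.kubernetesClient = kubernetesClient;
       }

       Optional<ConfigMap> getConfigMap(String jobKey, String namespace, String name) {
           // computeIfAbsent guarantees a single API lookup per job key.
           return cache.computeIfAbsent(
                   jobKey,
                   k ->
                           Optional.ofNullable(
                                   kubernetesClient
                                           .configMaps()
                                           .inNamespace(namespace)
                                           .withName(name)
                                           .get()));
       }

       void flush(String jobKey, ConfigMap configMap) {
           // The ConfigMap is only created (or updated) here, after changes were
           // actually made, instead of eagerly on the first read.
           kubernetesClient.resource(configMap).createOrReplace();
           cache.put(jobKey, Optional.of(configMap));
       }
   }
   ```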


