TisonKun commented on a change in pull request #7880: [FLINK-11336][zk] Delete ZNodes when ZooKeeperHaServices#closeAndCleanupAllData
URL: https://github.com/apache/flink/pull/7880#discussion_r262008151
 
 

 ##########
 File path: flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/zookeeper/ZooKeeperHaServices.java
 ##########
 @@ -225,6 +227,12 @@ public void closeAndCleanupAllData() throws Exception {
                        exception = t;
                }
 
+               try {
+                       cleanupZooKeeperPaths();
 
 Review comment:
   You're right that without container mode we need `tryDeleteEmptyParentZNodes` to delete `/flink/clusterid`.
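   For illustration only, a rough sketch of what walking up and deleting empty parent ZNodes could look like (the helper name and logic here are my own assumption, not the actual PR code, and the race between the children check and the delete is ignored):

```java
import java.util.List;

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.utils.ZKPaths;

class EmptyParentCleanupSketch {

    /**
     * Walks upwards from the given path and deletes every parent ZNode that has
     * become empty, e.g. removing "/flink/clusterid" once its children are gone.
     * Illustrative only; concurrent creation of children is not handled here.
     */
    static void tryDeleteEmptyParentZNodes(CuratorFramework client, String path) throws Exception {
        String current = path;
        while (current != null && !current.equals("/")) {
            List<String> children = client.getChildren().forPath(current);
            if (!children.isEmpty()) {
                return; // the parent is still used by another component or cluster
            }
            client.delete().forPath(current);
            // move up to the parent path using Curator's path utilities
            current = ZKPaths.getPathAndNode(current).getPath();
        }
    }
}
```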
   
   However, if we are not going to upgrade to ZooKeeper 3.5.x in the near future, it would be better to parse the path with `ZKPaths#getPathAndNode` instead of re-implementing it. Besides, the value returned by `CuratorFramework#getNamespace` never starts with a slash, so we can either prepend one directly or use `ZKPaths#makePath(client.getNamespace(), "")` to make it transparent.
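   A minimal sketch of those two Curator utilities (the example paths and the `client` variable are placeholders, not taken from the PR):

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.utils.ZKPaths;

class ZkPathsSketch {

    /** Resolves the cluster's root path from the client's namespace. */
    static String clusterRootPath(CuratorFramework client) {
        // getNamespace() returns e.g. "flink/clusterid" without a leading slash,
        // so let ZKPaths#makePath normalize it into "/flink/clusterid".
        return ZKPaths.makePath(client.getNamespace(), "");
    }

    /** Splits a ZNode path into parent path and node name instead of re-implementing the parsing. */
    static void splitExample() {
        ZKPaths.PathAndNode pathAndNode = ZKPaths.getPathAndNode("/flink/clusterid/leader");
        String parent = pathAndNode.getPath(); // "/flink/clusterid"
        String node = pathAndNode.getNode();   // "leader"
        System.out.println(parent + " -> " + node);
    }
}
```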
   
   I'm more curious how we ensure that shutting down a standby cluster doesn't accidentally delete all the znodes of the running cluster. Is that an impossible case?

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
