somandal commented on code in PR #15175:
URL: https://github.com/apache/pinot/pull/15175#discussion_r1994318189
##########
pinot-controller/src/main/java/org/apache/pinot/controller/helix/core/rebalance/DefaultRebalancePreChecker.java:
##########
@@ -138,4 +157,80 @@ private boolean checkIsMinimizeDataMovement(String
rebalanceJobId, String tableN
return false;
}
}
+
+  private String checkDiskUtilization(String tableNameWithType, Map<String, Map<String, String>> currentAssignment,
+      Map<String, Map<String, String>> targetAssignment,
+      TableSizeReader.TableSubTypeSizeDetails tableSubTypeSizeDetails, double threshold) {
+ boolean isDiskUtilSafe = true;
+    StringBuilder message = new StringBuilder("UNSAFE. Servers with unsafe disk util footprint: ");
+ String sep = "";
+ Map<String, Set<String>> existingServersToSegmentMap = new HashMap<>();
+ Map<String, Set<String>> newServersToSegmentMap = new HashMap<>();
+
+    for (Map.Entry<String, Map<String, String>> entrySet : currentAssignment.entrySet()) {
+      for (String segmentKey : entrySet.getValue().keySet()) {
+        existingServersToSegmentMap.computeIfAbsent(segmentKey, k -> new HashSet<>()).add(entrySet.getKey());
+      }
+ }
+ }
+
+    for (Map.Entry<String, Map<String, String>> entrySet : targetAssignment.entrySet()) {
+      for (String segmentKey : entrySet.getValue().keySet()) {
+        newServersToSegmentMap.computeIfAbsent(segmentKey, k -> new HashSet<>()).add(entrySet.getKey());
+      }
+ }
+ }
+
+    long avgSegmentSize = getAverageSegmentSize(tableSubTypeSizeDetails, currentAssignment);
+
+    for (Map.Entry<String, Set<String>> entry : newServersToSegmentMap.entrySet()) {
+ String server = entry.getKey();
+ DiskUsageInfo diskUsage = getDiskUsageInfoOfInstance(server);
+
+ if (diskUsage.getTotalSpaceBytes() < 0) {
+        return "Disk usage info not enabled. Try to set controller.enable.resource.utilization.check=true";
+ }
+
+ Set<String> segmentSet = entry.getValue();
+
+ Set<String> newSegmentSet = new HashSet<>(segmentSet);
+ Set<String> existingSegmentSet = new HashSet<>();
+ Set<String> intersection = new HashSet<>();
+ if (existingServersToSegmentMap.containsKey(server)) {
+        Set<String> segmentSetForServer = existingServersToSegmentMap.get(server);
+ existingSegmentSet.addAll(segmentSetForServer);
+ intersection.addAll(segmentSetForServer);
+ intersection.retainAll(newSegmentSet);
+ }
+ newSegmentSet.removeAll(intersection);
+ Set<String> removedSegmentSet = new HashSet<>(existingSegmentSet);
+ removedSegmentSet.removeAll(intersection);
+
+ long diskUtilizationGain = newSegmentSet.size() * avgSegmentSize;
+ long diskUtilizationLoss = removedSegmentSet.size() * avgSegmentSize;
Review Comment:
I think the final view of what the assignment will look like once rebalance
completes might be useful too (with a lower threshold, of course), as it gives
the user a more realistic picture of the disk usage they'll actually see. It
would help differentiate between:
- The rebalance operation itself will run out of disk, but after deletions it
will look fine -> a temporary disk/host increase is needed
- Overall there isn't enough space to store the segments even after deletions
-> a permanent disk/host increase is needed
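
The distinction above can be sketched as follows. This is an illustrative,
standalone example, not code from the PR: the helper names (`peakBytes`,
`finalBytes`) and the thresholds are assumptions, and it reuses the PR's idea
of estimating per-segment cost with an average segment size.

```java
// Sketch: report both the transient peak during rebalance (new segments added,
// old ones not yet deleted) and the final footprint after cleanup.
// All names and thresholds here are hypothetical, not from the PR.
public class DiskUtilSketch {

  // Peak usage while rebalancing: current usage plus all newly added segments,
  // before any replaced segments have been deleted from this server.
  static long peakBytes(long usedBytes, int addedSegments, long avgSegmentSize) {
    return usedBytes + (long) addedSegments * avgSegmentSize;
  }

  // Final usage after rebalance: peak minus the segments removed from this server.
  static long finalBytes(long usedBytes, int addedSegments, int removedSegments,
      long avgSegmentSize) {
    return peakBytes(usedBytes, addedSegments, avgSegmentSize)
        - (long) removedSegments * avgSegmentSize;
  }

  public static void main(String[] args) {
    long avgSegmentSize = 100L;
    long usedBytes = 500L;
    long totalBytes = 1000L;
    // A server gains 4 segments and loses 3: peak = 900 bytes, final = 600 bytes.
    long peak = peakBytes(usedBytes, 4, avgSegmentSize);
    long fin = finalBytes(usedBytes, 4, 3, avgSegmentSize);
    // Peak crosses a (hypothetical) 0.85 threshold, but the final footprint
    // stays under a stricter 0.70 threshold: only a temporary increase is needed.
    System.out.println("peak=" + peak + " final=" + fin);
    System.out.println("peakUnsafe=" + ((double) peak / totalBytes > 0.85));
    System.out.println("finalUnsafe=" + ((double) fin / totalBytes > 0.70));
  }
}
```

With both numbers, the pre-check could flag "temporarily unsafe" (peak over
threshold, final under) separately from "permanently unsafe" (final also over),
matching the two cases described above.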
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]