amogh-jahagirdar commented on code in PR #6090:
URL: https://github.com/apache/iceberg/pull/6090#discussion_r1012465971


##########
core/src/main/java/org/apache/iceberg/FileCleanupStrategy.java:
##########
@@ -79,4 +80,15 @@ protected void deleteFiles(Set<String> pathsToDelete, String fileType) {
             (file, thrown) -> LOG.warn("Delete failed for {} file: {}", fileType, file, thrown))
         .run(deleteFunc::accept);
   }
+
+  protected Set<String> expiredStatisticsFilesLocations(
+      TableMetadata beforeExpiration, Set<Long> expiredIds) {
+    Set<String> expiredStatisticsFilesLocations = Sets.newHashSet();
+    for (StatisticsFile statisticsFile : beforeExpiration.statisticsFiles()) {
+      if (expiredIds.contains(statisticsFile.snapshotId())) {
+        expiredStatisticsFilesLocations.add(statisticsFile.path());
+      }
+    }
+    return expiredStatisticsFilesLocations;
+  }

Review Comment:
   Curious, is it guaranteed that StatisticsFiles produced at a given snapshot cannot be reused in subsequent snapshots?
   
    
   The javadoc on `StatisticsFile#snapshotId()` only says the ID is that of "the Iceberg table's snapshot the statistics were computed from."
   For example, is it possible that a statistics file "file1" is computed at snapshot 1 but is still considered a valid, reachable statistics file at snapshot 2? In that case we wouldn't want to remove it.
   
   If that is possible, it seems we need to capture the set of statistics files that are still reachable after expiration, rather than removing every statistics file tied to an expired snapshot. Let me know if I misunderstood! A rough sketch of what I mean is below.
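   Something along these lines, purely as an illustration: the `afterExpiration` parameter is hypothetical (the current helper only takes the expired snapshot IDs), but it shows the reachability-based diff I have in mind.

   ```java
   // Sketch only; `afterExpiration` is a hypothetical parameter, not part of the current signature.
   protected Set<String> expiredStatisticsFilesLocations(
       TableMetadata beforeExpiration, TableMetadata afterExpiration) {
     // Statistics file locations still referenced by the metadata after expiration remain reachable.
     Set<String> reachableLocations = Sets.newHashSet();
     for (StatisticsFile statisticsFile : afterExpiration.statisticsFiles()) {
       reachableLocations.add(statisticsFile.path());
     }

     // Only locations that are no longer referenced after expiration are safe to delete.
     Set<String> expiredStatisticsFilesLocations = Sets.newHashSet();
     for (StatisticsFile statisticsFile : beforeExpiration.statisticsFiles()) {
       if (!reachableLocations.contains(statisticsFile.path())) {
         expiredStatisticsFilesLocations.add(statisticsFile.path());
       }
     }
     return expiredStatisticsFilesLocations;
   }
   ```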


