[ https://issues.apache.org/jira/browse/HIVE-25883?focusedWorklogId=715670&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-715670 ]

ASF GitHub Bot logged work on HIVE-25883:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 26/Jan/22 13:22
            Start Date: 26/Jan/22 13:22
    Worklog Time Spent: 10m 
      Work Description: klcopp commented on a change in pull request #2971:
URL: https://github.com/apache/hive/pull/2971#discussion_r792628396



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java
##########
@@ -434,8 +437,18 @@ private boolean removeFiles(String location, ValidWriteIdList writeIdList, Compa
     return success;
   }
 
-  private boolean hasDataBelowWatermark(FileSystem fs, Path path, long highWatermark) throws IOException {
-    FileStatus[] children = fs.listStatus(path);
+  private boolean hasDataBelowWatermark(AcidDirectory acidDir, FileSystem fs, Path path, long highWatermark)
+      throws IOException {
+    Set<Path> acidPaths = new HashSet<>();
+    for (ParsedDelta delta : acidDir.getCurrentDirectories()) {
+      acidPaths.add(delta.getPath());
+    }
+    if (acidDir.getBaseDirectory() != null) {
+      acidPaths.add(acidDir.getBaseDirectory());
+    }
+    FileStatus[] children = fs.listStatus(path, p -> {
+      return !acidPaths.contains(p);
+    });
     for (FileStatus child : children) {
       if (isFileBelowWatermark(child, highWatermark)) {

Review comment:
       1.
   > I believe that in case there are files in the dir they already should be 
in the obsolete list
   
   Not necessarily, because the AcidDirectory the Cleaner uses is computed 
based on an older txnId (cleanerWaterMark), so there is a chance its obsolete 
list does not contain files that should be cleaned up eventually, which is what 
this method is supposed to figure out. (Right?)
   
   @deniskuzZ please correct me if I'm wrong about this, since I know there 
have been recent changes to this logic.
   
   2. I meant that if the table dir contains:
   
   - delta_5_5
   - delta_1_5_v100 (minor compacted)
   
   Then the cleaner should eventually remove delta_5_5, so there will be files 
to remove later, when the cleanerWaterMark is high enough (see the toy sketch 
below).
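
   A toy version of that check (not Hive's actual parser; it assumes the
   delta_<minWriteId>_<maxWriteId>[_v<visibilityTxnId>] naming convention and
   a hypothetical helper name):

     // Toy sketch: a delta dir holds data at or below the watermark when its
     // max write id does not exceed it; "delta_5_5" -> 5, "delta_1_5_v100" -> 5.
     static boolean isAtOrBelowWatermark(String dirName, long cleanerWaterMark) {
       String[] parts = dirName.split("_");
       long maxWriteId = Long.parseLong(parts[2]);
       return maxWriteId <= cleanerWaterMark;
     }

   Until the cleanerWaterMark reaches 5, delta_5_5 cannot be removed yet; once
   it does, the dir becomes removable, so in the meantime the request must not
   be dropped as "nothing to do".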




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 715670)
    Time Spent: 1.5h  (was: 1h 20m)

> Enhance Compaction Cleaner to skip when there is nothing to do
> --------------------------------------------------------------
>
>                 Key: HIVE-25883
>                 URL: https://issues.apache.org/jira/browse/HIVE-25883
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Zoltan Haindrich
>            Assignee: Zoltan Haindrich
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 4.0.0
>
>          Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> the cleaner works the following way:
> * it identifies the obsolete directories (delta dirs that no open txns still 
> depend on)
> * removes them, and it is done
> if there are no obsolete directories, that is attributed to the possibility 
> that there are still open txns, so the request should be retried later.
> however, if for some reason the directory was already cleaned, it similarly 
> has no obsolete directories, and thus the request is retried forever


--
This message was sent by Atlassian Jira
(v8.20.1#820001)
