deniskuzZ commented on a change in pull request #2277:
URL: https://github.com/apache/hive/pull/2277#discussion_r672900959



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java
##########
@@ -282,15 +282,12 @@ private ValidReaderWriteIdList getValidCleanerWriteIdList(CompactionInfo ci, Tab
     assert rsp != null && rsp.getTblValidWriteIdsSize() == 1;
     ValidReaderWriteIdList validWriteIdList =
         TxnCommonUtils.createValidReaderWriteIdList(rsp.getTblValidWriteIds().get(0));
-    boolean delayedCleanupEnabled = conf.getBoolVar(HIVE_COMPACTOR_DELAYED_CLEANUP_ENABLED);
-    if (delayedCleanupEnabled) {
-      /*
-       * If delayed cleanup enabled, we need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
-       * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
-       * should not touch the newer obsolete directories to not to violate the retentionTime for those.
-       */
-      validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);
-    }
+    /*
+     * We need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
+     * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
+     * should not touch the newer obsolete directories to not to violate the retentionTime for those.
+     */
+    validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);

Review comment:
       Not sure I got the question, but highestWriteId is recorded at the time the compaction txn starts, so it records all open txns that have to be ignored.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java
##########
@@ -282,15 +282,12 @@ private ValidReaderWriteIdList getValidCleanerWriteIdList(CompactionInfo ci, Tab
     assert rsp != null && rsp.getTblValidWriteIdsSize() == 1;
     ValidReaderWriteIdList validWriteIdList =
         TxnCommonUtils.createValidReaderWriteIdList(rsp.getTblValidWriteIds().get(0));
-    boolean delayedCleanupEnabled = conf.getBoolVar(HIVE_COMPACTOR_DELAYED_CLEANUP_ENABLED);
-    if (delayedCleanupEnabled) {
-      /*
-       * If delayed cleanup enabled, we need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
-       * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
-       * should not touch the newer obsolete directories to not to violate the retentionTime for those.
-       */
-      validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);
-    }
+    /*
+     * We need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
+     * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
+     * should not touch the newer obsolete directories to not to violate the retentionTime for those.
+     */
+    validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);

Review comment:
       Not sure I got the question, but highestWriteId is recorded at the time the compaction txn starts, so it records the writeId HWM and all open txns below it that have to be ignored.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java
##########
@@ -282,15 +282,12 @@ private ValidReaderWriteIdList getValidCleanerWriteIdList(CompactionInfo ci, Tab
     assert rsp != null && rsp.getTblValidWriteIdsSize() == 1;
     ValidReaderWriteIdList validWriteIdList =
         TxnCommonUtils.createValidReaderWriteIdList(rsp.getTblValidWriteIds().get(0));
-    boolean delayedCleanupEnabled = conf.getBoolVar(HIVE_COMPACTOR_DELAYED_CLEANUP_ENABLED);
-    if (delayedCleanupEnabled) {
-      /*
-       * If delayed cleanup enabled, we need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
-       * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
-       * should not touch the newer obsolete directories to not to violate the retentionTime for those.
-       */
-      validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);
-    }
+    /*
+     * We need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
+     * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
+     * should not touch the newer obsolete directories to not to violate the retentionTime for those.
+     */
+    validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);

Review comment:
       Besides the updated HWM, validWriteIdList also has an exceptions list, which shows whether there are any open txns in that range. What we are doing here is just lowering the HWM so that the Cleaner won't remove more than this compaction was responsible for.
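       The HWM/exceptions interplay described above can be sketched with a toy model. This is not Hive's actual ValidReaderWriteIdList API; the class name, fields, and write-id values below are purely illustrative, assuming only that a list carries a high watermark plus an exceptions set and that lowering the HWM narrows what counts as cleanable:

```java
import java.util.Set;
import java.util.TreeSet;

// Toy stand-in for a reader's valid-write-id view: a high watermark plus
// an "exceptions" set of open/aborted write ids at or below it.
public class WriteIdListSketch {
    private final long highWatermark;
    private final Set<Long> exceptions; // open txns that must be ignored

    public WriteIdListSketch(long highWatermark, Set<Long> exceptions) {
        this.highWatermark = highWatermark;
        this.exceptions = new TreeSet<>(exceptions);
    }

    // Analogue of updateHighWatermark(ci.highestWriteId): lower the HWM to
    // the write id this compaction actually covered; exceptions are kept.
    public WriteIdListSketch updateHighWatermark(long newHwm) {
        return new WriteIdListSketch(newHwm, exceptions);
    }

    // A directory written by writeId is cleanable only if it falls at or
    // below the HWM and is not an open-txn exception.
    public boolean isWriteIdValid(long writeId) {
        return writeId <= highWatermark && !exceptions.contains(writeId);
    }

    public static void main(String[] args) {
        // Table-level view: everything up to 20 visible, txn 15 still open.
        WriteIdListSketch tableList = new WriteIdListSketch(20, Set.of(15L));
        // This compaction only covered write ids up to 12, so the Cleaner
        // lowers the HWM before deciding which obsolete dirs to remove.
        WriteIdListSketch cleanerList = tableList.updateHighWatermark(12);

        System.out.println(cleanerList.isWriteIdValid(10)); // obsoleted by this compaction: cleanable
        System.out.println(cleanerList.isWriteIdValid(18)); // above lowered HWM: left for a later cleanup
    }
}
```

       With the lowered HWM, directories made obsolete by a later compaction on the same partition stay untouched, which is exactly the retentionTime guarantee the code comment describes.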




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


