deniskuzZ commented on a change in pull request #2974:
URL: https://github.com/apache/hive/pull/2974#discussion_r795811803
##########
File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java
##########
@@ -155,18 +158,23 @@ public void run() {
         // when min_history_level is finally dropped, than every HMS will commit compaction the new way
         // and minTxnIdSeenOpen can be removed and minOpenTxnId can be used instead.
         for (CompactionInfo compactionInfo : readyToClean) {
-          cleanerList.add(CompletableFuture.runAsync(ThrowingRunnable.unchecked(
-              () -> clean(compactionInfo, cleanerWaterMark, metricsEnabled)), cleanerExecutor));
+          String tableName = compactionInfo.getFullTableName();
+          String partition = compactionInfo.getFullPartitionName();
+          CompletableFuture<Void> asyncJob =
+              CompletableFuture.runAsync(
+                      ThrowingRunnable.unchecked(() -> clean(compactionInfo, cleanerWaterMark, metricsEnabled)),
+                      cleanerExecutor)
+                  .exceptionally(t -> {
+                    LOG.error("Error during the cleaning the table {} / partition {}", tableName, partition, t);
Review comment:
How useful is this if we don't log the exception? If we log it somewhere
else, what's the purpose of doing it twice?
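
For anyone following the thread, here is a minimal standalone sketch (plain JDK, not the Hive code; class, executor, and item names are made up for illustration) of the pattern in the diff above: an exception thrown inside runAsync() surfaces as the Throwable handed to exceptionally(), so that handler is a natural single place to log it, and returning null there completes the future normally so a later allOf(...).join() will not rethrow.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ExceptionallyLoggingSketch {
        public static void main(String[] args) {
            ExecutorService executor = Executors.newFixedThreadPool(2);
            List<CompletableFuture<Void>> jobs = new ArrayList<>();

            for (String item : new String[] {"ok", "fail"}) {
                CompletableFuture<Void> job = CompletableFuture
                    .runAsync(() -> {
                        // Simulated cleaning task; the failure propagates to exceptionally().
                        if ("fail".equals(item)) {
                            throw new RuntimeException("cleaning failed for " + item);
                        }
                    }, executor)
                    .exceptionally(t -> {
                        // Single place where the failure is logged; returning null
                        // completes the future so allOf() below does not throw.
                        System.err.println("Error cleaning " + item + ": " + t);
                        return null;
                    });
                jobs.add(job);
            }

            // Waits for all jobs; failures were already handled above.
            CompletableFuture.allOf(jobs.toArray(new CompletableFuture[0])).join();
            executor.shutdown();
        }
    }

If the failure is also logged inside the task body itself, the same exception shows up in the log twice, which is the duplication the comment above is asking about.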