pvargacl commented on a change in pull request #1592:
URL: https://github.com/apache/hive/pull/1592#discussion_r514919991
##########
File path:
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java
##########
@@ -281,9 +280,14 @@ public void markCompacted(CompactionInfo info) throws MetaException {
    try {
      dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);
      stmt = dbConn.createStatement();
-      String s = "SELECT \"CQ_ID\", \"CQ_DATABASE\", \"CQ_TABLE\", \"CQ_PARTITION\", " +
-        "\"CQ_TYPE\", \"CQ_RUN_AS\", \"CQ_HIGHEST_WRITE_ID\" FROM \"COMPACTION_QUEUE\" " +
-        "WHERE \"CQ_STATE\" = '" + READY_FOR_CLEANING + "'";
+      /*
+       * By filtering on minOpenTxnWaterMark, we will only cleanup after every transaction is committed, that could see
+       * the uncompacted deltas. This way the cleaner can clean up everything that was made obsolete by this compaction.
+       */
+      long minOpenTxnWaterMark = getMinOpenTxnIdWaterMark(dbConn);
Review comment:
1. Passing the minOpenTxn as an argument now.
2. Changed findMinOpenTxnIdForCleaner to use getMinOpenTxnIdWaterMark. The
timeout boundary check is needed since HIVE-23084, because an open txn can
appear later with a txnId lower than the current minOpen but higher than the
timeout boundary. It probably wouldn't cause any problem for the Cleaner, but
better safe than sorry; this way it always gives a correct result.
This also means the max(cq_next_txnid) check is removed, but I think the only
consequence is that any txns aborted after the compaction will also be cleaned
up, which is a good side effect.
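To illustrate the point about the timeout boundary, here is a minimal sketch (not Hive's actual implementation; the method and parameter names are made up for illustration) of why the watermark must be capped by the boundary rather than taken as the raw minimum open txnId:

```java
// Sketch: computing a safe min-open-txn watermark for the Cleaner.
// Assumption: an open txn record can surface late, with a txnId below the
// currently observed minOpen but above the txn-timeout boundary (HIVE-23084).
public class MinOpenTxnWaterMarkSketch {

    /**
     * @param minOpenTxnId        lowest txnId currently seen as open
     *                            (Long.MAX_VALUE if no txn is open)
     * @param timeoutBoundaryTxnId lowest txnId allocated recently enough that
     *                            its "open" record might not be visible yet
     *                            (hypothetical input, not Hive's real API)
     * @return a watermark below which no open txn can appear later
     */
    public static long minOpenTxnIdWaterMark(long minOpenTxnId,
                                             long timeoutBoundaryTxnId) {
        // Take the smaller of the two: txnIds at or above the boundary cannot
        // be trusted as committed, even if no open record is visible for them.
        return Math.min(minOpenTxnId, timeoutBoundaryTxnId);
    }
}
```

With this cap, a late-appearing open txn (id above the boundary) can never fall below the watermark, so the Cleaner's result stays correct even in the HIVE-23084 scenario.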
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]