[jira] [Commented] (HIVE-18772) Make Acid Cleaner use MIN_HISTORY_LEVEL
[ https://issues.apache.org/jira/browse/HIVE-18772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16672392#comment-16672392 ] Eugene Koifman commented on HIVE-18772:
---
patch 4 is a rebase of 3

> Make Acid Cleaner use MIN_HISTORY_LEVEL
> ---
> Key: HIVE-18772
> URL: https://issues.apache.org/jira/browse/HIVE-18772
> Project: Hive
> Issue Type: Improvement
> Components: Transactions
> Affects Versions: 3.0.0
> Reporter: Eugene Koifman
> Assignee: Eugene Koifman
> Priority: Major
> Attachments: HIVE-18772.01.patch, HIVE-18772.02.patch, HIVE-18772.02.patch, HIVE-18772.03.patch, HIVE-18772.04.patch
>
> Make the Cleaner use MIN_HISTORY_LEVEL instead of Lock Manager state, as it currently does.
> This will eliminate possible race conditions.
> See this [comment|https://issues.apache.org/jira/browse/HIVE-18192?focusedCommentId=16338208&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16338208]
> Suppose A is the set of all ValidTxnLists across all active readers. Each ValidTxnList has a minOpenTxnId.
> MIN_HISTORY_LEVEL allows us to determine X = min(minOpenTxnId) across all currently active readers.
> This means that no active transaction in the system sees any txn with txnid < X as open.
> So if we construct a ValidTxnList with HWM = X - 1 and use it in getAcidState(), any files this call determines to be 'obsolete' will be seen as obsolete by every existing and future reader, i.e. they can be physically deleted.
> This is also necessary for multi-statement transactions, where relying on the state of the Lock Manager is not sufficient. For example:
> Suppose txn 17 starts at t1 and sees txnid 13 with writeId 13 open.
> 13 commits (via its parent txn) at t2 > t1 (17 is still running).
> Compaction runs at t3 > t2 to produce base_14 (or delta_10_14, for example) on Table1/Part1 (17 is still running).
> Now delta_13 may be cleaned, since it can be seen as obsolete and there may be no locks on it, i.e. no one is reading it.
> Now at t4 > t3, 17 (a multi-statement txn) may need to read Table1/Part1. It cannot use base_14, as that may have absorbed delete events from delete_delta_14.
> Another use case:
> There are delta_1_1 and delta_2_2 on disk, both created by committed txns.
> T5 starts reading these. At the same time the compactor creates delta_1_2.
> Now the Cleaner sees delta_1_1 and delta_2_2 as obsolete and may remove them while the read is still in progress. This is because the Compactor itself is not running in a txn, so the files it produces are visible immediately. If it ran in a txn, the new files would only become visible once that txn is visible to others (including the Cleaner).
> Using MIN_HISTORY_LEVEL solves this.
> See the description of HIVE-18747 for more details on MIN_HISTORY_LEVEL.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
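The HWM computation in the description above can be sketched as follows. This is a minimal illustration, not Hive's actual API; the class and method names are hypothetical. Each active reader records its ValidTxnList's minOpenTxnId (the MIN_HISTORY_LEVEL idea), the Cleaner takes the global minimum X, and anything obsolete under a ValidTxnList with HWM = X - 1 is invisible to every current and future reader:

```java
import java.util.Arrays;
import java.util.List;

public class MinHistorySketch {

    // X = min(minOpenTxnId) across all currently active readers.
    // With no active readers there is no open txn to protect, so
    // return Long.MAX_VALUE (everything below it is cleanable).
    static long minOpenAcrossReaders(List<Long> minOpenTxnIds) {
        return minOpenTxnIds.stream()
                .mapToLong(Long::longValue)
                .min()
                .orElse(Long.MAX_VALUE);
    }

    // High watermark for the Cleaner's ValidTxnList: HWM = X - 1.
    // No active txn sees any txnid < X as open, so files that are
    // obsolete under this HWM can be physically deleted.
    static long cleanerHighWatermark(List<Long> minOpenTxnIds) {
        return minOpenAcrossReaders(minOpenTxnIds) - 1;
    }

    public static void main(String[] args) {
        // Three active readers whose snapshots report these minOpenTxnIds.
        List<Long> minOpen = Arrays.asList(17L, 13L, 21L);
        long hwm = cleanerHighWatermark(minOpen);
        // min(17, 13, 21) = 13, so HWM = 12
        System.out.println(hwm); // prints 12
    }
}
```

In the txn-17 scenario above, as long as txn 17 is active its snapshot contributes minOpenTxnId = 13, so X = 13 and HWM = 12; delta_13 is not obsolete under that list and the Cleaner leaves it alone, regardless of whether any lock is held on it.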
[ https://issues.apache.org/jira/browse/HIVE-18772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16604371#comment-16604371 ] Hive QA commented on HIVE-18772:
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12938398/HIVE-18772.02.patch
{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 45 failed/errored test(s), 14921 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.TestTxnCommands2.testACIDwithSchemaEvolutionAndCompaction (batchId=297)
org.apache.hadoop.hive.ql.TestTxnCommands2.testCleanerForTxnToWriteId (batchId=297)
org.apache.hadoop.hive.ql.TestTxnCommands2.testInitiatorWithMultipleFailedCompactions (batchId=297)
org.apache.hadoop.hive.ql.TestTxnCommands2.testInsertOverwrite1 (batchId=297)
org.apache.hadoop.hive.ql.TestTxnCommands2.testInsertOverwrite2 (batchId=297)
org.apache.hadoop.hive.ql.TestTxnCommands2.testNonAcidToAcidConversion1 (batchId=297)
org.apache.hadoop.hive.ql.TestTxnCommands2.testNonAcidToAcidConversion2 (batchId=297)
org.apache.hadoop.hive.ql.TestTxnCommands2.testNonAcidToAcidConversion3 (batchId=297)
org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testACIDwithSchemaEvolutionAndCompaction (batchId=310)
org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testCleanerForTxnToWriteId (batchId=310)
org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testInitiatorWithMultipleFailedCompactions (batchId=310)
org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testInsertOverwrite1 (batchId=310)
org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testInsertOverwrite2 (batchId=310)
org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testNonAcidToAcidConversion1 (batchId=310)
org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testNonAcidToAcidConversion2 (batchId=310)
org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testNonAcidToAcidConversion3 (batchId=310)
org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable.testInsertOverwriteForPartitionedMmTable (batchId=275)
org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable.testInsertOverwriteWithUnionAll (batchId=275)
org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable.testOperationsOnCompletedTxnComponentsForMmTable (batchId=275)
org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable.testSnapshotIsolationWithAbortedTxnOnMmTable (batchId=275)
org.apache.hadoop.hive.ql.TestTxnCommandsForOrcMmTable.testInsertOverwriteForPartitionedMmTable (batchId=297)
org.apache.hadoop.hive.ql.TestTxnCommandsForOrcMmTable.testInsertOverwriteWithUnionAll (batchId=297)
org.apache.hadoop.hive.ql.TestTxnCommandsForOrcMmTable.testOperationsOnCompletedTxnComponentsForMmTable (batchId=297)
org.apache.hadoop.hive.ql.TestTxnCommandsForOrcMmTable.testSnapshotIsolationWithAbortedTxnOnMmTable (batchId=297)
org.apache.hadoop.hive.ql.TestTxnNoBuckets.testNoBuckets (batchId=297)
org.apache.hadoop.hive.ql.TestTxnNoBucketsVectorized.testNoBuckets (batchId=299)
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testMetastoreTablesCleanup (batchId=314)
org.apache.hadoop.hive.ql.txn.compactor.TestCleaner.cleanupAfterMajorPartitionCompaction (batchId=284)
org.apache.hadoop.hive.ql.txn.compactor.TestCleaner.cleanupAfterMajorPartitionCompactionNoBase (batchId=284)
org.apache.hadoop.hive.ql.txn.compactor.TestCleaner.cleanupAfterMajorTableCompaction (batchId=284)
org.apache.hadoop.hive.ql.txn.compactor.TestCleaner.cleanupAfterMinorPartitionCompaction (batchId=284)
org.apache.hadoop.hive.ql.txn.compactor.TestCleaner.cleanupAfterMinorTableCompaction (batchId=284)
org.apache.hadoop.hive.ql.txn.compactor.TestCleaner2.cleanupAfterMajorPartitionCompaction (batchId=285)
org.apache.hadoop.hive.ql.txn.compactor.TestCleaner2.cleanupAfterMajorPartitionCompactionNoBase (batchId=285)
org.apache.hadoop.hive.ql.txn.compactor.TestCleaner2.cleanupAfterMajorTableCompaction (batchId=285)
org.apache.hadoop.hive.ql.txn.compactor.TestCleaner2.cleanupAfterMinorPartitionCompaction (batchId=285)
org.apache.hadoop.hive.ql.txn.compactor.TestCleaner2.cleanupAfterMinorTableCompaction (batchId=285)
org.apache.hadoop.hive.ql.txn.compactor.TestCleanerWithReplication.cleanupAfterMajorPartitionCompaction (batchId=241)
org.apache.hadoop.hive.ql.txn.compactor.TestCleanerWithReplication.cleanupAfterMajorTableCompaction (batchId=241)
org.apache.hadoop.hive.ql.txn.compactor.TestCleanerWithReplication.cleanupAfterMinorPartitionCompaction (batchId=241)
org.apache.hadoop.hive.ql.txn.compactor.TestCleanerWithReplication.cleanupAfterMinorTableCompaction (batchId=241)
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.mmTableOpenWriteId[0] (batchId=240)
org.apache.hadoop.hive.ql.txn.compactor.TestCompacto
[ https://issues.apache.org/jira/browse/HIVE-18772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16604336#comment-16604336 ] Hive QA commented on HIVE-18772:
| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 9s{color} | {color:blue} ql in master has 2310 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 15s{color} | {color:red} metastore-server in master failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 57s{color} | {color:red} ql in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 45s{color} | {color:red} ql: The patch generated 16 new + 549 unchanged - 2 fixed = 565 total (was 551) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 4m 25s{color} | {color:red} ql generated 1 new + 2309 unchanged - 1 fixed = 2310 total (was 2310) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 15s{color} | {color:red} metastore-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 3s{color} | {color:black} {color} |

|| Reason || Tests ||
| FindBugs | module:ql |
| | Exception is caught when Exception is not thrown in org.apache.hadoop.hive.ql.txn.compactor.Cleaner.clean(CompactionInfo, Long) At Cleaner.java:[line 218] |

|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13598/dev-support/hive-personality.sh |
| git revision | master / 154ca3e |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-13598/yetus/branch-findbugs-standalone-metastore_metastore-server.txt |
| mvninstall | http://104.198.109.242/logs//PreCommit-HIVE-Build-13598/yetus/patch-mvninstall-ql.txt |
| checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-13598/yetus/diff-checkstyle-ql.txt |
| findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-13598/yetus/new-findbugs-ql.html |
| findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-13598/yetus/patch-findbugs-standalone-metastore_metastore-server.txt |
| modules | C: ql standalone-metastore/metastore-server U: . |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13598/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |

This message was automatically generated.
[ https://issues.apache.org/jira/browse/HIVE-18772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16592531#comment-16592531 ] Hive QA commented on HIVE-18772:
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12937099/HIVE-18772.01.patch
{color:red}ERROR:{color} -1 due to no test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 14896 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.txn.compactor.TestCleaner.blockedByLockPartition (batchId=284)
org.apache.hadoop.hive.ql.txn.compactor.TestCleaner.blockedByLockTable (batchId=284)
org.apache.hadoop.hive.ql.txn.compactor.TestCleaner2.blockedByLockPartition (batchId=285)
org.apache.hadoop.hive.ql.txn.compactor.TestCleaner2.blockedByLockTable (batchId=285)
{noformat}
Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13466/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13466/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13466/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12937099 - PreCommit-HIVE-Build
[ https://issues.apache.org/jira/browse/HIVE-18772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16592292#comment-16592292 ] Eugene Koifman commented on HIVE-18772:
---
HIVE-20459 would be nice to have here
[ https://issues.apache.org/jira/browse/HIVE-18772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590579#comment-16590579 ] Sankar Hariappan commented on HIVE-18772:
-
[~ekoifman], no impact on replication, as we run ACID compaction and the cleaner at the target cluster itself.
[ https://issues.apache.org/jira/browse/HIVE-18772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590496#comment-16590496 ] Eugene Koifman commented on HIVE-18772:
---
[~sankarh], would this have any impact on replication?