[jira] [Commented] (HIVE-19219) Incremental REPL DUMP should throw error if requested events are cleaned-up.
[ https://issues.apache.org/jira/browse/HIVE-19219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16447558#comment-16447558 ]

ASF GitHub Bot commented on HIVE-19219:
---------------------------------------

Github user sankarh closed the pull request at:

    https://github.com/apache/hive/pull/334

> Incremental REPL DUMP should throw error if requested events are cleaned-up.
> ----------------------------------------------------------------------------
>
>                 Key: HIVE-19219
>                 URL: https://issues.apache.org/jira/browse/HIVE-19219
>             Project: Hive
>          Issue Type: Bug
>          Components: HiveServer2, repl
>    Affects Versions: 3.0.0
>            Reporter: Sankar Hariappan
>            Assignee: Sankar Hariappan
>            Priority: Major
>              Labels: DR, pull-request-available, replication
>             Fix For: 3.0.0, 3.1.0
>
>         Attachments: HIVE-19219.01-branch-3.patch, HIVE-19219.01.patch, HIVE-19219.02.patch, HIVE-19219.03.patch, HIVE-19219.04.patch, HIVE-19219.05.patch
>
>
> This is the case where events were deleted on the source because of old-event purging, and hence min(source event id) > target event id (last replicated event id).
> REPL DUMP should fail in this case so that the user can drop the database and bootstrap again.
> A cleaner thread is concurrently removing expired events from the NOTIFICATION_LOG table, so it is necessary to check whether the current dump missed any event while dumping. After fetching events in batches, we shall check that they were fetched as a contiguous sequence of event ids. If the sequence is not contiguous, then some events were likely missed in the dump, and an error should be thrown.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
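The two checks described in the issue — fail when the oldest retained source event is newer than the last replicated event, and fail when a fetched batch has a gap in event ids — can be sketched as below. This is a minimal illustration only, not Hive's actual implementation; the class name `ReplDumpCheck`, the method names, and the use of `IllegalStateException` are all invented for the sketch.

```java
import java.util.Arrays;
import java.util.List;

public class ReplDumpCheck {

    /**
     * Fails when events between the last replicated event and the oldest event
     * still present on the source have been purged, i.e. the incremental dump
     * can never supply them and the target must be bootstrapped again.
     */
    static void checkBootstrapNeeded(long minSourceEventId, long lastReplicatedEventId) {
        if (minSourceEventId > lastReplicatedEventId + 1) {
            throw new IllegalStateException(
                "Events after id " + lastReplicatedEventId + " were purged on source; "
                + "drop the target database and bootstrap again.");
        }
    }

    /**
     * Verifies a fetched batch forms a contiguous run of event ids starting at
     * expectedFirstId; a gap means the cleaner purged events mid-dump.
     * Returns the first id expected in the next batch.
     */
    static long checkContiguous(long expectedFirstId, List<Long> batch) {
        long expected = expectedFirstId;
        for (long id : batch) {
            if (id != expected) {
                throw new IllegalStateException(
                    "Event id gap in dump: expected " + expected + " but got " + id);
            }
            expected++;
        }
        return expected;
    }

    public static void main(String[] args) {
        // Contiguous batch: 100, 101, 102 -> next batch should start at 103.
        long next = checkContiguous(100L, Arrays.asList(100L, 101L, 102L));
        System.out.println("next expected id: " + next);

        // Batch with a hole (104 purged by the cleaner) must be rejected.
        boolean threw = false;
        try {
            checkContiguous(103L, Arrays.asList(103L, 105L));
        } catch (IllegalStateException e) {
            threw = true;
        }
        System.out.println("gap detected: " + threw);
    }
}
```

The per-batch check is what the description's last paragraph asks for: because the cleaner runs concurrently with the dump, validating only the first and last requested ids is insufficient; each batch must be verified as it is fetched.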
[ https://issues.apache.org/jira/browse/HIVE-19219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16447556#comment-16447556 ]

Sankar Hariappan commented on HIVE-19219:
-----------------------------------------

The test failures in the branch-3 ptest build are unrelated to the patch.
Patch 01-branch-3.patch is committed to branch-3.
[ https://issues.apache.org/jira/browse/HIVE-19219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16446522#comment-16446522 ]

Hive QA commented on HIVE-19219:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12919940/HIVE-19219.01-branch-3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 47 failed/errored test(s), 14145 tests executed

*Failed tests:*
{noformat}
TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=253)
TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253)
TestMiniDruidKafkaCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253)
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed out) (batchId=217)
TestTezPerfCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[results_cache_invalidation2] (batchId=39)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=153)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_invalidation2] (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=160)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] (batchId=105)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[avro_non_nullable_union] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[cachingprintstream] (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[compute_stats_long] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[dyn_part3] (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[dyn_part_max_per_node] (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[dynamic_partitions_with_whitelist] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[script_broken_pipe2] (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[script_broken_pipe3] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[script_error] (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[serde_regex2] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_2] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_publisher_error_1] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_publisher_error_2] (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_corr_in_agg] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_in_implicit_gby] (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_notin_implicit_gby] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_scalar_corr_multi_rows] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_scalar_multi_rows] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_assert_true2] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_assert_true] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error_reduce] (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[cluster_tasklog_retrieval] (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[local_mapred_error_cache] (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace] (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace_turnoff] (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[minimr_broken_pipe] (batchId=98)
org.apache.hadoop.hive.druid.TestDruidStorageHandler.testCommitMultiInsertOverwriteTable (batchId=261)
org.apache.hadoop.hive.ql.TestAcidOnTez.testAcidInsertWithRemoveUnion (batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testGetSplitsLocks (batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01
[ https://issues.apache.org/jira/browse/HIVE-19219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16446456#comment-16446456 ]

Hive QA commented on HIVE-19219:
--------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 13s{color} | {color:red} /data/hiveptest/logs/PreCommit-HIVE-Build-10377/patches/PreCommit-HIVE-Build-10377.patch does not apply to master. Rebase required? Wrong Branch? See http://cwiki.apache.org/confluence/display/Hive/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10377/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |

This message was automatically generated.
[ https://issues.apache.org/jira/browse/HIVE-19219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445204#comment-16445204 ]

Sankar Hariappan commented on HIVE-19219:
-----------------------------------------

Attached the branch-3 version of the patch for ptest.
[ https://issues.apache.org/jira/browse/HIVE-19219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445190#comment-16445190 ]

Sankar Hariappan commented on HIVE-19219:
-----------------------------------------

The failing tests were run locally and all passed.
05.patch is committed to master.
Thanks for the review [~thejas], [~maheshk114]!
[ https://issues.apache.org/jira/browse/HIVE-19219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444922#comment-16444922 ]

Hive QA commented on HIVE-19219:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12919835/HIVE-19219.05.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 37 failed/errored test(s), 14279 tests executed

*Failed tests:*
{noformat}
TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=93) [infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q]
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed out) (batchId=217)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[llap_smb] (batchId=92)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] (batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[results_cache_invalidation2] (batchId=39)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[tez_join_hash] (batchId=54)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_invalidation2] (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] (batchId=171)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat] (batchId=183)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] (batchId=105)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[cluster_tasklog_retrieval] (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace] (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace_turnoff] (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[minimr_broken_pipe] (batchId=98)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=225)
org.apache.hadoop.hive.metastore.client.TestPrimaryKey.addNoSuchTable[Embedded] (batchId=211)
org.apache.hadoop.hive.metastore.client.TestPrimaryKey.createGetDrop2Column[Embedded] (batchId=211)
org.apache.hadoop.hive.metastore.client.TestPrimaryKey.createGetDrop[Embedded] (batchId=211)
org.apache.hadoop.hive.metastore.client.TestPrimaryKey.createTableWithConstraintsPkInOtherCatalog[Embedded] (batchId=211)
org.apache.hadoop.hive.metastore.client.TestPrimaryKey.createTableWithConstraintsPk[Embedded] (batchId=211)
org.apache.hadoop.hive.metastore.client.TestPrimaryKey.doubleAddPrimaryKey[Embedded] (batchId=211)
org.apache.hadoop.hive.metastore.client.TestPrimaryKey.dropNoSuchCatalog[Embedded] (batchId=211)
org.apache.hadoop.hive.metastore.client.TestPrimaryKey.dropNoSuchConstraint[Embedded] (batchId=211)
org.apache.hadoop.hive.metastore.client.TestPrimaryKey.dropNoSuchDatabase[Embedded] (batchId=211)
org.apache.hadoop.hive.metastore.client.TestPrimaryKey.dropNoSuchTable[Embedded] (batchId=211)
org.apache.hadoop.hive.metastore.client.TestPrimaryKey.getNoSuchCatalog[Embedded] (batchId=211)
org.apache.hadoop.hive.metastore.client.TestPrimaryKey.getNoSuchDb[Embedded] (batchId=211)
org.apache.hadoop.hive.metastore.client.TestPrimaryKey.getNoSuchTable[Embedded] (batchId=211)
org.apache.hadoop.hive.metastore.client.TestPrimaryKey.inOtherCatalog[Embedded] (batchId=211)
org.apache.hadoop.hive.ql.TestAcidOnTez.testAcidInsertWithRemoveUnion (batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=228)
org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232)
org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable.testSnapshotIsolationWithAbortedTxnOnMmTable (batchId=264)
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgressParallel (batchId=235)
org.apache.hive.minikdc.TestJdbcWithMiniKdcCookie.testCookieNegative (batchId=254)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10352/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10352/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10352/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 37 tests failed
{noformat}
[ https://issues.apache.org/jira/browse/HIVE-19219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444841#comment-16444841 ]

Hive QA commented on HIVE-19219:
--------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 23s{color} | {color:red} itests/hive-unit: The patch generated 11 new + 515 unchanged - 39 fixed = 526 total (was 554) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 18s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10352/dev-support/hive-personality.sh |
| git revision | master / 9f15e22 |
| Default Java | 1.8.0_111 |
| checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-10352/yetus/diff-checkstyle-itests_hive-unit.txt |
| modules | C: itests/hive-unit standalone-metastore U: . |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10352/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |

This message was automatically generated.
[ https://issues.apache.org/jira/browse/HIVE-19219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444279#comment-16444279 ]

Sankar Hariappan commented on HIVE-19219:
-----------------------------------------

Added 05.patch after rebasing with master.
[ https://issues.apache.org/jira/browse/HIVE-19219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444245#comment-16444245 ]

Hive QA commented on HIVE-19219:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12919806/HIVE-19219.04.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10344/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10344/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10344/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-04-19 15:25:53.439
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-10344/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-04-19 15:25:53.442
+ cd apache-github-source-source
+ git fetch origin
From https://github.com/apache/hive
   5ae174c..fb22f57  master     -> origin/master
+ git reset --hard HEAD
HEAD is now at 5ae174c HIVE-19141 : TestNegativeCliDriver insert_into_notnull_constraint, insert_into_acid_notnull failing (Igor Kryvenko via Ashutosh Chauhan)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded.
  (use "git pull" to update your local branch)
+ git reset --hard origin/master
HEAD is now at fb22f57 HIVE-19243 : Upgrade hadoop.version to 3.1.0 (Gour Saha via Sahil Takiar)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-04-19 15:25:59.335
+ rm -rf ../yetus_PreCommit-HIVE-Build-10344
+ mkdir ../yetus_PreCommit-HIVE-Build-10344
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-10344
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-10344/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch
error: a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenarios.java: does not exist in index
error: a/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java: does not exist in index
error: a/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClientPreCatalog.java: does not exist in index
Going to apply patch with: git apply -p1
+ [[ maven == \m\a\v\e\n ]]
+ rm -rf /data/hiveptest/working/maven/org/apache/hive
+ mvn -B clean install -DskipTests -T 4 -q -Dmaven.repo.local=/data/hiveptest/working/maven
protoc-jar: executing: [/tmp/protoc5272366524077379115.exe, --version]
libprotoc 2.5.0
protoc-jar: executing: [/tmp/protoc5272366524077379115.exe, -I/data/hiveptest/working/apache-github-source-source/standalone-metastore/src/main/protobuf/org/apache/hadoop/hive/metastore, --java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/target/generated-sources, /data/hiveptest/working/apache-github-source-source/standalone-metastore/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto]
ANTLR Parser Generator  Version 3.5.2
Output file /data/hiveptest/working/apache-github-source-source/standalone-metastore/target/generated-sources/org/apache/hadoop/hive/metastore/parser/FilterParser.java does not exist: must build /data/hiveptest/working/apache-github-source-source/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/parser/Filter.g
org/apache/hadoop/hive/metastore/parser/Filter.g
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (process-resource-bundles) on project hive-shims-0.23: Execution process-resource-bundles of goal
[ https://issues.apache.org/jira/browse/HIVE-19219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16442938#comment-16442938 ]

Sankar Hariappan commented on HIVE-19219:
-----------------------------------------

Attached 04.patch after rebasing with master. Waiting for the ptest run.
[ https://issues.apache.org/jira/browse/HIVE-19219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16442862#comment-16442862 ]

Thejas M Nair commented on HIVE-19219:
--------------------------------------

+1
[ https://issues.apache.org/jira/browse/HIVE-19219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16442776#comment-16442776 ]

mahesh kumar behera commented on HIVE-19219:
--------------------------------------------
The code changes look fine. [~thejas] [~sankarh]
[ https://issues.apache.org/jira/browse/HIVE-19219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440782#comment-16440782 ]

Hive QA commented on HIVE-19219:
--------------------------------
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12919243/HIVE-19219.01.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 87 failed/errored test(s), 13843 tests executed

*Failed tests:*
{noformat}
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed out) (batchId=247)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed out) (batchId=247)
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=95)
{noformat}
[ https://issues.apache.org/jira/browse/HIVE-19219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440740#comment-16440740 ]

Hive QA commented on HIVE-19219:
--------------------------------
| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 36s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 23s{color} | {color:red} itests/hive-unit: The patch generated 11 new + 594 unchanged - 39 fixed = 605 total (was 633) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 15s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 10s{color} | {color:black} {color} |

|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10269/dev-support/hive-personality.sh |
| git revision | master / 5404635 |
| Default Java | 1.8.0_111 |
| checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-10269/yetus/diff-checkstyle-itests_hive-unit.txt |
| asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-10269/yetus/patch-asflicense-problems.txt |
| modules | C: itests/hive-unit standalone-metastore U: . |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10269/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |

This message was automatically generated.
[ https://issues.apache.org/jira/browse/HIVE-19219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439680#comment-16439680 ]

Sankar Hariappan commented on HIVE-19219:
-----------------------------------------
Added 01.patch:
* REPL DUMP now throws IllegalStateException if events are found missing in the source.
* Modified testIncrementalLoadWithVariableLengthEventId to rename the dumped event directory names instead of changing the event id while dumping.

Stack trace:
{quote}
2018-04-16T09:21:22,417 WARN [main] metastore.HiveMetaStoreClient: Requested events are found missing in NOTIFICATION_LOG table. Probably, cleaner would've cleaned it up. Try setting higher value for hive.metastore.event.db.listener.timetolive. Also, bootstrap the system again to get back the consistent replicated state.
2018-04-16T09:21:22,418 ERROR [main] repl.ReplDumpTask: failed
java.lang.IllegalStateException: Notification events are missing.
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getNextNotification(HiveMetaStoreClient.java:2587) ~[hive-standalone-metastore-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_112]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_112]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_112]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212) ~[hive-standalone-metastore-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
at com.sun.proxy.$Proxy40.getNextNotification(Unknown Source) ~[?:?]
at org.apache.hadoop.hive.metastore.messaging.EventUtils$MSClientNotificationFetcher.getNextNotificationEvents(EventUtils.java:94) ~[hive-standalone-metastore-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
at org.apache.hadoop.hive.metastore.messaging.EventUtils$NotificationEventIterator.fetchNextBatch(EventUtils.java:146) ~[hive-standalone-metastore-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
at org.apache.hadoop.hive.metastore.messaging.EventUtils$NotificationEventIterator.hasNext(EventUtils.java:176) ~[hive-standalone-metastore-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
at org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.incrementalDump(ReplDumpTask.java:160) ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
at org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.execute(ReplDumpTask.java:111) [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
{quote}
[~thejas], [~maheshk114], [~anishek], can you please take a look?
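The gap detection described in this issue — verifying that each fetched batch of notification events forms a contiguous run of event ids starting right after the last one seen, and failing otherwise — can be sketched as follows. This is a hypothetical standalone illustration of the technique, not the actual HiveMetaStoreClient code; the class and method names are invented:

```java
import java.util.Arrays;
import java.util.List;

public class EventGapCheck {

    // Returns true iff the batch of fetched event ids continues directly
    // from lastSeenEventId with no gaps, i.e. the ids are exactly
    // lastSeenEventId + 1, lastSeenEventId + 2, ...
    static boolean isContiguous(long lastSeenEventId, List<Long> batchIds) {
        long expected = lastSeenEventId + 1;
        for (long id : batchIds) {
            if (id != expected) {
                return false;          // a gap: some event was purged
            }
            expected++;
        }
        return true;
    }

    // A gap means the cleaner thread already purged events this
    // incremental dump still needed, so the dump must fail and the
    // user should bootstrap again.
    static void requireContiguous(long lastSeenEventId, List<Long> batchIds) {
        if (!isContiguous(lastSeenEventId, batchIds)) {
            throw new IllegalStateException("Notification events are missing.");
        }
    }

    public static void main(String[] args) {
        // Contiguous batch: ids 101..103 follow last replicated id 100.
        System.out.println(isContiguous(100, Arrays.asList(101L, 102L, 103L)));
        // Gap: event 102 was purged by the cleaner thread.
        System.out.println(isContiguous(100, Arrays.asList(101L, 103L)));
    }
}
```

The check is cheap (one pass per batch) and catches concurrent purging without any locking against the cleaner thread, since any deletion inside the requested range necessarily leaves a hole in the id sequence.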