[jira] [Updated] (HIVE-23133) Numeric operations can have different result across hardware archs
[ https://issues.apache.org/jira/browse/HIVE-23133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yikun Jiang updated HIVE-23133: --- Attachment: HIVE-23133.1.patch Status: Patch Available (was: Open) > Numeric operations can have different result across hardware archs > -- > > Key: HIVE-23133 > URL: https://issues.apache.org/jira/browse/HIVE-23133 > Project: Hive > Issue Type: Sub-task > Reporter: Zhenyu Zheng > Assignee: Yikun Jiang > Priority: Major > Attachments: HIVE-23133.1.patch > > > Currently, we have set up an ARM CI to test how Hive works on the ARM platform: > [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/] > Among the failures, we have observed that some numeric operations can produce different results across hardware architectures, for example: > [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/25/testReport/org.apache.hadoop.hive.cli/TestCliDriver/testCliDriver_vector_decimal_udf2_/] > [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/25/testReport/org.apache.hadoop.hive.cli/TestSparkCliDriver/testCliDriver_subquery_select_/] > [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/25/testReport/org.apache.hadoop.hive.cli/TestSparkCliDriver/testCliDriver_vectorized_math_funcs_/] > The calculation results of log, exp, cos, toRadians, etc. are slightly different from the .out file results we compare against (those were generated and written on x86 machines). This is because we use the [Math library|https://docs.oracle.com/javase/6/docs/api/java/lang/Math.html] for these kinds of calculations, and according to its [documentation|https://docs.oracle.com/javase/6/docs/api/java/lang/Math.html]: > _Unlike some of the numeric methods of class StrictMath, all implementations of the equivalent functions of class Math are not defined to return the bit-for-bit same results. This relaxation permits better-performing implementations where strict reproducibility is not required._ > _By default many of the Math methods simply call the equivalent method in StrictMath for their implementation. Code generators are encouraged to use platform-specific native libraries or microprocessor instructions, where available, to provide higher-performance implementations of Math methods._ > So the results can differ across hardware architectures. On the other hand, Java provides another class, [StrictMath|https://docs.oracle.com/javase/6/docs/api/java/lang/StrictMath.html], which does not have this problem; according to its [documentation|https://docs.oracle.com/javase/6/docs/api/java/lang/StrictMath.html]: > _To help ensure portability of Java programs, the definitions of some of the numeric functions in this package require that they produce the same results as certain published algorithms._ > To fix the problem described above, we should consider switching from Math to StrictMath. -- This message was sent by Atlassian Jira (v8.3.4#803005)
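The Math vs. StrictMath behavior described in the issue can be demonstrated with a minimal, self-contained sketch (the class name StrictMathDemo is invented for illustration; this is not Hive code). It compares the raw bit patterns of Math.log and StrictMath.log on the current platform: Math.log may be replaced by a platform-specific intrinsic, while StrictMath.log is specified to match a published algorithm (fdlibm) bit-for-bit on every compliant JVM.

```java
public class StrictMathDemo {
    public static void main(String[] args) {
        double x = 0.5;
        // Math.log may use a platform-specific native implementation or
        // microprocessor instruction, so its low-order result bits can
        // differ between, e.g., x86 and ARM.
        long mathBits = Double.doubleToLongBits(Math.log(x));
        // StrictMath.log must reproduce the published fdlibm algorithm,
        // so these bits are the same on every compliant JVM and platform.
        long strictBits = Double.doubleToLongBits(StrictMath.log(x));
        System.out.println(mathBits == strictBits
            ? "Math matches StrictMath on this platform"
            : "Math differs from StrictMath on this platform");
    }
}
```

On many JVMs the two agree (Math delegates to StrictMath by default), which is exactly why the divergence only surfaced once the tests ran on different hardware.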
[jira] [Updated] (HIVE-23133) Numeric operations can have different result across hardware archs
[ https://issues.apache.org/jira/browse/HIVE-23133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yikun Jiang updated HIVE-23133: --- Status: Open (was: Patch Available)
[jira] [Updated] (HIVE-23133) Numeric operations can have different result across hardware archs
[ https://issues.apache.org/jira/browse/HIVE-23133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yikun Jiang updated HIVE-23133: --- Status: Patch Available (was: Open)
[jira] [Assigned] (HIVE-23133) Numeric operations can have different result across hardware archs
[ https://issues.apache.org/jira/browse/HIVE-23133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yikun Jiang reassigned HIVE-23133: -- Assignee: Yikun Jiang
[jira] [Commented] (HIVE-21304) Show Bucketing version for ReduceSinkOp in explain extended plan
[ https://issues.apache.org/jira/browse/HIVE-21304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078974#comment-17078974 ] Hive QA commented on HIVE-21304: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12999328/HIVE-21304.24.patch {color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 128 failed/errored test(s), 18196 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_10] (batchId=307) org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_11] (batchId=307) org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_12] (batchId=307) org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_13] (batchId=307) org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_16] (batchId=307) org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_1] (batchId=307) org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_2] (batchId=307) org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_3] (batchId=307) org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_7] (batchId=307) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_directory] (batchId=310) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join32] (batchId=100) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_1] (batchId=37) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_2] (batchId=76) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_3] (batchId=78) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_4] (batchId=48) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_5] (batchId=27) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_6] 
(batchId=97) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_7] (batchId=44) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_8] (batchId=43) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketizedhiveinputformat_auto] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketsortoptimize_insert_4] (batchId=29) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketsortoptimize_insert_5] (batchId=67) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketsortoptimize_insert_8] (batchId=5) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_rp_auto_join1] (batchId=4) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[explain_rearrange] (batchId=14) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[infer_bucket_sort_map_operators] (batchId=76) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_join] (batchId=23) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[smb_join_partition_key] (batchId=15) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[smb_mapjoin9] (batchId=47) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[smb_mapjoin_10] (batchId=102) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[smb_mapjoin_11] (batchId=2) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[smb_mapjoin_12] (batchId=11) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[smb_mapjoin_13] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[smb_mapjoin_16] (batchId=49) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[smb_mapjoin_1] (batchId=58) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[smb_mapjoin_2] (batchId=66) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[smb_mapjoin_3] (batchId=26) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[smb_mapjoin_46] (batchId=48) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[smb_mapjoin_47] (batchId=34) 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[smb_mapjoin_7] (batchId=67) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[smblimit] (batchId=88) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[sort_merge_join_desc_1] (batchId=10) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[sort_merge_join_desc_2] (batchId=25) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[sort_merge_join_desc_3] (batchId=53) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[sort_merge_join_desc_5] (batchId=20) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[sort_merge_join_desc_8] (batchId=9) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_bucketmapjoin1] (batchId=29) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[explainuser_2] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_smb_mapjoin_14] (batchId=184) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
[jira] [Updated] (HIVE-23162) Remove swapping logic to merge joins in AST converter
[ https://issues.apache.org/jira/browse/HIVE-23162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-23162: --- Attachment: HIVE-23162.02.patch > Remove swapping logic to merge joins in AST converter > - > > Key: HIVE-23162 > URL: https://issues.apache.org/jira/browse/HIVE-23162 > Project: Hive > Issue Type: Bug > Components: CBO >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-23162.01.patch, HIVE-23162.02.patch > > > In ASTConverter, there is some logic to invert join inputs so the logic to > merge joins in SemanticAnalyzer kicks in. > https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/ASTConverter.java#L407 > There is a bug because inputs are swapped but the schema is not. -- This message was sent by Atlassian Jira (v8.3.4#803005)
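The HIVE-23162 bug (inputs swapped but schema not) can be illustrated with a hedged, hypothetical sketch; this is not Hive's ASTConverter code, and the class and method names are invented. A join's output schema is the concatenation of the left input's columns followed by the right's, so if the inputs are exchanged without rebuilding the schema, columns no longer line up:

```java
import java.util.ArrayList;
import java.util.List;

public class JoinSwapDemo {
    // The output schema of a join: left input's columns, then right's.
    static List<String> joinSchema(List<String> left, List<String> right) {
        List<String> out = new ArrayList<>(left);
        out.addAll(right);
        return out;
    }

    public static void main(String[] args) {
        List<String> a = List.of("a.id", "a.name");
        List<String> b = List.of("b.id", "b.amount");

        List<String> original = joinSchema(a, b);
        // Buggy swap: the inputs are exchanged, but the schema computed
        // before the swap is kept as-is.
        List<String> buggy = original;
        // Correct swap: the schema is rebuilt in the new input order.
        List<String> correct = joinSchema(b, a);

        System.out.println(buggy.equals(correct)); // false: columns misaligned
    }
}
```

The fix direction in the patch title follows from this: rather than patching the schema after every swap, remove the swapping logic from the AST converter entirely.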
[jira] [Updated] (HIVE-23028) Should not use group parameter when run tests in standalone-metastore-common
[ https://issues.apache.org/jira/browse/HIVE-23028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhenyu Zheng updated HIVE-23028: Parent: (was: HIVE-23161) Issue Type: Bug (was: Sub-task) > Should not use group parameter when run tests in standalone-metastore-common > > > Key: HIVE-23028 > URL: https://issues.apache.org/jira/browse/HIVE-23028 > Project: Hive > Issue Type: Bug > Reporter: Zhenyu Zheng > Assignee: Zhenyu Zheng > Priority: Major > Attachments: HIVE-23028.1.patch > > > We should not use the group parameter when running tests in standalone-metastore-common. We inherited the `group` parameter from standalone-metastore's pom.xml, where it is set to org.apache.hadoop.hive.metastore.annotation.MetastoreUnitTest ([https://github.com/apache/hive/blob/master/standalone-metastore/pom.xml#L61]), an annotation which only exists in the metastore-server package: > [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/annotation/MetastoreUnitTest.java] > Tests in the metastore-common package do not carry the @Category annotation: > [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-common/src/test/java/org/apache/hadoop/hive/metastore/utils/RetryTest.java] > The parameter was introduced by: > [https://github.com/apache/hive/commit/7411d42579ffa0bad96e8da731a1a35afc9ff614#diff-171fcb0dda3bcba577fa13720d5b6571] > We should remove the group parameter in > [https://github.com/apache/hive/blob/master/standalone-metastore/pom.xml|https://github.com/apache/hive/blob/master/standalone-metastore/pom.xml#L61] -- This message was sent by Atlassian Jira (v8.3.4#803005)
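The effect the HIVE-23028 report describes can be sketched in plain Java (a hypothetical illustration, not Surefire's actual implementation): a category-based "groups" filter selects only classes that carry the marker annotation, so test classes without it, like those in metastore-common, are never selected.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class GroupFilterDemo {
    // Stand-in for org.apache.hadoop.hive.metastore.annotation.MetastoreUnitTest.
    @Retention(RetentionPolicy.RUNTIME)
    @interface MetastoreUnitTest {}

    // Categorized, like the metastore-server tests.
    @MetastoreUnitTest
    static class ServerTest {}

    // Uncategorized, like the metastore-common tests (e.g. RetryTest).
    static class CommonTest {}

    // Mimics what a surefire "groups" filter does: only classes carrying
    // the marker annotation are selected to run.
    static boolean selected(Class<?> c) {
        return c.isAnnotationPresent(MetastoreUnitTest.class);
    }

    public static void main(String[] args) {
        System.out.println("ServerTest selected: " + selected(ServerTest.class));
        System.out.println("CommonTest selected: " + selected(CommonTest.class));
    }
}
```

This is why inheriting the group filter into metastore-common silently skips its entire test suite, and why removing the parameter there is the right fix.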
[jira] [Commented] (HIVE-21304) Show Bucketing version for ReduceSinkOp in explain extended plan
[ https://issues.apache.org/jira/browse/HIVE-21304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078941#comment-17078941 ] Hive QA commented on HIVE-21304: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 37s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 45s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 33s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 9s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 41s{color} | {color:blue} ql in master has 1528 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 54s{color} | {color:red} ql: The patch generated 6 new + 1246 unchanged - 10 fixed = 1252 total (was 1256) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 3m 48s{color} | {color:red} ql generated 3 new + 1528 unchanged - 0 fixed = 1531 total (was 1528) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 29m 8s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:ql | | | Suspicious comparison of Integer references in org.apache.hadoop.hive.ql.optimizer.BucketVersionPopulator$BucketingVersionResult.merge(BucketVersionPopulator$BucketingVersionResult) At BucketVersionPopulator.java:in org.apache.hadoop.hive.ql.optimizer.BucketVersionPopulator$BucketingVersionResult.merge(BucketVersionPopulator$BucketingVersionResult) At BucketVersionPopulator.java:[line 64] | | | Suspicious comparison of Integer references in org.apache.hadoop.hive.ql.optimizer.BucketVersionPopulator$BucketingVersionResult.merge2(BucketVersionPopulator$BucketingVersionResult) At BucketVersionPopulator.java:in org.apache.hadoop.hive.ql.optimizer.BucketVersionPopulator$BucketingVersionResult.merge2(BucketVersionPopulator$BucketingVersionResult) At BucketVersionPopulator.java:[line 74] | | | Nullcheck of table_desc at line 8208 of value previously dereferenced in org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.createFileSinkDesc(String, TableDesc, Partition, Path, int, boolean, boolean, boolean, Path, SemanticAnalyzer$SortBucketRSCtx, DynamicPartitionCtx, ListBucketingCtx, RowSchema, boolean, Table, Long, boolean, Integer, QB, boolean) At SemanticAnalyzer.java:8208 of value previously dereferenced in org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.createFileSinkDesc(String, TableDesc, Partition, Path, int, boolean, boolean, boolean, Path, SemanticAnalyzer$SortBucketRSCtx, DynamicPartitionCtx, ListBucketingCtx, RowSchema, boolean, Table, Long, boolean, Integer, QB, boolean) At SemanticAnalyzer.java:[line 8201] | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21525/dev-support/hive-personality.sh |
[jira] [Commented] (HIVE-22458) Add more constraints on showing partitions
[ https://issues.apache.org/jira/browse/HIVE-22458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078938#comment-17078938 ] Zhihua Deng commented on HIVE-22458: Fix test errors and refine code style. > Add more constraints on showing partitions > -- > > Key: HIVE-22458 > URL: https://issues.apache.org/jira/browse/HIVE-22458 > Project: Hive > Issue Type: Improvement > Reporter: Zhihua Deng > Priority: Major > Attachments: HIVE-22458.2.patch, HIVE-22458.3.patch, > HIVE-22458.branch-1.02.patch, HIVE-22458.branch-1.patch, HIVE-22458.patch > > > When showing partitions of a table with thousands of partitions, all the partitions are returned, and it is not easy to find the desired one among them; this makes show partitions hard to use. We can add where/limit/order by constraints to show partitions, like: > show partitions table_name [partition_specs] where partition_key >= value > order by partition_key desc limit n; > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22458) Add more constraints on showing partitions
[ https://issues.apache.org/jira/browse/HIVE-22458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhihua Deng updated HIVE-22458: --- Attachment: HIVE-22458.3.patch
[jira] [Commented] (HIVE-23114) Insert overwrite with dynamic partitioning is not working correctly with direct insert
[ https://issues.apache.org/jira/browse/HIVE-23114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078927#comment-17078927 ] Hive QA commented on HIVE-23114: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12999327/HIVE-23114.3.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 18199 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.metastore.TestMetastoreHousekeepingLeaderEmptyConfig.testHouseKeepingThreadExistence (batchId=253) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/21524/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21524/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21524/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12999327 - PreCommit-HIVE-Build > Insert overwrite with dynamic partitioning is not working correctly with > direct insert > -- > > Key: HIVE-23114 > URL: https://issues.apache.org/jira/browse/HIVE-23114 > Project: Hive > Issue Type: Bug >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Major > Attachments: HIVE-23114.1.patch, HIVE-23114.2.patch, > HIVE-23114.3.patch > > > This is a follow-up Jira for the > [conversation|https://issues.apache.org/jira/browse/HIVE-21164?focusedCommentId=17059280&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17059280] > in HIVE-21164 > Doing an insert overwrite from a multi-insert statement with dynamic > partitioning will give wrong results for ACID tables when > 'hive.acid.direct.insert.enabled' is true or for insert-only tables. > Reproduction: > {noformat} > set hive.acid.direct.insert.enabled=true; > set hive.support.concurrency=true; > set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager; > set hive.vectorized.execution.enabled=false; > set hive.stats.autogather=false; > create external table multiinsert_test_text (a int, b int, c int) stored as > textfile; > insert into multiinsert_test_text values (, 11, ), (, 22, ), > (, 33, ), (, 44, NULL), (, 55, NULL); > create table multiinsert_test_acid (a int, b int) partitioned by (c int) > stored as orc tblproperties('transactional'='true'); > create table multiinsert_test_mm (a int, b int) partitioned by (c int) stored > as orc tblproperties('transactional'='true', > 'transactional_properties'='insert_only'); > from multiinsert_test_text a > insert overwrite table multiinsert_test_acid partition (c) > select > a.a, > a.b, > a.c > where a.c is not null > insert overwrite table multiinsert_test_acid partition (c) > select > a.a, > a.b, > a.c > where a.c is null; > select * from multiinsert_test_acid; > from multiinsert_test_text a > insert overwrite table multiinsert_test_mm partition (c) > select > a.a, > a.b, > 
a.c > where a.c is not null > insert overwrite table multiinsert_test_mm partition (c) > select > a.a, > a.b, > a.c > where a.c is null; > select * from multiinsert_test_mm; > {noformat} > The result of these steps can differ, depending on the execution order of the FileSinkOperators of the insert overwrite statements. An error may occur due to a manifest file collision, or no error may occur but the result will be incorrect. > Running the same insert query with an external table, or with an ACID table with 'hive.acid.direct.insert.enabled=false', gives the following result: > {noformat} > 11 > 22 > 33 > 44 NULL > 55 NULL > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-23114) Insert overwrite with dynamic partitioning is not working correctly with direct insert
[ https://issues.apache.org/jira/browse/HIVE-23114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078903#comment-17078903 ] Hive QA commented on HIVE-23114: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 47s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 54s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 46s{color} | {color:blue} ql in master has 1528 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 43s{color} | {color:red} ql: The patch generated 1 new + 313 unchanged - 1 fixed = 314 total (was 314) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 3m 55s{color} | {color:red} ql generated 1 new + 1528 unchanged - 0 fixed = 1529 total (was 1528) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 25m 21s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:ql | | | The field org.apache.hadoop.hive.ql.exec.FileSinkOperator.dynamicPartitionSpecs is transient but isn't set by deserialization In FileSinkOperator.java:but isn't set by deserialization In FileSinkOperator.java | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-21524/dev-support/hive-personality.sh | | git revision | master / d2163cb | | Default Java | 1.8.0_111 | | findbugs | v3.0.1 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-21524/yetus/diff-checkstyle-ql.txt | | findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-21524/yetus/new-findbugs-ql.html | | modules | C: ql itests U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-21524/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Insert overwrite with dynamic partitioning is not working correctly with > direct insert > -- > > Key: HIVE-23114 > URL: https://issues.apache.org/jira/browse/HIVE-23114 > Project: Hive > Issue Type: Bug >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Major > Attachments: HIVE-23114.1.patch, HIVE-23114.2.patch, > HIVE-23114.3.patch > > > This is a follow-up Jira for the > [conversation|https://issues.apache.org/jira/browse/HIVE-21164?focusedCommentId=17059280&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17059280] > in HIVE-21164 > Doing an insert overwrite from a multi-insert statement with dy
[jira] [Commented] (HIVE-23058) Compaction task reattempt fails with FileAlreadyExistsException
[ https://issues.apache.org/jira/browse/HIVE-23058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078891#comment-17078891 ] Riju Trivedi commented on HIVE-23058: - [~lpinter] All 3 of the above test failures seem unrelated to the patch. Can you please confirm? > Compaction task reattempt fails with FileAlreadyExistsException > --- > > Key: HIVE-23058 > URL: https://issues.apache.org/jira/browse/HIVE-23058 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.1.0 >Reporter: Riju Trivedi >Assignee: Riju Trivedi >Priority: Major > Attachments: HIVE-23058.2.patch, HIVE_23058.1.patch, HIVE_23058.patch > > > The issue occurs when a compaction task is relaunched after the first task attempt > fails due to preemption by the scheduler or any other reason. > Since the _tmp directory created by the first attempt is left uncleaned > after the task attempt failure, the second attempt of the task fails with a > "FileAlreadyExistsException". > Caused by: > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.FileAlreadyExistsException): > > /warehouse/tablespace/managed/hive/default.db/compaction_test/_tmp_3670bbef-ba7a-4c10-918d-9a2ee17cbd22/base_186/bucket_5 > for client 10.xx.xx.xxx already exists -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-23058) Compaction task reattempt fails with FileAlreadyExistsException
[ https://issues.apache.org/jira/browse/HIVE-23058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078889#comment-17078889 ] Hive QA commented on HIVE-23058: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12999320/HIVE-23058.2.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 18196 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.metastore.TestMetastoreHousekeepingLeaderEmptyConfig.testHouseKeepingThreadExistence (batchId=252) org.apache.hadoop.hive.metastore.TestPartitionManagement.testPartitionDiscoveryEnabledBothTableTypes (batchId=230) org.apache.hadoop.hive.metastore.TestPartitionManagement.testPartitionExprFilter (batchId=230) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/21523/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21523/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21523/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 3 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12999320 - PreCommit-HIVE-Build > Compaction task reattempt fails with FileAlreadyExistsException > --- > > Key: HIVE-23058 > URL: https://issues.apache.org/jira/browse/HIVE-23058 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.1.0 >Reporter: Riju Trivedi >Assignee: Riju Trivedi >Priority: Major > Attachments: HIVE-23058.2.patch, HIVE_23058.1.patch, HIVE_23058.patch > > > The issue occurs when a compaction task is relaunched after the first task attempt > fails due to preemption by the scheduler or any other reason. > Since the _tmp directory created by the first attempt is left uncleaned > after the task attempt failure, the second attempt of the task fails with a > "FileAlreadyExistsException". > Caused by: > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.FileAlreadyExistsException): > > /warehouse/tablespace/managed/hive/default.db/compaction_test/_tmp_3670bbef-ba7a-4c10-918d-9a2ee17cbd22/base_186/bucket_5 > for client 10.xx.xx.xxx already exists -- This message was sent by Atlassian Jira (v8.3.4#803005)
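The failure mode quoted above suggests a simple fix pattern: a task reattempt must treat leftovers from a failed first attempt as disposable. A minimal sketch of that pattern using plain java.nio.file (a hypothetical illustration only, not the actual Hive compactor code, which works against HDFS via the Hadoop FileSystem API):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class IdempotentTmpDir {

    /** Delete any leftover _tmp directory from a failed prior attempt, then recreate it empty. */
    static Path recreate(Path tmpDir) throws IOException {
        if (Files.exists(tmpDir)) {
            // Walk deepest-first so files are deleted before their parent directories.
            try (Stream<Path> walk = Files.walk(tmpDir)) {
                walk.sorted(Comparator.reverseOrder()).forEach(p -> {
                    try {
                        Files.delete(p);
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                });
            }
        }
        return Files.createDirectories(tmpDir);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("compaction-demo").resolve("_tmp_base");
        // Simulate attempt 1: it created the dir and a bucket file, then was preempted.
        Files.createDirectories(tmp);
        Files.createFile(tmp.resolve("bucket_5"));
        // Attempt 2: without the cleanup, createFile would throw FileAlreadyExistsException.
        recreate(tmp);
        Files.createFile(tmp.resolve("bucket_5")); // now succeeds
        System.out.println("reattempt ok"); // prints "reattempt ok"
    }
}
```

The directory and file names here are made up for the demo; the point is only that the reattempt clears stale state before writing, instead of assuming a clean slate.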
[jira] [Commented] (HIVE-23162) Remove swapping logic to merge joins in AST converter
[ https://issues.apache.org/jira/browse/HIVE-23162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078884#comment-17078884 ] Vineet Garg commented on HIVE-23162: +1 pending tests > Remove swapping logic to merge joins in AST converter > - > > Key: HIVE-23162 > URL: https://issues.apache.org/jira/browse/HIVE-23162 > Project: Hive > Issue Type: Bug > Components: CBO >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-23162.01.patch > > > In ASTConverter, there is some logic to invert join inputs so the logic to > merge joins in SemanticAnalyzer kicks in. > https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/ASTConverter.java#L407 > There is a bug because inputs are swapped but the schema is not. -- This message was sent by Atlassian Jira (v8.3.4#803005)
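The "inputs are swapped but the schema is not" bug described above can be illustrated with a toy example (hypothetical; not the ASTConverter code): a join's output schema is positional, so swapping the inputs without remapping the schema points every column name at the wrong position.

```java
import java.util.ArrayList;
import java.util.List;

public class JoinSwapDemo {

    // Toy positional join schema: left columns followed by right columns.
    static List<String> joinSchema(List<String> left, List<String> right) {
        List<String> out = new ArrayList<>(left);
        out.addAll(right);
        return out;
    }

    public static void main(String[] args) {
        List<String> left = List.of("a.key", "a.val");
        List<String> right = List.of("b.key");

        List<String> original = joinSchema(left, right);
        // Swapping the join inputs changes the positional layout of the output...
        List<String> afterSwap = joinSchema(right, left);
        // ...so reusing the pre-swap schema mislabels every column.
        System.out.println(original.equals(afterSwap)); // prints "false"
    }
}
```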
[jira] [Commented] (HIVE-23134) Hive & Kudu interaction not available on ARM
[ https://issues.apache.org/jira/browse/HIVE-23134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078875#comment-17078875 ] RuiChen commented on HIVE-23134: We are working on Kudu ARM support; related issue: https://issues.apache.org/jira/browse/KUDU-3007 > Hive & Kudu interaction not available on ARM > > > Key: HIVE-23134 > URL: https://issues.apache.org/jira/browse/HIVE-23134 > Project: Hive > Issue Type: Sub-task >Reporter: Zhenyu Zheng >Priority: Major > > Currently, we have set up an ARM CI to test how Hive works on the ARM > platform: > https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/ > According to the results, the Hive & Kudu interaction is not available on the ARM > platform: > https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/25/testReport/org.apache.hadoop.hive.kudu/ > This is because we use Kudu version 1.10, and that version does not come > with ARM-compatible packages. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-23162) Remove swapping logic to merge joins in AST converter
[ https://issues.apache.org/jira/browse/HIVE-23162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-23162: --- Description: In ASTConverter, there is some logic to invert join inputs so the logic to merge joins in SemanticAnalyzer kicks in. https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/ASTConverter.java#L407 There is a bug because inputs are swapped but the schema is not. was: In ASTConverter, there is some logic to invert join inputs so the logic to merge joins in SemanticAnalyzer kicks in. https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/ASTConverter.java#L407 There is a bug because inputs are swapped but the schema is not. However, it turns out that logic is not needed now that merging is off by default. > Remove swapping logic to merge joins in AST converter > - > > Key: HIVE-23162 > URL: https://issues.apache.org/jira/browse/HIVE-23162 > Project: Hive > Issue Type: Bug > Components: CBO >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-23162.01.patch > > > In ASTConverter, there is some logic to invert join inputs so the logic to > merge joins in SemanticAnalyzer kicks in. > https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/ASTConverter.java#L407 > There is a bug because inputs are swapped but the schema is not. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-23162) Remove swapping logic to merge joins in AST converter
[ https://issues.apache.org/jira/browse/HIVE-23162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-23162: --- Attachment: (was: HIVE-23162.patch) > Remove swapping logic to merge joins in AST converter > - > > Key: HIVE-23162 > URL: https://issues.apache.org/jira/browse/HIVE-23162 > Project: Hive > Issue Type: Bug > Components: CBO >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-23162.01.patch > > > In ASTConverter, there is some logic to invert join inputs so the logic to > merge joins in SemanticAnalyzer kicks in. > https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/ASTConverter.java#L407 > There is a bug because inputs are swapped but the schema is not. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-23162) Remove swapping logic to merge joins in AST converter
[ https://issues.apache.org/jira/browse/HIVE-23162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-23162: --- Attachment: HIVE-23162.01.patch > Remove swapping logic to merge joins in AST converter > - > > Key: HIVE-23162 > URL: https://issues.apache.org/jira/browse/HIVE-23162 > Project: Hive > Issue Type: Bug > Components: CBO >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-23162.01.patch, HIVE-23162.patch > > > In ASTConverter, there is some logic to invert join inputs so the logic to > merge joins in SemanticAnalyzer kicks in. > https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/ASTConverter.java#L407 > There is a bug because inputs are swapped but the schema is not. However, it > turns out that logic is not needed now that merging is off by default. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-23058) Compaction task reattempt fails with FileAlreadyExistsException
[ https://issues.apache.org/jira/browse/HIVE-23058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078873#comment-17078873 ] Hive QA commented on HIVE-23058: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 56s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 58s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 43s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 58s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 38s{color} | {color:blue} ql in master has 1528 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 41s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 43s{color} | {color:red} ql: The patch generated 1 new + 37 unchanged - 0 fixed = 38 total (was 37) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 19s{color} | {color:red} itests/hive-unit: The patch generated 5 new + 127 unchanged - 0 fixed = 132 total (was 127) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 30m 26s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-21523/dev-support/hive-personality.sh | | git revision | master / d2163cb | | Default Java | 1.8.0_111 | | findbugs | v3.0.1 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-21523/yetus/diff-checkstyle-ql.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-21523/yetus/diff-checkstyle-itests_hive-unit.txt | | modules | C: ql itests/hive-unit U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-21523/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Compaction task reattempt fails with FileAlreadyExistsException > --- > > Key: HIVE-23058 > URL: https://issues.apache.org/jira/browse/HIVE-23058 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.1.0 >Reporter: Riju Trivedi >Assignee: Riju Trivedi >Priority: Major > Attachments: HIVE-23058.2.patch, HIVE_23058.1.patch, HIVE_23058.patch > > > Issue occurs when compaction attempt is relaunched after first task attempt > failure due to preemption by Scheduler or any other reason. > Since _tmp directory was created by first attempt and was left uncleaned >
[jira] [Updated] (HIVE-23162) Remove swapping logic to merge joins in AST converter
[ https://issues.apache.org/jira/browse/HIVE-23162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-23162: --- Attachment: HIVE-23162.patch > Remove swapping logic to merge joins in AST converter > - > > Key: HIVE-23162 > URL: https://issues.apache.org/jira/browse/HIVE-23162 > Project: Hive > Issue Type: Bug > Components: CBO >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-23162.patch > > > In ASTConverter, there is some logic to invert join inputs so the logic to > merge joins in SemanticAnalyzer kicks in. > https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/ASTConverter.java#L407 > There is a bug because inputs are swapped but the schema is not. However, it > turns out that logic is not needed now that merging is off by default. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-23162) Remove swapping logic to merge joins in AST converter
[ https://issues.apache.org/jira/browse/HIVE-23162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-23162: --- Status: Patch Available (was: In Progress) > Remove swapping logic to merge joins in AST converter > - > > Key: HIVE-23162 > URL: https://issues.apache.org/jira/browse/HIVE-23162 > Project: Hive > Issue Type: Bug > Components: CBO >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > > In ASTConverter, there is some logic to invert join inputs so the logic to > merge joins in SemanticAnalyzer kicks in. > https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/ASTConverter.java#L407 > There is a bug because inputs are swapped but the schema is not. However, it > turns out that logic is not needed now that merging is off by default. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HIVE-23162) Remove swapping logic to merge joins in AST converter
[ https://issues.apache.org/jira/browse/HIVE-23162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez reassigned HIVE-23162: -- > Remove swapping logic to merge joins in AST converter > - > > Key: HIVE-23162 > URL: https://issues.apache.org/jira/browse/HIVE-23162 > Project: Hive > Issue Type: Bug > Components: CBO >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > > In ASTConverter, there is some logic to invert join inputs so the logic to > merge joins in SemanticAnalyzer kicks in. > https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/ASTConverter.java#L407 > There is a bug because inputs are swapped but the schema is not. However, it > turns out that logic is not needed now that merging is off by default. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work started] (HIVE-23162) Remove swapping logic to merge joins in AST converter
[ https://issues.apache.org/jira/browse/HIVE-23162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-23162 started by Jesus Camacho Rodriguez. -- > Remove swapping logic to merge joins in AST converter > - > > Key: HIVE-23162 > URL: https://issues.apache.org/jira/browse/HIVE-23162 > Project: Hive > Issue Type: Bug > Components: CBO >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > > In ASTConverter, there is some logic to invert join inputs so the logic to > merge joins in SemanticAnalyzer kicks in. > https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/ASTConverter.java#L407 > There is a bug because inputs are swapped but the schema is not. However, it > turns out that logic is not needed now that merging is off by default. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-23133) Numeric operations can have different result across hardware archs
[ https://issues.apache.org/jira/browse/HIVE-23133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhenyu Zheng updated HIVE-23133: Parent: HIVE-23161 Issue Type: Sub-task (was: Bug) > Numeric operations can have different result across hardware archs > -- > > Key: HIVE-23133 > URL: https://issues.apache.org/jira/browse/HIVE-23133 > Project: Hive > Issue Type: Sub-task >Reporter: Zhenyu Zheng >Priority: Major > > Currently, we have set up an ARM CI to test how Hive works on the ARM > platform: > [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/] > Among the failures, we have observed that some numeric operations can have > different results across hardware archs, such as: > [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/25/testReport/org.apache.hadoop.hive.cli/TestCliDriver/testCliDriver_vector_decimal_udf2_/] > [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/25/testReport/org.apache.hadoop.hive.cli/TestSparkCliDriver/testCliDriver_subquery_select_/] > [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/25/testReport/org.apache.hadoop.hive.cli/TestSparkCliDriver/testCliDriver_vectorized_math_funcs_/] > We can see that the calculation results of log, exp, cos, toRadians, etc. are > slightly different from the .out file results that we are > comparing against (they were generated and written on x86 machines). This is because we > use the [Math > Library|https://docs.oracle.com/javase/6/docs/api/java/lang/Math.html] for > these kinds of calculations, > and according to the > [documentation|https://docs.oracle.com/javase/6/docs/api/java/lang/Math.html]: > _Unlike some of the numeric methods of class StrictMath, all implementations > of the equivalent functions of class Math are not_ > _defined to return the bit-for-bit same results. 
This relaxation permits > better-performing implementations where strict reproducibility_ > _is not required._ > _By default many of the Math methods simply call the equivalent method in > StrictMath for their implementation._ > _Code generators are encouraged to use platform-specific native libraries or > microprocessor instructions, where available,_ > _to provide higher-performance implementations of Math methods._ > So the results can differ across hardware archs. > On the other hand, Java provides another class, > [StrictMath|https://docs.oracle.com/javase/6/docs/api/java/lang/StrictMath.html], > that does not have this kind of problem, according to its > [reference|https://docs.oracle.com/javase/6/docs/api/java/lang/StrictMath.html]: > _To help ensure portability of Java programs, the definitions of some of the > numeric functions in this package require that they produce_ > _the same results as certain published algorithms._ > So in order to fix the above-mentioned problem, we should consider switching > to StrictMath instead of Math. > > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
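The Math vs. StrictMath contrast described above can be sketched with a minimal standalone example (an illustration only, not Hive code): StrictMath is specified to reproduce the fdlibm-derived result bit-for-bit on every platform, while Math may substitute platform intrinsics whose low-order bits differ between x86 and ARM.

```java
public class StrictMathDemo {
    public static void main(String[] args) {
        // Math.log may be compiled to a platform-specific intrinsic, so its
        // low-order bits can differ between x86 and ARM JVMs (each result
        // still stays within 1 ulp of the correctly rounded value).
        double fast = Math.log(2.0);
        // StrictMath.log must return the same bits on every platform, which
        // is what golden-file (.q.out) comparisons need.
        double strict = StrictMath.log(2.0);
        // The two agree to within a couple of ulps on any conforming JVM.
        System.out.println(Math.abs(fast - strict) <= 2 * Math.ulp(strict)); // prints "true"
        // These bits are identical on every architecture.
        System.out.println(Long.toHexString(Double.doubleToLongBits(strict)));
    }
}
```

The trade-off is the one the Javadoc names: StrictMath gives reproducibility, Math gives speed, so switching Hive's math UDFs to StrictMath would fix the cross-arch .q.out diffs at some performance cost.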
[jira] [Updated] (HIVE-23028) Should not use group parameter when run tests in standalone-metastore-common
[ https://issues.apache.org/jira/browse/HIVE-23028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhenyu Zheng updated HIVE-23028: Parent: HIVE-23161 Issue Type: Sub-task (was: Bug) > Should not use group parameter when run tests in standalone-metastore-common > > > Key: HIVE-23028 > URL: https://issues.apache.org/jira/browse/HIVE-23028 > Project: Hive > Issue Type: Sub-task >Reporter: Zhenyu Zheng >Assignee: Zhenyu Zheng >Priority: Major > Attachments: HIVE-23028.1.patch > > > We should not use the group parameter when running tests in standalone-metastore-common. > We inherit the `group` parameter from standalone-metastore's pom.xml, where the parameter > is set to: org.apache.hadoop.hive.metastore.annotation.MetastoreUnitTest > ([https://github.com/apache/hive/blob/master/standalone-metastore/pom.xml#L61]) > which only exists in the metastore-server package: > [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/annotation/MetastoreUnitTest.java] > while the tests in the metastore-common package do not have the @Category annotation: > [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-common/src/test/java/org/apache/hadoop/hive/metastore/utils/RetryTest.java] > > It was actually introduced by: > [https://github.com/apache/hive/commit/7411d42579ffa0bad96e8da731a1a35afc9ff614#diff-171fcb0dda3bcba577fa13720d5b6571] > We should remove the group parameter in > [https://github.com/apache/hive/blob/master/standalone-metastore/pom.xml|https://github.com/apache/hive/blob/master/standalone-metastore/pom.xml#L61] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-23134) Hive & Kudu interaction not available on ARM
[ https://issues.apache.org/jira/browse/HIVE-23134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhenyu Zheng updated HIVE-23134: Parent: HIVE-23161 Issue Type: Sub-task (was: Bug) > Hive & Kudu interaction not available on ARM > > > Key: HIVE-23134 > URL: https://issues.apache.org/jira/browse/HIVE-23134 > Project: Hive > Issue Type: Sub-task >Reporter: Zhenyu Zheng >Priority: Major > > Currently, we have set up an ARM CI to test how Hive works on the ARM > platform: > https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/ > According to the results, the Hive & Kudu interaction is not available on the ARM > platform: > https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/25/testReport/org.apache.hadoop.hive.kudu/ > This is because we use Kudu version 1.10, and that version does not come > with ARM-compatible packages. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-23161) Umbrella issue for Hive on ARM issues
[ https://issues.apache.org/jira/browse/HIVE-23161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhenyu Zheng updated HIVE-23161: Summary: Umbrella issue for Hive on ARM issues (was: Umbrela issue for Hive on ARM issues) > Umbrella issue for Hive on ARM issues > - > > Key: HIVE-23161 > URL: https://issues.apache.org/jira/browse/HIVE-23161 > Project: Hive > Issue Type: Bug >Reporter: Zhenyu Zheng >Priority: Major > > Umbrella issue for Hive on ARM issues -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-23161) Umbrela issue for Hive on ARM issues
[ https://issues.apache.org/jira/browse/HIVE-23161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhenyu Zheng updated HIVE-23161: Description: Umbrella issue for Hive on ARM issues > Umbrela issue for Hive on ARM issues > > > Key: HIVE-23161 > URL: https://issues.apache.org/jira/browse/HIVE-23161 > Project: Hive > Issue Type: Bug >Reporter: Zhenyu Zheng >Priority: Major > > Umbrella issue for Hive on ARM issues -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-23006) Compiler support for Probe MapJoin
[ https://issues.apache.org/jira/browse/HIVE-23006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078864#comment-17078864 ] Hive QA commented on HIVE-23006: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12999306/HIVE-23006.02.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/21522/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21522/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21522/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2020-04-09 02:26:29.657 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-21522/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2020-04-09 02:26:29.661
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at d2163cb HIVE-23128: SHOW CREATE TABLE Creates Incorrect Syntax When Database Specified (David Mollitor, reviewed by Miklos Gergely)
+ git clean -f -d
Removing ${project.basedir}/
Removing itests/${project.basedir}/
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at d2163cb HIVE-23128: SHOW CREATE TABLE Creates Incorrect Syntax When Database Specified (David Mollitor, reviewed by Miklos Gergely)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2020-04-09 02:26:30.860
+ rm -rf ../yetus_PreCommit-HIVE-Build-21522
+ mkdir ../yetus_PreCommit-HIVE-Build-21522
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-21522
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-21522/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch
Trying to apply the patch with -p0
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/TableScanOperator.java: does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/GenTezUtils.java: does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/TezCompiler.java: does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/plan/MapWork.java: does not exist in index
error: a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java: does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ProbeDecodeOptimizer.java: does not exist in index
error: a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java: does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/TableScanOperator.java: does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ProbeDecodeOptimizer.java: does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/GenTezUtils.java: does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/TezCompiler.java: does not exist in index
error: a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java: does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/TableScanOperator.java: does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/SharedWorkOptimizer.java: does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/GenTezUtils.java: does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/TezCompiler.java: does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/plan/TableScanDesc.java: does not exist in index
error: a/itests/src/test/resources/testconfiguration.properties: does not exist in index
Trying to apply
[jira] [Commented] (HIVE-23104) Minimize critical paths of TxnHandler::commitTxn and abortTxn
[ https://issues.apache.org/jira/browse/HIVE-23104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078862#comment-17078862 ]

Hive QA commented on HIVE-23104:

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12999339/HIVE-23104.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 18195 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testMerge3Way02 (batchId=362)
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testMergePartitioned02 (batchId=362)
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testMergeUnpartitioned01 (batchId=362)
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testWriteSetTracking10 (batchId=362)
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testWriteSetTracking11 (batchId=362)
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testWriteSetTracking3 (batchId=362)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/21521/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21521/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21521/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.
ATTACHMENT ID: 12999339 - PreCommit-HIVE-Build > Minimize critical paths of TxnHandler::commitTxn and abortTxn > - > > Key: HIVE-23104 > URL: https://issues.apache.org/jira/browse/HIVE-23104 > Project: Hive > Issue Type: Improvement >Reporter: Marton Bod >Assignee: Marton Bod >Priority: Major > Attachments: HIVE-23104.1.patch, HIVE-23104.1.patch, > HIVE-23104.1.patch, HIVE-23104.2.patch > > > Investigate whether any code sections in TxnHandler::commitTxn and abortTxn > can be lifted out/executed async in order to reduce the overall execution > time of these methods. -- This message was sent by Atlassian Jira (v8.3.4#803005)
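The HIVE-23104 description above asks whether code sections in TxnHandler::commitTxn and abortTxn can be lifted out or executed asynchronously. A minimal sketch of that general idea follows; the class, method, and pool names are hypothetical illustrations, not Hive's actual TxnHandler code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch only: one generic shape for shortening a critical path by moving
// non-critical bookkeeping onto a background executor. All names here are
// invented for illustration; this is not the HIVE-23104 implementation.
public class AsyncOffload {
    static final ExecutorService POOL = Executors.newFixedThreadPool(2);

    static int commitTxn(int txnId) throws Exception {
        int result = txnId;                          // critical work: must complete synchronously
        Future<?> bookkeeping = POOL.submit(() -> auditLog(txnId)); // lifted out of the critical path
        bookkeeping.get();                           // only for this demo; real code would not block
        return result;
    }

    static void auditLog(int txnId) {
        // stand-in for non-critical work, e.g. metrics or notifications
    }

    public static void main(String[] args) throws Exception {
        System.out.println(commitTxn(7));
        POOL.shutdown();
    }
}
```

The trade-off the ticket has to weigh is that anything moved off the caller's thread is no longer covered by the same database transaction or error handling.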
[jira] [Commented] (HIVE-23104) Minimize critical paths of TxnHandler::commitTxn and abortTxn
[ https://issues.apache.org/jira/browse/HIVE-23104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078848#comment-17078848 ] Hive QA commented on HIVE-23104: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 17s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 1m 17s{color} | {color:blue} standalone-metastore/metastore-server in master has 190 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s{color} | {color:red} standalone-metastore/metastore-server: The patch generated 11 new + 532 unchanged - 10 fixed = 543 total (was 542) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 24s{color} | {color:red} standalone-metastore/metastore-server generated 3 new + 190 unchanged - 0 fixed = 193 total (was 190) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 15m 40s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:standalone-metastore/metastore-server | | | org.apache.hadoop.hive.metastore.txn.TxnHandler.moveTxnComponentsToCompleted(Statement, long, char) passes a nonconstant String to an execute or addBatch method on an SQL statement At TxnHandler.java:nonconstant String to an execute or addBatch method on an SQL statement At TxnHandler.java:[line 1366] | | | org.apache.hadoop.hive.metastore.txn.TxnHandler.checkForWriteConflict(Statement, long) passes a nonconstant String to an execute or addBatch method on an SQL statement At TxnHandler.java:String to an execute or addBatch method on an SQL statement At TxnHandler.java:[line 1327] | | | org.apache.hadoop.hive.metastore.txn.TxnHandler.updateKeyValueAssociatedWithTxn(CommitTxnRequest, Statement) passes a nonconstant String to an execute or addBatch method on an SQL statement At TxnHandler.java:String to an execute or addBatch method on an SQL statement At TxnHandler.java:[line 1411] | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-21521/dev-support/hive-personality.sh | | git revision | master / d2163cb | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-21521/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt | | findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-21521/yetus/new-findbugs-standalone-metastore_metastore-server.html | | modules | C: standalone-metastore/metastore-server U: standalone-metastore/metastore-server | | Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21521/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Minimize critical paths of TxnHandler::commitTxn and abortTxn > - > > Key
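The three new FindBugs warnings above all flag the same pattern: a nonconstant String passed to Statement.execute or addBatch, i.e. SQL assembled by concatenation. A short sketch of the flagged pattern versus the preferred constant-SQL-plus-placeholder shape (the table and column names are illustrative, not the actual TxnHandler statements):

```java
// Sketch only: illustrates the FindBugs "nonconstant String passed to an
// execute or addBatch method" pattern. Table/column names are hypothetical.
public class NonConstantSql {
    // Flagged shape: the SQL text varies at runtime, so FindBugs cannot prove
    // that no unescaped input ever becomes part of the statement.
    static String unsafe(long txnid) {
        return "DELETE FROM TXN_COMPONENTS WHERE TC_TXNID = " + txnid;
    }

    // Preferred shape: a constant SQL string with a '?' placeholder; the value
    // would be bound separately (e.g. PreparedStatement#setLong) and stays data.
    static final String SAFE_SQL = "DELETE FROM TXN_COMPONENTS WHERE TC_TXNID = ?";

    public static void main(String[] args) {
        System.out.println(unsafe(42L)); // nonconstant: a different string per txnid
        System.out.println(SAFE_SQL);    // constant: identical string every call
    }
}
```

Concatenating a numeric value as above is not directly injectable, which is why such warnings are sometimes suppressed rather than fixed; the placeholder form silences FindBugs either way.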
[jira] [Commented] (HIVE-23107) Remove MIN_HISTORY_LEVEL table
[ https://issues.apache.org/jira/browse/HIVE-23107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078838#comment-17078838 ] Hive QA commented on HIVE-23107: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12999302/HIVE-23107.06.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 80 failed/errored test(s), 18180 tests executed *Failed tests:* {noformat} TestMiniLlapCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=162) [unionDistinct_1.q,table_nonprintable.q,file_with_header_footer_aggregation.q,orc_llap_counters1.q,mm_cttas.q,whroot_external1.q,global_limit.q,rcfile_createas1.q,dynamic_partition_pruning_2.q,intersect_merge.q,parquet_struct_type_vectorization.q,results_cache_diff_fs.q,parallel_colstats.q,load_hdfs_file_with_space_in_the_name.q,orc_merge3.q] org.apache.hadoop.hive.ql.TestTxnCommands2.testACIDwithSchemaEvolutionAndCompaction (batchId=344) org.apache.hadoop.hive.ql.TestTxnCommands2.testCleanerForTxnToWriteId (batchId=344) org.apache.hadoop.hive.ql.TestTxnCommands2.testInitiatorWithMultipleFailedCompactions (batchId=344) org.apache.hadoop.hive.ql.TestTxnCommands2.testInsertOverwrite1 (batchId=344) org.apache.hadoop.hive.ql.TestTxnCommands2.testInsertOverwrite2 (batchId=344) org.apache.hadoop.hive.ql.TestTxnCommands2.testNonAcidToAcidConversion1 (batchId=344) org.apache.hadoop.hive.ql.TestTxnCommands2.testNonAcidToAcidConversion2 (batchId=344) org.apache.hadoop.hive.ql.TestTxnCommands2.testNonAcidToAcidConversion3 (batchId=344) org.apache.hadoop.hive.ql.TestTxnCommands2.testSchemaEvolutionCompaction (batchId=344) org.apache.hadoop.hive.ql.TestTxnCommands2.writeBetweenWorkerAndCleaner (batchId=344) org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testACIDwithSchemaEvolutionAndCompaction (batchId=358) 
org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testCleanerForTxnToWriteId (batchId=358) org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testInitiatorWithMultipleFailedCompactions (batchId=358) org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testInsertOverwrite1 (batchId=358) org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testInsertOverwrite2 (batchId=358) org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testNonAcidToAcidConversion1 (batchId=358) org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testNonAcidToAcidConversion2 (batchId=358) org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testNonAcidToAcidConversion3 (batchId=358) org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testSchemaEvolutionCompaction (batchId=358) org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.writeBetweenWorkerAndCleaner (batchId=358) org.apache.hadoop.hive.ql.TestTxnCommands3.testCleaner2 (batchId=359) org.apache.hadoop.hive.ql.TestTxnCommands3.testCompactionAbort (batchId=359) org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable.testInsertOverwriteForPartitionedMmTable (batchId=318) org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable.testInsertOverwriteWithUnionAll (batchId=318) org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable.testOperationsOnCompletedTxnComponentsForMmTable (batchId=318) org.apache.hadoop.hive.ql.TestTxnCommandsForOrcMmTable.testInsertOverwriteForPartitionedMmTable (batchId=340) org.apache.hadoop.hive.ql.TestTxnCommandsForOrcMmTable.testInsertOverwriteWithUnionAll (batchId=340) org.apache.hadoop.hive.ql.TestTxnCommandsForOrcMmTable.testOperationsOnCompletedTxnComponentsForMmTable (batchId=340) org.apache.hadoop.hive.ql.TestTxnNoBuckets.testNoBuckets (batchId=344) org.apache.hadoop.hive.ql.TestTxnNoBucketsVectorized.testNoBuckets (batchId=344) 
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testMetastoreTablesCleanup (batchId=362) org.apache.hadoop.hive.ql.txn.compactor.TestCleaner.cleanupAfterMajorPartitionCompaction (batchId=329) org.apache.hadoop.hive.ql.txn.compactor.TestCleaner.cleanupAfterMajorPartitionCompactionNoBase (batchId=329) org.apache.hadoop.hive.ql.txn.compactor.TestCleaner.cleanupAfterMajorTableCompaction (batchId=329) org.apache.hadoop.hive.ql.txn.compactor.TestCleaner.cleanupAfterMinorPartitionCompaction (batchId=329) org.apache.hadoop.hive.ql.txn.compactor.TestCleaner.cleanupAfterMinorTableCompaction (batchId=329) org.apache.hadoop.hive.ql.txn.compactor.TestCleaner2.cleanupAfterMajorPartitionCompaction (batchId=329) org.apache.hadoop.hive.ql.txn.compactor.TestCleaner2.cleanupAfterMajorPartitionCompactionNoBase (batchId=329) org.apache.hadoop.hive.ql.txn.compactor.TestCleaner2.cleanupAfterMajorTableCompaction (batchId=329) org.apache.hadoop.hive.ql.txn.compactor.Te
[jira] [Commented] (HIVE-23107) Remove MIN_HISTORY_LEVEL table
[ https://issues.apache.org/jira/browse/HIVE-23107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078825#comment-17078825 ] Hive QA commented on HIVE-23107: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 54s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 14s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 40s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 5s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 1m 17s{color} | {color:blue} standalone-metastore/metastore-server in master has 190 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 51s{color} | {color:blue} ql in master has 1528 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 38s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} standalone-metastore/metastore-server: The patch generated 0 new + 666 unchanged - 2 fixed = 666 total (was 668) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} The patch ql passed checkstyle {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 21s{color} | {color:green} standalone-metastore/metastore-server generated 0 new + 187 unchanged - 3 fixed = 187 total (was 190) {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 59s{color} | {color:green} ql in the patch passed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 32m 13s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-21520/dev-support/hive-personality.sh | | git revision | master / d2163cb | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: standalone-metastore/metastore-server ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-21520/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Remove MIN_HISTORY_LEVEL table > -- > > Key: HIVE-23107 > URL: https://issues.apache.org/jira/browse/HIVE-23107 > Project: Hive > Issue Type: Improvement > Components: Hive >Reporter: László Pintér >Assignee: László Pintér >Priority: Major > Attachments: HIVE-23107.01.patch, HIVE-23107.02.patch, > HIVE-23107.03.patch, HIVE-23107.04.patch, HIVE-23107.05.patch, > HIVE-23107.06.patch > > > MIN_HISTORY_LEVEL table is used in two places: > * Cleaner uses it to decide if the files can be removed - this could be > replaced by adding a new column to compaction_queue storing t
[jira] [Commented] (HIVE-23048) Use sequences for TXN_ID generation
[ https://issues.apache.org/jira/browse/HIVE-23048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078814#comment-17078814 ] Hive QA commented on HIVE-23048: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12999301/HIVE-23048.2.patch {color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 634 failed/errored test(s), 18199 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[materialized_view_create_rewrite] (batchId=307) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_bloom_filter_orc_file_dump] (batchId=99) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_insert_overwrite] (batchId=56) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_join] (batchId=18) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin] (batchId=12) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_nullscan] (batchId=77) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_stats2] (batchId=54) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_stats3] (batchId=17) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_stats4] (batchId=70) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_stats5] (batchId=24) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_stats] (batchId=45) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_directories_test] (batchId=40) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] (batchId=61) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_vectorization] (batchId=75) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_vectorization_partition] (batchId=84) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_vectorization_project] (batchId=22) 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_view_delete] (batchId=38) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] (batchId=14) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ctas] (batchId=7) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[delete_all_non_partitioned] (batchId=33) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[delete_all_partitioned] (batchId=33) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[delete_orig_table] (batchId=46) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[delete_tmp_table] (batchId=59) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[delete_where_no_match] (batchId=33) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[delete_where_non_partitioned] (batchId=45) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[delete_where_partitioned] (batchId=46) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[delete_whole_partition] (batchId=11) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_acid_dynamic_partition] (batchId=23) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_nonacid_from_acid] (batchId=86) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_orig_table] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_update_delete] (batchId=99) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_values_dynamic_partitioned] (batchId=87) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_values_non_partitioned] (batchId=24) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_values_partitioned] (batchId=88) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_values_tmp_table] (batchId=5) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[llap_reader] (batchId=9) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_mv] (batchId=94) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_acid] (batchId=76) 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_all] (batchId=78) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_buckets] (batchId=69) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_cttas] (batchId=54) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[nullability_transitive_inference] (batchId=39) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_ppd_exception] (batchId=39) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=91) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_nonpart] (batchId=14) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_part2] (batchId=23) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_part] (batchId=55) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[temp_table_insert_values_dynamic_partitioned] (batchId=65) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[temp_table_insert_values_partitioned] (batchId=95) org.apache.hadoop.hive.c
[jira] [Commented] (HIVE-23048) Use sequences for TXN_ID generation
[ https://issues.apache.org/jira/browse/HIVE-23048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078798#comment-17078798 ] Hive QA commented on HIVE-23048: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 56s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 54s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 1s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 22s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 37s{color} | {color:blue} standalone-metastore/metastore-common in master has 35 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 1m 17s{color} | {color:blue} standalone-metastore/metastore-server in master has 190 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 41s{color} | {color:blue} ql in master has 1528 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 12s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 9s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 27s{color} | {color:red} standalone-metastore/metastore-server: The patch generated 15 new + 635 unchanged - 19 fixed = 650 total (was 654) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch 12 line(s) with tabs. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 25s{color} | {color:red} standalone-metastore/metastore-server generated 2 new + 189 unchanged - 1 fixed = 191 total (was 190) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 17s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 41m 41s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:standalone-metastore/metastore-server | | | org.apache.hadoop.hive.metastore.txn.CompactionTxnHandler.findMinOpenTxnGLB(Statement) may fail to clean up java.sql.ResultSet Obligation to clean up resource created at CompactionTxnHandler.java:up java.sql.ResultSet Obligation to clean up resource created at CompactionTxnHandler.java:[line 340] is not discharged | | | org.apache.hadoop.hive.metastore.txn.TxnHandler.deleteInvalidOpenTransactions(Connection, List) may fail to clean up java.sql.Statement Obligation to clean up resource created at TxnHandler.java:clean up java.sql.Statement Obligation to clean up resource created at TxnHandler.java:[line 922] is not discharged | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-21519/dev-support/hive-personality.sh | | git revision | master / d2163cb | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-21519/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-21519/yetus/whitespace-tabs.txt | | findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-21519/yetus/new-findbugs-standalone-metastore_
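The two FindBugs "Obligation to clean up resource ... is not discharged" warnings above mean a java.sql.ResultSet or Statement can escape without close() on some exception path. The standard fix is try-with-resources, sketched below with a stand-in class (FakeResource is hypothetical; the real code would use the JDBC objects directly):

```java
// Sketch: try-with-resources discharges the cleanup obligation automatically,
// closing the resource even when the body throws. FakeResource is a stand-in
// for java.sql.Statement / ResultSet so the example runs without a database.
public class CleanupSketch {
    static class FakeResource implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    static boolean demo() {
        FakeResource leaked = null;
        try (FakeResource r = new FakeResource()) {
            leaked = r;        // keep a handle so we can observe close()
        }                      // close() runs here on every exit path
        return leaked.closed;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "true"
    }
}
```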
[jira] [Updated] (HIVE-23153) deregister from zookeeper is not properly worked on kerberized environment
[ https://issues.apache.org/jira/browse/HIVE-23153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Chung updated HIVE-23153: Attachment: HIVE-23153.02.patch Status: Patch Available (was: Open) [^HIVE-23153.02.patch] * The style error is fixed. * For those two test cases of [^HIVE-23153.01.patch], I don't see relevance. My patch is just for the deregister command of hiveserver2. * org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[cbo_query19] showed a query error : Caused by: MetaException(message:Unable to select from transaction database java.sql.SQLSyntaxErrorException: Table/View 'NEXT_TXN_ID' does not exist. * org.apache.hive.service.auth.TestImproperTrustDomainAuthenticationBinary.org.apache.hive.service.auth.TestImproperTrustDomainAuthenticationBinary showed the listen port binding error. {code:java} Caused by: org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address 0.0.0.0/0.0.0.0:1.Caused by: org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address 0.0.0.0/0.0.0.0:1. at org.apache.thrift.transport.TServerSocket.(TServerSocket.java:109) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.thrift.transport.TServerSocket.(TServerSocket.java:91) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.thrift.transport.TServerSocket.(TServerSocket.java:87) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hadoop.hive.common.auth.HiveAuthUtils.getServerSocket(HiveAuthUtils.java:87) ~[hive-common-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hive.service.cli.thrift.ThriftBinaryCLIService.initServer(ThriftBinaryCLIService.java:86) ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] ... 
23 moreCaused by: java.net.BindException: Address already in use at java.net.PlainSocketImpl.socketBind(Native Method) ~[?:1.8.0_102] at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:387) ~[?:1.8.0_102] at java.net.ServerSocket.bind(ServerSocket.java:375) ~[?:1.8.0_102] at org.apache.thrift.transport.TServerSocket.(TServerSocket.java:106) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.thrift.transport.TServerSocket.(TServerSocket.java:91) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.thrift.transport.TServerSocket.(TServerSocket.java:87) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hadoop.hive.common.auth.HiveAuthUtils.getServerSocket(HiveAuthUtils.java:87) ~[hive-common-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hive.service.cli.thrift.ThriftBinaryCLIService.initServer(ThriftBinaryCLIService.java:86) ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] ... 23 more{code} > deregister from zookeeper is not properly worked on kerberized environment > -- > > Key: HIVE-23153 > URL: https://issues.apache.org/jira/browse/HIVE-23153 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Eugene Chung >Assignee: Eugene Chung >Priority: Minor > Attachments: HIVE-23153.01.patch, HIVE-23153.02.patch, Screen Shot > 2020-04-08 at 5.00.40.png > > > Deregistering from Zookeeper, initiated by the command 'hive --service > hiveserver2 -deregister ', does not work properly when HiveServer2 > and Zookeeper are kerberized. Even though hive-site.xml has configuration for > Zookeeper Kerberos login (hive.server2.authentication.kerberos.principal and > keytab), it isn't used. I know that kinit with the hiveserver2 keytab would make > it work. But as I said, hive-site.xml already has those settings, so the user shouldn't > need to do kinit. 
> * Kerberos login to Zookeeper Failed : Will not attempt to authenticate > using SASL (unknown error) > {code:java} > 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: > -78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-hdfs-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-core-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-yarn-server-timeline-pluginstorage-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-client-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jettison-1.3.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-server-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-util-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/metrics-core-3.1.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/servlet-api-2.5.jar:/usr/hdp/3.1.0.0-78/tez/lib/slf4j-
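For context on the "Will not attempt to authenticate using SASL" error quoted above: the ZooKeeper client only attempts SASL/Kerberos when it can log in via JAAS. One common workaround (a sketch only; the keytab path and principal below are placeholders, and the fix proposed in the patch is to drive this from hive-site.xml instead) is to supply a JAAS file with a `Client` section and point the JVM at it with `-Djava.security.auth.login.config=/path/to/jaas.conf`:

```
// Hypothetical JAAS configuration; replace keyTab and principal with real values.
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  useTicketCache=false
  keyTab="/etc/security/keytabs/hiveserver2.keytab"
  principal="hive/host.example.com@EXAMPLE.COM";
};
```

With this in place the ZooKeeper client logs in from the keytab, so no prior kinit is needed, which is the same outcome the reporter wants the hive-site.xml principal/keytab settings to provide for the deregister path.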
[jira] [Commented] (HIVE-23004) Support Decimal64 operations across multiple vertices
[ https://issues.apache.org/jira/browse/HIVE-23004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078773#comment-17078773 ] Ramesh Kumar Thangarajan commented on HIVE-23004: - Hi [~ashutoshc] [~gopalv] Can you please review the patch in the PR [https://github.com/apache/hive/pull/973] > Support Decimal64 operations across multiple vertices > - > > Key: HIVE-23004 > URL: https://issues.apache.org/jira/browse/HIVE-23004 > Project: Hive > Issue Type: Bug >Reporter: Ramesh Kumar Thangarajan >Assignee: Ramesh Kumar Thangarajan >Priority: Major > Labels: pull-request-available > Attachments: HIVE-23004.1.patch, HIVE-23004.10.patch, > HIVE-23004.11.patch, HIVE-23004.12.patch, HIVE-23004.13.patch, > HIVE-23004.14.patch, HIVE-23004.15.patch, HIVE-23004.16.patch, > HIVE-23004.17.patch, HIVE-23004.2.patch, HIVE-23004.4.patch, > HIVE-23004.6.patch, HIVE-23004.7.patch, HIVE-23004.8.patch, HIVE-23004.9.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Support Decimal64 operations across multiple vertices -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (HIVE-23004) Support Decimal64 operations across multiple vertices
[ https://issues.apache.org/jira/browse/HIVE-23004?focusedWorklogId=418944&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-418944 ] ASF GitHub Bot logged work on HIVE-23004: - Author: ASF GitHub Bot Created on: 08/Apr/20 22:27 Start Date: 08/Apr/20 22:27 Worklog Time Spent: 10m Work Description: ramesh0201 commented on pull request #973: HIVE-23004 Support Decimal64 operations across multiple vertices URL: https://github.com/apache/hive/pull/973 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 418944) Remaining Estimate: 0h Time Spent: 10m > Support Decimal64 operations across multiple vertices > - > > Key: HIVE-23004 > URL: https://issues.apache.org/jira/browse/HIVE-23004 > Project: Hive > Issue Type: Bug >Reporter: Ramesh Kumar Thangarajan >Assignee: Ramesh Kumar Thangarajan >Priority: Major > Labels: pull-request-available > Attachments: HIVE-23004.1.patch, HIVE-23004.10.patch, > HIVE-23004.11.patch, HIVE-23004.12.patch, HIVE-23004.13.patch, > HIVE-23004.14.patch, HIVE-23004.15.patch, HIVE-23004.16.patch, > HIVE-23004.17.patch, HIVE-23004.2.patch, HIVE-23004.4.patch, > HIVE-23004.6.patch, HIVE-23004.7.patch, HIVE-23004.8.patch, HIVE-23004.9.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Support Decimal64 operations across multiple vertices -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-23004) Support Decimal64 operations across multiple vertices
[ https://issues.apache.org/jira/browse/HIVE-23004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-23004: -- Labels: pull-request-available (was: ) > Support Decimal64 operations across multiple vertices > - > > Key: HIVE-23004 > URL: https://issues.apache.org/jira/browse/HIVE-23004 > Project: Hive > Issue Type: Bug >Reporter: Ramesh Kumar Thangarajan >Assignee: Ramesh Kumar Thangarajan >Priority: Major > Labels: pull-request-available > Attachments: HIVE-23004.1.patch, HIVE-23004.10.patch, > HIVE-23004.11.patch, HIVE-23004.12.patch, HIVE-23004.13.patch, > HIVE-23004.14.patch, HIVE-23004.15.patch, HIVE-23004.16.patch, > HIVE-23004.17.patch, HIVE-23004.2.patch, HIVE-23004.4.patch, > HIVE-23004.6.patch, HIVE-23004.7.patch, HIVE-23004.8.patch, HIVE-23004.9.patch > > > Support Decimal64 operations across multiple vertices -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22750) Consolidate LockType naming
[ https://issues.apache.org/jira/browse/HIVE-22750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marton Bod updated HIVE-22750: -- Attachment: HIVE-22750.12.patch > Consolidate LockType naming > --- > > Key: HIVE-22750 > URL: https://issues.apache.org/jira/browse/HIVE-22750 > Project: Hive > Issue Type: Improvement > Components: Transactions >Reporter: Zoltan Chovan >Assignee: Marton Bod >Priority: Minor > Attachments: HIVE-22750.1.patch, HIVE-22750.10.patch, > HIVE-22750.11.patch, HIVE-22750.12.patch, HIVE-22750.12.patch, > HIVE-22750.2.patch, HIVE-22750.3.patch, HIVE-22750.4.patch, > HIVE-22750.5.patch, HIVE-22750.5.patch, HIVE-22750.6.patch, > HIVE-22750.7.patch, HIVE-22750.8.patch, HIVE-22750.9.patch, > HIVE-22750.9.patch, HIVE-22750.9.patch, HIVE-22750.9.patch > > > Extend enum with string literal to remove unnecessary `id` to `char` casting > for the LockType: > {code:java} > switch (lockType) { > case EXCLUSIVE: > lockChar = LOCK_EXCLUSIVE; > break; > case SHARED_READ: > lockChar = LOCK_SHARED; > break; > case SHARED_WRITE: > lockChar = LOCK_SEMI_SHARED; > break; > } > {code} > Consolidate LockType naming in code and schema upgrade scripts: > {code:java} > CASE WHEN HL.`HL_LOCK_TYPE` = 'e' THEN 'exclusive' WHEN HL.`HL_LOCK_TYPE` = > 'r' THEN 'shared' WHEN HL.`HL_LOCK_TYPE` = 'w' THEN *'semi-shared'* END AS > LOCK_TYPE, > {code} > +*Lock types:*+ > EXCLUSIVE > EXCL_WRITE > SHARED_WRITE > SHARED_READ -- This message was sent by Atlassian Jira (v8.3.4#803005)
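The HIVE-22750 description proposes extending the enum with the lock character itself so the switch-based id-to-char mapping disappears. A minimal sketch of that shape is below; the characters 'e', 'r', 'w' come from the SQL CASE in the description, while the character for EXCL_WRITE is a placeholder assumption, since the ticket does not state it.

```java
// Hypothetical sketch of the proposed change: each LockType constant carries
// its own schema character, so callers use getSqlChar() instead of a switch.
public class LockTypes {
    public enum LockType {
        EXCLUSIVE('e'),
        EXCL_WRITE('x'),   // placeholder: the real character is not given in the ticket
        SHARED_WRITE('w'),
        SHARED_READ('r');

        private final char sqlChar;

        LockType(char sqlChar) {
            this.sqlChar = sqlChar;
        }

        public char getSqlChar() {
            return sqlChar;
        }
    }

    public static void main(String[] args) {
        // No switch statement and no id-to-char casting needed at call sites.
        System.out.println(LockType.SHARED_READ.getSqlChar());
    }
}
```

The design point is that the enum becomes the single source of truth for the character mapping, which is what lets the code and the schema upgrade scripts stay consistent.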
[jira] [Commented] (HIVE-23076) Add batching for openTxn
[ https://issues.apache.org/jira/browse/HIVE-23076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078768#comment-17078768 ] Hive QA commented on HIVE-23076: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12999299/HIVE-23076.11.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 18195 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/21518/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21518/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21518/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12999299 - PreCommit-HIVE-Build > Add batching for openTxn > > > Key: HIVE-23076 > URL: https://issues.apache.org/jira/browse/HIVE-23076 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Attachments: HIVE-23076.10.patch, HIVE-23076.11.patch, > HIVE-23076.2.patch, HIVE-23076.3.patch, HIVE-23076.4.patch, > HIVE-23076.5.patch, HIVE-23076.6.patch, HIVE-23076.7.patch, > HIVE-23076.8.patch, HIVE-23076.9.patch, HIVE-23076.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-23153) deregister from zookeeper is not properly worked on kerberized environment
[ https://issues.apache.org/jira/browse/HIVE-23153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Chung updated HIVE-23153: Status: Open (was: Patch Available) > deregister from zookeeper is not properly worked on kerberized environment > -- > > Key: HIVE-23153 > URL: https://issues.apache.org/jira/browse/HIVE-23153 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Eugene Chung >Assignee: Eugene Chung >Priority: Minor > Attachments: HIVE-23153.01.patch, Screen Shot 2020-04-08 at > 5.00.40.png > > > Deregistering from ZooKeeper, initiated by the command 'hive --service > hiveserver2 -deregister ', does not work properly when HiveServer2 > and ZooKeeper are kerberized. Even though hive-site.xml has the configuration for > ZooKeeper Kerberos login (hive.server2.authentication.kerberos.principal and > keytab), it isn't used. I know that kinit with the hiveserver2 keytab would make > it work, but hive-site.xml already has the configuration, so the user shouldn't > need to do kinit. 
> * Kerberos login to Zookeeper Failed : Will not attempt to authenticate > using SASL (unknown error) > {code:java} > 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: > -78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-hdfs-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-core-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-yarn-server-timeline-pluginstorage-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-client-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jettison-1.3.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-server-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-util-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/metrics-core-3.1.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/servlet-api-2.5.jar:/usr/hdp/3.1.0.0-78/tez/lib/slf4j-api-1.7.10.jar:/usr/hdp/3.1.0.0-78/tez/conf > 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: Client > environment:java.library.path=: > 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: Client > environment:java.io.tmpdir=/tmp > 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: Client > environment:java.compiler= > 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: Client > environment:os.name=Linux > 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: Client > environment:os.arch=amd64 > 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: Client > environment:os.version=... > 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: Client > environment:user.name=... > 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: Client > environment:user.home=... 
> 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: Client > environment:user.dir=... > 2020-04-08 04:45:44,699 INFO [main] zookeeper.ZooKeeper: Initiating client > connection, connectString=... sessionTimeout=6 > watcher=org.apache.curator.ConnectionState@706eab5d > 2020-04-08 04:45:44,725 INFO [main-SendThread(...)] zookeeper.ClientCnxn: > Opening socket connection to server ...:2181. Will not attempt to > authenticate using SASL (unknown error) > 2020-04-08 04:45:44,731 INFO [main-SendThread(...:2181)] > zookeeper.ClientCnxn: Socket connection established to ...:2181, initiating > session > 2020-04-08 04:45:44,743 INFO [main-SendThread(...:2181)] > zookeeper.ClientCnxn: Session establishment complete on server ...:2181, > sessionid = 0x27148fd2ab1002e, negotiated timeout = 6 > 2020-04-08 04:45:44,751 INFO [main-EventThread] state.ConnectionStateManager: > State change: CONNECTED > 2020-04-08 04:45:44,760 WARN [main] server.HiveServer2: Will attempt to > remove the znode: > /hiveserver2/serverUri=...;version=3.1.2;sequence=49 from ZooKeeper > Will attempt to remove the znode: > /hiveserver2/serverUri=...;version=3.1.2;sequence=49 from ZooKeeper > 2020-04-08 04:45:44,768 INFO [Curator-Framework-0] imps.CuratorFrameworkImpl: > backgroundOperationsLoop exiting > 2020-04-08 04:45:44,771 INFO [main] zookeeper.ZooKeeper: Session: > 0x27148fd2ab1002e closed > 2020-04-08 04:45:44,771 INFO [main-EventThread] zookeeper.ClientCnxn: > EventThread shut down > 2020-04-08 04:45:44,794 INFO [shutdown-hook-0] server.HiveServer2: > SHUTDOWN_MSG: > / > SHUTDOWN_MSG: Shutting down HiveServer2 at ... > *
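The log shows the ZooKeeper client falling back to an unauthenticated connection ("Will not attempt to authenticate using SASL"). One common way a client wires a keytab into ZooKeeper SASL is to generate a JAAS "Client" section and point the standard JVM property at it before connecting. The sketch below is hypothetical and not Hive's actual fix; the method and argument names are illustrative, and the principal/keytab would come from the hive.server2.authentication.kerberos.* settings the reporter mentions.

```java
import java.io.File;
import java.io.PrintWriter;

// Hypothetical sketch: build a JAAS config from the principal/keytab already
// present in hive-site.xml and register it before opening the ZooKeeper
// connection, so the client attempts SASL instead of skipping it.
public class ZkJaasSetup {
    static void configureJaas(String principal, String keytab) throws Exception {
        File jaas = File.createTempFile("zk-client-jaas", ".conf");
        try (PrintWriter out = new PrintWriter(jaas, "UTF-8")) {
            // "Client" is the login context name the ZooKeeper client looks up
            // by default (configurable via zookeeper.sasl.clientconfig).
            out.println("Client {");
            out.println("  com.sun.security.auth.module.Krb5LoginModule required");
            out.println("  useKeyTab=true");
            out.println("  keyTab=\"" + keytab + "\"");
            out.println("  principal=\"" + principal + "\";");
            out.println("};");
        }
        // Standard JVM property that JAAS (and thus the ZooKeeper client) reads.
        System.setProperty("java.security.auth.login.config", jaas.getAbsolutePath());
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical principal and keytab path, for illustration only.
        configureJaas("hive/_HOST@EXAMPLE.COM", "/etc/security/keytabs/hive.keytab");
        System.out.println(System.getProperty("java.security.auth.login.config"));
    }
}
```

If nothing registers a login context like this before the Curator client starts, the client logs exactly the "Will not attempt to authenticate using SASL (unknown error)" line seen above and proceeds unauthenticated.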
[jira] [Commented] (HIVE-23076) Add batching for openTxn
[ https://issues.apache.org/jira/browse/HIVE-23076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078728#comment-17078728 ] Hive QA commented on HIVE-23076: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 23s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 1m 10s{color} | {color:blue} standalone-metastore/metastore-server in master has 190 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 23s{color} | {color:red} standalone-metastore/metastore-server: The patch generated 1 new + 542 unchanged - 0 fixed = 543 total (was 542) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 19s{color} | {color:green} standalone-metastore/metastore-server generated 0 new + 189 unchanged - 1 fixed = 189 total (was 190) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 15m 35s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-21518/dev-support/hive-personality.sh | | git revision | master / d2163cb | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-21518/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt | | modules | C: standalone-metastore/metastore-server U: standalone-metastore/metastore-server | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-21518/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Add batching for openTxn > > > Key: HIVE-23076 > URL: https://issues.apache.org/jira/browse/HIVE-23076 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Attachments: HIVE-23076.10.patch, HIVE-23076.11.patch, > HIVE-23076.2.patch, HIVE-23076.3.patch, HIVE-23076.4.patch, > HIVE-23076.5.patch, HIVE-23076.6.patch, HIVE-23076.7.patch, > HIVE-23076.8.patch, HIVE-23076.9.patch, HIVE-23076.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-23145) get_partitions_with_specs fails if filter expression is not parsable
[ https://issues.apache.org/jira/browse/HIVE-23145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-23145: --- Status: Patch Available (was: Open) > get_partitions_with_specs fails if filter expression is not parsable > > > Key: HIVE-23145 > URL: https://issues.apache.org/jira/browse/HIVE-23145 > Project: Hive > Issue Type: Sub-task > Components: Metastore >Affects Versions: 4.0.0 >Reporter: Vineet Garg >Priority: Major > Attachments: HIVE-23145.1.patch > > > The expression is not parsable in most cases. The current API > *get_partitions_by_expr* anticipates this and provides a fallback mechanism: > it deserializes the provided expression, fetches all partition > names for the table, prunes the partition names using the expression, and then uses > the surviving names to fetch the required partition data. > Note that this expects a serialized expression instead of a string. > This needs to be done for both the Direct SQL and JDO paths. > e.g. the following error is thrown for TPC-DS query 55, which provides an > *IS NOT NULL filter* expression > *ERROR* > {code:java} > MetaException(message:Error parsing partition filter; lexer error: null; > exception NoViableAltException(13@[]))MetaException(message:Error parsing > partition filter; lexer error: null; exception NoViableAltException(13@[])) > at > org.apache.hadoop.hive.metastore.PartFilterExprUtil.getFilterParser(PartFilterExprUtil.java:154) > at > org.apache.hadoop.hive.metastore.ObjectStore$15.initExpressionTree(ObjectStore.java:4339) > at > org.apache.hadoop.hive.metastore.ObjectStore$15.canUseDirectSql(ObjectStore.java:4319) > at > org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.start(ObjectStore.java:4021) > at > org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:3985) > at > org.apache.hadoop.hive.metastore.ObjectStore.getPartitionSpecsByFilterAndProjection(ObjectStore.java:4395) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97) > at com.sun.proxy.$Proxy26.getPartitionSpecsByFilterAndProjection(Unknown > Source) at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_with_specs(HiveMetaStore.java:5356) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108) > at com.sun.proxy.$Proxy27.get_partitions_with_specs(Unknown Source) at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_partitions_with_specs.getResult(ThriftHiveMetastore.java:21620) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_partitions_with_specs.getResult(ThriftHiveMetastore.java:21604) > at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at > org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) at > org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:643) > at > org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:638) > at java.security.AccessController.doPrivileged(Native Method) at > javax.security.auth.Subject.doAs(Subject.java:422) at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876) > at > 
org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:638) > at > org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
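The fallback the description outlines — deserialize the expression, list all partition names, prune the names client-side, then fetch only the survivors — can be sketched as follows. This is a hypothetical illustration, not the metastore's actual code: the method name and the Predicate stand-in for the deserialized expression are assumptions.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

// Hypothetical sketch of the fallback path: when the filter string cannot be
// parsed server-side, prune the full partition-name list with the
// deserialized expression and fetch only the matching partitions by name.
public class PartitionPruneFallback {
    static List<String> getPartitionsWithFallback(
            List<String> allPartitionNames,       // e.g. from listPartitionNames(table)
            Predicate<String> deserializedExpr) { // stand-in for the serialized expression
        List<String> pruned = new ArrayList<>();
        for (String name : allPartitionNames) {
            if (deserializedExpr.test(name)) {    // prune names, not full partition objects
                pruned.add(name);
            }
        }
        // A real implementation would now fetch partition data for `pruned` only.
        return pruned;
    }

    public static void main(String[] args) {
        List<String> names = Arrays.asList("dt=2020-04-01", "dt=__HIVE_DEFAULT_PARTITION__");
        // An "IS NOT NULL"-style filter drops the default (null) partition.
        List<String> kept = getPartitionsWithFallback(
                names, n -> !n.contains("__HIVE_DEFAULT_PARTITION__"));
        System.out.println(kept);
    }
}
```

The point of the two-phase design is that pruning operates on cheap partition names, so the expensive fetch of full partition metadata touches only partitions that pass the filter.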
[jira] [Updated] (HIVE-23145) get_partitions_with_specs fails if filter expression is not parsable
[ https://issues.apache.org/jira/browse/HIVE-23145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-23145: --- Attachment: HIVE-23145.1.patch > get_partitions_with_specs fails if filter expression is not parsable > > > Key: HIVE-23145 > URL: https://issues.apache.org/jira/browse/HIVE-23145 > Project: Hive > Issue Type: Sub-task > Components: Metastore >Affects Versions: 4.0.0 >Reporter: Vineet Garg >Priority: Major > Attachments: HIVE-23145.1.patch > > > The expression is not parsable in most cases. The current API > *get_partitions_by_expr* anticipates this and provides a fallback mechanism: > it deserializes the provided expression, fetches all partition > names for the table, prunes the partition names using the expression, and then uses > the surviving names to fetch the required partition data. > Note that this expects a serialized expression instead of a string. > This needs to be done for both the Direct SQL and JDO paths. > e.g. the following error is thrown for TPC-DS query 55, which provides an > *IS NOT NULL filter* expression > *ERROR* > {code:java} > MetaException(message:Error parsing partition filter; lexer error: null; > exception NoViableAltException(13@[]))MetaException(message:Error parsing > partition filter; lexer error: null; exception NoViableAltException(13@[])) > at > org.apache.hadoop.hive.metastore.PartFilterExprUtil.getFilterParser(PartFilterExprUtil.java:154) > at > org.apache.hadoop.hive.metastore.ObjectStore$15.initExpressionTree(ObjectStore.java:4339) > at > org.apache.hadoop.hive.metastore.ObjectStore$15.canUseDirectSql(ObjectStore.java:4319) > at > org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.start(ObjectStore.java:4021) > at > org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:3985) > at > org.apache.hadoop.hive.metastore.ObjectStore.getPartitionSpecsByFilterAndProjection(ObjectStore.java:4395) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97) > at com.sun.proxy.$Proxy26.getPartitionSpecsByFilterAndProjection(Unknown > Source) at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_with_specs(HiveMetaStore.java:5356) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108) > at com.sun.proxy.$Proxy27.get_partitions_with_specs(Unknown Source) at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_partitions_with_specs.getResult(ThriftHiveMetastore.java:21620) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_partitions_with_specs.getResult(ThriftHiveMetastore.java:21604) > at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at > org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) at > org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:643) > at > org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:638) > at java.security.AccessController.doPrivileged(Native Method) at > javax.security.auth.Subject.doAs(Subject.java:422) at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876) > at > 
org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:638) > at > org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HIVE-23145) get_partitions_with_specs fails if filter expression is not parsable
[ https://issues.apache.org/jira/browse/HIVE-23145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg reassigned HIVE-23145: -- Assignee: Vineet Garg > get_partitions_with_specs fails if filter expression is not parsable > > > Key: HIVE-23145 > URL: https://issues.apache.org/jira/browse/HIVE-23145 > Project: Hive > Issue Type: Sub-task > Components: Metastore >Affects Versions: 4.0.0 >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-23145.1.patch > > > The expression is not parsable in most cases. The current API > *get_partitions_by_expr* anticipates this and provides a fallback mechanism: > it deserializes the provided expression, fetches all partition > names for the table, prunes the partition names using the expression, and then uses > the surviving names to fetch the required partition data. > Note that this expects a serialized expression instead of a string. > This needs to be done for both the Direct SQL and JDO paths. > e.g. the following error is thrown for TPC-DS query 55, which provides an > *IS NOT NULL filter* expression > *ERROR* > {code:java} > MetaException(message:Error parsing partition filter; lexer error: null; > exception NoViableAltException(13@[]))MetaException(message:Error parsing > partition filter; lexer error: null; exception NoViableAltException(13@[])) > at > org.apache.hadoop.hive.metastore.PartFilterExprUtil.getFilterParser(PartFilterExprUtil.java:154) > at > org.apache.hadoop.hive.metastore.ObjectStore$15.initExpressionTree(ObjectStore.java:4339) > at > org.apache.hadoop.hive.metastore.ObjectStore$15.canUseDirectSql(ObjectStore.java:4319) > at > org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.start(ObjectStore.java:4021) > at > org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:3985) > at > org.apache.hadoop.hive.metastore.ObjectStore.getPartitionSpecsByFilterAndProjection(ObjectStore.java:4395) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97) > at com.sun.proxy.$Proxy26.getPartitionSpecsByFilterAndProjection(Unknown > Source) at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_with_specs(HiveMetaStore.java:5356) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108) > at com.sun.proxy.$Proxy27.get_partitions_with_specs(Unknown Source) at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_partitions_with_specs.getResult(ThriftHiveMetastore.java:21620) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_partitions_with_specs.getResult(ThriftHiveMetastore.java:21604) > at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at > org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) at > org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:643) > at > org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:638) > at java.security.AccessController.doPrivileged(Native Method) at > javax.security.auth.Subject.doAs(Subject.java:422) at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876) > at > 
org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:638) > at > org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22750) Consolidate LockType naming
[ https://issues.apache.org/jira/browse/HIVE-22750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078705#comment-17078705 ] Hive QA commented on HIVE-22750: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12999344/HIVE-22750.12.patch {color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 18192 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.ql.parse.TestStatsReplicationScenariosACID.org.apache.hadoop.hive.ql.parse.TestStatsReplicationScenariosACID (batchId=262) org.apache.hadoop.hive.ql.schq.TestScheduledQueryStatements.testExecuteImmediate (batchId=359) org.apache.hive.jdbc.TestActivePassiveHA.testActivePassiveHA (batchId=292) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/21517/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21517/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21517/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 3 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12999344 - PreCommit-HIVE-Build > Consolidate LockType naming > --- > > Key: HIVE-22750 > URL: https://issues.apache.org/jira/browse/HIVE-22750 > Project: Hive > Issue Type: Improvement > Components: Transactions >Reporter: Zoltan Chovan >Assignee: Marton Bod >Priority: Minor > Attachments: HIVE-22750.1.patch, HIVE-22750.10.patch, > HIVE-22750.11.patch, HIVE-22750.12.patch, HIVE-22750.2.patch, > HIVE-22750.3.patch, HIVE-22750.4.patch, HIVE-22750.5.patch, > HIVE-22750.5.patch, HIVE-22750.6.patch, HIVE-22750.7.patch, > HIVE-22750.8.patch, HIVE-22750.9.patch, HIVE-22750.9.patch, > HIVE-22750.9.patch, HIVE-22750.9.patch > > > Extend enum with string literal to remove unnecessary `id` to `char` casting > for the LockType: > {code:java} > switch (lockType) { > case EXCLUSIVE: > lockChar = LOCK_EXCLUSIVE; > break; > case SHARED_READ: > lockChar = LOCK_SHARED; > break; > case SHARED_WRITE: > lockChar = LOCK_SEMI_SHARED; > break; > } > {code} > Consolidate LockType naming in code and schema upgrade scripts: > {code:java} > CASE WHEN HL.`HL_LOCK_TYPE` = 'e' THEN 'exclusive' WHEN HL.`HL_LOCK_TYPE` = > 'r' THEN 'shared' WHEN HL.`HL_LOCK_TYPE` = 'w' THEN *'semi-shared'* END AS > LOCK_TYPE, > {code} > +*Lock types:*+ > EXCLUSIVE > EXCL_WRITE > SHARED_WRITE > SHARED_READ -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HIVE-23160) get_partitions_with_specs fail to close the query
[ https://issues.apache.org/jira/browse/HIVE-23160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg resolved HIVE-23160. Resolution: Cannot Reproduce > get_partitions_with_specs fail to close the query > - > > Key: HIVE-23160 > URL: https://issues.apache.org/jira/browse/HIVE-23160 > Project: Hive > Issue Type: Sub-task > Components: Metastore, Standalone Metastore >Affects Versions: 4.0.0 >Reporter: Vineet Garg >Priority: Major > > The API relies on a try block to close the resource (the query), but it fails (likely because try is calling close() when closeAll() needs to be called instead). -- This message was sent by Atlassian Jira (v8.3.4#803005)
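The suspected failure mode can be pictured with a toy resource; everything below is a hypothetical illustration (not the actual metastore query class), using try-with-resources as a stand-in for the closing mechanism: `close()` releases only one open result, while `closeAll()` is what releases everything.

```java
// Illustrative resource: close() releases a single open result, while
// closeAll() releases every open result. Try-with-resources only ever
// calls close(), so sub-resources leak -- the behaviour the ticket suspects.
class TrackingQuery implements AutoCloseable {
    int openResults = 3; // pretend three result sets are open

    @Override
    public void close() {
        openResults -= 1; // releases only one
    }

    void closeAll() {
        openResults = 0; // releases everything
    }

    static int leakedAfterTryWithResources() {
        TrackingQuery q = new TrackingQuery();
        try (q) {
            // use the query...
        }
        return q.openResults; // two results remain open
    }

    static int leakedAfterCloseAll() {
        TrackingQuery q = new TrackingQuery();
        q.closeAll();
        return q.openResults; // nothing remains open
    }
}
```

If this model matches the real behaviour, the fix is to call the all-releasing method explicitly (e.g. in a finally block) rather than relying on the try construct's automatic `close()`.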
[jira] [Commented] (HIVE-22750) Consolidate LockType naming
[ https://issues.apache.org/jira/browse/HIVE-22750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078691#comment-17078691 ] Hive QA commented on HIVE-22750: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 12s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 37s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 57s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 49s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 36s{color} | {color:blue} standalone-metastore/metastore-common in master has 35 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 1m 9s{color} | {color:blue} standalone-metastore/metastore-server in master has 190 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 41s{color} | {color:blue} ql in master has 1528 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 29s{color} | {color:blue} hcatalog/streaming in master has 11 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 27s{color} | {color:blue} streaming in master has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 51s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 58s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 25s{color} | {color:red} standalone-metastore/metastore-server: The patch generated 45 new + 525 unchanged - 44 fixed = 570 total (was 569) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 9m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 51s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 14s{color} | {color:red} The patch generated 2 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 49m 16s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-21517/dev-support/hive-personality.sh | | git revision | master / d2163cb | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-21517/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-21517/yetus/patch-asflicense-problems.txt | | modules | C: standalone-metastore/metastore-common metastore standalone-metastore/metastore-server ql hcatalog/streaming streaming U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-21517/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Consolidate LockType naming > --- > > Key: HIVE-22750 > URL: https://issues.apache.org/jira/browse/HIVE-22750 > Project: Hive > Issue Type: Improvement > Components:
[jira] [Work logged] (HIVE-23006) Compiler support for Probe MapJoin
[ https://issues.apache.org/jira/browse/HIVE-23006?focusedWorklogId=418829&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-418829 ] ASF GitHub Bot logged work on HIVE-23006: - Author: ASF GitHub Bot Created on: 08/Apr/20 19:45 Start Date: 08/Apr/20 19:45 Worklog Time Spent: 10m Work Description: jcamachor commented on pull request #952: HIVE-23006 ProbeDecode compiler support URL: https://github.com/apache/hive/pull/952#discussion_r405770526 ## File path: ql/src/java/org/apache/hadoop/hive/ql/parse/GenTezUtils.java ## @@ -196,6 +196,11 @@ public MapWork createMapWork(GenTezProcContext context, Operator root, mapWork.setIncludedBuckets(ts.getConf().getIncludedBuckets()); } +if (ts.getProbeDecodeContext() != null) { + // TODO: some operators like VectorPTFEvaluator do not allow the use of Selected take this into account here? Review comment: I think this may be taken into account when we are pushing the SJ predicates down since it would not be valid in that case either (which I believe your logic to create the context is relying on?). Could you verify that? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 418829) Time Spent: 2h 20m (was: 2h 10m) > Compiler support for Probe MapJoin > -- > > Key: HIVE-23006 > URL: https://issues.apache.org/jira/browse/HIVE-23006 > Project: Hive > Issue Type: Sub-task >Reporter: Panagiotis Garefalakis >Assignee: Panagiotis Garefalakis >Priority: Major > Labels: pull-request-available > Attachments: HIVE-23006.01.patch, HIVE-23006.02.patch > > Time Spent: 2h 20m > Remaining Estimate: 0h > > The decision of pushing down information to the Record reader (potentially > reducing decoding time by row-level filtering) should be done at query > compilation time. 
> This patch adds an extra optimisation step with the goal of finding Table Scan operators that could reduce the number of rows decoded at runtime using extra available information. > It currently looks for all the available MapJoin operators that could use the smaller HashTable on the probing side (where the TS is) to filter out rows that would never match. > To do so, the HashTable information is pushed down to the TS properties and then propagated as part of MapWork. > If a single TS is used by multiple operators (shared-work), this rule cannot be applied. > This rule can be extended to support static filter expressions like: > _select * from sales where sold_state = 'PR';_ > This optimisation mainly targets the Tez execution engine running on Llap. -- This message was sent by Atlassian Jira (v8.3.4#803005)
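The row-level filtering described above can be pictured with a toy example. Everything here is a hypothetical illustration of the probe-decode idea (the names and the "key,payload" row encoding are invented, not Hive's reader API): the build side's HashTable keys are consulted before paying the full decode cost for a probe-side row.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical illustration of probe-decode: before fully decoding a row,
// peek at its join key and test it against the small (build) side's key set;
// rows that could never match are skipped, saving decode work.
class ProbeDecodeSketch {
    static List<String> probe(List<String> encodedRows, Set<Integer> buildKeys) {
        List<String> decoded = new ArrayList<>();
        for (String row : encodedRows) {
            // cheap peek at the key column without decoding the full row
            int key = Integer.parseInt(row.split(",", 2)[0]);
            if (buildKeys.contains(key)) {
                decoded.add(row); // only now pay the full decode cost
            }
        }
        return decoded;
    }

    public static void main(String[] args) {
        Set<Integer> buildKeys = new HashSet<>(Arrays.asList(1, 3));
        List<String> rows = Arrays.asList("1,a", "2,b", "3,c");
        System.out.println(probe(rows, buildKeys)); // rows with keys 1 and 3 survive
    }
}
```

This also makes the shared-work restriction concrete: if two branches consume the same scan, a key set taken from only one branch's MapJoin would wrongly drop rows the other branch still needs.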
[jira] [Work logged] (HIVE-23006) Compiler support for Probe MapJoin
[ https://issues.apache.org/jira/browse/HIVE-23006?focusedWorklogId=418825&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-418825 ] ASF GitHub Bot logged work on HIVE-23006: - Author: ASF GitHub Bot Created on: 08/Apr/20 19:42 Start Date: 08/Apr/20 19:42 Worklog Time Spent: 10m Work Description: jcamachor commented on pull request #952: HIVE-23006 ProbeDecode compiler support URL: https://github.com/apache/hive/pull/952#discussion_r405769389 ## File path: ql/src/java/org/apache/hadoop/hive/ql/optimizer/SharedWorkOptimizer.java ## @@ -487,6 +487,15 @@ private static boolean sharedWorkOptimization(ParseContext pctx, SharedWorkOptim } LOG.debug("Input operator removed: {}", op); } + + // A shared TSop across branches can not have probeContext that utilizes single branch info + // Filtered-out rows from one branch might be needed by another branch sharing a TSop + if (retainableTsOp.getProbeDecodeContext() != null) { Review comment: In this case, should we remove the `ProbeDecodeContext` or should we skip merging these two TS operators? It may be that in some cases merging will backfire, i.e., if those two filters were very selective? Just for reference, if I remember correctly, SharedWorkOptimizer only merges TS operators targeted by SJs if at least one of the TS operators do not contain a SJ (since we would incur the full scan cost in any case at least once). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 418825) Time Spent: 2h 10m (was: 2h) > Compiler support for Probe MapJoin > -- > > Key: HIVE-23006 > URL: https://issues.apache.org/jira/browse/HIVE-23006 > Project: Hive > Issue Type: Sub-task >Reporter: Panagiotis Garefalakis >Assignee: Panagiotis Garefalakis >Priority: Major > Labels: pull-request-available > Attachments: HIVE-23006.01.patch, HIVE-23006.02.patch > > Time Spent: 2h 10m > Remaining Estimate: 0h -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (HIVE-23006) Compiler support for Probe MapJoin
[ https://issues.apache.org/jira/browse/HIVE-23006?focusedWorklogId=418820&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-418820 ] ASF GitHub Bot logged work on HIVE-23006: - Author: ASF GitHub Bot Created on: 08/Apr/20 19:34 Start Date: 08/Apr/20 19:34 Worklog Time Spent: 10m Work Description: jcamachor commented on pull request #952: HIVE-23006 ProbeDecode compiler support URL: https://github.com/apache/hive/pull/952#discussion_r405765078 ## File path: ql/src/java/org/apache/hadoop/hive/ql/parse/TezCompiler.java ## @@ -1482,18 +1490,131 @@ private void removeSemijoinsParallelToMapJoin(OptimizeTezProcContext procCtx) deque.addAll(op.getChildOperators()); } } +// No need to remove SJ branches when we have semi-join reduction or when semijoins are enabled for parallel mapjoins. +if (!procCtx.conf.getBoolVar(ConfVars.TEZ_DYNAMIC_SEMIJOIN_REDUCTION_FOR_MAPJOIN)) { + if (semijoins.size() > 0) { +for (Entry semiEntry : semijoins.entrySet()) { + SemiJoinBranchInfo sjInfo = procCtx.parseContext.getRsToSemiJoinBranchInfo().get(semiEntry.getKey()); + if (sjInfo.getIsHint() || !sjInfo.getShouldRemove()) { +// Created by hint, skip it +continue; + } + if (LOG.isDebugEnabled()) { +LOG.debug("Semijoin optimization with parallel edge to map join. Removing semijoin " + +OperatorUtils.getOpNamePretty(semiEntry.getKey()) + " - " + OperatorUtils.getOpNamePretty(semiEntry.getValue())); + } + GenTezUtils.removeBranch(semiEntry.getKey()); + GenTezUtils.removeSemiJoinOperator(procCtx.parseContext, semiEntry.getKey(), semiEntry.getValue()); +} + } +} +if (procCtx.conf.getBoolVar(ConfVars.HIVE_OPTIMIZE_SCAN_PROBEDECODE)) { + if (probeDecodeMJoins.size() > 0) { Review comment: The path for `HIVE_OPTIMIZE_SCAN_PROBEDECODE` seems independent from SJ optimization. Should we add a mechanism to remove the context for the optimization when we think it is not going to be beneficial, e.g., it is not filtering any data? 
Or do you think that the cost of checking is negligible and we should always apply this optimization? What do your experiments show in the worst-case scenario? (In any case, this could be tackled in a follow-up, but I wanted to ask.) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 418820) Time Spent: 2h (was: 1h 50m) > Compiler support for Probe MapJoin > -- > > Key: HIVE-23006 > URL: https://issues.apache.org/jira/browse/HIVE-23006 > Project: Hive > Issue Type: Sub-task >Reporter: Panagiotis Garefalakis >Assignee: Panagiotis Garefalakis >Priority: Major > Labels: pull-request-available > Attachments: HIVE-23006.01.patch, HIVE-23006.02.patch > > Time Spent: 2h > Remaining Estimate: 0h -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HIVE-23159) Cleanup ShowCreateTableOperation
[ https://issues.apache.org/jira/browse/HIVE-23159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Mollitor reassigned HIVE-23159: - > Cleanup ShowCreateTableOperation > > > Key: HIVE-23159 > URL: https://issues.apache.org/jira/browse/HIVE-23159 > Project: Hive > Issue Type: Bug >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Minor > > * Move StringTemplate templates to external files > * Explore better leveraging StringTemplate capabilities to remove duplicate > functionality in the class > * General clean up and formatting -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-23128) SHOW CREATE TABLE Creates Incorrect Syntax When Database Specified
[ https://issues.apache.org/jira/browse/HIVE-23128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Mollitor updated HIVE-23128: -- Fix Version/s: 4.0.0 Resolution: Fixed Status: Resolved (was: Patch Available) Pushed to master. Thank you [~mgergely] for the review! > SHOW CREATE TABLE Creates Incorrect Syntax When Database Specified > -- > > Key: HIVE-23128 > URL: https://issues.apache.org/jira/browse/HIVE-23128 > Project: Hive > Issue Type: Bug >Affects Versions: 2.4.0, 3.1.2 >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-23128.1.patch, HIVE-23128.2.patch, > HIVE-23128.2.patch, HIVE-23128.2.patch > > > {code:sql} > show create table `sample_07`; > show create table `default`.`sample_07`; > show create table `default.sample_07`; > {code} > {code:none|title=Results} > CREATE TABLE `sample_07`(...) > CREATE TABLE `default.sample_07`(...) > CREATE TABLE `default.sample_07`(...); > {code} > All three {{show create table}} statements complete in Hive 2.x and 3.x and generate {{CREATE TABLE}} statements as shown above. The first result is correct because it does not include the database name; however, the subsequent two results are invalid: each field must be quoted individually. > This causes a failure in recent versions of Hive because "SemanticException Table or database name may not contain dot(.) character." > {quote}If any components of a multiple-part name require quoting, quote them individually rather than quoting the name as a whole. For example, write `my-table`.`my-column`, not `my-table.my-column`. > * [https://dev.mysql.com/doc/refman/8.0/en/identifier-qualifiers.html]{quote} -- This message was sent by Atlassian Jira (v8.3.4#803005)
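The fix the quoted MySQL guideline implies is to quote each component of the qualified name individually rather than the whole string. A minimal sketch (the helper name is hypothetical, not Hive's actual code):

```java
// Quote a two-part name component-by-component, as the quoted MySQL
// guideline requires: `default`.`sample_07`, never `default.sample_07`.
class QuoteSketch {
    static String quoteQualified(String dbName, String tableName) {
        return "`" + dbName + "`.`" + tableName + "`";
    }
}
```

So `quoteQualified("default", "sample_07")` yields `` `default`.`sample_07` ``, which round-trips through the parser, whereas the single-quoted form trips the "may not contain dot(.) character" check.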
[jira] [Work logged] (HIVE-23006) Compiler support for Probe MapJoin
[ https://issues.apache.org/jira/browse/HIVE-23006?focusedWorklogId=418816&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-418816 ] ASF GitHub Bot logged work on HIVE-23006: - Author: ASF GitHub Bot Created on: 08/Apr/20 19:30 Start Date: 08/Apr/20 19:30 Worklog Time Spent: 10m Work Description: jcamachor commented on pull request #952: HIVE-23006 ProbeDecode compiler support URL: https://github.com/apache/hive/pull/952#discussion_r405762816 ## File path: ql/src/java/org/apache/hadoop/hive/ql/parse/TezCompiler.java ## @@ -1483,17 +1489,64 @@ private void removeSemijoinsParallelToMapJoin(OptimizeTezProcContext procCtx) } } -if (semijoins.size() > 0) { - for (ReduceSinkOperator rs : semijoins.keySet()) { -if (LOG.isDebugEnabled()) { - LOG.debug("Semijoin optimization with parallel edge to map join. Removing semijoin " - + OperatorUtils.getOpNamePretty(rs) + " - " + OperatorUtils.getOpNamePretty(semijoins.get(rs))); +if (!procCtx.conf.getBoolVar(ConfVars.TEZ_DYNAMIC_SEMIJOIN_REDUCTION_FOR_MAPJOIN)) { + if (semijoins.size() > 0) { Review comment: Probably I am missing part of the mechanism to enable your optimization. It seems you are skipping the removal of SJ, so you are adding the context AND keeping those branches in the plan. Is the intention to keep both optimizations in these cases? Or is it because there is some dependency between them? I thought for MJ, the intention was to keep only your new optimization, that's maybe where my misunderstanding is coming from. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 418816) Time Spent: 1h 50m (was: 1h 40m) > Compiler support for Probe MapJoin > -- > > Key: HIVE-23006 > URL: https://issues.apache.org/jira/browse/HIVE-23006 > Project: Hive > Issue Type: Sub-task >Reporter: Panagiotis Garefalakis >Assignee: Panagiotis Garefalakis >Priority: Major > Labels: pull-request-available > Attachments: HIVE-23006.01.patch, HIVE-23006.02.patch > > Time Spent: 1h 50m > Remaining Estimate: 0h -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-23158) Optimize S3A recordReader policy for Random IO formats
[ https://issues.apache.org/jira/browse/HIVE-23158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078596#comment-17078596 ] Panagiotis Garefalakis commented on HIVE-23158: --- [~rbalamohan] [~prasanth_j] Thoughts? > Optimize S3A recordReader policy for Random IO formats > -- > > Key: HIVE-23158 > URL: https://issues.apache.org/jira/browse/HIVE-23158 > Project: Hive > Issue Type: Bug >Reporter: Panagiotis Garefalakis >Assignee: Panagiotis Garefalakis >Priority: Trivial > Labels: pull-request-available > Attachments: HIVE-23158.01.patch > > Time Spent: 10m > Remaining Estimate: 0h > > The S3A filesystem client (inherited from Hadoop) supports the notion of input policies. > These policies tune the behaviour of the HTTP requests used for reading different file types such as TEXT or ORC. > Formats such as ORC and Parquet do a lot of seek operations, so there is an optimized RANDOM mode that reads files only partially instead of fully (the default). > I suggest adding some extra logic as part of HiveInputFormat to make sure we optimize for random IO when data stored on S3A uses formats such as ORC or Parquet. -- This message was sent by Atlassian Jira (v8.3.4#803005)
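The extra logic being proposed boils down to picking a read policy from the file format. A hedged sketch of that dispatch (the helper and the extension-based check are illustrative, not the patch's actual code; "random" and "normal" are S3A input-policy names, configurable in Hadoop via `fs.s3a.experimental.input.fadvise`):

```java
import java.util.Locale;

// Illustrative selection of an S3A input policy based on file format.
// ORC and Parquet are seek-heavy, so "random" avoids reading whole files;
// everything else keeps the sequential-friendly default.
class S3APolicySketch {
    static String choosePolicy(String path) {
        String p = path.toLowerCase(Locale.ROOT);
        if (p.endsWith(".orc") || p.endsWith(".parquet")) {
            return "random";
        }
        return "normal";
    }
}
```

In a real HiveInputFormat the decision would hinge on the configured input format class rather than the file extension, but the dispatch is the same: seek-heavy columnar formats get the random policy, streaming formats keep the default.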
[jira] [Updated] (HIVE-23158) Optimize S3A recordReader policy for Random IO formats
[ https://issues.apache.org/jira/browse/HIVE-23158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Panagiotis Garefalakis updated HIVE-23158: -- Status: Patch Available (was: In Progress) > Optimize S3A recordReader policy for Random IO formats > -- > > Key: HIVE-23158 > URL: https://issues.apache.org/jira/browse/HIVE-23158 > Project: Hive > Issue Type: Bug >Reporter: Panagiotis Garefalakis >Assignee: Panagiotis Garefalakis >Priority: Trivial > Labels: pull-request-available > Attachments: HIVE-23158.01.patch > > Time Spent: 10m > Remaining Estimate: 0h -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work started] (HIVE-23158) Optimize S3A recordReader policy for Random IO formats
[ https://issues.apache.org/jira/browse/HIVE-23158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-23158 started by Panagiotis Garefalakis. - > Optimize S3A recordReader policy for Random IO formats > -- > > Key: HIVE-23158 > URL: https://issues.apache.org/jira/browse/HIVE-23158 > Project: Hive > Issue Type: Bug >Reporter: Panagiotis Garefalakis >Assignee: Panagiotis Garefalakis >Priority: Trivial > Labels: pull-request-available > Attachments: HIVE-23158.01.patch > > Time Spent: 10m > Remaining Estimate: 0h -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-23158) Optimize S3A recordReader policy for Random IO formats
[ https://issues.apache.org/jira/browse/HIVE-23158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Panagiotis Garefalakis updated HIVE-23158: -- Attachment: HIVE-23158.01.patch > Optimize S3A recordReader policy for Random IO formats > -- > > Key: HIVE-23158 > URL: https://issues.apache.org/jira/browse/HIVE-23158 > Project: Hive > Issue Type: Bug >Reporter: Panagiotis Garefalakis >Assignee: Panagiotis Garefalakis >Priority: Trivial > Labels: pull-request-available > Attachments: HIVE-23158.01.patch > > Time Spent: 10m > Remaining Estimate: 0h -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (HIVE-23158) Optimize S3A recordReader policy for Random IO formats
[ https://issues.apache.org/jira/browse/HIVE-23158?focusedWorklogId=418791&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-418791 ] ASF GitHub Bot logged work on HIVE-23158: - Author: ASF GitHub Bot Created on: 08/Apr/20 18:43 Start Date: 08/Apr/20 18:43 Worklog Time Spent: 10m Work Description: pgaref commented on pull request #972: HIVE-23158 initial patch URL: https://github.com/apache/hive/pull/972 Change-Id: I156397ce2a64485f125c0e6923da4a8012a8d53c This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 418791) Remaining Estimate: 0h Time Spent: 10m > Optimize S3A recordReader policy for Random IO formats > -- > > Key: HIVE-23158 > URL: https://issues.apache.org/jira/browse/HIVE-23158 > Project: Hive > Issue Type: Bug >Reporter: Panagiotis Garefalakis >Assignee: Panagiotis Garefalakis >Priority: Trivial > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-23154) Fix race condition in Utilities::mvFileToFinalPath
[ https://issues.apache.org/jira/browse/HIVE-23154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078591#comment-17078591 ] Hive QA commented on HIVE-23154: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12999296/HIVE-23154.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 18195 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[union_remove_15] (batchId=100) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[union_remove_16] (batchId=85) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[union_remove_18] (batchId=8) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[union_remove_25] (batchId=103) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/21516/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21516/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21516/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 4 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12999296 - PreCommit-HIVE-Build > Fix race condition in Utilities::mvFileToFinalPath > -- > > Key: HIVE-23154 > URL: https://issues.apache.org/jira/browse/HIVE-23154 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Reporter: Rajesh Balamohan >Priority: Major > Attachments: HIVE-23154.1.patch > > > Utilities::mvFileToFinalPath is used for moving files from "/_tmp.-ext to > "/-ext" folder. Tasks write data to "_tmp" . 
Before being written to the final destination, they are moved to the "-ext" folder. As part of this, there are checks to ensure that run-away task outputs are not copied to the "-ext" folder. > Currently, there is a race condition between computing the snapshot of files to be copied and the rename operation. The same issue persists in the "insert into" case as well. -- This message was sent by Atlassian Jira (v8.3.4#803005)
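The safe ordering the fix needs can be sketched in plain `java.nio.file` (a simplified, hypothetical illustration, not Hive's FileSystem code): the file list under the tmp directory is captured once, and only files in that snapshot are promoted, so anything a run-away task writes after the snapshot stays behind.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

// Toy version of promoting task output: snapshot the tmp directory first,
// then move exactly the snapshotted files into the final directory. A file
// written into tmp after the snapshot is never promoted.
class MvFileSketch {
    static List<Path> moveSnapshot(Path tmpDir, Path finalDir) throws IOException {
        List<Path> snapshot = new ArrayList<>();
        try (Stream<Path> s = Files.list(tmpDir)) {
            s.forEach(snapshot::add); // point-in-time snapshot
        }
        Files.createDirectories(finalDir);
        List<Path> moved = new ArrayList<>();
        for (Path src : snapshot) {
            Path dst = finalDir.resolve(src.getFileName());
            Files.move(src, dst); // move only files seen in the snapshot
            moved.add(dst);
        }
        return moved;
    }

    // Self-contained demo: one committed file gets promoted.
    static int demo() {
        try {
            Path tmp = Files.createTempDirectory("tmp-ext");
            Files.writeString(tmp.resolve("000000_0"), "data");
            Path out = Files.createTempDirectory("final").resolve("ext-10000");
            return moveSnapshot(tmp, out).size();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

The race in the current code is the opposite ordering: the snapshot is computed, but the subsequent rename acts on whatever is present at move time, so files written between the two steps can slip through.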
[jira] [Updated] (HIVE-23158) Optimize S3A recordReader policy for Random IO formats
[ https://issues.apache.org/jira/browse/HIVE-23158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-23158: -- Labels: pull-request-available (was: ) > Optimize S3A recordReader policy for Random IO formats > -- > > Key: HIVE-23158 > URL: https://issues.apache.org/jira/browse/HIVE-23158 > Project: Hive > Issue Type: Bug >Reporter: Panagiotis Garefalakis >Assignee: Panagiotis Garefalakis >Priority: Trivial > Labels: pull-request-available -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HIVE-23158) Optimize S3A recordReader policy for Random IO formats
[ https://issues.apache.org/jira/browse/HIVE-23158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Panagiotis Garefalakis reassigned HIVE-23158: - > Optimize S3A recordReader policy for Random IO formats > -- > > Key: HIVE-23158 > URL: https://issues.apache.org/jira/browse/HIVE-23158 > Project: Hive > Issue Type: Bug >Reporter: Panagiotis Garefalakis >Assignee: Panagiotis Garefalakis >Priority: Trivial > > The S3A filesystem client (inherited from Hadoop) supports the notion of input > policies. > These policies tune the behaviour of the HTTP requests that are used for reading > different file types such as TEXT or ORC. > Formats such as ORC and Parquet do a lot of seek operations, so there > is an optimized RANDOM mode that reads files only partially instead of fully > (the default). > I suggest adding some extra logic as part of HiveInputFormat to make > sure we optimize for random IO when data is stored on S3A using formats such > as ORC or Parquet. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-23154) Fix race condition in Utilities::mvFileToFinalPath
[ https://issues.apache.org/jira/browse/HIVE-23154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078538#comment-17078538 ] Hive QA commented on HIVE-23154: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 36s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 44s{color} | {color:blue} ql in master has 1528 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 41s{color} | {color:red} ql: The patch generated 1 new + 108 unchanged - 0 fixed = 109 total (was 108) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 24m 49s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-21516/dev-support/hive-personality.sh | | git revision | master / d91cc0c | | Default Java | 1.8.0_111 | | findbugs | v3.0.1 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-21516/yetus/diff-checkstyle-ql.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-21516/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Fix race condition in Utilities::mvFileToFinalPath > -- > > Key: HIVE-23154 > URL: https://issues.apache.org/jira/browse/HIVE-23154 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Reporter: Rajesh Balamohan >Priority: Major > Attachments: HIVE-23154.1.patch > > > Utilities::mvFileToFinalPath is used for moving files from "/_tmp.-ext to > "/-ext" folder. Tasks write data to "_tmp" . Before writing to final > destination, they are moved to "-ext" folder. As part of it, it has checks to > ensure that run-away task outputs are not copied to "-ext" folder. 
> Currently, there is a race condition between computing the snapshot of files > to be copied and the rename operation. The same issue persists in the "insert into" > case as well. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22957) Add Predicate Filtering In MSCK REPAIR TABLE
[ https://issues.apache.org/jira/browse/HIVE-22957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078506#comment-17078506 ] Hive QA commented on HIVE-22957: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12997101/HIVE-22957.01.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 18196 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/21515/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21515/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21515/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12997101 - PreCommit-HIVE-Build > Add Predicate Filtering In MSCK REPAIR TABLE > > > Key: HIVE-22957 > URL: https://issues.apache.org/jira/browse/HIVE-22957 > Project: Hive > Issue Type: Improvement >Reporter: Syed Shameerur Rahman >Assignee: Syed Shameerur Rahman >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: Design Doc_ Partition Filtering In MSCK REPAIR > TABLE.pdf, HIVE-22957.01.patch > > Time Spent: 0.5h > Remaining Estimate: 0h > > *Design Doc: * > [^Design Doc_ Partition Filtering In MSCK REPAIR TABLE.pdf] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (HIVE-23095) NDV might be overestimated for a table with ~70 value
[ https://issues.apache.org/jira/browse/HIVE-23095?focusedWorklogId=418718&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-418718 ] ASF GitHub Bot logged work on HIVE-23095: - Author: ASF GitHub Bot Created on: 08/Apr/20 17:06 Start Date: 08/Apr/20 17:06 Worklog Time Spent: 10m Work Description: prasanthj commented on pull request #964: HIVE-23095 ndv 70 URL: https://github.com/apache/hive/pull/964#discussion_r405679222 ## File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/common/ndv/hll/HLLSparseRegister.java ## @@ -148,8 +148,12 @@ public int encodeHash(long hashcode) { } } - public int getSize() { -return sparseMap.size() + tempListIdx; + public boolean isSizeGreaterThan(int s) { +if (sparseMap.size() + tempListIdx > s) { + mergeTempListToSparseMap(); Review comment: also can we remove fastutil dependency? For such small encoding switch threshold I don't think fastutils fast hashmap implementation is worth it. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 418718) Time Spent: 1h 20m (was: 1h 10m) > NDV might be overestimated for a table with ~70 value > - > > Key: HIVE-23095 > URL: https://issues.apache.org/jira/browse/HIVE-23095 > Project: Hive > Issue Type: Bug >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Labels: pull-request-available > Attachments: HIVE-23095.01.patch, HIVE-23095.02.patch, > HIVE-23095.03.patch, HIVE-23095.04.patch, HIVE-23095.04.patch, > HIVE-23095.04.patch, HIVE-23095.05.patch, hll-bench.md > > Time Spent: 1h 20m > Remaining Estimate: 0h > > uncovered during looking into HIVE-23082 > https://issues.apache.org/jira/browse/HIVE-23082?focusedCommentId=17067773&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17067773 -- This message was sent by Atlassian Jira (v8.3.4#803005)
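The diff under review replaces `getSize()` with a threshold query that merges the temp list before answering. A self-contained sketch of why that matters follows; Hive's real `HLLSparseRegister` tracks HLL hash registers and more state, so a plain map stands in here purely to mirror the size-accounting logic.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the reviewed change: the raw sum sparseMap.size() + tempListIdx
// can overestimate the register size because the temp list may hold
// duplicates of keys already in the map, so the temp list is merged (which
// deduplicates) before the threshold comparison. Illustrative, not Hive code.
public class SparseRegisterSketch {
    private final Map<Integer, Byte> sparseMap = new HashMap<>();
    private final int[] tempList = new int[16];
    private int tempListIdx = 0;

    public void add(int key) {
        tempList[tempListIdx++] = key;            // buffered until merged
    }

    private void mergeTempListToSparseMap() {
        for (int i = 0; i < tempListIdx; i++) {
            sparseMap.put(tempList[i], (byte) 1); // duplicates collapse here
        }
        tempListIdx = 0;
    }

    public boolean isSizeGreaterThan(int s) {
        if (sparseMap.size() + tempListIdx > s) {
            // Merge before answering so the deduplicated size is compared.
            mergeTempListToSparseMap();
        }
        return sparseMap.size() > s;
    }
}
```

With two duplicate additions, the raw sum (2) exceeds a threshold of 1, but after merging the true size is 1, so the query correctly answers false.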
[jira] [Work logged] (HIVE-23095) NDV might be overestimated for a table with ~70 value
[ https://issues.apache.org/jira/browse/HIVE-23095?focusedWorklogId=418716&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-418716 ] ASF GitHub Bot logged work on HIVE-23095: - Author: ASF GitHub Bot Created on: 08/Apr/20 17:03 Start Date: 08/Apr/20 17:03 Worklog Time Spent: 10m Work Description: prasanthj commented on pull request #964: HIVE-23095 ndv 70 URL: https://github.com/apache/hive/pull/964#discussion_r405677422 ## File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/common/ndv/hll/HLLSparseRegister.java ## @@ -148,8 +148,12 @@ public int encodeHash(long hashcode) { } } - public int getSize() { -return sparseMap.size() + tempListIdx; + public boolean isSizeGreaterThan(int s) { +if (sparseMap.size() + tempListIdx > s) { + mergeTempListToSparseMap(); Review comment: ah oh.. i did the math with p=14 and forgot that we switched to 10 in hive. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 418716) Time Spent: 1h (was: 50m) > NDV might be overestimated for a table with ~70 value > - > > Key: HIVE-23095 > URL: https://issues.apache.org/jira/browse/HIVE-23095 > Project: Hive > Issue Type: Bug >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Labels: pull-request-available > Attachments: HIVE-23095.01.patch, HIVE-23095.02.patch, > HIVE-23095.03.patch, HIVE-23095.04.patch, HIVE-23095.04.patch, > HIVE-23095.04.patch, HIVE-23095.05.patch, hll-bench.md > > Time Spent: 1h > Remaining Estimate: 0h > > uncovered during looking into HIVE-23082 > https://issues.apache.org/jira/browse/HIVE-23082?focusedCommentId=17067773&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17067773 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (HIVE-23095) NDV might be overestimated for a table with ~70 value
[ https://issues.apache.org/jira/browse/HIVE-23095?focusedWorklogId=418717&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-418717 ] ASF GitHub Bot logged work on HIVE-23095: - Author: ASF GitHub Bot Created on: 08/Apr/20 17:03 Start Date: 08/Apr/20 17:03 Worklog Time Spent: 10m Work Description: prasanthj commented on pull request #964: HIVE-23095 ndv 70 URL: https://github.com/apache/hive/pull/964#discussion_r405677422 ## File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/common/ndv/hll/HLLSparseRegister.java ## @@ -148,8 +148,12 @@ public int encodeHash(long hashcode) { } } - public int getSize() { -return sparseMap.size() + tempListIdx; + public boolean isSizeGreaterThan(int s) { +if (sparseMap.size() + tempListIdx > s) { + mergeTempListToSparseMap(); Review comment: ah ok.. i did the math with p=14 and forgot that we switched to 10 in hive. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 418717) Time Spent: 1h 10m (was: 1h) > NDV might be overestimated for a table with ~70 value > - > > Key: HIVE-23095 > URL: https://issues.apache.org/jira/browse/HIVE-23095 > Project: Hive > Issue Type: Bug >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Labels: pull-request-available > Attachments: HIVE-23095.01.patch, HIVE-23095.02.patch, > HIVE-23095.03.patch, HIVE-23095.04.patch, HIVE-23095.04.patch, > HIVE-23095.04.patch, HIVE-23095.05.patch, hll-bench.md > > Time Spent: 1h 10m > Remaining Estimate: 0h > > uncovered during looking into HIVE-23082 > https://issues.apache.org/jira/browse/HIVE-23082?focusedCommentId=17067773&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17067773 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-23073) Shade netty and upgrade to netty 4.1.48.Final
[ https://issues.apache.org/jira/browse/HIVE-23073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] László Bodor updated HIVE-23073: Attachment: HIVE-23073.06.patch > Shade netty and upgrade to netty 4.1.48.Final > - > > Key: HIVE-23073 > URL: https://issues.apache.org/jira/browse/HIVE-23073 > Project: Hive > Issue Type: Improvement >Reporter: László Bodor >Assignee: László Bodor >Priority: Major > Attachments: HIVE-23073.01.patch, HIVE-23073.02.patch, > HIVE-23073.03.patch, HIVE-23073.04.patch, HIVE-23073.05.patch, > HIVE-23073.06.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22957) Add Predicate Filtering In MSCK REPAIR TABLE
[ https://issues.apache.org/jira/browse/HIVE-22957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078476#comment-17078476 ] Hive QA commented on HIVE-22957: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 36s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 59s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 51s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 10s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 58s{color} | {color:blue} parser in master has 3 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 1m 10s{color} | {color:blue} standalone-metastore/metastore-server in master has 190 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 45s{color} | {color:blue} ql in master has 1528 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 36s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 9s{color} | {color:green} The patch parser passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} standalone-metastore/metastore-server: The patch generated 0 new + 121 unchanged - 2 fixed = 121 total (was 123) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} The patch ql passed checkstyle {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 34m 56s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-21515/dev-support/hive-personality.sh | | git revision | master / d91cc0c | | Default Java | 1.8.0_111 | | findbugs | v3.0.1 | | modules | C: parser standalone-metastore/metastore-server ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-21515/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Add Predicate Filtering In MSCK REPAIR TABLE > > > Key: HIVE-22957 > URL: https://issues.apache.org/jira/browse/HIVE-22957 > Project: Hive > Issue Type: Improvement >Reporter: Syed Shameerur Rahman >Assignee: Syed Shameerur Rahman >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: Design Doc_ Partition Filtering In MSCK REPAIR > TABLE.pdf, HIVE-22957.01.patch > > Time Spent: 0.5h > Remaining Estimate: 0
[jira] [Updated] (HIVE-23020) Avoid using _files for replication data copy during incremental run
[ https://issues.apache.org/jira/browse/HIVE-23020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] PRAVIN KUMAR SINHA updated HIVE-23020: -- Attachment: HIVE-23020.11.patch > Avoid using _files for replication data copy during incremental run > --- > > Key: HIVE-23020 > URL: https://issues.apache.org/jira/browse/HIVE-23020 > Project: Hive > Issue Type: Task >Reporter: PRAVIN KUMAR SINHA >Assignee: PRAVIN KUMAR SINHA >Priority: Major > Labels: pull-request-available > Attachments: HIVE-23020.01.patch, HIVE-23020.02.patch, > HIVE-23020.03.patch, HIVE-23020.04.patch, HIVE-23020.05.patch, > HIVE-23020.06.patch, HIVE-23020.07.patch, HIVE-23020.08.patch, > HIVE-23020.09.patch, HIVE-23020.10.patch, HIVE-23020.11.patch > > Time Spent: 50m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-23020) Avoid using _files for replication data copy during incremental run
[ https://issues.apache.org/jira/browse/HIVE-23020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078422#comment-17078422 ] Hive QA commented on HIVE-23020: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12999294/HIVE-23020.10.patch {color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 18195 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.llap.cache.TestBuddyAllocator.testMTT[2] (batchId=373) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/21514/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21514/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21514/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12999294 - PreCommit-HIVE-Build > Avoid using _files for replication data copy during incremental run > --- > > Key: HIVE-23020 > URL: https://issues.apache.org/jira/browse/HIVE-23020 > Project: Hive > Issue Type: Task >Reporter: PRAVIN KUMAR SINHA >Assignee: PRAVIN KUMAR SINHA >Priority: Major > Labels: pull-request-available > Attachments: HIVE-23020.01.patch, HIVE-23020.02.patch, > HIVE-23020.03.patch, HIVE-23020.04.patch, HIVE-23020.05.patch, > HIVE-23020.06.patch, HIVE-23020.07.patch, HIVE-23020.08.patch, > HIVE-23020.09.patch, HIVE-23020.10.patch > > Time Spent: 50m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-23151) LLAP: default hive.llap.file.cleanup.delay.seconds=0s
[ https://issues.apache.org/jira/browse/HIVE-23151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] László Bodor updated HIVE-23151: Attachment: HIVE-23151.01.patch > LLAP: default hive.llap.file.cleanup.delay.seconds=0s > - > > Key: HIVE-23151 > URL: https://issues.apache.org/jira/browse/HIVE-23151 > Project: Hive > Issue Type: Bug > Components: llap >Reporter: László Bodor >Assignee: László Bodor >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-23151.01.patch, HIVE-23151.01.patch > > > The current default value (300s) reflects more of a debugging scenario; let's > set this to 0s so that shuffle local files are cleaned up immediately > after DAG completion. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-23114) Insert overwrite with dynamic partitioning is not working correctly with direct insert
[ https://issues.apache.org/jira/browse/HIVE-23114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078406#comment-17078406 ] Sungwoo commented on HIVE-23114: I tested with the new commit using HDFS. The ORC database was loaded with a problem, and TPC-DS queries run okay. (I tested with both TEXT and ORC databases, and obtained the same result.) 'catalog_returns where _returned_date_sk is null' contains no row whereas 'catalog_returns where _returned_date_sk is not null' returns a non-empty list. > Insert overwrite with dynamic partitioning is not working correctly with > direct insert > -- > > Key: HIVE-23114 > URL: https://issues.apache.org/jira/browse/HIVE-23114 > Project: Hive > Issue Type: Bug >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Major > Attachments: HIVE-23114.1.patch, HIVE-23114.2.patch, > HIVE-23114.3.patch > > > This is a follow-up Jira for the > [conversation|https://issues.apache.org/jira/browse/HIVE-21164?focusedCommentId=17059280&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17059280] > in HIVE-21164 > Doing an insert overwrite from a multi-insert statement with dynamic > partitioning will give wrong results for ACID tables when > 'hive.acid.direct.insert.enabled' is true or for insert-only tables. 
> Reproduction: > {noformat} > set hive.acid.direct.insert.enabled=true; > set hive.support.concurrency=true; > set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager; > set hive.vectorized.execution.enabled=false; > set hive.stats.autogather=false; > create external table multiinsert_test_text (a int, b int, c int) stored as > textfile; > insert into multiinsert_test_text values (, 11, ), (, 22, ), > (, 33, ), (, 44, NULL), (, 55, NULL); > create table multiinsert_test_acid (a int, b int) partitioned by (c int) > stored as orc tblproperties('transactional'='true'); > create table multiinsert_test_mm (a int, b int) partitioned by (c int) stored > as orc tblproperties('transactional'='true', > 'transactional_properties'='insert_only'); > from multiinsert_test_text a > insert overwrite table multiinsert_test_acid partition (c) > select > a.a, > a.b, > a.c > where a.c is not null > insert overwrite table multiinsert_test_acid partition (c) > select > a.a, > a.b, > a.c > where a.c is null; > select * from multiinsert_test_acid; > from multiinsert_test_text a > insert overwrite table multiinsert_test_mm partition (c) > select > a.a, > a.b, > a.c > where a.c is not null > insert overwrite table multiinsert_test_mm partition (c) > select > a.a, > a.b, > a.c > where a.c is null; > select * from multiinsert_test_mm; > {noformat} > The result of these steps can differ; it depends on the execution order > of the FileSinkOperators of the insert overwrite statements. It can happen > that an error occurs due to a manifest file collision, or that no > error occurs but the result is incorrect. > Running the same insert query with an external table or with an ACID table > with 'hive.acid.direct.insert.enabled=false' will give the following result: > {noformat} > 11 > 22 > 33 > 44 NULL > 55 NULL > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (HIVE-23114) Insert overwrite with dynamic partitioning is not working correctly with direct insert
[ https://issues.apache.org/jira/browse/HIVE-23114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078406#comment-17078406 ] Sungwoo edited comment on HIVE-23114 at 4/8/20, 3:47 PM: - I tested with the new commit using HDFS. The ORC database was loaded without a problem, and TPC-DS queries run okay. (I tested with both TEXT and ORC databases, and obtained the same result.) 'catalog_returns where _returned_date_sk is null' contains no row whereas 'catalog_returns where _returned_date_sk is not null' returns a non-empty list. EDIT: with a problem --> without a problem was (Author: glapark): I tested with the new commit using HDFS. The ORC database was loaded with a problem, and TPC-DS queries run okay. (I tested with both TEXT and ORC databases, and obtained the same result.) 'catalog_returns where _returned_date_sk is null' contains no row whereas 'catalog_returns where _returned_date_sk is not null' returns a non-empty list. > Insert overwrite with dynamic partitioning is not working correctly with > direct insert > -- > > Key: HIVE-23114 > URL: https://issues.apache.org/jira/browse/HIVE-23114 > Project: Hive > Issue Type: Bug >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Major > Attachments: HIVE-23114.1.patch, HIVE-23114.2.patch, > HIVE-23114.3.patch > > > This is a follow-up Jira for the > [conversation|https://issues.apache.org/jira/browse/HIVE-21164?focusedCommentId=17059280&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17059280] > in HIVE-21164 > Doing an insert overwrite from a multi-insert statement with dynamic > partitioning will give wrong results for ACID tables when > 'hive.acid.direct.insert.enabled' is true or for insert-only tables. 
> Reproduction: > {noformat} > set hive.acid.direct.insert.enabled=true; > set hive.support.concurrency=true; > set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager; > set hive.vectorized.execution.enabled=false; > set hive.stats.autogather=false; > create external table multiinsert_test_text (a int, b int, c int) stored as > textfile; > insert into multiinsert_test_text values (, 11, ), (, 22, ), > (, 33, ), (, 44, NULL), (, 55, NULL); > create table multiinsert_test_acid (a int, b int) partitioned by (c int) > stored as orc tblproperties('transactional'='true'); > create table multiinsert_test_mm (a int, b int) partitioned by (c int) stored > as orc tblproperties('transactional'='true', > 'transactional_properties'='insert_only'); > from multiinsert_test_text a > insert overwrite table multiinsert_test_acid partition (c) > select > a.a, > a.b, > a.c > where a.c is not null > insert overwrite table multiinsert_test_acid partition (c) > select > a.a, > a.b, > a.c > where a.c is null; > select * from multiinsert_test_acid; > from multiinsert_test_text a > insert overwrite table multiinsert_test_mm partition (c) > select > a.a, > a.b, > a.c > where a.c is not null > insert overwrite table multiinsert_test_mm partition (c) > select > a.a, > a.b, > a.c > where a.c is null; > select * from multiinsert_test_mm; > {noformat} > The result of these steps can differ; it depends on the execution order > of the FileSinkOperators of the insert overwrite statements. It can happen > that an error occurs due to a manifest file collision, or that no > error occurs but the result is incorrect. > Running the same insert query with an external table or with an ACID table > with 'hive.acid.direct.insert.enabled=false' will give the following result: > {noformat} > 11 > 22 > 33 > 44 NULL > 55 NULL > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22750) Consolidate LockType naming
[ https://issues.apache.org/jira/browse/HIVE-22750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marton Bod updated HIVE-22750: -- Attachment: HIVE-22750.12.patch > Consolidate LockType naming > --- > > Key: HIVE-22750 > URL: https://issues.apache.org/jira/browse/HIVE-22750 > Project: Hive > Issue Type: Improvement > Components: Transactions >Reporter: Zoltan Chovan >Assignee: Marton Bod >Priority: Minor > Attachments: HIVE-22750.1.patch, HIVE-22750.10.patch, > HIVE-22750.11.patch, HIVE-22750.12.patch, HIVE-22750.2.patch, > HIVE-22750.3.patch, HIVE-22750.4.patch, HIVE-22750.5.patch, > HIVE-22750.5.patch, HIVE-22750.6.patch, HIVE-22750.7.patch, > HIVE-22750.8.patch, HIVE-22750.9.patch, HIVE-22750.9.patch, > HIVE-22750.9.patch, HIVE-22750.9.patch > > > Extend enum with string literal to remove unnecessary `id` to `char` casting > for the LockType: > {code:java} > switch (lockType) { > case EXCLUSIVE: > lockChar = LOCK_EXCLUSIVE; > break; > case SHARED_READ: > lockChar = LOCK_SHARED; > break; > case SHARED_WRITE: > lockChar = LOCK_SEMI_SHARED; > break; > } > {code} > Consolidate LockType naming in code and schema upgrade scripts: > {code:java} > CASE WHEN HL.`HL_LOCK_TYPE` = 'e' THEN 'exclusive' WHEN HL.`HL_LOCK_TYPE` = > 'r' THEN 'shared' WHEN HL.`HL_LOCK_TYPE` = 'w' THEN *'semi-shared'* END AS > LOCK_TYPE, > {code} > +*Lock types:*+ > EXCLUSIVE > EXCL_WRITE > SHARED_WRITE > SHARED_READ -- This message was sent by Atlassian Jira (v8.3.4#803005)
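The refactor described in the ticket — carry the wire character on the enum itself so call sites stop switching over the type — can be sketched as below. The `'e'`, `'r'`, `'w'` codes come from the quoted schema snippet; the `'x'` code for EXCL_WRITE is a hypothetical placeholder (the ticket does not list it), and Hive's actual LockType lives in the metastore API and may differ.

```java
// Sketch of the proposed consolidation: each lock type owns its SQL/schema
// character, replacing the switch statement quoted in the description.
public enum LockTypeSketch {
    EXCLUSIVE('e'),
    EXCL_WRITE('x'),   // hypothetical code: not specified in the ticket
    SHARED_WRITE('w'),
    SHARED_READ('r');

    private final char sqlChar;

    LockTypeSketch(char sqlChar) {
        this.sqlChar = sqlChar;
    }

    public char getSqlChar() {
        return sqlChar;
    }
}
```

A call site then reads `lockType.getSqlChar()` instead of mapping through a switch, and the schema upgrade scripts can be generated from the same single source of truth.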
[jira] [Commented] (HIVE-23020) Avoid using _files for replication data copy during incremental run
[ https://issues.apache.org/jira/browse/HIVE-23020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078378#comment-17078378 ] Hive QA commented on HIVE-23020: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 0s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 54s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 44s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 5s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 44s{color} | {color:blue} ql in master has 1528 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 42s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 53s{color} | {color:green} ql generated 0 new + 1527 unchanged - 1 fixed = 1527 total (was 1528) {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 48s{color} | {color:green} hive-unit in the patch passed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 30m 58s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-21514/dev-support/hive-personality.sh | | git revision | master / d91cc0c | | Default Java | 1.8.0_111 | | findbugs | v3.0.1 | | modules | C: ql itests/hive-unit U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-21514/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Avoid using _files for replication data copy during incremental run > --- > > Key: HIVE-23020 > URL: https://issues.apache.org/jira/browse/HIVE-23020 > Project: Hive > Issue Type: Task >Reporter: PRAVIN KUMAR SINHA >Assignee: PRAVIN KUMAR SINHA >Priority: Major > Labels: pull-request-available > Attachments: HIVE-23020.01.patch, HIVE-23020.02.patch, > HIVE-23020.03.patch, HIVE-23020.04.patch, HIVE-23020.05.patch, > HIVE-23020.06.patch, HIVE-23020.07.patch, HIVE-23020.08.patch, > HIVE-23020.09.patch, HIVE-23020.10.patch > > Time Spent: 50m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22750) Consolidate LockType naming
[ https://issues.apache.org/jira/browse/HIVE-22750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marton Bod updated HIVE-22750: -- Attachment: HIVE-22750.11.patch > Consolidate LockType naming > --- > > Key: HIVE-22750 > URL: https://issues.apache.org/jira/browse/HIVE-22750 > Project: Hive > Issue Type: Improvement > Components: Transactions >Reporter: Zoltan Chovan >Assignee: Marton Bod >Priority: Minor > Attachments: HIVE-22750.1.patch, HIVE-22750.10.patch, > HIVE-22750.11.patch, HIVE-22750.2.patch, HIVE-22750.3.patch, > HIVE-22750.4.patch, HIVE-22750.5.patch, HIVE-22750.5.patch, > HIVE-22750.6.patch, HIVE-22750.7.patch, HIVE-22750.8.patch, > HIVE-22750.9.patch, HIVE-22750.9.patch, HIVE-22750.9.patch, HIVE-22750.9.patch > > > Extend enum with string literal to remove unnecessary `id` to `char` casting > for the LockType: > {code:java} > switch (lockType) { > case EXCLUSIVE: > lockChar = LOCK_EXCLUSIVE; > break; > case SHARED_READ: > lockChar = LOCK_SHARED; > break; > case SHARED_WRITE: > lockChar = LOCK_SEMI_SHARED; > break; > } > {code} > Consolidate LockType naming in code and schema upgrade scripts: > {code:java} > CASE WHEN HL.`HL_LOCK_TYPE` = 'e' THEN 'exclusive' WHEN HL.`HL_LOCK_TYPE` = > 'r' THEN 'shared' WHEN HL.`HL_LOCK_TYPE` = 'w' THEN *'semi-shared'* END AS > LOCK_TYPE, > {code} > +*Lock types:*+ > EXCLUSIVE > EXCL_WRITE > SHARED_WRITE > SHARED_READ -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-23104) Minimize critical paths of TxnHandler::commitTxn and abortTxn
[ https://issues.apache.org/jira/browse/HIVE-23104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marton Bod updated HIVE-23104: -- Attachment: HIVE-23104.2.patch > Minimize critical paths of TxnHandler::commitTxn and abortTxn > - > > Key: HIVE-23104 > URL: https://issues.apache.org/jira/browse/HIVE-23104 > Project: Hive > Issue Type: Improvement >Reporter: Marton Bod >Assignee: Marton Bod >Priority: Major > Attachments: HIVE-23104.1.patch, HIVE-23104.1.patch, > HIVE-23104.1.patch, HIVE-23104.2.patch > > > Investigate whether any code sections in TxnHandler::commitTxn and abortTxn > can be lifted out/executed async in order to reduce the overall execution > time of these methods. -- This message was sent by Atlassian Jira (v8.3.4#803005)
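The general pattern HIVE-23104 investigates — keeping only the work that must be atomic inside the critical section and handing housekeeping off to a background thread — can be sketched as follows. This is a generic illustration under stated assumptions; the method name, lock, and cleanup task are hypothetical and do not reflect TxnHandler's actual code:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch of lifting non-critical work out of a commit path.
public class AsyncCleanupSketch {
    private static final ExecutorService POOL = Executors.newSingleThreadExecutor();

    public static CompletableFuture<Void> commitTxn(long txnId) {
        synchronized (AsyncCleanupSketch.class) {
            // Only the minimal state change that must be atomic stays
            // under the lock (placeholder here).
        }
        // Non-critical cleanup runs asynchronously, shortening the time
        // other transactions spend waiting on the lock.
        return CompletableFuture.runAsync(() -> cleanup(txnId), POOL);
    }

    private static void cleanup(long txnId) {
        // e.g. purging auxiliary bookkeeping rows (hypothetical).
    }

    public static void main(String[] args) {
        commitTxn(42L).join();
        POOL.shutdown();
    }
}
```

The trade-off being weighed in the issue is exactly this: anything moved out of the critical section must remain safe to run after the lock is released.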
[jira] [Commented] (HIVE-22458) Add more constraints on showing partitions
[ https://issues.apache.org/jira/browse/HIVE-22458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078350#comment-17078350 ] Hive QA commented on HIVE-22458: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12999287/HIVE-22458.2.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 18189 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[drop_partitions_filter] (batchId=29) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[show_partitions2] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[show_partitions] (batchId=92) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[showparts] (batchId=2) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[temp_table_drop_partitions_filter] (batchId=77) org.apache.hadoop.hive.ql.parse.TestReplicationScenariosExternalTablesMetaDataOnly.org.apache.hadoop.hive.ql.parse.TestReplicationScenariosExternalTablesMetaDataOnly (batchId=283) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/21513/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21513/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21513/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 6 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12999287 - PreCommit-HIVE-Build > Add more constraints on showing partitions > -- > > Key: HIVE-22458 > URL: https://issues.apache.org/jira/browse/HIVE-22458 > Project: Hive > Issue Type: Improvement >Reporter: Zhihua Deng >Priority: Major > Attachments: HIVE-22458.2.patch, HIVE-22458.branch-1.02.patch, > HIVE-22458.branch-1.patch, HIVE-22458.patch > > > When showing partitions of a table with thousands of partitions, all the > partitions will be returned and it is not easy to pick out the specified one, > which makes showing partitions hard to use. We can add where/limit/order > by constraints to show partitions, for example: > show partitions table_name [partition_specs] where partition_key >= value > order by partition_key desc limit n; > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-23039) Checkpointing for repl dump bootstrap phase
[ https://issues.apache.org/jira/browse/HIVE-23039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aasha Medhi updated HIVE-23039: --- Attachment: HIVE-23039.27.patch Status: Patch Available (was: In Progress) > Checkpointing for repl dump bootstrap phase > --- > > Key: HIVE-23039 > URL: https://issues.apache.org/jira/browse/HIVE-23039 > Project: Hive > Issue Type: Bug >Reporter: Aasha Medhi >Assignee: Aasha Medhi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-23039.01.patch, HIVE-23039.02.patch, > HIVE-23039.03.patch, HIVE-23039.04.patch, HIVE-23039.05.patch, > HIVE-23039.06.patch, HIVE-23039.07.patch, HIVE-23039.08.patch, > HIVE-23039.09.patch, HIVE-23039.10.patch, HIVE-23039.11.patch, > HIVE-23039.12.patch, HIVE-23039.13.patch, HIVE-23039.14.patch, > HIVE-23039.15.patch, HIVE-23039.16.patch, HIVE-23039.17.patch, > HIVE-23039.18.patch, HIVE-23039.19.patch, HIVE-23039.20.patch, > HIVE-23039.21.patch, HIVE-23039.22.patch, HIVE-23039.23.patch, > HIVE-23039.24.patch, HIVE-23039.25.patch, HIVE-23039.26.patch, > HIVE-23039.27.patch > > Time Spent: 50m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-23039) Checkpointing for repl dump bootstrap phase
[ https://issues.apache.org/jira/browse/HIVE-23039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aasha Medhi updated HIVE-23039: --- Status: In Progress (was: Patch Available) > Checkpointing for repl dump bootstrap phase > --- > > Key: HIVE-23039 > URL: https://issues.apache.org/jira/browse/HIVE-23039 > Project: Hive > Issue Type: Bug >Reporter: Aasha Medhi >Assignee: Aasha Medhi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-23039.01.patch, HIVE-23039.02.patch, > HIVE-23039.03.patch, HIVE-23039.04.patch, HIVE-23039.05.patch, > HIVE-23039.06.patch, HIVE-23039.07.patch, HIVE-23039.08.patch, > HIVE-23039.09.patch, HIVE-23039.10.patch, HIVE-23039.11.patch, > HIVE-23039.12.patch, HIVE-23039.13.patch, HIVE-23039.14.patch, > HIVE-23039.15.patch, HIVE-23039.16.patch, HIVE-23039.17.patch, > HIVE-23039.18.patch, HIVE-23039.19.patch, HIVE-23039.20.patch, > HIVE-23039.21.patch, HIVE-23039.22.patch, HIVE-23039.23.patch, > HIVE-23039.24.patch, HIVE-23039.25.patch, HIVE-23039.26.patch > > Time Spent: 50m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22458) Add more constraints on showing partitions
[ https://issues.apache.org/jira/browse/HIVE-22458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078318#comment-17078318 ] Hive QA commented on HIVE-22458: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 50s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 47s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 46s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 50s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 32s{color} | {color:blue} standalone-metastore/metastore-common in master has 35 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 53s{color} | {color:blue} parser in master has 3 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 1m 13s{color} | {color:blue} standalone-metastore/metastore-server in master has 190 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 41s{color} | {color:blue} ql in master has 1528 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 41s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 44s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 16s{color} | {color:red} standalone-metastore/metastore-common: The patch generated 7 new + 410 unchanged - 0 fixed = 417 total (was 410) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 29s{color} | {color:red} standalone-metastore/metastore-server: The patch generated 28 new + 1426 unchanged - 1 fixed = 1454 total (was 1427) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 43s{color} | {color:red} ql: The patch generated 6 new + 194 unchanged - 0 fixed = 200 total (was 194) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s{color} | {color:red} itests/hcatalog-unit: The patch generated 3 new + 28 unchanged - 0 fixed = 31 total (was 28) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 25s{color} | {color:red} standalone-metastore/metastore-server generated 2 new + 190 unchanged - 0 fixed = 192 total (was 190) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 44s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 47m 15s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:standalone-metastore/metastore-server | | | Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getPartitionNamesViaSqlInternal(String, String, String, String, List, List, String, Map, Integer, boolean) At MetaStoreDirectSql.java:then immediately reboxed in org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getPartitionNamesViaSqlInternal(String, String, String, String, List, List, String, Map, Integer, boolean) At MetaStoreDirectSql.java:[line 620] | | | Boxing/unboxing to parse a primitive org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getPartitionNamesViaSql(MetaStoreDirectSql$SqlFilterForPushdown, String, String, Integer) At
[jira] [Updated] (HIVE-21304) Show Bucketing version for ReduceSinkOp in explain extended plan
[ https://issues.apache.org/jira/browse/HIVE-21304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-21304: Attachment: HIVE-21304.24.patch > Show Bucketing version for ReduceSinkOp in explain extended plan > > > Key: HIVE-21304 > URL: https://issues.apache.org/jira/browse/HIVE-21304 > Project: Hive > Issue Type: Bug >Reporter: Deepak Jaiswal >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-21304.01.patch, HIVE-21304.02.patch, > HIVE-21304.03.patch, HIVE-21304.04.patch, HIVE-21304.05.patch, > HIVE-21304.06.patch, HIVE-21304.07.patch, HIVE-21304.08.patch, > HIVE-21304.09.patch, HIVE-21304.10.patch, HIVE-21304.11.patch, > HIVE-21304.12.patch, HIVE-21304.13.patch, HIVE-21304.14.patch, > HIVE-21304.15.patch, HIVE-21304.16.patch, HIVE-21304.17.patch, > HIVE-21304.18.patch, HIVE-21304.19.patch, HIVE-21304.20.patch, > HIVE-21304.21.patch, HIVE-21304.22.patch, HIVE-21304.23.patch, > HIVE-21304.24.patch > > > Show Bucketing version for ReduceSinkOp in explain extended plan. > This helps identify what hashing algorithm is being used by ReduceSinkOp. > > cc [~vgarg] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-23157) Mutual TLS authentication for Metastore
[ https://issues.apache.org/jira/browse/HIVE-23157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thiru Paramasivan updated HIVE-23157: - Description: Hive Metastore allows various security features. But it does not allow SSL client authentication (Mutual TLS or mTLS), even though the underlying [ThriftServer|https://github.com/apache/thrift/blob/master/lib/java/src/org/apache/thrift/transport/TSSLTransportFactory.java#L123] supports it. This enhancement request is for additional configurations in Hive Metastore so it can pass them to ThriftServer to allow client authentication. (was: Hive Metastore allows various security features. But it does not allow SSL client authentication (Mutual TLS or mTLS), even though the underlying [ThriftServer|https://github.com/apache/thrift/blob/master/lib/java/src/org/apache/thrift/transport/TSSLTransportFactory.java#L123] supports it. This enhancement request is for additional configurations to ThriftServer to allow client authentication.) > Mutual TLS authentication for Metastore > --- > > Key: HIVE-23157 > URL: https://issues.apache.org/jira/browse/HIVE-23157 > Project: Hive > Issue Type: Improvement > Components: Metastore >Affects Versions: 3.0.0, 2.3.6 >Reporter: Thiru Paramasivan >Priority: Major > > Hive Metastore allows various security features. But it does not allow SSL > client authentication (Mutual TLS or mTLS), even though the underlying > [ThriftServer|https://github.com/apache/thrift/blob/master/lib/java/src/org/apache/thrift/transport/TSSLTransportFactory.java#L123] > supports it. This enhancement request is for additional configurations in > Hive Metastore so it can pass them to ThriftServer to allow client > authentication. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-23114) Insert overwrite with dynamic partitioning is not working correctly with direct insert
[ https://issues.apache.org/jira/browse/HIVE-23114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078292#comment-17078292 ] Marta Kuczora commented on HIVE-23114: -- Patch 3 contains only whitespace removal. Got +1 from [~pvary] on reviewboard. > Insert overwrite with dynamic partitioning is not working correctly with > direct insert > -- > > Key: HIVE-23114 > URL: https://issues.apache.org/jira/browse/HIVE-23114 > Project: Hive > Issue Type: Bug >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Major > Attachments: HIVE-23114.1.patch, HIVE-23114.2.patch, > HIVE-23114.3.patch > > > This is a follow-up Jira for the > [conversation|https://issues.apache.org/jira/browse/HIVE-21164?focusedCommentId=17059280&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17059280] > in HIVE-21164 > Doing an insert overwrite from a multi-insert statement with dynamic > partitioning will give wrong results for ACID tables when > 'hive.acid.direct.insert.enabled' is true or for insert-only tables. 
> Reproduction: > {noformat} > set hive.acid.direct.insert.enabled=true; > set hive.support.concurrency=true; > set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager; > set hive.vectorized.execution.enabled=false; > set hive.stats.autogather=false; > create external table multiinsert_test_text (a int, b int, c int) stored as > textfile; > insert into multiinsert_test_text values (, 11, ), (, 22, ), > (, 33, ), (, 44, NULL), (, 55, NULL); > create table multiinsert_test_acid (a int, b int) partitioned by (c int) > stored as orc tblproperties('transactional'='true'); > create table multiinsert_test_mm (a int, b int) partitioned by (c int) stored > as orc tblproperties('transactional'='true', > 'transactional_properties'='insert_only'); > from multiinsert_test_text a > insert overwrite table multiinsert_test_acid partition (c) > select > a.a, > a.b, > a.c > where a.c is not null > insert overwrite table multiinsert_test_acid partition (c) > select > a.a, > a.b, > a.c > where a.c is null; > select * from multiinsert_test_acid; > from multiinsert_test_text a > insert overwrite table multiinsert_test_mm partition (c) > select > a.a, > a.b, > a.c > where a.c is not null > insert overwrite table multiinsert_test_mm partition (c) > select > a.a, > a.b, > a.c > where a.c is null; > select * from multiinsert_test_mm; > {noformat} > The result of these steps can differ; it depends on the execution order > of the FileSinkOperators of the insert overwrite statements. It can happen > that an error occurs due to a manifest file collision, or that no > error occurs but the result is incorrect. > Running the same insert query with an external table or with an ACID table > with 'hive.acid.direct.insert.enabled=false' will give the following result: > {noformat} > 11 > 22 > 33 > 44 NULL > 55 NULL > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-23114) Insert overwrite with dynamic partitioning is not working correctly with direct insert
[ https://issues.apache.org/jira/browse/HIVE-23114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marta Kuczora updated HIVE-23114: - Attachment: HIVE-23114.3.patch > Insert overwrite with dynamic partitioning is not working correctly with > direct insert > -- > > Key: HIVE-23114 > URL: https://issues.apache.org/jira/browse/HIVE-23114 > Project: Hive > Issue Type: Bug >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Major > Attachments: HIVE-23114.1.patch, HIVE-23114.2.patch, > HIVE-23114.3.patch > > > This is a follow-up Jira for the > [conversation|https://issues.apache.org/jira/browse/HIVE-21164?focusedCommentId=17059280&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17059280] > in HIVE-21164 > Doing an insert overwrite from a multi-insert statement with dynamic > partitioning will give wrong results for ACID tables when > 'hive.acid.direct.insert.enabled' is true or for insert-only tables. > Reproduction: > {noformat} > set hive.acid.direct.insert.enabled=true; > set hive.support.concurrency=true; > set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager; > set hive.vectorized.execution.enabled=false; > set hive.stats.autogather=false; > create external table multiinsert_test_text (a int, b int, c int) stored as > textfile; > insert into multiinsert_test_text values (, 11, ), (, 22, ), > (, 33, ), (, 44, NULL), (, 55, NULL); > create table multiinsert_test_acid (a int, b int) partitioned by (c int) > stored as orc tblproperties('transactional'='true'); > create table multiinsert_test_mm (a int, b int) partitioned by (c int) stored > as orc tblproperties('transactional'='true', > 'transactional_properties'='insert_only'); > from multiinsert_test_text a > insert overwrite table multiinsert_test_acid partition (c) > select > a.a, > a.b, > a.c > where a.c is not null > insert overwrite table multiinsert_test_acid partition (c) > select > a.a, > a.b, > a.c > where a.c is null; > select * 
from multiinsert_test_acid; > from multiinsert_test_text a > insert overwrite table multiinsert_test_mm partition (c) > select > a.a, > a.b, > a.c > where a.c is not null > insert overwrite table multiinsert_test_mm partition (c) > select > a.a, > a.b, > a.c > where a.c is null; > select * from multiinsert_test_mm; > {noformat} > The result of these steps can differ; it depends on the execution order > of the FileSinkOperators of the insert overwrite statements. It can happen > that an error occurs due to a manifest file collision, or that no > error occurs but the result is incorrect. > Running the same insert query with an external table or with an ACID table > with 'hive.acid.direct.insert.enabled=false' will give the following result: > {noformat} > 11 > 22 > 33 > 44 NULL > 55 NULL > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-23039) Checkpointing for repl dump bootstrap phase
[ https://issues.apache.org/jira/browse/HIVE-23039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078272#comment-17078272 ] Hive QA commented on HIVE-23039: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12999323/HIVE-23039.26.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/21512/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21512/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21512/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2020-04-08 13:16:52.456 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-21512/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2020-04-08 13:16:52.458 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at d91cc0c HIVE-23142: HiveStrictManagedMigration fails with tables that have null location (Adam Szita, reviewed by Marta Kuczora) + git clean -f -d Removing standalone-metastore/metastore-server/src/gen/ + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at d91cc0c HIVE-23142: HiveStrictManagedMigration fails with tables that have null location (Adam Szita, reviewed by Marta Kuczora) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2020-04-08 13:16:53.731 + rm -rf ../yetus_PreCommit-HIVE-Build-21512 + mkdir ../yetus_PreCommit-HIVE-Build-21512 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-21512 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-21512/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch Trying to apply the patch with -p0 error: a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenarios.java: does not exist in index error: a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenariosAcidTables.java: does not exist in index error: a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenariosAcrossInstances.java: does not exist in index error: a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenariosExternalTables.java: does not exist in index error: 
a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestTableLevelReplicationScenarios.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/ExportTask.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/repl/DirCopyWork.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplDumpTask.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplLoadTask.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplLoadWork.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/repl/bootstrap/events/TableEvent.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/repl/bootstrap/events/filesystem/BootstrapEventsIterator.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/repl/bootstrap/events/filesystem/DatabaseEventsIterator.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/repl/bootstrap/events/filesystem/FSPartitionEvent.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/repl/bootstrap/events/filesystem/FSTableEvent.java: does not exist in index
[jira] [Updated] (HIVE-23156) NPE if -f is used with HCatCLI
[ https://issues.apache.org/jira/browse/HIVE-23156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Suller updated HIVE-23156: --- Issue Type: Bug (was: Improvement) > NPE if -f is used with HCatCLI > -- > > Key: HIVE-23156 > URL: https://issues.apache.org/jira/browse/HIVE-23156 > Project: Hive > Issue Type: Bug > Components: HCatalog >Reporter: Ivan Suller >Priority: Critical > > After HIVE-22889, if there is no -e CLI parameter then an NPE is thrown. -- This message was sent by Atlassian Jira (v8.3.4#803005)
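The kind of guard this NPE typically calls for — only dereferencing the -e string when it was actually supplied, and falling back to the -f file otherwise — can be sketched generically. All names below are hypothetical and do not reflect HCatCli's actual code:

```java
// Illustrative sketch of defensive handling for the -e / -f options
// reported in HIVE-23156; resolveCommand and its parameters are made up.
public class CliArgGuardSketch {
    public static String resolveCommand(String execString, String fileName) {
        if (execString != null) {       // -e was given
            return execString;
        }
        if (fileName != null) {         // only -f was given; do not touch execString
            return "source " + fileName; // illustrative fallback
        }
        throw new IllegalArgumentException("either -e or -f must be given");
    }

    public static void main(String[] args) {
        System.out.println(resolveCommand(null, "query.hql"));
    }
}
```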
[jira] [Commented] (HIVE-23100) Create RexNode factory and use it in CalcitePlanner
[ https://issues.apache.org/jira/browse/HIVE-23100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078266#comment-17078266 ] Hive QA commented on HIVE-23100: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12999284/HIVE-23100.07.patch {color:green}SUCCESS:{color} +1 due to 6 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 18195 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[constprog_cast] (batchId=8) org.apache.hadoop.hive.ql.plan.mapping.TestCounterMapping.testBreakupAnd2 (batchId=360) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/21511/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21511/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21511/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12999284 - PreCommit-HIVE-Build > Create RexNode factory and use it in CalcitePlanner > --- > > Key: HIVE-23100 > URL: https://issues.apache.org/jira/browse/HIVE-23100 > Project: Hive > Issue Type: Improvement > Components: CBO >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-23100.01.patch, HIVE-23100.02.patch, > HIVE-23100.03.patch, HIVE-23100.04.patch, HIVE-23100.05.patch, > HIVE-23100.06.patch, HIVE-23100.07.patch, HIVE-23100.patch > > > Follow-up of HIVE-22746. > This will allow us to generate directly the RexNode from the AST nodes. 
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22750) Consolidate LockType naming
[ https://issues.apache.org/jira/browse/HIVE-22750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marton Bod updated HIVE-22750: -- Attachment: HIVE-22750.10.patch > Consolidate LockType naming > --- > > Key: HIVE-22750 > URL: https://issues.apache.org/jira/browse/HIVE-22750 > Project: Hive > Issue Type: Improvement > Components: Transactions >Reporter: Zoltan Chovan >Assignee: Marton Bod >Priority: Minor > Attachments: HIVE-22750.1.patch, HIVE-22750.10.patch, > HIVE-22750.2.patch, HIVE-22750.3.patch, HIVE-22750.4.patch, > HIVE-22750.5.patch, HIVE-22750.5.patch, HIVE-22750.6.patch, > HIVE-22750.7.patch, HIVE-22750.8.patch, HIVE-22750.9.patch, > HIVE-22750.9.patch, HIVE-22750.9.patch, HIVE-22750.9.patch > > > Extend enum with string literal to remove unnecessary `id` to `char` casting > for the LockType: > {code:java} > switch (lockType) { > case EXCLUSIVE: > lockChar = LOCK_EXCLUSIVE; > break; > case SHARED_READ: > lockChar = LOCK_SHARED; > break; > case SHARED_WRITE: > lockChar = LOCK_SEMI_SHARED; > break; > } > {code} > Consolidate LockType naming in code and schema upgrade scripts: > {code:java} > CASE WHEN HL.`HL_LOCK_TYPE` = 'e' THEN 'exclusive' WHEN HL.`HL_LOCK_TYPE` = > 'r' THEN 'shared' WHEN HL.`HL_LOCK_TYPE` = 'w' THEN *'semi-shared'* END AS > LOCK_TYPE, > {code} > +*Lock types:*+ > EXCLUSIVE > EXCL_WRITE > SHARED_WRITE > SHARED_READ -- This message was sent by Atlassian Jira (v8.3.4#803005)
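The switch-based mapping quoted above can instead live on the enum itself. A minimal sketch of that idea — the 'e', 'r' and 'w' literals come from the upgrade-script snippet in the issue; the literal for EXCL_WRITE is an assumed placeholder, not from the issue text:

```java
// Hypothetical sketch: LockType carries its own char literal, removing the
// switch-based id-to-char mapping. 'e', 'r', 'w' are from the issue's
// upgrade-script snippet; 'x' for EXCL_WRITE is an assumption.
public enum LockType {
    EXCLUSIVE('e'),     // 'exclusive'
    EXCL_WRITE('x'),    // assumed literal, not stated in the issue
    SHARED_WRITE('w'),  // 'semi-shared'
    SHARED_READ('r');   // 'shared'

    private final char lockChar;

    LockType(char lockChar) {
        this.lockChar = lockChar;
    }

    public char getChar() {
        return lockChar;
    }
}
```

With the literal attached to each constant, call sites reduce to `lockType.getChar()` and the enum stays the single source of truth for both code and schema scripts.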
[jira] [Updated] (HIVE-23039) Checkpointing for repl dump bootstrap phase
[ https://issues.apache.org/jira/browse/HIVE-23039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aasha Medhi updated HIVE-23039: --- Attachment: HIVE-23039.26.patch Status: Patch Available (was: In Progress) > Checkpointing for repl dump bootstrap phase > --- > > Key: HIVE-23039 > URL: https://issues.apache.org/jira/browse/HIVE-23039 > Project: Hive > Issue Type: Bug >Reporter: Aasha Medhi >Assignee: Aasha Medhi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-23039.01.patch, HIVE-23039.02.patch, > HIVE-23039.03.patch, HIVE-23039.04.patch, HIVE-23039.05.patch, > HIVE-23039.06.patch, HIVE-23039.07.patch, HIVE-23039.08.patch, > HIVE-23039.09.patch, HIVE-23039.10.patch, HIVE-23039.11.patch, > HIVE-23039.12.patch, HIVE-23039.13.patch, HIVE-23039.14.patch, > HIVE-23039.15.patch, HIVE-23039.16.patch, HIVE-23039.17.patch, > HIVE-23039.18.patch, HIVE-23039.19.patch, HIVE-23039.20.patch, > HIVE-23039.21.patch, HIVE-23039.22.patch, HIVE-23039.23.patch, > HIVE-23039.24.patch, HIVE-23039.25.patch, HIVE-23039.26.patch > > Time Spent: 50m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-23039) Checkpointing for repl dump bootstrap phase
[ https://issues.apache.org/jira/browse/HIVE-23039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aasha Medhi updated HIVE-23039: --- Status: In Progress (was: Patch Available) > Checkpointing for repl dump bootstrap phase > --- > > Key: HIVE-23039 > URL: https://issues.apache.org/jira/browse/HIVE-23039 > Project: Hive > Issue Type: Bug >Reporter: Aasha Medhi >Assignee: Aasha Medhi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-23039.01.patch, HIVE-23039.02.patch, > HIVE-23039.03.patch, HIVE-23039.04.patch, HIVE-23039.05.patch, > HIVE-23039.06.patch, HIVE-23039.07.patch, HIVE-23039.08.patch, > HIVE-23039.09.patch, HIVE-23039.10.patch, HIVE-23039.11.patch, > HIVE-23039.12.patch, HIVE-23039.13.patch, HIVE-23039.14.patch, > HIVE-23039.15.patch, HIVE-23039.16.patch, HIVE-23039.17.patch, > HIVE-23039.18.patch, HIVE-23039.19.patch, HIVE-23039.20.patch, > HIVE-23039.21.patch, HIVE-23039.22.patch, HIVE-23039.23.patch, > HIVE-23039.24.patch, HIVE-23039.25.patch, HIVE-23039.26.patch > > Time Spent: 50m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-23100) Create RexNode factory and use it in CalcitePlanner
[ https://issues.apache.org/jira/browse/HIVE-23100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078231#comment-17078231 ] Hive QA commented on HIVE-23100: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 4s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 52s{color} | {color:blue} ql in master has 1528 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 53s{color} | {color:red} ql: The patch generated 289 new + 985 unchanged - 47 fixed = 1274 total (was 1032) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 4m 0s{color} | {color:red} ql generated 7 new + 1523 unchanged - 5 fixed = 1530 total (was 1528) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 25m 21s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:ql | | | Check for oddness that won't work for negative numbers in org.apache.hadoop.hive.ql.optimizer.calcite.translator.RexNodeConverter.adjustCaseBranchTypes(List, RelDataType, RexBuilder) At RexNodeConverter.java:work for negative numbers in org.apache.hadoop.hive.ql.optimizer.calcite.translator.RexNodeConverter.adjustCaseBranchTypes(List, RelDataType, RexBuilder) At RexNodeConverter.java:[line 408] | | | Check for oddness that won't work for negative numbers in org.apache.hadoop.hive.ql.optimizer.calcite.translator.RexNodeConverter.rewriteCaseChildren(String, List, RexBuilder) At RexNodeConverter.java:work for negative numbers in org.apache.hadoop.hive.ql.optimizer.calcite.translator.RexNodeConverter.rewriteCaseChildren(String, List, RexBuilder) At RexNodeConverter.java:[line 374] | | | Dead store to inputPosMap in org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genLateralViewPlans(ASTNode, Map) At CalcitePlanner.java:org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genLateralViewPlans(ASTNode, Map) At CalcitePlanner.java:[line 3389] | | | Exception is caught when Exception is not thrown in org.apache.hadoop.hive.ql.parse.type.HiveFunctionHelper.getWindowAggregateFunctionInfo(boolean, boolean, String, List) At HiveFunctionHelper.java:is not thrown in 
org.apache.hadoop.hive.ql.parse.type.HiveFunctionHelper.getWindowAggregateFunctionInfo(boolean, boolean, String, List) At HiveFunctionHelper.java:[line 425] | | | Boxing/unboxing to parse a primitive org.apache.hadoop.hive.ql.parse.type.RexNodeExprFactory.createBigintConstantExpr(String) At RexNodeExprFactory.java:org.apache.hadoop.hive.ql.parse.type.RexNodeExprFactory.createBigintConstantExpr(String) At RexNodeExprFactory.java:[line 213] | | | Boxing/unboxing to parse a primitive org.apache.hadoop.hive.ql.parse.type.RexNodeExprFactory.createIntConstantExpr(String) At RexNodeExprFactory.java:org.apache.hadoop.hive.ql.parse.type.RexNodeExprFactory.createIntConstantExpr(String) At RexNodeExprFactory.java:[line 224] | | | org.apache.hadoop.hive.ql.parse.type.RexNodeExprFactory$HiveNlsString doesn't override org.apache.calcite.util.NlsString.equals(Object) A
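Two of the FindBugs warnings above are worth unpacking. The "check for oddness" pattern flags `x % 2 == 1`, which misses negative odd numbers because Java's remainder keeps the dividend's sign; the "boxing/unboxing to parse a primitive" pattern flags `Integer.valueOf(s)` assigned to an `int`, where `Integer.parseInt(s)` avoids the intermediate box. A small illustration (not the patched Hive code):

```java
public class FindbugsExamples {
    // Buggy oddness check: -3 % 2 evaluates to -1 in Java, so this
    // returns false for negative odd numbers.
    static boolean isOddBuggy(int x) {
        return x % 2 == 1;
    }

    // Correct: comparing the remainder against 0 works for any sign.
    static boolean isOdd(int x) {
        return x % 2 != 0;
    }

    // Boxing warning: valueOf creates an Integer that is immediately
    // unboxed back to int; parseInt returns the primitive directly.
    static int parseBoxed(String s)     { return Integer.valueOf(s); }
    static int parsePrimitive(String s) { return Integer.parseInt(s); }
}
```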
[jira] [Work logged] (HIVE-23095) NDV might be overestimated for a table with ~70 value
[ https://issues.apache.org/jira/browse/HIVE-23095?focusedWorklogId=418477&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-418477 ] ASF GitHub Bot logged work on HIVE-23095: - Author: ASF GitHub Bot Created on: 08/Apr/20 12:22 Start Date: 08/Apr/20 12:22 Worklog Time Spent: 10m Work Description: kgyrtkirk commented on pull request #964: HIVE-23095 ndv 70 URL: https://github.com/apache/hive/pull/964#discussion_r405482181 ## File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/common/ndv/hll/HLLSparseRegister.java ## @@ -148,8 +148,12 @@ public int encodeHash(long hashcode) { } } - public int getSize() { -return sparseMap.size() + tempListIdx; + public boolean isSizeGreaterThan(int s) { +if (sparseMap.size() + tempListIdx > s) { + mergeTempListToSparseMap(); Review comment: we are using: * sizeOptimized => p=10 * bitpacking is enabled by default formula to count the threshold in this case is: ``` 2**p * 6/8/5 = ~150 ``` https://github.com/apache/hive/blob/d91cc0cd84b7d0ecc0f29d44b109b46e21194eec/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/common/ndv/hll/HyperLogLog.java#L116 I also found that a little too few...but that's what it is... all changes are here; this conversation is on a diff which is "outdated" This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 418477) Time Spent: 50m (was: 40m) > NDV might be overestimated for a table with ~70 value > - > > Key: HIVE-23095 > URL: https://issues.apache.org/jira/browse/HIVE-23095 > Project: Hive > Issue Type: Bug >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Labels: pull-request-available > Attachments: HIVE-23095.01.patch, HIVE-23095.02.patch, > HIVE-23095.03.patch, HIVE-23095.04.patch, HIVE-23095.04.patch, > HIVE-23095.04.patch, HIVE-23095.05.patch, hll-bench.md > > Time Spent: 50m > Remaining Estimate: 0h > > uncovered during looking into HIVE-23082 > https://issues.apache.org/jira/browse/HIVE-23082?focusedCommentId=17067773&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17067773 -- This message was sent by Atlassian Jira (v8.3.4#803005)
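The threshold formula quoted in the review comment (p=10, bit-packing enabled, `2**p * 6/8/5 = ~150`) can be checked with a few lines of arithmetic. The interpretation in the comments below — 6 bits per dense register, converted to bytes, weighed against roughly 5 bytes per sparse entry — is my reading of the formula, not something the comment spells out:

```java
public class HllSwitchThreshold {
    // 2^p dense registers at 6 bits each (/8 to get bytes), divided by an
    // assumed ~5 bytes per sparse entry: the sparse encoding stops paying
    // off around this many entries. Interpretation of the factors is an
    // assumption; only the formula itself appears in the review comment.
    static int encodingSwitchThreshold(int p) {
        return (1 << p) * 6 / 8 / 5;
    }

    public static void main(String[] args) {
        // For p = 10: 1024 * 6 / 8 / 5 = 153, the "~150" in the comment.
        System.out.println(encodingSwitchThreshold(10));
    }
}
```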
[jira] [Updated] (HIVE-23058) Compaction task reattempt fails with FileAlreadyExistsException
[ https://issues.apache.org/jira/browse/HIVE-23058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Riju Trivedi updated HIVE-23058: Attachment: HIVE-23058.2.patch > Compaction task reattempt fails with FileAlreadyExistsException > --- > > Key: HIVE-23058 > URL: https://issues.apache.org/jira/browse/HIVE-23058 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.1.0 >Reporter: Riju Trivedi >Assignee: Riju Trivedi >Priority: Major > Attachments: HIVE-23058.2.patch, HIVE_23058.1.patch, HIVE_23058.patch > > > The issue occurs when a compaction task attempt is relaunched after the first attempt fails due to preemption by the scheduler or any other reason. > Since the _tmp directory created by the first attempt is left uncleaned after the failure, the second attempt of the task fails with a "FileAlreadyExistsException": > Caused by: > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.FileAlreadyExistsException): > > /warehouse/tablespace/managed/hive/default.db/compaction_test/_tmp_3670bbef-ba7a-4c10-918d-9a2ee17cbd22/base_186/bucket_5 > for client 10.xx.xx.xxx already exists
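One way a retry can avoid tripping over the stale directory is to delete any pre-existing _tmp path before writing. The sketch below illustrates the pattern with local-filesystem APIs only — the actual fix in the attached patch may differ, and on HDFS this would go through Hadoop's `FileSystem` API rather than `java.nio.file`:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class TmpDirCleanup {
    // Recursively delete a leftover _tmp directory from a failed prior
    // attempt so the retry can recreate it without hitting
    // FileAlreadyExistsException. Illustrative sketch only.
    static void deleteIfExists(Path tmpDir) throws IOException {
        if (Files.exists(tmpDir)) {
            try (Stream<Path> walk = Files.walk(tmpDir)) {
                // Delete children before parents (deepest paths first).
                walk.sorted(Comparator.reverseOrder())
                    .forEach(p -> p.toFile().delete());
            }
        }
    }
}
```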
[jira] [Commented] (HIVE-23153) deregister from zookeeper is not properly worked on kerberized environment
[ https://issues.apache.org/jira/browse/HIVE-23153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17078096#comment-17078096 ] Hive QA commented on HIVE-23153: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12999273/HIVE-23153.01.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 18195 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[cbo_query19] (batchId=307) org.apache.hive.service.auth.TestImproperTrustDomainAuthenticationBinary.org.apache.hive.service.auth.TestImproperTrustDomainAuthenticationBinary (batchId=287) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/21510/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21510/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21510/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12999273 - PreCommit-HIVE-Build > deregister from zookeeper is not properly worked on kerberized environment > -- > > Key: HIVE-23153 > URL: https://issues.apache.org/jira/browse/HIVE-23153 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Eugene Chung >Assignee: Eugene Chung >Priority: Minor > Attachments: HIVE-23153.01.patch, Screen Shot 2020-04-08 at > 5.00.40.png > > > Deregistering from Zookeeper, initiated by the command 'hive --service > hiveserver2 -deregister ', does not work properly when HiveServer2 > and Zookeeper are kerberized. Even though hive-site.xml has configuration for > Zookeeper Kerberos login (hive.server2.authentication.kerberos.principal and > keytab), it isn't used. Running kinit with the HiveServer2 keytab would make > it work, but hive-site.xml already has that configuration, so the user shouldn't > need to run kinit. > * Kerberos login to Zookeeper Failed : Will not attempt to authenticate > using SASL (unknown error) > {code:java} > 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: > 
-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-hdfs-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-core-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-yarn-server-timeline-pluginstorage-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-client-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jettison-1.3.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-server-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-util-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/metrics-core-3.1.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/servlet-api-2.5.jar:/usr/hdp/3.1.0.0-78/tez/lib/slf4j-api-1.7.10.jar:/usr/hdp/3.1.0.0-78/tez/conf > 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: Client > environment:java.library.path=: > 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: Client > environment:java.io.tmpdir=/tmp > 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: Client > environment:java.compiler= > 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: Client > environment:os.name=Linux > 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: Client > environment:os.arch=amd64 > 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: Client > environment:os.version=... > 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: Client > environment:user.name=... > 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: Client > environment:user.home=... > 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: Client > environment:user.dir=... > 2020-04-08 04:45:44,699 INFO [main] zookeeper.ZooKeeper: Initiating client > connection, connectString=... 
sessionTimeout=6 > watcher=org.apache.curator.ConnectionState@706eab5d > 2020-04-08 04:45:44,725 INFO [main-SendThread(...)] zookeeper.ClientCnxn: > Opening socket connection to server ...:2181. Will not attempt to > authenticate using SASL (unknown error) > 2020-04-08 04:45:44,7
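For context on the log above: ZooKeeper's client-side SASL login is normally driven by a JAAS configuration containing a Client section; the "Will not attempt to authenticate using SASL" message appears when no such login context is available, which matches the report that the deregister path never wires the hive-site.xml principal and keytab into one. A hypothetical JAAS fragment is shown below — the keytab path and principal are placeholders for illustration, not values from the issue:

```
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/hive.service.keytab"
  principal="hive/host.example.com@EXAMPLE.COM";
};
```

The patch presumably builds an equivalent login context programmatically from hive.server2.authentication.kerberos.principal and the keytab setting, so the standalone deregister command works without a prior kinit.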
[jira] [Updated] (HIVE-23039) Checkpointing for repl dump bootstrap phase
[ https://issues.apache.org/jira/browse/HIVE-23039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aasha Medhi updated HIVE-23039: --- Status: In Progress (was: Patch Available) > Checkpointing for repl dump bootstrap phase > --- > > Key: HIVE-23039 > URL: https://issues.apache.org/jira/browse/HIVE-23039 > Project: Hive > Issue Type: Bug >Reporter: Aasha Medhi >Assignee: Aasha Medhi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-23039.01.patch, HIVE-23039.02.patch, > HIVE-23039.03.patch, HIVE-23039.04.patch, HIVE-23039.05.patch, > HIVE-23039.06.patch, HIVE-23039.07.patch, HIVE-23039.08.patch, > HIVE-23039.09.patch, HIVE-23039.10.patch, HIVE-23039.11.patch, > HIVE-23039.12.patch, HIVE-23039.13.patch, HIVE-23039.14.patch, > HIVE-23039.15.patch, HIVE-23039.16.patch, HIVE-23039.17.patch, > HIVE-23039.18.patch, HIVE-23039.19.patch, HIVE-23039.20.patch, > HIVE-23039.21.patch, HIVE-23039.22.patch, HIVE-23039.23.patch, > HIVE-23039.24.patch, HIVE-23039.25.patch > > Time Spent: 50m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-23039) Checkpointing for repl dump bootstrap phase
[ https://issues.apache.org/jira/browse/HIVE-23039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aasha Medhi updated HIVE-23039: --- Attachment: HIVE-23039.25.patch Status: Patch Available (was: In Progress) > Checkpointing for repl dump bootstrap phase > --- > > Key: HIVE-23039 > URL: https://issues.apache.org/jira/browse/HIVE-23039 > Project: Hive > Issue Type: Bug >Reporter: Aasha Medhi >Assignee: Aasha Medhi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-23039.01.patch, HIVE-23039.02.patch, > HIVE-23039.03.patch, HIVE-23039.04.patch, HIVE-23039.05.patch, > HIVE-23039.06.patch, HIVE-23039.07.patch, HIVE-23039.08.patch, > HIVE-23039.09.patch, HIVE-23039.10.patch, HIVE-23039.11.patch, > HIVE-23039.12.patch, HIVE-23039.13.patch, HIVE-23039.14.patch, > HIVE-23039.15.patch, HIVE-23039.16.patch, HIVE-23039.17.patch, > HIVE-23039.18.patch, HIVE-23039.19.patch, HIVE-23039.20.patch, > HIVE-23039.21.patch, HIVE-23039.22.patch, HIVE-23039.23.patch, > HIVE-23039.24.patch, HIVE-23039.25.patch > > Time Spent: 50m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)