[jira] [Commented] (HIVE-18789) Disallow embedded element in UDFXPathUtil
[ https://issues.apache.org/jira/browse/HIVE-18789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375385#comment-16375385 ] Hive QA commented on HIVE-18789: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12911811/HIVE-18789.1.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 41 failed/errored test(s), 13739 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] (batchId=48) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=13) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_exchangepartition] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=174) org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_mv] (batchId=248) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez_empty] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=166) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=170) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_1] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=158) 
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_div0] (batchId=167) org.apache.hadoop.hive.cli.TestNegativeCliDriver.org.apache.hadoop.hive.cli.TestNegativeCliDriver (batchId=94) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[smb_mapjoin_14] (batchId=94) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[sortmerge_mapjoin_mismatch_1] (batchId=94) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[split_sample_wrong_format] (batchId=94) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_1] (batchId=94) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_2] (batchId=94) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[union22] (batchId=93) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[update_notnull_constraint] (batchId=93) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=121) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_scalar] (batchId=123) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=221) org.apache.hadoop.hive.ql.TestAcidOnTez.testGetSplitsLocks (batchId=224) org.apache.hadoop.hive.ql.TestTxnLoadData.loadDataNonAcid2AcidConversionVectorized (batchId=259) org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=187) org.apache.hive.hcatalog.listener.TestDbNotificationListener.alterIndex (batchId=242) org.apache.hive.hcatalog.listener.TestDbNotificationListener.createIndex (batchId=242) org.apache.hive.hcatalog.listener.TestDbNotificationListener.dropIndex (batchId=242) org.apache.hive.jdbc.TestJdbcWithMiniHS2.testConnectionSchemaAPIs (batchId=238) org.apache.hive.jdbc.TestJdbcWithMiniHS2.testHttpHeaderSize (batchId=238) org.apache.hive.jdbc.TestJdbcWithMiniLlap.testLlapInputFormatEndToEnd (batchId=235) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234) 
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234) org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill (batchId=235) org.apache.hive.service.cli.TestRetryingThriftCLIServiceClient.testSessionLifeAfterTransportClose (batchId=218) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9345/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9345/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9345/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 41 tests failed {noformat} This message is automatically generated. ATTACHMENT ID:
[jira] [Commented] (HIVE-18645) invalid url address in README.txt from module hbase-handler
[ https://issues.apache.org/jira/browse/HIVE-18645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16375369#comment-16375369 ] Saijin Huang commented on HIVE-18645: - Hi [~lirui], it is a minor change. Can you please take a quick review? > invalid url address in README.txt from module hbase-handler > --- > > Key: HIVE-18645 > URL: https://issues.apache.org/jira/browse/HIVE-18645 > Project: Hive > Issue Type: Bug > Affects Versions: 3.0.0 > Reporter: Saijin Huang > Assignee: Saijin Huang > Priority: Trivial > Attachments: HIVE-18645.1.patch > > > The url "http://wiki.apache.org/hadoop/Hive/HBaseIntegration" is invalid in > README.txt from module hbase-handler. > Update the url and change .txt to .md -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17580) Remove dependency of get_fields_with_environment_context API to serde
[ https://issues.apache.org/jira/browse/HIVE-17580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375360#comment-16375360 ] Vihang Karajgaonkar commented on HIVE-17580: Posting the pre-commit result here. 294 tests failed {noformat} Test Result (294 failures / +254) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[delete_whole_partition] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_ppd_char] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[druid_basic2] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[typechangetest] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_ppd_varchar] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[varchar_union1] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_udf2] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_types_vectorization] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_join] Show all failed tests >>> {noformat} Updated the patch which fixes the error. > Remove dependency of get_fields_with_environment_context API to serde > - > > Key: HIVE-17580 > URL: https://issues.apache.org/jira/browse/HIVE-17580 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Major > Labels: pull-request-available > Attachments: HIVE-17580.003-standalone-metastore.patch, > HIVE-17580.04-standalone-metastore.patch, > HIVE-17580.05-standalone-metastore.patch, > HIVE-17580.06-standalone-metastore.patch, > HIVE-17580.07-standalone-metastore.patch, > HIVE-17580.08-standalone-metastore.patch > > > {{get_fields_with_environment_context}} metastore API uses {{Deserializer}} > class to access the fields metadata for the cases where it is stored along > with the data files (avro tables). 
The problem is that the {{Deserializer}} class is > defined in the hive-serde module, and in order to make the metastore independent of > Hive we will have to remove this dependency (at least we should change it to a > runtime dependency instead of a compile-time one). > The other option is to investigate whether we can use SearchArgument to provide this > functionality. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
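The runtime-scope option mentioned in the issue can be expressed in Maven terms. The fragment below is a hypothetical sketch only (the artifact coordinates and property name are illustrative, not taken from the actual Hive pom): with {{<scope>runtime</scope>}}, metastore code can no longer compile against {{Deserializer}}, while the class still resolves on the classpath at run time.

```xml
<!-- Hypothetical sketch: hive-serde demoted from the default compile scope
     to runtime scope in the metastore module's pom.xml. Compilation of
     metastore sources against serde classes then fails fast, surfacing any
     remaining compile-time dependency. -->
<dependency>
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-serde</artifactId>
  <version>${hive.version}</version>
  <scope>runtime</scope>
</dependency>
```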
[jira] [Updated] (HIVE-18795) upgrade accumulo to 1.8.1
[ https://issues.apache.org/jira/browse/HIVE-18795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Saijin Huang updated HIVE-18795: Attachment: HIVE-18795.1.patch > upgrade accumulo to 1.8.1 > - > > Key: HIVE-18795 > URL: https://issues.apache.org/jira/browse/HIVE-18795 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Saijin Huang >Assignee: Saijin Huang >Priority: Minor > Attachments: HIVE-18795.1.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18795) upgrade accumulo to 1.8.1
[ https://issues.apache.org/jira/browse/HIVE-18795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Saijin Huang updated HIVE-18795: Status: Patch Available (was: Open) > upgrade accumulo to 1.8.1 > - > > Key: HIVE-18795 > URL: https://issues.apache.org/jira/browse/HIVE-18795 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Saijin Huang >Assignee: Saijin Huang >Priority: Minor > Attachments: HIVE-18795.1.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18789) Disallow embedded element in UDFXPathUtil
[ https://issues.apache.org/jira/browse/HIVE-18789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375356#comment-16375356 ] Hive QA commented on HIVE-18789: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 39s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 33s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 34s{color} | {color:red} ql: The patch generated 7 new + 12 unchanged - 0 fixed = 19 total (was 12) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 12m 51s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-9345/dev-support/hive-personality.sh | | git revision | master / 53a590b | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9345/yetus/diff-checkstyle-ql.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9345/yetus/patch-asflicense-problems.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9345/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Disallow embedded element in UDFXPathUtil > - > > Key: HIVE-18789 > URL: https://issues.apache.org/jira/browse/HIVE-18789 > Project: Hive > Issue Type: Bug >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-18789.1.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-17580) Remove dependency of get_fields_with_environment_context API to serde
[ https://issues.apache.org/jira/browse/HIVE-17580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar updated HIVE-17580: --- Attachment: HIVE-17580.08-standalone-metastore.patch > Remove dependency of get_fields_with_environment_context API to serde > - > > Key: HIVE-17580 > URL: https://issues.apache.org/jira/browse/HIVE-17580 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore > Reporter: Vihang Karajgaonkar > Assignee: Vihang Karajgaonkar > Priority: Major > Labels: pull-request-available > Attachments: HIVE-17580.003-standalone-metastore.patch, > HIVE-17580.04-standalone-metastore.patch, > HIVE-17580.05-standalone-metastore.patch, > HIVE-17580.06-standalone-metastore.patch, > HIVE-17580.07-standalone-metastore.patch, > HIVE-17580.08-standalone-metastore.patch > > > The {{get_fields_with_environment_context}} metastore API uses the {{Deserializer}} > class to access the fields metadata for the cases where it is stored along > with the data files (avro tables). The problem is that the {{Deserializer}} class is > defined in the hive-serde module, and in order to make the metastore independent of > Hive we will have to remove this dependency (at least we should change it to a > runtime dependency instead of a compile-time one). > The other option is to investigate whether we can use SearchArgument to provide this > functionality. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18524) Vectorization: Execution failure related to non-standard embedding of IfExprConditionalFilter inside VectorUDFAdaptor (Revert HIVE-17139)
[ https://issues.apache.org/jira/browse/HIVE-18524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16375355#comment-16375355 ] Ke Jia commented on HIVE-18524: --- [~mmccline]: I have moved the creation of the vector expression code from VectorUDFAdaptor to VectorizationContext. The RB link is [https://reviews.apache.org/r/61019/diff/11#index_header]. If there are any questions, please tell me. Thanks! > Vectorization: Execution failure related to non-standard embedding of > IfExprConditionalFilter inside VectorUDFAdaptor (Revert HIVE-17139) > - > > Key: HIVE-18524 > URL: https://issues.apache.org/jira/browse/HIVE-18524 > Project: Hive > Issue Type: Bug > Components: Hive > Affects Versions: 3.0.0 > Reporter: Matt McCline > Assignee: Matt McCline > Priority: Critical > Fix For: 3.0.0 > > Attachments: HIVE-18524.01.patch, HIVE-18524.02.patch > > > {noformat} > insert overwrite table insert_10_1 > select cast(gpa as float), >age, >IF(age>40,cast('2011-01-01 01:01:01' as timestamp),NULL), >IF(LENGTH(name)>10,cast(name as binary),NULL) > from studentnull10k > vectorizationSchemaColumns: [0:name:string, 1:age:int, 2:gpa:double] > ExprNodeDescs: > UDFToFloat(gpa) (type: float), > age (type: int), > if((age > 40), 2011-01-01 01:01:01.0, null) (type: timestamp), > if((length(name) > 10), CAST( name AS BINARY), null) (type: binary) > selectExpressions: > VectorUDFAdaptor(if((age > 40), 2011-01-01 01:01:01.0, null)) > (children: LongColGreaterLongScalar(col 1:int, val 40) -> 4:boolean) > -> 5:timestamp, > VectorUDFAdaptor(if((length(name) > 10), CAST( name AS BINARY), null)) > (children: LongColGreaterLongScalar(col 4:int, val 10)(children: > StringLength(col 0:string) -> 4:int) -> 6:boolean, > VectorUDFAdaptor(CAST( name AS BINARY)) -> 7:binary) -> 8:binary > {noformat} > *// Notice there is no vector expression shown for the last IF stmt.* It has > been magically embedded inside the VectorUDFAdaptor object... > Execution results in this call stack. 
> {noformat} > Caused by: java.lang.NullPointerException > at java.util.Arrays.copyOfRange(Arrays.java:3521) > at > org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpressionWriterFactory$9.writeValue(VectorExpressionWriterFactory.java:1101) > at > org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpressionWriterFactory$VectorExpressionWriterBytes.writeValue(VectorExpressionWriterFactory.java:343) > at > org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFArgDesc.getDeferredJavaObject(VectorUDFArgDesc.java:123) > at > org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFAdaptor.setResult(VectorUDFAdaptor.java:211) > at > org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFAdaptor.evaluate(VectorUDFAdaptor.java:177) > at > org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:145) > ... 22 more > {noformat} > Change is due to: > HIVE-17139: Conditional expressions optimization: skip the expression > evaluation if the condition is not satisfied for vectorization engine. (Jia > Ke, reviewed by Ferdinand Xu) > Embedding a raw vector expression outside of VectorizationContext is quite > non-standard and evidently buggy. > [~Ferd] [~Ke Jia] I am inclined to revert this change. Comments? CC: > [~ashutoshc] [~hagleitn] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
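The failure mode in the stack trace above (the conditional-evaluation optimization skips a branch for some rows, leaving a null output slot that the adaptor later reads unconditionally) can be illustrated with a small stand-alone sketch. The class and method names below are hypothetical simplifications of the idea, not Hive's actual classes:

```java
// Hypothetical simplified model of the bug: for IF(cond, a, b), the
// optimized path evaluates the ELSE branch only on rows where cond is
// false, so rows where cond is true are left null in the ELSE column.
// A later unconditional per-row read of that column hits the null.
public class ConditionalSkipDemo {
    // Simulates the ELSE branch's output column; null means "never evaluated".
    static byte[][] evaluateElseBranch(boolean[] cond, byte[][] elseValues) {
        byte[][] out = new byte[cond.length][];
        for (int i = 0; i < cond.length; i++) {
            if (!cond[i]) {        // optimized path: evaluate only where cond is false
                out[i] = elseValues[i];
            }                      // rows with cond == true stay null
        }
        return out;
    }

    // Adaptor-style reader: copies the row's bytes unconditionally,
    // throwing NullPointerException on a skipped (null) row.
    static byte[] readRow(byte[][] col, int row) {
        return java.util.Arrays.copyOfRange(col[row], 0, col[row].length);
    }
}
```

Reading a row whose branch was skipped reproduces the NullPointerException from `Arrays.copyOfRange`; the fix direction discussed in this thread is to read only the branch that was actually taken for each row.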
[jira] [Assigned] (HIVE-18795) upgrade accumulo to 1.8.1
[ https://issues.apache.org/jira/browse/HIVE-18795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Saijin Huang reassigned HIVE-18795: --- > upgrade accumulo to 1.8.1 > - > > Key: HIVE-18795 > URL: https://issues.apache.org/jira/browse/HIVE-18795 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Saijin Huang >Assignee: Saijin Huang >Priority: Minor > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18726) Implement DEFAULT constraint
[ https://issues.apache.org/jira/browse/HIVE-18726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375342#comment-16375342 ] Hive QA commented on HIVE-18726: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12911806/HIVE-18726.3.patch {color:green}SUCCESS:{color} +1 due to 14 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 336 failed/errored test(s), 13739 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_deep_filters] (batchId=90) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_groupby2] (batchId=47) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_groupby] (batchId=49) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_select] (batchId=62) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cast_on_constant] (batchId=24) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[constant_prop_3] (batchId=44) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[constantfolding] (batchId=75) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[constprog_type] (batchId=1) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_with_constraints] (batchId=69) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[druid_basic2] (batchId=11) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[druid_basic3] (batchId=61) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[druid_intervals] (batchId=23) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[druid_timeseries] (batchId=59) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[druid_topn] (batchId=3) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[except_all] (batchId=46) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[extrapolate_part_stats_date] (batchId=20) 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[fold_eq_with_case_when] (batchId=81) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_cube1] (batchId=4) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_cube_multi_gby] (batchId=12) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_grouping_id3] (batchId=26) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_grouping_sets1] (batchId=69) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_grouping_sets2] (batchId=25) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_grouping_sets3] (batchId=1) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_grouping_sets4] (batchId=32) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_grouping_sets5] (batchId=49) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_grouping_sets6] (batchId=71) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_grouping_sets_grouping] (batchId=4) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_grouping_sets_limit] (batchId=17) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_grouping_window] (batchId=32) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_rollup1] (batchId=33) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_rollup_empty] (batchId=55) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[having2] (batchId=16) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[having] (batchId=43) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[implicit_cast1] (batchId=60) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[infer_bucket_sort_grouping_operators] (batchId=55) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[infer_const_type] (batchId=68) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[interval_alt] (batchId=4) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[interval_arithmetic] (batchId=47) 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[limit_pushdown2] (batchId=16) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[literal_ints] (batchId=89) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=13) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_exchangepartition] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[nonmr_fetch_threshold] (batchId=82) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[num_op_type_conv] (batchId=76) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge5] (batchId=56) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge6] (batchId=34) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge_incompat1] (batchId=68) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_14] (batchId=39) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_17]
[jira] [Commented] (HIVE-18524) Vectorization: Execution failure related to non-standard embedding of IfExprConditionalFilter inside VectorUDFAdaptor (Revert HIVE-17139)
[ https://issues.apache.org/jira/browse/HIVE-18524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16375323#comment-16375323 ] Ke Jia commented on HIVE-18524: --- [~mmccline]: {quote}Before we add any more vectorization features like this one I'd like to see a better testing framework. HIVE-18622 was a huge fix to many vector expressions that were handling NULLs incorrectly. In particular, IfExprColumnNull and IfExprNullColumn had bugs. By better framework, I'd like to see all the different UDFs generated with expressions for both row- and vector- and have data with NULLs generated and driven through those expressions and the results compared. I think a framework like this would have found the bug that resulted in the revert. And, also would have found many of the problems fixed by HIVE-18622. {quote}Now we have passed all the original qtests and the newly added qtests I can think of. If you have more comprehensive test cases, please tell me, thanks. {quote}Vector expressions need to be created in the VectorizationContext class and not as a special case in VectorUDFAdaptor. VectorizationContext instantiates vector expressions so they have proper TypeInfo and can be displayed with EXPLAIN VECTORIZATION. {quote} I will try to move the creation of the vector expression code from VectorUDFAdaptor to VectorizationContext. {quote}What I don't understand is why an IF expr with computed THEN and/or ELSE values isn't just another vector expression. I may be missing something. I certainly see it is more sophisticated in that you want to avoid executing any THEN expressions whose IF expr row value is false and similarly avoid executing any ELSE expressions whose IF expr is true. {quote}The CASE WHEN ELSE expression is translated to an IF expression in HIVE-16731. If I have misunderstood your question, please tell me, thanks. {quote}Usually, rather than add special cases to existing vector expressions we use GenVectorCode to generate with templates new class variations. 
We try to avoid complicated base classes that have lots of decision logic. That was my impression when I read the code a while ago. I could be wrong but even though it may seem redundant we generally have vectorization code variations just focus on the specific variation they are executing. {quote}The special case only applies to the row expression (VectorUDFAdaptor.java) to solve the GenericUDFIf case; for the vector expression, I think it can apply to the general case. If I have misunderstood, please tell me. > Vectorization: Execution failure related to non-standard embedding of > IfExprConditionalFilter inside VectorUDFAdaptor (Revert HIVE-17139) > - > > Key: HIVE-18524 > URL: https://issues.apache.org/jira/browse/HIVE-18524 > Project: Hive > Issue Type: Bug > Components: Hive > Affects Versions: 3.0.0 > Reporter: Matt McCline > Assignee: Matt McCline > Priority: Critical > Fix For: 3.0.0 > > Attachments: HIVE-18524.01.patch, HIVE-18524.02.patch > > > {noformat} > insert overwrite table insert_10_1 > select cast(gpa as float), >age, >IF(age>40,cast('2011-01-01 01:01:01' as timestamp),NULL), >IF(LENGTH(name)>10,cast(name as binary),NULL) > from studentnull10k > vectorizationSchemaColumns: [0:name:string, 1:age:int, 2:gpa:double] > ExprNodeDescs: > UDFToFloat(gpa) (type: float), > age (type: int), > if((age > 40), 2011-01-01 01:01:01.0, null) (type: timestamp), > if((length(name) > 10), CAST( name AS BINARY), null) (type: binary) > selectExpressions: > VectorUDFAdaptor(if((age > 40), 2011-01-01 01:01:01.0, null)) > (children: LongColGreaterLongScalar(col 1:int, val 40) -> 4:boolean) > -> 5:timestamp, > VectorUDFAdaptor(if((length(name) > 10), CAST( name AS BINARY), null)) > (children: LongColGreaterLongScalar(col 4:int, val 10)(children: > StringLength(col 0:string) -> 4:int) -> 6:boolean, > VectorUDFAdaptor(CAST( name AS BINARY)) -> 7:binary) -> 8:binary > {noformat} > *// Notice there is no vector expression shown for the last IF stmt.* It has > been 
magically embedded inside the VectorUDFAdaptor object... > Execution results in this call stack. > {noformat} > Caused by: java.lang.NullPointerException > at java.util.Arrays.copyOfRange(Arrays.java:3521) > at > org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpressionWriterFactory$9.writeValue(VectorExpressionWriterFactory.java:1101) > at > org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpressionWriterFactory$VectorExpressionWriterBytes.writeValue(VectorExpressionWriterFactory.java:343) > at >
[jira] [Commented] (HIVE-18726) Implement DEFAULT constraint
[ https://issues.apache.org/jira/browse/HIVE-18726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375318#comment-16375318 ] Hive QA commented on HIVE-18726: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 2s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 15s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 39s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 4s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 15s{color} | {color:red} hcatalog-unit in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 28s{color} | {color:red} ql in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 20s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 10s{color} | {color:red} itests/hcatalog-unit: The patch generated 2 new + 20 unchanged - 0 fixed = 22 total (was 20) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 49s{color} | {color:red} ql: The patch generated 48 new + 1604 unchanged - 28 fixed = 1652 total (was 1632) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 27s{color} | {color:red} standalone-metastore: The patch generated 51 new + 1588 unchanged - 11 fixed = 1639 total (was 1599) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 103 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 2s{color} | {color:red} The patch 2 line(s) with tabs. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 48s{color} | {color:red} ql generated 1 new + 99 unchanged - 1 fixed = 100 total (was 100) {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 12s{color} | {color:red} The patch generated 49 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 22m 32s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-9344/dev-support/hive-personality.sh | | git revision | master / 53a590b | | Default Java | 1.8.0_111 | | mvninstall | http://104.198.109.242/logs//PreCommit-HIVE-Build-9344/yetus/patch-mvninstall-itests_hcatalog-unit.txt | | mvninstall | http://104.198.109.242/logs//PreCommit-HIVE-Build-9344/yetus/patch-mvninstall-ql.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9344/yetus/diff-checkstyle-itests_hcatalog-unit.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9344/yetus/diff-checkstyle-ql.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9344/yetus/diff-checkstyle-standalone-metastore.txt | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-9344/yetus/whitespace-eol.txt | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-9344/yetus/whitespace-tabs.txt | | javadoc | http://104.198.109.242/logs//PreCommit-HIVE-Build-9344/yetus/diff-javadoc-javadoc-ql.txt | | asflicense |
[jira] [Commented] (HIVE-18524) Vectorization: Execution failure related to non-standard embedding of IfExprConditionalFilter inside VectorUDFAdaptor (Revert HIVE-17139)
[ https://issues.apache.org/jira/browse/HIVE-18524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375309#comment-16375309 ] Ke Jia commented on HIVE-18524: --- [~mmccline]: HIVE-17139 mainly optimizes vector- and row-mode expressions. For a vector-mode expression (for example IfExprDoubleColumnDoubleColumn.java) evaluating If(expr1, expr2, expr3), we first compute expr1 and store its result in batch.cols[arg1Column]: if expr1 is true the value is 1, otherwise 0. We then compute expr2 for the rows where batch.cols[arg1Column] is 1, and expr3 for the rest. After the child expressions have been evaluated, the value of the If expression is chosen based on the result of expr1: if it is 1 the value comes from expr2, otherwise from expr3. I think this will not produce an NPE like HIVE-18524; if I have misunderstood, please tell me, thanks. For a row-mode expression (for example VectorUDFAdaptor.java): we evaluate the child expressions the same way as in vector mode. Afterwards, the current implementation in VectorUDFAdaptor gets the i-th row values batch.cols[arg1Column][i], batch.cols[arg2Column][i], and batch.cols[arg3Column][i], wraps them in GenericUDF.DeferredObject, and passes them to GenericUDFIf.java, which evaluates the final value of the If expression based on the passed GenericUDF.DeferredObject. The exception in HIVE-18524 occurs during this wrapping phase. For example, when the value of the If expression is a BytesColumnVector: for the i-th row, if expr1 is 1 we skip computing expr3 while evaluating the child expressions, so batch.cols[arg3Column][i] is null, and wrapping it throws an NPE. Our solution is to wrap only the value of the branch that was taken and skip the value that was not.
For example, if batch.cols[arg1Column][i] is 1, we wrap only batch.cols[arg2Column][i] and do not wrap batch.cols[arg3Column][i]. This optimization gains a 17% improvement in Q06 on TPCx-BB and a 40% improvement on complex string operations. I think this optimization is necessary. > Vectorization: Execution failure related to non-standard embedding of > IfExprConditionalFilter inside VectorUDFAdaptor (Revert HIVE-17139) > - > > Key: HIVE-18524 > URL: https://issues.apache.org/jira/browse/HIVE-18524 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Fix For: 3.0.0 > > Attachments: HIVE-18524.01.patch, HIVE-18524.02.patch > > > {noformat} > insert overwrite table insert_10_1 > select cast(gpa as float), >age, >IF(age>40,cast('2011-01-01 01:01:01' as timestamp),NULL), >IF(LENGTH(name)>10,cast(name as binary),NULL) > from studentnull10k > vectorizationSchemaColumns: [0:name:string, 1:age:int, 2:gpa:double] > ExprNodeDescs: > UDFToFloat(gpa) (type: float), > age (type: int), > if((age > 40), 2011-01-01 01:01:01.0, null) (type: timestamp), > if((length(name) > 10), CAST( name AS BINARY), null) (type: binary) > selectExpressions: > VectorUDFAdaptor(if((age > 40), 2011-01-01 01:01:01.0, null)) > (children: LongColGreaterLongScalar(col 1:int, val 40) -> 4:boolean) > -> 5:timestamp, > VectorUDFAdaptor(if((length(name) > 10), CAST( name AS BINARY), null)) > (children: LongColGreaterLongScalar(col 4:int, val 10)(children: > StringLength(col 0:string) -> 4:int) -> 6:boolean, > VectorUDFAdaptor(CAST( name AS BINARY)) -> 7:binary) -> 8:binary > {noformat} > *// Notice there is no vector expression shown for the last IF stmt.* It has > been magically embedded inside the VectorUDFAdaptor object... > Execution results in this call stack. 
> {noformat} > Caused by: java.lang.NullPointerException > at java.util.Arrays.copyOfRange(Arrays.java:3521) > at > org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpressionWriterFactory$9.writeValue(VectorExpressionWriterFactory.java:1101) > at > org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpressionWriterFactory$VectorExpressionWriterBytes.writeValue(VectorExpressionWriterFactory.java:343) > at > org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFArgDesc.getDeferredJavaObject(VectorUDFArgDesc.java:123) > at > org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFAdaptor.setResult(VectorUDFAdaptor.java:211) > at > org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFAdaptor.evaluate(VectorUDFAdaptor.java:177) > at >
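The skip-the-untaken-branch wrapping that the comment above describes can be sketched as follows. This is an illustrative model only, not Hive's actual VectorUDFAdaptor code; the class and method names are hypothetical.

```java
// Illustrative model of short-circuit IF evaluation over column vectors:
// a row's slot in the branch that was not taken is left null, so only the
// taken branch's value may be read/wrapped. Names are hypothetical, not
// Hive's actual API.
public class ConditionalWrapSketch {
    // Select per row from the branch chosen by cond (1 = then, 0 = else),
    // never touching the other branch's possibly-null slot.
    public static String[] selectBranch(long[] cond, String[] thenCol, String[] elseCol) {
        String[] out = new String[cond.length];
        for (int i = 0; i < cond.length; i++) {
            out[i] = (cond[i] == 1) ? thenCol[i] : elseCol[i];
        }
        return out;
    }

    public static void main(String[] args) {
        long[] cond = {1, 0, 1};                 // expr1 results per row
        String[] thenCol = {"a", null, "c"};     // expr2, computed only where cond == 1
        String[] elseCol = {null, "y", null};    // expr3, computed only where cond == 0
        System.out.println(String.join(",", selectBranch(cond, thenCol, elseCol))); // a,y,c
    }
}
```

Reading the untaken branch's slot instead (as the pre-fix adaptor did for all three arguments) is exactly what dereferences the null and produces the NPE in the stack trace above.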
[jira] [Commented] (HIVE-18659) add acid version marker to acid files/directories
[ https://issues.apache.org/jira/browse/HIVE-18659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375303#comment-16375303 ] Hive QA commented on HIVE-18659: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12911808/HIVE-18659.12.patch {color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 39 failed/errored test(s), 13020 tests executed *Failed tests:* {noformat} TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=93)
[jira] [Commented] (HIVE-18788) Clean up inputs in JDBC PreparedStatement
[ https://issues.apache.org/jira/browse/HIVE-18788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375302#comment-16375302 ] Daniel Dai commented on HIVE-18788: --- Fix checkstyle warnings. > Clean up inputs in JDBC PreparedStatement > - > > Key: HIVE-18788 > URL: https://issues.apache.org/jira/browse/HIVE-18788 > Project: Hive > Issue Type: Bug >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-18788.1.patch, HIVE-18788.2.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18788) Clean up inputs in JDBC PreparedStatement
[ https://issues.apache.org/jira/browse/HIVE-18788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-18788: -- Attachment: HIVE-18788.2.patch > Clean up inputs in JDBC PreparedStatement > - > > Key: HIVE-18788 > URL: https://issues.apache.org/jira/browse/HIVE-18788 > Project: Hive > Issue Type: Bug >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-18788.1.patch, HIVE-18788.2.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18794) Repl load "with" clause does not pass config to tasks for non-partition tables
[ https://issues.apache.org/jira/browse/HIVE-18794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-18794: -- Attachment: HIVE-18794.1.patch > Repl load "with" clause does not pass config to tasks for non-partition tables > -- > > Key: HIVE-18794 > URL: https://issues.apache.org/jira/browse/HIVE-18794 > Project: Hive > Issue Type: Bug >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-18794.1.patch > > > Miss one scenario in HIVE-18626. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18794) Repl load "with" clause does not pass config to tasks for non-partition tables
[ https://issues.apache.org/jira/browse/HIVE-18794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-18794: -- Status: Patch Available (was: Open) > Repl load "with" clause does not pass config to tasks for non-partition tables > -- > > Key: HIVE-18794 > URL: https://issues.apache.org/jira/browse/HIVE-18794 > Project: Hive > Issue Type: Bug >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-18794.1.patch > > > Miss one scenario in HIVE-18626. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18794) Repl load "with" clause does not pass config to tasks for non-partition tables
[ https://issues.apache.org/jira/browse/HIVE-18794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai reassigned HIVE-18794: - > Repl load "with" clause does not pass config to tasks for non-partition tables > -- > > Key: HIVE-18794 > URL: https://issues.apache.org/jira/browse/HIVE-18794 > Project: Hive > Issue Type: Bug >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-18794.1.patch > > > Miss one scenario in HIVE-18626. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18659) add acid version marker to acid files/directories
[ https://issues.apache.org/jira/browse/HIVE-18659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375293#comment-16375293 ] Hive QA commented on HIVE-18659: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 32s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 17s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 
0m 45s{color} | {color:red} ql: The patch generated 2 new + 1010 unchanged - 22 fixed = 1012 total (was 1032) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 13s{color} | {color:red} The patch generated 49 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 15m 32s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-9343/dev-support/hive-personality.sh | | git revision | master / 53a590b | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9343/yetus/diff-checkstyle-ql.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9343/yetus/patch-asflicense-problems.txt | | modules | C: hcatalog/streaming ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9343/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> add acid version marker to acid files/directories > - > > Key: HIVE-18659 > URL: https://issues.apache.org/jira/browse/HIVE-18659 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Major > Attachments: HIVE-18659.01.patch, HIVE-18659.04.patch, > HIVE-18659.05.patch, HIVE-18659.06.patch, HIVE-18659.07.patch, > HIVE-18659.09.patch, HIVE-18659.09.patch, HIVE-18659.10.patch, > HIVE-18659.11.patch, HIVE-18659.12.patch > > > add acid version marker to acid files so that we know which version of acid > wrote the file -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18158) Remove OrcRawRecordMerger.ReaderPairAcid.statementId
[ https://issues.apache.org/jira/browse/HIVE-18158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375283#comment-16375283 ] Hive QA commented on HIVE-18158: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12911788/HIVE-18158.02.patch {color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 33 failed/errored test(s), 13051 tests executed *Failed tests:* {noformat} TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=93)
[jira] [Commented] (HIVE-18158) Remove OrcRawRecordMerger.ReaderPairAcid.statementId
[ https://issues.apache.org/jira/browse/HIVE-18158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375263#comment-16375263 ] Hive QA commented on HIVE-18158: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 0s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} ql: The patch generated 0 new + 248 unchanged - 10 fixed = 248 total (was 258) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 13m 29s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-9342/dev-support/hive-personality.sh | | git revision | master / 53a590b | | Default Java | 1.8.0_111 | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9342/yetus/patch-asflicense-problems.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9342/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Remove OrcRawRecordMerger.ReaderPairAcid.statementId > > > Key: HIVE-18158 > URL: https://issues.apache.org/jira/browse/HIVE-18158 > Project: Hive > Issue Type: Improvement > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Minor > Attachments: HIVE-18158.01.patch, HIVE-18158.02.patch > > > * Need to get rid of this since we can always get this from the row > itself in Acid 2.0. > * For Acid 1.0, statementId == 0 in all deltas because both > multi-statement txns and > * Split Upate are only available in test mode so there is nothing can > create a > * deltas_x_x_M with M > 0. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18571) stats issues for MM tables
[ https://issues.apache.org/jira/browse/HIVE-18571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375243#comment-16375243 ] Sergey Shelukhin commented on HIVE-18571: - Rebased the patch and updated to fix some tests. Added AcidUtils method to collect files for stats based on ACID state, that can be used for analyze queries. However, I'm not sure it can be used at any other time (e.g. during insert/create) because current ACID state does not reflect the transaction that is in progress. I think we can tackle that in a followup jira, I left a bunch of TODOs. > stats issues for MM tables > -- > > Key: HIVE-18571 > URL: https://issues.apache.org/jira/browse/HIVE-18571 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-18571.01.patch, HIVE-18571.02.patch, > HIVE-18571.patch > > > There are multiple stats aggregation issues with MM tables. > Some simple stats are double counted and some stats (simple stats) are > invalid for ACID table dirs altogether. > I have a patch almost ready, need to fix some more stuff and clean up. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18571) stats issues for MM tables
[ https://issues.apache.org/jira/browse/HIVE-18571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18571: Attachment: HIVE-18571.02.patch > stats issues for MM tables > -- > > Key: HIVE-18571 > URL: https://issues.apache.org/jira/browse/HIVE-18571 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-18571.01.patch, HIVE-18571.02.patch, > HIVE-18571.patch > > > There are multiple stats aggregation issues with MM tables. > Some simple stats are double counted and some stats (simple stats) are > invalid for ACID table dirs altogether. > I have a patch almost ready, need to fix some more stuff and clean up. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18093) Improve logging when HoS application is killed
[ https://issues.apache.org/jira/browse/HIVE-18093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375181#comment-16375181 ] Hive QA commented on HIVE-18093: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12911780/HIVE-18093.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 42 failed/errored test(s), 13019 tests executed *Failed tests:* {noformat} TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=93)
[jira] [Updated] (HIVE-18776) MaterializationsInvalidationCache loading causes race condition in the metastore
[ https://issues.apache.org/jira/browse/HIVE-18776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-18776: --- Attachment: HIVE-18776.01.patch > MaterializationsInvalidationCache loading causes race condition in the > metastore > > > Key: HIVE-18776 > URL: https://issues.apache.org/jira/browse/HIVE-18776 > Project: Hive > Issue Type: Bug > Components: Materialized views, Metastore >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-18776.01.patch, HIVE-18776.patch > > > I am seeing occasional failures running metastore tests where operations are > failing saying that there is no open transaction. I have traced this to a > race condition in loading the materialized view invalidation cache. When it > is initialized (either in HiveMetaStoreClient in embedded mode or in > HiveMetaStore in remote mode) it grabs a copy of the current RawStore > instance and then loads the cache in a separate thread. But ObjectStore > keeps state regarding JDO transactions with the underlying RDBMS. So with > the loader thread and the initial thread both doing operations against the > RawStore they sometimes mess up each others transaction stack. In a quick > test I used HMSHandler.newRawStoreForConf() to fix this, which seemed to work. > A reference to the TxnHandler is also called. I suspect this will run into a > similar issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18792) Allow standard compliant syntax for insert on partitioned tables
[ https://issues.apache.org/jira/browse/HIVE-18792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375151#comment-16375151 ] Ashutosh Chauhan commented on HIVE-18792: - In essence following 4 items need to reconciled: 1. Partition clause is required which should be made optional. 2. Schema should be allowed to be specified on partitioned tables. 3. This should work in same manner with select clause (in addition to values clause) 4. Update/delete statement should allow this syntax. > Allow standard compliant syntax for insert on partitioned tables > > > Key: HIVE-18792 > URL: https://issues.apache.org/jira/browse/HIVE-18792 > Project: Hive > Issue Type: Improvement > Components: SQL >Reporter: Ashutosh Chauhan >Priority: Major > > Following works: > {code} > create table t1 (a int, b int, c int); > create table t2 (a int, b int, c int) partitioned by (d int); > insert into t1 values (1,2,3); > insert into t1 (c, b, a) values (1,2,3); > insert into t1 (a,b) values (1,2); > {code} > For partitioned tables it should work similarly but doesn't. All of > following fails: > {code} > insert into t2 values (1,2,3,4); > insert into t2 (a, b, c, d) values (1,2,3,4); > insert into t2 (c,d) values (1,2); > insert into t2 (a,b) values (1,2); > {code} > All of above should work. Also note following works: > {code} > insert into t2 partition(d) values (1,2,3,4); > insert into t2 partition(d=4) values (1,2,3); > {code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18093) Improve logging when HoS application is killed
[ https://issues.apache.org/jira/browse/HIVE-18093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375138#comment-16375138 ] Hive QA commented on HIVE-18093: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 54s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 9s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 8m 36s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-9340/dev-support/hive-personality.sh | | git revision | master / 53a590b | | Default Java | 1.8.0_111 | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9340/yetus/patch-asflicense-problems.txt | | modules | C: spark-client U: spark-client | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9340/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Improve logging when HoS application is killed > -- > > Key: HIVE-18093 > URL: https://issues.apache.org/jira/browse/HIVE-18093 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-18093.1.patch > > > When a HoS jobs is explicitly killed via a user (via a yarn command), the > logs just say "RPC channel closed" -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18775) HIVE-17983 missed deleting metastore/scripts/upgrade/derby/hive-schema-3.0.0.derby.sql
[ https://issues.apache.org/jira/browse/HIVE-18775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-18775: --- Attachment: HIVE-18775.2.patch > HIVE-17983 missed deleting > metastore/scripts/upgrade/derby/hive-schema-3.0.0.derby.sql > -- > > Key: HIVE-18775 > URL: https://issues.apache.org/jira/browse/HIVE-18775 > Project: Hive > Issue Type: Bug >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-18775.1.patch, HIVE-18775.2.patch > > > HIVE-17983 moved hive metastore schema sql files for all databases but derby > to standalone-metastore. As a result there are not two copies of > {{hive-schema-3.0.0.derby.sql}}. > {{metastore/scripts/upgrade/derby/hive-schema-3.0.0.derby.sql}} needs to be > removed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18775) HIVE-17983 missed deleting metastore/scripts/upgrade/derby/hive-schema-3.0.0.derby.sql
[ https://issues.apache.org/jira/browse/HIVE-18775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-18775: --- Status: Open (was: Patch Available) > HIVE-17983 missed deleting > metastore/scripts/upgrade/derby/hive-schema-3.0.0.derby.sql > -- > > Key: HIVE-18775 > URL: https://issues.apache.org/jira/browse/HIVE-18775 > Project: Hive > Issue Type: Bug >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-18775.1.patch, HIVE-18775.2.patch > > > HIVE-17983 moved hive metastore schema sql files for all databases but derby > to standalone-metastore. As a result there are not two copies of > {{hive-schema-3.0.0.derby.sql}}. > {{metastore/scripts/upgrade/derby/hive-schema-3.0.0.derby.sql}} needs to be > removed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18775) HIVE-17983 missed deleting metastore/scripts/upgrade/derby/hive-schema-3.0.0.derby.sql
[ https://issues.apache.org/jira/browse/HIVE-18775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-18775: --- Status: Patch Available (was: Open) > HIVE-17983 missed deleting > metastore/scripts/upgrade/derby/hive-schema-3.0.0.derby.sql > -- > > Key: HIVE-18775 > URL: https://issues.apache.org/jira/browse/HIVE-18775 > Project: Hive > Issue Type: Bug >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-18775.1.patch, HIVE-18775.2.patch > > > HIVE-17983 moved hive metastore schema sql files for all databases but derby > to standalone-metastore. As a result there are not two copies of > {{hive-schema-3.0.0.derby.sql}}. > {{metastore/scripts/upgrade/derby/hive-schema-3.0.0.derby.sql}} needs to be > removed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18625) SessionState Not Checking For Directory Creation Result
[ https://issues.apache.org/jira/browse/HIVE-18625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375103#comment-16375103 ] Andrew Sherman commented on HIVE-18625: --- I created HIVE-18791 to track the regression I have created. > SessionState Not Checking For Directory Creation Result > --- > > Key: HIVE-18625 > URL: https://issues.apache.org/jira/browse/HIVE-18625 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 3.0.0, 2.4.0, 2.3.2 >Reporter: BELUGA BEHR >Assignee: Andrew Sherman >Priority: Minor > Fix For: 3.0.0 > > Attachments: HIVE-18625.1.patch, HIVE-18625.2.patch > > > https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java#L773 > {code:java} > private static void createPath(HiveConf conf, Path path, String permission, > boolean isLocal, > boolean isCleanUp) throws IOException { > FsPermission fsPermission = new FsPermission(permission); > FileSystem fs; > if (isLocal) { > fs = FileSystem.getLocal(conf); > } else { > fs = path.getFileSystem(conf); > } > if (!fs.exists(path)) { > fs.mkdirs(path, fsPermission); > String dirType = isLocal ? "local" : "HDFS"; > LOG.info("Created " + dirType + " directory: " + path.toString()); > } > if (isCleanUp) { > fs.deleteOnExit(path); > } > } > {code} > The method {{fs.mkdirs(path, fsPermission)}} returns a boolean value > indicating if the directory creation was successful or not. Hive ignores > this return value and therefore could be acting on a directory that doesn't > exist. > Please capture the result, check it, and throw an Exception if it failed -- This message was sent by Atlassian JIRA (v7.6.3#76005)
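The fix requested above, checking the boolean returned by mkdirs and throwing on failure, might look like this sketch. It uses java.io.File, whose mkdirs() has the same boolean-return contract as Hadoop's FileSystem.mkdirs(path, permission), so it is self-contained; it is not the actual HIVE-18625 patch.

```java
import java.io.File;
import java.io.IOException;

public class CreatePathSketch {
    // Create the directory if missing, and fail loudly instead of silently
    // ignoring the boolean result as the original createPath() did.
    public static void createPath(File path) throws IOException {
        if (!path.exists() && !path.mkdirs()) {
            throw new IOException("Failed to create directory: " + path);
        }
    }

    public static void main(String[] args) throws IOException {
        File dir = new File(System.getProperty("java.io.tmpdir"), "hive-18625-sketch");
        createPath(dir);
        System.out.println(dir.exists()); // true
    }
}
```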
[jira] [Assigned] (HIVE-18791) Fix TestJdbcWithMiniHS2#testHttpHeaderSize
[ https://issues.apache.org/jira/browse/HIVE-18791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Sherman reassigned HIVE-18791: - > Fix TestJdbcWithMiniHS2#testHttpHeaderSize > -- > > Key: HIVE-18791 > URL: https://issues.apache.org/jira/browse/HIVE-18791 > Project: Hive > Issue Type: Bug >Reporter: Andrew Sherman >Assignee: Andrew Sherman >Priority: Major > > TestJdbcWithMiniHS2#testHttpHeaderSize tests whether config of http header > sizes works by using a long username. The local scratch directory for the > session uses the username as part of its path. When this name is more than > 255 chars (on most modern file systems) then the directory creation will > fail. HIVE-18625 made this failure throw an exception, which has caused a > regression in testHttpHeaderSize. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
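One way to keep a long user name from overflowing the per-component limit when it is embedded in the scratch-directory path is to hash it past a threshold. The sketch below is a hypothetical illustration of that pattern, not the actual HIVE-18791 fix; the class and method names are made up.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hypothetical sketch: derive a filesystem-safe directory name from the
// session user name, hashing it when it would exceed the common
// 255-character per-component limit that makes mkdirs fail.
public class ScratchDirNameSketch {
    static final int MAX_COMPONENT = 255;

    public static String safeDirName(String userName) {
        if (userName.length() <= MAX_COMPONENT) {
            return userName;
        }
        try {
            // Keep a readable prefix and append a hash of the full name
            // so distinct long user names still map to distinct dirs.
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(userName.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            String suffix = "-" + hex;
            return userName.substring(0, MAX_COMPONENT - suffix.length()) + suffix;
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError("SHA-256 is always available", e);
        }
    }

    public static void main(String[] args) {
        String longUser = new String(new char[400]).replace('\0', 'u');
        System.out.println(safeDirName(longUser).length() <= MAX_COMPONENT); // true
    }
}
```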
[jira] [Commented] (HIVE-18789) Disallow embedded element in UDFXPathUtil
[ https://issues.apache.org/jira/browse/HIVE-18789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375063#comment-16375063 ] Thejas M Nair commented on HIVE-18789: -- +1 pending tests > Disallow embedded element in UDFXPathUtil > - > > Key: HIVE-18789 > URL: https://issues.apache.org/jira/browse/HIVE-18789 > Project: Hive > Issue Type: Bug >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-18789.1.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18788) Clean up inputs in JDBC PreparedStatement
[ https://issues.apache.org/jira/browse/HIVE-18788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375061#comment-16375061 ] Thejas M Nair commented on HIVE-18788: -- Changes look good to me. [~daijy] can you also please check the checkstyle warnings ? > Clean up inputs in JDBC PreparedStatement > - > > Key: HIVE-18788 > URL: https://issues.apache.org/jira/browse/HIVE-18788 > Project: Hive > Issue Type: Bug >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-18788.1.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18788) Clean up inputs in JDBC PreparedStatement
[ https://issues.apache.org/jira/browse/HIVE-18788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375053#comment-16375053 ] Hive QA commented on HIVE-18788: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 50s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 35s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | 
{color:red} 0m 10s{color} | {color:red} jdbc: The patch generated 2 new + 17 unchanged - 0 fixed = 19 total (was 17) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 12m 53s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-9339/dev-support/hive-personality.sh | | git revision | master / 53a590b | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9339/yetus/diff-checkstyle-jdbc.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9339/yetus/patch-asflicense-problems.txt | | modules | C: itests/hive-unit jdbc U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9339/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Clean up inputs in JDBC PreparedStatement > - > > Key: HIVE-18788 > URL: https://issues.apache.org/jira/browse/HIVE-18788 > Project: Hive > Issue Type: Bug >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-18788.1.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18726) Implement DEFAULT constraint
[ https://issues.apache.org/jira/browse/HIVE-18726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375048#comment-16375048 ] Vineet Garg commented on HIVE-18726: Thanks [~daijy]. I'll open a separate ticket. > Implement DEFAULT constraint > > > Key: HIVE-18726 > URL: https://issues.apache.org/jira/browse/HIVE-18726 > Project: Hive > Issue Type: New Feature > Components: Query Planning, Query Processor >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18726.1.patch, HIVE-18726.2.patch, > HIVE-18726.3.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18777) Add Authorization interface to support information_schema integration with external authorization
[ https://issues.apache.org/jira/browse/HIVE-18777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375037#comment-16375037 ] ASF GitHub Bot commented on HIVE-18777: --- GitHub user thejasmn opened a pull request: https://github.com/apache/hive/pull/312 HIVE-18777 You can merge this pull request into a Git repository by running: $ git pull https://github.com/thejasmn/hive HIVE-18777-authprovider Alternatively you can review and apply these changes as the patch at: https://github.com/apache/hive/pull/312.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #312 commit e4627ce304ea44ddeffa6f822247fc5e105d9aba Author: Thejas M NairDate: 2018-02-23T22:08:08Z add policy provider interfaces commit 4e0157d3aecf3f1d94eb790cb1a0f91dfeb3e25a Author: Thejas M Nair Date: 2018-02-23T22:23:51Z Add ASL header > Add Authorization interface to support information_schema integration with > external authorization > - > > Key: HIVE-18777 > URL: https://issues.apache.org/jira/browse/HIVE-18777 > Project: Hive > Issue Type: Bug >Reporter: Thejas M Nair >Assignee: Thejas M Nair >Priority: Major > Labels: pull-request-available > Attachments: HIVE-18777.1.patch > > > HIVE-1010 added support for information_schema. However, the authorization > information is not integrated when another project such as Ranger is used to > do the authorization. > We need to add API which Ranger/Sentry can implement, so that it is possible > to retrieve authorization policy information from them. > The existing API only supports checking if user has a permission on an object > and can't be used to retrieve policy details. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18777) Add Authorization interface to support information_schema integration with external authorization
[ https://issues.apache.org/jira/browse/HIVE-18777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-18777: -- Labels: pull-request-available (was: ) > Add Authorization interface to support information_schema integration with > external authorization > - > > Key: HIVE-18777 > URL: https://issues.apache.org/jira/browse/HIVE-18777 > Project: Hive > Issue Type: Bug >Reporter: Thejas M Nair >Assignee: Thejas M Nair >Priority: Major > Labels: pull-request-available > Attachments: HIVE-18777.1.patch > > > HIVE-1010 added support for information_schema. However, the authorization > information is not integrated when another project such as Ranger is used to > do the authorization. > We need to add API which Ranger/Sentry can implement, so that it is possible > to retrieve authorization policy information from them. > The existing API only supports checking if user has a permission on an object > and can't be used to retrieve policy details. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18777) Add Authorization interface to support information_schema integration with external authorization
[ https://issues.apache.org/jira/browse/HIVE-18777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-18777: - Status: Patch Available (was: Open) > Add Authorization interface to support information_schema integration with > external authorization > - > > Key: HIVE-18777 > URL: https://issues.apache.org/jira/browse/HIVE-18777 > Project: Hive > Issue Type: Bug >Reporter: Thejas M Nair >Assignee: Thejas M Nair >Priority: Major > Attachments: HIVE-18777.1.patch > > > HIVE-1010 added support for information_schema. However, the authorization > information is not integrated when another project such as Ranger is used to > do the authorization. > We need to add API which Ranger/Sentry can implement, so that it is possible > to retrieve authorization policy information from them. > The existing API only supports checking if user has a permission on an object > and can't be used to retrieve policy details. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18777) Add Authorization interface to support information_schema integration with external authorization
[ https://issues.apache.org/jira/browse/HIVE-18777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-18777: - Attachment: HIVE-18777.1.patch > Add Authorization interface to support information_schema integration with > external authorization > - > > Key: HIVE-18777 > URL: https://issues.apache.org/jira/browse/HIVE-18777 > Project: Hive > Issue Type: Bug >Reporter: Thejas M Nair >Assignee: Thejas M Nair >Priority: Major > Attachments: HIVE-18777.1.patch > > > HIVE-1010 added support for information_schema. However, the authorization > information is not integrated when another project such as Ranger is used to > do the authorization. > We need to add API which Ranger/Sentry can implement, so that it is possible > to retrieve authorization policy information from them. > The existing API only supports checking if user has a permission on an object > and can't be used to retrieve policy details. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18776) MaterializationsInvalidationCache loading causes race condition in the metastore
[ https://issues.apache.org/jira/browse/HIVE-18776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375034#comment-16375034 ] Hive QA commented on HIVE-18776: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12911774/HIVE-18776.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 88 failed/errored test(s), 13414 tests executed *Failed tests:* {noformat} TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=94)
[jira] [Commented] (HIVE-18726) Implement DEFAULT constraint
[ https://issues.apache.org/jira/browse/HIVE-18726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375024#comment-16375024 ] Daniel Dai commented on HIVE-18726: --- Thanks [~vgarg] for bringing this out. Default constraint should be handled similar to notnull/unique. We need to handle both bootstrap (similar to HIVE-17366) and increment (similar HIVE-15705) cases, and also add proper test to TestReplicationScenarios. I think it is fine to add it on a separate ticket. > Implement DEFAULT constraint > > > Key: HIVE-18726 > URL: https://issues.apache.org/jira/browse/HIVE-18726 > Project: Hive > Issue Type: New Feature > Components: Query Planning, Query Processor >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18726.1.patch, HIVE-18726.2.patch, > HIVE-18726.3.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18726) Implement DEFAULT constraint
[ https://issues.apache.org/jira/browse/HIVE-18726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375009#comment-16375009 ] Vineet Garg commented on HIVE-18726: This patch is still missing event handling for Replication. [~daijy] [~sankarh] [~anishek] Can you help me add event handlers for default constraint? Should I take a look at existing code for NotNull constraint and follow it or is there anything special I need to do for this? > Implement DEFAULT constraint > > > Key: HIVE-18726 > URL: https://issues.apache.org/jira/browse/HIVE-18726 > Project: Hive > Issue Type: New Feature > Components: Query Planning, Query Processor >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18726.1.patch, HIVE-18726.2.patch, > HIVE-18726.3.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (HIVE-18726) Implement DEFAULT constraint
[ https://issues.apache.org/jira/browse/HIVE-18726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375009#comment-16375009 ] Vineet Garg edited comment on HIVE-18726 at 2/23/18 10:00 PM: -- This patch is still missing event handling for Replication. [~daijy] [~sankarh] [~anishek] Can you help me add event handlers for default constraint? Should I take a look at existing code for NotNull constraint and follow it or is there anything special I need to do for this? Also can it be done in separate patch? was (Author: vgarg): This patch is still missing event handling for Replication. [~daijy] [~sankarh] [~anishek] Can you help me add event handlers for default constraint? Should I take a look at existing code for NotNull constraint and follow it or is there anything special I need to do for this? > Implement DEFAULT constraint > > > Key: HIVE-18726 > URL: https://issues.apache.org/jira/browse/HIVE-18726 > Project: Hive > Issue Type: New Feature > Components: Query Planning, Query Processor >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18726.1.patch, HIVE-18726.2.patch, > HIVE-18726.3.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18789) Disallow embedded element in UDFXPathUtil
[ https://issues.apache.org/jira/browse/HIVE-18789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-18789: -- Attachment: HIVE-18789.1.patch > Disallow embedded element in UDFXPathUtil > - > > Key: HIVE-18789 > URL: https://issues.apache.org/jira/browse/HIVE-18789 > Project: Hive > Issue Type: Bug >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-18789.1.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18789) Disallow embedded element in UDFXPathUtil
[ https://issues.apache.org/jira/browse/HIVE-18789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-18789: -- Status: Patch Available (was: Open) > Disallow embedded element in UDFXPathUtil > - > > Key: HIVE-18789 > URL: https://issues.apache.org/jira/browse/HIVE-18789 > Project: Hive > Issue Type: Bug >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-18789.1.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18726) Implement DEFAULT constraint
[ https://issues.apache.org/jira/browse/HIVE-18726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-18726: --- Status: Patch Available (was: Open) Patch(3) adds the following: * Metastore changes for all databases * Type checking to make sure default value's type is valid * Disallow constraints on external table and on partition columns (not null and default) * Restrict constraint name, default value to be less 255 char * Throw an error if trying to create constraint with existing name. * More tests (including negative) > Implement DEFAULT constraint > > > Key: HIVE-18726 > URL: https://issues.apache.org/jira/browse/HIVE-18726 > Project: Hive > Issue Type: New Feature > Components: Query Planning, Query Processor >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18726.1.patch, HIVE-18726.2.patch, > HIVE-18726.3.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (HIVE-18726) Implement DEFAULT constraint
[ https://issues.apache.org/jira/browse/HIVE-18726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16375000#comment-16375000 ] Vineet Garg edited comment on HIVE-18726 at 2/23/18 9:50 PM: - Patch(3) adds the following: * Metastore changes for all databases * Type checking to make sure default values type is valid * Disallow constraints on external table and on partition columns (not null and default) * Restrict constraint name, default value to be less 255 char * Throw an error if trying to create constraint with existing name. * More tests (including negative) was (Author: vgarg): Patch(3) adds the following: * Metastore changes for all databases * Type checking to make sure default value's type is valid * Disallow constraints on external table and on partition columns (not null and default) * Restrict constraint name, default value to be less 255 char * Throw an error if trying to create constraint with existing name. * More tests (including negative) > Implement DEFAULT constraint > > > Key: HIVE-18726 > URL: https://issues.apache.org/jira/browse/HIVE-18726 > Project: Hive > Issue Type: New Feature > Components: Query Planning, Query Processor >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18726.1.patch, HIVE-18726.2.patch, > HIVE-18726.3.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18659) add acid version marker to acid files/directories
[ https://issues.apache.org/jira/browse/HIVE-18659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18659: -- Attachment: HIVE-18659.12.patch > add acid version marker to acid files/directories > - > > Key: HIVE-18659 > URL: https://issues.apache.org/jira/browse/HIVE-18659 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Major > Attachments: HIVE-18659.01.patch, HIVE-18659.04.patch, > HIVE-18659.05.patch, HIVE-18659.06.patch, HIVE-18659.07.patch, > HIVE-18659.09.patch, HIVE-18659.09.patch, HIVE-18659.10.patch, > HIVE-18659.11.patch, HIVE-18659.12.patch > > > add acid version marker to acid files so that we know which version of acid > wrote the file -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18726) Implement DEFAULT constraint
[ https://issues.apache.org/jira/browse/HIVE-18726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-18726: --- Status: Open (was: Patch Available) > Implement DEFAULT constraint > > > Key: HIVE-18726 > URL: https://issues.apache.org/jira/browse/HIVE-18726 > Project: Hive > Issue Type: New Feature > Components: Query Planning, Query Processor >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18726.1.patch, HIVE-18726.2.patch, > HIVE-18726.3.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18726) Implement DEFAULT constraint
[ https://issues.apache.org/jira/browse/HIVE-18726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-18726: --- Attachment: HIVE-18726.3.patch > Implement DEFAULT constraint > > > Key: HIVE-18726 > URL: https://issues.apache.org/jira/browse/HIVE-18726 > Project: Hive > Issue Type: New Feature > Components: Query Planning, Query Processor >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18726.1.patch, HIVE-18726.2.patch, > HIVE-18726.3.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18789) Disallow embedded element in UDFXPathUtil
[ https://issues.apache.org/jira/browse/HIVE-18789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai reassigned HIVE-18789: - > Disallow embedded element in UDFXPathUtil > - > > Key: HIVE-18789 > URL: https://issues.apache.org/jira/browse/HIVE-18789 > Project: Hive > Issue Type: Bug >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-15077) Acid LockManager is unfair
[ https://issues.apache.org/jira/browse/HIVE-15077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-15077: -- Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) committed to master thanks Alan for the review > Acid LockManager is unfair > -- > > Key: HIVE-15077 > URL: https://issues.apache.org/jira/browse/HIVE-15077 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.3.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Blocker > Fix For: 3.0.0 > > Attachments: HIVE-15077.02.patch > > > HIVE-10242 made the acid LM unfair. > In TxnHandler.checkLock(), suppose we are trying to acquire SR5 (the number > is extLockId). > Then > LockInfo[] locks = lockSet.toArray(new LockInfo[lockSet.size()]); > may look like this (all explicitly listed locks are in Waiting state) > {, SR5 SW3 X4} > So the algorithm will find SR5 in the list and start looking backwards (to > the left). > According to IDs, SR5 should wait for X4 to be granted but X4 won't even be > examined and so SR5 may be granted. > Theoretically, this could cause starvation. > The query that generates the list already has > query.append(" and hl_lock_ext_id <= ").append(extLockId); > but it should use "<" rather than "<=" to exclude the locks being checked > from "locks" list which will make the algorithm look at all locks "in front" > of a given lock. 
> Here is an example (add to TestDbTxnManager2) > {noformat} > @Test > public void testFairness2() throws Exception { > dropTable(new String[]{"T7"}); > CommandProcessorResponse cpr = driver.run("create table if not exists T7 > (a int) partitioned by (p int) stored as orc TBLPROPERTIES > ('transactional'='true')"); > checkCmdOnDriver(cpr); > checkCmdOnDriver(driver.run("insert into T7 partition(p) > values(1,1),(1,2)"));//create 2 partitions > cpr = driver.compileAndRespond("select a from T7 "); > checkCmdOnDriver(cpr); > txnMgr.acquireLocks(driver.getPlan(), ctx, "Fifer");//gets S lock on T7 > HiveTxnManager txnMgr2 = > TxnManagerFactory.getTxnManagerFactory().getTxnManager(conf); > swapTxnManager(txnMgr2); > cpr = driver.compileAndRespond("alter table T7 drop partition (p=1)"); > checkCmdOnDriver(cpr); > //tries to get X lock on T7.p=1 and gets Waiting state > LockState lockState = ((DbTxnManager) > txnMgr2).acquireLocks(driver.getPlan(), ctx, "Fiddler", false); > List locks = getLocks(); > Assert.assertEquals("Unexpected lock count", 4, locks.size()); > checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "T7", > null, locks); > checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "T7", > "p=1", locks); > checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "T7", > "p=2", locks); > checkLock(LockType.EXCLUSIVE, LockState.WAITING, "default", "T7", "p=1", > locks); > HiveTxnManager txnMgr3 = > TxnManagerFactory.getTxnManagerFactory().getTxnManager(conf); > swapTxnManager(txnMgr3); > //this should block behind the X lock on T7.p=1 > cpr = driver.compileAndRespond("select a from T7"); > checkCmdOnDriver(cpr); > txnMgr3.acquireLocks(driver.getPlan(), ctx, "Fifer");//gets S lock on T6 > locks = getLocks(); > Assert.assertEquals("Unexpected lock count", 7, locks.size()); > checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "T7", > null, locks); > checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "T7", > "p=1", 
locks); > checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "T7", > "p=2", locks); > checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "T7", > null, locks); > checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "T7", > "p=1", locks); > checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "T7", > "p=2", locks); > checkLock(LockType.EXCLUSIVE, LockState.WAITING, "default", "T7", "p=1", > locks); > } > {noformat} > The 2nd {{locks = getLocks();}} output shows that all locks for the 2nd > {{select * from T7}} are all acquired while they should block behind the X > lock to be fair. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
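The fairness rule HIVE-15077 describes — a lock request must wait behind every conflicting request with a strictly smaller id, which is why the SQL predicate needs `<` rather than `<=` — can be modeled in a few lines. This is a simplified illustration of the ordering rule only, not Hive's actual `TxnHandler.checkLock()` logic:

```java
import java.util.List;

public class FairLockCheck {
    enum Mode { SHARED, EXCLUSIVE }

    // id plays the role of hl_lock_ext_id: smaller means requested earlier.
    record LockReq(long id, Mode mode) {}

    // A request may be granted only if no conflicting request that arrived
    // strictly earlier (other.id < req.id) is still outstanding. Using ">="
    // here to skip later-or-equal ids mirrors using "<" in the SQL query:
    // the request itself is excluded, but every lock "in front" is examined.
    public static boolean canGrant(LockReq req, List<LockReq> outstanding) {
        for (LockReq other : outstanding) {
            if (other.id() >= req.id()) {
                continue; // only strictly earlier requests can block us
            }
            boolean conflict = other.mode() == Mode.EXCLUSIVE
                    || req.mode() == Mode.EXCLUSIVE;
            if (conflict) {
                return false; // wait behind the earlier conflicting lock
            }
        }
        return true;
    }
}
```

In this model the scenario from the test above behaves fairly: SR5 is refused while X4 is outstanding, so a later reader cannot jump the queue.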
[jira] [Commented] (HIVE-18776) MaterializationsInvalidationCache loading causes race condition in the metastore
[ https://issues.apache.org/jira/browse/HIVE-18776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374978#comment-16374978 ] Hive QA commented on HIVE-18776: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 28s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 19s{color} | {color:red} standalone-metastore: The patch generated 2 new + 546 unchanged - 1 fixed = 548 total (was 547) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 13s{color} | {color:red} The patch generated 49 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 12m 14s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-9338/dev-support/hive-personality.sh | | git revision | master / ed487ac | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9338/yetus/diff-checkstyle-standalone-metastore.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9338/yetus/patch-asflicense-problems.txt | | modules | C: standalone-metastore U: standalone-metastore | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9338/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > MaterializationsInvalidationCache loading causes race condition in the > metastore > > > Key: HIVE-18776 > URL: https://issues.apache.org/jira/browse/HIVE-18776 > Project: Hive > Issue Type: Bug > Components: Materialized views, Metastore >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-18776.patch > > > I am seeing occasional failures running metastore tests where operations are > failing saying that there is no open transaction. I have traced this to a > race condition in loading the materialized view invalidation cache. 
When it > is initialized (either in HiveMetaStoreClient in embedded mode or in > HiveMetaStore in remote mode) it grabs a copy of the current RawStore > instance and then loads the cache in a separate thread. But ObjectStore > keeps state regarding JDO transactions with the underlying RDBMS. So with > the loader thread and the initial thread both doing operations against the > RawStore they sometimes mess up each others transaction stack. In a quick > test I used HMSHandler.newRawStoreForConf() to fix this, which seemed to work. > A reference to the TxnHandler is also called. I suspect this will run into a > similar issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
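The thread-confinement fix described above — give the loader thread its own store instance rather than sharing the stateful one, as `HMSHandler.newRawStoreForConf()` does — can be sketched with `ThreadLocal`. The `StatefulStore` class below is a minimal stand-in for the JDO transaction stack that `ObjectStore` keeps, not Hive's real `RawStore` API:

```java
public class PerThreadStore {
    // Stand-in for a store whose per-instance transaction depth gets
    // corrupted when two threads interleave open/commit calls on it.
    static class StatefulStore {
        private int txnDepth = 0;

        void openTransaction() {
            txnDepth++;
        }

        boolean commitTransaction() {
            if (txnDepth == 0) {
                throw new IllegalStateException("no open transaction");
            }
            return --txnDepth == 0;
        }
    }

    // Each thread lazily constructs its own instance, so transaction
    // state is confined to the thread that created it.
    static final ThreadLocal<StatefulStore> STORE =
            ThreadLocal.withInitial(StatefulStore::new);

    public static boolean doWork() {
        StatefulStore s = STORE.get();
        s.openTransaction();
        return s.commitTransaction(); // true: this thread's stack balanced
    }
}
```

With a shared instance, the loader thread and the request thread could each observe the other's `openTransaction()` and commit a transaction they never opened; per-thread instances make that interleaving impossible.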
[jira] [Comment Edited] (HIVE-18524) Vectorization: Execution failure related to non-standard embedding of IfExprConditionalFilter inside VectorUDFAdaptor (Revert HIVE-17139)
[ https://issues.apache.org/jira/browse/HIVE-18524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374965#comment-16374965 ] Matt McCline edited comment on HIVE-18524 at 2/23/18 9:12 PM: -- Currently, we have: vector.expressions: {noformat} IfExprColumnNull IfExprNullColumn IfExprNullNull {noformat} vector.expressions.gen classes for Long/Double: {noformat} IfExprLongColumnDoubleScalar IfExprLongColumnLongColumn IfExprLongColumnLongScalar IfExprLongScalarDoubleColumn IfExprLongScalarDoubleScalar IfExprLongScalarLongColumn IfExprLongScalarLongScalar IfExprDoubleColumnDoubleColumn IfExprDoubleColumnDoubleScalar IfExprDoubleColumnLongScalar IfExprDoubleScalarDoubleColumn IfExprDoubleScalarDoubleScalar IfExprDoubleScalarLongColumn IfExprDoubleScalarLongScalar {noformat} (there are others for StringGroup, Timestamp, IntervalDayTime, etc) Wherever there is a Column is the existing classes, we could generate additional Expr classes, too. E.g. {noformat} IfExprLongExprDoubleScalar IfExprLongExprLongExpr IfExprLongExprLongScalar IfExprLongScalarDoubleExpr IfExprLongScalarLongExpr IfExprLongScalarLongScalar IfExprDoubleExprDoubleExpr IfExprDoubleExprDoubleScalar IfExprDoubleExprLongScalar IfExprDoubleScalarDoubleExpr IfExprDoubleScalarLongExpr {noformat} And, for IfExprColumnNull, etc, we could add: {noformat} IfExprExprNull IfExprNullExpr {noformat} (if we were keen on performance, we could even generate the type specific versions for the Null classes...) 
was (Author: mmccline): Currently, we have: vector.expressions: {noformat} IfExprColumnNull IfExprNullColumn IfExprNullNull {noformat} vector.expressions.gen classes for Long/Double: {noformat} IfExprLongColumnDoubleScalar IfExprLongColumnLongColumn IfExprLongColumnLongScalar IfExprLongScalarDoubleColumn IfExprLongScalarDoubleScalar IfExprLongScalarLongColumn IfExprLongScalarLongScalar IfExprDoubleColumnDoubleColumn IfExprDoubleColumnDoubleScalar IfExprDoubleColumnLongScalar IfExprDoubleScalarDoubleColumn IfExprDoubleScalarDoubleScalar IfExprDoubleScalarLongColumn IfExprDoubleScalarLongScalar {noformat} (there are others for StringGroup, Timestamp, IntervalDayTime, etc) Wherever there is a Column in the existing classes, we could generate Expr classes, too. E.g. {noformat} IfExprLongExprDoubleScalar {noformat} > Vectorization: Execution failure related to non-standard embedding of > IfExprConditionalFilter inside VectorUDFAdaptor (Revert HIVE-17139) > - > > Key: HIVE-18524 > URL: https://issues.apache.org/jira/browse/HIVE-18524 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Fix For: 3.0.0 > > Attachments: HIVE-18524.01.patch, HIVE-18524.02.patch > > > {noformat} > insert overwrite table insert_10_1 > select cast(gpa as float), >age, >IF(age>40,cast('2011-01-01 01:01:01' as timestamp),NULL), >IF(LENGTH(name)>10,cast(name as binary),NULL) > from studentnull10k > vectorizationSchemaColumns: [0:name:string, 1:age:int, 2:gpa:double] > ExprNodeDescs: > UDFToFloat(gpa) (type: float), > age (type: int), > if((age > 40), 2011-01-01 01:01:01.0, null) (type: timestamp), > if((length(name) > 10), CAST( name AS BINARY), null) (type: binary) > selectExpressions: > VectorUDFAdaptor(if((age > 40), 2011-01-01 01:01:01.0, null)) > (children: LongColGreaterLongScalar(col 1:int, val 40) -> 4:boolean) > -> 5:timestamp, > VectorUDFAdaptor(if((length(name) > 10), CAST( name AS BINARY), 
null)) > (children: LongColGreaterLongScalar(col 4:int, val 10)(children: > StringLength(col 0:string) -> 4:int) -> 6:boolean, > VectorUDFAdaptor(CAST( name AS BINARY)) -> 7:binary) -> 8:binary > {noformat} > *// Notice there is no vector expression shown for the last IF stmt.* It has > been magically embedded inside the VectorUDFAdaptor object... > Execution results in this call stack. > {noformat} > Caused by: java.lang.NullPointerException > at java.util.Arrays.copyOfRange(Arrays.java:3521) > at > org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpressionWriterFactory$9.writeValue(VectorExpressionWriterFactory.java:1101) > at > org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpressionWriterFactory$VectorExpressionWriterBytes.writeValue(VectorExpressionWriterFactory.java:343) > at > org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFArgDesc.getDeferredJavaObject(VectorUDFArgDesc.java:123) > at > org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFAdaptor.setResult(VectorUDFAdaptor.java:211) > at >
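The class-per-operand-kind scheme discussed in the comment above (Column vs. Scalar vs. the proposed Expr in each operand position) exists so that each generated class can run one tight, branch-simple loop. The following is an illustrative reduction of what a generated class such as `IfExprLongColumnLongScalar` computes; it is not the actual Hive-generated code, which operates on `VectorizedRowBatch` with null and selected-vector handling.

```java
// Illustrative reduction of a generated vectorized IF expression:
// "then" operand is a column (array), "else" operand is a scalar.
// A separate class per operand-kind combination keeps the inner loop simple.
public class IfExprLongColumnLongScalarSketch {
    // result[i] = (cond[i] != 0) ? thenCol[i] : elseScalar
    static long[] evaluate(long[] cond, long[] thenCol, long elseScalar) {
        long[] out = new long[cond.length];
        for (int i = 0; i < cond.length; i++) {
            out[i] = cond[i] != 0 ? thenCol[i] : elseScalar;
        }
        return out;
    }

    public static void main(String[] args) {
        long[] cond = {1, 0, 1};
        long[] thenCol = {10, 20, 30};
        System.out.println(java.util.Arrays.toString(evaluate(cond, thenCol, -1)));
        // [10, -1, 30]
    }
}
```

An `IfExpr...Expr...` variant, as proposed, would evaluate a child vector expression to produce the operand array first, instead of reading a column or using a constant.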
[jira] [Comment Edited] (HIVE-18524) Vectorization: Execution failure related to non-standard embedding of IfExprConditionalFilter inside VectorUDFAdaptor (Revert HIVE-17139)
[ https://issues.apache.org/jira/browse/HIVE-18524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374965#comment-16374965 ] Matt McCline edited comment on HIVE-18524 at 2/23/18 9:06 PM: -- Currently, we have: vector.expressions: {noformat} IfExprColumnNull IfExprNullColumn IfExprNullNull {noformat} vector.expressions.gen classes for Long/Double: {noformat} IfExprLongColumnDoubleScalar IfExprLongColumnLongColumn IfExprLongColumnLongScalar IfExprLongScalarDoubleColumn IfExprLongScalarDoubleScalar IfExprLongScalarLongColumn IfExprLongScalarLongScalar IfExprDoubleColumnDoubleColumn IfExprDoubleColumnDoubleScalar IfExprDoubleColumnLongScalar IfExprDoubleScalarDoubleColumn IfExprDoubleScalarDoubleScalar IfExprDoubleScalarLongColumn IfExprDoubleScalarLongScalar {noformat} (there are others for StringGroup, Timestamp, IntervalDayTime, etc) Wherever there is a Column in the existing classes, we could generate Expr classes, too. E.g. {noformat} IfExprLongExprDoubleScalar {noformat} was (Author: mmccline): Currently, we have: vector.expressions: {noformat} IfExprColumnNull IfExprNullColumn IfExprNullNull {noformat} vector.expressions.gen classes for Long/Double: {noformat} IfExprLongColumnDoubleScalar IfExprLongColumnLongColumn IfExprLongColumnLongScalar IfExprLongScalarDoubleColumn IfExprLongScalarDoubleScalar IfExprLongScalarLongColumn IfExprLongScalarLongScalar IfExprDoubleColumnDoubleColumn IfExprDoubleColumnDoubleScalar IfExprDoubleColumnLongScalar IfExprDoubleScalarDoubleColumn IfExprDoubleScalarDoubleScalar IfExprDoubleScalarLongColumn IfExprDoubleScalarLongScalar {noformat} (there are others for StringGroup, Timestamp, IntervalDayTime, etc) Wherever there is a Column in the existing classes, we could add Expr. E.g. 
{noformat} IfExprLongExprDoubleScalar {noformat} > Vectorization: Execution failure related to non-standard embedding of > IfExprConditionalFilter inside VectorUDFAdaptor (Revert HIVE-17139) > - > > Key: HIVE-18524 > URL: https://issues.apache.org/jira/browse/HIVE-18524 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Fix For: 3.0.0 > > Attachments: HIVE-18524.01.patch, HIVE-18524.02.patch > > > {noformat} > insert overwrite table insert_10_1 > select cast(gpa as float), >age, >IF(age>40,cast('2011-01-01 01:01:01' as timestamp),NULL), >IF(LENGTH(name)>10,cast(name as binary),NULL) > from studentnull10k > vectorizationSchemaColumns: [0:name:string, 1:age:int, 2:gpa:double] > ExprNodeDescs: > UDFToFloat(gpa) (type: float), > age (type: int), > if((age > 40), 2011-01-01 01:01:01.0, null) (type: timestamp), > if((length(name) > 10), CAST( name AS BINARY), null) (type: binary) > selectExpressions: > VectorUDFAdaptor(if((age > 40), 2011-01-01 01:01:01.0, null)) > (children: LongColGreaterLongScalar(col 1:int, val 40) -> 4:boolean) > -> 5:timestamp, > VectorUDFAdaptor(if((length(name) > 10), CAST( name AS BINARY), null)) > (children: LongColGreaterLongScalar(col 4:int, val 10)(children: > StringLength(col 0:string) -> 4:int) -> 6:boolean, > VectorUDFAdaptor(CAST( name AS BINARY)) -> 7:binary) -> 8:binary > {noformat} > *// Notice there is no vector expression shown for the last IF stmt.* It has > been magically embedded inside the VectorUDFAdaptor object... > Execution results in this call stack. 
> {noformat} > Caused by: java.lang.NullPointerException > at java.util.Arrays.copyOfRange(Arrays.java:3521) > at > org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpressionWriterFactory$9.writeValue(VectorExpressionWriterFactory.java:1101) > at > org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpressionWriterFactory$VectorExpressionWriterBytes.writeValue(VectorExpressionWriterFactory.java:343) > at > org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFArgDesc.getDeferredJavaObject(VectorUDFArgDesc.java:123) > at > org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFAdaptor.setResult(VectorUDFAdaptor.java:211) > at > org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFAdaptor.evaluate(VectorUDFAdaptor.java:177) > at > org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:145) > ... 22 more > {noformat} > Change is due to: > HIVE-17139: Conditional expressions optimization: skip the expression > evaluation if the condition is not satisfied for vectorization engine. (Jia > Ke, reviewed by Ferdinand Xu) > Embedding a raw vector expression outside of
[jira] [Commented] (HIVE-18524) Vectorization: Execution failure related to non-standard embedding of IfExprConditionalFilter inside VectorUDFAdaptor (Revert HIVE-17139)
[ https://issues.apache.org/jira/browse/HIVE-18524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374965#comment-16374965 ] Matt McCline commented on HIVE-18524: - Currently, we have: vector.expressions: {noformat} IfExprColumnNull IfExprNullColumn IfExprNullNull {noformat} vector.expressions.gen classes for Long/Double: {noformat} IfExprLongColumnDoubleScalar IfExprLongColumnLongColumn IfExprLongColumnLongScalar IfExprLongScalarDoubleColumn IfExprLongScalarDoubleScalar IfExprLongScalarLongColumn IfExprLongScalarLongScalar IfExprDoubleColumnDoubleColumn IfExprDoubleColumnDoubleScalar IfExprDoubleColumnLongScalar IfExprDoubleScalarDoubleColumn IfExprDoubleScalarDoubleScalar IfExprDoubleScalarLongColumn IfExprDoubleScalarLongScalar {noformat} (there are others for StringGroup, Timestamp, IntervalDayTime, etc) Wherever there is a Column in the existing classes, we could add Expr. E.g. {noformat} IfExprLongExprDoubleScalar {noformat} > Vectorization: Execution failure related to non-standard embedding of > IfExprConditionalFilter inside VectorUDFAdaptor (Revert HIVE-17139) > - > > Key: HIVE-18524 > URL: https://issues.apache.org/jira/browse/HIVE-18524 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Fix For: 3.0.0 > > Attachments: HIVE-18524.01.patch, HIVE-18524.02.patch > > > {noformat} > insert overwrite table insert_10_1 > select cast(gpa as float), >age, >IF(age>40,cast('2011-01-01 01:01:01' as timestamp),NULL), >IF(LENGTH(name)>10,cast(name as binary),NULL) > from studentnull10k > vectorizationSchemaColumns: [0:name:string, 1:age:int, 2:gpa:double] > ExprNodeDescs: > UDFToFloat(gpa) (type: float), > age (type: int), > if((age > 40), 2011-01-01 01:01:01.0, null) (type: timestamp), > if((length(name) > 10), CAST( name AS BINARY), null) (type: binary) > selectExpressions: > VectorUDFAdaptor(if((age > 40), 2011-01-01 01:01:01.0, null)) > 
(children: LongColGreaterLongScalar(col 1:int, val 40) -> 4:boolean) > -> 5:timestamp, > VectorUDFAdaptor(if((length(name) > 10), CAST( name AS BINARY), null)) > (children: LongColGreaterLongScalar(col 4:int, val 10)(children: > StringLength(col 0:string) -> 4:int) -> 6:boolean, > VectorUDFAdaptor(CAST( name AS BINARY)) -> 7:binary) -> 8:binary > {noformat} > *// Notice there is no vector expression shown for the last IF stmt.* It has > been magically embedded inside the VectorUDFAdaptor object... > Execution results in this call stack. > {noformat} > Caused by: java.lang.NullPointerException > at java.util.Arrays.copyOfRange(Arrays.java:3521) > at > org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpressionWriterFactory$9.writeValue(VectorExpressionWriterFactory.java:1101) > at > org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpressionWriterFactory$VectorExpressionWriterBytes.writeValue(VectorExpressionWriterFactory.java:343) > at > org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFArgDesc.getDeferredJavaObject(VectorUDFArgDesc.java:123) > at > org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFAdaptor.setResult(VectorUDFAdaptor.java:211) > at > org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFAdaptor.evaluate(VectorUDFAdaptor.java:177) > at > org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:145) > ... 22 more > {noformat} > Change is due to: > HIVE-17139: Conditional expressions optimization: skip the expression > evaluation if the condition is not satisfied for vectorization engine. (Jia > Ke, reviewed by Ferdinand Xu) > Embedding a raw vector expression outside of VectorizationContext is quite > non-standard and evidently buggy. > [~Ferd] [~Ke Jia] I am inclined to revert this change. Comments? CC: > [~ashutoshc] [~hagleitn] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18659) add acid version marker to acid files/directories
[ https://issues.apache.org/jira/browse/HIVE-18659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374958#comment-16374958 ] Hive QA commented on HIVE-18659: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12911757/HIVE-18659.11.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9337/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9337/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9337/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-02-23 20:59:30.597 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-9337/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-02-23 20:59:30.600 + cd apache-github-source-source + git fetch origin >From https://github.com/apache/hive 571ef51..ed487ac master -> origin/master d72c35e..2bcab14 branch-2 -> origin/branch-2 + git reset --hard HEAD HEAD is now at 571ef51 HIVE-18663: Logged Spark Job Id contains a UUID instead of the actual id (Sahil Takiar, reviewed by Vihang Karajgaonkar) + git clean -f -d + git checkout master Already on 'master' Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded. (use "git pull" to update your local branch) + git reset --hard origin/master HEAD is now at ed487ac HIVE-18654 : Add Hiveserver2 specific HADOOP_OPTS environment variable (Vihang Karajgaonkar, reviewed by Sahil Takiar) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-02-23 20:59:35.464 + rm -rf ../yetus_PreCommit-HIVE-Build-9337 + mkdir ../yetus_PreCommit-HIVE-Build-9337 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-9337 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-9337/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java:39 Falling back to three-way merge... Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java' cleanly. error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcRecordUpdater.java:227 Falling back to three-way merge... Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcRecordUpdater.java' with conflicts. error: patch failed: ql/src/test/results/clientpositive/acid_nullscan.q.out:42 Falling back to three-way merge... 
Applied patch to 'ql/src/test/results/clientpositive/acid_nullscan.q.out' with conflicts. error: patch failed: ql/src/test/results/clientpositive/acid_table_stats.q.out:95 Falling back to three-way merge... Applied patch to 'ql/src/test/results/clientpositive/acid_table_stats.q.out' with conflicts. error: patch failed: ql/src/test/results/clientpositive/autoColumnStats_4.q.out:241 Falling back to three-way merge... Applied patch to 'ql/src/test/results/clientpositive/autoColumnStats_4.q.out' with conflicts. error: patch failed: ql/src/test/results/clientpositive/llap/acid_bucket_pruning.q.out:103 Falling back to three-way merge... Applied patch to 'ql/src/test/results/clientpositive/llap/acid_bucket_pruning.q.out' with conflicts. Going to apply patch with: git apply -p0 /data/hiveptest/working/scratch/build.patch:735: trailing whitespace. totalSize 7966 /data/hiveptest/working/scratch/build.patch:744: trailing whitespace. totalSize 7966 /data/hiveptest/working/scratch/build.patch:757: trailing whitespace. totalSize 1834 /data/hiveptest/working/scratch/build.patch:766:
[jira] [Updated] (HIVE-18654) Add Hiveserver2 specific HADOOP_OPTS environment variable
[ https://issues.apache.org/jira/browse/HIVE-18654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar updated HIVE-18654: --- Resolution: Fixed Fix Version/s: 2.4.0 3.0.0 Status: Resolved (was: Patch Available) Patch merged in master and branch-2. Thanks for the review [~stakiar] > Add Hiveserver2 specific HADOOP_OPTS environment variable > -- > > Key: HIVE-18654 > URL: https://issues.apache.org/jira/browse/HIVE-18654 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Minor > Fix For: 3.0.0, 2.4.0 > > Attachments: HIVE-18654.01.patch > > > HIVE-2665 added support to include metastore specific HADOOP_OPTS variable. > This is helpful in debugging especially if you want to add some jvm > parameters to metastore's process. A similar setting for Hiveserver2 is > missing and could be very helpful in debugging. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18730) Use LLAP as execution engine for Druid mini Cluster Tests
[ https://issues.apache.org/jira/browse/HIVE-18730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374954#comment-16374954 ] Hive QA commented on HIVE-18730: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12911736/HIVE-18730.2.patch {color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 34 failed/errored test(s), 13019 tests executed *Failed tests:* {noformat} TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=93)
[jira] [Commented] (HIVE-18730) Use LLAP as execution engine for Druid mini Cluster Tests
[ https://issues.apache.org/jira/browse/HIVE-18730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374934#comment-16374934 ] Hive QA commented on HIVE-18730: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 45s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 48s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 51s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 39s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 36s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 6s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | 
{color:green} 1m 55s{color} | {color:green} root: The patch generated 0 new + 836 unchanged - 2 fixed = 836 total (was 838) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} itests/util: The patch generated 0 new + 135 unchanged - 2 fixed = 135 total (was 137) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} The patch ql passed checkstyle {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 18 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 54s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 16s{color} | {color:red} The patch generated 49 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 46m 31s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile xml | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-9336/dev-support/hive-personality.sh | | git revision | master / 571ef51 | | Default Java | 1.8.0_111 | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-9336/yetus/whitespace-tabs.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9336/yetus/patch-asflicense-problems.txt | | modules | C: . itests itests/util ql U: . 
| | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9336/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Use LLAP as execution engine for Druid mini Cluster Tests > - > > Key: HIVE-18730 > URL: https://issues.apache.org/jira/browse/HIVE-18730 > Project: Hive > Issue Type: Improvement > Components: Druid integration >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18730.2.patch, HIVE-18730.patch > > > Currently, we are using local MR to run Mini Cluster tests. It will be better > to use LLAP cluster or TEZ. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18710) extend inheritPerms to ACID in Hive 2.X
[ https://issues.apache.org/jira/browse/HIVE-18710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18710: Resolution: Fixed Fix Version/s: 2.4.0 Status: Resolved (was: Patch Available) Committed to branch-2. Test failures look unrelated, ACID test has an order change. Thanks for the review! > extend inheritPerms to ACID in Hive 2.X > --- > > Key: HIVE-18710 > URL: https://issues.apache.org/jira/browse/HIVE-18710 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Fix For: 2.4.0 > > Attachments: HIVE-18710-branch-2.patch, HIVE-18710.01-branch-2.patch, > HIVE-18710.02-branch-2.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18264) CachedStore: Store cached partitions/col stats within the table cache
[ https://issues.apache.org/jira/browse/HIVE-18264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-18264: Attachment: HIVE-18264.3.patch > CachedStore: Store cached partitions/col stats within the table cache > --- > > Key: HIVE-18264 > URL: https://issues.apache.org/jira/browse/HIVE-18264 > Project: Hive > Issue Type: Bug >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta >Priority: Major > Attachments: HIVE-18264.1.patch, HIVE-18264.2.patch, > HIVE-18264.3.patch > > > Currently we have a separate cache for partitions and partition col stats > which results in some calls iterating through each of these for > retrieving/updating. We can get better performance by organizing > hierarchically. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18158) Remove OrcRawRecordMerger.ReaderPairAcid.statementId
[ https://issues.apache.org/jira/browse/HIVE-18158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18158: -- Attachment: HIVE-18158.02.patch > Remove OrcRawRecordMerger.ReaderPairAcid.statementId > > > Key: HIVE-18158 > URL: https://issues.apache.org/jira/browse/HIVE-18158 > Project: Hive > Issue Type: Improvement > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Minor > Attachments: HIVE-18158.01.patch, HIVE-18158.02.patch > > > * Need to get rid of this since we can always get this from the row > itself in Acid 2.0. > * For Acid 1.0, statementId == 0 in all deltas because both > multi-statement txns and > * Split Update are only available in test mode so there is nothing that can > create a > * deltas_x_x_M with M > 0. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18559) Allow hive.metastore.limit.partition.request to be user configurable
[ https://issues.apache.org/jira/browse/HIVE-18559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374899#comment-16374899 ] Vihang Karajgaonkar commented on HIVE-18559: +1 patch looks good to me. > Allow hive.metastore.limit.partition.request to be user configurable > > > Key: HIVE-18559 > URL: https://issues.apache.org/jira/browse/HIVE-18559 > Project: Hive > Issue Type: Bug > Components: Metastore >Reporter: Mohit Sabharwal >Assignee: Mohit Sabharwal >Priority: Minor > Attachments: HIVE-18559.patch > > > HIVE-13884 introduced hive.metastore.limit.partition.request to limit the > number of partitions that can be requested from the Metastore for a given > table. > This config is set by cluster admins to prevent ad-hoc queries from putting > memory pressure on HMS. But the limit can be too restrictive for some (savvy) > users who require the ability to set a higher limit for a subset of workloads > (during low HMS memory pressure, for example). Such users generally instruct > admins not to set this config. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18745) Fix MetaStore creation in tests, so multiple MetaStores can be started on the same machine
[ https://issues.apache.org/jira/browse/HIVE-18745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374890#comment-16374890 ] Hive QA commented on HIVE-18745: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12911720/HIVE-18745.3.patch {color:green}SUCCESS:{color} +1 due to 23 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 37 failed/errored test(s), 13017 tests executed *Failed tests:* {noformat} TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=93)
[jira] [Commented] (HIVE-15077) Acid LockManager is unfair
[ https://issues.apache.org/jira/browse/HIVE-15077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374883#comment-16374883 ] Alan Gates commented on HIVE-15077: --- +1. The previous code was trying to save time by only comparing against some of the locks, but since the sort order was changed in HIVE-10242 that no longer works. > Acid LockManager is unfair > -- > > Key: HIVE-15077 > URL: https://issues.apache.org/jira/browse/HIVE-15077 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.3.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Blocker > Attachments: HIVE-15077.02.patch > > > HIVE-10242 made the acid LM unfair. > In TxnHandler.checkLock(), suppose we are trying to acquire SR5 (the number > is extLockId). > Then > LockInfo[] locks = lockSet.toArray(new LockInfo[lockSet.size()]); > may look like this (all explicitly listed locks are in Waiting state) > {, SR5 SW3 X4} > So the algorithm will find SR5 in the list and start looking backwards (to > the left). > According to IDs, SR5 should wait for X4 to be granted but X4 won't even be > examined and so SR5 may be granted. > Theoretically, this could cause starvation. > The query that generates the list already has > query.append(" and hl_lock_ext_id <= ").append(extLockId); > but it should use "<" rather than "<=" to exclude the locks being checked > from "locks" list which will make the algorithm look at all locks "in front" > of a given lock. 
> Here is an example (add to TestDbTxnManager2) > {noformat} > @Test > public void testFairness2() throws Exception { > dropTable(new String[]{"T7"}); > CommandProcessorResponse cpr = driver.run("create table if not exists T7 > (a int) partitioned by (p int) stored as orc TBLPROPERTIES > ('transactional'='true')"); > checkCmdOnDriver(cpr); > checkCmdOnDriver(driver.run("insert into T7 partition(p) > values(1,1),(1,2)"));//create 2 partitions > cpr = driver.compileAndRespond("select a from T7 "); > checkCmdOnDriver(cpr); > txnMgr.acquireLocks(driver.getPlan(), ctx, "Fifer");//gets S lock on T7 > HiveTxnManager txnMgr2 = > TxnManagerFactory.getTxnManagerFactory().getTxnManager(conf); > swapTxnManager(txnMgr2); > cpr = driver.compileAndRespond("alter table T7 drop partition (p=1)"); > checkCmdOnDriver(cpr); > //tries to get X lock on T7.p=1 and gets Waiting state > LockState lockState = ((DbTxnManager) > txnMgr2).acquireLocks(driver.getPlan(), ctx, "Fiddler", false); > List locks = getLocks(); > Assert.assertEquals("Unexpected lock count", 4, locks.size()); > checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "T7", > null, locks); > checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "T7", > "p=1", locks); > checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "T7", > "p=2", locks); > checkLock(LockType.EXCLUSIVE, LockState.WAITING, "default", "T7", "p=1", > locks); > HiveTxnManager txnMgr3 = > TxnManagerFactory.getTxnManagerFactory().getTxnManager(conf); > swapTxnManager(txnMgr3); > //this should block behind the X lock on T7.p=1 > cpr = driver.compileAndRespond("select a from T7"); > checkCmdOnDriver(cpr); > txnMgr3.acquireLocks(driver.getPlan(), ctx, "Fifer");//gets S lock on T6 > locks = getLocks(); > Assert.assertEquals("Unexpected lock count", 7, locks.size()); > checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "T7", > null, locks); > checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "T7", > "p=1", 
locks); > checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "T7", > "p=2", locks); > checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "T7", > null, locks); > checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "T7", > "p=1", locks); > checkLock(LockType.SHARED_READ, LockState.ACQUIRED, "default", "T7", > "p=2", locks); > checkLock(LockType.EXCLUSIVE, LockState.WAITING, "default", "T7", "p=1", > locks); > } > {noformat} > The 2nd {{locks = getLocks();}} output shows that all locks for the 2nd > {{select * from T7}} are all acquired while they should block behind the X > lock to be fair. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
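The fairness rule at issue in the report above is: a lock may be granted only if it is compatible with every lock whose external id is strictly smaller, including other waiting locks such as X4 "in front" of SR5. Below is a simplified, self-contained Java sketch of that rule (not the actual TxnHandler code; the names and the reduced compatibility matrix are illustrative). Using the equivalent of "<=" would pull the candidate's own lock into the list and let the backward scan stop there, skipping X4.

```java
import java.util.Arrays;
import java.util.List;

enum LockKind { SHARED_READ, SHARED_WRITE, EXCLUSIVE }

class LockEntry {
    final long extId;       // hl_lock_ext_id: also the arrival order
    final LockKind kind;
    LockEntry(long extId, LockKind kind) { this.extId = extId; this.kind = kind; }
}

public class FairLockSketch {
    // Simplified compatibility matrix: only two shared reads coexist.
    static boolean compatible(LockKind a, LockKind b) {
        return a == LockKind.SHARED_READ && b == LockKind.SHARED_READ;
    }

    // Fair check: "hl_lock_ext_id < extLockId", not "<=" -- examine EVERY
    // lock in front of the candidate, waiting or granted.
    static boolean canGrant(List<LockEntry> all, LockEntry candidate) {
        for (LockEntry l : all) {
            if (l.extId < candidate.extId && !compatible(l.kind, candidate.kind)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // SR5 behind waiting SW3 and X4, as in the report.
        List<LockEntry> locks = Arrays.asList(
            new LockEntry(3, LockKind.SHARED_WRITE),
            new LockEntry(4, LockKind.EXCLUSIVE),
            new LockEntry(5, LockKind.SHARED_READ));
        System.out.println(canGrant(locks, new LockEntry(5, LockKind.SHARED_READ)));
        // false: SR5 must wait behind X4
    }
}
```

With the fair scan, SR5 blocks behind X4 and the second `select` in the test above could no longer jump the queue.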
[jira] [Commented] (HIVE-18785) Make JSON SerDe First-Class SerDe
[ https://issues.apache.org/jira/browse/HIVE-18785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374873#comment-16374873 ] Vihang Karajgaonkar commented on HIVE-18785: I can take a look at this. This may have some effects on the standalone-metastore as well (hopefully not too bad). > Make JSON SerDe First-Class SerDe > - > > Key: HIVE-18785 > URL: https://issues.apache.org/jira/browse/HIVE-18785 > Project: Hive > Issue Type: New Feature > Components: Serializers/Deserializers >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Priority: Major > > According to the [Hive SerDe > Docs|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-RowFormats], > there are some extra steps involved in getting the JSON SerDe to work: > {quote} > ROW FORMAT SERDE > 'org.apache.hive.hcatalog.data.JsonSerDe' > STORED AS TEXTFILE > In some distributions, a reference to hive-hcatalog-core.jar is required. > ADD JAR /usr/lib/hive-hcatalog/lib/hive-hcatalog-core.jar; > {quote} > I would like to propose that we move this SerDe into first-class status: > {{STORED AS JSONFILE}} > The user should have to perform no additional steps to use this SerDe. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
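The proposal amounts to the following DDL difference (table name and columns below are made up for illustration; {{STORED AS JSONFILE}} is the proposed syntax, not something current releases accept):

```sql
-- Today: the SerDe jar must be registered first on some distributions
-- (the jar path varies by installation).
ADD JAR /usr/lib/hive-hcatalog/lib/hive-hcatalog-core.jar;

CREATE TABLE events_json (id INT, payload STRING)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
STORED AS TEXTFILE;

-- Proposed: first-class storage format, no ADD JAR or SerDe class name.
CREATE TABLE events_json (id INT, payload STRING)
STORED AS JSONFILE;
```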
[jira] [Assigned] (HIVE-18785) Make JSON SerDe First-Class SerDe
[ https://issues.apache.org/jira/browse/HIVE-18785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar reassigned HIVE-18785: -- Assignee: Vihang Karajgaonkar > Make JSON SerDe First-Class SerDe > - > > Key: HIVE-18785 > URL: https://issues.apache.org/jira/browse/HIVE-18785 > Project: Hive > Issue Type: New Feature > Components: Serializers/Deserializers >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: Vihang Karajgaonkar >Priority: Major > > According to the [Hive SerDe > Docs|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-RowFormats], > there are some extra steps involved in getting the JSON SerDe to work: > {quote} > ROW FORMAT SERDE > 'org.apache.hive.hcatalog.data.JsonSerDe' > STORED AS TEXTFILE > In some distributions, a reference to hive-hcatalog-core.jar is required. > ADD JAR /usr/lib/hive-hcatalog/lib/hive-hcatalog-core.jar; > {quote} > I would like to propose that we move this SerDe into first-class status: > {{STORED AS JSONFILE}} > The user should have to perform no additional steps to use this SerDe. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18786) NPE in Hive windowing functions
[ https://issues.apache.org/jira/browse/HIVE-18786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Bieniosek updated HIVE-18786:
-
Description:
When I run a Hive query with windowing functions, if there's enough data I get an NPE. For example something like this query might break:
select id, created_date, max(created_date) over (partition by id) latest_created_any from ...
The only workaround I've found is to remove the windowing functions entirely. The stacktrace looks suspiciously similar to +HIVE-15278+, but I'm in hive-2.3.2 which appears to have the bugfix applied.
Caused by: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=0)
  at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:297)
  at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:317)
  at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:185)
  ... 14 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=0)
  at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:365)
  at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:287)
  ... 16 more
Caused by: java.lang.NullPointerException
  at org.apache.hadoop.hive.ql.exec.persistence.PTFRowContainer.first(PTFRowContainer.java:115)
  at org.apache.hadoop.hive.ql.exec.PTFPartition.iterator(PTFPartition.java:114)
  at org.apache.hadoop.hive.ql.udf.ptf.BasePartitionEvaluator.getPartitionAgg(BasePartitionEvaluator.java:200)
  at org.apache.hadoop.hive.ql.udf.ptf.WindowingTableFunction.evaluateFunctionOnPartition(WindowingTableFunction.java:155)
  at org.apache.hadoop.hive.ql.udf.ptf.WindowingTableFunction.iterator(WindowingTableFunction.java:538)
  at org.apache.hadoop.hive.ql.exec.PTFOperator$PTFInvocation.finishPartition(PTFOperator.java:349)
  at org.apache.hadoop.hive.ql.exec.PTFOperator.process(PTFOperator.java:123)
  at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)
  at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)
  at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:356)

was:
When I run a Hive query with windowing functions, if there's enough data I get an NPE. For example something like this query might break:
select id, created_date, max(created_date) over (partition by id) latest_created_any from ...
The only workaround I've found is to remove the windowing functions entirely. The stacktrace looks suspiciously similar to +HIVE-15278+, but I'm in hive-2.3.2 which appears to have the bugfix applied.
Caused by: java.lang.NullPointerException
  at org.apache.hadoop.hive.ql.exec.persistence.PTFRowContainer.first(PTFRowContainer.java:115)
  at org.apache.hadoop.hive.ql.exec.PTFPartition.iterator(PTFPartition.java:114)
  at org.apache.hadoop.hive.ql.udf.ptf.BasePartitionEvaluator.getPartitionAgg(BasePartitionEvaluator.java:200)
  at org.apache.hadoop.hive.ql.udf.ptf.WindowingTableFunction.evaluateFunctionOnPartition(WindowingTableFunction.java:155)
  at org.apache.hadoop.hive.ql.udf.ptf.WindowingTableFunction.iterator(WindowingTableFunction.java:538)
  at org.apache.hadoop.hive.ql.exec.PTFOperator$PTFInvocation.finishPartition(PTFOperator.java:349)
  at org.apache.hadoop.hive.ql.exec.PTFOperator.process(PTFOperator.java:123)
  at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)
  at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)
  at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:356)

> NPE in Hive windowing functions
> ---
>
> Key: HIVE-18786
> URL: https://issues.apache.org/jira/browse/HIVE-18786
> Project: Hive
> Issue Type: Bug
> Affects Versions: 2.3.2
> Reporter: Michael Bieniosek
> Priority: Major
>
> When I run a Hive query with windowing functions, if there's enough data I get an NPE.
> For example something like this query might break:
> select id, created_date, max(created_date) over (partition by id) latest_created_any from ...
> The only workaround I've found is to remove the windowing functions entirely.
> The stacktrace looks suspiciously similar to +HIVE-15278+, but I'm in hive-2.3.2 which appears to have the bugfix applied.
>
> Caused by: java.lang.RuntimeException:
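For anyone trying to reproduce, a setup along these lines should exercise the same code path (the table and schema here are made up, since the report elides the actual table; per the report, the NPE only appears once the partitions hold enough data):

```sql
-- Hypothetical table standing in for the reporter's elided "from ...".
CREATE TABLE items (id INT, created_date STRING);

-- The windowed aggregate of the reported shape; PTFRowContainer.first()
-- is hit when Hive iterates each id's partition to evaluate MAX(...) OVER.
SELECT id,
       created_date,
       MAX(created_date) OVER (PARTITION BY id) AS latest_created_any
FROM items;
```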
[jira] [Comment Edited] (HIVE-17580) Remove dependency of get_fields_with_environment_context API to serde
[ https://issues.apache.org/jira/browse/HIVE-17580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374823#comment-16374823 ] Vihang Karajgaonkar edited comment on HIVE-17580 at 2/23/18 7:10 PM: - pre-commit didn't trigger for some reason. Reattaching the patch again. was (Author: vihangk1): pre-commit didn't trigger for some reason. Reattaching again the patch again. > Remove dependency of get_fields_with_environment_context API to serde > - > > Key: HIVE-17580 > URL: https://issues.apache.org/jira/browse/HIVE-17580 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Major > Labels: pull-request-available > Attachments: HIVE-17580.003-standalone-metastore.patch, > HIVE-17580.04-standalone-metastore.patch, > HIVE-17580.05-standalone-metastore.patch, > HIVE-17580.06-standalone-metastore.patch, > HIVE-17580.07-standalone-metastore.patch > > > The {{get_fields_with_environment_context}} metastore API uses the {{Deserializer}} > class to access the field metadata for the cases where it is stored along > with the data files (Avro tables). The problem is that the {{Deserializer}} class is > defined in the hive-serde module, and in order to make the metastore independent of > Hive we will have to remove this dependency (at least we should change it to > a runtime dependency instead of a compile-time one). > The other option is to investigate whether we can use SearchArgument to provide this > functionality. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18745) Fix MetaStore creation in tests, so multiple MetaStores can be started on the same machine
[ https://issues.apache.org/jira/browse/HIVE-18745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374842#comment-16374842 ] Hive QA commented on HIVE-18745: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 37s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 31s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 39s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 59s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 3s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | 
{color:green} 0m 12s{color} | {color:green} The patch core passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} The patch java-client passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} itests/hive-unit: The patch generated 0 new + 643 unchanged - 1 fixed = 643 total (was 644) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} The patch hive-unit-hadoop2 passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} The patch util passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} The patch ql passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} The patch standalone-metastore passed checkstyle {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 0s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 15s{color} | {color:red} The patch generated 49 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 29m 57s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-9335/dev-support/hive-personality.sh | | git revision | master / 571ef51 | | Default Java | 1.8.0_111 | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9335/yetus/patch-asflicense-problems.txt | | modules | C: hcatalog/core hcatalog/webhcat/java-client itests/hive-unit itests/hive-unit-hadoop2 itests/util ql standalone-metastore U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9335/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Fix MetaStore creation in tests, so multiple MetaStores can be started on the > same machine > -- > > Key: HIVE-18745 > URL: https://issues.apache.org/jira/browse/HIVE-18745 > Project: Hive > Issue Type: Sub-task >
[jira] [Commented] (HIVE-18785) Make JSON SerDe First-Class SerDe
[ https://issues.apache.org/jira/browse/HIVE-18785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374826#comment-16374826 ] BELUGA BEHR commented on HIVE-18785: Or, as I've seen it, users don't realize that they need to perform an extra step to use it and are then blocked. > Make JSON SerDe First-Class SerDe > - > > Key: HIVE-18785 > URL: https://issues.apache.org/jira/browse/HIVE-18785 > Project: Hive > Issue Type: New Feature > Components: Serializers/Deserializers >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Priority: Major > > According to the [Hive SerDe > Docs|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-RowFormats], > there are some extra steps involved in getting the JSON SerDe to work: > {quote} > ROW FORMAT SERDE > 'org.apache.hive.hcatalog.data.JsonSerDe' > STORED AS TEXTFILE > In some distributions, a reference to hive-hcatalog-core.jar is required. > ADD JAR /usr/lib/hive-hcatalog/lib/hive-hcatalog-core.jar; > {quote} > I would like to propose that we move this SerDe into first-class status: > {{STORED AS JSONFILE}} > The user should have to perform no additional steps to use this SerDe. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17580) Remove dependency of get_fields_with_environment_context API to serde
[ https://issues.apache.org/jira/browse/HIVE-17580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374823#comment-16374823 ] Vihang Karajgaonkar commented on HIVE-17580: pre-commit didn't trigger for some reason. Reattaching the patch again. > Remove dependency of get_fields_with_environment_context API to serde > - > > Key: HIVE-17580 > URL: https://issues.apache.org/jira/browse/HIVE-17580 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Major > Labels: pull-request-available > Attachments: HIVE-17580.003-standalone-metastore.patch, > HIVE-17580.04-standalone-metastore.patch, > HIVE-17580.05-standalone-metastore.patch, > HIVE-17580.06-standalone-metastore.patch, > HIVE-17580.07-standalone-metastore.patch > > > The {{get_fields_with_environment_context}} metastore API uses the {{Deserializer}} > class to access the field metadata for the cases where it is stored along > with the data files (Avro tables). The problem is that the {{Deserializer}} class is > defined in the hive-serde module, and in order to make the metastore independent of > Hive we will have to remove this dependency (at least we should change it to > a runtime dependency instead of a compile-time one). > The other option is to investigate whether we can use SearchArgument to provide this > functionality. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-17580) Remove dependency of get_fields_with_environment_context API to serde
[ https://issues.apache.org/jira/browse/HIVE-17580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar updated HIVE-17580: --- Attachment: HIVE-17580.07-standalone-metastore.patch > Remove dependency of get_fields_with_environment_context API to serde > - > > Key: HIVE-17580 > URL: https://issues.apache.org/jira/browse/HIVE-17580 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Major > Labels: pull-request-available > Attachments: HIVE-17580.003-standalone-metastore.patch, > HIVE-17580.04-standalone-metastore.patch, > HIVE-17580.05-standalone-metastore.patch, > HIVE-17580.06-standalone-metastore.patch, > HIVE-17580.07-standalone-metastore.patch > > > The {{get_fields_with_environment_context}} metastore API uses the {{Deserializer}} > class to access the field metadata for the cases where it is stored along > with the data files (Avro tables). The problem is that the {{Deserializer}} class is > defined in the hive-serde module, and in order to make the metastore independent of > Hive we will have to remove this dependency (at least we should change it to > a runtime dependency instead of a compile-time one). > The other option is to investigate whether we can use SearchArgument to provide this > functionality. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18788) Clean up inputs in JDBC PreparedStatement
[ https://issues.apache.org/jira/browse/HIVE-18788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-18788: -- Status: Patch Available (was: Open) > Clean up inputs in JDBC PreparedStatement > - > > Key: HIVE-18788 > URL: https://issues.apache.org/jira/browse/HIVE-18788 > Project: Hive > Issue Type: Bug >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-18788.1.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18093) Improve logging when HoS application is killed
[ https://issues.apache.org/jira/browse/HIVE-18093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-18093: Attachment: HIVE-18093.1.patch > Improve logging when HoS application is killed > -- > > Key: HIVE-18093 > URL: https://issues.apache.org/jira/browse/HIVE-18093 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-18093.1.patch > > > When a HoS job is explicitly killed by a user (via a yarn command), the > logs just say "RPC channel closed" -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18788) Clean up inputs in JDBC PreparedStatement
[ https://issues.apache.org/jira/browse/HIVE-18788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-18788: -- Attachment: HIVE-18788.1.patch > Clean up inputs in JDBC PreparedStatement > - > > Key: HIVE-18788 > URL: https://issues.apache.org/jira/browse/HIVE-18788 > Project: Hive > Issue Type: Bug >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-18788.1.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18093) Improve logging when HoS application is killed
[ https://issues.apache.org/jira/browse/HIVE-18093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-18093: Status: Patch Available (was: Open) > Improve logging when HoS application is killed > -- > > Key: HIVE-18093 > URL: https://issues.apache.org/jira/browse/HIVE-18093 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-18093.1.patch > > > When a HoS job is explicitly killed by a user (via a yarn command), the > logs just say "RPC channel closed" -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18788) Clean up inputs in JDBC PreparedStatement
[ https://issues.apache.org/jira/browse/HIVE-18788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai reassigned HIVE-18788: - > Clean up inputs in JDBC PreparedStatement > - > > Key: HIVE-18788 > URL: https://issues.apache.org/jira/browse/HIVE-18788 > Project: Hive > Issue Type: Bug >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18663) Logged Spark Job Id contains a UUID instead of the actual id
[ https://issues.apache.org/jira/browse/HIVE-18663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-18663: Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks Vihang for the review! > Logged Spark Job Id contains a UUID instead of the actual id > > > Key: HIVE-18663 > URL: https://issues.apache.org/jira/browse/HIVE-18663 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18663.1.patch > > > We have logs like {{Spark Job[job-id]}} but the {{[job-id]}} is set to a UUID > that is created by the RSC {{ClientProtocol}}. It should be pretty easy to > print out the actual job id instead. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18327) Remove the unnecessary HiveConf dependency for MiniHiveKdc
[ https://issues.apache.org/jira/browse/HIVE-18327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374795#comment-16374795 ] Ashutosh Chauhan commented on HIVE-18327: - +1. Are any new failures related? > Remove the unnecessary HiveConf dependency for MiniHiveKdc > -- > > Key: HIVE-18327 > URL: https://issues.apache.org/jira/browse/HIVE-18327 > Project: Hive > Issue Type: Test > Components: Test >Affects Versions: 3.0.0 >Reporter: Aihua Xu >Assignee: Daniel Voros >Priority: Major > Attachments: HIVE-18327.1.patch, HIVE-18327.2.patch > > > MiniHiveKdc takes a HiveConf as an input parameter even though it is not needed. Remove > the unnecessary HiveConf. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18723) CompactorOutputCommitter.commitJob() - check rename() ret val
[ https://issues.apache.org/jira/browse/HIVE-18723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374791#comment-16374791 ] Hive QA commented on HIVE-18723: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12911714/HIVE-18723.1.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9334/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9334/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9334/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-02-23 18:20:43.463 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-9334/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-02-23 18:20:43.466 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at e3c4d51 HIVE-18765: SparkClientImpl swallows exception messages from the RemoteDriver (Sahil Takiar, reviewed by Xuefu Zhang) + git clean -f -d + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at e3c4d51 HIVE-18765: SparkClientImpl swallows exception messages from the RemoteDriver (Sahil Takiar, reviewed by Xuefu Zhang) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-02-23 18:20:44.050 + rm -rf ../yetus_PreCommit-HIVE-Build-9334 + mkdir ../yetus_PreCommit-HIVE-Build-9334 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-9334 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-9334/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/CompactorMR.java:32 error: repository lacks the necessary blob to fall back on 3-way merge. error: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/CompactorMR.java: patch does not apply error: src/java/org/apache/hadoop/hive/ql/txn/compactor/CompactorMR.java: does not exist in index error: java/org/apache/hadoop/hive/ql/txn/compactor/CompactorMR.java: does not exist in index The patch does not appear to apply with p0, p1, or p2 + exit 1 ' {noformat} This message is automatically generated. 
ATTACHMENT ID: 12911714 - PreCommit-HIVE-Build > CompactorOutputCommitter.commitJob() - check rename() ret val > - > > Key: HIVE-18723 > URL: https://issues.apache.org/jira/browse/HIVE-18723 > Project: Hive > Issue Type: Improvement > Components: Transactions >Affects Versions: 1.0.0 >Reporter: Eugene Koifman >Assignee: Kryvenko Igor >Priority: Major > Attachments: HIVE-18723.1.patch, HIVE-18723.patch > > > Right now the return value of {{fs.rename(fileStatus.getPath(), newPath);}} is ignored. > Should this use {{FileUtils.rename(FileSystem fs, Path sourcePath, Path > destPath, Configuration conf)}} instead? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18716) Delete unnecessary parameters from TaskFactory
[ https://issues.apache.org/jira/browse/HIVE-18716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374788#comment-16374788 ] Hive QA commented on HIVE-18716: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12911702/HIVE-18716.2.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9333/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9333/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9333/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-02-23 18:18:27.380 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-9333/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-02-23 18:18:27.383 + cd apache-github-source-source + git fetch origin >From https://github.com/apache/hive cbb9233..e3c4d51 master -> origin/master + git reset --hard HEAD HEAD is now at cbb9233 HIVE-18192: Introduce WriteID per table rather than using global transaction ID (Sankar Hariappan, reviewed by Eugene Koifman) + git clean -f -d + git checkout master Already on 'master' Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded. (use "git pull" to update your local branch) + git reset --hard origin/master HEAD is now at e3c4d51 HIVE-18765: SparkClientImpl swallows exception messages from the RemoteDriver (Sahil Takiar, reviewed by Xuefu Zhang) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-02-23 18:18:30.664 + rm -rf ../yetus_PreCommit-HIVE-Build-9333 + mkdir ../yetus_PreCommit-HIVE-Build-9333 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-9333 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-9333/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java:2020 Falling back to three-way merge... Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java' cleanly. error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/parse/ImportSemanticAnalyzer.java:409 Falling back to three-way merge... Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/parse/ImportSemanticAnalyzer.java' with conflicts. 
Going to apply patch with: git apply -p0 error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java:2020 Falling back to three-way merge... Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java' cleanly. error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/parse/ImportSemanticAnalyzer.java:409 Falling back to three-way merge... Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/parse/ImportSemanticAnalyzer.java' with conflicts. U ql/src/java/org/apache/hadoop/hive/ql/parse/ImportSemanticAnalyzer.java + exit 1 ' {noformat} This message is automatically generated. ATTACHMENT ID: 12911702 - PreCommit-HIVE-Build > Delete unnecessary parameters from TaskFactory > -- > > Key: HIVE-18716 > URL: https://issues.apache.org/jira/browse/HIVE-18716 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 3.0.0 >Reporter: Gergely Hajós >Assignee: Gergely Hajós >Priority: Trivial > Attachments: HIVE-18716.1.patch, HIVE-18716.2.patch > > > * In _TaskFactory class conf_ parameter is not used here > {code:java} > public static Task get(Class workClass, > HiveConf
[jira] [Commented] (HIVE-18327) Remove the unnecessary HiveConf dependency for MiniHiveKdc
[ https://issues.apache.org/jira/browse/HIVE-18327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374786#comment-16374786 ] Hive QA commented on HIVE-18327: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12911696/HIVE-18327.2.patch {color:green}SUCCESS:{color} +1 due to 12 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 37 failed/errored test(s), 13005 tests executed *Failed tests:* {noformat} TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=91) [infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q] TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=93)
[jira] [Commented] (HIVE-18730) Use LLAP as execution engine for Druid mini Cluster Tests
[ https://issues.apache.org/jira/browse/HIVE-18730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374772#comment-16374772 ] Ashutosh Chauhan commented on HIVE-18730: - Lets separate new tests with switching over to Llap and use this ticket to just move to llap. > Use LLAP as execution engine for Druid mini Cluster Tests > - > > Key: HIVE-18730 > URL: https://issues.apache.org/jira/browse/HIVE-18730 > Project: Hive > Issue Type: Improvement > Components: Druid integration >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18730.2.patch, HIVE-18730.patch > > > Currently, we are using local MR to run Mini Cluster tests. It will be better > to use LLAP cluster or TEZ. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18776) MaterializationsInvalidationCache loading causes race condition in the metastore
[ https://issues.apache.org/jira/browse/HIVE-18776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-18776: --- Status: Patch Available (was: In Progress) > MaterializationsInvalidationCache loading causes race condition in the > metastore > > > Key: HIVE-18776 > URL: https://issues.apache.org/jira/browse/HIVE-18776 > Project: Hive > Issue Type: Bug > Components: Materialized views, Metastore >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-18776.patch > > > I am seeing occasional failures running metastore tests where operations are > failing saying that there is no open transaction. I have traced this to a > race condition in loading the materialized view invalidation cache. When it > is initialized (either in HiveMetaStoreClient in embedded mode or in > HiveMetaStore in remote mode) it grabs a copy of the current RawStore > instance and then loads the cache in a separate thread. But ObjectStore > keeps state regarding JDO transactions with the underlying RDBMS. So with > the loader thread and the initial thread both doing operations against the > RawStore they sometimes mess up each others transaction stack. In a quick > test I used HMSHandler.newRawStoreForConf() to fix this, which seemed to work. > A reference to the TxnHandler is also called. I suspect this will run into a > similar issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18776) MaterializationsInvalidationCache loading causes race condition in the metastore
[ https://issues.apache.org/jira/browse/HIVE-18776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-18776: --- Attachment: HIVE-18776.patch > MaterializationsInvalidationCache loading causes race condition in the > metastore > > > Key: HIVE-18776 > URL: https://issues.apache.org/jira/browse/HIVE-18776 > Project: Hive > Issue Type: Bug > Components: Materialized views, Metastore >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-18776.patch > > > I am seeing occasional failures running metastore tests where operations are > failing saying that there is no open transaction. I have traced this to a > race condition in loading the materialized view invalidation cache. When it > is initialized (either in HiveMetaStoreClient in embedded mode or in > HiveMetaStore in remote mode) it grabs a copy of the current RawStore > instance and then loads the cache in a separate thread. But ObjectStore > keeps state regarding JDO transactions with the underlying RDBMS. So with > the loader thread and the initial thread both doing operations against the > RawStore they sometimes mess up each others transaction stack. In a quick > test I used HMSHandler.newRawStoreForConf() to fix this, which seemed to work. > A reference to the TxnHandler is also called. I suspect this will run into a > similar issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
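The race described above can be illustrated in miniature. The sketch below is not Hive code: `TxnState` is a hypothetical stand-in for the per-connection transaction bookkeeping that ObjectStore keeps, and the "fix" branch mirrors the shape of the `HMSHandler.newRawStoreForConf()` approach mentioned in the report (each thread gets its own instance instead of sharing one).

```java
// Illustrative sketch only. "TxnState" mimics a nested-transaction counter
// that is NOT thread-safe; sharing one instance between a loader thread and
// the calling thread can underflow it, producing the "no open transaction"
// symptom described above. Per-thread instances avoid the interleaving.
public class TxnStateDemo {
    /** Deliberately non-thread-safe nested-transaction counter. */
    static class TxnState {
        private int openCount = 0;
        void open() { openCount++; }
        void commit() {
            if (openCount <= 0) {
                throw new IllegalStateException("no open transaction");
            }
            openCount--;
        }
    }

    /**
     * Runs open/commit pairs on two threads. With sharedState=true both
     * threads mutate one TxnState and may corrupt its count; with
     * sharedState=false each thread builds its own (the fix's shape).
     */
    static boolean runConcurrently(boolean sharedState) throws InterruptedException {
        TxnState shared = new TxnState();
        final boolean[] failed = { false };
        Runnable work = () -> {
            TxnState s = sharedState ? shared : new TxnState(); // per-thread copy = fix
            try {
                for (int i = 0; i < 100_000; i++) { s.open(); s.commit(); }
            } catch (IllegalStateException e) {
                failed[0] = true; // the race's symptom
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start(); a.join(); b.join();
        return failed[0];
    }

    public static void main(String[] args) throws InterruptedException {
        // Per-thread state is deterministic and never fails; shared state
        // may fail depending on thread timing.
        System.out.println("per-thread failed: " + runConcurrently(false));
    }
}
```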
[jira] [Updated] (HIVE-18765) SparkClientImpl swallows exception messages from the RemoteDriver
[ https://issues.apache.org/jira/browse/HIVE-18765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-18765: Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks Xuefu for the review! > SparkClientImpl swallows exception messages from the RemoteDriver > - > > Key: HIVE-18765 > URL: https://issues.apache.org/jira/browse/HIVE-18765 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18765.1.patch > > > {{SparkClientImpl#handle(ChannelHandlerContext, Error)}} swallows the cause > of the error message: > {code} > LOG.warn("Error reported from remote driver.", msg.cause); > {code} > There should be a '{}' in the message. Without it the {{msg.cause}} info gets > swallowed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18765) SparkClientImpl swallows exception messages from the RemoteDriver
[ https://issues.apache.org/jira/browse/HIVE-18765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374742#comment-16374742 ] Sahil Takiar commented on HIVE-18765: - The only two test HoS test failures are unrelated to this patch: TestSparkCliDriver.testCliDriver[subquery_scalar] - HIVE-18787 TestSparkCliDriver.testCliDriver[ppd_join5] - HIVE-18640 > SparkClientImpl swallows exception messages from the RemoteDriver > - > > Key: HIVE-18765 > URL: https://issues.apache.org/jira/browse/HIVE-18765 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-18765.1.patch > > > {{SparkClientImpl#handle(ChannelHandlerContext, Error)}} swallows the cause > of the error message: > {code} > LOG.warn("Error reported from remote driver.", msg.cause); > {code} > There should be a '{}' in the message. Without it the {{msg.cause}} info gets > swallowed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
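Why the missing `{}` swallows the cause: SLF4J only splices an argument into the message where a `{}` placeholder exists, and in this call `msg.cause` is a plain object rather than a `Throwable`, so without a placeholder it is silently dropped. The sketch below uses a simplified stand-in for SLF4J's formatting (the real logic lives in `org.slf4j.helpers.MessageFormatter`) to show the buggy and fixed call shapes:

```java
// Minimal stand-in for SLF4J's single-argument {} substitution, to show why
// the cause vanishes. This is NOT the real SLF4J implementation -- just a
// rough model of its placeholder behavior for a non-Throwable argument.
public class Slf4jPlaceholderDemo {
    /** Substitutes one argument into the first {} placeholder, if any. */
    static String format(String pattern, Object arg) {
        int i = pattern.indexOf("{}");
        if (i < 0) {
            return pattern; // no placeholder: the argument is swallowed
        }
        return pattern.substring(0, i) + arg + pattern.substring(i + 2);
    }

    public static void main(String[] args) {
        String cause = "java.io.IOException: connection reset";
        // Buggy shape: the cause never reaches the log line.
        System.out.println(format("Error reported from remote driver.", cause));
        // Fixed shape: the message carries a {} for the cause.
        System.out.println(format("Error reported from remote driver: {}", cause));
    }
}
```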
[jira] [Updated] (HIVE-18786) NPE in Hive windowing functions
[ https://issues.apache.org/jira/browse/HIVE-18786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Bieniosek updated HIVE-18786: - Description: When I run a Hive query with windowing functions, if there's enough data I get an NPE. For example something like this query might break: select id, created_date, max(created_date) over (partition by id) latest_created_any from ... The only workaround I've found is to remove the windowing functions entirely. The stacktrace looks suspiciously similar to +HIVE-15278+, but I'm in hive-2.3.2 which appears to have the bugfix applied. Caused by: java.lang.NullPointerException at org.apache.hadoop.hive.ql.exec.persistence.PTFRowContainer.first(PTFRowContainer.java:115) at org.apache.hadoop.hive.ql.exec.PTFPartition.iterator(PTFPartition.java:114) at org.apache.hadoop.hive.ql.udf.ptf.BasePartitionEvaluator.getPartitionAgg(BasePartitionEvaluator.java:200) at org.apache.hadoop.hive.ql.udf.ptf.WindowingTableFunction.evaluateFunctionOnPartition(WindowingTableFunction.java:155) at org.apache.hadoop.hive.ql.udf.ptf.WindowingTableFunction.iterator(WindowingTableFunction.java:538) at org.apache.hadoop.hive.ql.exec.PTFOperator$PTFInvocation.finishPartition(PTFOperator.java:349) at org.apache.hadoop.hive.ql.exec.PTFOperator.process(PTFOperator.java:123) at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897) at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95) at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:356) was: When I run a Hive query with windowing functions, if there's enough data I get an NPE. For example something like this query might break: select id, created_date, max(created_date) over (partition by id) latest_created_any from ... The only workaround I've found is to remove the windowing functions entirely. 
The stacktrace looks suspiciously similar to HADOOP-2931, but I'm in hive-2.3.2 which appears to have the bugfix applied. Caused by: java.lang.NullPointerException at org.apache.hadoop.hive.ql.exec.persistence.PTFRowContainer.first(PTFRowContainer.java:115) at org.apache.hadoop.hive.ql.exec.PTFPartition.iterator(PTFPartition.java:114) at org.apache.hadoop.hive.ql.udf.ptf.BasePartitionEvaluator.getPartitionAgg(BasePartitionEvaluator.java:200) at org.apache.hadoop.hive.ql.udf.ptf.WindowingTableFunction.evaluateFunctionOnPartition(WindowingTableFunction.java:155) at org.apache.hadoop.hive.ql.udf.ptf.WindowingTableFunction.iterator(WindowingTableFunction.java:538) at org.apache.hadoop.hive.ql.exec.PTFOperator$PTFInvocation.finishPartition(PTFOperator.java:349) at org.apache.hadoop.hive.ql.exec.PTFOperator.process(PTFOperator.java:123) at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897) at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95) at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:356) > NPE in Hive windowing functions > --- > > Key: HIVE-18786 > URL: https://issues.apache.org/jira/browse/HIVE-18786 > Project: Hive > Issue Type: Bug >Affects Versions: 2.3.2 >Reporter: Michael Bieniosek >Priority: Major > > When I run a Hive query with windowing functions, if there's enough data I > get an NPE. > For example something like this query might break: > select id, created_date, max(created_date) over (partition by id) > latest_created_any from ... > The only workaround I've found is to remove the windowing functions entirely. > The stacktrace looks suspiciously similar to +HIVE-15278+, but I'm in > hive-2.3.2 which appears to have the bugfix applied. 
> > Caused by: java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.exec.persistence.PTFRowContainer.first(PTFRowContainer.java:115) > at > org.apache.hadoop.hive.ql.exec.PTFPartition.iterator(PTFPartition.java:114) > at > org.apache.hadoop.hive.ql.udf.ptf.BasePartitionEvaluator.getPartitionAgg(BasePartitionEvaluator.java:200) > at > org.apache.hadoop.hive.ql.udf.ptf.WindowingTableFunction.evaluateFunctionOnPartition(WindowingTableFunction.java:155) > at > org.apache.hadoop.hive.ql.udf.ptf.WindowingTableFunction.iterator(WindowingTableFunction.java:538) > at > org.apache.hadoop.hive.ql.exec.PTFOperator$PTFInvocation.finishPartition(PTFOperator.java:349) > at > org.apache.hadoop.hive.ql.exec.PTFOperator.process(PTFOperator.java:123) > at > org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897) > at
[jira] [Assigned] (HIVE-18787) TestSparkCliDriver.testCliDriver[subquery_scalar] is consistently failing
[ https://issues.apache.org/jira/browse/HIVE-18787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar reassigned HIVE-18787: --- > TestSparkCliDriver.testCliDriver[subquery_scalar] is consistently failing > - > > Key: HIVE-18787 > URL: https://issues.apache.org/jira/browse/HIVE-18787 > Project: Hive > Issue Type: Test > Components: Test >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > > Not sure what caused this to start failing, but its been failing for a while. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18625) SessionState Not Checking For Directory Creation Result
[ https://issues.apache.org/jira/browse/HIVE-18625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374724#comment-16374724 ] Andrew Sherman commented on HIVE-18625: --- Thanks, I can reproduce and I'll take a look later this (California) morning. > SessionState Not Checking For Directory Creation Result > --- > > Key: HIVE-18625 > URL: https://issues.apache.org/jira/browse/HIVE-18625 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 3.0.0, 2.4.0, 2.3.2 >Reporter: BELUGA BEHR >Assignee: Andrew Sherman >Priority: Minor > Fix For: 3.0.0 > > Attachments: HIVE-18625.1.patch, HIVE-18625.2.patch > > > https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java#L773 > {code:java} > private static void createPath(HiveConf conf, Path path, String permission, > boolean isLocal, > boolean isCleanUp) throws IOException { > FsPermission fsPermission = new FsPermission(permission); > FileSystem fs; > if (isLocal) { > fs = FileSystem.getLocal(conf); > } else { > fs = path.getFileSystem(conf); > } > if (!fs.exists(path)) { > fs.mkdirs(path, fsPermission); > String dirType = isLocal ? "local" : "HDFS"; > LOG.info("Created " + dirType + " directory: " + path.toString()); > } > if (isCleanUp) { > fs.deleteOnExit(path); > } > } > {code} > The method {{fs.mkdirs(path, fsPermission)}} returns a boolean value > indicating if the directory creation was successful or not. Hive ignores > this return value and therefore could be acting on a directory that doesn't > exist. > Please capture the result, check it, and throw an Exception if it failed -- This message was sent by Atlassian JIRA (v7.6.3#76005)
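A sketch of the fix the report asks for, using `java.io.File` in place of the Hadoop `FileSystem` API so the example is self-contained; the method name echoes `SessionState.createPath` but the signature here is simplified and hypothetical. The key change is capturing the boolean from `mkdirs()` and throwing instead of continuing against a directory that may not exist (the extra `isDirectory()` check tolerates a concurrent creation racing the `mkdirs` call):

```java
import java.io.File;
import java.io.IOException;

// Hedged sketch only: demonstrates checking the mkdirs() result, which the
// original SessionState.createPath ignores. Hadoop's FileSystem.mkdirs has
// the same boolean-return contract as java.io.File.mkdirs used here.
public class CreatePathDemo {
    static void createPath(File path) throws IOException {
        if (!path.exists() && !path.mkdirs() && !path.isDirectory()) {
            // mkdirs() returned false and no directory exists: the silent
            // failure the report describes, surfaced as an exception.
            throw new IOException("Failed to create directory: " + path);
        }
    }

    public static void main(String[] args) throws IOException {
        File tmp = new File(System.getProperty("java.io.tmpdir"), "createpath-demo");
        createPath(tmp); // creates the directory, or no-ops if it already exists
        System.out.println(tmp.isDirectory());
        tmp.delete();
    }
}
```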
[jira] [Updated] (HIVE-18659) add acid version marker to acid files/directories
[ https://issues.apache.org/jira/browse/HIVE-18659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18659: -- Attachment: HIVE-18659.11.patch > add acid version marker to acid files/directories > - > > Key: HIVE-18659 > URL: https://issues.apache.org/jira/browse/HIVE-18659 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Major > Attachments: HIVE-18659.01.patch, HIVE-18659.04.patch, > HIVE-18659.05.patch, HIVE-18659.06.patch, HIVE-18659.07.patch, > HIVE-18659.09.patch, HIVE-18659.09.patch, HIVE-18659.10.patch, > HIVE-18659.11.patch > > > add acid version marker to acid files so that we know which version of acid > wrote the file -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18327) Remove the unnecessary HiveConf dependency for MiniHiveKdc
[ https://issues.apache.org/jira/browse/HIVE-18327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374663#comment-16374663 ] Hive QA commented on HIVE-18327: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 1s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} itests/hive-minikdc: The patch generated 0 new + 87 unchanged - 1 fixed = 87 total (was 88) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 13s{color} | {color:red} The patch generated 49 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 9m 35s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-9332/dev-support/hive-personality.sh | | git revision | master / cbb9233 | | Default Java | 1.8.0_111 | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9332/yetus/patch-asflicense-problems.txt | | modules | C: itests/hive-minikdc U: itests/hive-minikdc | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9332/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Remove the unnecessary HiveConf dependency for MiniHiveKdc > -- > > Key: HIVE-18327 > URL: https://issues.apache.org/jira/browse/HIVE-18327 > Project: Hive > Issue Type: Test > Components: Test >Affects Versions: 3.0.0 >Reporter: Aihua Xu >Assignee: Daniel Voros >Priority: Major > Attachments: HIVE-18327.1.patch, HIVE-18327.2.patch > > > MiniHiveKdc takes HiveConf as input parameter while it's not needed. Remove > the unnecessary HiveConf. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (HIVE-18625) SessionState Not Checking For Directory Creation Result
[ https://issues.apache.org/jira/browse/HIVE-18625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374642#comment-16374642 ] BELUGA BEHR edited comment on HIVE-18625 at 2/23/18 5:06 PM: - Did this change detect it's first bug? :) Maybe the {{*}} in the file path was (Author: belugabehr): Did this change detect it's first bug? :) > SessionState Not Checking For Directory Creation Result > --- > > Key: HIVE-18625 > URL: https://issues.apache.org/jira/browse/HIVE-18625 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 3.0.0, 2.4.0, 2.3.2 >Reporter: BELUGA BEHR >Assignee: Andrew Sherman >Priority: Minor > Fix For: 3.0.0 > > Attachments: HIVE-18625.1.patch, HIVE-18625.2.patch > > > https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java#L773 > {code:java} > private static void createPath(HiveConf conf, Path path, String permission, > boolean isLocal, > boolean isCleanUp) throws IOException { > FsPermission fsPermission = new FsPermission(permission); > FileSystem fs; > if (isLocal) { > fs = FileSystem.getLocal(conf); > } else { > fs = path.getFileSystem(conf); > } > if (!fs.exists(path)) { > fs.mkdirs(path, fsPermission); > String dirType = isLocal ? "local" : "HDFS"; > LOG.info("Created " + dirType + " directory: " + path.toString()); > } > if (isCleanUp) { > fs.deleteOnExit(path); > } > } > {code} > The method {{fs.mkdirs(path, fsPermission)}} returns a boolean value > indicating if the directory creation was successful or not. Hive ignores > this return value and therefore could be acting on a directory that doesn't > exist. > Please capture the result, check it, and throw an Exception if it failed -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18625) SessionState Not Checking For Directory Creation Result
[ https://issues.apache.org/jira/browse/HIVE-18625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374642#comment-16374642 ] BELUGA BEHR commented on HIVE-18625: Did this change detect it's first bug? :) > SessionState Not Checking For Directory Creation Result > --- > > Key: HIVE-18625 > URL: https://issues.apache.org/jira/browse/HIVE-18625 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 3.0.0, 2.4.0, 2.3.2 >Reporter: BELUGA BEHR >Assignee: Andrew Sherman >Priority: Minor > Fix For: 3.0.0 > > Attachments: HIVE-18625.1.patch, HIVE-18625.2.patch > > > https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java#L773 > {code:java} > private static void createPath(HiveConf conf, Path path, String permission, > boolean isLocal, > boolean isCleanUp) throws IOException { > FsPermission fsPermission = new FsPermission(permission); > FileSystem fs; > if (isLocal) { > fs = FileSystem.getLocal(conf); > } else { > fs = path.getFileSystem(conf); > } > if (!fs.exists(path)) { > fs.mkdirs(path, fsPermission); > String dirType = isLocal ? "local" : "HDFS"; > LOG.info("Created " + dirType + " directory: " + path.toString()); > } > if (isCleanUp) { > fs.deleteOnExit(path); > } > } > {code} > The method {{fs.mkdirs(path, fsPermission)}} returns a boolean value > indicating if the directory creation was successful or not. Hive ignores > this return value and therefore could be acting on a directory that doesn't > exist. > Please capture the result, check it, and throw an Exception if it failed -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18784) TestJdbcWithMiniKdcSQLAuthBinary runs with HTTP transport mode instead of binary
[ https://issues.apache.org/jira/browse/HIVE-18784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374630#comment-16374630 ] Hive QA commented on HIVE-18784: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12911694/HIVE-18784.1.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 41 failed/errored test(s), 13408 tests executed *Failed tests:* {noformat} TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=94)
[jira] [Commented] (HIVE-18785) Make JSON SerDe First-Class SerDe
[ https://issues.apache.org/jira/browse/HIVE-18785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374617#comment-16374617 ] Ashutosh Chauhan commented on HIVE-18785: - This is a good idea. Since this doesn't come out of the box many users end up using json serde from outside of Hive. > Make JSON SerDe First-Class SerDe > - > > Key: HIVE-18785 > URL: https://issues.apache.org/jira/browse/HIVE-18785 > Project: Hive > Issue Type: New Feature > Components: Serializers/Deserializers >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Priority: Major > > According to the [Hive SerDe > Docs|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-RowFormats], > there are some extra steps involved in getting the JSON SerDe to work: > {quote} > ROW FORMAT SERDE > 'org.apache.hive.hcatalog.data.JsonSerDe' > STORED AS TEXTFILE > In some distributions, a reference to hive-hcatalog-core.jar is required. > ADD JAR /usr/lib/hive-hcatalog/lib/hive-hcatalog-core.jar; > {quote} > I would like to propose that we move this SerDe into first-class status: > {{STORED AS JSONFILE}} > The user should have to perform no additional steps to use this SerDe. -- This message was sent by Atlassian JIRA (v7.6.3#76005)