[jira] [Updated] (HIVE-19967) SMB Join : Need Optraits for PTFOperator ala GBY Op
[ https://issues.apache.org/jira/browse/HIVE-19967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Deepak Jaiswal updated HIVE-19967:
----------------------------------
    Attachment: HIVE-19967.01-branch-3.patch

> SMB Join : Need Optraits for PTFOperator ala GBY Op
> ---------------------------------------------------
>
>                 Key: HIVE-19967
>                 URL: https://issues.apache.org/jira/browse/HIVE-19967
>             Project: Hive
>          Issue Type: Task
>            Reporter: Deepak Jaiswal
>            Assignee: Deepak Jaiswal
>            Priority: Major
>             Fix For: 4.0.0, 3.2.0
>
>         Attachments: HIVE-19967.01-branch-3.patch, HIVE-19967.1.patch, HIVE-19967.2.patch, HIVE-19967.3.patch, HIVE-19967.4.patch, HIVE-19967.5.patch, HIVE-19967.6.patch, HIVE-19967.7.patch, HIVE-19967.8.patch
>
> The SMB join on one or more PTF Ops should reset the optraits keys just like the GBY Op does.
> Currently there is no implementation of PTFOp optraits.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Updated] (HIVE-19967) SMB Join : Need Optraits for PTFOperator ala GBY Op
[ https://issues.apache.org/jira/browse/HIVE-19967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Deepak Jaiswal updated HIVE-19967:
----------------------------------
    Attachment: (was: HIVE-19967.01-branch-03.patch)
[jira] [Updated] (HIVE-20013) Add an Implicit cast to date type for to_date function
[ https://issues.apache.org/jira/browse/HIVE-20013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ashutosh Chauhan updated HIVE-20013:
------------------------------------
    Status: Patch Available  (was: Open)

> Add an Implicit cast to date type for to_date function
> ------------------------------------------------------
>
>                 Key: HIVE-20013
>                 URL: https://issues.apache.org/jira/browse/HIVE-20013
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Nishant Bangarwa
>            Assignee: Nishant Bangarwa
>            Priority: Major
>         Attachments: HIVE-20013.patch, HIVE-20013.patch
>
> Issue -
> SELECT TO_DATE(date1), TO_DATE(datetime1) FROM druid_table_n1;
> Running this query on Druid returns null values when date1 and datetime1 are of type String.
> {code}
> INFO  : Executing command(queryId=hive_20180627144822_d4395567-e3cb-4b20-b53b-4e5eba2d7dac): EXPLAIN SELECT TO_DATE(datetime0), TO_DATE(date0) FROM calcs
> INFO  : Starting task [Stage-1:EXPLAIN] in serial mode
> INFO  : Completed executing command(queryId=hive_20180627144822_d4395567-e3cb-4b20-b53b-4e5eba2d7dac); Time taken: 0.003 seconds
> INFO  : OK
> ++
> | Explain |
> ++
> | Plan optimized by CBO. |
> | |
> | Stage-0 |
> |   Fetch Operator |
> |     limit:-1 |
> |     Select Operator [SEL_1] |
> |       Output:["_col0","_col1"] |
> |       TableScan [TS_0] |
> |         Output:["vc","vc0"],properties:{"druid.fieldNames":"vc,vc0","druid.fieldTypes":"date,date","druid.query.json":"{\"queryType\":\"scan\",\"dataSource\":\"druid_tableau.calcs\",\"intervals\":[\"1900-01-01T00:00:00.000Z/3000-01-01T00:00:00.000Z\"],\"virtualColumns\":[{\"type\":\"expression\",\"name\":\"vc\",\"expression\":\"timestamp_floor(\\\"datetime0\\\",'P1D','','UTC')\",\"outputType\":\"LONG\"},{\"type\":\"expression\",\"name\":\"vc0\",\"expression\":\"timestamp_floor(\\\"date0\\\",'P1D','','UTC')\",\"outputType\":\"LONG\"}],\"columns\":[\"vc\",\"vc0\"],\"resultFormat\":\"compactedList\"}","druid.query.type":"scan"} |
> | |
> ++
> 10 rows selected (0.606 seconds)
> {code}
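Until the implicit cast described in this issue lands, the behavior in the report suggests an explicit cast as a workaround. This is a hedged sketch written against the `druid_table_n1` schema quoted in the report, not a verified fix from the patch:

```sql
-- Explicitly cast the string columns before applying TO_DATE, so the
-- generated Druid plan floors a real date/timestamp value rather than
-- a string (which the report shows coming back as NULL).
SELECT TO_DATE(CAST(date1 AS date)),
       TO_DATE(CAST(datetime1 AS timestamp))
FROM druid_table_n1;
```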
[jira] [Updated] (HIVE-20013) Add an Implicit cast to date type for to_date function
[ https://issues.apache.org/jira/browse/HIVE-20013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ashutosh Chauhan updated HIVE-20013:
------------------------------------
    Attachment: HIVE-20013.patch
[jira] [Updated] (HIVE-20013) Add an Implicit cast to date type for to_date function
[ https://issues.apache.org/jira/browse/HIVE-20013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ashutosh Chauhan updated HIVE-20013:
------------------------------------
    Status: Open  (was: Patch Available)
[jira] [Commented] (HIVE-18882) Do Not Hide Exception in Hive Metastore Client Connection
[ https://issues.apache.org/jira/browse/HIVE-18882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528566#comment-16528566 ]

Hive QA commented on HIVE-18882:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12929527/HIVE-18882.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14632 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12267/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12267/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12267/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12929527 - PreCommit-HIVE-Build

> Do Not Hide Exception in Hive Metastore Client Connection
> ---------------------------------------------------------
>
>                 Key: HIVE-18882
>                 URL: https://issues.apache.org/jira/browse/HIVE-18882
>             Project: Hive
>          Issue Type: Improvement
>          Components: Standalone Metastore
>    Affects Versions: 3.0.0
>            Reporter: BELUGA BEHR
>            Assignee: Manoj Narayanan
>            Priority: Minor
>              Labels: noob
>         Attachments: HIVE-18882.1.patch, HIVE-18882.2.patch, HIVE-18882.patch
>
> [https://github.com/apache/hive/blob/4047befe48c8f762c58d8854e058385c1df151c6/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java#L526-L531]
>
> {code:java}
> if (LOG.isDebugEnabled()) {
>   LOG.warn("Failed to connect to the MetaStore Server...", e);
> } else {
>   // Don't print full exception trace if DEBUG is not on.
>   LOG.warn("Failed to connect to the MetaStore Server...");
> }
> {code}
> I do not understand the logic here. I always want to see the reason for the failure. Otherwise, I do not know why it is failing unless I restart the server with debug logging enabled, and by that point the error may have cleared. Please just include the Exception in the WARN output rather than adding confusing debug-only branching. It is never expected behavior that enabling debug changes what a _warn_ level log message contains.
> Also, please remove the ellipses; they add no value.
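The change the reporter asks for can be sketched as follows. This is a minimal, hypothetical illustration using `java.util.logging` so it is self-contained; Hive's actual client uses an SLF4J logger, and `tryConnect()` here is an invented stand-in for the metastore client's connection loop, not real Hive code:

```java
import java.net.ConnectException;
import java.util.logging.Level;
import java.util.logging.Logger;

public class MetastoreConnectLogging {
    private static final Logger LOG =
            Logger.getLogger(MetastoreConnectLogging.class.getName());

    // Hypothetical connect attempt that always fails, to exercise the
    // logging path the issue is about.
    static String tryConnect() {
        try {
            throw new ConnectException("Connection refused");
        } catch (Exception e) {
            // One unconditional call with the cause attached: the stack
            // trace reaches the WARN-level log regardless of whether
            // debug logging is enabled, so no isDebugEnabled() branch.
            LOG.log(Level.WARNING, "Failed to connect to the MetaStore Server", e);
            return e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(tryConnect());
    }
}
```

The point of the design is that log frameworks already accept a `Throwable` as the final argument at any level, so gating the cause behind a debug check only hides information without saving any work.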
[jira] [Commented] (HIVE-18882) Do Not Hide Exception in Hive Metastore Client Connection
[ https://issues.apache.org/jira/browse/HIVE-18882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528552#comment-16528552 ]

Hive QA commented on HIVE-18882:
--------------------------------

(/) +1 overall

|| Vote || Subsystem || Runtime || Comment ||
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
|| master Compile Tests ||
| +1 | mvninstall | 7m 54s | master passed |
| +1 | compile | 0m 41s | master passed |
| +1 | checkstyle | 0m 18s | master passed |
| 0 | findbugs | 3m 6s | standalone-metastore in master has 228 extant Findbugs warnings. |
| +1 | javadoc | 0m 55s | master passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 0m 49s | the patch passed |
| +1 | compile | 0m 42s | the patch passed |
| +1 | javac | 0m 42s | the patch passed |
| +1 | checkstyle | 0m 19s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 3m 17s | the patch passed |
| +1 | javadoc | 0m 53s | the patch passed |
|| Other Tests ||
| +1 | asflicense | 0m 13s | The patch does not generate ASF License warnings. |
| | total | 19m 30s | |

|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12267/dev-support/hive-personality.sh |
| git revision | master / 761597f |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: standalone-metastore U: standalone-metastore |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12267/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |

This message was automatically generated.
[jira] [Updated] (HIVE-20002) Shipping jdbd-storage-handler dependency jars in LLAP
[ https://issues.apache.org/jira/browse/HIVE-20002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Daniel Dai updated HIVE-20002:
------------------------------
       Resolution: Fixed
     Hadoop Flags: Reviewed
    Fix Version/s: 4.0.0
                   3.1.0
           Status: Resolved  (was: Patch Available)

Patch pushed to master/branch-3. Thanks Sergey for the review!

> Shipping jdbd-storage-handler dependency jars in LLAP
> -----------------------------------------------------
>
>                 Key: HIVE-20002
>                 URL: https://issues.apache.org/jira/browse/HIVE-20002
>             Project: Hive
>          Issue Type: Bug
>          Components: llap
>            Reporter: Daniel Dai
>            Assignee: Daniel Dai
>            Priority: Major
>             Fix For: 3.1.0, 4.0.0
>
>         Attachments: HIVE-20002.1.patch, HIVE-20002.2.patch
>
> Ship the following jars to LLAP to make the jdbc storage-handler work: commons-dbcp, commons-pool, and whichever DB-specific JDBC jar exists in the classpath.
[jira] [Commented] (HIVE-19850) Dynamic partition pruning in Tez is leading to 'No work found for tablescan' error
[ https://issues.apache.org/jira/browse/HIVE-19850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528544#comment-16528544 ]

Hive QA commented on HIVE-19850:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12929508/HIVE-19850.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14632 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12265/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12265/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12265/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12929508 - PreCommit-HIVE-Build

> Dynamic partition pruning in Tez is leading to 'No work found for tablescan' error
> ----------------------------------------------------------------------------------
>
>                 Key: HIVE-19850
>                 URL: https://issues.apache.org/jira/browse/HIVE-19850
>             Project: Hive
>          Issue Type: Bug
>          Components: Tez
>    Affects Versions: 3.0.0
>            Reporter: Ganesha Shreedhara
>            Assignee: Ganesha Shreedhara
>            Priority: Major
>         Attachments: HIVE-19850.patch
>
> When multiple views are used along with union all, the following error results when dynamic partition pruning is enabled in Tez.
>
> {code:java}
> Exception in thread "main" java.lang.AssertionError: No work found for tablescan TS[8]
> at org.apache.hadoop.hive.ql.parse.GenTezUtils.processAppMasterEvent(GenTezUtils.java:408)
> at org.apache.hadoop.hive.ql.parse.TezCompiler.generateTaskTree(TezCompiler.java:383)
> at org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:205)
> at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10371)
> at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:208)
> at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:239)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:479)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:347)
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1203)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1257)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1140)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1130)
> at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:258)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:204)
> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:433)
> at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:894)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:825)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:726)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:223)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136){code}
>
> *Steps to reproduce:*
> set hive.execution.engine=tez;
> set hive.tez.dynamic.partition.pruning=true;
>
> CREATE TABLE t1(key string, value string, c_int int, c_float float, c_boolean boolean) partitioned by (dt string);
> CREATE TABLE t2(key string, value string, c_int int, c_float float, c_boolean boolean) partitioned by (dt string);
> CREATE TABLE t3(key string, value string, c_int int, c_float float, c_boolean boolean) partitioned by (dt string);
>
> insert into table t1 partition(dt='2018') values ('k1','v1',1,1.0,true);
> insert into table t2 partition(dt='2018') values ('k2','v2',2,2.0,true);
> insert into table t3 partition(dt='2018') values ('k3','v3',3,3.0,true);
>
> CREATE VIEW `view1` AS select `t1`.`key`,`t1`.`value`,`t1`.`c_int`,`t1`.`c_float`,`t1`.`c_boolean`,`t1`.`dt` from `t1` union all select `t2`.`key`,`t2`.`value`,`t2`.`c_int`,`t2`.`c_float`,`t2`.`c_boolean`,`t2`.`dt` from `t2`;
> CREATE VIEW `view2` AS select `t2`.`key`,`t2`.`value`,`t2`.`c_int`,`t2`.`c_float`,`t2`.`c_boolean`,`t2`.`dt` from `t2` union all select `t3`.`key`,`t3`.`value`,`t3`.`c_int`,`t3`.`c_float`,`t3`.`c_boolean`,`t3`.`dt` from `t3`;
> create table t4 as select key,value,c_
[jira] [Commented] (HIVE-19850) Dynamic partition pruning in Tez is leading to 'No work found for tablescan' error
[ https://issues.apache.org/jira/browse/HIVE-19850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528537#comment-16528537 ]

Hive QA commented on HIVE-19850:
--------------------------------

(/) +1 overall

|| Vote || Subsystem || Runtime || Comment ||
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
|| master Compile Tests ||
| +1 | mvninstall | 7m 51s | master passed |
| +1 | compile | 1m 3s | master passed |
| +1 | checkstyle | 0m 37s | master passed |
| 0 | findbugs | 3m 58s | ql in master has 2287 extant Findbugs warnings. |
| +1 | javadoc | 0m 57s | master passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 1m 25s | the patch passed |
| +1 | compile | 1m 4s | the patch passed |
| +1 | javac | 1m 4s | the patch passed |
| +1 | checkstyle | 0m 40s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 4m 22s | the patch passed |
| +1 | javadoc | 0m 58s | the patch passed |
|| Other Tests ||
| +1 | asflicense | 0m 12s | The patch does not generate ASF License warnings. |
| | total | 23m 35s | |

|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12265/dev-support/hive-personality.sh |
| git revision | master / 761597f |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12265/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HIVE-19532) fix tests for master-txnstats branch
[ https://issues.apache.org/jira/browse/HIVE-19532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528532#comment-16528532 ]

Hive QA commented on HIVE-19532:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12929501/HIVE-19532.13.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12263/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12263/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12263/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Tests exited with: Exception: Patch URL https://issues.apache.org/jira/secure/attachment/12929501/HIVE-19532.13.patch was found in seen patch url's cache and a test was probably run already on it. Aborting...
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12929501 - PreCommit-HIVE-Build

> fix tests for master-txnstats branch
> ------------------------------------
>
>                 Key: HIVE-19532
>                 URL: https://issues.apache.org/jira/browse/HIVE-19532
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Transactions
>    Affects Versions: 3.0.0
>            Reporter: Steve Yeom
>            Assignee: Sergey Shelukhin
>            Priority: Major
>             Fix For: 3.2.0
>
>         Attachments: HIVE-19532.01.patch, HIVE-19532.01.prepatch, HIVE-19532.02.patch, HIVE-19532.02.prepatch, HIVE-19532.03.patch, HIVE-19532.04.patch, HIVE-19532.05.patch, HIVE-19532.06.patch, HIVE-19532.07.patch, HIVE-19532.08.patch, HIVE-19532.09.patch, HIVE-19532.10.patch, HIVE-19532.11.patch, HIVE-19532.12.patch, HIVE-19532.13.patch
[jira] [Commented] (HIVE-20021) LLAP: Fall back to Synthetic File-ids when getting a HdfsConstants.GRANDFATHER_INODE_ID
[ https://issues.apache.org/jira/browse/HIVE-20021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528531#comment-16528531 ]

Hive QA commented on HIVE-20021:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12929502/HIVE-20021.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14632 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12262/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12262/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12262/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12929502 - PreCommit-HIVE-Build

> LLAP: Fall back to Synthetic File-ids when getting a HdfsConstants.GRANDFATHER_INODE_ID
> ---------------------------------------------------------------------------------------
>
>                 Key: HIVE-20021
>                 URL: https://issues.apache.org/jira/browse/HIVE-20021
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Gopal V
>            Assignee: Gopal V
>            Priority: Major
>         Attachments: HIVE-20021.1.patch
>
> HDFS clients can talk to multiple server implementations, not all of which support inode ids for file locations.
> If the client returns a 0 InodeId, fall back to the synthetic ones.
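A fallback of the shape this issue describes can be sketched as below. The method name, the 0-inode sentinel check, and the particular hash are assumptions illustrating the idea, not Hive's actual `SyntheticFileId` implementation:

```java
public class SyntheticFileIdSketch {
    // When the filesystem reports a usable inode id, use it directly;
    // when it reports the 0 sentinel, derive a deterministic synthetic
    // id from the file's identity (path, length, modification time) so
    // the LLAP cache can still key entries consistently.
    static long fileId(long inodeId, String path, long length, long modTime) {
        if (inodeId != 0) {
            return inodeId;            // server supplied a real inode id
        }
        long h = path.hashCode();
        h = h * 31 + length;
        h = h * 31 + modTime;
        return h;                      // synthetic, but stable per file
    }

    public static void main(String[] args) {
        // Same file identity must always map to the same synthetic id.
        System.out.println(fileId(0, "/warehouse/t1/part-00000", 4096, 1530200000000L)
                == fileId(0, "/warehouse/t1/part-00000", 4096, 1530200000000L));
    }
}
```

The key property is determinism: a cache keyed by synthetic ids only works if re-reading the same unchanged file produces the same id, while any change to length or modification time yields a new one.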
[jira] [Commented] (HIVE-19951) Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
[ https://issues.apache.org/jira/browse/HIVE-19951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528519#comment-16528519 ]

Matt McCline commented on HIVE-19951:
-------------------------------------

Ok, for now just copy #8 to #9 and resubmit.

> Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
> -------------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-19951
>                 URL: https://issues.apache.org/jira/browse/HIVE-19951
>             Project: Hive
>          Issue Type: Bug
>          Components: Hive
>            Reporter: Matt McCline
>            Assignee: Matt McCline
>            Priority: Critical
>         Attachments: HIVE-19951.01.patch, HIVE-19951.02.patch, HIVE-19951.03.patch, HIVE-19951.04.patch, HIVE-19951.05.patch, HIVE-19951.06.patch, HIVE-19951.07.patch, HIVE-19951.08.patch, HIVE-19951.09.patch
>
> Currently, reading encoded ORC data does not support data type conversion, so encoded reading and cache populating need to be disabled.
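The gating this issue calls for can be sketched as a simple predicate over the file and reader schemas. The class and method names below are hypothetical stand-ins, not the `org.apache.orc.impl.SchemaEvolution` API the patch actually uses:

```java
import java.util.List;

public class EncodedIoGate {
    // Minimal stand-in for schema-evolution state: the column types the
    // reader expects versus the types actually stored in the ORC file.
    static boolean hasTypeConversion(List<String> fileTypes, List<String> readerTypes) {
        if (fileTypes.size() != readerTypes.size()) {
            return true;               // added/dropped columns imply evolution
        }
        for (int i = 0; i < fileTypes.size(); i++) {
            if (!fileTypes.get(i).equals(readerTypes.get(i))) {
                return true;           // e.g. int in the file, bigint expected
            }
        }
        return false;
    }

    // Encoded LLAP I/O (and populating the cache with encoded data) is
    // only safe when no conversion is needed; otherwise fall back to the
    // ordinary decoding reader.
    static boolean useEncodedReader(List<String> fileTypes, List<String> readerTypes) {
        return !hasTypeConversion(fileTypes, readerTypes);
    }

    public static void main(String[] args) {
        System.out.println(useEncodedReader(
                List.of("int", "string"), List.of("bigint", "string")));
    }
}
```

The design point is that the decision must be made per file, since different files in an evolved table may or may not match the reader schema.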
[jira] [Updated] (HIVE-19951) Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
[ https://issues.apache.org/jira/browse/HIVE-19951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt McCline updated HIVE-19951:
--------------------------------
    Status: Patch Available  (was: In Progress)
[jira] [Updated] (HIVE-19951) Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
[ https://issues.apache.org/jira/browse/HIVE-19951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt McCline updated HIVE-19951:
--------------------------------
    Attachment: HIVE-19951.09.patch
[jira] [Updated] (HIVE-19951) Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
[ https://issues.apache.org/jira/browse/HIVE-19951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-19951: Status: In Progress (was: Patch Available) > Vectorization: Need to disable encoded LLAP I/O for ORC when there is data > type conversion (Schema Evolution) > -- > > Key: HIVE-19951 > URL: https://issues.apache.org/jira/browse/HIVE-19951 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-19951.01.patch, HIVE-19951.02.patch, > HIVE-19951.03.patch, HIVE-19951.04.patch, HIVE-19951.05.patch, > HIVE-19951.06.patch, HIVE-19951.07.patch, HIVE-19951.08.patch, > HIVE-19951.09.patch > > > Currently, reading encoded ORC data does not support data type conversion. > So, encoded reading and cache populating needs to be disabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19951) Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
[ https://issues.apache.org/jira/browse/HIVE-19951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528516#comment-16528516 ] Matt McCline commented on HIVE-19951: - [~prasanth_j] On Apache master, I still get: [ERROR] /Users/mmccline/CtasVarCharBug/llap-server/src/java/org/apache/hadoop/hive/llap/io/api/impl/LlapRecordReader.java:[295,48] cannot find symbol [ERROR] symbol: method isOnlyImplicitConversion() [ERROR] location: variable evolution of type org.apache.orc.impl.SchemaEvolution > Vectorization: Need to disable encoded LLAP I/O for ORC when there is data > type conversion (Schema Evolution) > -- > > Key: HIVE-19951 > URL: https://issues.apache.org/jira/browse/HIVE-19951 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-19951.01.patch, HIVE-19951.02.patch, > HIVE-19951.03.patch, HIVE-19951.04.patch, HIVE-19951.05.patch, > HIVE-19951.06.patch, HIVE-19951.07.patch, HIVE-19951.08.patch > > > Currently, reading encoded ORC data does not support data type conversion. > So, encoded reading and cache populating needs to be disabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-17593) DataWritableWriter strip spaces for CHAR type before writing, but predicate generator doesn't do same thing.
[ https://issues.apache.org/jira/browse/HIVE-17593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-17593: -- Labels: pull-request-available (was: ) > DataWritableWriter strip spaces for CHAR type before writing, but predicate > generator doesn't do same thing. > > > Key: HIVE-17593 > URL: https://issues.apache.org/jira/browse/HIVE-17593 > Project: Hive > Issue Type: Bug >Affects Versions: 2.3.0, 3.0.0 >Reporter: Junjie Chen >Assignee: Junjie Chen >Priority: Major > Labels: pull-request-available > Fix For: 3.1.0 > > Attachments: HIVE-17593.patch > > > DataWritableWriter strips spaces for the CHAR type before writing, but when > generating the predicate it does NOT do the same stripping, which could cause > missing data. > In the current version it does not actually cause missing data, because > predicates are not properly pushed down to Parquet due to HIVE-17261. > Please see ConvertAstToSearchArg.java: getTypes treats CHAR and STRING the > same, which builds a predicate with trailing spaces. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17593) DataWritableWriter strip spaces for CHAR type before writing, but predicate generator doesn't do same thing.
[ https://issues.apache.org/jira/browse/HIVE-17593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528513#comment-16528513 ] ASF GitHub Bot commented on HIVE-17593: --- GitHub user cjjnjust opened a pull request: https://github.com/apache/hive/pull/383 HIVE-17593: DataWritableWriter strip spaces for CHAR type which cause… Parquet's DataWritableWriter strips trailing spaces for the HiveChar type, which causes predicate pushdown to fail because ConvertAstToSearchArg constructs the predicate with trailing spaces. According to the HiveChar definition, the value should contain padding, and ParquetOutputFormat can handle trailing spaces through encoding. You can merge this pull request into a Git repository by running: $ git pull https://github.com/cjjnjust/hive HIVE-17593 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/hive/pull/383.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #383 commit 03230c732d657706c6a95f90e16ed5c81d411af7 Author: Chen, Junjie Date: 2018-06-29T23:32:52Z HIVE-17593: DataWritableWriter strip spaces for CHAR type which cause PPD not work > DataWritableWriter strip spaces for CHAR type before writing, but predicate > generator doesn't do same thing. > > > Key: HIVE-17593 > URL: https://issues.apache.org/jira/browse/HIVE-17593 > Project: Hive > Issue Type: Bug >Affects Versions: 2.3.0, 3.0.0 >Reporter: Junjie Chen >Assignee: Junjie Chen >Priority: Major > Labels: pull-request-available > Fix For: 3.1.0 > > Attachments: HIVE-17593.patch > > > DataWritableWriter strips spaces for the CHAR type before writing, but when > generating the predicate it does NOT do the same stripping, which could cause > missing data. > In the current version it does not actually cause missing data, because > predicates are not properly pushed down to Parquet due to HIVE-17261. > Please see ConvertAstToSearchArg.java: getTypes treats CHAR and STRING the > same, which builds a predicate with trailing spaces. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
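The HIVE-17593 mismatch above (the writer strips trailing spaces from CHAR values, while the predicate generator keeps the padded literal) can be reproduced in a few lines. The helper below only mimics the stripping; it is not Hive's actual DataWritableWriter or ConvertAstToSearchArg code:

```java
// Standalone illustration: a predicate built from the padded CHAR literal
// can never match a value that was stripped before being written.
public class CharPadding {
    // What the writer effectively does to a CHAR(n) value before writing.
    static String stripTrailingSpaces(String padded) {
        int end = padded.length();
        while (end > 0 && padded.charAt(end - 1) == ' ') {
            end--;
        }
        return padded.substring(0, end);
    }

    public static void main(String[] args) {
        String stored = stripTrailingSpaces("abc  ");  // written as "abc"
        String predicateLiteral = "abc  ";             // padded CHAR(5) literal
        // Comparing the padded literal against stripped data filters the row out:
        System.out.println(stored.equals(predicateLiteral));                      // false
        // Stripping the literal the same way restores the match:
        System.out.println(stored.equals(stripTrailingSpaces(predicateLiteral))); // true
    }
}
```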
[jira] [Commented] (HIVE-20021) LLAP: Fall back to Synthetic File-ids when getting a HdfsConstants.GRANDFATHER_INODE_ID
[ https://issues.apache.org/jira/browse/HIVE-20021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528510#comment-16528510 ] Hive QA commented on HIVE-20021: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 48s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 21s{color} | {color:blue} shims/0.23 in master has 7 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 10m 30s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12262/dev-support/hive-personality.sh | | git revision | master / 761597f | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: shims/0.23 U: shims/0.23 | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12262/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > LLAP: Fall back to Synthetic File-ids when getting a > HdfsConstants.GRANDFATHER_INODE_ID > --- > > Key: HIVE-20021 > URL: https://issues.apache.org/jira/browse/HIVE-20021 > Project: Hive > Issue Type: Bug >Reporter: Gopal V >Assignee: Gopal V >Priority: Major > Attachments: HIVE-20021.1.patch > > > HDFS client implementations have multiple server implementations, which do > not all support the inodes for file locations. > If the client returns a 0 InodeId, fall back to the synthetic ones. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
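The HIVE-20021 fallback quoted above (use a synthetic file id when the client reports a 0 inode id) can be sketched as below. This is an illustrative stand-in, not LLAP's actual id scheme, which combines the fields differently:

```java
import java.util.Objects;

// Minimal sketch of the fallback: when the filesystem reports inode id 0
// (HdfsConstants.GRANDFATHER_INODE_ID), derive a synthetic id from the
// file's path, modification time, and length instead.
public class FileIdFallback {
    static long fileId(long inodeId, String path, long mtime, long len) {
        if (inodeId != 0) {
            return inodeId;  // real HDFS inode id, use it as-is
        }
        // Synthetic id: stable while the file is unchanged, changes on rewrite.
        return Objects.hash(path, mtime, len);
    }

    public static void main(String[] args) {
        System.out.println(fileId(42L, "/warehouse/t/part-0", 1L, 2L)); // 42
        System.out.println(fileId(0L, "/warehouse/t/part-0", 1L, 2L));  // synthetic
    }
}
```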
[jira] [Commented] (HIVE-19951) Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
[ https://issues.apache.org/jira/browse/HIVE-19951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528509#comment-16528509 ] Matt McCline commented on HIVE-19951: - True -- patch #8 just cratered. > Vectorization: Need to disable encoded LLAP I/O for ORC when there is data > type conversion (Schema Evolution) > -- > > Key: HIVE-19951 > URL: https://issues.apache.org/jira/browse/HIVE-19951 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-19951.01.patch, HIVE-19951.02.patch, > HIVE-19951.03.patch, HIVE-19951.04.patch, HIVE-19951.05.patch, > HIVE-19951.06.patch, HIVE-19951.07.patch, HIVE-19951.08.patch > > > Currently, reading encoded ORC data does not support data type conversion. > So, encoded reading and cache populating needs to be disabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20002) Shipping jdbd-storage-handler dependency jars in LLAP
[ https://issues.apache.org/jira/browse/HIVE-20002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528506#comment-16528506 ] Hive QA commented on HIVE-20002: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12929494/HIVE-20002.2.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14632 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12261/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12261/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12261/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12929494 - PreCommit-HIVE-Build > Shipping jdbd-storage-handler dependency jars in LLAP > - > > Key: HIVE-20002 > URL: https://issues.apache.org/jira/browse/HIVE-20002 > Project: Hive > Issue Type: Bug > Components: llap >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-20002.1.patch, HIVE-20002.2.patch > > > Shipping the following jars to LLAP to make jdbc storage-handler work: > commons-dbcp, commons-pool, db specific jdbc jar whichever exists in > classpath. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
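The "whichever exists in classpath" part of HIVE-20002 implies probing for each candidate dependency before shipping it. One common way to do that, sketched here with illustrative names (this is not the patch's actual code), is to try loading a class known to live in each jar:

```java
import java.util.ArrayList;
import java.util.List;

// Probe the classpath for candidate dependencies by class name; only the
// ones actually present would be shipped to LLAP.
public class ClasspathProbe {
    static List<String> presentClasses(String... candidates) {
        List<String> found = new ArrayList<>();
        for (String cls : candidates) {
            try {
                // initialize=false: just check presence, do not run static init
                Class.forName(cls, false, ClasspathProbe.class.getClassLoader());
                found.add(cls);
            } catch (ClassNotFoundException ignored) {
                // dependency not on the classpath; skip shipping it
            }
        }
        return found;
    }

    public static void main(String[] args) {
        // java.sql.Connection is always present; the fake driver is not.
        System.out.println(presentClasses("java.sql.Connection", "org.example.FakeDriver"));
    }
}
```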
[jira] [Comment Edited] (HIVE-20013) Add an Implicit cast to date type for to_date function
[ https://issues.apache.org/jira/browse/HIVE-20013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528502#comment-16528502 ] Nishant Bangarwa edited comment on HIVE-20013 at 6/30/18 2:24 AM: -- [~ashutoshc] failures are unrelated to this patch, please merge. was (Author: nishantbangarwa): [~ashutoshc] please merge. > Add an Implicit cast to date type for to_date function > -- > > Key: HIVE-20013 > URL: https://issues.apache.org/jira/browse/HIVE-20013 > Project: Hive > Issue Type: Bug >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa >Priority: Major > Attachments: HIVE-20013.patch > > > Issue - > SELECT TO_DATE(date1), TO_DATE(datetime1) FROM druid_table_n1; > Running this query on Druid returns null values when date1 and datetime1 are > of type String. > {code} > INFO : Executing > command(queryId=hive_20180627144822_d4395567-e3cb-4b20-b53b-4e5eba2d7dac): > EXPLAIN SELECT TO_DATE(datetime0) ,TO_DATE(date0) FROM calcs > INFO : Starting task [Stage-1:EXPLAIN] in serial mode > INFO : Completed executing > command(queryId=hive_20180627144822_d4395567-e3cb-4b20-b53b-4e5eba2d7dac); > Time taken: 0.003 seconds > INFO : OK > ++ > | Explain | > ++ > | Plan optimized by CBO. 
| > || > | Stage-0| > | Fetch Operator | > | limit:-1 | > | Select Operator [SEL_1]| > | Output:["_col0","_col1"] | > | TableScan [TS_0] | > | > Output:["vc","vc0"],properties:{"druid.fieldNames":"vc,vc0","druid.fieldTypes":"date,date","druid.query.json":"{\"queryType\":\"scan\",\"dataSource\":\"druid_tableau.calcs\",\"intervals\":[\"1900-01-01T00:00:00.000Z/3000-01-01T00:00:00.000Z\"],\"virtualColumns\":[{\"type\":\"expression\",\"name\":\"vc\",\"expression\":\"timestamp_floor(\\\"datetime0\\\",'P1D','','UTC')\",\"outputType\":\"LONG\"},{\"type\":\"expression\",\"name\":\"vc0\",\"expression\":\"timestamp_floor(\\\"date0\\\",'P1D','','UTC')\",\"outputType\":\"LONG\"}],\"columns\":[\"vc\",\"vc0\"],\"resultFormat\":\"compactedList\"}","druid.query.type":"scan"} > | > || > ++ > 10 rows selected (0.606 seconds) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20013) Add an Implicit cast to date type for to_date function
[ https://issues.apache.org/jira/browse/HIVE-20013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528502#comment-16528502 ] Nishant Bangarwa commented on HIVE-20013: - [~ashutoshc] please merge. > Add an Implicit cast to date type for to_date function > -- > > Key: HIVE-20013 > URL: https://issues.apache.org/jira/browse/HIVE-20013 > Project: Hive > Issue Type: Bug >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa >Priority: Major > Attachments: HIVE-20013.patch > > > Issue - > SELECT TO_DATE(date1), TO_DATE(datetime1) FROM druid_table_n1; > Running this query on Druid returns null values when date1 and datetime1 are > of type String. > {code} > INFO : Executing > command(queryId=hive_20180627144822_d4395567-e3cb-4b20-b53b-4e5eba2d7dac): > EXPLAIN SELECT TO_DATE(datetime0) ,TO_DATE(date0) FROM calcs > INFO : Starting task [Stage-1:EXPLAIN] in serial mode > INFO : Completed executing > command(queryId=hive_20180627144822_d4395567-e3cb-4b20-b53b-4e5eba2d7dac); > Time taken: 0.003 seconds > INFO : OK > ++ > | Explain | > ++ > | Plan optimized by CBO. | > || > | Stage-0| > | Fetch Operator | > | limit:-1 | > | Select Operator [SEL_1]| > | Output:["_col0","_col1"] | > | TableScan [TS_0] | > | > Output:["vc","vc0"],properties:{"druid.fieldNames":"vc,vc0","druid.fieldTypes":"date,date","druid.query.json":"{\"queryType\":\"scan\",\"dataSource\":\"druid_tableau.calcs\",\"intervals\":[\"1900-01-01T00:00:00.000Z/3000-01-01T00:00:00.000Z\"],\"virtualColumns\":[{\"type\":\"expression\",\"name\":\"vc\",\"expression\":\"timestamp_floor(\\\"datetime0\\\",'P1D','','UTC')\",\"outputType\":\"LONG\"},{\"type\":\"expression\",\"name\":\"vc0\",\"expression\":\"timestamp_floor(\\\"date0\\\",'P1D','','UTC')\",\"outputType\":\"LONG\"}],\"columns\":[\"vc\",\"vc0\"],\"resultFormat\":\"compactedList\"}","druid.query.type":"scan"} > | > || > ++ > 10 rows selected (0.606 seconds) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
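The HIVE-20013 fix adds an implicit cast so that TO_DATE on a string column behaves like TO_DATE on a timestamp. A plain-Java stand-in for that behavior (illustrative only, not Hive's GenericUDFDate) looks like this:

```java
import java.time.LocalDate;
import java.time.LocalDateTime;

// Sketch of TO_DATE with an implicit cast: string inputs are first
// interpreted as a date or timestamp, then truncated to the day.
public class ToDateCast {
    static LocalDate toDate(String value) {
        // Implicit cast: accept both "yyyy-MM-dd" and ISO "yyyy-MM-ddTHH:mm:ss".
        if (value.length() > 10) {
            return LocalDateTime.parse(value).toLocalDate();
        }
        return LocalDate.parse(value);
    }

    public static void main(String[] args) {
        System.out.println(toDate("2018-06-27"));          // 2018-06-27
        System.out.println(toDate("2018-06-27T14:48:22")); // 2018-06-27
    }
}
```

Without the cast, the floor expression is applied to a raw string, which is what produced the null results in the report above.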
[jira] [Commented] (HIVE-20030) Fix Java compile errors that show up in IntelliJ from ConvertJoinMapJoin.java and AnnotateRunTimeStatsOptimizer.java
[ https://issues.apache.org/jira/browse/HIVE-20030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528483#comment-16528483 ] Sergey Shelukhin commented on HIVE-20030: - Could this be Java version related? IIRC I've seen errors with generics inference when using Java 7 that I don't get with Java 8, although the errors were different. > Fix Java compile errors that show up in IntelliJ from ConvertJoinMapJoin.java > and AnnotateRunTimeStatsOptimizer.java > > > Key: HIVE-20030 > URL: https://issues.apache.org/jira/browse/HIVE-20030 > Project: Hive > Issue Type: Task >Reporter: Andrew Sherman >Assignee: Andrew Sherman >Priority: Major > Attachments: HIVE-20030.1.patch > > > For some reason the Java compiler in IntelliJ is stricter than the Oracle > jdk compiler. Maybe this is something that can be configured away, but as it > is simple I propose to make the code more type correct.
> {code}
> /Users/asherman/git/asf/hive2/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConvertJoinMapJoin.java
> Error:(613, 24) java: no suitable method found for findOperatorsUpstream(java.util.List<org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>,java.lang.Class)
>   method org.apache.hadoop.hive.ql.exec.OperatorUtils.findOperatorsUpstream(org.apache.hadoop.hive.ql.exec.Operator<?>,java.lang.Class<T>) is not applicable
>     (cannot infer type-variable(s) T
>     (argument mismatch; java.util.List<org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>> cannot be converted to org.apache.hadoop.hive.ql.exec.Operator<?>))
>   method org.apache.hadoop.hive.ql.exec.OperatorUtils.findOperatorsUpstream(java.util.Collection<org.apache.hadoop.hive.ql.exec.Operator<?>>,java.lang.Class<T>) is not applicable
>     (cannot infer type-variable(s) T
>     (argument mismatch; java.util.List<org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>> cannot be converted to java.util.Collection<org.apache.hadoop.hive.ql.exec.Operator<?>>))
>   method org.apache.hadoop.hive.ql.exec.OperatorUtils.findOperatorsUpstream(org.apache.hadoop.hive.ql.exec.Operator<?>,java.lang.Class<T>,java.util.Set<T>) is not applicable
>     (cannot infer type-variable(s) T
>     (actual and formal argument lists differ in length))
> {code}
> and
> {code}
> /Users/asherman/git/asf/hive2/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/AnnotateRunTimeStatsOptimizer.java
> Error:(76, 12) java: no suitable method found for addAll(java.util.List<org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
>   method java.util.Collection.addAll(java.util.Collection<? extends org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>     (argument mismatch; java.util.List<org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>> cannot be converted to java.util.Collection<? extends org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
>   method java.util.Set.addAll(java.util.Collection<? extends org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>     (argument mismatch; java.util.List<org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>> cannot be converted to java.util.Collection<? extends org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
> Error:(80, 14) java: no suitable method found for addAll(java.util.Set<org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
>   method java.util.Collection.addAll(java.util.Collection<? extends org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>     (argument mismatch; java.util.Set<org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>> cannot be converted to java.util.Collection<? extends org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
>   method java.util.Set.addAll(java.util.Collection<? extends org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>     (argument mismatch; java.util.Set<org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>> cannot be converted to java.util.Collection<? extends org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
> Error:(85, 14) java: no suitable method found for addAll(java.util.Set<org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
>   method java.util.Collection.addAll(java.util.Collection<? extends org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>     (argument mismatch; java.util.Set<org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>> cannot be converted to java.util.Collection<? extends org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
>   method java.util.Set.addAll(java.util.Collection<? extends org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>     (argument mismatch; java.util.Set<org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>> cannot be converted to java.util.Collection<? extends org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
> /Users/asherman/git/asf/hive2/ql/target/generated-sources/java/org/apache/hadoop/hive/ql/exec/vector/expressions/gen/IntervalYearMonthScalarAddTimestampColumn.java
> {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
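The compile errors in HIVE-20030 all revolve around the wildcard element type `Operator<? extends OperatorDesc>` and collections of it. A self-contained toy version of the "make the code more type correct" approach is below; `Desc` and `Op` are illustrative stand-ins for OperatorDesc and Operator, not Hive's classes:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Declaring the collections with the same wildcard element type end-to-end
// means addAll needs no variance conversion, which sidesteps the inference
// some compilers (here, IntelliJ's) reject.
public class WildcardInference {
    interface Desc {}
    static class Op<T extends Desc> {}

    static Set<Op<? extends Desc>> collect(List<Op<? extends Desc>> parents,
                                           List<Op<? extends Desc>> children) {
        Set<Op<? extends Desc>> all = new HashSet<>();
        all.addAll(parents);   // exact element-type match: no conversion needed
        all.addAll(children);
        return all;
    }

    public static void main(String[] args) {
        List<Op<? extends Desc>> parents = new ArrayList<>();
        parents.add(new Op<Desc>());
        System.out.println(collect(parents, new ArrayList<>()).size()); // 1
    }
}
```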
[jira] [Commented] (HIVE-20002) Shipping jdbd-storage-handler dependency jars in LLAP
[ https://issues.apache.org/jira/browse/HIVE-20002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528481#comment-16528481 ] Hive QA commented on HIVE-20002: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 53s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 41s{color} | {color:blue} llap-server in master has 84 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s{color} | {color:red} llap-server: The patch generated 2 new + 28 unchanged - 1 fixed = 30 total (was 29) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 11m 55s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12261/dev-support/hive-personality.sh | | git revision | master / 761597f | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-12261/yetus/diff-checkstyle-llap-server.txt | | modules | C: llap-server U: llap-server | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12261/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Shipping jdbd-storage-handler dependency jars in LLAP > - > > Key: HIVE-20002 > URL: https://issues.apache.org/jira/browse/HIVE-20002 > Project: Hive > Issue Type: Bug > Components: llap >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-20002.1.patch, HIVE-20002.2.patch > > > Shipping the following jars to LLAP to make jdbc storage-handler work: > commons-dbcp, commons-pool, db specific jdbc jar whichever exists in > classpath. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20038) Update queries on non-bucketed + partitioned tables throws NPE
[ https://issues.apache.org/jira/browse/HIVE-20038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528475#comment-16528475 ] Gopal V commented on HIVE-20038: LGTM - +1 > Update queries on non-bucketed + partitioned tables throws NPE > -- > > Key: HIVE-20038 > URL: https://issues.apache.org/jira/browse/HIVE-20038 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 4.0.0, 3.2.0 >Reporter: Kavan Suresh >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-20038.1.patch > > > With HIVE-19890 delete deltas of non-bucketed tables are computed from > ROW__ID. This can create holes in output paths (and final paths) in > FSOp.commit() resulting in NPE. > Following is the exception > {code:java} > Caused by: java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.commitOneOutPath(FileSinkOperator.java:246) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.commit(FileSinkOperator.java:235) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.access$400(FileSinkOperator.java:168) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:1325) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:733) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:757) > at > org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.close(ReduceRecordProcessor.java:383){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
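The NPE in the stack trace above comes from "holes" (null slots) in the output-path array when delete deltas are computed from ROW__ID on non-bucketed tables. A minimal sketch of the null-guarded commit loop, with illustrative names (`commitAll`, `outPaths`) rather than Hive's FileSinkOperator internals:

```java
// Skip null slots instead of dereferencing them during commit.
public class CommitWithHoles {
    static int commitAll(String[] outPaths) {
        int committed = 0;
        for (String p : outPaths) {
            if (p == null) {
                continue;  // hole left by a bucket/partition with no rows
            }
            committed++;   // stand-in for the real per-path rename/commit
        }
        return committed;
    }

    public static void main(String[] args) {
        // Two real paths, one hole: only the real paths are committed.
        System.out.println(commitAll(new String[]{"delta/b0", null, "delta/b2"})); // 2
    }
}
```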
[jira] [Commented] (HIVE-19951) Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
[ https://issues.apache.org/jira/browse/HIVE-19951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528474#comment-16528474 ] Prasanth Jayachandran commented on HIVE-19951: -- It might be easier to use ORC 1.5.2 now that it is released, given there is no green run yet. > Vectorization: Need to disable encoded LLAP I/O for ORC when there is data > type conversion (Schema Evolution) > -- > > Key: HIVE-19951 > URL: https://issues.apache.org/jira/browse/HIVE-19951 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-19951.01.patch, HIVE-19951.02.patch, > HIVE-19951.03.patch, HIVE-19951.04.patch, HIVE-19951.05.patch, > HIVE-19951.06.patch, HIVE-19951.07.patch, HIVE-19951.08.patch > > > Currently, reading encoded ORC data does not support data type conversion. > So, encoded reading and cache populating needs to be disabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19951) Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
[ https://issues.apache.org/jira/browse/HIVE-19951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528472#comment-16528472 ] Hive QA commented on HIVE-19951: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12929491/HIVE-19951.08.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 14630 tests executed *Failed tests:* {noformat} TestHiveSchemaTool - did not produce a TEST-*.xml file (likely timed out) (batchId=197) TestTableOutputFormat - did not produce a TEST-*.xml file (likely timed out) (batchId=197) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12259/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12259/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12259/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12929491 - PreCommit-HIVE-Build > Vectorization: Need to disable encoded LLAP I/O for ORC when there is data > type conversion (Schema Evolution) > -- > > Key: HIVE-19951 > URL: https://issues.apache.org/jira/browse/HIVE-19951 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-19951.01.patch, HIVE-19951.02.patch, > HIVE-19951.03.patch, HIVE-19951.04.patch, HIVE-19951.05.patch, > HIVE-19951.06.patch, HIVE-19951.07.patch, HIVE-19951.08.patch > > > Currently, reading encoded ORC data does not support data type conversion. 
> So, encoded reading and cache populating needs to be disabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19951) Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
[ https://issues.apache.org/jira/browse/HIVE-19951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528417#comment-16528417 ] Hive QA commented on HIVE-19951: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 0s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 20s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 28s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 1s{color} | {color:blue} ql in master has 2287 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 43s{color} | {color:blue} llap-server in master has 84 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 13s{color} | {color:red} llap-server: The patch generated 12 new + 30 unchanged - 0 fixed = 42 total (was 30) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 28m 8s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12259/dev-support/hive-personality.sh | | git revision | master / 761597f | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-12259/yetus/diff-checkstyle-llap-server.txt | | modules | C: ql llap-server itests U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12259/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Vectorization: Need to disable encoded LLAP I/O for ORC when there is data > type conversion (Schema Evolution) > -- > > Key: HIVE-19951 > URL: https://issues.apache.org/jira/browse/HIVE-19951 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-19951.01.patch, HIVE-19951.02.patch, > HIVE-19951.03.patch, HIVE-19951.04.patch, HIVE-19951.05.patch, > HIVE-19951.06.patch, HIVE-19951.07.patch, HIVE-19951.08.patch > > > Currently, reading encoded ORC data does not support data type conversion. > So, encoded reading and cache populating needs to be disabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19532) fix tests for master-txnstats branch
[ https://issues.apache.org/jira/browse/HIVE-19532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528389#comment-16528389 ] Hive QA commented on HIVE-19532: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12929501/HIVE-19532.13.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12258/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12258/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12258/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-06-30 00:05:18.796 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-12258/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-06-30 00:05:18.800 + cd apache-github-source-source + git fetch origin >From https://github.com/apache/hive 7eac7f6..761597f master -> origin/master + git reset --hard HEAD HEAD is now at 7eac7f6 HIVE-19989: Metastore uses wrong application name for HADOOP2 metrics (Vineet Garg, reviewed by Alan Gates) + git clean -f -d + git checkout master Already on 'master' Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded. (use "git pull" to update your local branch) + git reset --hard origin/master HEAD is now at 761597f HIVE-19764: Add --SORT_QUERY_RESULTS to hive-blobstore/map_join.q.out (Sahil Takiar, reviewed by Vihang Karajgaonkar) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-06-30 00:05:20.441 + rm -rf ../yetus_PreCommit-HIVE-Build-12258 + mkdir ../yetus_PreCommit-HIVE-Build-12258 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-12258 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-12258/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/exec/ColumnStatsUpdateTask.java:46 Falling back to three-way merge... Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/exec/ColumnStatsUpdateTask.java' with conflicts. error: patch failed: ql/src/test/results/clientpositive/llap/acid_vectorization_original.q.out:665 Falling back to three-way merge... Applied patch to 'ql/src/test/results/clientpositive/llap/acid_vectorization_original.q.out' with conflicts. 
error: patch failed: ql/src/test/results/clientpositive/llap/dynpart_sort_optimization_acid.q.out:95 Falling back to three-way merge... Applied patch to 'ql/src/test/results/clientpositive/llap/dynpart_sort_optimization_acid.q.out' with conflicts. error: patch failed: ql/src/test/results/clientpositive/llap/insert_values_orig_table_use_metadata.q.out:168 Falling back to three-way merge... Applied patch to 'ql/src/test/results/clientpositive/llap/insert_values_orig_table_use_metadata.q.out' with conflicts. error: patch failed: ql/src/test/results/clientpositive/row__id.q.out:62 Falling back to three-way merge... Applied patch to 'ql/src/test/results/clientpositive/row__id.q.out' with conflicts. error: patch failed: ql/src/test/results/clientpositive/tez/acid_vectorization_original_tez.q.out:680 Falling back to three-way merge... Applied patch to 'ql/src/test/results/clientpositive/tez/acid_vectorization_original_tez.q.out' with conflicts. Going to apply patch with: git apply -p0 /data/hiveptest/working/scratch/build.patch:1209: trailing whitespace. explain insert into table stats_nonpartitioned select * from mysource where p == 100; /data/hiveptest/working/scratch/build.patch:1210: trailing whitespace. insert into table stats_nonpartitioned select * from mysource where p == 100; /
[jira] [Commented] (HIVE-19668) Over 30% of the heap wasted by duplicate org.antlr.runtime.CommonToken's and duplicate strings
[ https://issues.apache.org/jira/browse/HIVE-19668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528387#comment-16528387 ] Hive QA commented on HIVE-19668: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12929477/HIVE-19668.02.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 14632 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_view_delete] (batchId=35) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[subquery_multiinsert] (batchId=87) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[subquery_unqual_corr_expr] (batchId=8) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multiinsert] (batchId=145) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12256/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12256/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12256/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 4 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12929477 - PreCommit-HIVE-Build > Over 30% of the heap wasted by duplicate org.antlr.runtime.CommonToken's and > duplicate strings > -- > > Key: HIVE-19668 > URL: https://issues.apache.org/jira/browse/HIVE-19668 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 3.0.0 >Reporter: Misha Dmitriev >Assignee: Misha Dmitriev >Priority: Major > Attachments: HIVE-19668.01.patch, HIVE-19668.02.patch, > image-2018-05-22-17-41-39-572.png > > > I've recently analyzed a HS2 heap dump, obtained when there was a huge memory > spike during compilation of some big query. The analysis was done with jxray > ([www.jxray.com|http://www.jxray.com]). It turns out that more than 90% of > the 20G heap was used by data structures associated with query parsing > ({{org.apache.hadoop.hive.ql.parse.QBExpr}}). There are probably multiple > opportunities for optimizations here. One of them is to stop the code from > creating duplicate instances of the {{org.antlr.runtime.CommonToken}} class. See > a sample of these objects in the attached image: > !image-2018-05-22-17-41-39-572.png|width=879,height=399! > It looks like these particular {{CommonToken}} objects are constants that don't > change once created. I see some code, e.g. in > {{org.apache.hadoop.hive.ql.parse.CalcitePlanner}}, where such objects are > apparently repeatedly created with e.g. {{new > CommonToken(HiveParser.TOK_INSERT, "TOK_INSERT")}}. If these 33 token kinds > are instead created once and reused, we will save more than 1/10th of the > heap in this scenario. Plus, since these objects are small but very numerous, > getting rid of them will remove a great deal of pressure from the GC. > Another source of waste is duplicate strings, which collectively waste 26.1% > of memory. Some of them come from CommonToken objects that have the same text > (i.e. for multiple CommonToken objects the contents of their 'text' Strings > are the same, but each has its own copy of that String). 
Other duplicate > strings come from other sources that are easy enough to fix by adding > String.intern() calls. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
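The two remedies the report proposes, creating each constant token once and reusing it, and interning duplicate strings, can be sketched in a few lines. {{Token}}, {{constantToken}}, and the token-type value below are hypothetical stand-ins for {{org.antlr.runtime.CommonToken}} and {{HiveParser}}'s constants, not the actual patch.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TokenCache {
    // Minimal stand-in for org.antlr.runtime.CommonToken: immutable once created.
    record Token(int type, String text) {}

    private static final Map<Integer, Token> CACHE = new ConcurrentHashMap<>();

    // Reuse one shared instance per token kind instead of allocating a new
    // token at every call site; text.intern() collapses the duplicate
    // 'text' copies to a single String.
    static Token constantToken(int type, String text) {
        return CACHE.computeIfAbsent(type, t -> new Token(t, text.intern()));
    }

    public static void main(String[] args) {
        Token a = constantToken(707, "TOK_INSERT");
        Token b = constantToken(707, "TOK_INSERT");
        System.out.println(a == b); // true: one instance per token kind
    }
}
```

With this pattern, a call site that previously did {{new CommonToken(HiveParser.TOK_INSERT, "TOK_INSERT")}} allocates nothing on repeat lookups, which is where both the heap and the GC-pressure savings come from.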
[jira] [Updated] (HIVE-20038) Update queries on non-bucketed + partitioned tables throws NPE
[ https://issues.apache.org/jira/browse/HIVE-20038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-20038: - Status: Patch Available (was: Open) > Update queries on non-bucketed + partitioned tables throws NPE > -- > > Key: HIVE-20038 > URL: https://issues.apache.org/jira/browse/HIVE-20038 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 4.0.0, 3.2.0 >Reporter: Kavan Suresh >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-20038.1.patch > > > With HIVE-19890 delete deltas of non-bucketed tables are computed from > ROW__ID. This can create holes in output paths (and final paths) in > FSOp.commit() resulting in NPE. > Following is the exception > {code:java} > Caused by: java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.commitOneOutPath(FileSinkOperator.java:246) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.commit(FileSinkOperator.java:235) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.access$400(FileSinkOperator.java:168) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:1325) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:733) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:757) > at > org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.close(ReduceRecordProcessor.java:383){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
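A minimal model of the failure mode in HIVE-20038: when delete deltas are derived from ROW__ID, some bucket slots in the output-path array are never written, leaving null "holes" that a commit loop assuming every slot is populated then dereferences. The names below ({{commit}}, {{outPaths}}) are illustrative, not {{FileSinkOperator}}'s actual fields, and the loop stands in for the tmp-to-final rename; this sketches the null-hole pattern, not the patch itself.

```java
public class CommitSketch {
    // Commit every populated output path, skipping the holes left by
    // buckets that never received a row; returns how many were committed.
    static int commit(String[] outPaths) {
        int committed = 0;
        for (String p : outPaths) {
            if (p == null) {
                continue; // hole: this bucket was never written, nothing to commit
            }
            committed++; // stand-in for renaming the tmp path to its final path
        }
        return committed;
    }

    public static void main(String[] args) {
        // bucket 1 was never written, so its slot is a null hole
        String[] outPaths = {"delta_1_1/bucket_00000", null, "delta_1_1/bucket_00002"};
        System.out.println(commit(outPaths)); // 2
    }
}
```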
[jira] [Commented] (HIVE-20038) Update queries on non-bucketed + partitioned tables throws NPE
[ https://issues.apache.org/jira/browse/HIVE-20038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528384#comment-16528384 ] Prasanth Jayachandran commented on HIVE-20038: -- [~gopalv] could you please review? > Update queries on non-bucketed + partitioned tables throws NPE > -- > > Key: HIVE-20038 > URL: https://issues.apache.org/jira/browse/HIVE-20038 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 4.0.0, 3.2.0 >Reporter: Kavan Suresh >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-20038.1.patch > > > With HIVE-19890 delete deltas of non-bucketed tables are computed from > ROW__ID. This can create holes in output paths (and final paths) in > FSOp.commit() resulting in NPE. > Following is the exception > {code:java} > Caused by: java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.commitOneOutPath(FileSinkOperator.java:246) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.commit(FileSinkOperator.java:235) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.access$400(FileSinkOperator.java:168) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:1325) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:733) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:757) > at > org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.close(ReduceRecordProcessor.java:383){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20038) Update queries on non-bucketed + partitioned tables throws NPE
[ https://issues.apache.org/jira/browse/HIVE-20038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-20038: - Attachment: HIVE-20038.1.patch > Update queries on non-bucketed + partitioned tables throws NPE > -- > > Key: HIVE-20038 > URL: https://issues.apache.org/jira/browse/HIVE-20038 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 4.0.0, 3.2.0 >Reporter: Kavan Suresh >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-20038.1.patch > > > With HIVE-19890 delete deltas of non-bucketed tables are computed from > ROW__ID. This can create holes in output paths (and final paths) in > FSOp.commit() resulting in NPE. > Following is the exception > {code:java} > Caused by: java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.commitOneOutPath(FileSinkOperator.java:246) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.commit(FileSinkOperator.java:235) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.access$400(FileSinkOperator.java:168) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:1325) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:733) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:757) > at > org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.close(ReduceRecordProcessor.java:383){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19792) Enable schema evolution tests for decimal 64
[ https://issues.apache.org/jira/browse/HIVE-19792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528383#comment-16528383 ] Matt McCline commented on HIVE-19792: - +1 LGTM tests pending. > Enable schema evolution tests for decimal 64 > > > Key: HIVE-19792 > URL: https://issues.apache.org/jira/browse/HIVE-19792 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19792.1.patch, HIVE-19792.2.patch > > > Following tests are disabled in HIVE-19629 as orc ConvertTreeReaderFactory > does not handle Decimal64ColumnVectors. This jira is to re-enable those tests > after orc supports it. > 1) type_change_test_int_vectorized.q > 2) type_change_test_int.q > 3) orc_schema_evolution_float.q > 4) schema_evol_orc_nonvec_part_all_primitive.q > 5) schema_evol_orc_nonvec_part_all_primitive_llap_io.q > 6) schema_evol_orc_vec_part_all_primitive.q > 7) schema_evol_orc_vec_part_all_primitive_llap_io.q > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20038) Update queries on non-bucketed + partitioned tables throws NPE
[ https://issues.apache.org/jira/browse/HIVE-20038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-20038: - Description: With HIVE-19890 delete deltas of non-bucketed tables are computed from ROW__ID. This can create holes in output paths (and final paths) in FSOp.commit() resulting in NPE. Following is the exception {code:java} Caused by: java.lang.NullPointerException at org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.commitOneOutPath(FileSinkOperator.java:246) at org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.commit(FileSinkOperator.java:235) at org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.access$400(FileSinkOperator.java:168) at org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:1325) at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:733) at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:757) at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.close(ReduceRecordProcessor.java:383){code} was: With HIVE-19890 delete deltas of non-bucketed tables are computed from ROW__ID. This can create holes in output paths in FSOp.commit() resulting in NPE. 
Following is the exception {code:java} Caused by: java.lang.NullPointerException at org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.commitOneOutPath(FileSinkOperator.java:246) at org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.commit(FileSinkOperator.java:235) at org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.access$400(FileSinkOperator.java:168) at org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:1325) at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:733) at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:757) at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.close(ReduceRecordProcessor.java:383){code} > Update queries on non-bucketed + partitioned tables throws NPE > -- > > Key: HIVE-20038 > URL: https://issues.apache.org/jira/browse/HIVE-20038 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 4.0.0, 3.2.0 >Reporter: Kavan Suresh >Assignee: Prasanth Jayachandran >Priority: Major > > With HIVE-19890 delete deltas of non-bucketed tables are computed from > ROW__ID. This can create holes in output paths (and final paths) in > FSOp.commit() resulting in NPE. > Following is the exception > {code:java} > Caused by: java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.commitOneOutPath(FileSinkOperator.java:246) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.commit(FileSinkOperator.java:235) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.access$400(FileSinkOperator.java:168) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:1325) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:733) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:757) > at > org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.close(ReduceRecordProcessor.java:383){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-20038) Update queries on non-bucketed + partitioned tables throws NPE
[ https://issues.apache.org/jira/browse/HIVE-20038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran reassigned HIVE-20038: > Update queries on non-bucketed + partitioned tables throws NPE > -- > > Key: HIVE-20038 > URL: https://issues.apache.org/jira/browse/HIVE-20038 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 4.0.0, 3.2.0 >Reporter: Kavan Suresh >Assignee: Prasanth Jayachandran >Priority: Major > > With HIVE-19890 delete deltas of non-bucketed tables are computed from > ROW__ID. This can create holes in output paths in FSOp.commit() resulting in > NPE. > Following is the exception > {code:java} > Caused by: java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.commitOneOutPath(FileSinkOperator.java:246) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.commit(FileSinkOperator.java:235) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.access$400(FileSinkOperator.java:168) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:1325) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:733) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:757) > at > org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.close(ReduceRecordProcessor.java:383){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19668) Over 30% of the heap wasted by duplicate org.antlr.runtime.CommonToken's and duplicate strings
[ https://issues.apache.org/jira/browse/HIVE-19668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528368#comment-16528368 ] Hive QA commented on HIVE-19668: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 1s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 6s{color} | {color:blue} ql in master has 2287 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 45s{color} | {color:red} ql: The patch generated 4 new + 725 unchanged - 0 fixed = 729 total (was 725) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 24m 6s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12256/dev-support/hive-personality.sh | | git revision | master / 7eac7f6 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-12256/yetus/diff-checkstyle-ql.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12256/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Over 30% of the heap wasted by duplicate org.antlr.runtime.CommonToken's and > duplicate strings > -- > > Key: HIVE-19668 > URL: https://issues.apache.org/jira/browse/HIVE-19668 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 3.0.0 >Reporter: Misha Dmitriev >Assignee: Misha Dmitriev >Priority: Major > Attachments: HIVE-19668.01.patch, HIVE-19668.02.patch, > image-2018-05-22-17-41-39-572.png > > > I've recently analyzed a HS2 heap dump, obtained when there was a huge memory > spike during compilation of some big query. 
The analysis was done with jxray > ([www.jxray.com|http://www.jxray.com]). It turns out that more than 90% of > the 20G heap was used by data structures associated with query parsing > ({{org.apache.hadoop.hive.ql.parse.QBExpr}}). There are probably multiple > opportunities for optimizations here. One of them is to stop the code from > creating duplicate instances of the {{org.antlr.runtime.CommonToken}} class. See > a sample of these objects in the attached image: > !image-2018-05-22-17-41-39-572.png|width=879,height=399! > It looks like these particular {{CommonToken}} objects are constants that don't > change once created. I see some code, e.g. in > {{org.apache.hadoop.hive.ql.parse.CalcitePlanner}}, where such objects are > apparently repeatedly created with e.g
[jira] [Commented] (HIVE-17593) DataWritableWriter strip spaces for CHAR type before writing, but predicate generator doesn't do same thing.
[ https://issues.apache.org/jira/browse/HIVE-17593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528362#comment-16528362 ] Junjie Chen commented on HIVE-17593: Thanks [~Ferd] for responding so quickly. It depends on how HiveChar is defined and used in other places and formats; Hive should have a unified usage of HiveChar. According to the HiveChar/HiveCharWritable definition in HiveChar.java/HiveCharWritable.java, as below: /** * HiveChar. * String values will be padded to full char length. * Character legnth, comparison, hashCode should ignore trailing spaces. */ We know the original value of a HiveChar should include the padding spaces. So ConvertAstToSearchArg.java#boxLiteral should return the padded value. > DataWritableWriter strip spaces for CHAR type before writing, but predicate > generator doesn't do same thing. > > > Key: HIVE-17593 > URL: https://issues.apache.org/jira/browse/HIVE-17593 > Project: Hive > Issue Type: Bug >Affects Versions: 2.3.0, 3.0.0 >Reporter: Junjie Chen >Assignee: Junjie Chen >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-17593.patch > > > DataWritableWriter strips spaces for the CHAR type before writing, but when > generating the predicate it does NOT do the same stripping, which should cause data > missing! > In the current version it doesn't cause data missing, since the predicate is not > pushed down to Parquet due to HIVE-17261. > Please see ConvertAstToSearchArg.java: getTypes treats CHAR and STRING as the > same, which will build a predicate with trailing spaces. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
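The mismatch under discussion can be demonstrated without any Hive classes: the writer strips trailing spaces before writing, while the predicate generator keeps the CHAR(n)-padded literal, so an equality comparison between the two forms can never succeed. {{pad}} and {{strip}} below are plain-Java stand-ins for HiveChar padding and DataWritableWriter's stripping, not Hive's actual API.

```java
public class CharPaddingMismatch {
    // Pad to the declared CHAR(n) length, as happens on assignment.
    static String pad(String s, int n) {
        return String.format("%-" + n + "s", s);
    }

    // Strip trailing spaces, as the writer does before writing the value out.
    static String strip(String s) {
        return s.replaceFirst("\\s+$", "");
    }

    public static void main(String[] args) {
        String stored = strip(pad("abc", 10)); // "abc" is what lands on disk
        String predicate = pad("abc", 10);     // "abc       " is what the filter compares
        System.out.println(stored.equals(predicate)); // false: matching rows are missed
    }
}
```

Whichever side is changed, the point of the comment above stands: the writer and the predicate generator must agree on one canonical form for CHAR values.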
[jira] [Updated] (HIVE-19765) Add Parquet specific tests to BlobstoreCliDriver
[ https://issues.apache.org/jira/browse/HIVE-19765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-19765: Attachment: HIVE-19765.3.patch > Add Parquet specific tests to BlobstoreCliDriver > > > Key: HIVE-19765 > URL: https://issues.apache.org/jira/browse/HIVE-19765 > Project: Hive > Issue Type: Sub-task >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-19765.1.patch, HIVE-19765.2.patch, > HIVE-19765.3.patch > > > Similar to what was done for RC and ORC files. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19764) Add --SORT_QUERY_RESULTS to hive-blobstore/map_join.q.out
[ https://issues.apache.org/jira/browse/HIVE-19764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-19764: Resolution: Fixed Fix Version/s: 4.0.0 Status: Resolved (was: Patch Available) Pushed to master. > Add --SORT_QUERY_RESULTS to hive-blobstore/map_join.q.out > - > > Key: HIVE-19764 > URL: https://issues.apache.org/jira/browse/HIVE-19764 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-19764.1.patch > > > Fixes flakiness with this test -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19990) Query with interval literal in join condition fails
[ https://issues.apache.org/jira/browse/HIVE-19990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528344#comment-16528344 ] Hive QA commented on HIVE-19990: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12929472/HIVE-19990.2.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 14632 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_dynamic_partition] (batchId=190) org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_expressions] (batchId=190) org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_test1] (batchId=190) org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_test_alter] (batchId=190) org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_test_insert] (batchId=190) org.apache.hive.jdbc.TestJdbcWithMiniLlapRow.testKillQuery (batchId=247) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12255/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12255/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12255/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 6 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12929472 - PreCommit-HIVE-Build > Query with interval literal in join condition fails > --- > > Key: HIVE-19990 > URL: https://issues.apache.org/jira/browse/HIVE-19990 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19990.1.patch, HIVE-19990.2.patch > > > *Reproducer* > {code:sql} > > create table date_dim_d1( > d_week_seqint, > d_datestring); > > SELECT >d1.d_week_seq > FROM >date_dim_d1 d1 >JOIN date_dim_d1 d3 > WHERE >Cast(d3.d_date AS date) > Cast(d1.d_date AS date) + INTERVAL '5' day ; > {code} > *Exception* > {code} > org.apache.hadoop.hive.ql.parse.SemanticException: '5 00:00:00.0' > encountered with 0 children > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondPopulateAlias(SemanticAnalyzer.java:2780) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondPopulateAlias(SemanticAnalyzer.java:2775) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:3060) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:2959) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genJoinTree(SemanticAnalyzer.java:9633) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11380) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11285) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:12071) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:593) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12150) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:330) > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:288) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:658) > at 
org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1829) > at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1776) > at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1771) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:214) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188) > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402) > at > org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:832) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:770) > at org.apache.hadoop.hive.cli.Cli
[jira] [Commented] (HIVE-19989) Metastore uses wrong application name for HADOOP2 metrics
[ https://issues.apache.org/jira/browse/HIVE-19989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528341#comment-16528341 ] Vineet Garg commented on HIVE-19989: Pushed to master > Metastore uses wrong application name for HADOOP2 metrics > - > > Key: HIVE-19989 > URL: https://issues.apache.org/jira/browse/HIVE-19989 > Project: Hive > Issue Type: Bug > Components: Standalone Metastore >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-19989.1.patch, HIVE-19989.2.patch > > > Right now it is hardcoded as 'metastore'. It should instead be fetched from > config like it was previously. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19989) Metastore uses wrong application name for HADOOP2 metrics
[ https://issues.apache.org/jira/browse/HIVE-19989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19989: --- Target Version/s: (was: 3.1.0) > Metastore uses wrong application name for HADOOP2 metrics > - > > Key: HIVE-19989 > URL: https://issues.apache.org/jira/browse/HIVE-19989 > Project: Hive > Issue Type: Bug > Components: Standalone Metastore >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-19989.1.patch, HIVE-19989.2.patch > > > Right now it is hardcoded as 'metastore'. It should instead be fetched from > config like it was previously. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19989) Metastore uses wrong application name for HADOOP2 metrics
[ https://issues.apache.org/jira/browse/HIVE-19989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19989: --- Resolution: Fixed Fix Version/s: 4.0.0 Status: Resolved (was: Patch Available) > Metastore uses wrong application name for HADOOP2 metrics > - > > Key: HIVE-19989 > URL: https://issues.apache.org/jira/browse/HIVE-19989 > Project: Hive > Issue Type: Bug > Components: Standalone Metastore >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-19989.1.patch, HIVE-19989.2.patch > > > Right now it is hardcoded as 'metastore'. It should instead be fetched from > config like it was previously. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
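The shape of the fix described above — resolve the HADOOP2 metrics application name from configuration with the old hardcoded value as a fallback — can be sketched as follows. The property key below is a hypothetical stand-in, not Hive's actual configuration key:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the fix: read the HADOOP2 metrics application name
// from configuration instead of hardcoding "metastore". The property key is
// illustrative only, not Hive's real key.
public class MetricsAppName {
    static final String KEY = "metastore.metrics.hadoop2.component";
    static final String DEFAULT = "metastore";

    // Falls back to the previously hardcoded value only when unconfigured.
    static String applicationName(Map<String, String> conf) {
        return conf.getOrDefault(KEY, DEFAULT);
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        System.out.println(applicationName(conf));      // metastore
        conf.put(KEY, "hivemetastore");
        System.out.println(applicationName(conf));      // hivemetastore
    }
}
```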
[jira] [Commented] (HIVE-20037) Print root cause exception's toString() rather than getMessage()
[ https://issues.apache.org/jira/browse/HIVE-20037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528339#comment-16528339 ] Sahil Takiar commented on HIVE-20037: - +1 LGTM > Print root cause exception's toString() rather than getMessage() > > > Key: HIVE-20037 > URL: https://issues.apache.org/jira/browse/HIVE-20037 > Project: Hive > Issue Type: Sub-task > Components: Spark >Affects Versions: 3.0.0 >Reporter: Aihua Xu >Assignee: Aihua Xu >Priority: Trivial > Attachments: HIVE-20037.1.patch > > > When we run HoS job and if it fails for some errors, we are printing the > exception message rather than exception toString(), for some exceptions, > e.g., this java.lang.NoClassDefFoundError, we are missing the exception type > information. > {noformat} > Failed to execute Spark task Stage-1, with exception > 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark > client for Spark session cf054497-b073-4327-a315-68c867ce3434: > org/apache/spark/SparkConf)' > {noformat} > If we use exception's toString(), it will be as follows and make more sense. > {noformat} > Failed to execute Spark task Stage-1, with exception > 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark > client for Spark session cf054497-b073-4327-a315-68c867ce3434: > java.lang.NoClassDefFoundError: org/apache/spark/SparkConf)' > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
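The difference the patch relies on can be demonstrated directly: Throwable.toString() prepends the exception class name to the detail message, while getMessage() returns only the detail string. A minimal standalone demonstration (not Hive code):

```java
// Minimal demonstration of why toString() is preferable to getMessage()
// when reporting a root cause such as NoClassDefFoundError: getMessage()
// drops the exception type entirely, leaving only the class path string.
public class ToStringVsGetMessage {
    public static void main(String[] args) {
        Throwable cause = new NoClassDefFoundError("org/apache/spark/SparkConf");
        System.out.println(cause.getMessage()); // org/apache/spark/SparkConf
        System.out.println(cause);              // java.lang.NoClassDefFoundError: org/apache/spark/SparkConf
    }
}
```

With only getMessage(), the log line reads as if a file path failed, which is the confusing output quoted in the issue description.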
[jira] [Commented] (HIVE-19990) Query with interval literal in join condition fails
[ https://issues.apache.org/jira/browse/HIVE-19990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528319#comment-16528319 ] Hive QA commented on HIVE-19990: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 51s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 0s{color} | {color:blue} ql in master has 2287 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch 9 line(s) with tabs. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 23m 33s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12255/dev-support/hive-personality.sh | | git revision | master / 35cec21 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-12255/yetus/whitespace-tabs.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12255/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> Query with interval literal in join condition fails > --- > > Key: HIVE-19990 > URL: https://issues.apache.org/jira/browse/HIVE-19990 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19990.1.patch, HIVE-19990.2.patch > > > *Reproducer* > {code:sql} > > create table date_dim_d1( > d_week_seqint, > d_datestring); > > SELECT >d1.d_week_seq > FROM >date_dim_d1 d1 >JOIN date_dim_d1 d3 > WHERE >Cast(d3.d_date AS date) > Cast(d1.d_date AS date) + INTERVAL '5' day ; > {code} > *Exception* > {code} > org.apache.hadoop.hive.ql.parse.SemanticException: '5 00:00:00.0' > encountered with 0 children > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondPopulateAlias(SemanticAnalyzer.java:2780) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondPopulateAlias(SemanticAnalyzer.java:2775) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:3060) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:2959) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genJoinTree(SemanticAnalyzer.java:9633) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11380) > at > org.ap
[jira] [Updated] (HIVE-20037) Print root cause exception's toString() rather than getMessage()
[ https://issues.apache.org/jira/browse/HIVE-20037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-20037: Attachment: HIVE-20037.1.patch > Print root cause exception's toString() rather than getMessage() > > > Key: HIVE-20037 > URL: https://issues.apache.org/jira/browse/HIVE-20037 > Project: Hive > Issue Type: Sub-task > Components: Spark >Affects Versions: 3.0.0 >Reporter: Aihua Xu >Assignee: Aihua Xu >Priority: Trivial > Attachments: HIVE-20037.1.patch > > > When we run HoS job and if it fails for some errors, we are printing the > exception message rather than exception toString(), for some exceptions, > e.g., this java.lang.NoClassDefFoundError, we are missing the exception type > information. > {noformat} > Failed to execute Spark task Stage-1, with exception > 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark > client for Spark session cf054497-b073-4327-a315-68c867ce3434: > org/apache/spark/SparkConf)' > {noformat} > If we use exception's toString(), it will be as follows and make more sense. > {noformat} > Failed to execute Spark task Stage-1, with exception > 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark > client for Spark session cf054497-b073-4327-a315-68c867ce3434: > java.lang.NoClassDefFoundError: org/apache/spark/SparkConf)' > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20037) Print root cause exception's toString() rather than getMessage()
[ https://issues.apache.org/jira/browse/HIVE-20037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-20037: Status: Patch Available (was: Open) > Print root cause exception's toString() rather than getMessage() > > > Key: HIVE-20037 > URL: https://issues.apache.org/jira/browse/HIVE-20037 > Project: Hive > Issue Type: Sub-task > Components: Spark >Affects Versions: 3.0.0 >Reporter: Aihua Xu >Assignee: Aihua Xu >Priority: Trivial > Attachments: HIVE-20037.1.patch > > > When we run HoS job and if it fails for some errors, we are printing the > exception message rather than exception toString(), for some exceptions, > e.g., this java.lang.NoClassDefFoundError, we are missing the exception type > information. > {noformat} > Failed to execute Spark task Stage-1, with exception > 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark > client for Spark session cf054497-b073-4327-a315-68c867ce3434: > org/apache/spark/SparkConf)' > {noformat} > If we use exception's toString(), it will be as follows and make more sense. > {noformat} > Failed to execute Spark task Stage-1, with exception > 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark > client for Spark session cf054497-b073-4327-a315-68c867ce3434: > java.lang.NoClassDefFoundError: org/apache/spark/SparkConf)' > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19940) Push predicates with deterministic UDFs with RBO
[ https://issues.apache.org/jira/browse/HIVE-19940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528302#comment-16528302 ] Janaki Lahorani commented on HIVE-19940: [~vgarg] Thanks for looking at this patch. The code added in ExprWalkerProcFactory uses the isDeterministic call from FunctionRegistry, but also checks child expressions referenced within the function. > Push predicates with deterministic UDFs with RBO > > > Key: HIVE-19940 > URL: https://issues.apache.org/jira/browse/HIVE-19940 > Project: Hive > Issue Type: Improvement >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani >Priority: Major > Attachments: HIVE-19940.1.patch, HIVE-19940.2.patch > > > With RBO, predicates with any UDF doesn't get pushed down. It makes sense to > not pushdown the predicates with non-deterministic function as the meaning of > the query changes after the predicate is resolved to use the function. But > pushing a deterministic function is beneficial. > Test Case: > {code} > set hive.cbo.enable=false; > CREATE TABLE `testb`( >`cola` string COMMENT '', >`colb` string COMMENT '', >`colc` string COMMENT '') > PARTITIONED BY ( >`part1` string, >`part2` string, >`part3` string) > STORED AS AVRO; > CREATE TABLE `testa`( >`col1` string COMMENT '', >`col2` string COMMENT '', >`col3` string COMMENT '', >`col4` string COMMENT '', >`col5` string COMMENT '') > PARTITIONED BY ( >`part1` string, >`part2` string, >`part3` string) > STORED AS AVRO; > insert into testA partition (part1='US', part2='ABC', part3='123') > values ('12.34', '100', '200', '300', 'abc'), > ('12.341', '1001', '2001', '3001', 'abcd'); > insert into testA partition (part1='UK', part2='DEF', part3='123') > values ('12.34', '100', '200', '300', 'abc'), > ('12.341', '1001', '2001', '3001', 'abcd'); > insert into testA partition (part1='US', part2='DEF', part3='200') > values ('12.34', '100', '200', '300', 'abc'), > ('12.341', '1001', '2001', '3001', 'abcd'); > insert into testA 
partition (part1='CA', part2='ABC', part3='300') > values ('12.34', '100', '200', '300', 'abc'), > ('12.341', '1001', '2001', '3001', 'abcd'); > insert into testB partition (part1='CA', part2='ABC', part3='300') > values ('600', '700', 'abc'), ('601', '701', 'abcd'); > insert into testB partition (part1='CA', part2='ABC', part3='400') > values ( '600', '700', 'abc'), ( '601', '701', 'abcd'); > insert into testB partition (part1='UK', part2='PQR', part3='500') > values ('600', '700', 'abc'), ('601', '701', 'abcd'); > insert into testB partition (part1='US', part2='DEF', part3='200') > values ( '600', '700', 'abc'), ('601', '701', 'abcd'); > insert into testB partition (part1='US', part2='PQR', part3='123') > values ( '600', '700', 'abc'), ('601', '701', 'abcd'); > -- views with deterministic functions > create view viewDeterministicUDFA partitioned on (vpart1, vpart2, vpart3) as > select > cast(col1 as decimal(38,18)) as vcol1, > cast(col2 as decimal(38,18)) as vcol2, > cast(col3 as decimal(38,18)) as vcol3, > cast(col4 as decimal(38,18)) as vcol4, > cast(col5 as char(10)) as vcol5, > cast(part1 as char(2)) as vpart1, > cast(part2 as char(3)) as vpart2, > cast(part3 as char(3)) as vpart3 > from testa > where part1 in ('US', 'CA'); > create view viewDeterministicUDFB partitioned on (vpart1, vpart2, vpart3) as > select > cast(cola as decimal(38,18)) as vcolA, > cast(colb as decimal(38,18)) as vcolB, > cast(colc as char(10)) as vcolC, > cast(part1 as char(2)) as vpart1, > cast(part2 as char(3)) as vpart2, > cast(part3 as char(3)) as vpart3 > from testb > where part1 in ('US', 'CA'); > explain > select vcol1, vcol2, vcol3, vcola, vcolb > from viewDeterministicUDFA a inner join viewDeterministicUDFB b > on a.vpart1 = b.vpart1 > and a.vpart2 = b.vpart2 > and a.vpart3 = b.vpart3 > and a.vpart1 = 'US' > and a.vpart2 = 'DEF' > and a.vpart3 = '200'; > {code} > Plan where the CAST is not pushed down. 
> {code} > STAGE PLANS: > Stage: Stage-1 > Map Reduce > Map Operator Tree: > TableScan > alias: testa > filterExpr: (part1) IN ('US', 'CA') (type: boolean) > Statistics: Num rows: 6 Data size: 13740 Basic stats: COMPLETE > Column stats: NONE > Select Operator > expressions: CAST( col1 AS decimal(38,18)) (type: > decimal(38,18)), CAST( col2 AS decimal(38,18)) (type: decimal(38,18)), CAST( > col3 AS decimal(38,18)) (type: decimal(38,18)), CAST( part1 AS CHAR(2)) > (type: char(2)), CAST( part2 AS CHAR(3)) (type: char(3)), CAST( part3 AS > CHAR(3)) (type: char(3)) > outputColumnNames: _col0, _col1, _col2, _col5, _col6, _col7 > Statistics: Num rows: 6 Data size: 13740 Basic stats:
[jira] [Comment Edited] (HIVE-19940) Push predicates with deterministic UDFs with RBO
[ https://issues.apache.org/jira/browse/HIVE-19940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528302#comment-16528302 ] Janaki Lahorani edited comment on HIVE-19940 at 6/29/18 9:57 PM: - [~vgarg] Thanks for looking at this patch. The code added in ExprWalkerProcFactory uses the isDeterministic call from FunctionRegistry, and also checks child expressions referenced within the function. was (Author: janulatha): [~vgarg] Thanks for looking at this patch. The code added in ExprWalkerProcFactory uses the isDeterministic call from FunctionRegistry, but also checks child expressions referenced within the function. > Push predicates with deterministic UDFs with RBO > > > Key: HIVE-19940 > URL: https://issues.apache.org/jira/browse/HIVE-19940 > Project: Hive > Issue Type: Improvement >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani >Priority: Major > Attachments: HIVE-19940.1.patch, HIVE-19940.2.patch > > > With RBO, predicates with any UDF doesn't get pushed down. It makes sense to > not pushdown the predicates with non-deterministic function as the meaning of > the query changes after the predicate is resolved to use the function. But > pushing a deterministic function is beneficial. 
> Test Case: > {code} > set hive.cbo.enable=false; > CREATE TABLE `testb`( >`cola` string COMMENT '', >`colb` string COMMENT '', >`colc` string COMMENT '') > PARTITIONED BY ( >`part1` string, >`part2` string, >`part3` string) > STORED AS AVRO; > CREATE TABLE `testa`( >`col1` string COMMENT '', >`col2` string COMMENT '', >`col3` string COMMENT '', >`col4` string COMMENT '', >`col5` string COMMENT '') > PARTITIONED BY ( >`part1` string, >`part2` string, >`part3` string) > STORED AS AVRO; > insert into testA partition (part1='US', part2='ABC', part3='123') > values ('12.34', '100', '200', '300', 'abc'), > ('12.341', '1001', '2001', '3001', 'abcd'); > insert into testA partition (part1='UK', part2='DEF', part3='123') > values ('12.34', '100', '200', '300', 'abc'), > ('12.341', '1001', '2001', '3001', 'abcd'); > insert into testA partition (part1='US', part2='DEF', part3='200') > values ('12.34', '100', '200', '300', 'abc'), > ('12.341', '1001', '2001', '3001', 'abcd'); > insert into testA partition (part1='CA', part2='ABC', part3='300') > values ('12.34', '100', '200', '300', 'abc'), > ('12.341', '1001', '2001', '3001', 'abcd'); > insert into testB partition (part1='CA', part2='ABC', part3='300') > values ('600', '700', 'abc'), ('601', '701', 'abcd'); > insert into testB partition (part1='CA', part2='ABC', part3='400') > values ( '600', '700', 'abc'), ( '601', '701', 'abcd'); > insert into testB partition (part1='UK', part2='PQR', part3='500') > values ('600', '700', 'abc'), ('601', '701', 'abcd'); > insert into testB partition (part1='US', part2='DEF', part3='200') > values ( '600', '700', 'abc'), ('601', '701', 'abcd'); > insert into testB partition (part1='US', part2='PQR', part3='123') > values ( '600', '700', 'abc'), ('601', '701', 'abcd'); > -- views with deterministic functions > create view viewDeterministicUDFA partitioned on (vpart1, vpart2, vpart3) as > select > cast(col1 as decimal(38,18)) as vcol1, > cast(col2 as decimal(38,18)) as vcol2, > cast(col3 as 
decimal(38,18)) as vcol3, > cast(col4 as decimal(38,18)) as vcol4, > cast(col5 as char(10)) as vcol5, > cast(part1 as char(2)) as vpart1, > cast(part2 as char(3)) as vpart2, > cast(part3 as char(3)) as vpart3 > from testa > where part1 in ('US', 'CA'); > create view viewDeterministicUDFB partitioned on (vpart1, vpart2, vpart3) as > select > cast(cola as decimal(38,18)) as vcolA, > cast(colb as decimal(38,18)) as vcolB, > cast(colc as char(10)) as vcolC, > cast(part1 as char(2)) as vpart1, > cast(part2 as char(3)) as vpart2, > cast(part3 as char(3)) as vpart3 > from testb > where part1 in ('US', 'CA'); > explain > select vcol1, vcol2, vcol3, vcola, vcolb > from viewDeterministicUDFA a inner join viewDeterministicUDFB b > on a.vpart1 = b.vpart1 > and a.vpart2 = b.vpart2 > and a.vpart3 = b.vpart3 > and a.vpart1 = 'US' > and a.vpart2 = 'DEF' > and a.vpart3 = '200'; > {code} > Plan where the CAST is not pushed down. > {code} > STAGE PLANS: > Stage: Stage-1 > Map Reduce > Map Operator Tree: > TableScan > alias: testa > filterExpr: (part1) IN ('US', 'CA') (type: boolean) > Statistics: Num rows: 6 Data size: 13740 Basic stats: COMPLETE > Column stats: NONE > Select Operator > expressions: CAST( col1 AS decimal(38,18)) (type: > decimal(38,18)), CAST( col2 AS decimal(38,18)) (type: decimal(38,18)), CAST( > col3 AS decimal(38,18)) (type: decimal(3
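The rule discussed in the comments above — a predicate may be pushed down only if its function is deterministic and every child expression it references is also pushable — amounts to a recursive walk of the expression tree. A sketch with hypothetical stand-in types (not Hive's actual ExprWalkerProcFactory code):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the pushdown rule: an expression is safe to push
// below a join/view only when every node in its tree is deterministic.
// ExprNode is a hypothetical stand-in for Hive's expression node types.
class ExprNode {
    final String name;
    final boolean deterministic;
    final List<ExprNode> children = new ArrayList<>();

    ExprNode(String name, boolean deterministic) {
        this.name = name;
        this.deterministic = deterministic;
    }

    ExprNode withChild(ExprNode child) {
        children.add(child);
        return this;
    }

    // Mirrors "uses the isDeterministic call ... and also checks child
    // expressions referenced within the function": the whole subtree must
    // be deterministic, not just the top-level function.
    boolean isPushable() {
        if (!deterministic) {
            return false;
        }
        for (ExprNode c : children) {
            if (!c.isPushable()) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        ExprNode cast = new ExprNode("cast", true)
                .withChild(new ExprNode("col1", true));
        System.out.println(cast.isPushable()); // true
    }
}
```

Under this rule, cast(part1 as char(2)) = 'US' from the test case is pushable, while any tree containing a non-deterministic call such as rand() is not, even when the outer function itself is deterministic.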
[jira] [Commented] (HIVE-19989) Metastore uses wrong application name for HADOOP2 metrics
[ https://issues.apache.org/jira/browse/HIVE-19989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528290#comment-16528290 ] Hive QA commented on HIVE-19989: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12929471/HIVE-19989.2.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14632 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12254/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12254/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12254/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12929471 - PreCommit-HIVE-Build > Metastore uses wrong application name for HADOOP2 metrics > - > > Key: HIVE-19989 > URL: https://issues.apache.org/jira/browse/HIVE-19989 > Project: Hive > Issue Type: Bug > Components: Standalone Metastore >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19989.1.patch, HIVE-19989.2.patch > > > Right now it is hardcoded as 'metastore'. It should instead be fetched from > config like it was previously. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20037) Print root cause exception's toString() rather than getMessage()
[ https://issues.apache.org/jira/browse/HIVE-20037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-20037: Description: When we run HoS job and if it fails for some errors, we are printing the exception message rather than exception toString(), for some exceptions, e.g., this java.lang.NoClassDefFoundError, we are missing the exception type information. {noformat} Failed to execute Spark task Stage-1, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark client for Spark session cf054497-b073-4327-a315-68c867ce3434: org/apache/spark/SparkConf)' {noformat} If we use exception's toString(), it will be as follows and make more sense. {noformat} Failed to execute Spark task Stage-1, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark client for Spark session cf054497-b073-4327-a315-68c867ce3434: java.lang.NoClassDefFoundError: org/apache/spark/SparkConf)' {noformat} was: When we run HoS job and if it fails for some errors, we are printing the exception message rather than exception toString(), for some exceptions, e.g., this java.lang.NoClassDefFoundError, we are missing the exception type information. 
{noformat} Failed to execute Spark task Stage-1, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark client for Spark session cf054497-b073-4327-a315-68c867ce3434: org/apache/spark/SparkConf)' {noformat} > Print root cause exception's toString() rather than getMessage() > > > Key: HIVE-20037 > URL: https://issues.apache.org/jira/browse/HIVE-20037 > Project: Hive > Issue Type: Sub-task > Components: Spark >Affects Versions: 3.0.0 >Reporter: Aihua Xu >Assignee: Aihua Xu >Priority: Trivial > > When we run HoS job and if it fails for some errors, we are printing the > exception message rather than exception toString(), for some exceptions, > e.g., this java.lang.NoClassDefFoundError, we are missing the exception type > information. > {noformat} > Failed to execute Spark task Stage-1, with exception > 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark > client for Spark session cf054497-b073-4327-a315-68c867ce3434: > org/apache/spark/SparkConf)' > {noformat} > If we use exception's toString(), it will be as follows and make more sense. > {noformat} > Failed to execute Spark task Stage-1, with exception > 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark > client for Spark session cf054497-b073-4327-a315-68c867ce3434: > java.lang.NoClassDefFoundError: org/apache/spark/SparkConf)' > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-20037) Print root cause exception's toString() rather than getMessage()
[ https://issues.apache.org/jira/browse/HIVE-20037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu reassigned HIVE-20037: --- > Print root cause exception's toString() rather than getMessage() > > > Key: HIVE-20037 > URL: https://issues.apache.org/jira/browse/HIVE-20037 > Project: Hive > Issue Type: Sub-task > Components: Spark >Affects Versions: 3.0.0 >Reporter: Aihua Xu >Assignee: Aihua Xu >Priority: Trivial > > When we run HoS job and if it fails for some errors, we are printing the > exception message rather than exception toString(), for some exceptions, > e.g., this java.lang.NoClassDefFoundError, we are missing the exception type > information. > {noformat} > Failed to execute Spark task Stage-1, with exception > 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark > client for Spark session cf054497-b073-4327-a315-68c867ce3434: > org/apache/spark/SparkConf)' > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19792) Enable schema evolution tests for decimal 64
[ https://issues.apache.org/jira/browse/HIVE-19792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-19792: - Attachment: HIVE-19792.2.patch > Enable schema evolution tests for decimal 64 > > > Key: HIVE-19792 > URL: https://issues.apache.org/jira/browse/HIVE-19792 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19792.1.patch, HIVE-19792.2.patch > > > Following tests are disabled in HIVE-19629 as orc ConvertTreeReaderFactory > does not handle Decimal64ColumnVectors. This jira is to re-enable those tests > after orc supports it. > 1) type_change_test_int_vectorized.q > 2) type_change_test_int.q > 3) orc_schema_evolution_float.q > 4) schema_evol_orc_nonvec_part_all_primitive.q > 5) schema_evol_orc_nonvec_part_all_primitive_llap_io.q > 6) schema_evol_orc_vec_part_all_primitive.q > 7) schema_evol_orc_vec_part_all_primitive_llap_io.q > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19792) Enable schema evolution tests for decimal 64
[ https://issues.apache.org/jira/browse/HIVE-19792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528279#comment-16528279 ] Prasanth Jayachandran commented on HIVE-19792: -- Now that ORC 1.5.2 has been released, I have updated the .2 patch. [~mmccline] can you please take a look? > Enable schema evolution tests for decimal 64 > > > Key: HIVE-19792 > URL: https://issues.apache.org/jira/browse/HIVE-19792 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19792.1.patch, HIVE-19792.2.patch > > > Following tests are disabled in HIVE-19629 as orc ConvertTreeReaderFactory > does not handle Decimal64ColumnVectors. This jira is to re-enable those tests > after orc supports it. > 1) type_change_test_int_vectorized.q > 2) type_change_test_int.q > 3) orc_schema_evolution_float.q > 4) schema_evol_orc_nonvec_part_all_primitive.q > 5) schema_evol_orc_nonvec_part_all_primitive_llap_io.q > 6) schema_evol_orc_vec_part_all_primitive.q > 7) schema_evol_orc_vec_part_all_primitive_llap_io.q > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19792) Enable schema evolution tests for decimal 64
[ https://issues.apache.org/jira/browse/HIVE-19792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-19792: - Status: Patch Available (was: Open) > Enable schema evolution tests for decimal 64 > > > Key: HIVE-19792 > URL: https://issues.apache.org/jira/browse/HIVE-19792 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19792.1.patch, HIVE-19792.2.patch > > > Following tests are disabled in HIVE-19629 as orc ConvertTreeReaderFactory > does not handle Decimal64ColumnVectors. This jira is to re-enable those tests > after orc supports it. > 1) type_change_test_int_vectorized.q > 2) type_change_test_int.q > 3) orc_schema_evolution_float.q > 4) schema_evol_orc_nonvec_part_all_primitive.q > 5) schema_evol_orc_nonvec_part_all_primitive_llap_io.q > 6) schema_evol_orc_vec_part_all_primitive.q > 7) schema_evol_orc_vec_part_all_primitive_llap_io.q > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20033) Backport HIVE-19432 to branch-2, branch-3
[ https://issues.apache.org/jira/browse/HIVE-20033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528268#comment-16528268 ] Rajkumar Singh commented on HIVE-20033: --- [~teddy.choi] patch looks good. > Backport HIVE-19432 to branch-2, branch-3 > - > > Key: HIVE-20033 > URL: https://issues.apache.org/jira/browse/HIVE-20033 > Project: Hive > Issue Type: Bug >Reporter: Teddy Choi >Assignee: Teddy Choi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-20033.1.branch-2.patch, HIVE-20033.1.branch-3.patch > > > Backport HIVE-19432 to branch-2, branch-3 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19989) Metastore uses wrong application name for HADOOP2 metrics
[ https://issues.apache.org/jira/browse/HIVE-19989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528226#comment-16528226 ] Hive QA commented on HIVE-19989: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 2s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 13s{color} | {color:blue} standalone-metastore in master has 228 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 18s{color} | {color:red} standalone-metastore: The patch generated 1 new + 91 unchanged - 0 fixed = 92 total (was 91) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 19m 52s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12254/dev-support/hive-personality.sh | | git revision | master / 35cec21 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-12254/yetus/diff-checkstyle-standalone-metastore.txt | | modules | C: standalone-metastore U: standalone-metastore | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12254/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Metastore uses wrong application name for HADOOP2 metrics > - > > Key: HIVE-19989 > URL: https://issues.apache.org/jira/browse/HIVE-19989 > Project: Hive > Issue Type: Bug > Components: Standalone Metastore >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19989.1.patch, HIVE-19989.2.patch > > > Right now it is hardcoded as 'metastore'. It should instead be fetched from > config like it was previously. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18872) Projection is not pushed properly when query involves multiple tables
[ https://issues.apache.org/jira/browse/HIVE-18872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528195#comment-16528195 ] Hive QA commented on HIVE-18872: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12913234/HIVE-18872.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12252/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12252/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12252/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-06-29 20:30:00.425 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-12252/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-06-29 20:30:00.428 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 35cec21 HIVE-19967 : SMB Join : Need Optraits for PTFOperator ala GBY Op (Deepak Jaiswal, reviewed by Jason Dere) + git clean -f -d + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 35cec21 HIVE-19967 : SMB Join : Need Optraits for PTFOperator ala GBY Op (Deepak Jaiswal, reviewed by Jason Dere) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-06-29 20:30:01.515 + rm -rf ../yetus_PreCommit-HIVE-Build-12252 + mkdir ../yetus_PreCommit-HIVE-Build-12252 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-12252 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-12252/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: a/ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java: does not exist in index error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java:716 Falling back to three-way merge... Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java' with conflicts. Going to apply patch with: git apply -p1 error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java:716 Falling back to three-way merge... Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java' with conflicts. U ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java + result=1 + '[' 1 -ne 0 ']' + rm -rf yetus_PreCommit-HIVE-Build-12252 + exit 1 ' {noformat} This message is automatically generated. 
ATTACHMENT ID: 12913234 - PreCommit-HIVE-Build > Projection is not pushed properly when query involves multiple tables > - > > Key: HIVE-18872 > URL: https://issues.apache.org/jira/browse/HIVE-18872 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Ankit Singhal >Assignee: Ankit Singhal >Priority: Major > Fix For: 3.2.0 > > Attachments: HIVE-18872.patch > > > Projections are not pushed down properly during getSplit() when the query > involves multiple tables, although they are properly pushed during > getRecordReader when the task is working on the split. Due to this, storage > handlers that rely on projections to build queries while generating input > splits do not work. > Here, in the case below, due to the bug we will be pushing ID2 for both the aliases > "A" and "B" during addSplitsForGroup instead of pushing DB for alias "A" and > ID2 only for alias "B". > SELECT A.ID, a.db, B.ID2 from joinTable3 A join joinTable4 B on A.ID = B.ID > WHERE A.ID=10; -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-16041) HCatalog doesn't delete temp _SCRATCH dir when job failed
[ https://issues.apache.org/jira/browse/HIVE-16041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528194#comment-16528194 ] Hive QA commented on HIVE-16041: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12918704/HIVE-16041.3.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14632 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12251/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12251/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12251/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12918704 - PreCommit-HIVE-Build > HCatalog doesn't delete temp _SCRATCH dir when job failed > -- > > Key: HIVE-16041 > URL: https://issues.apache.org/jira/browse/HIVE-16041 > Project: Hive > Issue Type: Bug > Components: HCatalog >Affects Versions: 2.2.0 >Reporter: yunfei liu >Assignee: yunfei liu >Priority: Major > Fix For: 3.2.0 > > Attachments: HIVE-16041.1.patch, HIVE-16041.2.patch, > HIVE-16041.3.patch > > > when we use HCatOutputFormat to write to an external partitioned table, a > tmp dir (which starts with "_SCRATCH" ) will appear under table path if the > job failed. 
> {quote} > drwxr-xr-x - yun hdfs 0 2017-02-27 01:45 > /tmp/hive/_SCRATCH0.31946356159329714 > drwxr-xr-x - yun hdfs 0 2017-02-27 01:51 > /tmp/hive/_SCRATCH0.31946356159329714/c1=1 > drwxr-xr-x - yun hdfs 0 2017-02-27 00:57 /tmp/hive/c1=1 > drwxr-xr-x - yun hdfs 0 2017-02-27 01:28 /tmp/hive/c1=1/c2=2 > -rw-r--r-- 3 yun hdfs 12 2017-02-27 00:57 > /tmp/hive/c1=1/c2=2/part-r-0 > -rw-r--r-- 3 yun hdfs 12 2017-02-27 01:28 > /tmp/hive/c1=1/c2=2/part-r-0_a_1 > {quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
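The fix the report calls for amounts to deleting any leftover {{_SCRATCH*}} directory under the table path when the job fails. The sketch below is illustrative only, not HCatalog's actual committer code (which runs against HDFS through the Hadoop FileSystem API); it uses local java.nio paths so it is self-contained:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ScratchCleanup {
    // On job failure, remove every temporary "_SCRATCH*" directory that was
    // left under the table directory, leaving real partition dirs intact.
    public static void cleanupScratchDirs(Path tableDir) throws IOException {
        List<Path> scratchDirs;
        try (Stream<Path> children = Files.list(tableDir)) {
            scratchDirs = children
                .filter(Files::isDirectory)
                .filter(p -> p.getFileName().toString().startsWith("_SCRATCH"))
                .collect(Collectors.toList());
        }
        for (Path dir : scratchDirs) {
            deleteRecursively(dir);
        }
    }

    private static void deleteRecursively(Path dir) throws IOException {
        // Sort deepest-first so files are deleted before their parent dirs.
        List<Path> paths;
        try (Stream<Path> walk = Files.walk(dir)) {
            paths = walk.sorted(Comparator.reverseOrder()).collect(Collectors.toList());
        }
        for (Path p : paths) {
            Files.delete(p);
        }
    }
}
```

In the real fix this cleanup would hang off the job-abort path (the failure counterpart of commit), so a successful job never touches it.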
[jira] [Updated] (HIVE-18118) Explain Extended should indicate if a file being read is an EC file
[ https://issues.apache.org/jira/browse/HIVE-18118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Sherman updated HIVE-18118: -- Attachment: HIVE-18118.13.patch > Explain Extended should indicate if a file being read is an EC file > --- > > Key: HIVE-18118 > URL: https://issues.apache.org/jira/browse/HIVE-18118 > Project: Hive > Issue Type: Sub-task >Reporter: Sahil Takiar >Assignee: Andrew Sherman >Priority: Major > Attachments: HIVE-18118.1.patch, HIVE-18118.10.patch, > HIVE-18118.10.patch, HIVE-18118.11.patch, HIVE-18118.11.patch, > HIVE-18118.12.patch, HIVE-18118.13.patch, HIVE-18118.2.patch, > HIVE-18118.3.patch, HIVE-18118.4.patch, HIVE-18118.5.patch, > HIVE-18118.6.patch, HIVE-18118.7.patch, HIVE-18118.8.patch, HIVE-18118.9.patch > > > We already print out the files Hive will read in the explain extended > command, we just have to modify it to say whether or not its an EC file. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20034) Roll back MetaStore exception handling changes for backward compatibility
[ https://issues.apache.org/jira/browse/HIVE-20034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528174#comment-16528174 ] Sergey Shelukhin commented on HIVE-20034: - +1 > Roll back MetaStore exception handling changes for backward compatibility > - > > Key: HIVE-20034 > URL: https://issues.apache.org/jira/browse/HIVE-20034 > Project: Hive > Issue Type: Bug >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Minor > Attachments: HIVE-20034.patch > > > HIVE-19418 changed thrown exceptions by HiveMetaStoreClient.createTable, > alterTable method. > For backward compatibility we should revert these changes -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19951) Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
[ https://issues.apache.org/jira/browse/HIVE-19951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528157#comment-16528157 ] Matt McCline commented on HIVE-19951: - #12259 > Vectorization: Need to disable encoded LLAP I/O for ORC when there is data > type conversion (Schema Evolution) > -- > > Key: HIVE-19951 > URL: https://issues.apache.org/jira/browse/HIVE-19951 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-19951.01.patch, HIVE-19951.02.patch, > HIVE-19951.03.patch, HIVE-19951.04.patch, HIVE-19951.05.patch, > HIVE-19951.06.patch, HIVE-19951.07.patch, HIVE-19951.08.patch > > > Currently, reading encoded ORC data does not support data type conversion. > So, encoded reading and cache populating needs to be disabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-16041) HCatalog doesn't delete temp _SCRATCH dir when job failed
[ https://issues.apache.org/jira/browse/HIVE-16041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528125#comment-16528125 ] Hive QA commented on HIVE-16041: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 55s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 35s{color} | {color:blue} hcatalog/core in master has 33 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 13s{color} | {color:red} hcatalog/core: The patch generated 1 new + 41 unchanged - 0 fixed = 42 total (was 41) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 11m 48s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12251/dev-support/hive-personality.sh | | git revision | master / 35cec21 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-12251/yetus/diff-checkstyle-hcatalog_core.txt | | modules | C: hcatalog/core U: hcatalog/core | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12251/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > HCatalog doesn't delete temp _SCRATCH dir when job failed > -- > > Key: HIVE-16041 > URL: https://issues.apache.org/jira/browse/HIVE-16041 > Project: Hive > Issue Type: Bug > Components: HCatalog >Affects Versions: 2.2.0 >Reporter: yunfei liu >Assignee: yunfei liu >Priority: Major > Fix For: 3.2.0 > > Attachments: HIVE-16041.1.patch, HIVE-16041.2.patch, > HIVE-16041.3.patch > > > when we use HCatOutputFormat to write to an external partitioned table, a > tmp dir (which starts with "_SCRATCH" ) will appear under table path if the > job failed. > {quote} > drwxr-xr-x - yun hdfs 0 2017-02-27 01:45 > /tmp/hive/_SCRATCH0.31946356159329714 > drwxr-xr-x - yun hdfs 0 2017-02-27 01:51 > /tmp/hive/_SCRATCH0.31946356159329714/c1=1 > drwxr-xr-x - yun hdfs 0 2017-02-27 00:57 /tmp/hive/c1=1 > drwxr-xr-x - yun hdfs 0 2017-02-27 01:28 /tmp/hive/c1=1/c2=2 > -rw-r--r-- 3 yun hdfs 12 2017-02-27 00:57 > /tmp/hive/c1=1/c2=2/part-r-0 > -rw-r--r-- 3 yun hdfs 12 2017-02-27 01:28 > /tmp/hive/c1=1/c2=2/part-r-0_a_1 > {quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19937) Intern JobConf objects in Spark tasks
[ https://issues.apache.org/jira/browse/HIVE-19937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528117#comment-16528117 ] Sahil Takiar commented on HIVE-19937: - [~mi...@cloudera.com] thanks for the input! I didn't notice {{CopyOnFirstWriteProperties}}, so that helps a lot. For {{CopyOnFirstWriteProperties}} it looks like all the properties get copied into the "super" {{Properties}} object when there is a write; do you think it would be possible to copy only the mutated properties to the super class, rather than all of them? My concern is that it's probably a common case for this properties object to get mutated, in which case copying the entire thing would defeat the purpose of interning. I also noticed that in {{PartitionDesc#internProperties}} only the value is being interned, but not the key; was that intentional? Code is below: {code:java}
private static void internProperties(Properties properties) {
  for (Enumeration keys = properties.propertyNames(); keys.hasMoreElements();) {
    String key = (String) keys.nextElement();
    String oldValue = properties.getProperty(key);
    if (oldValue != null) {
      properties.setProperty(key, oldValue.intern());
    }
  }
}
{code} I'm working on creating a test that can easily measure the impact of this change. > Intern JobConf objects in Spark tasks > - > > Key: HIVE-19937 > URL: https://issues.apache.org/jira/browse/HIVE-19937 > Project: Hive > Issue Type: Improvement > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-19937.1.patch > > > When fixing HIVE-16395, we decided that each new Spark task should clone the > {{JobConf}} object to prevent any {{ConcurrentModificationException}} from > being thrown. However, setting this variable comes at a cost of storing a > duplicate {{JobConf}} object for each Spark task. 
These objects can take up a > significant amount of memory, we should intern them so that Spark tasks > running in the same JVM don't store duplicate copies. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
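The key-interning question raised in the comment above can be sketched as follows. {{internKeysAndValues}} is a hypothetical variant of the quoted loop, not Hive's actual {{PartitionDesc}} code; it assumes the {{Properties}} object holds only String entries. Because {{Hashtable.put}} keeps the first equal key object, the entry has to be removed and re-inserted to swap in the interned key ({{propertyNames()}} returns a snapshot, so removing during iteration is safe):

```java
import java.util.Enumeration;
import java.util.Properties;

public class InternProps {
    // Hypothetical variant that interns both the key and the value, so two
    // Properties objects with equal entries share the same String instances.
    public static void internKeysAndValues(Properties properties) {
        for (Enumeration<?> keys = properties.propertyNames(); keys.hasMoreElements();) {
            String key = (String) keys.nextElement();
            String value = properties.getProperty(key);
            if (value != null) {
                // remove + re-put so the stored key object is the interned one
                properties.remove(key);
                properties.setProperty(key.intern(), value.intern());
            }
        }
    }
}
```

Whether interning keys is worthwhile depends on how the Properties are built; keys coming from string literals or a shared config are often already interned, which may be why the original loop skips them.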
[jira] [Commented] (HIVE-19975) Checking writeIdList per table may not check the commit level of a partition on a partitioned table
[ https://issues.apache.org/jira/browse/HIVE-19975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528116#comment-16528116 ] Sergey Shelukhin commented on HIVE-19975: - I was also removing the fields... we shouldn't be doing the same work in parallel. Can you take a look at the test issue? There are invalid results from the original patch (HIVE-20005). > Checking writeIdList per table may not check the commit level of a partition > on a partitioned table > --- > > Key: HIVE-19975 > URL: https://issues.apache.org/jira/browse/HIVE-19975 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 4.0.0 >Reporter: Steve Yeom >Assignee: Sergey Shelukhin >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-19975.01.patch > > > writeIdList is a per-table entity but stats for a partitioned table are per > partition. > I.e., each record in PARTITIONS has independent stats. > So if we check the validity of a partition's stats, we need to check in the > context of > a partition. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19975) Checking writeIdList per table may not check the commit level of a partition on a partitioned table
[ https://issues.apache.org/jira/browse/HIVE-19975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528115#comment-16528115 ] Sergey Shelukhin commented on HIVE-19975: - Yeah, that's the fix I was also making yesterday; I need to clean up and run some tests. > Checking writeIdList per table may not check the commit level of a partition > on a partitioned table > --- > > Key: HIVE-19975 > URL: https://issues.apache.org/jira/browse/HIVE-19975 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 4.0.0 >Reporter: Steve Yeom >Assignee: Steve Yeom >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-19975.01.patch > > > writeIdList is a per-table entity but stats for a partitioned table are per > partition. > I.e., each record in PARTITIONS has independent stats. > So if we check the validity of a partition's stats, we need to check in the > context of > a partition. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19975) Checking writeIdList per table may not check the commit level of a partition on a partitioned table
[ https://issues.apache.org/jira/browse/HIVE-19975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-19975: --- Assignee: Sergey Shelukhin (was: Steve Yeom) > Checking writeIdList per table may not check the commit level of a partition > on a partitioned table > --- > > Key: HIVE-19975 > URL: https://issues.apache.org/jira/browse/HIVE-19975 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 4.0.0 >Reporter: Steve Yeom >Assignee: Sergey Shelukhin >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-19975.01.patch > > > writeIdList is a per-table entity but stats for a partitioned table are per > partition. > I.e., each record in PARTITIONS has independent stats. > So if we check the validity of a partition's stats, we need to check in the > context of > a partition. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17502) Reuse of default session should not throw an exception in LLAP w/ Tez
[ https://issues.apache.org/jira/browse/HIVE-17502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528111#comment-16528111 ] Hive QA commented on HIVE-17502: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12918499/HIVE-17502.3.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14633 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12250/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12250/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12250/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12918499 - PreCommit-HIVE-Build > Reuse of default session should not throw an exception in LLAP w/ Tez > - > > Key: HIVE-17502 > URL: https://issues.apache.org/jira/browse/HIVE-17502 > Project: Hive > Issue Type: Bug > Components: llap, Tez >Affects Versions: 2.1.1, 2.2.0 > Environment: HDP 2.6.1.0-129, Hue 4 >Reporter: Thai Bui >Assignee: Thai Bui >Priority: Major > Fix For: 3.2.0 > > Attachments: HIVE-17502.2.patch, HIVE-17502.3.patch, HIVE-17502.patch > > > Hive2 w/ LLAP on Tez doesn't allow a currently used, default session to be > skipped mostly because of this line > https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionPoolManager.java#L365. > However, some clients such as Hue 4, allow multiple sessions to be used per > user. Under this configuration, a Thrift client will send a request to either > reuse or open a new session. 
The reuse request could include the session id > of a currently used snippet being executed in Hue, this causes HS2 to throw > an exception: > {noformat} > 2017-09-10T17:51:36,548 INFO [Thread-89]: tez.TezSessionPoolManager > (TezSessionPoolManager.java:canWorkWithSameSession(512)) - The current user: > hive, session user: hive > 2017-09-10T17:51:36,549 ERROR [Thread-89]: exec.Task > (TezTask.java:execute(232)) - Failed to execute tez graph. > org.apache.hadoop.hive.ql.metadata.HiveException: The pool session > sessionId=5b61a578-6336-41c5-860d-9838166f97fe, queueName=llap, user=hive, > doAs=false, isOpen=true, isDefault=true, expires in 591015330ms should have > been returned to the pool > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.canWorkWithSameSession(TezSessionPoolManager.java:534) > ~[hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129] > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.getSession(TezSessionPoolManager.java:544) > ~[hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129] > at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:147) > [hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129] > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197) > [hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129] > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) > [hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129] > at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:79) > [hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129] > {noformat} > Note that every query is issued as a single 'hive' user to share the LLAP > daemon pool, a set of pre-determined number of AMs is initialized at setup > time. Thus, HS2 should allow new sessions from a Thrift client to be used out > of the pool, or an existing session to be skipped and an unused session from > the pool to be returned. The logic to throw an exception in the > `canWorkWithSameSession` doesn't make sense to me. 
> I have a solution to fix this issue in my local branch at > https://github.com/thaibui/hive/commit/078a521b9d0906fe6c0323b63e567f6eee2f3a70. > When applied, the log will become like so > {noformat} > 2017-09-10T09:15:33,578 INFO [Thread-239]: tez.TezSessionPoolManager > (TezSessionPoolManager.java:canWorkWithSameSession(533)) - Skipping default > session sessionId=6638b1da-0f8a-405e-85f0-9586f484e6de, queueName=llap, > user=hive, doAs=false, isOpen=true, isDefault=true, expires in 591868732ms > since it is being used. > {noformat} > A test case is provided in my branch to demonstrate how it works. If possible > I would like this patch to be applied to version 2.1, 2.2 and master. Since > we are using 2.1 LLAP in production.
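The behavior argued for above, skipping a busy default session instead of throwing, can be illustrated with a toy pool model. This is not Hive's actual {{TezSessionPoolManager}}; the {{SessionPool}} and {{Session}} classes below are hypothetical stand-ins for the idea only:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class SessionPool {
    public static class Session {
        public final String id;
        public final boolean isDefault;
        public boolean inUse;
        public Session(String id, boolean isDefault) {
            this.id = id;
            this.isDefault = isDefault;
        }
    }

    private final Queue<Session> free = new ArrayDeque<>();

    public void add(Session s) { free.add(s); }

    public Session getSession(Session requested) {
        if (requested != null && !requested.inUse) {
            free.remove(requested);   // normal path: reuse the idle session
            requested.inUse = true;
            return requested;
        }
        if (requested != null && requested.isDefault) {
            // The behavior being criticized threw "should have been returned
            // to the pool" here; the proposed behavior logs and falls through
            // to hand out a different session.
            System.out.println("Skipping default session " + requested.id
                + " since it is being used.");
        }
        Session s = free.poll();      // fall back to an unused pooled session
        if (s == null) {
            throw new IllegalStateException("no free session available");
        }
        s.inUse = true;
        return s;
    }
}
```

With this shape, a client like Hue that asks to reuse a session still executing another snippet simply gets a different AM from the pool rather than a query failure.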
[jira] [Commented] (HIVE-17502) Reuse of default session should not throw an exception in LLAP w/ Tez
[ https://issues.apache.org/jira/browse/HIVE-17502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528068#comment-16528068 ] Hive QA commented on HIVE-17502: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 36s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 21s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 19s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 34s{color} | {color:blue} common in master has 64 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 0s{color} | {color:blue} ql in master has 2287 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 27m 27s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12250/dev-support/hive-personality.sh | | git revision | master / 35cec21 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: common ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12250/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Reuse of default session should not throw an exception in LLAP w/ Tez > - > > Key: HIVE-17502 > URL: https://issues.apache.org/jira/browse/HIVE-17502 > Project: Hive > Issue Type: Bug > Components: llap, Tez >Affects Versions: 2.1.1, 2.2.0 > Environment: HDP 2.6.1.0-129, Hue 4 >Reporter: Thai Bui >Assignee: Thai Bui >Priority: Major > Fix For: 3.2.0 > > Attachments: HIVE-17502.2.patch, HIVE-17502.3.patch, HIVE-17502.patch > > > Hive2 w/ LLAP on Tez doesn't allow a currently used, default session to be > skipped mostly because of this line > https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionPoolManager.java#L365. > However, some clients such as Hue 4, allow multiple sessions to be used per > user. Under this configuration, a Thrift client will send a request to either > reuse or open a new session. The reuse request could include the session id > of a currently used snippet being executed in Hue, this causes HS2 to throw > an exception: > {noformat} > 2017-09-10T17:51:36,548 INFO
[jira] [Commented] (HIVE-18728) Secure webHCat with SSL
[ https://issues.apache.org/jira/browse/HIVE-18728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528029#comment-16528029 ] Hive QA commented on HIVE-18728: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12915463/HIVE-18728.3.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 10 failed/errored test(s), 14630 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[vector_outer_join5] (batchId=187) org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedDynamicPartitions (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedDynamicPartitionsMultiInsert (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedDynamicPartitionsUnionAll (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerVertexRawInputSplitsNoKill (batchId=247) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12249/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12249/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12249/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing 
org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 10 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12915463 - PreCommit-HIVE-Build > Secure webHCat with SSL > --- > > Key: HIVE-18728 > URL: https://issues.apache.org/jira/browse/HIVE-18728 > Project: Hive > Issue Type: New Feature > Components: Security >Reporter: Oleksiy Sayankin >Assignee: Oleksiy Sayankin >Priority: Major > Fix For: 3.2.0 > > Attachments: HIVE-18728.1.patch, HIVE-18728.2.patch, > HIVE-18728.3.patch > > > Doc for the issue: > *Configure WebHCat server to use SSL encryption* > You can configure the WebHCat REST API to use SSL (Secure Sockets Layer) > encryption. The following WebHCat properties are added to enable SSL. > {{templeton.use.ssl}} > Default value: {{false}} > Description: Set this to true for using SSL encryption for WebHCat server > {{templeton.keystore.path}} > Default value: {{}} > Description: SSL certificate keystore location for WebHCat server > {{templeton.keystore.password}} > Default value: {{}} > Description: SSL certificate keystore password for WebHCat server > {{templeton.ssl.protocol.blacklist}} > Default value: {{SSLv2,SSLv3}} > Description: SSL versions to disable for WebHCat server > {{templeton.host}} > Default value: {{0.0.0.0}} > Description: The host address the WebHCat server will listen on. > *Modifying the {{webhcat-site.xml}} file* > Configure the following properties in the {{webhcat-site.xml}} file to enable > SSL encryption on each node where WebHCat is installed:
> {code}
> <property>
>   <name>templeton.use.ssl</name>
>   <value>true</value>
> </property>
> <property>
>   <name>templeton.keystore.path</name>
>   <value>/path/to/ssl_keystore</value>
> </property>
> <property>
>   <name>templeton.keystore.password</name>
>   <value>password</value>
> </property>
> {code}
> *Example:* To check the status of a WebHCat server configured for SSL encryption, use the following command
> {code}
> curl -k 'https://<user>:<password>@<host>:50111/templeton/v1/status'
> {code}
> replace {{<user>}} and {{<password>}} with a valid user/password. Replace > {{<host>}} with your host name. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19967) SMB Join : Need Optraits for PTFOperator ala GBY Op
[ https://issues.apache.org/jira/browse/HIVE-19967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-19967: -- Attachment: HIVE-19967.01-branch-03.patch > SMB Join : Need Optraits for PTFOperator ala GBY Op > --- > > Key: HIVE-19967 > URL: https://issues.apache.org/jira/browse/HIVE-19967 > Project: Hive > Issue Type: Task >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Fix For: 4.0.0, 3.2.0 > > Attachments: HIVE-19967.01-branch-03.patch, HIVE-19967.1.patch, > HIVE-19967.2.patch, HIVE-19967.3.patch, HIVE-19967.4.patch, > HIVE-19967.5.patch, HIVE-19967.6.patch, HIVE-19967.7.patch, HIVE-19967.8.patch > > > The SMB join on one or more PTF Ops should reset the optraits keys just like > GBY Op does. > Currently there is no implementation of PTFOp optraits. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19967) SMB Join : Need Optraits for PTFOperator ala GBY Op
[ https://issues.apache.org/jira/browse/HIVE-19967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528010#comment-16528010 ] Deepak Jaiswal commented on HIVE-19967: --- Committed to master. Thanks [~jdere] for the review. Preparing for branch-3 > SMB Join : Need Optraits for PTFOperator ala GBY Op > --- > > Key: HIVE-19967 > URL: https://issues.apache.org/jira/browse/HIVE-19967 > Project: Hive > Issue Type: Task >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Fix For: 4.0.0, 3.2.0 > > Attachments: HIVE-19967.1.patch, HIVE-19967.2.patch, > HIVE-19967.3.patch, HIVE-19967.4.patch, HIVE-19967.5.patch, > HIVE-19967.6.patch, HIVE-19967.7.patch, HIVE-19967.8.patch > > > The SMB join on one or more PTF Ops should reset the optraits keys just like > GBY Op does. > Currently there is no implementation of PTFOp optraits. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19983) Backport HIVE-19769 to branch-3
[ https://issues.apache.org/jira/browse/HIVE-19983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Gates updated HIVE-19983: -- Affects Version/s: (was: 3.1.0) 3.2.0 Status: Patch Available (was: Open) > Backport HIVE-19769 to branch-3 > --- > > Key: HIVE-19983 > URL: https://issues.apache.org/jira/browse/HIVE-19983 > Project: Hive > Issue Type: Bug > Components: storage-api >Affects Versions: 3.2.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > Attachments: HIVE-19983-branch-3.patch > > > This patch will be needed for other catalog related work to be backported to > branch-3. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19983) Backport HIVE-19769 to branch-3
[ https://issues.apache.org/jira/browse/HIVE-19983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Gates updated HIVE-19983: -- Attachment: HIVE-19983-branch-3.patch > Backport HIVE-19769 to branch-3 > --- > > Key: HIVE-19983 > URL: https://issues.apache.org/jira/browse/HIVE-19983 > Project: Hive > Issue Type: Bug > Components: storage-api >Affects Versions: 3.2.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > Attachments: HIVE-19983-branch-3.patch > > > This patch will be needed for other catalog related work to be backported to > branch-3. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18916) SparkClientImpl doesn't error out if spark-submit fails
[ https://issues.apache.org/jira/browse/HIVE-18916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16527997#comment-16527997 ] Sahil Takiar commented on HIVE-18916: - There was a clean run on the 18th; since then there have only been minor formatting fixes to make checkstyle pass. I don't think it's worth the effort to get another clean run given how slow Hive QA is. Unless anyone objects, I'll merge this later today. > SparkClientImpl doesn't error out if spark-submit fails > --- > > Key: HIVE-18916 > URL: https://issues.apache.org/jira/browse/HIVE-18916 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-18916.1.WIP.patch, HIVE-18916.2.patch, > HIVE-18916.3.patch, HIVE-18916.4.patch, HIVE-18916.5.patch, HIVE-18916.6.patch > > > If {{spark-submit}} returns a non-zero exit code, {{SparkClientImpl}} will > simply log the exit code, but won't throw an error. Eventually, the > connection timeout will get triggered and an exception like {{Timed out > waiting for client connection}} will be logged, which is pretty misleading. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
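The failure mode described in HIVE-18916 (a child process's non-zero exit status is logged and swallowed, and the real error surfaces later as a misleading connection timeout) can be sketched as below. This is a minimal illustration of the fail-fast pattern, not Hive's actual {{SparkClientImpl}} code; the class and method names are hypothetical.

```java
import java.io.IOException;

// Hypothetical sketch: fail fast on a non-zero child-process exit code
// instead of only logging it and waiting for a later timeout to fire.
public class ChildProcessMonitor {

    // Throws immediately if the child exited with a non-zero code.
    public static void checkExitCode(String command, int exitCode) {
        if (exitCode != 0) {
            throw new IllegalStateException(
                "Process '" + command + "' failed with exit code " + exitCode);
        }
    }

    // Runs a command and applies the check above once it finishes.
    public static void runAndCheck(String... command)
            throws IOException, InterruptedException {
        Process p = new ProcessBuilder(command).inheritIO().start();
        checkExitCode(String.join(" ", command), p.waitFor());
    }
}
```

With this shape, a failed launch raises an exception at the point of failure rather than being discovered indirectly.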
[jira] [Commented] (HIVE-19581) view do not support unicode characters well
[ https://issues.apache.org/jira/browse/HIVE-19581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16527996#comment-16527996 ] Naveen Gangam commented on HIVE-19581: -- Thanks [~asherman]. Patch looks good to me. +1 pending test results. > view do not support unicode characters well > --- > > Key: HIVE-19581 > URL: https://issues.apache.org/jira/browse/HIVE-19581 > Project: Hive > Issue Type: Bug >Affects Versions: 1.1.0 >Reporter: kai >Assignee: Andrew Sherman >Priority: Major > Attachments: HIVE-19581.1.patch, HIVE-19581.2.patch, > HIVE-19581.3.patch, HIVE-19581.4.patch, HIVE-19581.5.patch, > HIVE-19581.6.patch, explain.png, metastore.png > > > create table t_test (name string); > insert into table t_test VALUES ('李四'); > create view t_view_test as select * from t_test where name='李四'; > when selecting * from t_view_test, no records are returned. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18118) Explain Extended should indicate if a file being read is an EC file
[ https://issues.apache.org/jira/browse/HIVE-18118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16527986#comment-16527986 ] Andrew Sherman commented on HIVE-18118: --- Thanks [~stakiar] for the review. Rebasing today, I have merge conflicts, so I think I will have to do another patch. > Explain Extended should indicate if a file being read is an EC file > --- > > Key: HIVE-18118 > URL: https://issues.apache.org/jira/browse/HIVE-18118 > Project: Hive > Issue Type: Sub-task >Reporter: Sahil Takiar >Assignee: Andrew Sherman >Priority: Major > Attachments: HIVE-18118.1.patch, HIVE-18118.10.patch, > HIVE-18118.10.patch, HIVE-18118.11.patch, HIVE-18118.11.patch, > HIVE-18118.12.patch, HIVE-18118.2.patch, HIVE-18118.3.patch, > HIVE-18118.4.patch, HIVE-18118.5.patch, HIVE-18118.6.patch, > HIVE-18118.7.patch, HIVE-18118.8.patch, HIVE-18118.9.patch > > > We already print out the files Hive will read in the explain extended > command, we just have to modify it to say whether or not it's an EC file. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18728) Secure webHCat with SSL
[ https://issues.apache.org/jira/browse/HIVE-18728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16527972#comment-16527972 ] Hive QA commented on HIVE-18728: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 0s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 35s{color} | {color:blue} hcatalog/webhcat/svr in master has 96 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s{color} | {color:red} hcatalog/webhcat/svr: The patch generated 3 new + 2 unchanged - 71 fixed = 5 total (was 73) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 11m 44s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12249/dev-support/hive-personality.sh | | git revision | master / 2b0cb07 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-12249/yetus/diff-checkstyle-hcatalog_webhcat_svr.txt | | modules | C: hcatalog/webhcat/svr U: hcatalog/webhcat/svr | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12249/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Secure webHCat with SSL > --- > > Key: HIVE-18728 > URL: https://issues.apache.org/jira/browse/HIVE-18728 > Project: Hive > Issue Type: New Feature > Components: Security >Reporter: Oleksiy Sayankin >Assignee: Oleksiy Sayankin >Priority: Major > Fix For: 3.2.0 > > Attachments: HIVE-18728.1.patch, HIVE-18728.2.patch, > HIVE-18728.3.patch > > > Doc for the issue: > *Configure WebHCat server to use SSL encryption* > You can configure WebHCat REST-API to use SSL (Secure Sockets Layer) > encryption. The following WebHCat properties are added to enable SSL. 
> {{templeton.use.ssl}} > Default value: {{false}} > Description: Set this to true for using SSL encryption for WebHCat server > {{templeton.keystore.path}} > Default value: {{}} > Description: SSL certificate keystore location for WebHCat server > {{templeton.keystore.password}} > Default value: {{}} > Description: SSL certificate keystore password for WebHCat server > {{templeton.ssl.protocol.blacklist}} > Default value: {{SSLv2,SSLv3}} > Description: SSL Versions to disable for WebHCat server > {{templeton.host}} > Default value: {{0.0.0.0}} > Description: The host address the WebHCat server will listen on. > *Modifying the {{webhcat-site.xml}} file* > Configure the following properties in the {{webhcat-site.xml}} file to enable > SSL encryption on each node where WebHCat is installed. 
[jira] [Updated] (HIVE-19806) Several tests do not properly sort their output
[ https://issues.apache.org/jira/browse/HIVE-19806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Gates updated HIVE-19806: -- Attachment: HIVE-19806.2.patch > Several tests do not properly sort their output > --- > > Key: HIVE-19806 > URL: https://issues.apache.org/jira/browse/HIVE-19806 > Project: Hive > Issue Type: Bug > Components: Test >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > Attachments: HIVE-19806.2.patch, HIVE-19806.patch > > > A number of the tests produce unsorted output that happens to come out the > same on people's laptops and the ptest infrastructure. But when run on a > separate linux box the sort differences show up. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19806) Several tests do not properly sort their output
[ https://issues.apache.org/jira/browse/HIVE-19806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Gates updated HIVE-19806: -- Attachment: (was: HIVE-19806.2.patch) > Several tests do not properly sort their output > --- > > Key: HIVE-19806 > URL: https://issues.apache.org/jira/browse/HIVE-19806 > Project: Hive > Issue Type: Bug > Components: Test >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > Attachments: HIVE-19806.patch > > > A number of the tests produce unsorted output that happens to come out the > same on people's laptops and the ptest infrastructure. But when run on a > separate linux box the sort differences show up. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19806) Several tests do not properly sort their output
[ https://issues.apache.org/jira/browse/HIVE-19806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Gates updated HIVE-19806: -- Attachment: HIVE-19806.2.patch > Several tests do not properly sort their output > --- > > Key: HIVE-19806 > URL: https://issues.apache.org/jira/browse/HIVE-19806 > Project: Hive > Issue Type: Bug > Components: Test >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > Attachments: HIVE-19806.2.patch, HIVE-19806.patch > > > A number of the tests produce unsorted output that happens to come out the > same on people's laptops and the ptest infrastructure. But when run on a > separate linux box the sort differences show up. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
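The HIVE-19806 description above explains that several tests pass only because unsorted output happens to come out in the same order on certain machines. A standard fix pattern is to canonicalize output by sorting before comparison; the sketch below illustrates that pattern in isolation and is not Hive's actual test-harness code.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch: sort test output before comparing it, so results that
// are identical as sets do not fail on machines whose ordering differs.
public class SortedOutput {

    // Returns the lines in a deterministic (sorted) order for comparison.
    public static List<String> canonicalize(List<String> lines) {
        String[] copy = lines.toArray(new String[0]);
        Arrays.sort(copy);
        return Arrays.asList(copy);
    }
}
```

A test would then assert on {{canonicalize(actual)}} against a sorted expected list, making the comparison independent of iteration order.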
[jira] [Commented] (HIVE-18668) Really shade guava in ql
[ https://issues.apache.org/jira/browse/HIVE-18668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16527944#comment-16527944 ] Hive QA commented on HIVE-18668: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12914288/HIVE-18668.02.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1152 failed/errored test(s), 9600 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestBeeLineDriver.org.apache.hadoop.hive.cli.TestBeeLineDriver (batchId=261) org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[colstats_all_nulls] (batchId=261) org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[create_merge_compressed] (batchId=261) org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[drop_with_concurrency] (batchId=261) org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[escape_comments] (batchId=261) org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[explain_outputs] (batchId=261) org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[insert_overwrite_local_directory_1] (batchId=261) org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[mapjoin2] (batchId=261) org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[materialized_view_create_rewrite] (batchId=261) org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[select_dummy_source] (batchId=261) org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_10] (batchId=261) org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_11] (batchId=261) org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_12] (batchId=261) org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_13] (batchId=261) org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_16] (batchId=261) org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_1] 
(batchId=261) org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_2] (batchId=261) org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_3] (batchId=261) org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_7] (batchId=261) org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[udf_unix_timestamp] (batchId=261) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[buckets] (batchId=264) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[create_database] (batchId=264) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[create_like] (batchId=264) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ctas_blobstore_to_blobstore] (batchId=264) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ctas_blobstore_to_hdfs] (batchId=264) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ctas_hdfs_to_blobstore] (batchId=264) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[explain] (batchId=264) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[having] (batchId=264) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_addpartition_blobstore_to_blobstore] (batchId=264) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_addpartition_blobstore_to_local] (batchId=264) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_addpartition_blobstore_to_warehouse] (batchId=264) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_addpartition_local_to_blobstore] (batchId=264) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_blobstore] (batchId=264) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_blobstore_nonpart] (batchId=264) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_local] (batchId=264) 
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_warehouse] (batchId=264) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_warehouse_nonpart] (batchId=264) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_local_to_blobstore] (batchId=264) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_blobstore_to_blobstore] (batchId=264) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_empty_into_blobstore] (batchId=264) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_into_dynamic_partitions] (batchId=264) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_into_table] (batchId=264) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_directory] (batchId=264) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_dynamic_partitions] (batchId=
[jira] [Commented] (HIVE-18668) Really shade guava in ql
[ https://issues.apache.org/jira/browse/HIVE-18668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16527909#comment-16527909 ] Hive QA commented on HIVE-18668: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 8s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 14m 12s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc xml compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12248/dev-support/hive-personality.sh | | git revision | master / 2b0cb07 | | Default Java | 1.8.0_111 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12248/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Really shade guava in ql > > > Key: HIVE-18668 > URL: https://issues.apache.org/jira/browse/HIVE-18668 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Fix For: 3.2.0 > > Attachments: HIVE-18668.01.patch, HIVE-18668.02.patch > > > After HIVE-15393 a test started to fail in druid; after some investigation it > turned out that ql doesn't shade it's guava artifact at all...because it > shades 'com.google.guava' instead 'com.google.common' -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20006) Make materializations invalidation cache work with multiple active remote metastores
[ https://issues.apache.org/jira/browse/HIVE-20006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-20006: --- Attachment: HIVE-20006.01.patch > Make materializations invalidation cache work with multiple active remote > metastores > > > Key: HIVE-20006 > URL: https://issues.apache.org/jira/browse/HIVE-20006 > Project: Hive > Issue Type: Improvement > Components: Materialized views >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Critical > Attachments: HIVE-19027.01.patch, HIVE-19027.02.patch, > HIVE-19027.03.patch, HIVE-19027.04.patch, HIVE-20006.01.patch, > HIVE-20006.patch > > > The main points: > - Only MVs stored in transactional tables can have a time window value of 0. > Those are the only MVs that can be guaranteed to not be outdated when a query > is executed, if we use custom storage handlers to store the materialized > view, we cannot make any promises. > - For MVs that +cannot be outdated+, we do not check the metastore. Instead, > comparison is based on valid write id lists. > - For MVs that +can be outdated+, we still rely on the invalidation cache. > ** The window for valid outdated MVs can be specified in intervals of 1 > minute (less than that, it is difficult to have any guarantees about whether > the MV is actually outdated by less than a minute or not). > ** The async loading is done every interval / 2 (or probably better, we can > make it configurable). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
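The HIVE-20006 description above says the validity window for outdated MVs is specified in whole-minute intervals and the async loading runs every interval / 2. The arithmetic can be sketched as below; the class and method names are illustrative, not Hive's actual scheduler code.

```java
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the scheduling arithmetic described above:
// the MV validity window is given in whole minutes, and the async
// refresh runs at half that interval.
public class RefreshSchedule {

    // Validity window in minutes -> async refresh period in milliseconds.
    public static long refreshPeriodMillis(long windowMinutes) {
        if (windowMinutes < 1) {
            // Windows below one minute offer no outdatedness guarantee.
            throw new IllegalArgumentException("window must be at least 1 minute");
        }
        return TimeUnit.MINUTES.toMillis(windowMinutes) / 2;
    }
}
```

For a one-minute window this yields a 30-second refresh period, matching the interval / 2 rule stated in the description.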
[jira] [Commented] (HIVE-18279) Incorrect condition in StatsOpimizer
[ https://issues.apache.org/jira/browse/HIVE-18279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16527894#comment-16527894 ] Hive QA commented on HIVE-18279: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12902099/HIVE-18279.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 14630 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_empty_into_blobstore] (batchId=264) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_partitioned] (batchId=3) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin5] (batchId=88) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[columnStatsUpdateForStatsOptimizer_2] (batchId=31) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[constGby] (batchId=9) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[fileformat_mix] (batchId=58) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[input24] (batchId=46) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[truncate_table] (batchId=84) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12247/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12247/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12247/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 8 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12902099 - PreCommit-HIVE-Build > Incorrect condition in StatsOpimizer > > > Key: HIVE-18279 > URL: https://issues.apache.org/jira/browse/HIVE-18279 > Project: Hive > Issue Type: Bug > Components: Statistics >Reporter: Oleksiy Sayankin >Assignee: Oleksiy Sayankin >Priority: Major > Fix For: 3.2.0 > > Attachments: HIVE-18279.1.patch > > > At the moment {{StatsOpimizer}} has code > {code} > if (rowCnt == null) { > // if rowCnt < 1 than its either empty table or table on which > stats are not > // computed We assume the worse and don't attempt to optimize. > Logger.debug("Table doesn't have up to date stats " + > tbl.getTableName()); > rowCnt = null; > } > {code} > in method {{private Long getRowCnt()}}. Condition > {code} > if (rowCnt == null) { > {code} > should be changed to > {code} > if (rowCnt == null || rowCnt == 0) { > {code} > because 0 value also means that table stats may not be computed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
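The proposed fix is a one-line condition change: a row count of 0 must be treated like null, because 0 is also what uncomputed stats report. A minimal hedged sketch of the corrected guard (class and method names are illustrative, not Hive's actual StatsOptimizer code):

```java
// Hypothetical sketch of the corrected row-count guard from HIVE-18279.
// A row count of 0 can mean either a truly empty table or a table whose
// stats were never computed, so it must be rejected just like null.
public class RowCountGuard {

    /** Returns a usable row count, or null when stats cannot be trusted. */
    public static Long usableRowCount(Long rowCnt) {
        // Original check was only (rowCnt == null); 0 must also be rejected.
        if (rowCnt == null || rowCnt == 0) {
            return null; // assume the worst and skip the optimization
        }
        return rowCnt;
    }

    public static void main(String[] args) {
        System.out.println(usableRowCount(null)); // null
        System.out.println(usableRowCount(0L));   // null
        System.out.println(usableRowCount(42L));  // 42
    }
}
```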
[jira] [Commented] (HIVE-20035) write booleans as long when serializing to druid
[ https://issues.apache.org/jira/browse/HIVE-20035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16527878#comment-16527878 ] Jesus Camacho Rodriguez commented on HIVE-20035: +1 > write booleans as long when serializing to druid > > > Key: HIVE-20035 > URL: https://issues.apache.org/jira/browse/HIVE-20035 > Project: Hive > Issue Type: Bug >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa >Priority: Major > Attachments: HIVE-20035.1.patch, HIVE-20035.patch > > > Druid expressions do not support booleans yet. > In druid expressions booleans are treated and parsed from longs, however when > we store booleans from hive they are serialized as 'true' and 'false' string > values. > Need to make serialization consistent with deserialization and write long > values when sending data to druid. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
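The idea in HIVE-20035 is to make the write path consistent with Druid's long-based boolean handling. A hedged sketch of that mapping (the class and method names are hypothetical, not the actual Hive serde code):

```java
// Hypothetical sketch of HIVE-20035's idea: when writing to Druid, emit
// booleans as longs (1/0) instead of the strings "true"/"false", so that
// Druid expressions, which parse booleans from longs, can read them back.
public class DruidBooleanSerde {

    /** Serialize a Hive boolean for Druid as a long: true -> 1, false -> 0. */
    public static long toDruidLong(boolean value) {
        return value ? 1L : 0L;
    }

    /** Deserialize Druid's long representation back to a boolean. */
    public static boolean fromDruidLong(long value) {
        return value != 0L;
    }

    public static void main(String[] args) {
        // Round-trips are consistent, unlike the old "true"/"false" strings.
        System.out.println(toDruidLong(true));                 // 1
        System.out.println(fromDruidLong(toDruidLong(false))); // false
    }
}
```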
[jira] [Updated] (HIVE-20035) write booleans as long when serializing to druid
[ https://issues.apache.org/jira/browse/HIVE-20035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nishant Bangarwa updated HIVE-20035: Attachment: HIVE-20035.1.patch > write booleans as long when serializing to druid > > > Key: HIVE-20035 > URL: https://issues.apache.org/jira/browse/HIVE-20035 > Project: Hive > Issue Type: Bug >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa >Priority: Major > Attachments: HIVE-20035.1.patch, HIVE-20035.patch > > > Druid expressions do not support booleans yet. > In druid expressions booleans are treated and parsed from longs, however when > we store booleans from hive they are serialized as 'true' and 'false' string > values. > Need to make serialization consistent with deserialization and write long > values when sending data to druid. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work started] (HIVE-20025) Clean-up of event files created by HiveProtoLoggingHook.
[ https://issues.apache.org/jira/browse/HIVE-20025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-20025 started by Sankar Hariappan. --- > Clean-up of event files created by HiveProtoLoggingHook. > > > Key: HIVE-20025 > URL: https://issues.apache.org/jira/browse/HIVE-20025 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: Hive, hooks > Fix For: 4.0.0 > > > Currently, HiveProtoLoggingHook write event data to hdfs. The number of files > can grow to very large numbers. > Since the files are created under a folder with Date being a part of the > path, hive should have a way to clean up data older than a certain configured > time / date. This can be a job that can run with as little frequency as just > once a day. > This time should be set to 1 week default. There should also be a sane upper > bound of # of files so that when a large cluster generates a lot of files > during a spike, we don't force the cluster fall over. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
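Since the event files sit under folders whose path includes the date, a daily job can simply drop any date folder older than the retention window (one week by default, per the issue). The sketch below assumes a `date=YYYY-MM-DD` folder naming scheme purely for illustration; the real layout and any file-count upper bound are not shown here:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.Comparator;

// Hypothetical sketch of the clean-up described in HIVE-20025: event files
// live under date-named folders (assumed here to look like date=2018-06-29),
// so a once-a-day job can delete any folder older than the retention window.
public class ProtoEventCleaner {

    static final DateTimeFormatter FMT = DateTimeFormatter.ISO_LOCAL_DATE;

    /** True when a folder named date=YYYY-MM-DD is older than the cutoff. */
    public static boolean isExpired(String dirName, LocalDate today, int retentionDays) {
        if (!dirName.startsWith("date=")) {
            return false; // not a date partition; leave it alone
        }
        LocalDate dirDate = LocalDate.parse(dirName.substring(5), FMT);
        return dirDate.isBefore(today.minusDays(retentionDays));
    }

    /** Delete expired date folders under baseDir (children first). */
    public static void clean(Path baseDir, LocalDate today, int retentionDays) throws IOException {
        try (DirectoryStream<Path> dirs = Files.newDirectoryStream(baseDir)) {
            for (Path dir : dirs) {
                if (isExpired(dir.getFileName().toString(), today, retentionDays)) {
                    Files.walk(dir).sorted(Comparator.reverseOrder()).forEach(p -> {
                        try { Files.delete(p); } catch (IOException e) { throw new RuntimeException(e); }
                    });
                }
            }
        }
    }
}
```

The real hook writes to HDFS rather than a local filesystem, so the production job would use the Hadoop `FileSystem` API instead of `java.nio.file`; the retention logic is the same.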
[jira] [Assigned] (HIVE-19975) Checking writeIdList per table may not check the commit level of a partition on a partitioned table
[ https://issues.apache.org/jira/browse/HIVE-19975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Yeom reassigned HIVE-19975: - Assignee: Steve Yeom (was: Sergey Shelukhin) > Checking writeIdList per table may not check the commit level of a partition > on a partitioned table > --- > > Key: HIVE-19975 > URL: https://issues.apache.org/jira/browse/HIVE-19975 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 4.0.0 >Reporter: Steve Yeom >Assignee: Steve Yeom >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-19975.01.patch > > > writeIdList is per table entity but stats for a partitioned table are per > partition. > I.e., each record in PARTITIONS has an independent stats. > So if we check the validity of a partition's stats, we need to check in the > context of > a partiton. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18279) Incorrect condition in StatsOpimizer
[ https://issues.apache.org/jira/browse/HIVE-18279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16527795#comment-16527795 ] Hive QA commented on HIVE-18279: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 50s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 55s{color} | {color:blue} ql in master has 2287 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 23m 28s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12247/dev-support/hive-personality.sh | | git revision | master / 2b0cb07 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12247/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Incorrect condition in StatsOpimizer > > > Key: HIVE-18279 > URL: https://issues.apache.org/jira/browse/HIVE-18279 > Project: Hive > Issue Type: Bug > Components: Statistics >Reporter: Oleksiy Sayankin >Assignee: Oleksiy Sayankin >Priority: Major > Fix For: 3.2.0 > > Attachments: HIVE-18279.1.patch > > > At the moment {{StatsOpimizer}} has code > {code} > if (rowCnt == null) { > // if rowCnt < 1 than its either empty table or table on which > stats are not > // computed We assume the worse and don't attempt to optimize. > Logger.debug("Table doesn't have up to date stats " + > tbl.getTableName()); > rowCnt = null; > } > {code} > in method {{private Long getRowCnt()}}. 
Condition > {code} > if (rowCnt == null) { > {code} > should be changed to > {code} > if (rowCnt == null || rowCnt == 0) { > {code} > because 0 value also means that table stats may not be computed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20035) write booleans as long when serializing to druid
[ https://issues.apache.org/jira/browse/HIVE-20035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16527776#comment-16527776 ] Ashutosh Chauhan commented on HIVE-20035: - [~nishantbangarwa] Looks like you attached wrong patch ? > write booleans as long when serializing to druid > > > Key: HIVE-20035 > URL: https://issues.apache.org/jira/browse/HIVE-20035 > Project: Hive > Issue Type: Bug >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa >Priority: Major > Attachments: HIVE-20035.patch > > > Druid expressions do not support booleans yet. > In druid expressions booleans are treated and parsed from longs, however when > we store booleans from hive they are serialized as 'true' and 'false' string > values. > Need to make serialization consistent with deserialization and write long > values when sending data to druid. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18873) Skipping predicate pushdown for MR silently at HiveInputFormat can cause storage handlers to produce erroneous result
[ https://issues.apache.org/jira/browse/HIVE-18873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16527765#comment-16527765 ] Hive QA commented on HIVE-18873: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12913744/HIVE-18873.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 14630 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_joins] (batchId=253) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12246/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12246/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12246/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12913744 - PreCommit-HIVE-Build > Skipping predicate pushdown for MR silently at HiveInputFormat can cause > storage handlers to produce erroneous result > - > > Key: HIVE-18873 > URL: https://issues.apache.org/jira/browse/HIVE-18873 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Ankit Singhal >Assignee: Ankit Singhal >Priority: Major > Fix For: 3.2.0 > > Attachments: HIVE-18873.patch > > > {code:java} > // disable filter pushdown for mapreduce when there are more than one table > aliases, > // since we don't clone jobConf per alias > if (mrwork != null && mrwork.getAliases() != null && > mrwork.getAliases().size() > 1 && > jobConf.get(ConfVars.HIVE_EXECUTION_ENGINE.varname).equals("mr")) { > return; > } > {code} > I believe this needs to be handled at OpProcFactory so that hive doesn't > believe that predicate is handled by storage handler. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
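The quoted guard skips filter pushdown silently, while the planner still believes the storage handler applied the predicate, so the residual filter is never re-evaluated. A hedged restatement of that condition (names are illustrative; the real check lives in HiveInputFormat and reads the alias list from MapredWork):

```java
import java.util.List;

// Hypothetical restatement of the guard quoted in HIVE-18873: filter
// pushdown is skipped for MR when a single shared JobConf covers more than
// one table alias. The bug is that the skip is silent, so the planner
// (OpProcFactory) still assumes the storage handler handled the predicate.
public class PushdownGuard {

    /** Mirrors the condition under which pushdown is skipped. */
    public static boolean skipPushdown(List<String> aliases, String engine) {
        return aliases != null && aliases.size() > 1 && "mr".equals(engine);
    }

    public static void main(String[] args) {
        // Two aliases on MR: pushdown is skipped, so the filter MUST still
        // be evaluated above the scan, or wrong (unfiltered) rows leak out.
        System.out.println(skipPushdown(List.of("a", "b"), "mr")); // true
        System.out.println(skipPushdown(List.of("a"), "mr"));      // false
    }
}
```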
[jira] [Commented] (HIVE-17593) DataWritableWriter strip spaces for CHAR type before writing, but predicate generator doesn't do same thing.
[ https://issues.apache.org/jira/browse/HIVE-17593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16527736#comment-16527736 ] Ferdinand Xu commented on HIVE-17593: - Thanks [~junjie] for reaching me about this. Why do we still need to pad the string? Should we save as its original format instead of either stripping or padding? > DataWritableWriter strip spaces for CHAR type before writing, but predicate > generator doesn't do same thing. > > > Key: HIVE-17593 > URL: https://issues.apache.org/jira/browse/HIVE-17593 > Project: Hive > Issue Type: Bug >Affects Versions: 2.3.0, 3.0.0 >Reporter: Junjie Chen >Assignee: Junjie Chen >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-17593.patch > > > DataWritableWriter strip spaces for CHAR type before writing. While when > generating predicate, it does NOT do same striping which should cause data > missing! > In current version, it doesn't cause data missing since predicate is not well > push down to parquet due to HIVE-17261. > Please see ConvertAstTosearchArg.java, getTypes treats CHAR and STRING as > same which will build a predicate with tail spaces. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
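The mismatch the issue describes can be shown in a few lines: the writer strips trailing spaces from CHAR values, but the predicate side effectively pads the literal to the declared length, so an equality predicate compares a stripped value against a padded one. A hedged, self-contained demo (not Hive's actual code paths):

```java
// Hypothetical demo of the mismatch in HIVE-17593: CHAR(n) values are
// written to Parquet with trailing spaces stripped, but the predicate
// generator treats CHAR like STRING and keeps the padding, so equality
// compares "abc" (stored) against "abc  " (predicate) and matches nothing.
public class CharPredicateMismatch {

    /** What the writer side (DataWritableWriter) does before writing CHAR. */
    public static String stripTrailingSpaces(String s) {
        int end = s.length();
        while (end > 0 && s.charAt(end - 1) == ' ') end--;
        return s.substring(0, end);
    }

    /** What the predicate side effectively does for CHAR(n): pad to n. */
    public static String padTo(String s, int n) {
        StringBuilder sb = new StringBuilder(s);
        while (sb.length() < n) sb.append(' ');
        return sb.toString();
    }

    public static void main(String[] args) {
        String stored = stripTrailingSpaces("abc  ");  // "abc"
        String predicate = padTo("abc", 5);            // "abc  "
        // The fix is to normalize both sides the same way (strip on both,
        // or pad on both), never mix the two conventions.
        System.out.println(stored.equals(predicate));  // false -> data missing
    }
}
```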
[jira] [Commented] (HIVE-19886) Logs may be directed to 2 files if --hiveconf hive.log.file is used
[ https://issues.apache.org/jira/browse/HIVE-19886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16527677#comment-16527677 ] Zoltan Haindrich commented on HIVE-19886: - I think raising an exception to get the user's attention to fix the logging setup would be the best > Logs may be directed to 2 files if --hiveconf hive.log.file is used > --- > > Key: HIVE-19886 > URL: https://issues.apache.org/jira/browse/HIVE-19886 > Project: Hive > Issue Type: Bug > Components: Logging >Affects Versions: 3.1.0, 4.0.0 >Reporter: Prasanth Jayachandran >Assignee: Jaume M >Priority: Major > Labels: pull-request-available > Attachments: HIVE-19886.2.patch, HIVE-19886.2.patch, HIVE-19886.patch > > > hive launch script explicitly specific log4j2 configuration file to use. The > main() methods in HiveServer2 and HiveMetastore reconfigures the logger based > on user input via --hiveconf hive.log.file. This may cause logs to end up in > 2 different files. Initial logs goes to the file specified in > hive-log4j2.properties and after logger reconfiguration the rest of the logs > goes to the file specified via --hiveconf hive.log.file. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18873) Skipping predicate pushdown for MR silently at HiveInputFormat can cause storage handlers to produce erroneous result
[ https://issues.apache.org/jira/browse/HIVE-18873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16527674#comment-16527674 ] Hive QA commented on HIVE-18873: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 56s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 5s{color} | {color:blue} ql in master has 2287 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} ql: The patch generated 0 new + 23 unchanged - 1 fixed = 23 total (was 24) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 24m 1s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12246/dev-support/hive-personality.sh | | git revision | master / 2b0cb07 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12246/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Skipping predicate pushdown for MR silently at HiveInputFormat can cause > storage handlers to produce erroneous result > - > > Key: HIVE-18873 > URL: https://issues.apache.org/jira/browse/HIVE-18873 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Ankit Singhal >Assignee: Ankit Singhal >Priority: Major > Fix For: 3.2.0 > > Attachments: HIVE-18873.patch > > > {code:java} > // disable filter pushdown for mapreduce when there are more than one table > aliases, > // since we don't clone jobConf per alias > if (mrwork != null && mrwork.getAliases() != null && > mrwork.getAliases().size() > 1 && > jobConf.get(ConfVars.HIVE_EXECUTION_ENGINE.varname).equals("mr")) { > return; > } > {code} > I believe this needs to be handled at OpProcFactory so that hive doesn't > believe that predicate is handled by storage handler. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19678) Ignore TestBeeLineWithArgs
[ https://issues.apache.org/jira/browse/HIVE-19678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-19678: Resolution: Unresolved Status: Resolved (was: Patch Available) > Ignore TestBeeLineWithArgs > -- > > Key: HIVE-19678 > URL: https://issues.apache.org/jira/browse/HIVE-19678 > Project: Hive > Issue Type: Improvement > Components: Tests >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-19678.01.patch > > > timeouts every ~5. build > https://builds.apache.org/job/PreCommit-HIVE-Build/11155/testReport/org.apache.hive.beeline/TestBeeLineWithArgs/history/ -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19729) disable TestTablesGetExists
[ https://issues.apache.org/jira/browse/HIVE-19729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-19729: Resolution: Won't Fix Status: Resolved (was: Patch Available) I haven't seen this lately > disable TestTablesGetExists > --- > > Key: HIVE-19729 > URL: https://issues.apache.org/jira/browse/HIVE-19729 > Project: Hive > Issue Type: Bug >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-19729.01.patch > > > causes clearly unrelated and unreproducible failures: > https://issues.apache.org/jira/browse/HIVE-19699?focusedCommentId=16493708&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16493708 -- This message was sent by Atlassian JIRA (v7.6.3#76005)