[jira] [Commented] (HIVE-20598) Fix typos in HiveAlgorithmsUtil calculations
[ https://issues.apache.org/jira/browse/HIVE-20598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621542#comment-16621542 ] Hive QA commented on HIVE-20598: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12940430/HIVE-20598.01.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14992 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13922/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13922/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13922/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12940430 - PreCommit-HIVE-Build > Fix typos in HiveAlgorithmsUtil calculations > > > Key: HIVE-20598 > URL: https://issues.apache.org/jira/browse/HIVE-20598 > Project: Hive > Issue Type: Bug > Components: Logical Optimizer >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-20598.01.patch > > > HIVE-10343 has made the costs changeable by hiveconf settings; however there > was a method in which there was already a local variable named cpuCost... > bottom line is that the cost of n-way joins calculated by this method > is computed as the product of the number of rows... > https://github.com/apache/hive/blob/9c907769a63a6b23c91fdf0b3f3d0aa6387035dc/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/cost/HiveAlgorithmsUtil.java#L83 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-20608) Incorrect handling of sql command args in hive service leading to misleading error messages
[ https://issues.apache.org/jira/browse/HIVE-20608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Soumabrata Chakraborty reassigned HIVE-20608: - > Incorrect handling of sql command args in hive service leading to misleading > error messages > --- > > Key: HIVE-20608 > URL: https://issues.apache.org/jira/browse/HIVE-20608 > Project: Hive > Issue Type: Bug >Reporter: Soumabrata Chakraborty >Assignee: Soumabrata Chakraborty >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-20607) TxnHandler should use PreparedStatement to execute direct SQL queries.
[ https://issues.apache.org/jira/browse/HIVE-20607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan reassigned HIVE-20607: --- > TxnHandler should use PreparedStatement to execute direct SQL queries. > -- > > Key: HIVE-20607 > URL: https://issues.apache.org/jira/browse/HIVE-20607 > Project: Hive > Issue Type: Bug > Components: Standalone Metastore, Transactions >Affects Versions: 4.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: ACID > Fix For: 4.0.0 > > > TxnHandler uses direct SQL queries to operate on Txn-related databases/tables > in the Hive metastore RDBMS. > Most of the methods are direct calls from the Metastore API and directly append > input string arguments to the SQL string. > They need to use a parameterised PreparedStatement object to set these arguments instead. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
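To make the risk concrete, here is a minimal sketch (hypothetical class, table, and column names -- not TxnHandler's actual code) contrasting a query built by string concatenation with the parameterised form a PreparedStatement would use:

```java
// Sketch only: illustrates why appending raw arguments into SQL is unsafe,
// and the placeholder-based alternative that PreparedStatement binds safely.
public class DirectSqlSketch {

    // Unsafe: the caller-supplied value is spliced straight into the query text.
    static String buildUnsafeQuery(String txnId) {
        return "select TXN_STATE from TXNS where TXN_ID = " + txnId;
    }

    // Safe: a fixed query with a placeholder; the value is bound separately,
    // e.g. PreparedStatement stmt = conn.prepareStatement(PARAMETERISED_QUERY);
    //      stmt.setLong(1, txnId);
    static final String PARAMETERISED_QUERY =
        "select TXN_STATE from TXNS where TXN_ID = ?";

    public static void main(String[] args) {
        // A malicious "transaction id" rewrites the unsafe query's semantics;
        // with the parameterised form the same input is only ever a bind value.
        System.out.println(buildUnsafeQuery("1 OR 1=1"));
    }
}
```

Binding arguments through placeholders also lets the RDBMS cache the statement plan, which matters for the hot metastore paths the issue describes.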
[jira] [Updated] (HIVE-20606) hive3.1 beeline to dns complaining about ssl on ip
[ https://issues.apache.org/jira/browse/HIVE-20606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] t oo updated HIVE-20606: Priority: Blocker (was: Critical) > hive3.1 beeline to dns complaining about ssl on ip > -- > > Key: HIVE-20606 > URL: https://issues.apache.org/jira/browse/HIVE-20606 > Project: Hive > Issue Type: Bug > Components: Beeline, HiveServer2 >Affects Versions: 3.1.0 >Reporter: t oo >Priority: Blocker > > Why is beeline complaining about ip when i use dns in the connection? I have > a valid cert/jks on the dns. Exact same beeline worked when running on > hive2.3.2 but this is hive3.1.0 > [ec2-user@ip-10-1-2-3 logs]$ $HIVE_HOME/bin/beeline > SLF4J: Class path contains multiple SLF4J bindings. > SLF4J: Found binding in > [jar:file:/usr/lib/apache-hive-3.1.0-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/usr/lib/hadoop-2.7.5/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See [http://www.slf4j.org/codes.html#multiple_bindings] for an > explanation. > SLF4J: Actual binding is of type > [org.apache.logging.slf4j.Log4jLoggerFactory] > Beeline version 3.1.0 by Apache Hive > beeline> !connect > jdbc:hive2://mydns:1/default;ssl=true;sslTrustStore=/home/ec2-user/spark_home/conf/app-trust-nonprd.jks;trustStorePassword=changeit > userhere passhere > Connecting to > jdbc:hive2://mydns:1/default;ssl=true;sslTrustStore=/home/ec2-user/spark_home/conf/app-trust-nonprd.jks;trustStorePassword=changeit > 18/09/20 04:49:06 [main]: WARN jdbc.HiveConnection: Failed to connect to > mydns:1 > Unknown HS2 problem when communicating with Thrift server. 
> Error: Could not open client transport with JDBC Uri: > jdbc:hive2://mydns:1/default;ssl=true;sslTrustStore=/home/ec2-user/spark_home/conf/app-trust-nonprd.jks;trustStorePassword=changeit: > javax.net.ssl.SSLHandshakeException: > java.security.cert.CertificateException: No subject alternative names > matching IP address 10.1.2.3 found (state=08S01,code=0) > beeline> > > > > > > > > > > > hiveserver2 logs: > 2018-09-20T04:50:16,245 ERROR [HiveServer2-Handler-Pool: Thread-79] > server.TThreadPoolServer: Error occurred during processing of message. > java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: > javax.net.ssl.SSLHandshakeException: Remote host closed connection during > handshake > at > org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219) > ~[hive-exec-3.1.0.jar:3.1.0] > at > org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269) > ~[hive-exec-3.1.0.jar:3.1.0] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > ~[?:1.8.0_181] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > ~[?:1.8.0_181] > at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181] > Caused by: org.apache.thrift.transport.TTransportException: > javax.net.ssl.SSLHandshakeException: Remote host closed connection during > handshake > at > org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129) > ~[hive-exec-3.1.0.jar:3.1.0] > at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86) > ~[hive-exec-3.1.0.jar:3.1.0] > at > org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:178) > ~[hive-exec-3.1.0.jar:3.1.0] > at > org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125) > ~[hive-exec-3.1.0.jar:3.1.0] > at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271) > ~[hive-exec-3.1.0.jar:3.1.0] 
> at > org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41) > ~[hive-exec-3.1.0.jar:3.1.0] > at > org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216) > ~[hive-exec-3.1.0.jar:3.1.0] > ... 4 more > Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection > during handshake > at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1002) > ~[?:1.8.0_181] > at > sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385) > ~[?:1.8.0_181] > at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:938) > ~[?:1.8.0_181] > at sun.security.ssl.AppInputStream.read(AppInputStream.java:105) > ~[?:1.8.0_181] > at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) > ~[?:1.8.0_181] > at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) > ~[?:1.8.0_181] > at java.io.BufferedInputStream.read(BufferedInputStream.java:345) >
[jira] [Updated] (HIVE-20606) hive3.1 beeline to dns complaining about ssl on ip
[ https://issues.apache.org/jira/browse/HIVE-20606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] t oo updated HIVE-20606: Description: Why is beeline complaining about ip when i use dns in the connection? I have a valid cert/jks on the dns. Exact same beeline worked when running on hive2.3.2 but this is hive3.1.0 [ec2-user@ip-10-1-2-3 logs]$ $HIVE_HOME/bin/beeline SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/usr/lib/apache-hive-3.1.0-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/usr/lib/hadoop-2.7.5/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See [http://www.slf4j.org/codes.html#multiple_bindings] for an explanation. SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] Beeline version 3.1.0 by Apache Hive beeline> !connect jdbc:hive2://mydns:1/default;ssl=true;sslTrustStore=/home/ec2-user/spark_home/conf/app-trust-nonprd.jks;trustStorePassword=changeit userhere passhere Connecting to jdbc:hive2://mydns:1/default;ssl=true;sslTrustStore=/home/ec2-user/spark_home/conf/app-trust-nonprd.jks;trustStorePassword=changeit 18/09/20 04:49:06 [main]: WARN jdbc.HiveConnection: Failed to connect to mydns:1 Unknown HS2 problem when communicating with Thrift server. Error: Could not open client transport with JDBC Uri: jdbc:hive2://mydns:1/default;ssl=true;sslTrustStore=/home/ec2-user/spark_home/conf/app-trust-nonprd.jks;trustStorePassword=changeit: javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: No subject alternative names matching IP address 10.1.2.3 found (state=08S01,code=0) beeline> hiveserver2 logs: 2018-09-20T04:50:16,245 ERROR [HiveServer2-Handler-Pool: Thread-79] server.TThreadPoolServer: Error occurred during processing of message. 
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219) ~[hive-exec-3.1.0.jar:3.1.0] at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269) ~[hive-exec-3.1.0.jar:3.1.0] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_181] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_181] at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181] Caused by: org.apache.thrift.transport.TTransportException: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129) ~[hive-exec-3.1.0.jar:3.1.0] at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86) ~[hive-exec-3.1.0.jar:3.1.0] at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:178) ~[hive-exec-3.1.0.jar:3.1.0] at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125) ~[hive-exec-3.1.0.jar:3.1.0] at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271) ~[hive-exec-3.1.0.jar:3.1.0] at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41) ~[hive-exec-3.1.0.jar:3.1.0] at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216) ~[hive-exec-3.1.0.jar:3.1.0] ... 
4 more Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1002) ~[?:1.8.0_181] at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385) ~[?:1.8.0_181] at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:938) ~[?:1.8.0_181] at sun.security.ssl.AppInputStream.read(AppInputStream.java:105) ~[?:1.8.0_181] at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) ~[?:1.8.0_181] at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) ~[?:1.8.0_181] at java.io.BufferedInputStream.read(BufferedInputStream.java:345) ~[?:1.8.0_181] at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127) ~[hive-exec-3.1.0.jar:3.1.0] at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86) ~[hive-exec-3.1.0.jar:3.1.0] at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:178) ~[hive-exec-3.1.0.jar:3.1.0] at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125) ~[hive-exec-3.1.0.jar:3.1.0] at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271) ~[hive-exec-3.1.0.jar:3.1.0] at
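The client-side error above ("No subject alternative names matching IP address 10.1.2.3 found") means hostname verification resolved the connection to an IP and found no matching IP entry in the server certificate's SAN list. As a diagnostic sketch (assumes OpenSSL 1.1.1+ for -addext; file names and the DNS/IP values are placeholders from the report), this generates a throwaway certificate whose SAN covers both the DNS name and the IP, then prints the SAN extension -- the same inspection can be run on the real certificate after exporting it from the JKS with keytool -exportcert:

```shell
# Generate a self-signed cert whose SAN lists both the DNS name and the IP.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout key.pem -out cert.pem -subj "/CN=mydns" \
  -addext "subjectAltName=DNS:mydns,IP:10.1.2.3"

# Show the SAN extension; the TLS layer must find a matching entry here.
openssl x509 -in cert.pem -noout -ext subjectAltName
```

If the server certificate carries only a DNS SAN, any code path that verifies against the resolved IP (as the hive 3.1 stack appears to do here) will fail the handshake even though the JDBC URL used the DNS name.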
[jira] [Commented] (HIVE-20598) Fix typos in HiveAlgorithmsUtil calculations
[ https://issues.apache.org/jira/browse/HIVE-20598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621514#comment-16621514 ] Hive QA commented on HIVE-20598: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 28s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 59s{color} | {color:blue} ql in master has 2326 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} ql: The patch generated 0 new + 71 unchanged - 2 fixed = 71 total (was 73) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 24m 29s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13922/dev-support/hive-personality.sh | | git revision | master / ee5566b | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13922/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Fix typos in HiveAlgorithmsUtil calculations > > > Key: HIVE-20598 > URL: https://issues.apache.org/jira/browse/HIVE-20598 > Project: Hive > Issue Type: Bug > Components: Logical Optimizer >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-20598.01.patch > > > HIVE-10343 has made the costs changeable by hiveconf settings; however there > was a method in which there was already a local variable named cpuCost... > bottom line is that the cost of n-way joins calculated by this method > is computed as the product of the number of rows... 
> https://github.com/apache/hive/blob/9c907769a63a6b23c91fdf0b3f3d0aa6387035dc/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/cost/HiveAlgorithmsUtil.java#L83 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
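The kind of shadowing bug the report describes can be illustrated with a hypothetical reconstruction (this is not the actual HiveAlgorithmsUtil code; names and shapes are assumptions): when the parameter holding the per-row CPU cost is reused as the loop accumulator, the cost of an n-way join degenerates from a sum over the inputs into a product of their row counts.

```java
// Hypothetical sketch of the bug class described in HIVE-20598.
public class JoinCostSketch {

    // Buggy shape: the unit-cost parameter is overwritten inside the loop,
    // so successive iterations multiply row counts into each other.
    static double buggyJoinCost(double[] rowCounts, double cpuCost) {
        for (double rows : rowCounts) {
            cpuCost = cpuCost * rows;   // parameter reused as accumulator
        }
        return cpuCost;                 // == unitCost * product(rowCounts)
    }

    // Intended shape: per-input costs are accumulated into a separate total.
    static double fixedJoinCost(double[] rowCounts, double cpuUnitCost) {
        double total = 0.0;
        for (double rows : rowCounts) {
            total += rows * cpuUnitCost;
        }
        return total;                   // == unitCost * sum(rowCounts)
    }

    public static void main(String[] args) {
        double[] rows = {100.0, 200.0, 300.0};
        System.out.println(buggyJoinCost(rows, 1.0));  // prints 6000000.0 (product)
        System.out.println(fixedJoinCost(rows, 1.0));  // prints 600.0 (sum)
    }
}
```

The gap between the two grows multiplicatively with each extra join input, which is why such a typo badly skews the optimizer's plan comparison.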
[jira] [Commented] (HIVE-20498) Support date type for column stats autogather
[ https://issues.apache.org/jira/browse/HIVE-20498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621494#comment-16621494 ] Hive QA commented on HIVE-20498: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12940413/HIVE-20498.04.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13920/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13920/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13920/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Tests exited with: Exception: Patch URL https://issues.apache.org/jira/secure/attachment/12940413/HIVE-20498.04.patch was found in seen patch url's cache and a test was probably run already on it. Aborting... {noformat} This message is automatically generated. ATTACHMENT ID: 12940413 - PreCommit-HIVE-Build > Support date type for column stats autogather > - > > Key: HIVE-20498 > URL: https://issues.apache.org/jira/browse/HIVE-20498 > Project: Hive > Issue Type: Sub-task > Components: Statistics >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-20498.02.patch, HIVE-20498.03.patch, > HIVE-20498.04.patch, HIVE-20498.1.patch > > > {code} > set hive.stats.column.autogather=true; > create table dx2(a int,b int,d date); > explain insert into dx2 values(1,1,'2011-11-11'); > -- no compute_stats calls > insert into dx2 values(1,1,'2011-11-11'); > insert into dx2 values(1,1,'2001-11-11'); > explain analyze table dx2 compute statistics for columns; > -- as expected; has compute_stats calls > analyze table dx2 compute statistics for columns; > -- runs ok > desc formatted dx2 d; > -- looks good > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20498) Support date type for column stats autogather
[ https://issues.apache.org/jira/browse/HIVE-20498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621495#comment-16621495 ] Hive QA commented on HIVE-20498: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12940413/HIVE-20498.04.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13921/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13921/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13921/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Tests exited with: Exception: Patch URL https://issues.apache.org/jira/secure/attachment/12940413/HIVE-20498.04.patch was found in seen patch url's cache and a test was probably run already on it. Aborting... {noformat} This message is automatically generated. ATTACHMENT ID: 12940413 - PreCommit-HIVE-Build > Support date type for column stats autogather > - > > Key: HIVE-20498 > URL: https://issues.apache.org/jira/browse/HIVE-20498 > Project: Hive > Issue Type: Sub-task > Components: Statistics >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-20498.02.patch, HIVE-20498.03.patch, > HIVE-20498.04.patch, HIVE-20498.1.patch > > > {code} > set hive.stats.column.autogather=true; > create table dx2(a int,b int,d date); > explain insert into dx2 values(1,1,'2011-11-11'); > -- no compute_stats calls > insert into dx2 values(1,1,'2011-11-11'); > insert into dx2 values(1,1,'2001-11-11'); > explain analyze table dx2 compute statistics for columns; > -- as expected; has compute_stats calls > analyze table dx2 compute statistics for columns; > -- runs ok > desc formatted dx2 d; > -- looks good > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20498) Support date type for column stats autogather
[ https://issues.apache.org/jira/browse/HIVE-20498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621493#comment-16621493 ] Hive QA commented on HIVE-20498: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12940413/HIVE-20498.04.patch {color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14983 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13919/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13919/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13919/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12940413 - PreCommit-HIVE-Build > Support date type for column stats autogather > - > > Key: HIVE-20498 > URL: https://issues.apache.org/jira/browse/HIVE-20498 > Project: Hive > Issue Type: Sub-task > Components: Statistics >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-20498.02.patch, HIVE-20498.03.patch, > HIVE-20498.04.patch, HIVE-20498.1.patch > > > {code} > set hive.stats.column.autogather=true; > create table dx2(a int,b int,d date); > explain insert into dx2 values(1,1,'2011-11-11'); > -- no compute_stats calls > insert into dx2 values(1,1,'2011-11-11'); > insert into dx2 values(1,1,'2001-11-11'); > explain analyze table dx2 compute statistics for columns; > -- as expected; has compute_stats calls > analyze table dx2 compute statistics for columns; > -- runs ok > desc formatted dx2 d; > -- looks good > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20536) Add Surrogate Keys function to Hive
[ https://issues.apache.org/jira/browse/HIVE-20536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-20536: Resolution: Fixed Fix Version/s: 4.0.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks, Miklos! > Add Surrogate Keys function to Hive > --- > > Key: HIVE-20536 > URL: https://issues.apache.org/jira/browse/HIVE-20536 > Project: Hive > Issue Type: Task > Components: Hive >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-20536.01.patch, HIVE-20536.02.patch, > HIVE-20536.03.patch, HIVE-20536.04.patch, HIVE-20536.05.patch, > HIVE-20536.06.patch, HIVE-20536.07.patch > > > Surrogate keys are the ability to generate and use unique integers for each row > in a table. If we have that ability then, in conjunction with the default clause, > we get surrogate keys functionality. Consider the following ddl: > create table t1 (a string, b bigint default unique_long()); > We already have the default clause wherein you can specify a function to provide > values. So, what we need is a udf which can generate unique longs for each row > across queries for a table. > The idea is to use write_id. This is a column in the metastore table TXN_COMPONENTS > whose value is determined at compile time to be used during query execution. > Each query execution generates a new write_id, so we can seed the udf with this > value during compilation. > Then we statically allocate ranges for each task from which it can draw the next > long. Say we divvy up the 64-bit write_id such that the first 24 bits belong > to its original usage, that is, txns. The next 16 bits are used for task_attempts > and the last 24 bits to generate a new long for each row. This implies we can allow > 17M txns, 65K tasks and 17M rows in a task. If you hit any of those limits we > can fail the query. > Implementation-wise: serialize write_id in initialize() of the udf. 
Then during > execute() we find out which task_attempt the current task is and use it along with > write_id to get the starting long, handing out a new value on each invocation of > execute(). > Here we are assuming write_id can be determined at compile time, which should > be the case, but we need to figure out how to get a handle to it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
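The range allocation described above can be sketched as follows (the bit layout, class, and method names are assumptions drawn from the description, not Hive's implementation): the high 24 bits carry the write_id, the next 16 the task attempt, and the low 24 bits a per-task row counter, with the query failing when any range is exhausted.

```java
// Sketch of statically allocated per-task key ranges for HIVE-20536-style
// surrogate keys: 64-bit key = [24b write_id][16b task attempt][24b row seq].
public class SurrogateKeySketch {
    private static final int TASK_BITS = 16;
    private static final int ROW_BITS = 24;

    private final long base;   // high bits, fixed at initialize() time
    private long rowSeq = 0;   // per-task counter, drawn on each execute()

    SurrogateKeySketch(long writeId, long taskAttempt) {
        if (writeId >= (1L << 24) || taskAttempt >= (1L << TASK_BITS)) {
            // ~17M txns / 65K task attempts, matching the limits above
            throw new IllegalArgumentException("write_id/task range exhausted");
        }
        this.base = (writeId << (TASK_BITS + ROW_BITS)) | (taskAttempt << ROW_BITS);
    }

    long next() {
        if (rowSeq >= (1L << ROW_BITS)) {
            // ~17M rows per task; past that the query would be failed
            throw new IllegalStateException("row range exhausted");
        }
        return base | rowSeq++;
    }

    public static void main(String[] args) {
        SurrogateKeySketch gen = new SurrogateKeySketch(5, 3);
        System.out.println(gen.next()); // writeId=5, task=3, row=0
        System.out.println(gen.next()); // row=1: strictly increasing, unique
    }
}
```

Because the three fields occupy disjoint bit ranges, no two (write_id, task_attempt, row) triples can collide, which is what makes the per-task allocation safe without any cross-task coordination.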
[jira] [Commented] (HIVE-20498) Support date type for column stats autogather
[ https://issues.apache.org/jira/browse/HIVE-20498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621466#comment-16621466 ] Hive QA commented on HIVE-20498: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 57s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 34s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 52s{color} | {color:blue} ql in master has 2326 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 15s{color} | {color:red} metastore-server in master failed. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} ql: The patch generated 0 new + 7 unchanged - 1 fixed = 7 total (was 8) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 7s{color} | {color:green} The patch metastore-server passed checkstyle {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 14s{color} | {color:red} metastore-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 27m 31s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13919/dev-support/hive-personality.sh | | git revision | master / 487714a | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-13919/yetus/branch-findbugs-standalone-metastore_metastore-server.txt | | findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-13919/yetus/patch-findbugs-standalone-metastore_metastore-server.txt | | modules | C: ql standalone-metastore/metastore-server U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13919/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Support date type for column stats autogather > - > > Key: HIVE-20498 > URL: https://issues.apache.org/jira/browse/HIVE-20498 > Project: Hive > Issue Type: Sub-task > Components: Statistics >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-20498.02.patch, HIVE-20498.03.patch, > HIVE-20498.04.patch, HIVE-20498.1.patch > > > {code} > set hive.stats.column.autogather=true; > create table dx2(a int,b int,d date); > explain insert into dx2 values(1,1,'2011-11-11'); > -- no compute_stats calls > insert into dx2 values(1,1,'2011-11-11'); > insert into
[jira] [Commented] (HIVE-20549) Allow user set query tag, and kill query with tag
[ https://issues.apache.org/jira/browse/HIVE-20549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621457#comment-16621457 ] mahesh kumar behera commented on HIVE-20549: [~thejas] The code changes look fine to me > Allow user set query tag, and kill query with tag > - > > Key: HIVE-20549 > URL: https://issues.apache.org/jira/browse/HIVE-20549 > Project: Hive > Issue Type: Bug >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Fix For: 4.0.0, 3.2.0 > > Attachments: HIVE-20549.1.patch, HIVE-20549.2.patch > > > HIVE-19924 added the capability for a replication job to set a query tag and kill the > replication distcp job with that tag. Here I make it more general: users can > set an arbitrary "hive.query.tag" in a sql script, and kill the query with the tag. > Hive will cancel the corresponding operation in hs2, along with the Tez/MR > application launched for the query. For example: > {code} > set hive.query.tag=mytag; > select . -- long running query > {code} > In another session: > {code} > kill query 'mytag'; > {code} > There are limitations in the implementation: > 1. No tag duplication check. There's nothing to prevent conflicting tags for > the same user, and kill query will kill all queries that share the same tag. However, kill > query will not kill queries from a different user unless issued by an admin. So different > users might share the same tag. > 2. In a multiple-hs2 environment, the kill statement should be issued to all hs2 instances to > make sure the corresponding operation is canceled. When beeline/jdbc connects > to hs2 the regular way (zookeeper url), the session will connect to a random > hs2 instance, which might be different from the hs2 the query runs on. Users can use > HiveConnection.getAllUrls or beeline --getUrlsFromBeelineSite (HIVE-20507) to > get a list of all hs2 instances. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20535) Add new configuration to set the size of the global compile lock
[ https://issues.apache.org/jira/browse/HIVE-20535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621449#comment-16621449 ] Hive QA commented on HIVE-20535: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12940398/HIVE-20535.12.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 14978 tests executed *Failed tests:* {noformat} TestOrcSplitElimination - did not produce a TEST-*.xml file (likely timed out) (batchId=289) org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testKillQuery (batchId=251) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13918/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13918/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13918/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12940398 - PreCommit-HIVE-Build > Add new configuration to set the size of the global compile lock > > > Key: HIVE-20535 > URL: https://issues.apache.org/jira/browse/HIVE-20535 > Project: Hive > Issue Type: Task > Components: HiveServer2 >Reporter: denys kuzmenko >Assignee: denys kuzmenko >Priority: Major > Attachments: HIVE-20535.1.patch, HIVE-20535.10.patch, > HIVE-20535.11.patch, HIVE-20535.12.patch, HIVE-20535.2.patch, > HIVE-20535.3.patch, HIVE-20535.4.patch, HIVE-20535.5.patch, > HIVE-20535.6.patch, HIVE-20535.8.patch, HIVE-20535.9.patch > > > When removing the compile lock, it is quite risky to remove it entirely. 
> It would be good to provide a pool size for concurrent compilation, so > the administrator can limit the load. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
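The pool described above could be sketched with a counting semaphore: at most N compilations run concurrently, and N=1 degenerates to today's single global compile lock. This is a minimal illustration under those assumptions, not Hive's actual implementation; the class and method names are hypothetical:

```java
import java.util.concurrent.Semaphore;

// Minimal sketch (not Hive's code) of a bounded compile "lock pool":
// at most poolSize query compilations may run concurrently, and
// poolSize == 1 behaves like the existing single global compile lock.
class CompileLockPool {
    private final Semaphore slots;

    CompileLockPool(int poolSize) {
        // fair = true: waiting compilations acquire slots in arrival order
        this.slots = new Semaphore(poolSize, true);
    }

    void beginCompile() {
        // Blocks until one of the poolSize slots is free.
        slots.acquireUninterruptibly();
    }

    void endCompile() {
        slots.release();
    }

    int availableSlots() {
        return slots.availablePermits();
    }
}
```

A caller would wrap query compilation in beginCompile()/endCompile(), typically in a try/finally so a failed compilation still frees its slot.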
[jira] [Commented] (HIVE-20497) ParseException, failed to recognize quoted identifier when re-parsing the re-written query
[ https://issues.apache.org/jira/browse/HIVE-20497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621429#comment-16621429 ] zhuwei commented on HIVE-20497: --- I have checked the failed test cases; they are not related to my change. @[~ashutoshc] Could you help review the code? > ParseException, failed to recognize quoted identifier when re-parsing the > re-written query > -- > > Key: HIVE-20497 > URL: https://issues.apache.org/jira/browse/HIVE-20497 > Project: Hive > Issue Type: Bug > Components: Parser > Environment: hive 2.1.1 >Reporter: zhuwei >Assignee: zhuwei >Priority: Major > Attachments: HIVE-20497.1.patch, HIVE-20497.2.patch, > HIVE-20497.3.patch > > > select `user` from team; > If we have a table `team`, and one of its columns has been masked out with > `` by column-level authorization, the above query will fail with the error > "SemanticException org.apache.hadoop.hive.ql.parse.ParseException: line 1:9 > Failed to recognize predicate 'user'. Failed rule: 'identifier' in expression > specification". > The root cause is that after re-writing the AST, the backquote has been lost. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
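The failure mode above — dropping backquotes when unparsing the rewritten AST — can be illustrated with a toy helper. This is a hypothetical sketch, not Hive's parser code (the reserved-word set and method name are assumptions): an identifier that collides with a reserved word must be re-quoted, or the re-parse of the rewritten query fails.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Locale;
import java.util.Set;

// Toy illustration, not Hive's implementation: when a masked query is
// rewritten and then re-parsed, an identifier like `user` must keep its
// backquotes, or the second parse sees the reserved word "user" instead
// of an identifier and raises a ParseException.
class IdentifierQuoting {
    // Tiny illustrative subset of reserved words, not Hive's full list.
    private static final Set<String> RESERVED =
        new HashSet<>(Arrays.asList("user", "date", "timestamp"));

    static String unparseIdentifier(String ident) {
        // Re-quote whenever the identifier collides with a reserved word.
        return RESERVED.contains(ident.toLowerCase(Locale.ROOT))
            ? "`" + ident + "`" : ident;
    }
}
```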
[jira] [Commented] (HIVE-20535) Add new configuration to set the size of the global compile lock
[ https://issues.apache.org/jira/browse/HIVE-20535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621420#comment-16621420 ] Hive QA commented on HIVE-20535: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 44s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 45s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 21s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 31s{color} | {color:blue} common in master has 65 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 58s{color} | {color:blue} ql in master has 2326 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 41s{color} | {color:red} ql: The patch generated 1 new + 142 unchanged - 6 fixed = 143 total (was 148) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 27m 41s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13918/dev-support/hive-personality.sh | | git revision | master / 487714a | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-13918/yetus/diff-checkstyle-ql.txt | | modules | C: common ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13918/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Add new configuration to set the size of the global compile lock > > > Key: HIVE-20535 > URL: https://issues.apache.org/jira/browse/HIVE-20535 > Project: Hive > Issue Type: Task > Components: HiveServer2 >Reporter: denys kuzmenko >Assignee: denys kuzmenko >Priority: Major > Attachments: HIVE-20535.1.patch, HIVE-20535.10.patch, > HIVE-20535.11.patch, HIVE-20535.12.patch, HIVE-20535.2.patch, > HIVE-20535.3.patch, HIVE-20535.4.patch, HIVE-20535.5.patch, > HIVE-20535.6.patch, HIVE-20535.8.patch, HIVE-20535.9.patch > > > When removing the compile lock, it is quite risky to remove it entirely. > It would be good to provide a pool size for the concurrent compilation, so > the administrator can limit the load -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20536) Add Surrogate Keys function to Hive
[ https://issues.apache.org/jira/browse/HIVE-20536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621404#comment-16621404 ] Hive QA commented on HIVE-20536: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12940397/HIVE-20536.07.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14992 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13917/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13917/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13917/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12940397 - PreCommit-HIVE-Build > Add Surrogate Keys function to Hive > --- > > Key: HIVE-20536 > URL: https://issues.apache.org/jira/browse/HIVE-20536 > Project: Hive > Issue Type: Task > Components: Hive >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Attachments: HIVE-20536.01.patch, HIVE-20536.02.patch, > HIVE-20536.03.patch, HIVE-20536.04.patch, HIVE-20536.05.patch, > HIVE-20536.06.patch, HIVE-20536.07.patch > > > Surrogate keys is an ability to generate and use unique integers for each row > in a table. If we have that ability then in conjunction with default clause > we can get surrogate keys functionality. Consider following ddl: > create table t1 (a string, b bigint default unique_long()); > We already have default clause wherein you can specify a function to provide > values. So, what we need is udf which can generate unique longs for each row > across queries for a table. > Idea is to use write_id . 
This is a column in the metastore table TXN_COMPONENTS > whose value is determined at compile time and used during query execution. > Each query execution generates a new write_id, so we can seed the udf with this > value during compilation. > Then we statically allocate ranges for each task from which it can draw the next > long. So, let's say we divvy up the 64-bit write_id such that 24 bits keep > its original usage, that is txns; the next 16 bits are used for task_attempts; > and the last 24 bits generate a new long for each row. This implies we can allow > 17M txns, 65K tasks and 17M rows in a task. If any of those limits is hit we > can fail the query. > Implementation-wise: serialize the write_id in initialize() of the udf. Then during > execute() we find out which task_attempt the current task is and use it along with > the write_id to get the starting long, giving a new value on each invocation of > execute(). > Here we are assuming the write_id can be determined at compile time, which should > be the case, but we need to figure out how to get a handle to it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
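The 24/16/24 split described above can be sketched as a bit-packing helper. This is a hypothetical illustration, not Hive's implementation; the ordering of the three fields within the long, the makeKey name, and the limit checks are all assumptions:

```java
// Hypothetical sketch of the 24/16/24 bit split: 24 bits of txn id,
// 16 bits of task attempt, and a 24-bit per-row counter packed into one
// long (one possible ordering; not Hive's actual layout).
class SurrogateKeySketch {
    static final int TXN_BITS = 24, TASK_BITS = 16, ROW_BITS = 24;

    static long makeKey(long txnId, long taskAttempt, long rowNum) {
        // Hitting any of the ~17M txn / 65K task / ~17M rows-per-task
        // limits means the query must fail rather than reuse a key.
        if (txnId >= (1L << TXN_BITS) || taskAttempt >= (1L << TASK_BITS)
                || rowNum >= (1L << ROW_BITS)) {
            throw new IllegalArgumentException("surrogate key space exhausted");
        }
        return (txnId << (TASK_BITS + ROW_BITS))
            | (taskAttempt << ROW_BITS)
            | rowNum;
    }
}
```

Because the three ranges never overlap, any two distinct (txnId, taskAttempt, rowNum) triples within the limits map to distinct longs, which is the uniqueness property the default clause needs.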
[jira] [Updated] (HIVE-18952) Tez session disconnect and reconnect on HS2 HA failover
[ https://issues.apache.org/jira/browse/HIVE-18952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18952: Resolution: Fixed Target Version/s: (was: 3.0.0) Status: Resolved (was: Patch Available) Committed to branch, HIVE-20605 will merge this into master. > Tez session disconnect and reconnect on HS2 HA failover > --- > > Key: HIVE-18952 > URL: https://issues.apache.org/jira/browse/HIVE-18952 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Sergey Shelukhin >Priority: Major > Fix For: master-tez092 > > Attachments: HIVE-18952.01.patch, HIVE-18952.02.patch, > HIVE-18952.03.patch, HIVE-18952.patch > > > Now that TEZ-3892 is committed, HIVE-18281 can make use of tez session > disconnect and reconnect on HA failover. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20547) HS2: support Tez sessions started by someone else (part 1)
[ https://issues.apache.org/jira/browse/HIVE-20547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-20547: Resolution: Fixed Status: Resolved (was: Patch Available) Committed to branch. HIVE-20605 will merge this into master. > HS2: support Tez sessions started by someone else (part 1) > -- > > Key: HIVE-20547 > URL: https://issues.apache.org/jira/browse/HIVE-20547 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Fix For: master-tez092 > > Attachments: HIVE-20547.01.patch, HIVE-20547.patch > > > The registry/configs/some code is based on a private patch by [~prasanth_j]. > The patch refactors tez pool session to use composition instead of > implementation inheritance from TezSessionState, to allow for two > implementations of TezSessionState. > For now it's blocked on getClient API in Tez that will be available after > 0.9.3 release; however I commented out that path to check that refactoring > passes tests. > When 0.9.3 becomes available, we can uncomment and commit. > In part 2, we may add some tests, and also consider other changes that are > required for external sessions (e.g. KillQuery, where we cannot assume YARN > is present). > We may also consider a WM change that allows for proportional session > distribution when the number of external sessions and the number of > admin-specified sessions doesn't match, or at least some validation to see > that the external sessions are available when applying a RP. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-20605) merge master-tez092 branch into master
[ https://issues.apache.org/jira/browse/HIVE-20605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-20605: --- > merge master-tez092 branch into master > -- > > Key: HIVE-20605 > URL: https://issues.apache.org/jira/browse/HIVE-20605 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > > I got tired of waiting for Tez 0.92 release (it's been pending for half a > year) so I created a branch to prevent various patches from conflicting with > each other. > This jira is to merge them into master after Tez 0.92 is finally released. > The jiras here: > https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20fixVersion%20%3D%20master-tez092 > should then be updated with the corresponding Hive release version. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20547) HS2: support Tez sessions started by someone else (part 1)
[ https://issues.apache.org/jira/browse/HIVE-20547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-20547: Fix Version/s: master-tez092 > HS2: support Tez sessions started by someone else (part 1) > -- > > Key: HIVE-20547 > URL: https://issues.apache.org/jira/browse/HIVE-20547 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Fix For: master-tez092 > > Attachments: HIVE-20547.01.patch, HIVE-20547.patch > > > The registry/configs/some code is based on a private patch by [~prasanth_j]. > The patch refactors tez pool session to use composition instead of > implementation inheritance from TezSessionState, to allow for two > implementations of TezSessionState. > For now it's blocked on getClient API in Tez that will be available after > 0.9.3 release; however I commented out that path to check that refactoring > passes tests. > When 0.9.3 becomes available, we can uncomment and commit. > In part 2, we may add some tests, and also consider other changes that are > required for external sessions (e.g. KillQuery, where we cannot assume YARN > is present). > We may also consider a WM change that allows for proportional session > distribution when the number of external sessions and the number of > admin-specified sessions doesn't match, or at least some validation to see > that the external sessions are available when applying a RP. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18952) Tez session disconnect and reconnect on HS2 HA failover
[ https://issues.apache.org/jira/browse/HIVE-18952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18952: Fix Version/s: master-tez092 > Tez session disconnect and reconnect on HS2 HA failover > --- > > Key: HIVE-18952 > URL: https://issues.apache.org/jira/browse/HIVE-18952 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Sergey Shelukhin >Priority: Major > Fix For: master-tez092 > > Attachments: HIVE-18952.01.patch, HIVE-18952.02.patch, > HIVE-18952.03.patch, HIVE-18952.patch > > > Now that TEZ-3892 is committed, HIVE-18281 can make use of tez session > disconnect and reconnect on HA failover. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20536) Add Surrogate Keys function to Hive
[ https://issues.apache.org/jira/browse/HIVE-20536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621374#comment-16621374 ] Hive QA commented on HIVE-20536: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 38s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 1s{color} | {color:blue} ql in master has 2326 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 45s{color} | {color:red} ql: The patch generated 2 new + 640 unchanged - 0 fixed = 642 total (was 640) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. 
Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 24m 39s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13917/dev-support/hive-personality.sh | | git revision | master / 487714a | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-13917/yetus/diff-checkstyle-ql.txt | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-13917/yetus/whitespace-eol.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13917/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> Add Surrogate Keys function to Hive > --- > > Key: HIVE-20536 > URL: https://issues.apache.org/jira/browse/HIVE-20536 > Project: Hive > Issue Type: Task > Components: Hive >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Attachments: HIVE-20536.01.patch, HIVE-20536.02.patch, > HIVE-20536.03.patch, HIVE-20536.04.patch, HIVE-20536.05.patch, > HIVE-20536.06.patch, HIVE-20536.07.patch > > > Surrogate keys is an ability to generate and use unique integers for each row > in a table. If we have that ability then in conjunction with default clause > we can get surrogate keys functionality. Consider following ddl: > create table t1 (a string, b bigint default unique_long()); > We already have default clause wherein you can specify a function to provide > values. So, what we need is udf which can generate unique longs for each row > across queries for a table. > Idea is to use write_id . This is a column in metastore table TXN_COMPONENTS > whose value is determined at compile time to be used during query execution. > Each query execution generates a new write_id. So, we can seed udf with this > value during compilation. > Then we statically allocate ranges for each task from which it can draw next >
[jira] [Updated] (HIVE-20601) EnvironmentContext null in ALTER_PARTITION event in DbNotificationListener
[ https://issues.apache.org/jira/browse/HIVE-20601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharathkrishna Guruvayoor Murali updated HIVE-20601: Status: Patch Available (was: Open) > EnvironmentContext null in ALTER_PARTITION event in DbNotificationListener > -- > > Key: HIVE-20601 > URL: https://issues.apache.org/jira/browse/HIVE-20601 > Project: Hive > Issue Type: Improvement > Components: Metastore >Affects Versions: 3.0.0, 4.0.0 >Reporter: Bharathkrishna Guruvayoor Murali >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-20601.1.patch > > > Cause : EnvironmentContext not passed here: > [https://github.com/apache/hive/blob/36c33ca066c99dfdb21223a711c0c3f33c85b943/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java#L726] > > It will be useful to have the environmentContext passed to > DbNotificationListener in this case, to know if the alter happened due to a > stat change. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20601) EnvironmentContext null in ALTER_PARTITION event in DbNotificationListener
[ https://issues.apache.org/jira/browse/HIVE-20601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharathkrishna Guruvayoor Murali updated HIVE-20601: Attachment: HIVE-20601.1.patch > EnvironmentContext null in ALTER_PARTITION event in DbNotificationListener > -- > > Key: HIVE-20601 > URL: https://issues.apache.org/jira/browse/HIVE-20601 > Project: Hive > Issue Type: Improvement > Components: Metastore >Affects Versions: 3.0.0, 4.0.0 >Reporter: Bharathkrishna Guruvayoor Murali >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-20601.1.patch > > > Cause : EnvironmentContext not passed here: > [https://github.com/apache/hive/blob/36c33ca066c99dfdb21223a711c0c3f33c85b943/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java#L726] > > It will be useful to have the environmentContext passed to > DbNotificationListener in this case, to know if the alter happened due to a > stat change. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Deleted] (HIVE-20596) full table scan in where clause OR condition
[ https://issues.apache.org/jira/browse/HIVE-20596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez deleted HIVE-20596: --- > full table scan in where clause OR condition > > > Key: HIVE-20596 > URL: https://issues.apache.org/jira/browse/HIVE-20596 > Project: Hive > Issue Type: Bug >Reporter: lishiyang >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20604) Minor compaction disables ORC column stats
[ https://issues.apache.org/jira/browse/HIVE-20604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621351#comment-16621351 ] Sergey Shelukhin commented on HIVE-20604: - I thought this was some sort of perf optimization, disabling ROW_INDEX? cc [~prasanth_j] > Minor compaction disables ORC column stats > -- > > Key: HIVE-20604 > URL: https://issues.apache.org/jira/browse/HIVE-20604 > Project: Hive > Issue Type: Improvement > Components: Transactions >Affects Versions: 1.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Major > Fix For: 4.0.0 > > > {noformat} > @Override > public org.apache.hadoop.hive.ql.exec.FileSinkOperator.RecordWriter > getRawRecordWriter(Path path, Options options) throws IOException { > final Path filename = AcidUtils.createFilename(path, options); > final OrcFile.WriterOptions opts = > OrcFile.writerOptions(options.getTableProperties(), > options.getConfiguration()); > if (!options.isWritingBase()) { > opts.bufferSize(OrcRecordUpdater.DELTA_BUFFER_SIZE) > .stripeSize(OrcRecordUpdater.DELTA_STRIPE_SIZE) > .blockPadding(false) > .compress(CompressionKind.NONE) > .rowIndexStride(0) > ; > } > {noformat} > {{rowIndexStride(0)}} makes {{StripeStatistics.getColumnStatistics()}} return > objects but with meaningless values, like min/max for > {{IntegerColumnStatistics}} set to MIN_LONG/MAX_LONG. > This interferes with ability to infer min ROW_ID for a split but also creates > inefficient files. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19950) Hive ACID NOT LOCK LockComponent Correctly
[ https://issues.apache.org/jira/browse/HIVE-19950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621348#comment-16621348 ] Hive QA commented on HIVE-19950: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12928594/patch.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13916/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13916/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13916/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-09-20 00:31:21.054 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-13916/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-09-20 00:31:21.058 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 487714a HIVE-19166: TestMiniLlapLocalCliDriver sysdb failure (Daniel Dai, reviewed by Vaibhav Gumashta) + git clean -f -d Removing standalone-metastore/metastore-server/src/gen/ + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 487714a HIVE-19166: TestMiniLlapLocalCliDriver sysdb failure (Daniel Dai, reviewed by Vaibhav Gumashta) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-09-20 00:31:21.775 + rm -rf ../yetus_PreCommit-HIVE-Build-13916 + mkdir ../yetus_PreCommit-HIVE-Build-13916 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-13916 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-13916/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: a/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java: does not exist in index error: standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java: does not exist in index error: src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java: does not exist in index The patch does not appear to apply with p0, p1, or p2 + result=1 + '[' 1 -ne 0 ']' + rm -rf yetus_PreCommit-HIVE-Build-13916 + exit 1 ' {noformat} This message is automatically generated. 
ATTACHMENT ID: 12928594 - PreCommit-HIVE-Build > Hive ACID NOT LOCK LockComponent Correctly > -- > > Key: HIVE-19950 > URL: https://issues.apache.org/jira/browse/HIVE-19950 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.3.2 >Reporter: nickSoul >Priority: Blocker > Attachments: patch.patch > > > Hi, > When using Streaming Mutation recently, I found LockComponents were not > locked correctly by the current transaction. Below is my test case: > Step1: Begin a transaction with transactionId 126, and the transaction locks > a table. Then the transaction hangs. The lock information was correctly > stored in MariaDB. > {code:sql} > MariaDB [hive]> select > HL_LOCK_EXT_ID,HL_LOCK_INT_ID,HL_TXNID,HL_DB,HL_TABLE,HL_PARTITION,HL_LOCK_STATE,HL_LOCK_TYPE,HL_ACQUIRED_AT,HL_BLOCKEDBY_EXT_ID,HL_BLOCKEDBY_INT_ID > from HIVE_LOCKS; > {code} > | HL_LOCK_EXT_ID | HL_LOCK_INT_ID | HL_TXNID | HL_DB | HL_TABLE | > HL_PARTITION | HL_LOCK_STATE | HL_LOCK_TYPE | HL_ACQUIRED_AT | > HL_BLOCKEDBY_EXT_ID | HL_BLOCKEDBY_INT_ID | > | 384 | 1 | 126 | test_acid | acid_test | NULL | a | w | 1529512857000 | NULL > | NULL | > > Step2: Begin another transaction with transactionId 127 before the previous > transaction 126 finishes. Transaction 127 tries
[jira] [Assigned] (HIVE-20604) Minor compaction disables ORC column stats
[ https://issues.apache.org/jira/browse/HIVE-20604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman reassigned HIVE-20604: - > Minor compaction disables ORC column stats > -- > > Key: HIVE-20604 > URL: https://issues.apache.org/jira/browse/HIVE-20604 > Project: Hive > Issue Type: Improvement > Components: Transactions >Affects Versions: 1.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Major > Fix For: 4.0.0 > > > {noformat} > @Override > public org.apache.hadoop.hive.ql.exec.FileSinkOperator.RecordWriter > getRawRecordWriter(Path path, Options options) throws IOException { > final Path filename = AcidUtils.createFilename(path, options); > final OrcFile.WriterOptions opts = > OrcFile.writerOptions(options.getTableProperties(), > options.getConfiguration()); > if (!options.isWritingBase()) { > opts.bufferSize(OrcRecordUpdater.DELTA_BUFFER_SIZE) > .stripeSize(OrcRecordUpdater.DELTA_STRIPE_SIZE) > .blockPadding(false) > .compress(CompressionKind.NONE) > .rowIndexStride(0) > ; > } > {noformat} > {{rowIndexStride(0)}} makes {{StripeStatistics.getColumnStatistics()}} return > objects but with meaningless values, like min/max for > {{IntegerColumnStatistics}} set to MIN_LONG/MAX_LONG. > This interferes with ability to infer min ROW_ID for a split but also creates > inefficient files. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20022) Upgrade hadoop.version to 3.1.1
[ https://issues.apache.org/jira/browse/HIVE-20022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621345#comment-16621345 ] Hive QA commented on HIVE-20022: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12940388/HIVE-20022.3.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 14961 tests executed *Failed tests:* {noformat} TestMiniSparkOnYarnCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=187) [infer_bucket_sort_reducers_power_two.q,list_bucket_dml_10.q,orc_merge9.q,leftsemijoin_mr.q,bucket6.q,bucketmapjoin7.q,uber_reduce.q,empty_dir_in_table.q,vector_outer_join2.q,spark_explain_groupbyshuffle.q,spark_dynamic_partition_pruning.q,spark_combine_equivalent_work.q,orc_merge1.q,spark_use_op_stats.q,orc_merge_diff_fs.q,quotedid_smb.q,truncate_column_buckets.q,spark_vectorized_dynamic_partition_pruning.q,spark_in_process_launcher.q,orc_merge3.q] org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys (batchId=243) org.apache.hive.spark.client.rpc.TestRpc.testClientTimeout (batchId=318) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13915/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13915/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13915/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 3 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12940388 - PreCommit-HIVE-Build > Upgrade hadoop.version to 3.1.1 > --- > > Key: HIVE-20022 > URL: https://issues.apache.org/jira/browse/HIVE-20022 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Daniel Voros >Assignee: Daniel Voros >Priority: Blocker > Attachments: HIVE-20022.1.patch, HIVE-20022.2.patch, > HIVE-20022.3.patch, HIVE-20022.3.patch > > > HIVE-19304 is relying on YARN-7142 and YARN-8122 that will only be released > in Hadoop 3.1.1. We should upgrade when possible. > cc [~gsaha] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20603) "Wrong FS" error when inserting to partition after changing table location filesystem
[ https://issues.apache.org/jira/browse/HIVE-20603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621343#comment-16621343 ] Sergey Shelukhin commented on HIVE-20603: - Wow, this is even funnier than the old issue... so it used to just arbitrarily append partition path to table FS scheme and authority? I wonder what the logic behind that is. +1 pending tests, if I understand this correctly. > "Wrong FS" error when inserting to partition after changing table location > filesystem > - > > Key: HIVE-20603 > URL: https://issues.apache.org/jira/browse/HIVE-20603 > Project: Hive > Issue Type: Bug >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Attachments: HIVE-20603.1.patch > > > Inserting into an existing partition, after changing a table's location to > point to a different HDFS filesystem: > {noformat} >query += "CREATE TABLE test_managed_tbl (id int, name string, dept string) > PARTITIONED BY (year int);\n" > query += "INSERT INTO test_managed_tbl PARTITION (year=2016) VALUES > (8,'Henry','CSE');\n" > query += "ALTER TABLE test_managed_tbl ADD PARTITION (year=2017);\n" > query += "ALTER TABLE test_managed_tbl SET LOCATION > > 'hdfs://ns2/warehouse/tablespace/managed/hive/test_managed_tbl'" > query += "INSERT INTO test_managed_tbl PARTITION (year=2017) VALUES > (9,'Harris','CSE');\n" > {noformat} > Results in the following error: > {noformat} > java.lang.IllegalArgumentException: Wrong FS: > hdfs://ns1/warehouse/tablespace/managed/hive/test_managed_tbl/year=2017, > expected: hdfs://ns2 > at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:781) > at > org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:240) > at > org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1583) > at > org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1580) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1595) > at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1734) > at org.apache.hadoop.hive.ql.metadata.Hive.copyFiles(Hive.java:4141) > at > org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1966) > at > org.apache.hadoop.hive.ql.exec.MoveTask.handleStaticParts(MoveTask.java:477) > at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:397) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:210) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2701) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2372) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2048) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1746) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1740) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
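The failure mode in the stack trace above comes down to a scheme/authority comparison: the stored partition path still carries the old {{hdfs://ns1}} authority while the table's filesystem is now {{hdfs://ns2}}. The sketch below is plain Java, not the actual Hadoop {{FileSystem.checkPath}} implementation, but it reproduces the shape of the check that throws the "Wrong FS" {{IllegalArgumentException}}.

```java
import java.net.URI;

// Minimal sketch of the scheme/authority check behind the "Wrong FS" error.
// Not Hadoop code; the real check lives in FileSystem.checkPath.
public class WrongFsCheck {
    static void checkPath(URI fsUri, URI path) {
        String scheme = path.getScheme();
        if (scheme != null && !scheme.equalsIgnoreCase(fsUri.getScheme())) {
            throw new IllegalArgumentException("Wrong FS: " + path + ", expected: " + fsUri);
        }
        String authority = path.getAuthority();
        if (authority != null && !authority.equalsIgnoreCase(fsUri.getAuthority())) {
            // e.g. partition path kept ns1 after the table moved to ns2
            throw new IllegalArgumentException("Wrong FS: " + path + ", expected: " + fsUri);
        }
    }

    public static void main(String[] args) {
        URI fs = URI.create("hdfs://ns2");
        checkPath(fs, URI.create("hdfs://ns2/warehouse/tbl/year=2017")); // passes
        try {
            checkPath(fs, URI.create("hdfs://ns1/warehouse/tbl/year=2017"));
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```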
[jira] [Updated] (HIVE-20603) "Wrong FS" error when inserting to partition after changing table location filesystem
[ https://issues.apache.org/jira/browse/HIVE-20603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-20603: -- Status: Patch Available (was: Open) > "Wrong FS" error when inserting to partition after changing table location > filesystem > - > > Key: HIVE-20603 > URL: https://issues.apache.org/jira/browse/HIVE-20603 > Project: Hive > Issue Type: Bug >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Attachments: HIVE-20603.1.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20603) "Wrong FS" error when inserting to partition after changing table location filesystem
[ https://issues.apache.org/jira/browse/HIVE-20603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-20603: -- Attachment: HIVE-20603.1.patch > "Wrong FS" error when inserting to partition after changing table location > filesystem > - > > Key: HIVE-20603 > URL: https://issues.apache.org/jira/browse/HIVE-20603 > Project: Hive > Issue Type: Bug >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Attachments: HIVE-20603.1.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20603) "Wrong FS" error when inserting to partition after changing table location filesystem
[ https://issues.apache.org/jira/browse/HIVE-20603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621336#comment-16621336 ] Jason Dere commented on HIVE-20603: --- Looks like HIVE-19891 changed some behavior relevant to this, but missed out on the case where the new table location used a different FS. cc [~sershe] [~ashutoshc] > "Wrong FS" error when inserting to partition after changing table location > filesystem > - > > Key: HIVE-20603 > URL: https://issues.apache.org/jira/browse/HIVE-20603 > Project: Hive > Issue Type: Bug >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20570) Union ALL with hive.optimize.union.remove=true has incorrect plan
[ https://issues.apache.org/jira/browse/HIVE-20570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Janaki Lahorani updated HIVE-20570: --- Resolution: Fixed Fix Version/s: 4.0.0 Release Note: Patch pushed to master branch. Status: Resolved (was: Patch Available) > Union ALL with hive.optimize.union.remove=true has incorrect plan > - > > Key: HIVE-20570 > URL: https://issues.apache.org/jira/browse/HIVE-20570 > Project: Hive > Issue Type: Bug >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-20570.1.patch, HIVE-20570.2.patch, > HIVE-20570.3.patch > > > When hive.optimize.union.remove=true and a select query is run with group by, > the final fetch waits for only one of the branches, not both. > Test Case: > {code} > create table if not exists test_table(column1 string, column2 int); > insert into test_table values('a',1),('b',2); > set hive.optimize.union.remove=true; > set mapred.input.dir.recursive=true; > explain > select column1 from test_table group by column1 > union all > select column1 from test_table group by column1; > {code} > In the plan below, the two stages correspond to the two parts of the UNION ALL, but > the final fetch operator (Stage-0) depends on only one of the stages when it > should depend on both. 
> Plan: > {code} > STAGE DEPENDENCIES: > Stage-1 is a root stage > Stage-2 is a root stage > *Stage-0 depends on stages: Stage-1* > STAGE PLANS: > Stage: Stage-1 > Map Reduce > Map Operator Tree: > TableScan > alias: test_table > Statistics: Num rows: 2 Data size: 6 Basic stats: COMPLETE Column > stats: NONE > Select Operator > expressions: column1 (type: string) > outputColumnNames: column1 > Statistics: Num rows: 2 Data size: 6 Basic stats: COMPLETE > Column stats: NONE > Group By Operator > keys: column1 (type: string) > mode: hash > outputColumnNames: _col0 > Statistics: Num rows: 2 Data size: 6 Basic stats: COMPLETE > Column stats: NONE > Reduce Output Operator > key expressions: _col0 (type: string) > sort order: + > Map-reduce partition columns: _col0 (type: string) > Statistics: Num rows: 2 Data size: 6 Basic stats: COMPLETE > Column stats: NONE > Execution mode: vectorized > Reduce Operator Tree: > Group By Operator > keys: KEY._col0 (type: string) > mode: mergepartial > outputColumnNames: _col0 > Statistics: Num rows: 1 Data size: 3 Basic stats: COMPLETE Column > stats: NONE > File Output Operator > compressed: false > Statistics: Num rows: 1 Data size: 3 Basic stats: COMPLETE Column > stats: NONE > table: > input format: org.apache.hadoop.mapred.SequenceFileInputFormat > output format: > org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat > serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe > Stage: Stage-2 > Map Reduce > Map Operator Tree: > TableScan > alias: test_table > Statistics: Num rows: 2 Data size: 6 Basic stats: COMPLETE Column > stats: NONE > Select Operator > expressions: column1 (type: string) > outputColumnNames: column1 > Statistics: Num rows: 2 Data size: 6 Basic stats: COMPLETE > Column stats: NONE > Group By Operator > keys: column1 (type: string) > mode: hash > outputColumnNames: _col0 > Statistics: Num rows: 2 Data size: 6 Basic stats: COMPLETE > Column stats: NONE > Reduce Output Operator > key expressions: _col0 (type: 
string) > sort order: + > Map-reduce partition columns: _col0 (type: string) > Statistics: Num rows: 2 Data size: 6 Basic stats: COMPLETE > Column stats: NONE > Execution mode: vectorized > Reduce Operator Tree: > Group By Operator > keys: KEY._col0 (type: string) > mode: mergepartial > outputColumnNames: _col0 > Statistics: Num rows: 1 Data size: 3 Basic stats: COMPLETE Column > stats: NONE > File Output Operator > compressed: false > Statistics: Num rows: 1 Data size: 3 Basic stats: COMPLETE Column > stats: NONE > table: > input format: org.apache.hadoop.mapred.SequenceFileInputFormat >
[jira] [Commented] (HIVE-20022) Upgrade hadoop.version to 3.1.1
[ https://issues.apache.org/jira/browse/HIVE-20022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621332#comment-16621332 ] Hive QA commented on HIVE-20022: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 25s{color} | {color:green} master passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 3m 36s{color} | {color:red} root in master failed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 56s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 44s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 3m 38s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 3m 38s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 7m 0s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 36m 53s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc xml compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13915/dev-support/hive-personality.sh | | git revision | master / 487714a | | Default Java | 1.8.0_111 | | compile | http://104.198.109.242/logs//PreCommit-HIVE-Build-13915/yetus/branch-compile-root.txt | | compile | http://104.198.109.242/logs//PreCommit-HIVE-Build-13915/yetus/patch-compile-root.txt | | javac | http://104.198.109.242/logs//PreCommit-HIVE-Build-13915/yetus/patch-compile-root.txt | | modules | C: . U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13915/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Upgrade hadoop.version to 3.1.1 > --- > > Key: HIVE-20022 > URL: https://issues.apache.org/jira/browse/HIVE-20022 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Daniel Voros >Assignee: Daniel Voros >Priority: Blocker > Attachments: HIVE-20022.1.patch, HIVE-20022.2.patch, > HIVE-20022.3.patch, HIVE-20022.3.patch > > > HIVE-19304 is relying on YARN-7142 and YARN-8122 that will only be released > in Hadoop 3.1.1. We should upgrade when possible. > cc [~gsaha] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-20603) "Wrong FS" error when inserting to partition after changing table location filesystem
[ https://issues.apache.org/jira/browse/HIVE-20603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere reassigned HIVE-20603: - > "Wrong FS" error when inserting to partition after changing table location > filesystem > - > > Key: HIVE-20603 > URL: https://issues.apache.org/jira/browse/HIVE-20603 > Project: Hive > Issue Type: Bug >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-17043) Remove non unique columns from group by keys if not referenced later
[ https://issues.apache.org/jira/browse/HIVE-17043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-17043: --- Status: Patch Available (was: Open) > Remove non unique columns from group by keys if not referenced later > > > Key: HIVE-17043 > URL: https://issues.apache.org/jira/browse/HIVE-17043 > Project: Hive > Issue Type: Sub-task > Components: Logical Optimizer >Affects Versions: 3.0.0 >Reporter: Ashutosh Chauhan >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-17043.1.patch, HIVE-17043.2.patch, > HIVE-17043.3.patch > > > Group by keys may be a mix of unique (or primary) keys and regular columns. > In such cases presence of regular column won't alter cardinality of groups. > So, if regular columns are not referenced later, they can be dropped from > group by keys. Depending on operator tree may result in those columns not > being read at all from disk in best case. In worst case, we will avoid > shuffling and sorting regular columns from mapper to reducer, which still > could be substantial CPU and network savings. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
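The pruning rule described in the issue above can be sketched in a few lines. This is not Hive's optimizer code (the real work happens on the Calcite operator tree with metadata about unique keys); the method and its inputs here are hypothetical stand-ins that just show the decision: if the group-by keys include a unique key, every other key that is not referenced later can be dropped without changing group cardinality.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Illustrative sketch of the group-by key pruning described above.
public class GroupByKeyPruner {
    static List<String> prune(List<String> keys, Set<String> uniqueKeys, Set<String> referencedLater) {
        // Without a unique key among the group-by keys, dropping anything
        // could change group cardinality, so leave the keys alone.
        if (uniqueKeys.isEmpty() || !keys.containsAll(uniqueKeys)) {
            return keys;
        }
        // Keep unique keys and anything a later operator still reads.
        return keys.stream()
                   .filter(k -> uniqueKeys.contains(k) || referencedLater.contains(k))
                   .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // GROUP BY id, name where id is unique and name is never read again
        System.out.println(prune(List.of("id", "name"), Set.of("id"), Set.of())); // [id]
    }
}
```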
[jira] [Updated] (HIVE-17043) Remove non unique columns from group by keys if not referenced later
[ https://issues.apache.org/jira/browse/HIVE-17043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-17043: --- Status: Open (was: Patch Available) > Remove non unique columns from group by keys if not referenced later > > > Key: HIVE-17043 > URL: https://issues.apache.org/jira/browse/HIVE-17043 > Project: Hive > Issue Type: Sub-task > Components: Logical Optimizer >Affects Versions: 3.0.0 >Reporter: Ashutosh Chauhan >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-17043.1.patch, HIVE-17043.2.patch, > HIVE-17043.3.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17043) Remove non unique columns from group by keys if not referenced later
[ https://issues.apache.org/jira/browse/HIVE-17043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621318#comment-16621318 ] Vineet Garg commented on HIVE-17043: Patch (3) adds NOT NULL filter elimination tests > Remove non unique columns from group by keys if not referenced later > > > Key: HIVE-17043 > URL: https://issues.apache.org/jira/browse/HIVE-17043 > Project: Hive > Issue Type: Sub-task > Components: Logical Optimizer >Affects Versions: 3.0.0 >Reporter: Ashutosh Chauhan >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-17043.1.patch, HIVE-17043.2.patch, > HIVE-17043.3.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-17043) Remove non unique columns from group by keys if not referenced later
[ https://issues.apache.org/jira/browse/HIVE-17043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-17043: --- Attachment: HIVE-17043.3.patch > Remove non unique columns from group by keys if not referenced later > > > Key: HIVE-17043 > URL: https://issues.apache.org/jira/browse/HIVE-17043 > Project: Hive > Issue Type: Sub-task > Components: Logical Optimizer >Affects Versions: 3.0.0 >Reporter: Ashutosh Chauhan >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-17043.1.patch, HIVE-17043.2.patch, > HIVE-17043.3.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20595) Add findbugs-exclude.xml to metastore-server
[ https://issues.apache.org/jira/browse/HIVE-20595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621309#comment-16621309 ] Peter Vary commented on HIVE-20595: --- +1 pending tests > Add findbugs-exclude.xml to metastore-server > > > Key: HIVE-20595 > URL: https://issues.apache.org/jira/browse/HIVE-20595 > Project: Hive > Issue Type: Bug > Components: Hive, Standalone Metastore >Affects Versions: 4.0.0 >Reporter: Laszlo Pinter >Assignee: Laszlo Pinter >Priority: Blocker > Attachments: HIVE-20595.01.patch > > > The findbugs-exclude.xml is missing from > standalone-metastore/metastore-server/findbugs. This should be added, > otherwise the findbugs check will fail. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20497) ParseException, failed to recognize quoted identifier when re-parsing the re-written query
[ https://issues.apache.org/jira/browse/HIVE-20497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621308#comment-16621308 ] Hive QA commented on HIVE-20497: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12940387/HIVE-20497.3.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 14981 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.ql.security.TestMetastoreAuthorizationProvider.testSimplePrivileges (batchId=245) org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testKillQuery (batchId=251) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13914/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13914/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13914/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12940387 - PreCommit-HIVE-Build > ParseException, failed to recognize quoted identifier when re-parsing the > re-written query > -- > > Key: HIVE-20497 > URL: https://issues.apache.org/jira/browse/HIVE-20497 > Project: Hive > Issue Type: Bug > Components: Parser > Environment: hive 2.1.1 >Reporter: zhuwei >Assignee: zhuwei >Priority: Major > Attachments: HIVE-20497.1.patch, HIVE-20497.2.patch, > HIVE-20497.3.patch > > > select `user` from team; > If we have a table `team`, and one of its columns has been masked out with > `` by column-level authorization. 
The above query will fail with error > "SemanticException org.apache.hadoop.hive.ql.parse.ParseException: line 1:9 > Failed to recognize predicate 'user'. Failed rule: 'identifier' in expression > specification" > The root cause is that the backquotes are lost when the AST is rewritten, so the > re-parsed query sees 'user' as a bare reserved word. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
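The failure mode above suggests the shape of a fix: when the rewritten AST is unparsed back to query text, identifiers that collide with reserved words must regain their backquotes before re-parsing. This is a minimal, self-contained sketch of that re-quoting step; the class, method, and the tiny reserved-word set are hypothetical illustrations, not Hive's actual unparser code or its real reserved-word list.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: when a rewritten query is turned back into SQL text,
// an identifier that collides with a reserved word must be re-quoted, or
// re-parsing fails exactly as in the ParseException described above.
public class IdentifierQuoting {
    // Tiny stand-in for Hive's reserved-word list (illustrative only).
    private static final Set<String> RESERVED =
            new HashSet<>(Arrays.asList("user", "order", "group", "table"));

    public static String quoteIfNeeded(String identifier) {
        if (RESERVED.contains(identifier.toLowerCase())) {
            return "`" + identifier + "`";  // restore the backquotes
        }
        return identifier;  // ordinary identifiers pass through unchanged
    }

    public static void main(String[] args) {
        System.out.println(quoteIfNeeded("user")); // reserved -> `user`
        System.out.println(quoteIfNeeded("team")); // ordinary -> team
    }
}
```

With this applied during unparsing, `select user from team` would round-trip as `` select `user` from team `` and re-parse cleanly.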
[jira] [Updated] (HIVE-20440) Create better cache eviction policy for SmallTableCache
[ https://issues.apache.org/jira/browse/HIVE-20440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Antal Sinkovits updated HIVE-20440: --- Attachment: HIVE-20440.06.patch > Create better cache eviction policy for SmallTableCache > --- > > Key: HIVE-20440 > URL: https://issues.apache.org/jira/browse/HIVE-20440 > Project: Hive > Issue Type: Improvement > Components: Spark >Reporter: Antal Sinkovits >Assignee: Antal Sinkovits >Priority: Major > Attachments: HIVE-20440.01.patch, HIVE-20440.02.patch, > HIVE-20440.03.patch, HIVE-20440.04.patch, HIVE-20440.05.patch, > HIVE-20440.06.patch > > > Enhance the SmallTableCache, to use guava cache with soft references, so that > we evict when there is memory pressure. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20497) ParseException, failed to recognize quoted identifier when re-parsing the re-written query
[ https://issues.apache.org/jira/browse/HIVE-20497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621279#comment-16621279 ] Hive QA commented on HIVE-20497: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 35s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 8s{color} | {color:blue} ql in master has 2326 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 24m 45s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13914/dev-support/hive-personality.sh | | git revision | master / 487714a | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13914/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > ParseException, failed to recognize quoted identifier when re-parsing the > re-written query > -- > > Key: HIVE-20497 > URL: https://issues.apache.org/jira/browse/HIVE-20497 > Project: Hive > Issue Type: Bug > Components: Parser > Environment: hive 2.1.1 >Reporter: zhuwei >Assignee: zhuwei >Priority: Major > Attachments: HIVE-20497.1.patch, HIVE-20497.2.patch, > HIVE-20497.3.patch > > > select `user` from team; > If we have a table `team`, and one of its column has been masked out with > `` with column level authorization. The above query will fail with error > "SemanticException org.apache.hadoop.hive.ql.parse.ParseException: line 1:9 > Failed to recognize predicate 'user'. 
Failed rule: 'identifier' in expression > specification" > The root cause is that the backquotes are lost when the AST is rewritten, so the > re-parsed query sees 'user' as a bare reserved word. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20440) Create better cache eviction policy for SmallTableCache
[ https://issues.apache.org/jira/browse/HIVE-20440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Antal Sinkovits updated HIVE-20440: --- Attachment: HIVE-20440.05.patch > Create better cache eviction policy for SmallTableCache > --- > > Key: HIVE-20440 > URL: https://issues.apache.org/jira/browse/HIVE-20440 > Project: Hive > Issue Type: Improvement > Components: Spark >Reporter: Antal Sinkovits >Assignee: Antal Sinkovits >Priority: Major > Attachments: HIVE-20440.01.patch, HIVE-20440.02.patch, > HIVE-20440.03.patch, HIVE-20440.04.patch, HIVE-20440.05.patch > > > Enhance the SmallTableCache, to use guava cache with soft references, so that > we evict when there is memory pressure. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://
[ https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621250#comment-16621250 ] Hive QA commented on HIVE-18871: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12940383/HIVE-18871.7.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 14951 tests executed *Failed tests:* {noformat} TestMiniLlapLocalCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=167) [sysdb.q,tez_dynpart_hashjoin_3.q,dpp.q,topnkey.q,tez_join.q,vectorized_rcfile_columnar.q,vector_reuse_scratchcols.q,delete_where_non_partitioned.q,orc_merge11.q,schema_evol_orc_acid_table.q,cbo_semijoin.q,orc_merge_incompat_schema.q,vectorization_11.q,kryo.q,vector_reduce2.q,vector_interval_mapjoin.q,schema_evol_orc_acidvec_table_update_llap_io.q,tez_joins_explain.q,vector_windowing_order_null.q,explainuser_4.q,vector_llap_io_data_conversion.q,vector_aggregate_9.q,vector_groupby_grouping_sets_limit.q,insert_after_drop_partition.q,default_constraint.q,offset_limit.q,llap_acid_fast.q,subquery_select.q,results_cache_invalidation2.q,delete_where_partitioned.q] org.apache.hadoop.hive.ql.TestTxnCommands.testMergeOnTezEdges (batchId=315) org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testKillQuery (batchId=251) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13913/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13913/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13913/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 3 tests 
failed {noformat} This message is automatically generated. ATTACHMENT ID: 12940383 - PreCommit-HIVE-Build > hive on tez execution error due to set hive.aux.jars.path to hdfs:// > > > Key: HIVE-18871 > URL: https://issues.apache.org/jira/browse/HIVE-18871 > Project: Hive > Issue Type: Bug > Components: Tez >Affects Versions: 2.2.1, 4.0.0, 3.2.0 > Environment: hadoop 2.6.5 > hive 2.2.1 > tez 0.8.4 >Reporter: zhuwei >Assignee: zhuwei >Priority: Major > Attachments: HIVE-18871.1.patch, HIVE-18871.2.patch, > HIVE-18871.3.patch, HIVE-18871.4.patch, HIVE-18871.5.patch, > HIVE-18871.6.patch, HIVE-18871.7.patch > > > When set the properties > hive.aux.jars.path=hdfs://mycluster/apps/hive/lib/guava.jar > and hive.execution.engine=tez; execute any query will fail with below error > log: > exec.Task: Failed to execute tez graph. > java.lang.IllegalArgumentException: Wrong FS: > hdfs://mycluster/apps/hive/lib/guava.jar, expected: file:/// > at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:80) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:529) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:524) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409) > ~[hadoop-common-2.6.0.jar:?] > at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337) > ~[hadoop-common-2.6.0.jar:?] > at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1905) > ~[hadoop-common-2.6.0.jar:?] 
> at > org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeResource(DagUtils.java:1007) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.DagUtils.addTempResources(DagUtils.java:902) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeTempFilesFromConf(DagUtils.java:845) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionState.refreshLocalResourcesFromConf(TezSessionState.java:466) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:252) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager$TezSessionPoolSession.openInternal(TezSessionPoolManager.java:622) > ~[hive-exec-2.1.1.jar:2.1.1] > at >
[jira] [Updated] (HIVE-20593) Load Data for partitioned ACID tables fails with bucketId out of range: -1
[ https://issues.apache.org/jira/browse/HIVE-20593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-20593: -- Attachment: HIVE-20593.2.patch > Load Data for partitioned ACID tables fails with bucketId out of range: -1 > -- > > Key: HIVE-20593 > URL: https://issues.apache.org/jira/browse/HIVE-20593 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.1.0 >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-20593.1.patch, HIVE-20593.2.patch > > > Load data for ACID tables is failing to load ORC files when it is converted > to IAS job. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20602) hive3 crashes after 1min
[ https://issues.apache.org/jira/browse/HIVE-20602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] t oo updated HIVE-20602: Issue Type: Bug (was: New Feature) > hive3 crashes after 1min > > > Key: HIVE-20602 > URL: https://issues.apache.org/jira/browse/HIVE-20602 > Project: Hive > Issue Type: Bug > Components: HiveServer2, Metastore, Standalone Metastore >Affects Versions: 3.0.0 >Reporter: t oo >Priority: Blocker > > Running the hiveserver2 process (Hive v3.0.0) on EC2 (not EMR), the process > starts up and for the first minute everything is OK (I can make a beeline > connection and create/repair/select external Hive tables), but then the > hiveserver2 process crashes. If I restart the process, it crashes again after a > minute even if I do nothing. Checking the logs, I see messages > like 'number of connections to metastore: 1', 'number of connections to > metastore: 2', 'number of connections to metastore: 3', then 'could not bind to > port 1 port already in use', then the end of the logs. > I ran experiments on a few different EC2 instances: with Hive v2.3.2 the > hiveserver2 process never crashes, but with Hive v3.0.0 it consistently > crashes after a minute. > The metastore DB is MySQL RDS; the Hive metastore process never crashed. I can > see that the external Hive table DDLs are persisted in MySQL (i.e. the DBS, > TBLS tables). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://
[ https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621217#comment-16621217 ] Hive QA commented on HIVE-18871: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 42s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 6s{color} | {color:blue} ql in master has 2326 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 41s{color} | {color:red} ql: The patch generated 1 new + 42 unchanged - 0 fixed = 43 total (was 42) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 24m 41s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13913/dev-support/hive-personality.sh | | git revision | master / 487714a | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-13913/yetus/diff-checkstyle-ql.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13913/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> hive on tez execution error due to set hive.aux.jars.path to hdfs:// > > > Key: HIVE-18871 > URL: https://issues.apache.org/jira/browse/HIVE-18871 > Project: Hive > Issue Type: Bug > Components: Tez >Affects Versions: 2.2.1, 4.0.0, 3.2.0 > Environment: hadoop 2.6.5 > hive 2.2.1 > tez 0.8.4 >Reporter: zhuwei >Assignee: zhuwei >Priority: Major > Attachments: HIVE-18871.1.patch, HIVE-18871.2.patch, > HIVE-18871.3.patch, HIVE-18871.4.patch, HIVE-18871.5.patch, > HIVE-18871.6.patch, HIVE-18871.7.patch > > > When set the properties > hive.aux.jars.path=hdfs://mycluster/apps/hive/lib/guava.jar > and hive.execution.engine=tez; execute any query will fail with below error > log: > exec.Task: Failed to execute tez graph. > java.lang.IllegalArgumentException: Wrong FS: > hdfs://mycluster/apps/hive/lib/guava.jar, expected: file:/// > at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:80) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:529) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747) > ~[hadoop-common-2.6.0.jar:?] > at >
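A common fix for this class of "Wrong FS" error in Hadoop-based code is to resolve each resource against the FileSystem derived from its own URI (e.g. `path.getFileSystem(conf)`) instead of the default or local FileSystem; whether the attached HIVE-18871 patches take exactly that approach is not shown here. The scheme mismatch that `FileSystem.checkPath` rejects can be illustrated with plain `java.net.URI` (the class and method below are a hypothetical stand-in, not Hadoop's actual code):

```java
import java.net.URI;

// Sketch of the check behind "Wrong FS: hdfs://..., expected: file:///":
// a Hadoop FileSystem rejects a path whose scheme does not match its own.
// (The real FileSystem.checkPath also compares authorities.)
public class FsSchemeCheck {
    public static boolean belongsTo(String pathUri, String fsUri) {
        String pathScheme = URI.create(pathUri).getScheme();
        String fsScheme = URI.create(fsUri).getScheme();
        // A scheme-less path is relative to the filesystem, so it matches.
        return pathScheme == null || pathScheme.equalsIgnoreCase(fsScheme);
    }

    public static void main(String[] args) {
        String jar = "hdfs://mycluster/apps/hive/lib/guava.jar";
        // Passing an hdfs:// path to the local FS reproduces the mismatch:
        System.out.println(belongsTo(jar, "file:///"));       // false
        // Resolving against the path's own filesystem succeeds:
        System.out.println(belongsTo(jar, "hdfs://mycluster")); // true
    }
}
```

The takeaway: localization code should never assume `file:///` for values of `hive.aux.jars.path`, since users may legitimately point it at HDFS.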
[jira] [Updated] (HIVE-17684) HoS memory issues with MapJoinMemoryExhaustionHandler
[ https://issues.apache.org/jira/browse/HIVE-17684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Misha Dmitriev updated HIVE-17684: -- Status: Patch Available (was: In Progress) Resubmitting the same patch; hoping there will be no unrelated compilation error this time. > HoS memory issues with MapJoinMemoryExhaustionHandler > - > > Key: HIVE-17684 > URL: https://issues.apache.org/jira/browse/HIVE-17684 > Project: Hive > Issue Type: Bug > Components: Spark >Reporter: Sahil Takiar >Assignee: Misha Dmitriev >Priority: Major > Attachments: HIVE-17684.01.patch, HIVE-17684.02.patch, > HIVE-17684.03.patch, HIVE-17684.04.patch, HIVE-17684.05.patch, > HIVE-17684.06.patch, HIVE-17684.07.patch, HIVE-17684.08.patch, > HIVE-17684.09.patch, HIVE-17684.10.patch > > > We have seen a number of memory issues due to the {{HashSinkOperator}} use of > the {{MapJoinMemoryExhaustionHandler}}. This handler is meant to detect > scenarios where the small table is taking too much space in memory, in which > case a {{MapJoinMemoryExhaustionError}} is thrown. > The configs to control this logic are: > {{hive.mapjoin.localtask.max.memory.usage}} (default 0.90) > {{hive.mapjoin.followby.gby.localtask.max.memory.usage}} (default 0.55) > The handler uses the {{MemoryMXBean}} and the following logic > to estimate how much memory the {{HashMap}} is consuming: > {{MemoryMXBean#getHeapMemoryUsage().getUsed() / > MemoryMXBean#getHeapMemoryUsage().getMax()}} > The issue is that {{MemoryMXBean#getHeapMemoryUsage().getUsed()}} can be > inaccurate: it counts all reachable and > unreachable memory on the heap, so there may be a bunch of garbage data that > the JVM just hasn't taken the time to reclaim yet. This can lead to > intermittent failures of this check even though a simple GC would have > reclaimed enough space for the process to continue working. > We should re-think the usage of {{MapJoinMemoryExhaustionHandler}} for HoS. 
> In Hive-on-MR this probably made sense to use because every Hive task was run > in a dedicated container, so a Hive Task could assume it created most of the > data on the heap. However, in Hive-on-Spark there can be multiple Hive Tasks > running in a single executor, each doing different things. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
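The used/max fraction described above can be reproduced with the standard `java.lang.management` API. This is a minimal sketch of the check's logic, not Hive's actual handler code; the threshold value mirrors the `hive.mapjoin.localtask.max.memory.usage` default quoted in the description.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Sketch of the heap-fraction check the handler performs. Because
// getUsed() also counts unreachable (not-yet-collected) garbage, the
// fraction can spike above the threshold even when a GC would free
// plenty of space - which is the intermittent failure described above.
public class HeapFractionCheck {
    public static double heapFraction() {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        // getMax() may be -1 if the max is undefined; fall back to committed.
        long max = heap.getMax() > 0 ? heap.getMax() : heap.getCommitted();
        return (double) heap.getUsed() / max;
    }

    public static boolean wouldAbort(double fraction, double threshold) {
        return fraction > threshold;  // the handler throws an Error here
    }

    public static void main(String[] args) {
        double f = heapFraction();
        System.out.printf("heap used/max = %.3f%n", f);
        // 0.90 mirrors the hive.mapjoin.localtask.max.memory.usage default.
        System.out.println(wouldAbort(f, 0.90));
    }
}
```

A more robust variant would trigger a GC (or consult post-GC collection usage) before aborting, so transient garbage does not trip the limit.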
[jira] [Updated] (HIVE-17684) HoS memory issues with MapJoinMemoryExhaustionHandler
[ https://issues.apache.org/jira/browse/HIVE-17684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Misha Dmitriev updated HIVE-17684: -- Attachment: HIVE-17684.10.patch > HoS memory issues with MapJoinMemoryExhaustionHandler > - > > Key: HIVE-17684 > URL: https://issues.apache.org/jira/browse/HIVE-17684 > Project: Hive > Issue Type: Bug > Components: Spark >Reporter: Sahil Takiar >Assignee: Misha Dmitriev >Priority: Major > Attachments: HIVE-17684.01.patch, HIVE-17684.02.patch, > HIVE-17684.03.patch, HIVE-17684.04.patch, HIVE-17684.05.patch, > HIVE-17684.06.patch, HIVE-17684.07.patch, HIVE-17684.08.patch, > HIVE-17684.09.patch, HIVE-17684.10.patch > > > We have seen a number of memory issues due the {{HashSinkOperator}} use of > the {{MapJoinMemoryExhaustionHandler}}. This handler is meant to detect > scenarios where the small table is taking too much space in memory, in which > case a {{MapJoinMemoryExhaustionError}} is thrown. > The configs to control this logic are: > {{hive.mapjoin.localtask.max.memory.usage}} (default 0.90) > {{hive.mapjoin.followby.gby.localtask.max.memory.usage}} (default 0.55) > The handler works by using the {{MemoryMXBean}} and uses the following logic > to estimate how much memory the {{HashMap}} is consuming: > {{MemoryMXBean#getHeapMemoryUsage().getUsed() / > MemoryMXBean#getHeapMemoryUsage().getMax()}} > The issue is that {{MemoryMXBean#getHeapMemoryUsage().getUsed()}} can be > inaccurate. The value returned by this method returns all reachable and > unreachable memory on the heap, so there may be a bunch of garbage data, and > the JVM just hasn't taken the time to reclaim it all. This can lead to > intermittent failures of this check even though a simple GC would have > reclaimed enough space for the process to continue working. > We should re-think the usage of {{MapJoinMemoryExhaustionHandler}} for HoS. 
> In Hive-on-MR this probably made sense to use because every Hive task was run > in a dedicated container, so a Hive Task could assume it created most of the > data on the heap. However, in Hive-on-Spark there can be multiple Hive Tasks > running in a single executor, each doing different things. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-17684) HoS memory issues with MapJoinMemoryExhaustionHandler
[ https://issues.apache.org/jira/browse/HIVE-17684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Misha Dmitriev updated HIVE-17684: -- Attachment: HIVE-17684.10.patch > HoS memory issues with MapJoinMemoryExhaustionHandler > - > > Key: HIVE-17684 > URL: https://issues.apache.org/jira/browse/HIVE-17684 > Project: Hive > Issue Type: Bug > Components: Spark >Reporter: Sahil Takiar >Assignee: Misha Dmitriev >Priority: Major > Attachments: HIVE-17684.01.patch, HIVE-17684.02.patch, > HIVE-17684.03.patch, HIVE-17684.04.patch, HIVE-17684.05.patch, > HIVE-17684.06.patch, HIVE-17684.07.patch, HIVE-17684.08.patch, > HIVE-17684.09.patch > > > We have seen a number of memory issues due the {{HashSinkOperator}} use of > the {{MapJoinMemoryExhaustionHandler}}. This handler is meant to detect > scenarios where the small table is taking too much space in memory, in which > case a {{MapJoinMemoryExhaustionError}} is thrown. > The configs to control this logic are: > {{hive.mapjoin.localtask.max.memory.usage}} (default 0.90) > {{hive.mapjoin.followby.gby.localtask.max.memory.usage}} (default 0.55) > The handler works by using the {{MemoryMXBean}} and uses the following logic > to estimate how much memory the {{HashMap}} is consuming: > {{MemoryMXBean#getHeapMemoryUsage().getUsed() / > MemoryMXBean#getHeapMemoryUsage().getMax()}} > The issue is that {{MemoryMXBean#getHeapMemoryUsage().getUsed()}} can be > inaccurate. The value returned by this method returns all reachable and > unreachable memory on the heap, so there may be a bunch of garbage data, and > the JVM just hasn't taken the time to reclaim it all. This can lead to > intermittent failures of this check even though a simple GC would have > reclaimed enough space for the process to continue working. > We should re-think the usage of {{MapJoinMemoryExhaustionHandler}} for HoS. 
> In Hive-on-MR this probably made sense to use because every Hive task was run > in a dedicated container, so a Hive Task could assume it created most of the > data on the heap. However, in Hive-on-Spark there can be multiple Hive Tasks > running in a single executor, each doing different things. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-17684) HoS memory issues with MapJoinMemoryExhaustionHandler
[ https://issues.apache.org/jira/browse/HIVE-17684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Misha Dmitriev updated HIVE-17684: -- Attachment: (was: HIVE-17684.10.patch) > HoS memory issues with MapJoinMemoryExhaustionHandler > - > > Key: HIVE-17684 > URL: https://issues.apache.org/jira/browse/HIVE-17684 > Project: Hive > Issue Type: Bug > Components: Spark >Reporter: Sahil Takiar >Assignee: Misha Dmitriev >Priority: Major > Attachments: HIVE-17684.01.patch, HIVE-17684.02.patch, > HIVE-17684.03.patch, HIVE-17684.04.patch, HIVE-17684.05.patch, > HIVE-17684.06.patch, HIVE-17684.07.patch, HIVE-17684.08.patch, > HIVE-17684.09.patch > > > We have seen a number of memory issues due the {{HashSinkOperator}} use of > the {{MapJoinMemoryExhaustionHandler}}. This handler is meant to detect > scenarios where the small table is taking too much space in memory, in which > case a {{MapJoinMemoryExhaustionError}} is thrown. > The configs to control this logic are: > {{hive.mapjoin.localtask.max.memory.usage}} (default 0.90) > {{hive.mapjoin.followby.gby.localtask.max.memory.usage}} (default 0.55) > The handler works by using the {{MemoryMXBean}} and uses the following logic > to estimate how much memory the {{HashMap}} is consuming: > {{MemoryMXBean#getHeapMemoryUsage().getUsed() / > MemoryMXBean#getHeapMemoryUsage().getMax()}} > The issue is that {{MemoryMXBean#getHeapMemoryUsage().getUsed()}} can be > inaccurate. The value returned by this method returns all reachable and > unreachable memory on the heap, so there may be a bunch of garbage data, and > the JVM just hasn't taken the time to reclaim it all. This can lead to > intermittent failures of this check even though a simple GC would have > reclaimed enough space for the process to continue working. > We should re-think the usage of {{MapJoinMemoryExhaustionHandler}} for HoS. 
> In Hive-on-MR this probably made sense to use because every Hive task was run > in a dedicated container, so a Hive Task could assume it created most of the > data on the heap. However, in Hive-on-Spark there can be multiple Hive Tasks > running in a single executor, each doing different things. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-17684) HoS memory issues with MapJoinMemoryExhaustionHandler
[ https://issues.apache.org/jira/browse/HIVE-17684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Misha Dmitriev updated HIVE-17684: -- Status: In Progress (was: Patch Available) > HoS memory issues with MapJoinMemoryExhaustionHandler > - > > Key: HIVE-17684 > URL: https://issues.apache.org/jira/browse/HIVE-17684 > Project: Hive > Issue Type: Bug > Components: Spark >Reporter: Sahil Takiar >Assignee: Misha Dmitriev >Priority: Major > Attachments: HIVE-17684.01.patch, HIVE-17684.02.patch, > HIVE-17684.03.patch, HIVE-17684.04.patch, HIVE-17684.05.patch, > HIVE-17684.06.patch, HIVE-17684.07.patch, HIVE-17684.08.patch, > HIVE-17684.09.patch > > > We have seen a number of memory issues due the {{HashSinkOperator}} use of > the {{MapJoinMemoryExhaustionHandler}}. This handler is meant to detect > scenarios where the small table is taking too much space in memory, in which > case a {{MapJoinMemoryExhaustionError}} is thrown. > The configs to control this logic are: > {{hive.mapjoin.localtask.max.memory.usage}} (default 0.90) > {{hive.mapjoin.followby.gby.localtask.max.memory.usage}} (default 0.55) > The handler works by using the {{MemoryMXBean}} and uses the following logic > to estimate how much memory the {{HashMap}} is consuming: > {{MemoryMXBean#getHeapMemoryUsage().getUsed() / > MemoryMXBean#getHeapMemoryUsage().getMax()}} > The issue is that {{MemoryMXBean#getHeapMemoryUsage().getUsed()}} can be > inaccurate. The value returned by this method returns all reachable and > unreachable memory on the heap, so there may be a bunch of garbage data, and > the JVM just hasn't taken the time to reclaim it all. This can lead to > intermittent failures of this check even though a simple GC would have > reclaimed enough space for the process to continue working. > We should re-think the usage of {{MapJoinMemoryExhaustionHandler}} for HoS. 
> In Hive-on-MR this probably made sense to use because every Hive task was run > in a dedicated container, so a Hive Task could assume it created most of the > data on the heap. However, in Hive-on-Spark there can be multiple Hive Tasks > running in a single executor, each doing different things. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20552) Get Schema from LogicalPlan faster
[ https://issues.apache.org/jira/browse/HIVE-20552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621176#comment-16621176 ] Hive QA commented on HIVE-20552: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12940382/HIVE-20552.3.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14981 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13912/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13912/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13912/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12940382 - PreCommit-HIVE-Build > Get Schema from LogicalPlan faster > -- > > Key: HIVE-20552 > URL: https://issues.apache.org/jira/browse/HIVE-20552 > Project: Hive > Issue Type: Improvement >Reporter: Teddy Choi >Assignee: Teddy Choi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-20552.1.patch, HIVE-20552.2.patch, > HIVE-20552.3.patch > > > To get the schema of a query faster, it currently needs to compile, optimize, > and generate a TezPlan, which creates extra overhead when only the > LogicalPlan is needed. > 1. Copy the method \{{HiveMaterializedViewsRegistry.parseQuery}}, making it > \{{public static}} and putting it in a utility class. > 2. Change the return statement of the method to \{{return > analyzer.getResultSchema();}} > 3. Change the return type of the method to \{{List}} > 4. 
Call the new method from \{{GenericUDTFGetSplits.createPlanFragment}} > replacing the current code which does this: > {code} > if(num == 0) { > //Schema only > return new PlanFragment(null, schema, null); > } > {code} > moving the call earlier in \{{getPlanFragment}} ... right after the HiveConf > is created ... bypassing the code that uses \{{HiveTxnManager}} and > \{{Driver}}. > 5. Convert the \{{List}} to > \{{org.apache.hadoop.hive.llap.Schema}}. > 6. return from \{{getPlanFragment}} by returning \{{new PlanFragment(null, > schema, null)}} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
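Step 5 in the list above, converting the analyzer's result schema into an LLAP Schema, is a field-by-field mapping. This is a schematic, self-contained sketch using simplified stand-in types; Hive's real FieldSchema and org.apache.hadoop.hive.llap.Schema classes carry more detail (type descriptors, comments) than shown here.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Simplified stand-ins for FieldSchema and the LLAP Schema, illustrating
// that the conversion needs only the logical plan's result schema - no
// Tez plan has to be generated for it.
public class SchemaConversion {
    static class Field {
        final String name;
        final String type;
        Field(String name, String type) { this.name = name; this.type = type; }
    }

    static class Schema {
        final List<Field> fields;
        Schema(List<Field> fields) { this.fields = fields; }
    }

    // Maps the analyzer's result schema field-by-field onto the Schema.
    static Schema fromResultSchema(List<Field> resultSchema) {
        return new Schema(new ArrayList<>(resultSchema));
    }

    public static void main(String[] args) {
        Schema s = fromResultSchema(Arrays.asList(
                new Field("user", "string"),
                new Field("score", "int")));
        System.out.println(s.fields.size());
    }
}
```

In the proposal above, this conversion would run right after semantic analysis in `getPlanFragment`, so the `PlanFragment(null, schema, null)` early return never touches the transaction manager or the Driver.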
[jira] [Commented] (HIVE-20599) CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException
[ https://issues.apache.org/jira/browse/HIVE-20599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621166#comment-16621166 ] Prasanth Jayachandran commented on HIVE-20599: -- lgtm, +1. pending tests > CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException > --- > > Key: HIVE-20599 > URL: https://issues.apache.org/jira/browse/HIVE-20599 > Project: Hive > Issue Type: Bug > Components: UDF >Affects Versions: 3.1.0 >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-20599.1-branch-3.1.patch, HIVE-20599.1.patch > > > SELECT CAST(from_utc_timestamp(timestamp '2018-05-02 15:30:30', 'PST') - > from_utc_timestamp(timestamp '1970-01-30 16:00:00', 'PST') AS STRING); > throws below Exception > {code:java} > Error: Error while compiling statement: FAILED: SemanticException Line 0:-1 > Wrong arguments ''PST'': No matching method for class > org.apache.hadoop.hive.ql.udf.UDFToString with (interval_day_time). Possible > choices: _FUNC_(bigint) _FUNC_(binary) _FUNC_(boolean) _FUNC_(date) > _FUNC_(decimal(38,18)) _FUNC_(double) _FUNC_(float) _FUNC_(int) > _FUNC_(smallint) _FUNC_(string) _FUNC_(timestamp) _FUNC_(tinyint) > _FUNC_(void) (state=42000,code=4){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18767) Some alterPartitions invocations throw 'NumberFormatException: null'
[ https://issues.apache.org/jira/browse/HIVE-18767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mass Dosage updated HIVE-18767: --- Release Note: Resubmitting patch as HIVE-18767.3-branch-3.1.patch in order to trigger build. Target Version/s: 3.1.0, 2.3.3, 4.0.0, 3.2.0 (was: 2.3.3, 3.1.0, 4.0.0, 3.2.0) Status: Patch Available (was: In Progress) > Some alterPartitions invocations throw 'NumberFormatException: null' > > > Key: HIVE-18767 > URL: https://issues.apache.org/jira/browse/HIVE-18767 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.1.0, 2.3.3, 4.0.0, 3.2.0 >Reporter: Yuming Wang >Assignee: Mass Dosage >Priority: Major > Fix For: 2.3.3, 4.0.0 > > Attachments: HIVE-18767-branch-2.3.patch, HIVE-18767-branch-2.patch, > HIVE-18767-branch-3.1.patch, HIVE-18767-branch-3.patch, HIVE-18767.1.patch, > HIVE-18767.2-branch-2.3.patch, HIVE-18767.2-branch-2.patch, > HIVE-18767.2-branch-3.1.patch, HIVE-18767.2.patch, > HIVE-18767.3-branch-3.1.patch, HIVE-18767.3.patch, HIVE-18767.4.patch, > HIVE-18767.5.patch, HIVE-18767.6.patch > > > Error messages: > {noformat} > [info] Cause: java.lang.NumberFormatException: null > [info] at java.lang.Long.parseLong(Long.java:552) > [info] at java.lang.Long.parseLong(Long.java:631) > [info] at > org.apache.hadoop.hive.metastore.MetaStoreUtils.isFastStatsSame(MetaStoreUtils.java:315) > [info] at > org.apache.hadoop.hive.metastore.HiveAlterHandler.alterPartitions(HiveAlterHandler.java:605) > [info] at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_partitions_with_environment_context(HiveMetaStore.java:3837) > [info] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > [info] at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > [info] at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > [info] at java.lang.reflect.Method.invoke(Method.java:498) > [info] at > 
org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148) > [info] at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107) > [info] at > com.sun.proxy.$Proxy23.alter_partitions_with_environment_context(Unknown > Source) > [info] at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_partitions(HiveMetaStoreClient.java:1527) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18767) Some alterPartitions invocations throw 'NumberFormatException: null'
[ https://issues.apache.org/jira/browse/HIVE-18767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mass Dosage updated HIVE-18767: --- Target Version/s: 3.1.0, 2.3.3, 4.0.0, 3.2.0 (was: 2.3.3, 3.1.0, 4.0.0, 3.2.0) Status: In Progress (was: Patch Available) > Some alterPartitions invocations throw 'NumberFormatException: null' > > > Key: HIVE-18767 > URL: https://issues.apache.org/jira/browse/HIVE-18767 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.1.0, 2.3.3, 4.0.0, 3.2.0 >Reporter: Yuming Wang >Assignee: Mass Dosage >Priority: Major > Fix For: 2.3.3, 4.0.0 > > Attachments: HIVE-18767-branch-2.3.patch, HIVE-18767-branch-2.patch, > HIVE-18767-branch-3.1.patch, HIVE-18767-branch-3.patch, HIVE-18767.1.patch, > HIVE-18767.2-branch-2.3.patch, HIVE-18767.2-branch-2.patch, > HIVE-18767.2-branch-3.1.patch, HIVE-18767.2.patch, > HIVE-18767.3-branch-3.1.patch, HIVE-18767.3.patch, HIVE-18767.4.patch, > HIVE-18767.5.patch, HIVE-18767.6.patch > > > Error messages: > {noformat} > [info] Cause: java.lang.NumberFormatException: null > [info] at java.lang.Long.parseLong(Long.java:552) > [info] at java.lang.Long.parseLong(Long.java:631) > [info] at > org.apache.hadoop.hive.metastore.MetaStoreUtils.isFastStatsSame(MetaStoreUtils.java:315) > [info] at > org.apache.hadoop.hive.metastore.HiveAlterHandler.alterPartitions(HiveAlterHandler.java:605) > [info] at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_partitions_with_environment_context(HiveMetaStore.java:3837) > [info] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > [info] at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > [info] at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > [info] at java.lang.reflect.Method.invoke(Method.java:498) > [info] at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148) > [info] at > 
org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107) > [info] at > com.sun.proxy.$Proxy23.alter_partitions_with_environment_context(Unknown > Source) > [info] at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_partitions(HiveMetaStoreClient.java:1527) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18767) Some alterPartitions invocations throw 'NumberFormatException: null'
[ https://issues.apache.org/jira/browse/HIVE-18767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mass Dosage updated HIVE-18767: --- Attachment: HIVE-18767.3-branch-3.1.patch > Some alterPartitions invocations throw 'NumberFormatException: null' > > > Key: HIVE-18767 > URL: https://issues.apache.org/jira/browse/HIVE-18767 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 2.3.3, 3.1.0, 4.0.0, 3.2.0 >Reporter: Yuming Wang >Assignee: Mass Dosage >Priority: Major > Fix For: 2.3.3, 4.0.0 > > Attachments: HIVE-18767-branch-2.3.patch, HIVE-18767-branch-2.patch, > HIVE-18767-branch-3.1.patch, HIVE-18767-branch-3.patch, HIVE-18767.1.patch, > HIVE-18767.2-branch-2.3.patch, HIVE-18767.2-branch-2.patch, > HIVE-18767.2-branch-3.1.patch, HIVE-18767.2.patch, > HIVE-18767.3-branch-3.1.patch, HIVE-18767.3.patch, HIVE-18767.4.patch, > HIVE-18767.5.patch, HIVE-18767.6.patch > > > Error messages: > {noformat} > [info] Cause: java.lang.NumberFormatException: null > [info] at java.lang.Long.parseLong(Long.java:552) > [info] at java.lang.Long.parseLong(Long.java:631) > [info] at > org.apache.hadoop.hive.metastore.MetaStoreUtils.isFastStatsSame(MetaStoreUtils.java:315) > [info] at > org.apache.hadoop.hive.metastore.HiveAlterHandler.alterPartitions(HiveAlterHandler.java:605) > [info] at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_partitions_with_environment_context(HiveMetaStore.java:3837) > [info] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > [info] at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > [info] at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > [info] at java.lang.reflect.Method.invoke(Method.java:498) > [info] at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148) > [info] at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107) > 
[info] at > com.sun.proxy.$Proxy23.alter_partitions_with_environment_context(Unknown > Source) > [info] at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_partitions(HiveMetaStoreClient.java:1527) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-17043) Remove non unique columns from group by keys if not referenced later
[ https://issues.apache.org/jira/browse/HIVE-17043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-17043: --- Status: Patch Available (was: Open) > Remove non unique columns from group by keys if not referenced later > > > Key: HIVE-17043 > URL: https://issues.apache.org/jira/browse/HIVE-17043 > Project: Hive > Issue Type: Sub-task > Components: Logical Optimizer >Affects Versions: 3.0.0 >Reporter: Ashutosh Chauhan >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-17043.1.patch, HIVE-17043.2.patch > > > Group by keys may be a mix of unique (or primary) keys and regular columns. > In such cases the presence of regular columns won't alter the cardinality of the groups. > So, if the regular columns are not referenced later, they can be dropped from > the group by keys. Depending on the operator tree, this may, in the best case, result in > those columns not being read from disk at all. In the worst case, we still avoid > shuffling and sorting the regular columns from mapper to reducer, which > could be substantial CPU and network savings. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
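The cardinality argument in the description can be checked with a self-contained sketch (hypothetical data, not from the patch): grouping by a unique key plus a regular column produces exactly the same number of groups as grouping by the key alone, so the extra column adds shuffle cost without changing the result shape.

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class GroupKeyPruning {
    // A row with a unique key `pk` and a regular column `c` (hypothetical names).
    record Row(int pk, String c) {}

    // Number of distinct groups produced by GROUP BY over the given key.
    static <K> long groupCount(List<Row> rows, Function<Row, K> key) {
        return rows.stream().collect(Collectors.groupingBy(key)).size();
    }

    public static void main(String[] args) {
        List<Row> rows = List.of(new Row(1, "a"), new Row(2, "a"), new Row(3, "b"));
        // GROUP BY pk, c  vs  GROUP BY pk: same cardinality, because pk is unique.
        long both = groupCount(rows, r -> List.of(r.pk(), r.c()));
        long keyOnly = groupCount(rows, Row::pk);
        System.out.println(both == keyOnly); // prints true
    }
}
```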
[jira] [Updated] (HIVE-17043) Remove non unique columns from group by keys if not referenced later
[ https://issues.apache.org/jira/browse/HIVE-17043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-17043: --- Status: Open (was: Patch Available) > Remove non unique columns from group by keys if not referenced later > > > Key: HIVE-17043 > URL: https://issues.apache.org/jira/browse/HIVE-17043 > Project: Hive > Issue Type: Sub-task > Components: Logical Optimizer >Affects Versions: 3.0.0 >Reporter: Ashutosh Chauhan >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-17043.1.patch, HIVE-17043.2.patch > > > Group by keys may be a mix of unique (or primary) keys and regular columns. > In such cases the presence of regular columns won't alter the cardinality of the groups. > So, if the regular columns are not referenced later, they can be dropped from > the group by keys. Depending on the operator tree, this may, in the best case, result in > those columns not being read from disk at all. In the worst case, we still avoid > shuffling and sorting the regular columns from mapper to reducer, which > could be substantial CPU and network savings. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-20601) EnvironmentContext null in ALTER_PARTITION event in DbNotificationListener
[ https://issues.apache.org/jira/browse/HIVE-20601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharathkrishna Guruvayoor Murali reassigned HIVE-20601: --- > EnvironmentContext null in ALTER_PARTITION event in DbNotificationListener > -- > > Key: HIVE-20601 > URL: https://issues.apache.org/jira/browse/HIVE-20601 > Project: Hive > Issue Type: Improvement > Components: Metastore >Affects Versions: 3.0.0, 4.0.0 >Reporter: Bharathkrishna Guruvayoor Murali >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > > Cause : EnvironmentContext not passed here: > [https://github.com/apache/hive/blob/36c33ca066c99dfdb21223a711c0c3f33c85b943/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java#L726] > > It will be useful to have the environmentContext passed to > DbNotificationListener in this case, to know if the alter happened due to a > stat change. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-17043) Remove non unique columns from group by keys if not referenced later
[ https://issues.apache.org/jira/browse/HIVE-17043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-17043: --- Attachment: HIVE-17043.2.patch > Remove non unique columns from group by keys if not referenced later > > > Key: HIVE-17043 > URL: https://issues.apache.org/jira/browse/HIVE-17043 > Project: Hive > Issue Type: Sub-task > Components: Logical Optimizer >Affects Versions: 3.0.0 >Reporter: Ashutosh Chauhan >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-17043.1.patch, HIVE-17043.2.patch > > > Group by keys may be a mix of unique (or primary) keys and regular columns. > In such cases the presence of regular columns won't alter the cardinality of the groups. > So, if the regular columns are not referenced later, they can be dropped from > the group by keys. Depending on the operator tree, this may, in the best case, result in > those columns not being read from disk at all. In the worst case, we still avoid > shuffling and sorting the regular columns from mapper to reducer, which > could be substantial CPU and network savings. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20552) Get Schema from LogicalPlan faster
[ https://issues.apache.org/jira/browse/HIVE-20552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621120#comment-16621120 ] Hive QA commented on HIVE-20552: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 25s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 4s{color} | {color:blue} ql in master has 2326 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} ql: The patch generated 0 new + 300 unchanged - 3 fixed = 300 total (was 303) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 24m 38s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13912/dev-support/hive-personality.sh | | git revision | master / 487714a | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13912/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Get Schema from LogicalPlan faster > -- > > Key: HIVE-20552 > URL: https://issues.apache.org/jira/browse/HIVE-20552 > Project: Hive > Issue Type: Improvement >Reporter: Teddy Choi >Assignee: Teddy Choi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-20552.1.patch, HIVE-20552.2.patch, > HIVE-20552.3.patch > > > To get the schema of a query faster, it currently needs to compile, optimize, > and generate a TezPlan, which creates extra overhead when only the > LogicalPlan is needed. > 1. Copy the method \{{HiveMaterializedViewsRegistry.parseQuery}}, making it > \{{public static}} and putting it in a utility class. > 2. Change the return statement of the method to \{{return > analyzer.getResultSchema();}} > 3. 
Change the return type of the method to \{{List}} > 4. Call the new method from \{{GenericUDTFGetSplits.createPlanFragment}} > replacing the current code which does this: > {code} > if(num == 0) { > //Schema only > return new PlanFragment(null, schema, null); > } > {code} > moving the call earlier in \{{getPlanFragment}} ... right after the HiveConf > is created ... bypassing the code that uses \{{HiveTxnManager}} and > \{{Driver}}. > 5. Convert the \{{List}} to > \{{org.apache.hadoop.hive.llap.Schema}}. > 6. return from \{{getPlanFragment}} by returning \{{new PlanFragment(null, > schema, null)}} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20593) Load Data for partitioned ACID tables fails with bucketId out of range: -1
[ https://issues.apache.org/jira/browse/HIVE-20593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621089#comment-16621089 ] Hive QA commented on HIVE-20593: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12940376/HIVE-20593.1.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 14980 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[infer_bucket_sort_map_operators] (batchId=189) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query75] (batchId=264) org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testKillQuery (batchId=251) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13911/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13911/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13911/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 3 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12940376 - PreCommit-HIVE-Build > Load Data for partitioned ACID tables fails with bucketId out of range: -1 > -- > > Key: HIVE-20593 > URL: https://issues.apache.org/jira/browse/HIVE-20593 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.1.0 >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-20593.1.patch > > > Load data for ACID tables is failing to load ORC files when it is converted > to IAS job. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (HIVE-19814) RPC Server port is always random for spark
[ https://issues.apache.org/jira/browse/HIVE-19814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621063#comment-16621063 ] bounkong khamphousone edited comment on HIVE-19814 at 9/19/18 7:11 PM: --- Hi, thanks for the fix. I would like to know why this contribution hasn't been merged to Hive 3.x and 2.x? Thank you all! was (Author: tiboun): Hi, thanks for the fix. I would like to know why this contribution hasn't been merged to Hive 3.x? Thank you all! > RPC Server port is always random for spark > -- > > Key: HIVE-19814 > URL: https://issues.apache.org/jira/browse/HIVE-19814 > Project: Hive > Issue Type: Bug > Components: Spark >Affects Versions: 2.3.0, 3.0.0, 2.4.0, 4.0.0 >Reporter: bounkong khamphousone >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-19814.1.patch, HIVE-19814.2.patch, > HIVE-19814.3.patch > > > The RPC server port is always a random one. In fact, the problem is in > RpcConfiguration.HIVE_SPARK_RSC_CONFIGS, which doesn't include > SPARK_RPC_SERVER_PORT. > > I've found this issue while trying to make hive-on-spark run inside > docker. > > HIVE_SPARK_RSC_CONFIGS is called by HiveSparkClientFactory.initiateSparkConf > > SparkSessionManagerImpl.setup, and the latter calls SparkClientFactory.initialize(conf), which initializes the RPC server. This RpcServer is then used to create the sparkClient, which uses the RPC server port as the --remote-port arg. Since initiateSparkConf ignores SPARK_RPC_SERVER_PORT, it will always be a random port. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19814) RPC Server port is always random for spark
[ https://issues.apache.org/jira/browse/HIVE-19814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621063#comment-16621063 ] bounkong khamphousone commented on HIVE-19814: -- Hi, thanks for the fix. I would like to know why this contribution hasn't been merged to Hive 3.x? Thank you all! > RPC Server port is always random for spark > -- > > Key: HIVE-19814 > URL: https://issues.apache.org/jira/browse/HIVE-19814 > Project: Hive > Issue Type: Bug > Components: Spark >Affects Versions: 2.3.0, 3.0.0, 2.4.0, 4.0.0 >Reporter: bounkong khamphousone >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-19814.1.patch, HIVE-19814.2.patch, > HIVE-19814.3.patch > > > The RPC server port is always a random one. In fact, the problem is in > RpcConfiguration.HIVE_SPARK_RSC_CONFIGS, which doesn't include > SPARK_RPC_SERVER_PORT. > > I've found this issue while trying to make hive-on-spark run inside > docker. > > HIVE_SPARK_RSC_CONFIGS is called by HiveSparkClientFactory.initiateSparkConf > > SparkSessionManagerImpl.setup, and the latter calls > SparkClientFactory.initialize(conf), which initializes the RPC server. This > RpcServer is then used to create the sparkClient, which uses the RPC server > port as the --remote-port arg. Since initiateSparkConf ignores > SPARK_RPC_SERVER_PORT, it will always be a random port. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
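The root cause described above is a classic allow-list omission: configuration entries are forwarded to the remote process only if their key appears in a fixed set, so any key missing from that set (here SPARK_RPC_SERVER_PORT) is silently dropped and the server falls back to a random port. A minimal sketch of that failure mode, with hypothetical key names rather than the actual Hive constants:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class ConfigAllowList {
    // Keys forwarded to the remote process (the analogue of
    // HIVE_SPARK_RSC_CONFIGS); note that the port key is missing.
    static final Set<String> FORWARDED = Set.of("rpc.server.address", "rpc.max.size");

    // Copy only the allow-listed entries out of the full configuration.
    static Map<String, String> forward(Map<String, String> conf) {
        Map<String, String> out = new HashMap<>();
        for (Map.Entry<String, String> e : conf.entrySet()) {
            if (FORWARDED.contains(e.getKey())) {
                out.put(e.getKey(), e.getValue());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> conf = Map.of(
                "rpc.server.address", "0.0.0.0",
                "rpc.server.port", "45678"); // the user pinned a port...
        Map<String, String> sent = forward(conf);
        // ...but it never arrives, so the remote side picks a random port.
        System.out.println(sent.containsKey("rpc.server.port")); // prints false
    }
}
```

The fix pattern is simply adding the missing key to the forwarded set, which matches the issue's diagnosis that HIVE_SPARK_RSC_CONFIGS should include SPARK_RPC_SERVER_PORT.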
[jira] [Commented] (HIVE-20593) Load Data for partitioned ACID tables fails with bucketId out of range: -1
[ https://issues.apache.org/jira/browse/HIVE-20593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621061#comment-16621061 ] Hive QA commented on HIVE-20593: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 40s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 57s{color} | {color:green} master passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 3m 43s{color} | {color:red} root in master failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 1s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 6s{color} | {color:blue} ql in master has 2326 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 8m 3s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 14s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 3m 41s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 3m 40s{color} | {color:red} root in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 7m 54s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 53m 51s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13911/dev-support/hive-personality.sh | | git revision | master / 487714a | | Default Java | 1.8.0_111 | | compile | http://104.198.109.242/logs//PreCommit-HIVE-Build-13911/yetus/branch-compile-root.txt | | findbugs | v3.0.0 | | compile | http://104.198.109.242/logs//PreCommit-HIVE-Build-13911/yetus/patch-compile-root.txt | | javac | http://104.198.109.242/logs//PreCommit-HIVE-Build-13911/yetus/patch-compile-root.txt | | modules | C: . ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13911/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> Load Data for partitioned ACID tables fails with bucketId out of range: -1 > -- > > Key: HIVE-20593 > URL: https://issues.apache.org/jira/browse/HIVE-20593 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.1.0 >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-20593.1.patch > > > Load data for ACID tables is failing to load ORC files when it is converted > to IAS job. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-20095) Fix jdbc external table feature
[ https://issues.apache.org/jira/browse/HIVE-20095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez reassigned HIVE-20095: -- Assignee: Jonathan Doron (was: Jesus Camacho Rodriguez) > Fix jdbc external table feature > --- > > Key: HIVE-20095 > URL: https://issues.apache.org/jira/browse/HIVE-20095 > Project: Hive > Issue Type: Bug >Reporter: Jonathan Doron >Assignee: Jonathan Doron >Priority: Major > Attachments: HIVE-20095.1.patch, HIVE-20095.2.patch, > HIVE-20095.3.patch, HIVE-20095.4.patch, HIVE-20095.5.patch, > HIVE-20095.6.patch, HIVE-20095.7.patch, HIVE-20095.7.patch, > HIVE-20095.8.patch, HIVE-20095.8.patch > > > It seems like the committed code for HIVE-19161 > (7584b3276bebf64aa006eaa162c0a6264d8fcb56) reverted some of HIVE-18423 > updates, and therefore some of the external table queries are not working > correctly. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-20095) Fix jdbc external table feature
[ https://issues.apache.org/jira/browse/HIVE-20095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez reassigned HIVE-20095: -- Assignee: Jesus Camacho Rodriguez (was: Jonathan Doron) > Fix jdbc external table feature > --- > > Key: HIVE-20095 > URL: https://issues.apache.org/jira/browse/HIVE-20095 > Project: Hive > Issue Type: Bug >Reporter: Jonathan Doron >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-20095.1.patch, HIVE-20095.2.patch, > HIVE-20095.3.patch, HIVE-20095.4.patch, HIVE-20095.5.patch, > HIVE-20095.6.patch, HIVE-20095.7.patch, HIVE-20095.7.patch, > HIVE-20095.8.patch, HIVE-20095.8.patch > > > It seems like the committed code for HIVE-19161 > (7584b3276bebf64aa006eaa162c0a6264d8fcb56) reverted some of HIVE-18423 > updates, and therefore some of the external table queries are not working > correctly. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20095) Fix jdbc external table feature
[ https://issues.apache.org/jira/browse/HIVE-20095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-20095: --- Attachment: HIVE-20095.8.patch > Fix jdbc external table feature > --- > > Key: HIVE-20095 > URL: https://issues.apache.org/jira/browse/HIVE-20095 > Project: Hive > Issue Type: Bug >Reporter: Jonathan Doron >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-20095.1.patch, HIVE-20095.2.patch, > HIVE-20095.3.patch, HIVE-20095.4.patch, HIVE-20095.5.patch, > HIVE-20095.6.patch, HIVE-20095.7.patch, HIVE-20095.7.patch, > HIVE-20095.8.patch, HIVE-20095.8.patch > > > It seems like the committed code for HIVE-19161 > (7584b3276bebf64aa006eaa162c0a6264d8fcb56) reverted some of HIVE-18423 > updates, and therefore some of the external table queries are not working > correctly. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20575) Fix flaky connection metric tests
[ https://issues.apache.org/jira/browse/HIVE-20575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Laszlo Pinter updated HIVE-20575: - Attachment: HIVE-20575.05.patch > Fix flaky connection metric tests > - > > Key: HIVE-20575 > URL: https://issues.apache.org/jira/browse/HIVE-20575 > Project: Hive > Issue Type: Test > Components: Hive, Test >Affects Versions: 4.0.0 >Reporter: Laszlo Pinter >Assignee: Laszlo Pinter >Priority: Minor > Fix For: 4.0.0 > > Attachments: HIVE-20575.01.patch, HIVE-20575.02.patch, > HIVE-20575.03.patch, HIVE-20575.04.patch, HIVE-20575.05.patch > > > TestHs2ConnectionMetricsHttp.testOpenConnectionMetrics() is flaky. We need to > fix it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19166) TestMiniLlapLocalCliDriver sysdb failure
[ https://issues.apache.org/jira/browse/HIVE-19166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621010#comment-16621010 ] Daniel Dai commented on HIVE-19166: --- Committed to master. Further testing for branch-3. > TestMiniLlapLocalCliDriver sysdb failure > > > Key: HIVE-19166 > URL: https://issues.apache.org/jira/browse/HIVE-19166 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Vineet Garg >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-19166.04.patch, HIVE-19166.05.patch, > HIVE-19166.06.patch, HIVE-19166.09.patch, HIVE-19166.1.patch, > HIVE-19166.10.patch, HIVE-19166.11.patch, HIVE-19166.12.patch, > HIVE-19166.13.patch, HIVE-19166.14.patch, HIVE-19166.15.patch, > HIVE-19166.16.patch, HIVE-19166.17.patch, HIVE-19166.18.patch, > HIVE-19166.19.patch, HIVE-19166.2.patch, HIVE-19166.20.branch-3.patch, > HIVE-19166.20.patch, HIVE-19166.3.patch > > > Broken by HIVE-18715 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19166) TestMiniLlapLocalCliDriver sysdb failure
[ https://issues.apache.org/jira/browse/HIVE-19166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-19166: -- Attachment: HIVE-19166.20.branch-3.patch
[jira] [Updated] (HIVE-19166) TestMiniLlapLocalCliDriver sysdb failure
[ https://issues.apache.org/jira/browse/HIVE-19166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-19166: -- Target Version/s: 4.0.0, 3.2.0 (was: 3.0.0, 3.1.0)
[jira] [Updated] (HIVE-20189) Separate metastore client code into its own module
[ https://issues.apache.org/jira/browse/HIVE-20189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Kolbasov updated HIVE-20189: -- Attachment: HIVE-20189.02.patch > Separate metastore client code into its own module > -- > > Key: HIVE-20189 > URL: https://issues.apache.org/jira/browse/HIVE-20189 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore >Affects Versions: 4.0.0, 3.2.0 >Reporter: Alexander Kolbasov >Assignee: Alexander Kolbasov >Priority: Major > Attachments: HIVE-20189.01.patch, HIVE-20189.02.patch > > > The goal of this JIRA is to split HiveMetastoreClient code out of > metastore-common. This is a pom-only change that does not require any changes > in the code. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17300) WebUI query plan graphs
[ https://issues.apache.org/jira/browse/HIVE-17300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620978#comment-16620978 ] Hive QA commented on HIVE-17300: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12940375/HIVE-17300.8.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 14981 tests executed *Failed tests:* {noformat} org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testKillQuery (batchId=251) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13910/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13910/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13910/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12940375 - PreCommit-HIVE-Build > WebUI query plan graphs > --- > > Key: HIVE-17300 > URL: https://issues.apache.org/jira/browse/HIVE-17300 > Project: Hive > Issue Type: Sub-task > Components: Web UI >Affects Versions: 4.0.0 >Reporter: Karen Coppage >Assignee: Karen Coppage >Priority: Major > Labels: beginner, features, patch > Attachments: HIVE-17300.3.patch, HIVE-17300.4.patch, > HIVE-17300.5.patch, HIVE-17300.6.patch, HIVE-17300.7.patch, > HIVE-17300.7.patch, HIVE-17300.8.patch, HIVE-17300.8.patch, > HIVE-17300.8.patch, HIVE-17300.patch, complete_success.png, > full_mapred_stats.png, graph_with_mapred_stats.png, last_stage_error.png, > last_stage_running.png, non_mapred_task_selected.png > > > Hi all, > I’m working on a feature of the Hive WebUI Query Plan tab that would provide > the option to display the query plan as a nice graph (scroll down for > screenshots). If you click on one of the graph’s stages, the plan for that > stage appears as text below. > Stages are color-coded if they have a status (Success, Error, Running), and > the rest are grayed out. Coloring is based on status already available in the > WebUI, under the Stages tab. > There is an additional option to display stats for MapReduce tasks. This > includes the job’s ID, tracking URL (where the logs are found), and mapper > and reducer numbers/progress, among other info. > The library I’m using for the graph is called vis.js (http://visjs.org/). It > has an Apache license, and the only necessary file to be included from this > library is about 700 KB. > I tried to keep server-side changes minimal, and graph generation is taken > care of by the client. Plans with more than a given number of stages > (default: 25) won't be displayed in order to preserve resources. > I’d love to hear any and all input from the community about this feature: do > you think it’s useful, and is there anything important I’m missing? 
> Thanks, > Karen Coppage > Review request: https://reviews.apache.org/r/61663/ > Any input is welcome!
[jira] [Updated] (HIVE-20538) Allow to store a key value together with a transaction.
[ https://issues.apache.org/jira/browse/HIVE-20538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jaume M updated HIVE-20538: --- Attachment: HIVE-20538.4.patch Status: Patch Available (was: Open) > Allow to store a key value together with a transaction. > --- > > Key: HIVE-20538 > URL: https://issues.apache.org/jira/browse/HIVE-20538 > Project: Hive > Issue Type: New Feature > Components: Standalone Metastore, Transactions >Reporter: Jaume M >Assignee: Jaume M >Priority: Major > Attachments: HIVE-20538.1.patch, HIVE-20538.1.patch, > HIVE-20538.2.patch, HIVE-20538.3.patch, HIVE-20538.4.patch > > > This can be useful for example to know if a transaction has already happened. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20538) Allow to store a key value together with a transaction.
[ https://issues.apache.org/jira/browse/HIVE-20538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jaume M updated HIVE-20538: --- Status: Open (was: Patch Available)
[jira] [Updated] (HIVE-20593) Load Data for partitioned ACID tables fails with bucketId out of range: -1
[ https://issues.apache.org/jira/browse/HIVE-20593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-20593: -- Affects Version/s: 3.1.0 > Load Data for partitioned ACID tables fails with bucketId out of range: -1 > -- > > Key: HIVE-20593 > URL: https://issues.apache.org/jira/browse/HIVE-20593 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.1.0 >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-20593.1.patch > > > Load data for ACID tables is failing to load ORC files when it is converted > to IAS job. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20593) Load Data for partitioned ACID tables fails with bucketId out of range: -1
[ https://issues.apache.org/jira/browse/HIVE-20593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-20593: -- Component/s: Transactions
[jira] [Commented] (HIVE-17300) WebUI query plan graphs
[ https://issues.apache.org/jira/browse/HIVE-17300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620929#comment-16620929 ] Hive QA commented on HIVE-17300: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 46s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 42s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 30s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 26s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 34s{color} | {color:blue} common in master has 65 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 41s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 1s{color} | {color:blue} ql in master has 2326 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 39s{color} | {color:blue} service in master has 48 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 36m 58s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13910/dev-support/hive-personality.sh | | git revision | master / ce36c43 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: common itests/hive-unit ql service U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13910/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated.
[jira] [Comment Edited] (HIVE-20267) Expanding WebUI to include form to dynamically config log levels
[ https://issues.apache.org/jira/browse/HIVE-20267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620924#comment-16620924 ] Prasanth Jayachandran edited comment on HIVE-20267 at 9/19/18 5:32 PM: --- [~zchovan] could you provide your email to which I can attribute your contribution to? I can only see your username in jira. was (Author: prasanth_j): [~zchovan] could you provide your email to which I can attribute your contribution to? > Expanding WebUI to include form to dynamically config log levels > - > > Key: HIVE-20267 > URL: https://issues.apache.org/jira/browse/HIVE-20267 > Project: Hive > Issue Type: Improvement >Affects Versions: 4.0.0, 3.2.0 >Reporter: Zoltan Chovan >Assignee: Zoltan Chovan >Priority: Minor > Attachments: HIVE-20267.1.patch, HIVE-20267.2.patch, > HIVE-20267.3.patch, HIVE-20267.4.patch, HIVE-20267.5.patch > > > Expanding the possibility to change the log levels during runtime, the webUI > can be extended to interact with the Log4j2ConfiguratorServlet, this way it > can be directly used and users/admins don't need to execute curl commands > from commandline. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20267) Expanding WebUI to include form to dynamically config log levels
[ https://issues.apache.org/jira/browse/HIVE-20267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620924#comment-16620924 ] Prasanth Jayachandran commented on HIVE-20267: -- [~zchovan] could you provide your email to which I can attribute your contribution to?
[jira] [Commented] (HIVE-20600) Metastore connection leak
[ https://issues.apache.org/jira/browse/HIVE-20600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620915#comment-16620915 ] Damon Cortesi commented on HIVE-20600: -- Attached proposed patch for Hive 2.3.3. > Metastore connection leak > - > > Key: HIVE-20600 > URL: https://issues.apache.org/jira/browse/HIVE-20600 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 2.3.3 >Reporter: Damon Cortesi >Priority: Major > Attachments: HIVE-20600.patch, consume_threads.py > > > Within the execute method of HiveServer2, there appears to be a connection > leak. With fairly straightforward series of INSERT statements, the connection > count in the logs continues to increase over time. Under certain loads, this > can also consume all underlying threads of the Hive metastore and result in > HS2 becoming unresponsive to new connections. > The log below is the result of some python code executing a single insert > statement, and then looping through a series of 10 more insert statements. We > can see there's one dangling connection left open after each execution > leaving us with 12 open connections (11 from the execute statements + 1 from > HS2 startup). 
> {code} > 2018-09-19T17:14:32,108 INFO [main([])]: hive.metastore > (HiveMetaStoreClient.java:open(481)) - Opened a connection to metastore, > current connections: 1 > 2018-09-19T17:14:48,175 INFO [29049f74-73c4-4f48-9cf7-b4bfe524a85b > HiveServer2-Handler-Pool: Thread-31([])]: hive.metastore > (HiveMetaStoreClient.java:open(481)) - Opened a connection to metastore, > current connections: 2 > 2018-09-19T17:15:05,543 INFO [HiveServer2-Background-Pool: Thread-36([])]: > hive.metastore (HiveMetaStoreClient.java:close(564)) - Closed a connection to > metastore, current connections: 1 > 2018-09-19T17:15:05,548 INFO [HiveServer2-Background-Pool: Thread-36([])]: > hive.metastore (HiveMetaStoreClient.java:open(481)) - Opened a connection to > metastore, current connections: 2 > 2018-09-19T17:15:05,932 INFO [HiveServer2-Background-Pool: Thread-36([])]: > hive.metastore (HiveMetaStoreClient.java:close(564)) - Closed a connection to > metastore, current connections: 1 > 2018-09-19T17:15:05,935 INFO [HiveServer2-Background-Pool: Thread-36([])]: > hive.metastore (HiveMetaStoreClient.java:open(481)) - Opened a connection to > metastore, current connections: 2 > 2018-09-19T17:15:06,123 INFO [HiveServer2-Background-Pool: Thread-36([])]: > hive.metastore (HiveMetaStoreClient.java:close(564)) - Closed a connection to > metastore, current connections: 1 > 2018-09-19T17:15:06,126 INFO [HiveServer2-Background-Pool: Thread-36([])]: > hive.metastore (HiveMetaStoreClient.java:open(481)) - Opened a connection to > metastore, current connections: 2 > ... 
> 2018-09-19T17:15:20,626 INFO [29049f74-73c4-4f48-9cf7-b4bfe524a85b > HiveServer2-Handler-Pool: Thread-31([])]: hive.metastore > (HiveMetaStoreClient.java:open(481)) - Opened a connection to metastore, > current connections: 12 > 2018-09-19T17:15:21,153 INFO [HiveServer2-Background-Pool: Thread-162([])]: > hive.metastore (HiveMetaStoreClient.java:close(564)) - Closed a connection to > metastore, current connections: 11 > 2018-09-19T17:15:21,155 INFO [HiveServer2-Background-Pool: Thread-162([])]: > hive.metastore (HiveMetaStoreClient.java:open(481)) - Opened a connection to > metastore, current connections: 12 > 2018-09-19T17:15:21,306 INFO [HiveServer2-Background-Pool: Thread-162([])]: > hive.metastore (HiveMetaStoreClient.java:close(564)) - Closed a connection to > metastore, current connections: 11 > 2018-09-19T17:15:21,308 INFO [HiveServer2-Background-Pool: Thread-162([])]: > hive.metastore (HiveMetaStoreClient.java:open(481)) - Opened a connection to > metastore, current connections: 12 > 2018-09-19T17:15:21,385 INFO [HiveServer2-Background-Pool: Thread-162([])]: > hive.metastore (HiveMetaStoreClient.java:close(564)) - Closed a connection to > metastore, current connections: 11 > 2018-09-19T17:15:21,387 INFO [HiveServer2-Background-Pool: Thread-162([])]: > hive.metastore (HiveMetaStoreClient.java:open(481)) - Opened a connection to > metastore, current connections: 12 > 2018-09-19T17:15:21,541 INFO [HiveServer2-Handler-Pool: Thread-31([])]: > hive.metastore (HiveMetaStoreClient.java:open(481)) - Opened a connection to > metastore, current connections: 13 > 2018-09-19T17:15:21,542 INFO [HiveServer2-Handler-Pool: Thread-31([])]: > hive.metastore (HiveMetaStoreClient.java:close(564)) - Closed a connection to > metastore, current connections: 12 > {code} > Attached is a simple [impyla|https://github.com/cloudera/impyla] script that > triggers the condition. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
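The attached consume_threads.py script itself is not reproduced in this thread. As a rough illustration only, a minimal impyla-based loop of the kind the report describes might look like the sketch below; the table name, host, port, and auth mechanism are placeholders, not taken from the actual attachment:

```python
from typing import List


def insert_statements(n: int, table: str = "t") -> List[str]:
    # One initial INSERT plus n more in a loop, mirroring the
    # "single insert, then 10 more" pattern described in the report.
    return ["INSERT INTO {} VALUES ({})".format(table, i) for i in range(n + 1)]


def run(host: str = "localhost", port: int = 10000) -> None:
    # Requires a live HiveServer2; uses impyla's DB-API interface.
    from impala.dbapi import connect

    conn = connect(host=host, port=port, auth_mechanism="PLAIN")
    cur = conn.cursor()
    for stmt in insert_statements(10):
        cur.execute(stmt)
```

While such a loop runs, the "current connections: N" counter in the HS2 log (as in the excerpt above) would be expected to climb by one per executed statement if the leak is present.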
[jira] [Updated] (HIVE-20600) Metastore connection leak
[ https://issues.apache.org/jira/browse/HIVE-20600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Damon Cortesi updated HIVE-20600: - Attachment: HIVE-20600.patch
[jira] [Updated] (HIVE-20599) CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException
[ https://issues.apache.org/jira/browse/HIVE-20599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-20599: -- Attachment: HIVE-20599.1.patch > CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException > --- > > Key: HIVE-20599 > URL: https://issues.apache.org/jira/browse/HIVE-20599 > Project: Hive > Issue Type: Bug > Components: UDF >Affects Versions: 3.1.0 >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-20599.1-branch-3.1.patch, HIVE-20599.1.patch > > > SELECT CAST(from_utc_timestamp(timestamp '2018-05-02 15:30:30', 'PST') - > from_utc_timestamp(timestamp '1970-01-30 16:00:00', 'PST') AS STRING); > throws below Exception > {code:java} > Error: Error while compiling statement: FAILED: SemanticException Line 0:-1 > Wrong arguments ''PST'': No matching method for class > org.apache.hadoop.hive.ql.udf.UDFToString with (interval_day_time). Possible > choices: _FUNC_(bigint) _FUNC_(binary) _FUNC_(boolean) _FUNC_(date) > _FUNC_(decimal(38,18)) _FUNC_(double) _FUNC_(float) _FUNC_(int) > _FUNC_(smallint) _FUNC_(string) _FUNC_(timestamp) _FUNC_(tinyint) > _FUNC_(void) (state=42000,code=4){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work started] (HIVE-20599) CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException
[ https://issues.apache.org/jira/browse/HIVE-20599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-20599 started by Naresh P R.
[jira] [Updated] (HIVE-20599) CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException
[ https://issues.apache.org/jira/browse/HIVE-20599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-20599: -- Attachment: HIVE-20599.1-branch-3.1.patch Status: Patch Available (was: In Progress)
[jira] [Commented] (HIVE-20575) Fix flaky connection metric tests
[ https://issues.apache.org/jira/browse/HIVE-20575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620835#comment-16620835 ] Hive QA commented on HIVE-20575: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12940372/HIVE-20575.04.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 14980 tests executed *Failed tests:* {noformat} org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testComplexQuery (batchId=251) org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testDataTypes (batchId=251) org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testEscapedStrings (batchId=251) org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testLlapInputFormatEndToEnd (batchId=251) org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testNonAsciiStrings (batchId=251) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13909/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13909/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13909/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 5 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12940372 - PreCommit-HIVE-Build
[jira] [Assigned] (HIVE-20599) CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException
[ https://issues.apache.org/jira/browse/HIVE-20599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R reassigned HIVE-20599: - > CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException > --- > > Key: HIVE-20599 > URL: https://issues.apache.org/jira/browse/HIVE-20599 > Project: Hive > Issue Type: Bug > Components: UDF >Affects Versions: 3.1.0 >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > Fix For: 3.1.0 > > > SELECT CAST(from_utc_timestamp(timestamp '2018-05-02 15:30:30', 'PST') - > from_utc_timestamp(timestamp '1970-01-30 16:00:00', 'PST') AS STRING); > throws the exception below: > {code:java} > Error: Error while compiling statement: FAILED: SemanticException Line 0:-1 > Wrong arguments ''PST'': No matching method for class > org.apache.hadoop.hive.ql.udf.UDFToString with (interval_day_time). Possible > choices: _FUNC_(bigint) _FUNC_(binary) _FUNC_(boolean) _FUNC_(date) > _FUNC_(decimal(38,18)) _FUNC_(double) _FUNC_(float) _FUNC_(int) > _FUNC_(smallint) _FUNC_(string) _FUNC_(timestamp) _FUNC_(tinyint) > _FUNC_(void) (state=42000,code=4){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20531) One of the task , either move or add partition can be avoided in repl load flow
[ https://issues.apache.org/jira/browse/HIVE-20531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mahesh kumar behera updated HIVE-20531: --- Attachment: HIVE-20531.02.patch > One of the task , either move or add partition can be avoided in repl load > flow > --- > > Key: HIVE-20531 > URL: https://issues.apache.org/jira/browse/HIVE-20531 > Project: Hive > Issue Type: Sub-task > Components: repl >Affects Versions: 4.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-20531.01.patch, HIVE-20531.02.patch > > > In replication load, both add-partition and insert operations are handled > through import. Import creates three major tasks: copy, add partition, and > move. Copy copies the data from the source location to a staging directory. > Add partition (which runs in parallel to copy) creates the partition in the > metastore; it is a no-op in the case of insert, since by the time this DDL > task executes for an insert the partition is already present. The third task, > move, actually moves the files from the staging directory to the final > location, and in the case of insert it then adds the insert event to the > notification table. It does the same for the add-partition operation, which > is redundant, as the add-partition event has already been written by the DDL > task. With the optimization to copy directly to the actual table location in > S3, the move task can be avoided when replaying an add-partition operation, > and replaying an insert need not create the add-partition (DDL) task. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
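The task flow described in the issue can be modeled roughly as follows. This is a purely illustrative Python sketch, not Hive's actual task classes; the task names and the optimization flag are assumptions based on the description above.

```python
# Illustrative model of the repl-load import task flow described above.
# Without the optimization, import creates all three tasks for both
# operations; with it, add-partition replay drops the move task and
# insert replay drops the (no-op) add-partition DDL task.
def plan_import_tasks(operation, direct_copy_optimization=False):
    # copy: source data -> staging dir (or directly to the target location
    # when the optimization is in effect)
    tasks = ["copy"]
    if operation == "add_partition" or not direct_copy_optimization:
        # add-partition DDL: creates the partition and writes its event;
        # a no-op for insert replay, since the partition already exists
        tasks.append("add_partition_ddl")
    if operation == "insert" or not direct_copy_optimization:
        # move: staging -> final location; writes the insert event, plus a
        # redundant add-partition event in the add-partition case
        tasks.append("move")
    return tasks
```

With the optimization, `plan_import_tasks("add_partition", True)` yields only copy and the DDL task, while `plan_import_tasks("insert", True)` yields only copy and move.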
[jira] [Updated] (HIVE-20531) One of the task , either move or add partition can be avoided in repl load flow
[ https://issues.apache.org/jira/browse/HIVE-20531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mahesh kumar behera updated HIVE-20531: --- Status: Open (was: Patch Available) > One of the task , either move or add partition can be avoided in repl load > flow > --- > > Key: HIVE-20531 > URL: https://issues.apache.org/jira/browse/HIVE-20531 > Project: Hive > Issue Type: Sub-task > Components: repl >Affects Versions: 4.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-20531.01.patch, HIVE-20531.02.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20531) One of the task , either move or add partition can be avoided in repl load flow
[ https://issues.apache.org/jira/browse/HIVE-20531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mahesh kumar behera updated HIVE-20531: --- Status: Patch Available (was: Open) > One of the task , either move or add partition can be avoided in repl load > flow > --- > > Key: HIVE-20531 > URL: https://issues.apache.org/jira/browse/HIVE-20531 > Project: Hive > Issue Type: Sub-task > Components: repl >Affects Versions: 4.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-20531.01.patch, HIVE-20531.02.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20575) Fix flaky connection metric tests
[ https://issues.apache.org/jira/browse/HIVE-20575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620774#comment-16620774 ] Hive QA commented on HIVE-20575: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 41s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 42s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 14m 34s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13909/dev-support/hive-personality.sh | | git revision | master / ce36c43 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: itests/hive-unit U: itests/hive-unit | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13909/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Fix flaky connection metric tests > - > > Key: HIVE-20575 > URL: https://issues.apache.org/jira/browse/HIVE-20575 > Project: Hive > Issue Type: Test > Components: Hive, Test >Affects Versions: 4.0.0 >Reporter: Laszlo Pinter >Assignee: Laszlo Pinter >Priority: Minor > Fix For: 4.0.0 > > Attachments: HIVE-20575.01.patch, HIVE-20575.02.patch, > HIVE-20575.03.patch, HIVE-20575.04.patch > > > TestHs2ConnectionMetricsHttp.testOpenConnectionMetrics() is flaky. We need to > fix it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20556) Expose an API to retrieve the TBL_ID from TBLS in the metastore tables
[ https://issues.apache.org/jira/browse/HIVE-20556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jaume M updated HIVE-20556: --- Status: Open (was: Patch Available) > Expose an API to retrieve the TBL_ID from TBLS in the metastore tables > -- > > Key: HIVE-20556 > URL: https://issues.apache.org/jira/browse/HIVE-20556 > Project: Hive > Issue Type: New Feature > Components: Metastore, Standalone Metastore >Reporter: Jaume M >Assignee: Jaume M >Priority: Major > Attachments: HIVE-20556.1.patch, HIVE-20556.10.patch, > HIVE-20556.11.patch, HIVE-20556.2.patch, HIVE-20556.3.patch, > HIVE-20556.4.patch, HIVE-20556.5.patch, HIVE-20556.6.patch, > HIVE-20556.7.patch, HIVE-20556.8.patch, HIVE-20556.9.patch > > > We have two options to do this > 1) Use the current MTable and add a field for this value > 2) Add an independent API call to the metastore that would return the TBL_ID. > Option 1 is preferable. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20556) Expose an API to retrieve the TBL_ID from TBLS in the metastore tables
[ https://issues.apache.org/jira/browse/HIVE-20556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jaume M updated HIVE-20556: --- Attachment: HIVE-20556.11.patch Status: Patch Available (was: Open) > Expose an API to retrieve the TBL_ID from TBLS in the metastore tables > -- > > Key: HIVE-20556 > URL: https://issues.apache.org/jira/browse/HIVE-20556 > Project: Hive > Issue Type: New Feature > Components: Metastore, Standalone Metastore >Reporter: Jaume M >Assignee: Jaume M >Priority: Major > Attachments: HIVE-20556.1.patch, HIVE-20556.10.patch, > HIVE-20556.11.patch, HIVE-20556.2.patch, HIVE-20556.3.patch, > HIVE-20556.4.patch, HIVE-20556.5.patch, HIVE-20556.6.patch, > HIVE-20556.7.patch, HIVE-20556.8.patch, HIVE-20556.9.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20545) Exclude large-sized parameters from serialization of Table and Partition thrift objects in HMS notifications
[ https://issues.apache.org/jira/browse/HIVE-20545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620754#comment-16620754 ] Hive QA commented on HIVE-20545: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12940369/HIVE-20545.2.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 14979 tests executed *Failed tests:* {noformat} org.apache.hive.jdbc.TestJdbcDriver2.testSelectExecAsync2 (batchId=252) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13908/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13908/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13908/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12940369 - PreCommit-HIVE-Build > Exclude large-sized parameters from serialization of Table and Partition > thrift objects in HMS notifications > > > Key: HIVE-20545 > URL: https://issues.apache.org/jira/browse/HIVE-20545 > Project: Hive > Issue Type: Improvement >Affects Versions: 3.1.0, 4.0.0 >Reporter: Bharathkrishna Guruvayoor Murali >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-20545.1.patch, HIVE-20545.2.patch > > > Clients can add large-sized parameters in Table/Partition objects. So we need > to enable adding regex patterns through HiveConf to match parameters to be > filtered from table and partition objects before serialization in HMS > notifications. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
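The filtering HIVE-20545 describes can be sketched roughly as follows. This is a hypothetical Python model of the idea, not the actual HMS code; the function name and the full-match semantics are assumptions based on the issue description.

```python
import re

# Hypothetical sketch: drop table/partition parameters whose keys match any
# of the configured regex patterns, before the Thrift object is serialized
# into an HMS notification message.
def filter_parameters(parameters, exclude_patterns):
    compiled = [re.compile(p) for p in exclude_patterns]
    return {
        key: value
        for key, value in parameters.items()
        # assumption: a pattern must match the whole parameter key
        if not any(c.fullmatch(key) for c in compiled)
    }
```

For example, with an exclude pattern like `impala_intermediate_stats.*`, a large stats blob would be dropped while ordinary parameters such as `comment` survive serialization.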
[jira] [Commented] (HIVE-20536) Add Surrogate Keys function to Hive
[ https://issues.apache.org/jira/browse/HIVE-20536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620716#comment-16620716 ] Ashutosh Chauhan commented on HIVE-20536: - Negative values for surrogate keys are OK, since a surrogate key by definition has no semantic meaning; it's just an identifier. +1 pending tests. > Add Surrogate Keys function to Hive > --- > > Key: HIVE-20536 > URL: https://issues.apache.org/jira/browse/HIVE-20536 > Project: Hive > Issue Type: Task > Components: Hive >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Attachments: HIVE-20536.01.patch, HIVE-20536.02.patch, > HIVE-20536.03.patch, HIVE-20536.04.patch, HIVE-20536.05.patch, > HIVE-20536.06.patch, HIVE-20536.07.patch > > > Surrogate keys are the ability to generate and use a unique integer for each > row in a table. If we have that ability, then in conjunction with the default > clause we get surrogate-key functionality. Consider the following DDL: > create table t1 (a string, b bigint default unique_long()); > We already have the default clause, wherein you can specify a function to > provide values. So what we need is a UDF which can generate unique longs for > each row, across queries, for a table. > The idea is to use write_id. This is a column in the metastore table > TXN_COMPONENTS whose value is determined at compile time to be used during > query execution. Each query execution generates a new write_id, so we can > seed the UDF with this value during compilation. > Then we statically allocate ranges for each task from which it can draw the > next long. Say we divvy up the 64-bit write_id such that the last 24 bits > belong to its original usage, that is, txns; the next 16 bits are used for > task attempts; and the remaining 24 bits generate a new long for each row. > This implies we can allow 17M txns, 65K tasks and 17M rows in a task. If any > of those limits is hit, we can fail the query. > Implementation wise: serialize write_id in initialize() of the UDF. Then > during execute() we find out which task attempt the current task is and use > it along with write_id to get the starting long, returning a new value on > each invocation of execute(). > Here we are assuming write_id can be determined at compile time, which should > be the case, but we need to figure out how to get a handle to it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
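The 24/16/24 bit split described in the issue can be sketched in Python. This is a hypothetical model of the layout, not the actual Hive UDF; the field order (write_id high, task attempt middle, row counter low) and the function names are assumptions based on the description.

```python
# Hypothetical sketch of the 64-bit surrogate-key layout described above:
# 24 bits for the write_id (txn) portion, 16 bits for the task attempt,
# and 24 bits for the per-task row counter (24 + 16 + 24 = 64).
TXN_BITS, TASK_BITS, ROW_BITS = 24, 16, 24

def make_surrogate_key(write_id, task_attempt, row):
    # Fail when a field exceeds its allocated range, mirroring the
    # "fail the query" behavior the description proposes.
    if write_id >= 1 << TXN_BITS:
        raise ValueError("too many txns for %d bits" % TXN_BITS)
    if task_attempt >= 1 << TASK_BITS:
        raise ValueError("too many task attempts for %d bits" % TASK_BITS)
    if row >= 1 << ROW_BITS:
        raise ValueError("too many rows in a task for %d bits" % ROW_BITS)
    return (write_id << (TASK_BITS + ROW_BITS)) | (task_attempt << ROW_BITS) | row

def unpack_surrogate_key(key):
    row = key & ((1 << ROW_BITS) - 1)
    task_attempt = (key >> ROW_BITS) & ((1 << TASK_BITS) - 1)
    write_id = key >> (TASK_BITS + ROW_BITS)
    return write_id, task_attempt, row
```

Because each (write_id, task_attempt) pair owns a disjoint range of 2^24 longs, every task can hand out keys with no coordination beyond the compile-time write_id, which matches the limits quoted in the issue (roughly 17M txns, 65K task attempts, 17M rows per task).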
[jira] [Commented] (HIVE-20545) Exclude large-sized parameters from serialization of Table and Partition thrift objects in HMS notifications
[ https://issues.apache.org/jira/browse/HIVE-20545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620695#comment-16620695 ] Hive QA commented on HIVE-20545: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 20s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 35s{color} | {color:blue} standalone-metastore/metastore-common in master has 28 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 17s{color} | {color:red} metastore-server in master failed. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 14s{color} | {color:red} metastore-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 21m 11s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13908/dev-support/hive-personality.sh | | git revision | master / 9c90776 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-13908/yetus/branch-findbugs-standalone-metastore_metastore-server.txt | | findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-13908/yetus/patch-findbugs-standalone-metastore_metastore-server.txt | | modules | C: standalone-metastore/metastore-common standalone-metastore/metastore-server U: standalone-metastore | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13908/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Exclude large-sized parameters from serialization of Table and Partition > thrift objects in HMS notifications > > > Key: HIVE-20545 > URL: https://issues.apache.org/jira/browse/HIVE-20545 > Project: Hive > Issue Type: Improvement >Affects Versions: 3.1.0, 4.0.0 >Reporter: Bharathkrishna Guruvayoor Murali >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-20545.1.patch, HIVE-20545.2.patch > > > Clients can add large-sized parameters in Table/Partition objects. So we need > to enable adding regex patterns through HiveConf to match parameters to be > filtered from table and partition objects before serialization in HMS >