[jira] [Commented] (HIVE-13001) Hive pre-commits builds taking much longer than normal

2016-02-04 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133784#comment-15133784
 ] 

Sergey Shelukhin commented on HIVE-13001:
-

I think at some point not so long ago the time was 2:05-2:15...

> Hive pre-commits builds taking much longer than normal
> --
>
> Key: HIVE-13001
> URL: https://issues.apache.org/jira/browse/HIVE-13001
> Project: Hive
>  Issue Type: Test
>  Components: Hive
>Affects Versions: 1.3.0
>Reporter: Naveen Gangam
>Assignee: Sergio Peña
>
> http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6863/
> Build took 6+ hours to complete.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-12982) WebHCat Templeton Server throwing java.lang.RuntimeException

2016-02-04 Thread Devendra Vishwakarma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devendra Vishwakarma resolved HIVE-12982.
-
Resolution: Fixed

> WebHCat Templeton Server throwing java.lang.RuntimeException
> 
>
> Key: HIVE-12982
> URL: https://issues.apache.org/jira/browse/HIVE-12982
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Affects Versions: 1.2.1
> Environment: Hive version - 1.2.1
> Linux s390x architecture
>Reporter: Devendra Vishwakarma
>Assignee: Devendra Vishwakarma
>Priority: Blocker
>
> WebHCat Templeton Server is throwing 
> java.lang.RuntimeException: Could not load wadl generators from 
> wadlGeneratorDescriptions. 
> [root@abcde ~]# curl -s 
> 'http://:50111/templeton/v1/status?user.name='
> 
> 
> 
> Error 503 java.lang.RuntimeException: Could not load wadl generators 
> from wadlGeneratorDescriptions.
> 
> 
> HTTP ERROR: 503
> Problem accessing /templeton/v1/status. Reason:
>  java.lang.RuntimeException: Could not load wadl generators from 
> wadlGeneratorDescriptions.
> Powered by Jetty://
> 
> 
> Server logs -
> ===
> java.lang.RuntimeException: Could not load wadl generators from 
> wadlGeneratorDescriptions.
>   at 
> com.sun.jersey.api.wadl.config.WadlGeneratorConfig.createWadlGenerator(WadlGeneratorConfig.java:184)
>   at 
> com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.<init>(WadlApplicationContextImpl.java:92)
>   at com.sun.jersey.server.impl.wadl.WadlFactory.init(WadlFactory.java:96)
>   at 
> com.sun.jersey.server.impl.application.RootResourceUriRules.initWadl(RootResourceUriRules.java:169)
>   at 
> com.sun.jersey.server.impl.application.RootResourceUriRules.<init>(RootResourceUriRules.java:106)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._initiate(WebApplicationImpl.java:1300)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.access$700(WebApplicationImpl.java:163)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl$13.f(WebApplicationImpl.java:769)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl$13.f(WebApplicationImpl.java:765)
>   at com.sun.jersey.spi.inject.Errors.processWithErrors(Errors.java:193)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.initiate(WebApplicationImpl.java:765)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.initiate(WebApplicationImpl.java:760)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.initiate(ServletContainer.java:489)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer$InternalWebComponent.initiate(ServletContainer.java:319)
>   at 
> com.sun.jersey.spi.container.servlet.WebComponent.load(WebComponent.java:609)
>   at 
> com.sun.jersey.spi.container.servlet.WebComponent.init(WebComponent.java:210)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:374)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:557)
>   at javax.servlet.GenericServlet.init(GenericServlet.java:241)
>   at 
> org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:463)
>   at 
> org.eclipse.jetty.servlet.ServletHolder.doStart(ServletHolder.java:283)
>   at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:59)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:770)
>   at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:249)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:676)
>   at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:59)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.doStart(HandlerCollection.java:224)
>   at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:59)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.doStart(HandlerWrapper.java:90)
>   at org.eclipse.jetty.server.Server.doStart(Server.java:261)
>   at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:59)
>   at org.apache.hive.hcatalog.templeton.Main.runServer(Main.java:205)
>   at org.apache.hive.hcatalog.templeton.Main.run(Main.java:118)
>   at org.apache.hive.hcatalog.templeton.Main.main(Main.java:266)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
>   at java.la

[jira] [Commented] (HIVE-12982) WebHCat Templeton Server throwing java.lang.RuntimeException

2016-02-04 Thread Devendra Vishwakarma (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133767#comment-15133767
 ] 

Devendra Vishwakarma commented on HIVE-12982:
-

We found out the root cause of this issue in WebHCat. The problem was with the 
version of the javadoc plugin we were using in the Maven project: we had 
javadoc plugin version 2.9.1 in pom.xml.
To fix this issue we had to downgrade the plugin back to 2.4, i.e.

<version>2.4</version>

And this resolved the problem.
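
For reference, a minimal sketch of what this change looks like in pom.xml, 
assuming the standard org.apache.maven.plugins coordinates (the surrounding 
plugin configuration in the affected project may differ):

{code}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <!-- downgraded from 2.9.1, which broke WADL generator loading here -->
  <version>2.4</version>
</plugin>
{code}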

> WebHCat Templeton Server throwing java.lang.RuntimeException
> 
>
> Key: HIVE-12982
> URL: https://issues.apache.org/jira/browse/HIVE-12982
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Affects Versions: 1.2.1
> Environment: Hive version - 1.2.1
> Linux s390x architecture
>Reporter: Devendra Vishwakarma
>Assignee: Devendra Vishwakarma
>Priority: Blocker
>
> WebHCat Templeton Server is throwing 
> java.lang.RuntimeException: Could not load wadl generators from 
> wadlGeneratorDescriptions. 
> [root@abcde ~]# curl -s 
> 'http://:50111/templeton/v1/status?user.name='
> 
> 
> 
> Error 503 java.lang.RuntimeException: Could not load wadl generators 
> from wadlGeneratorDescriptions.
> 
> 
> HTTP ERROR: 503
> Problem accessing /templeton/v1/status. Reason:
>  java.lang.RuntimeException: Could not load wadl generators from 
> wadlGeneratorDescriptions.
> Powered by Jetty://
> 
> 
> Server logs -
> ===
> java.lang.RuntimeException: Could not load wadl generators from 
> wadlGeneratorDescriptions.
>   at 
> com.sun.jersey.api.wadl.config.WadlGeneratorConfig.createWadlGenerator(WadlGeneratorConfig.java:184)
>   at 
> com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.<init>(WadlApplicationContextImpl.java:92)
>   at com.sun.jersey.server.impl.wadl.WadlFactory.init(WadlFactory.java:96)
>   at 
> com.sun.jersey.server.impl.application.RootResourceUriRules.initWadl(RootResourceUriRules.java:169)
>   at 
> com.sun.jersey.server.impl.application.RootResourceUriRules.<init>(RootResourceUriRules.java:106)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._initiate(WebApplicationImpl.java:1300)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.access$700(WebApplicationImpl.java:163)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl$13.f(WebApplicationImpl.java:769)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl$13.f(WebApplicationImpl.java:765)
>   at com.sun.jersey.spi.inject.Errors.processWithErrors(Errors.java:193)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.initiate(WebApplicationImpl.java:765)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.initiate(WebApplicationImpl.java:760)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.initiate(ServletContainer.java:489)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer$InternalWebComponent.initiate(ServletContainer.java:319)
>   at 
> com.sun.jersey.spi.container.servlet.WebComponent.load(WebComponent.java:609)
>   at 
> com.sun.jersey.spi.container.servlet.WebComponent.init(WebComponent.java:210)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:374)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:557)
>   at javax.servlet.GenericServlet.init(GenericServlet.java:241)
>   at 
> org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:463)
>   at 
> org.eclipse.jetty.servlet.ServletHolder.doStart(ServletHolder.java:283)
>   at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:59)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:770)
>   at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:249)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:676)
>   at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:59)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.doStart(HandlerCollection.java:224)
>   at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:59)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.doStart(HandlerWrapper.java:90)
>   at org.eclipse.jetty.server.Server.doStart(Server.java:261)
>   at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:59)
>   at org.apache.hive.hcatalog.templeton.Main.runServer(Main.java:205)
>   at org.apache.hive.hcatalog.templeton.Main.run(Main.java:118)
>   at org.apache.hive.hcatal

[jira] [Assigned] (HIVE-12982) WebHCat Templeton Server throwing java.lang.RuntimeException

2016-02-04 Thread Devendra Vishwakarma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devendra Vishwakarma reassigned HIVE-12982:
---

Assignee: Devendra Vishwakarma

> WebHCat Templeton Server throwing java.lang.RuntimeException
> 
>
> Key: HIVE-12982
> URL: https://issues.apache.org/jira/browse/HIVE-12982
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Affects Versions: 1.2.1
> Environment: Hive version - 1.2.1
> Linux s390x architecture
>Reporter: Devendra Vishwakarma
>Assignee: Devendra Vishwakarma
>Priority: Blocker
>
> WebHCat Templeton Server is throwing 
> java.lang.RuntimeException: Could not load wadl generators from 
> wadlGeneratorDescriptions. 
> [root@abcde ~]# curl -s 
> 'http://:50111/templeton/v1/status?user.name='
> 
> 
> 
> Error 503 java.lang.RuntimeException: Could not load wadl generators 
> from wadlGeneratorDescriptions.
> 
> 
> HTTP ERROR: 503
> Problem accessing /templeton/v1/status. Reason:
>  java.lang.RuntimeException: Could not load wadl generators from 
> wadlGeneratorDescriptions.
> Powered by Jetty://
> 
> 
> Server logs -
> ===
> java.lang.RuntimeException: Could not load wadl generators from 
> wadlGeneratorDescriptions.
>   at 
> com.sun.jersey.api.wadl.config.WadlGeneratorConfig.createWadlGenerator(WadlGeneratorConfig.java:184)
>   at 
> com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.<init>(WadlApplicationContextImpl.java:92)
>   at com.sun.jersey.server.impl.wadl.WadlFactory.init(WadlFactory.java:96)
>   at 
> com.sun.jersey.server.impl.application.RootResourceUriRules.initWadl(RootResourceUriRules.java:169)
>   at 
> com.sun.jersey.server.impl.application.RootResourceUriRules.<init>(RootResourceUriRules.java:106)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._initiate(WebApplicationImpl.java:1300)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.access$700(WebApplicationImpl.java:163)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl$13.f(WebApplicationImpl.java:769)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl$13.f(WebApplicationImpl.java:765)
>   at com.sun.jersey.spi.inject.Errors.processWithErrors(Errors.java:193)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.initiate(WebApplicationImpl.java:765)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.initiate(WebApplicationImpl.java:760)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.initiate(ServletContainer.java:489)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer$InternalWebComponent.initiate(ServletContainer.java:319)
>   at 
> com.sun.jersey.spi.container.servlet.WebComponent.load(WebComponent.java:609)
>   at 
> com.sun.jersey.spi.container.servlet.WebComponent.init(WebComponent.java:210)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:374)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:557)
>   at javax.servlet.GenericServlet.init(GenericServlet.java:241)
>   at 
> org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:463)
>   at 
> org.eclipse.jetty.servlet.ServletHolder.doStart(ServletHolder.java:283)
>   at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:59)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:770)
>   at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:249)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:676)
>   at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:59)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.doStart(HandlerCollection.java:224)
>   at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:59)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.doStart(HandlerWrapper.java:90)
>   at org.eclipse.jetty.server.Server.doStart(Server.java:261)
>   at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:59)
>   at org.apache.hive.hcatalog.templeton.Main.runServer(Main.java:205)
>   at org.apache.hive.hcatalog.templeton.Main.run(Main.java:118)
>   at org.apache.hive.hcatalog.templeton.Main.main(Main.java:266)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)

[jira] [Commented] (HIVE-12994) Implement support for NULLS FIRST/NULLS LAST

2016-02-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133749#comment-15133749
 ] 

Hive QA commented on HIVE-12994:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12786102/HIVE-12994.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6869/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6869/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6869/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Tests exited with: ExecutionException: java.util.concurrent.ExecutionException: 
java.io.IOException: Could not create 
/data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-6869/succeeded/TestCommands
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12786102 - PreCommit-HIVE-TRUNK-Build

> Implement support for NULLS FIRST/NULLS LAST
> 
>
> Key: HIVE-12994
> URL: https://issues.apache.org/jira/browse/HIVE-12994
> Project: Hive
>  Issue Type: New Feature
>  Components: CBO, Metastore, Parser, Serializers/Deserializers
>Affects Versions: 2.1.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-12994.patch
>
>
> From SQL:2003, the NULLS FIRST and NULLS LAST options can be used to 
> determine whether nulls appear before or after non-null data values when the 
> ORDER BY clause is used.
> The SQL standard does not specify a default behavior. Currently in Hive, 
> null values sort as if lower than any non-null value; that is, NULLS FIRST is 
> the default for ASC order, and NULLS LAST for DESC order.
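
A sketch of the SQL:2003 syntax this feature targets (illustrative table and 
column names; the exact grammar Hive will accept is defined by the patch):

{code}
SELECT key, value
FROM src
ORDER BY value ASC NULLS LAST,
         key DESC NULLS FIRST;
{code}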



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12730) MetadataUpdater: provide a mechanism to edit the basic statistics of a table (or a partition)

2016-02-04 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-12730:
---
Attachment: HIVE-12730.05.patch

> MetadataUpdater: provide a mechanism to edit the basic statistics of a table 
> (or a partition)
> -
>
> Key: HIVE-12730
> URL: https://issues.apache.org/jira/browse/HIVE-12730
> Project: Hive
>  Issue Type: New Feature
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12730.01.patch, HIVE-12730.02.patch, 
> HIVE-12730.03.patch, HIVE-12730.04.patch, HIVE-12730.05.patch
>
>
> We would like to provide a way for developers/users to modify the numRows and 
> dataSize for a table/partition. Right now, although they are part of the table 
> properties, they are set to -1 when the task does not come from a 
> statsTask. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11527) bypass HiveServer2 thrift interface for query results

2016-02-04 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133712#comment-15133712
 ] 

Takanobu Asanuma commented on HIVE-11527:
-

I appreciate your support!

> bypass HiveServer2 thrift interface for query results
> -
>
> Key: HIVE-11527
> URL: https://issues.apache.org/jira/browse/HIVE-11527
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Sergey Shelukhin
>Assignee: Takanobu Asanuma
> Attachments: HIVE-11527.WIP.patch
>
>
> Right now, HS2 reads query results and returns them to the caller via its 
> thrift API.
> There should be an option for HS2 to return some pointer to results (an HDFS 
> link?) and for the user to read the results directly off HDFS inside the 
> cluster, or via something like WebHDFS outside the cluster.
> Review board link: https://reviews.apache.org/r/40867



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12993) user and password supplied from URL is overwritten by the empty user and password of the JDBC connection string when it's calling from beeline

2016-02-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133696#comment-15133696
 ] 

Hive QA commented on HIVE-12993:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12786097/HIVE-12993.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 10051 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6868/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6868/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6868/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12786097 - PreCommit-HIVE-TRUNK-Build

> user and password supplied from URL is overwritten by the empty user and 
> password of the JDBC connection string when it's calling from beeline
> --
>
> Key: HIVE-12993
> URL: https://issues.apache.org/jira/browse/HIVE-12993
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline, JDBC
>Affects Versions: 2.0.0, 2.1.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-12993.1.patch
>
>
> When we make the call {{beeline -u 
> "jdbc:hive2://localhost:1/;user=aaa;password=bbb"}}, the user and 
> password are overwritten by the blank ones since internally it constructs a 
> "connect  '' '' " call with empty user and password. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10187) Avro backed tables don't handle cyclical or recursive records

2016-02-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133610#comment-15133610
 ] 

Hive QA commented on HIVE-10187:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12786091/HIVE-10187.4.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 10036 tests 
executed
*Failed tests:*
{noformat}
TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6867/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6867/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6867/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12786091 - PreCommit-HIVE-TRUNK-Build

> Avro backed tables don't handle cyclical or recursive records
> -
>
> Key: HIVE-10187
> URL: https://issues.apache.org/jira/browse/HIVE-10187
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Affects Versions: 1.2.0
>Reporter: Mark Wagner
>Assignee: Mark Wagner
> Attachments: HIVE-10187.1.patch, HIVE-10187.2.patch, 
> HIVE-10187.3.patch, HIVE-10187.4.patch, HIVE-10187.demo.patch
>
>
> [HIVE-7653] changed the Avro SerDe to make it generate TypeInfos even for 
> recursive/cyclical schemas. However, any attempt to serialize data which 
> exploits that ability results in silently dropped fields.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13009) Fix add_jar_file.q on Windows

2016-02-04 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-13009:
--
Attachment: HIVE-13009.1.patch

Replace '/' with system:file.separator in the local file path.
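
A sketch of the kind of .q-file change this implies (hypothetical jar name; 
the test framework substitutes ${system:...} variables from Java system 
properties):

{noformat}
-- before: hard-coded '/' breaks on Windows
ADD JAR ${system:test.tmp.dir}/testudf.jar;

-- after: portable separator
ADD JAR ${system:test.tmp.dir}${system:file.separator}testudf.jar;
{noformat}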

> Fix add_jar_file.q on Windows
> -
>
> Key: HIVE-13009
> URL: https://issues.apache.org/jira/browse/HIVE-13009
> Project: Hive
>  Issue Type: Bug
>  Components: Tests, Windows
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-13009.1.patch
>
>
> Forward slashes in the local file path don't work for Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12839) Upgrade Hive to Calcite 1.6

2016-02-04 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133585#comment-15133585
 ] 

Pengcheng Xiong commented on HIVE-12839:


[~ashutoshc] and [~jcamachorodriguez], thanks for your attention. The test 
exposed a mistake I made: the planner was using Calcite's default 
getSelectivity for selectivity estimation rather than Hive's getSelectivity. 
As a result, the limit-removal rule was not triggered and an extra stage was 
added. The root cause is that I did not make Hive initialize some of the 
metadata providers correctly after the change in CALCITE-794. I have now made 
a full fix and it seems to work; I am waiting for a new QA run.
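
For context, a hedged sketch of the registration pattern in question: Hive's 
metadata handlers are chained ahead of Calcite's defaults, and if the chain is 
not rebuilt correctly the planner silently falls back to the default handlers. 
The handler name below follows the usual ReflectiveRelMetadataProvider 
convention and is an assumption, not the exact code in the patch.

{code}
import org.apache.calcite.rel.metadata.ChainedRelMetadataProvider;
import org.apache.calcite.rel.metadata.DefaultRelMetadataProvider;
import org.apache.calcite.rel.metadata.RelMetadataProvider;
import com.google.common.collect.ImmutableList;

// Hive's handlers must precede the defaults so they shadow Calcite's
// built-in getSelectivity; otherwise the default estimate is used.
RelMetadataProvider chain = ChainedRelMetadataProvider.of(
    ImmutableList.of(
        HiveRelMdSelectivity.SOURCE,           // assumed Hive handler
        DefaultRelMetadataProvider.INSTANCE)); // Calcite defaults as fallback
{code}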

> Upgrade Hive to Calcite 1.6
> ---
>
> Key: HIVE-12839
> URL: https://issues.apache.org/jira/browse/HIVE-12839
> Project: Hive
>  Issue Type: Improvement
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12839.01.patch, HIVE-12839.02.patch, 
> HIVE-12839.03.patch, HIVE-12839.04.patch
>
>
> CLEAR LIBRARY CACHE
> Upgrade Hive to Calcite 1.6.0-incubating.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12839) Upgrade Hive to Calcite 1.6

2016-02-04 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-12839:
---
Attachment: HIVE-12839.04.patch

> Upgrade Hive to Calcite 1.6
> ---
>
> Key: HIVE-12839
> URL: https://issues.apache.org/jira/browse/HIVE-12839
> Project: Hive
>  Issue Type: Improvement
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12839.01.patch, HIVE-12839.02.patch, 
> HIVE-12839.03.patch, HIVE-12839.04.patch
>
>
> CLEAR LIBRARY CACHE
> Upgrade Hive to Calcite 1.6.0-incubating.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12839) Upgrade Hive to Calcite 1.6

2016-02-04 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-12839:
---
Attachment: (was: HIVE-12839.04.patch)

> Upgrade Hive to Calcite 1.6
> ---
>
> Key: HIVE-12839
> URL: https://issues.apache.org/jira/browse/HIVE-12839
> Project: Hive
>  Issue Type: Improvement
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12839.01.patch, HIVE-12839.02.patch, 
> HIVE-12839.03.patch
>
>
> CLEAR LIBRARY CACHE
> Upgrade Hive to Calcite 1.6.0-incubating.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12839) Upgrade Hive to Calcite 1.6

2016-02-04 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-12839:
---
Attachment: HIVE-12839.04.patch

> Upgrade Hive to Calcite 1.6
> ---
>
> Key: HIVE-12839
> URL: https://issues.apache.org/jira/browse/HIVE-12839
> Project: Hive
>  Issue Type: Improvement
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12839.01.patch, HIVE-12839.02.patch, 
> HIVE-12839.03.patch, HIVE-12839.04.patch
>
>
> CLEAR LIBRARY CACHE
> Upgrade Hive to Calcite 1.6.0-incubating.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11527) bypass HiveServer2 thrift interface for query results

2016-02-04 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133522#comment-15133522
 ] 

Takanobu Asanuma commented on HIVE-11527:
-

I forgot to publish it ... I published it a while ago.

> bypass HiveServer2 thrift interface for query results
> -
>
> Key: HIVE-11527
> URL: https://issues.apache.org/jira/browse/HIVE-11527
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Sergey Shelukhin
>Assignee: Takanobu Asanuma
> Attachments: HIVE-11527.WIP.patch
>
>
> Right now, HS2 reads query results and returns them to the caller via its 
> thrift API.
> There should be an option for HS2 to return some pointer to results (an HDFS 
> link?) and for the user to read the results directly off HDFS inside the 
> cluster, or via something like WebHDFS outside the cluster.
> Review board link: https://reviews.apache.org/r/40867



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12558) LLAP: output QueryFragmentCounters somewhere

2016-02-04 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-12558:
-
Attachment: sample-output.png

Attaching a snapshot of the output displaying the counters

!sample-output.png!

> LLAP: output QueryFragmentCounters somewhere
> 
>
> Key: HIVE-12558
> URL: https://issues.apache.org/jira/browse/HIVE-12558
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Sergey Shelukhin
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-12558.1.patch, HIVE-12558.wip.patch, 
> sample-output.png
>
>
> Right now, LLAP logs counters for every fragment; most of them are IO-related 
> and could be very useful. They also include table names, so things like 
> cache hit ratio, etc., could be calculated for every table.
> We need to output them to some metrics system (preserving the breakdown by 
> table, possibly also adding query ID or even stage) so that they'd be usable 
> without grep/sed/awk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12558) LLAP: output QueryFragmentCounters somewhere

2016-02-04 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133511#comment-15133511
 ] 

Prasanth Jayachandran commented on HIVE-12558:
--

This patch needs TEZ-3090 to compile.

> LLAP: output QueryFragmentCounters somewhere
> 
>
> Key: HIVE-12558
> URL: https://issues.apache.org/jira/browse/HIVE-12558
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Sergey Shelukhin
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-12558.1.patch, HIVE-12558.wip.patch
>
>
> Right now, LLAP logs counters for every fragment; most of them are IO-related 
> and could be very useful. They also include table names, so things like 
> cache hit ratio, etc., could be calculated for every table.
> We need to output them to some metrics system (preserving the breakdown by 
> table, possibly also adding query ID or even stage) so that they'd be usable 
> without grep/sed/awk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12558) LLAP: output QueryFragmentCounters somewhere

2016-02-04 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-12558:
-
Attachment: HIVE-12558.1.patch

> LLAP: output QueryFragmentCounters somewhere
> 
>
> Key: HIVE-12558
> URL: https://issues.apache.org/jira/browse/HIVE-12558
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Sergey Shelukhin
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-12558.1.patch, HIVE-12558.wip.patch
>
>
> Right now, LLAP logs counters for every fragment; most of them are IO-related 
> and could be very useful. They also include table names, so things like 
> cache hit ratio, etc., could be calculated for every table.
> We need to output them to some metrics system (preserving the breakdown by 
> table, possibly also adding query ID or even stage) so that they'd be usable 
> without grep/sed/awk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13008) WebHcat DDL commands in secure mode NPE when default FileSystem doesn't support delegation tokens

2016-02-04 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-13008:
--
Attachment: HIVE-13008.patch

[~thejas] could you review, please?

> WebHcat DDL commands in secure mode NPE when default FileSystem doesn't 
> support delegation tokens
> -
>
> Key: HIVE-13008
> URL: https://issues.apache.org/jira/browse/HIVE-13008
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-13008.patch
>
>
> {noformat}
> ERROR | 11 Jan 2016 20:19:02,781 | 
> org.apache.hive.hcatalog.templeton.CatchallExceptionMapper |
> java.lang.NullPointerException
> at 
> org.apache.hive.hcatalog.templeton.SecureProxySupport$2.run(SecureProxySupport.java:171)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.hive.hcatalog.templeton.SecureProxySupport.writeProxyDelegationTokens(SecureProxySupport.java:168)
> at 
> org.apache.hive.hcatalog.templeton.SecureProxySupport.open(SecureProxySupport.java:95)
> at 
> org.apache.hive.hcatalog.templeton.HcatDelegator.run(HcatDelegator.java:63)
> at org.apache.hive.hcatalog.templeton.Server.ddl(Server.java:217)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1480)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1411)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1360)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1350)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:538)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:716)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:565)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1360)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:615)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:574)
> at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:88)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1331)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:477)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1031)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:406)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:965)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)
> at 
> org.eclipse.jetty.server.handler.

[jira] [Commented] (HIVE-7148) Use murmur hash to create bucketed tables

2016-02-04 Thread Charles Pritchard (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133477#comment-15133477
 ] 

Charles Pritchard commented on HIVE-7148:
-

I could really use custom bucketing functions, as I want to use buckets instead 
of partitions based on a derived value.

> Use murmur hash to create bucketed tables
> -
>
> Key: HIVE-7148
> URL: https://issues.apache.org/jira/browse/HIVE-7148
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>
> HIVE-7121 introduced murmur hashing for queries that don't insert into 
> bucketed tables. This was done to achieve better distribution of the data. 
> The same should be done for bucketed tables as well, but this involves making 
> sure we don't break backwards compat. This probably means that we have to 
> store the partitioning function used in the metadata and use that to 
> determine if SMB and bucketed map-join optimizations apply.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12988) Improve dynamic partition loading IV

2016-02-04 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-12988:

Attachment: HIVE-12988.2.patch

> Improve dynamic partition loading IV
> 
>
> Key: HIVE-12988
> URL: https://issues.apache.org/jira/browse/HIVE-12988
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 1.2.0, 2.0.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-12988.2.patch, HIVE-12988.patch
>
>
> Parallelize copyFiles()



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10115) HS2 running on a Kerberized cluster should offer Kerberos(GSSAPI) and Delegation token(DIGEST) when alternate authentication is enabled

2016-02-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133431#comment-15133431
 ] 

Hive QA commented on HIVE-10115:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12786046/HIVE-10115.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 10051 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hadoop.hive.ql.TestTxnCommands2.testInitiatorWithMultipleFailedCompactions
org.apache.hive.jdbc.TestJdbcWithLocalClusterSpark.testSparkQuery
org.apache.hive.jdbc.TestJdbcWithLocalClusterSpark.testTempTable
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6866/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6866/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6866/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12786046 - PreCommit-HIVE-TRUNK-Build

> HS2 running on a Kerberized cluster should offer Kerberos(GSSAPI) and 
> Delegation token(DIGEST) when alternate authentication is enabled
> ---
>
> Key: HIVE-10115
> URL: https://issues.apache.org/jira/browse/HIVE-10115
> Project: Hive
>  Issue Type: Improvement
>  Components: Authentication
>Affects Versions: 1.1.0
>Reporter: Mubashir Kazia
>Assignee: Mubashir Kazia
>  Labels: patch
> Attachments: HIVE-10115.0.patch, HIVE-10115.2.patch
>
>
> In a Kerberized cluster, when alternate authentication is enabled on HS2, it 
> should also accept Kerberos authentication. This is important because when we 
> enable LDAP authentication, HS2 stops accepting delegation token 
> authentication, so we are forced to enter usernames and passwords in the 
> oozie configuration.
> The whole idea of SASL is that multiple authentication mechanisms can be 
> offered. If we disable Kerberos (GSSAPI) and delegation token (DIGEST) 
> authentication when we enable LDAP authentication, this defeats the purpose 
> of SASL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12558) LLAP: output QueryFragmentCounters somewhere

2016-02-04 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133415#comment-15133415
 ] 

Prasanth Jayachandran commented on HIVE-12558:
--

Uploading a WIP patch. Needs cleanup and some console UI tweaking. 

> LLAP: output QueryFragmentCounters somewhere
> 
>
> Key: HIVE-12558
> URL: https://issues.apache.org/jira/browse/HIVE-12558
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Sergey Shelukhin
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-12558.wip.patch
>
>
> Right now, LLAP logs counters for every fragment; most of them are IO-related 
> and could be very useful. They also include table names, so things like 
> cache hit ratio, etc., could be calculated for every table.
> We need to output them to some metrics system (preserving the breakdown by 
> table, possibly also adding query ID or even stage) so that they'd be usable 
> without grep/sed/awk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12558) LLAP: output QueryFragmentCounters somewhere

2016-02-04 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-12558:
-
Attachment: HIVE-12558.wip.patch

> LLAP: output QueryFragmentCounters somewhere
> 
>
> Key: HIVE-12558
> URL: https://issues.apache.org/jira/browse/HIVE-12558
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Sergey Shelukhin
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-12558.wip.patch
>
>
> Right now, LLAP logs counters for every fragment; most of them are IO-related 
> and could be very useful. They also include table names, so things like 
> cache hit ratio, etc., could be calculated for every table.
> We need to output them to some metrics system (preserving the breakdown by 
> table, possibly also adding query ID or even stage) so that they'd be usable 
> without grep/sed/awk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13006) LLAP: add finer-grained classloaders as an option to be able to block the usage of removed UDFs

2016-02-04 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13006:

Description: Right now we add an epic global classloader for executors via 
a thread factory, so once added, UDFs cannot really be removed until restart. 
Changing the classloader for threadpool threads might not be possible, so we'd 
need to make executor threads (or the threadpool) custom.
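
A sketch of the pattern being described, i.e. one global classloader installed 
on every executor thread through the pool's thread factory (illustrative; the 
actual LLAP wiring differs):

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Stand-in for LLAP's global UDF classloader (assumption for illustration).
ClassLoader udfLoader = Thread.currentThread().getContextClassLoader();

// The loader is baked into each thread at pool creation, so a UDF removed
// later stays loadable until the daemon restarts.
ExecutorService executors = Executors.newFixedThreadPool(4, runnable -> {
  Thread t = new Thread(runnable);
  t.setContextClassLoader(udfLoader);
  return t;
});
{code}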

> LLAP: add finer-grained classloaders as an option to be able to block the 
> usage of removed UDFs
> ---
>
> Key: HIVE-13006
> URL: https://issues.apache.org/jira/browse/HIVE-13006
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>
> Right now we add an epic global classloader for executors via a thread 
> factory, so once added, UDFs cannot really be removed until restart. Changing 
> the classloader for threadpool threads might not be possible, so we'd need to 
> make executor threads (or the threadpool) custom.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12856) LLAP: update (add/remove) the UDFs available in LLAP when they are changed (refresh periodically)

2016-02-04 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-12856:

Summary: LLAP: update (add/remove) the UDFs available in LLAP when they are 
changed (refresh periodically)  (was: LLAP: update (add/remove) the UDFs 
available in LLAP when they are changed; also refresh periodically)

> LLAP: update (add/remove) the UDFs available in LLAP when they are changed 
> (refresh periodically)
> -
>
> Key: HIVE-12856
> URL: https://issues.apache.org/jira/browse/HIVE-12856
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>
> I don't think re-querying the functions is going to scale, and the sessions 
> obviously cannot notify all LLAP clusters of every change. We should add 
> global versioning to metastore functions to track changes, and then possibly 
> add a notification mechanism, potentially thru ZK to avoid overloading the 
> metastore itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12892) Add global change versioning to permanent functions in metastore

2016-02-04 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133347#comment-15133347
 ] 

Alan Gates commented on HIVE-12892:
---

There are all kinds of things in there that won't work properly in multi-user 
mode without a transaction manager.  This seems like just another instance.  Is 
there a compelling reason to guard it now with increment rather than live 
dangerously until the txn manager work is done?

> Add global change versioning to permanent functions in metastore
> 
>
> Key: HIVE-12892
> URL: https://issues.apache.org/jira/browse/HIVE-12892
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12892.01.patch, HIVE-12892.02.patch, 
> HIVE-12892.03.patch, HIVE-12892.04.patch, HIVE-12892.05.patch, 
> HIVE-12892.nogen.patch, HIVE-12892.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12924) CBO: Calcite Operator To Hive Operator (Calcite Return Path): TestCliDriver groupby_ppr_multi_distinct.q failure

2016-02-04 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-12924:
-
Description: 
{code}
EXPLAIN EXTENDED
FROM srcpart src
INSERT OVERWRITE TABLE dest1
SELECT substr(src.key,1,1), count(DISTINCT substr(src.value,5)), 
concat(substr(src.key,1,1),sum(substr(src.value,5))), sum(DISTINCT 
substr(src.value, 5)), count(DISTINCT src.value)
WHERE src.ds = '2008-04-08'
GROUP BY substr(src.key,1,1)
{code}

Ended Job = job_local968043618_0742 with errors
FAILED: Execution Error, return code 2 from 
org.apache.hadoop.hive.ql.exec.mr.MapRedTask

  was:
{code}
EXPLAIN EXTENDED
FROM srcpart src
INSERT OVERWRITE TABLE dest1
SELECT substr(src.key,1,1), count(DISTINCT substr(src.value,5)), 
concat(substr(src.key,1,1),sum(substr(src.value,5))), sum(DISTINCT 
substr(src.value, 5)), count(DISTINCT src.value)
WHERE src.ds = '2008-04-08'
GROUP BY substr(src.key,1,1)
{code}

Stack trace:
{code}
2016-01-25T14:27:56,694 DEBUG [4e6a139e-a78c-4f61-bb10-57af2b0d4381 main[]]: 
parse.TypeCheckCtx (TypeCheckCtx.java:setError(159)) - Setting error: [Line 
6:79 Expression not in GROUP BY key 'key'] from (tok_table_or_col src)
java.lang.Exception
at 
org.apache.hadoop.hive.ql.parse.TypeCheckCtx.setError(TypeCheckCtx.java:159) 
[hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$ColumnExprProcessor.process(TypeCheckProcFactory.java:628)
 [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
 [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105)
 [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:89)
 [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:158)
 [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:120)
 [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:213)
 [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:157)
 [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genAllExprNodeDesc(SemanticAnalyzer.java:10512)
 [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:10468)
 [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genSelectLogicalPlan(CalcitePlanner.java:2920)
 [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genLogicalPlan(CalcitePlanner.java:3053)
 [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:874)
 [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:832)
 [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at org.apache.calcite.tools.Frameworks$1.apply(Frameworks.java:112) 
[calcite-core-1.5.0.jar:1.5.0]
at 
org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:971)
 [calcite-core-1.5.0.jar:1.5.0]
at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:148) 
[calcite-core-1.5.0.jar:1.5.0]
at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:105) 
[calcite-core-1.5.0.jar:1.5.0]
at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedHiveOPDag(CalcitePlanner.java:677)
 [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:264)
 [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10100)
 [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:229)
 [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:239)
 [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:74)
 [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
  

[jira] [Commented] (HIVE-13004) Remove encryption shims

2016-02-04 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133314#comment-15133314
 ] 

Ashutosh Chauhan commented on HIVE-13004:
-

[~spena] Welcome  : ) Seems like your comment got truncated. If you want to 
take this up, feel free to assign it to yourself.

> Remove encryption shims
> ---
>
> Key: HIVE-13004
> URL: https://issues.apache.org/jira/browse/HIVE-13004
> Project: Hive
>  Issue Type: Task
>  Components: Encryption
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>
> It has served its purpose. Now that we don't support hadoop-1, it's no longer 
> needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12892) Add global change versioning to permanent functions in metastore

2016-02-04 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133309#comment-15133309
 ] 

Sergey Shelukhin commented on HIVE-12892:
-

Hmm... should we have the increment before txn manager, and then replace it 
with get+put when txn manager is in place?

> Add global change versioning to permanent functions in metastore
> 
>
> Key: HIVE-12892
> URL: https://issues.apache.org/jira/browse/HIVE-12892
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12892.01.patch, HIVE-12892.02.patch, 
> HIVE-12892.03.patch, HIVE-12892.04.patch, HIVE-12892.05.patch, 
> HIVE-12892.nogen.patch, HIVE-12892.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12927) HBase metastore: sequences are not safe

2016-02-04 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133294#comment-15133294
 ] 

Alan Gates commented on HIVE-12927:
---

See comments on HIVE-12892 on why increment won't work in that case.  In this 
case where I'm explicitly circumventing the transaction management it would be 
viable.  Is it better than checkAndPut?  I don't care which we use here.
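
For reference, a sketch of the two client calls being compared (htab is an 
open HTableInterface; row/column names are hypothetical and error handling is 
omitted):

{code}
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

byte[] row = Bytes.toBytes("mSequence"); // hypothetical names
byte[] cf  = Bytes.toBytes("c");
byte[] col = Bytes.toBytes("nextVal");

// Option 1: atomic increment -- HBase serializes concurrent updates itself.
long next = htab.incrementColumnValue(row, cf, col, 1L);

// Option 2: optimistic checkAndPut -- the write succeeds only if the cell
// still holds the value previously read; the caller retries on false.
long current = 42L; // assumption: value obtained from a prior Get
Put put = new Put(row);
put.add(cf, col, Bytes.toBytes(current + 1));
boolean won = htab.checkAndPut(row, cf, col, Bytes.toBytes(current), put);
{code}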

> HBase metastore: sequences are not safe
> ---
>
> Key: HIVE-12927
> URL: https://issues.apache.org/jira/browse/HIVE-12927
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Sergey Shelukhin
>Assignee: Alan Gates
>Priority: Critical
> Attachments: HIVE-12927.patch
>
>
> {noformat}
>   long getNextSequence(byte[] sequence) throws IOException {
> {noformat}
> Is not safe in presence of any concurrency. It should use HBase increment API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13005) CBO: Calcite Operator To Hive Operator (Calcite Return Path): RexNode convert(ExprNodeConstantDesc literal) decimal support bug

2016-02-04 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133296#comment-15133296
 ] 

Sergey Shelukhin commented on HIVE-13005:
-

+1

> CBO: Calcite Operator To Hive Operator (Calcite Return Path): RexNode 
> convert(ExprNodeConstantDesc literal)  decimal support bug
> 
>
> Key: HIVE-13005
> URL: https://issues.apache.org/jira/browse/HIVE-13005
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-13005.1.patch
>
>
> HIVE-8064 seems to have introduced this code in RexNodeConverter::convert(), 
> but the parameters look like they are passed in the wrong order:
> {code}
> RelDataType relType = 
> cluster.getTypeFactory().createSqlType(SqlTypeName.DECIMAL,
> bd.scale(), unscaled.toString().length());
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12892) Add global change versioning to permanent functions in metastore

2016-02-04 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133291#comment-15133291
 ] 

Alan Gates commented on HIVE-12892:
---

The changes in HBaseReadWrite won't work.  HBaseReadWrite doesn't use 
HTableInterface directly; it uses HBaseConnection so that it can use a 
transaction manager such as Tephra or Omid.  These don't support the increment 
interface; they only support put, get, scan, and delete.  With a transaction 
manager in place, I don't believe we'll need increment here.
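
A minimal sketch of the get+put alternative, assuming the transaction manager 
(Tephra or Omid) detects write-write conflicts on the row; the table layout 
below is a placeholder, not the actual HBaseReadWrite code:

{code}
// Hypothetical sketch only: bump a version counter with plain get + put,
// relying on the transaction manager to abort one of two concurrent bumps.
import java.io.IOException;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class VersionBumpSketch {
  private static final byte[] CF = Bytes.toBytes("cf");    // placeholder
  private static final byte[] COL = Bytes.toBytes("ver");  // placeholder

  static long bumpVersion(HTableInterface table, byte[] row) throws IOException {
    Result r = table.get(new Get(row));
    byte[] cur = r.getValue(CF, COL);
    long next = (cur == null ? 0L : Bytes.toLong(cur)) + 1;
    Put p = new Put(row);
    // Two concurrent bumps write the same row; the txn manager rolls one back.
    p.add(CF, COL, Bytes.toBytes(next));
    table.put(p);
    return next;
  }
}
{code}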

> Add global change versioning to permanent functions in metastore
> 
>
> Key: HIVE-12892
> URL: https://issues.apache.org/jira/browse/HIVE-12892
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12892.01.patch, HIVE-12892.02.patch, 
> HIVE-12892.03.patch, HIVE-12892.04.patch, HIVE-12892.05.patch, 
> HIVE-12892.nogen.patch, HIVE-12892.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12064) prevent transactional=false

2016-02-04 Thread Wei Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133287#comment-15133287
 ] 

Wei Zheng commented on HIVE-12064:
--

[~ekoifman] Correct me if I'm wrong. In general we want to fail the following 
scenarios:
1. CREATE TABLE (with no bucketing or no ORC format) TBLPROPERTIES 
("transactional"="true");
Bad news: we currently don't prevent that.
2. ALTER TABLE
2.1 No tblproperties -> "transactional"="false" (Do we allow this? There is no 
real need to do it on a non-ACID table.)
2.2 No tblproperties -> "transactional"="true" (need to check bucketing and 
ORC; if not satisfied, fail)
2.3 "transactional"="false" -> "true" (need to check bucketing and ORC; if not 
satisfied, fail)
2.4 "transactional"="true" -> "false" (cannot go back)

> prevent transactional=false
> ---
>
> Key: HIVE-12064
> URL: https://issues.apache.org/jira/browse/HIVE-12064
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Wei Zheng
>Priority: Critical
> Attachments: HIVE-12064.2.patch, HIVE-12064.patch
>
>
> Currently the table property transactional=true must be set to make a table 
> behave in an ACID-compliant way.
> This is misleading in that it seems like changing it to transactional=false 
> makes the table non-ACID, but the on-disk layout of an ACID table is 
> different from that of plain tables, so changing this property may cause 
> wrong data to be returned.
> We should prevent transactional=false.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13005) CBO: Calcite Operator To Hive Operator (Calcite Return Path): RexNode convert(ExprNodeConstantDesc literal) decimal support bug

2016-02-04 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133284#comment-15133284
 ] 

Hari Sankar Sivarama Subramaniyan commented on HIVE-13005:
--

With return path on, this exposes some issues with literal_decimal.q. With 
return path turned off, we silently ignore this error (assuming that the client 
supplied a wrong precision/scale combo) and move on to the AST code path, which 
handles this scenario correctly because Hive supports it.
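
Calcite's RelDataTypeFactory.createSqlType(SqlTypeName, int, int) takes 
precision before scale, so the quoted call appears to have the two swapped. A 
sketch of the presumed intent (the authoritative fix is in the attached patch):

{code}
// Sketch of the corrected argument order; precision comes before scale.
RelDataType relType = cluster.getTypeFactory().createSqlType(
    SqlTypeName.DECIMAL,
    unscaled.toString().length(), // precision: digits of the unscaled value
    bd.scale());                  // scale
{code}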

> CBO: Calcite Operator To Hive Operator (Calcite Return Path): RexNode 
> convert(ExprNodeConstantDesc literal)  decimal support bug
> 
>
> Key: HIVE-13005
> URL: https://issues.apache.org/jira/browse/HIVE-13005
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-13005.1.patch
>
>
> HIVE-8064 seems to have introduced this code in RexNodeConverter::convert(), 
> but the parameters look like they are passed in the wrong order:
> {code}
> RelDataType relType = 
> cluster.getTypeFactory().createSqlType(SqlTypeName.DECIMAL,
> bd.scale(), unscaled.toString().length());
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12790) Metastore connection leaks in HiveServer2

2016-02-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133279#comment-15133279
 ] 

Hive QA commented on HIVE-12790:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12786030/HIVE-12790.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 10051 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6865/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6865/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6865/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12786030 - PreCommit-HIVE-TRUNK-Build

> Metastore connection leaks in HiveServer2
> -
>
> Key: HIVE-12790
> URL: https://issues.apache.org/jira/browse/HIVE-12790
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Attachments: HIVE-12790.2.patch, HIVE-12790.3.patch, 
> HIVE-12790.patch, snippedLog.txt
>
>
> HiveServer2 keeps opening new connections to HMS each time it launches a 
> task. These connections do not appear to be closed when the task completes 
> thus causing a HMS connection leak. "lsof" for the HS2 process shows 
> connections to port 9083.
> {code}
> 2015-12-03 04:20:56,352 INFO  [HiveServer2-Background-Pool: Thread-424756()]: 
> ql.Driver (SessionState.java:printInfo(558)) - Launching Job 11 out of 41
> 2015-12-03 04:20:56,354 INFO  [Thread-405728()]: hive.metastore 
> (HiveMetaStoreClient.java:open(311)) - Trying to connect to metastore with 
> URI thrift://:9083
> 2015-12-03 04:20:56,360 INFO  [Thread-405728()]: hive.metastore 
> (HiveMetaStoreClient.java:open(351)) - Opened a connection to metastore, 
> current connections: 14824
> 2015-12-03 04:20:56,360 INFO  [Thread-405728()]: hive.metastore 
> (HiveMetaStoreClient.java:open(400)) - Connected to metastore.
> 
> 2015-12-03 04:21:06,355 INFO  [HiveServer2-Background-Pool: Thread-424756()]: 
> ql.Driver (SessionState.java:printInfo(558)) - Launching Job 12 out of 41
> 2015-12-03 04:21:06,357 INFO  [Thread-405756()]: hive.metastore 
> (HiveMetaStoreClient.java:open(311)) - Trying to connect to metastore with 
> URI thrift://:9083
> 2015-12-03 04:21:06,362 INFO  [Thread-405756()]: hive.metastore 
> (HiveMetaStoreClient.java:open(351)) - Opened a connection to metastore, 
> current connections: 14825
> 2015-12-03 04:21:06,362 INFO  [Thread-405756()]: hive.metastore 
> (HiveMetaStoreClient.java:open(400)) - Connected to metastore.
> ...
> 2015-12-03 04:21:08,357 INFO  [HiveServer2-Background-Pool: Thread-424756()]: 
> ql.Driver (SessionState.java:printInfo(558)) - Launching Job 13 out of 41
> 2015-12-03 04:21:08,360 INFO  [Thread-405782()]: hive.metastore 
> (HiveMetaStoreClient.java:open(311)) - Trying to connect to metastore with 
> URI thrift://:9083
> 2015-12-03 04:21:08,364 INFO  [Thread-405782()]: hive.metastore 
> (HiveMetaStoreClient.java:open(351)) - Opened a connection to metastore, 
> current connections: 14826
> 2015-12-03 04:21:08,365 INFO  [Thread-405782()]: hive.metastore 
> (HiveMetaStoreClient.java:open(400)) - Connected to metastore.
> ... 
> {code}
> The TaskRunner thread starts a new SessionState each time, which creates a 
> new connection to the HMS (via Hive.get(conf).getMSC()) that is never closed.
> Even SessionState.close(), currently not being called by the TaskRunner 
> thread, does not close this connection.
> Attaching an anonymized log snippet where the number of HMS connections 
> reaches north of 25,000.
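
A hedged sketch of the kind of per-thread cleanup the description points at; 
Hive.closeCurrent() is named here only as the obvious candidate for releasing 
the thread-local client, and whether it is the actual fix is a matter for the 
attached patches:

{code}
// Sketch only: release the thread-local Hive object (and its metastore
// connection, if one was opened) when a task-runner thread finishes.
try {
  // ... run the task; SessionState / Hive.get(conf).getMSC() may open a
  // metastore connection bound to this thread ...
} finally {
  org.apache.hadoop.hive.ql.metadata.Hive.closeCurrent();
}
{code}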



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12803) CBO: Calcite Operator To Hive Operator (Calcite Return Path): MiniTezCliDriver count.q failure

2016-02-04 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133277#comment-15133277
 ] 

Hari Sankar Sivarama Subramaniyan commented on HIVE-12803:
--

The exception mentioned in the jira is fixed by the change in HIVE-12924. 
However, I still see wrong results in the queries, so I am leaving this jira 
open for now.

> CBO: Calcite Operator To Hive Operator (Calcite Return Path): 
> MiniTezCliDriver count.q failure
> --
>
> Key: HIVE-12803
> URL: https://issues.apache.org/jira/browse/HIVE-12803
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
>
> {code}
> select a, count(distinct b), count(distinct c), sum(d) from abcd group by a;
> {code}
> Set hive.cbo.returnpath.hiveop=true;
> {code}
> java.lang.IndexOutOfBoundsException: Index: 5, Size: 5
> at java.util.ArrayList.rangeCheck(ArrayList.java:635) ~[?:1.7.0_79]
> at java.util.ArrayList.get(ArrayList.java:411) ~[?:1.7.0_79]
> at 
> org.apache.hadoop.hive.ql.optimizer.calcite.translator.HiveGBOpConvUtil.genReduceSideGB1NoMapGB(HiveGBOpConvUtil.java:1060)
>  ~[hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.optimizer.calcite.translator.HiveGBOpConvUtil.genNoMapSideGBNoSkew(HiveGBOpConvUtil.java:473)
>  ~[hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.optimizer.calcite.translator.HiveGBOpConvUtil.translateGB(HiveGBOpConvUtil.java:304)
>  ~[hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.optimizer.calcite.translator.HiveOpConverter.visit(HiveOpConverter.java:398)
>  ~[hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.optimizer.calcite.translator.HiveOpConverter.dispatch(HiveOpConverter.java:181)
>  ~[hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.optimizer.calcite.translator.HiveOpConverter.convert(HiveOpConverter.java:154)
>  ~[hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedHiveOPDag(CalcitePlanner.java:688)
>  ~[hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:266)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10094)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:231)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:237)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:74)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:237)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:471) 
> [hive-exec-2.1.0-SNAPSHOT.jar:?]
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:311) 
> [hive-exec-2.1.0-SNAPSHOT.jar:?]
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1149) 
> [hive-exec-2.1.0-SNAPSHOT.jar:?]
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1237) 
> [hive-exec-2.1.0-SNAPSHOT.jar:?]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13005) CBO: Calcite Operator To Hive Operator (Calcite Return Path): RexNode convert(ExprNodeConstantDesc literal) decimal support bug

2016-02-04 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133271#comment-15133271
 ] 

Sergey Shelukhin commented on HIVE-13005:
-

Hmm... it's surprising that anything has ever worked if this is the case.

> CBO: Calcite Operator To Hive Operator (Calcite Return Path): RexNode 
> convert(ExprNodeConstantDesc literal)  decimal support bug
> 
>
> Key: HIVE-13005
> URL: https://issues.apache.org/jira/browse/HIVE-13005
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-13005.1.patch
>
>
> HIVE-8064 seems to have introduced this code in RexNodeConverter::convert(), 
> but the parameters look like they  are wrongly passed :
> {code}
> RelDataType relType = 
> cluster.getTypeFactory().createSqlType(SqlTypeName.DECIMAL,
> bd.scale(), unscaled.toString().length());
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12976) MetaStoreDirectSql doesn't batch IN lists in all cases

2016-02-04 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133263#comment-15133263
 ] 

Gopal V commented on HIVE-12976:


[~sershe]: LGTM - +1, with some caveats.

[~ashutoshc]/[~sushanth]: any comments on the code itself? I haven't studied 
the original code closely enough to be definite.

{code}
hive> explain select * from lineitem where l_orderkey = 121201;
OK
Plan optimized by CBO.

Stage-0
   Fetch Operator
  limit:-1
  Stage-1
 Map 1 vectorized, llap
 File Output Operator [FS_7]
compressed:false
Statistics:Num rows: 1434 Data size: 1274826 Basic stats: COMPLETE 
Column stats: COMPLETE
table:{"input 
format:":"org.apache.hadoop.mapred.TextInputFormat","output 
format:":"org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat","serde:":"org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe"}
Select Operator [OP_6]
   
outputColumnNames:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8","_col9","_col10","_col11","_col12","_col13","_col14","_col15"]
   Statistics:Num rows: 1434 Data size: 1274826 Basic stats: 
COMPLETE Column stats: COMPLETE
   Filter Operator [FIL_5]
  predicate:(l_orderkey = 121201) (type: boolean)
  Statistics:Num rows: 1434 Data size: 1274826 Basic stats: 
COMPLETE Column stats: COMPLETE
  TableScan [TS_0]
 alias:lineitem
 Statistics:Num rows: 589709 Data size: 5333990851301 
Basic stats: COMPLETE Column stats: COMPLETE

Time taken: 1.709 seconds, Fetched: 21 row(s)
{code}

> MetaStoreDirectSql doesn't batch IN lists in all cases
> --
>
> Key: HIVE-12976
> URL: https://issues.apache.org/jira/browse/HIVE-12976
> Project: Hive
>  Issue Type: Bug
>Reporter: Gopal V
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12976.01.patch, HIVE-12976.02.patch, 
> HIVE-12976.patch
>
>
> That means that some RDBMS products with arbitrary limits on IN list size 
> cannot run these queries. I hope the HBase metastore comes soon and delivers 
> us from Oracle! For now, though, we have to fix this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13005) CBO: Calcite Operator To Hive Operator (Calcite Return Path): RexNode convert(ExprNodeConstantDesc literal) decimal support bug

2016-02-04 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133261#comment-15133261
 ] 

Hari Sankar Sivarama Subramaniyan commented on HIVE-13005:
--

cc-ing [~sershe] 

> CBO: Calcite Operator To Hive Operator (Calcite Return Path): RexNode 
> convert(ExprNodeConstantDesc literal)  decimal support bug
> 
>
> Key: HIVE-13005
> URL: https://issues.apache.org/jira/browse/HIVE-13005
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-13005.1.patch
>
>
> HIVE-8064 seems to have introduced this code in RexNodeConverter::convert(), 
> but the parameters look like they are passed in the wrong order:
> {code}
> RelDataType relType = 
> cluster.getTypeFactory().createSqlType(SqlTypeName.DECIMAL,
> bd.scale(), unscaled.toString().length());
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13005) CBO: Calcite Operator To Hive Operator (Calcite Return Path): RexNode convert(ExprNodeConstantDesc literal) decimal support bug

2016-02-04 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-13005:
-
Attachment: HIVE-13005.1.patch

[~ashutoshc] Can you please review this change?

> CBO: Calcite Operator To Hive Operator (Calcite Return Path): RexNode 
> convert(ExprNodeConstantDesc literal)  decimal support bug
> 
>
> Key: HIVE-13005
> URL: https://issues.apache.org/jira/browse/HIVE-13005
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-13005.1.patch
>
>
> HIVE-8064 seems to have introduced this code in RexNodeConverter::convert(), 
> but the parameters look like they are passed in the wrong order:
> {code}
> RelDataType relType = 
> cluster.getTypeFactory().createSqlType(SqlTypeName.DECIMAL,
> bd.scale(), unscaled.toString().length());
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10187) Avro backed tables don't handle cyclical or recursive records

2016-02-04 Thread Anthony Hsu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133204#comment-15133204
 ] 

Anthony Hsu commented on HIVE-10187:


I left some minor comments on [the RB|https://reviews.apache.org/r/36154/].

> Avro backed tables don't handle cyclical or recursive records
> -
>
> Key: HIVE-10187
> URL: https://issues.apache.org/jira/browse/HIVE-10187
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Affects Versions: 1.2.0
>Reporter: Mark Wagner
>Assignee: Mark Wagner
> Attachments: HIVE-10187.1.patch, HIVE-10187.2.patch, 
> HIVE-10187.3.patch, HIVE-10187.4.patch, HIVE-10187.demo.patch
>
>
> [HIVE-7653] changed the Avro SerDe to make it generate TypeInfos even for 
> recursive/cyclical schemas. However, any attempt to serialize data which 
> exploits that ability results in silently dropped fields.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-13005) CBO: Calcite Operator To Hive Operator (Calcite Return Path): RexNode convert(ExprNodeConstantDesc literal) decimal support bug

2016-02-04 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan reassigned HIVE-13005:


Assignee: Hari Sankar Sivarama Subramaniyan

> CBO: Calcite Operator To Hive Operator (Calcite Return Path): RexNode 
> convert(ExprNodeConstantDesc literal)  decimal support bug
> 
>
> Key: HIVE-13005
> URL: https://issues.apache.org/jira/browse/HIVE-13005
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
>
> HIVE-8064 seems to have introduced this code in RexNodeConverter::convert(), 
> but the parameters look like they are passed in the wrong order:
> {code}
> RelDataType relType = 
> cluster.getTypeFactory().createSqlType(SqlTypeName.DECIMAL,
> bd.scale(), unscaled.toString().length());
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13004) Remove encryption shims

2016-02-04 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-13004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133149#comment-15133149
 ] 

Sergio Peña commented on HIVE-13004:


Great. Thanks [~ashutoshc].
I will remove them, then.

> Remove encryption shims
> ---
>
> Key: HIVE-13004
> URL: https://issues.apache.org/jira/browse/HIVE-13004
> Project: Hive
>  Issue Type: Task
>  Components: Encryption
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>
> It has served its purpose. Now that we don't support hadoop-1, its no longer 
> needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-13004) Remove encryption shims

2016-02-04 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan reassigned HIVE-13004:
---

Assignee: Ashutosh Chauhan  (was: Sergio Peña)

> Remove encryption shims
> ---
>
> Key: HIVE-13004
> URL: https://issues.apache.org/jira/browse/HIVE-13004
> Project: Hive
>  Issue Type: Task
>  Components: Encryption
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>
> It has served its purpose. Now that we don't support hadoop-1, its no longer 
> needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13001) Hive pre-commits builds taking much longer than normal

2016-02-04 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-13001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133090#comment-15133090
 ] 

Sergio Peña commented on HIVE-13001:


This is the only build that took too much time. The builds before and after it 
took ~3h, which is our average time.
I'll close this ticket, as Jenkins is working fine again.


> Hive pre-commits builds taking much longer than normal
> --
>
> Key: HIVE-13001
> URL: https://issues.apache.org/jira/browse/HIVE-13001
> Project: Hive
>  Issue Type: Test
>  Components: Hive
>Affects Versions: 1.3.0
>Reporter: Naveen Gangam
>Assignee: Sergio Peña
>
> http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6863/
> Build took 6+ hours to complete.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-13001) Hive pre-commits builds taking much longer than normal

2016-02-04 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-13001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña resolved HIVE-13001.

Resolution: Fixed

> Hive pre-commits builds taking much longer than normal
> --
>
> Key: HIVE-13001
> URL: https://issues.apache.org/jira/browse/HIVE-13001
> Project: Hive
>  Issue Type: Test
>  Components: Hive
>Affects Versions: 1.3.0
>Reporter: Naveen Gangam
>Assignee: Sergio Peña
>
> http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6863/
> Build took 6+ hours to complete.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12923) CBO: Calcite Operator To Hive Operator (Calcite Return Path): TestCliDriver groupby_grouping_sets4.q failure

2016-02-04 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133078#comment-15133078
 ] 

Hari Sankar Sivarama Subramaniyan commented on HIVE-12923:
--

[~jcamachorodriguez] Can you please take a look at the change?

Thanks
Hari

> CBO: Calcite Operator To Hive Operator (Calcite Return Path): TestCliDriver 
> groupby_grouping_sets4.q failure
> 
>
> Key: HIVE-12923
> URL: https://issues.apache.org/jira/browse/HIVE-12923
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-12923.1.patch, HIVE-12923.2.patch
>
>
> {code}
> EXPLAIN
> SELECT * FROM
> (SELECT a, b, count(*) from T1 where a < 3 group by a, b with cube) subq1
> join
> (SELECT a, b, count(*) from T1 where a < 3 group by a, b with cube) subq2
> on subq1.a = subq2.a
> {code}
> Stack trace:
> {code}
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.optimizer.ColumnPrunerProcFactory.pruneJoinOperator(ColumnPrunerProcFactory.java:1110)
> at 
> org.apache.hadoop.hive.ql.optimizer.ColumnPrunerProcFactory.access$400(ColumnPrunerProcFactory.java:85)
> at 
> org.apache.hadoop.hive.ql.optimizer.ColumnPrunerProcFactory$ColumnPrunerJoinProc.process(ColumnPrunerProcFactory.java:941)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:89)
> at 
> org.apache.hadoop.hive.ql.optimizer.ColumnPruner$ColumnPrunerWalker.walk(ColumnPruner.java:172)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:120)
> at 
> org.apache.hadoop.hive.ql.optimizer.ColumnPruner.transform(ColumnPruner.java:135)
> at 
> org.apache.hadoop.hive.ql.optimizer.Optimizer.optimize(Optimizer.java:237)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10176)
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:229)
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:239)
> at 
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:74)
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:239)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:472)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:312)
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1168)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1256)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1094)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1082)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:400)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:336)
> at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1129)
> at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:1103)
> at 
> org.apache.hadoop.hive.cli.TestCliDriver.runTest(TestCliDriver.java:10444)
> at 
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_grouping_sets4(TestCliDriver.java:3313)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12950) get rid of the NullScan emptyFile madness

2016-02-04 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133069#comment-15133069
 ] 

Ashutosh Chauhan commented on HIVE-12950:
-

+1

> get rid of the NullScan emptyFile madness
> -
>
> Key: HIVE-12950
> URL: https://issues.apache.org/jira/browse/HIVE-12950
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12950.01.patch, HIVE-12950.02.patch, 
> HIVE-12950.03.patch, HIVE-12950.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12730) MetadataUpdater: provide a mechanism to edit the basic statistics of a table (or a partition)

2016-02-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133066#comment-15133066
 ] 

Hive QA commented on HIVE-12730:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12785934/HIVE-12730.04.patch

{color:green}SUCCESS:{color} +1 due to 5 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 69 failed/errored test(s), 10038 tests 
executed
*Failed tests:*
{noformat}
TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_map_join_spark1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_map_join_spark2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_map_join_spark3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin10
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin_negative
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin_negative2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats18
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_truncate_column
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_truncate_column_list_bucket
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_bucketmapjoin7
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_11
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_12
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_3
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_4
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_7
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_8
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_bucketmapjoin7
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_1
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_12
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_3
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_4
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_7
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_8
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_bucket_map_join_spark1
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_bucket_map_join_spark2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_bucket_map_join_spark3
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_bucketmapjoin1
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_bucketmap

[jira] [Updated] (HIVE-12924) CBO: Calcite Operator To Hive Operator (Calcite Return Path): TestCliDriver groupby_ppr_multi_distinct.q failure

2016-02-04 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-12924:
-
Attachment: HIVE-12924.2.patch

> CBO: Calcite Operator To Hive Operator (Calcite Return Path): TestCliDriver 
> groupby_ppr_multi_distinct.q failure
> 
>
> Key: HIVE-12924
> URL: https://issues.apache.org/jira/browse/HIVE-12924
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-12924.1.patch, HIVE-12924.2.patch
>
>
> {code}
> EXPLAIN EXTENDED
> FROM srcpart src
> INSERT OVERWRITE TABLE dest1
> SELECT substr(src.key,1,1), count(DISTINCT substr(src.value,5)), 
> concat(substr(src.key,1,1),sum(substr(src.value,5))), sum(DISTINCT 
> substr(src.value, 5)), count(DISTINCT src.value)
> WHERE src.ds = '2008-04-08'
> GROUP BY substr(src.key,1,1)
> {code}
> Stack trace:
> {code}
> 2016-01-25T14:27:56,694 DEBUG [4e6a139e-a78c-4f61-bb10-57af2b0d4381 main[]]: 
> parse.TypeCheckCtx (TypeCheckCtx.java:setError(159)) - Setting error: [Line 
> 6:79 Expression not in GROUP BY key 'key'] from (tok_table_or_col src)
> java.lang.Exception
> at 
> org.apache.hadoop.hive.ql.parse.TypeCheckCtx.setError(TypeCheckCtx.java:159) 
> [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$ColumnExprProcessor.process(TypeCheckProcFactory.java:628)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:89)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:158)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:120)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:213)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:157)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genAllExprNodeDesc(SemanticAnalyzer.java:10512)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:10468)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genSelectLogicalPlan(CalcitePlanner.java:2920)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genLogicalPlan(CalcitePlanner.java:3053)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:874)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:832)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at org.apache.calcite.tools.Frameworks$1.apply(Frameworks.java:112) 
> [calcite-core-1.5.0.jar:1.5.0]
> at 
> org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:971)
>  [calcite-core-1.5.0.jar:1.5.0]
> at 
> org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:148) 
> [calcite-core-1.5.0.jar:1.5.0]
> at 
> org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:105) 
> [calcite-core-1.5.0.jar:1.5.0]
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedHiveOPDag(CalcitePlanner.java:677)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:264)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10100)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:229)
>  [hive-exec-2.1.0-SNAPSHOT.jar:2.

[jira] [Commented] (HIVE-12976) MetaStoreDirectSql doesn't batch IN lists in all cases

2016-02-04 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15133011#comment-15133011
 ] 

Sergey Shelukhin commented on HIVE-12976:
-

Failures are unrelated. [~gopalv] any further feedback? Thanks.

> MetaStoreDirectSql doesn't batch IN lists in all cases
> --
>
> Key: HIVE-12976
> URL: https://issues.apache.org/jira/browse/HIVE-12976
> Project: Hive
>  Issue Type: Bug
>Reporter: Gopal V
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12976.01.patch, HIVE-12976.02.patch, 
> HIVE-12976.patch
>
>
> That means that some RDBMS products with arbitrary limits on IN list size 
> cannot run these queries. I hope the HBase metastore comes soon and delivers 
> us from Oracle! For now, though, we have to fix this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12950) get rid of the NullScan emptyFile madness

2016-02-04 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132996#comment-15132996
 ] 

Sergey Shelukhin commented on HIVE-12950:
-

[~ashutoshc] [~mmccline] ping?

> get rid of the NullScan emptyFile madness
> -
>
> Key: HIVE-12950
> URL: https://issues.apache.org/jira/browse/HIVE-12950
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12950.01.patch, HIVE-12950.02.patch, 
> HIVE-12950.03.patch, HIVE-12950.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13003) remove the code to create emptyFile from Hive

2016-02-04 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13003:

Issue Type: Wish  (was: Improvement)

> remove the code to create emptyFile from Hive
> -
>
> Key: HIVE-13003
> URL: https://issues.apache.org/jira/browse/HIVE-13003
> Project: Hive
>  Issue Type: Wish
>Reporter: Sergey Shelukhin
>
> After HIVE-12950, it would be nice to see if this code is needed anywhere any 
> more.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12950) get rid of the NullScan emptyFile madness

2016-02-04 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-12950:

Summary: get rid of the NullScan emptyFile madness  (was: get rid of the 
NullScan emptyFile madness (part 1 - at least for Tez and LLAP))

> get rid of the NullScan emptyFile madness
> -
>
> Key: HIVE-12950
> URL: https://issues.apache.org/jira/browse/HIVE-12950
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12950.01.patch, HIVE-12950.02.patch, 
> HIVE-12950.03.patch, HIVE-12950.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12950) get rid of the NullScan emptyFile madness (part 1 - at least for Tez and LLAP)

2016-02-04 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-12950:

Attachment: HIVE-12950.03.patch

More .out file updates. One of the failures is caused by HIVE-13002.

> get rid of the NullScan emptyFile madness (part 1 - at least for Tez and LLAP)
> --
>
> Key: HIVE-12950
> URL: https://issues.apache.org/jira/browse/HIVE-12950
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12950.01.patch, HIVE-12950.02.patch, 
> HIVE-12950.03.patch, HIVE-12950.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11527) bypass HiveServer2 thrift interface for query results

2016-02-04 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132986#comment-15132986
 ] 

Vaibhav Gumashta commented on HIVE-11527:
-

[~tasanuma0829] Thanks for your patch. I'll also take a look today.

> bypass HiveServer2 thrift interface for query results
> -
>
> Key: HIVE-11527
> URL: https://issues.apache.org/jira/browse/HIVE-11527
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Sergey Shelukhin
>Assignee: Takanobu Asanuma
> Attachments: HIVE-11527.WIP.patch
>
>
> Right now, HS2 reads query results and returns them to the caller via its 
> thrift API.
> There should be an option for HS2 to return some pointer to results (an HDFS 
> link?) and for the user to read the results directly off HDFS inside the 
> cluster, or via something like WebHDFS outside the cluster.
> Review board link: https://reviews.apache.org/r/40867



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12892) Add global change versioning to permanent functions in metastore

2016-02-04 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-12892:

Attachment: HIVE-12892.05.patch

The new upgrade scripts were left out of the patch and erased :( Again, no 
logic change compared to the original ;)

> Add global change versioning to permanent functions in metastore
> 
>
> Key: HIVE-12892
> URL: https://issues.apache.org/jira/browse/HIVE-12892
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12892.01.patch, HIVE-12892.02.patch, 
> HIVE-12892.03.patch, HIVE-12892.04.patch, HIVE-12892.05.patch, 
> HIVE-12892.nogen.patch, HIVE-12892.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12892) Add global change versioning to permanent functions in metastore

2016-02-04 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-12892:

Attachment: HIVE-12892.05.patch

> Add global change versioning to permanent functions in metastore
> 
>
> Key: HIVE-12892
> URL: https://issues.apache.org/jira/browse/HIVE-12892
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12892.01.patch, HIVE-12892.02.patch, 
> HIVE-12892.03.patch, HIVE-12892.04.patch, HIVE-12892.05.patch, 
> HIVE-12892.nogen.patch, HIVE-12892.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12892) Add global change versioning to permanent functions in metastore

2016-02-04 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-12892:

Attachment: (was: HIVE-12892.05.patch)

> Add global change versioning to permanent functions in metastore
> 
>
> Key: HIVE-12892
> URL: https://issues.apache.org/jira/browse/HIVE-12892
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12892.01.patch, HIVE-12892.02.patch, 
> HIVE-12892.03.patch, HIVE-12892.04.patch, HIVE-12892.nogen.patch, 
> HIVE-12892.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12990) LLAP: ORC cache NPE without FileID support

2016-02-04 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-12990:

Attachment: HIVE-12990.01.patch

Fixed. It looks like the other places where long is used are safe; this was an 
omission in the constructor.

> LLAP: ORC cache NPE without FileID support
> --
>
> Key: HIVE-12990
> URL: https://issues.apache.org/jira/browse/HIVE-12990
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 2.1.0
>Reporter: Gopal V
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12990.01.patch, HIVE-12990.patch
>
>
> {code}
>OrcBatchKey stripeKey = hasFileId ? new OrcBatchKey(fileId, -1, 0) : null;
>...
>   if (hasFileId && metadataCache != null) {
> stripeKey.stripeIx = stripeIx;
> stripeMetadata = metadataCache.getStripeMetadata(stripeKey);
>   }
> ...
>   public void setStripeMetadata(OrcStripeMetadata m) {
> assert stripes != null;
> stripes[m.getStripeIx()] = m;
>   }
> {code}
> {code}
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.llap.io.metadata.OrcStripeMetadata.getStripeIx(OrcStripeMetadata.java:106)
> at 
> org.apache.hadoop.hive.llap.io.decode.OrcEncodedDataConsumer.setStripeMetadata(OrcEncodedDataConsumer.java:70)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.readStripesMetadata(OrcEncodedDataReader.java:685)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.performDataRead(OrcEncodedDataReader.java:283)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:215)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:212)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:212)
> at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:93)
> ... 5 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-1608) use sequencefile as the default for storing intermediate results

2016-02-04 Thread Chaoyu Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132819#comment-15132819
 ] 

Chaoyu Tang commented on HIVE-1608:
---

FYI, regenerating the baselines of some failed tests is already under way. 

> use sequencefile as the default for storing intermediate results
> 
>
> Key: HIVE-1608
> URL: https://issues.apache.org/jira/browse/HIVE-1608
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.7.0
>Reporter: Namit Jain
>Assignee: Brock Noland
> Attachments: HIVE-1608.1.patch, HIVE-1608.patch
>
>
> The only argument for having a text file for storing intermediate results 
> seems to be better debuggability.
> But, tailing a sequence file is possible, and it should be more space 
> efficient



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12441) Driver.acquireLocksAndOpenTxn() should only call recordValidTxns() when needed

2016-02-04 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-12441:
-
Target Version/s: 1.3.0, 2.1.0  (was: 1.3.0)

> Driver.acquireLocksAndOpenTxn() should only call recordValidTxns() when needed
> --
>
> Key: HIVE-12441
> URL: https://issues.apache.org/jira/browse/HIVE-12441
> Project: Hive
>  Issue Type: Bug
>  Components: CLI, Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Wei Zheng
> Attachments: HIVE-12441.1.patch
>
>
> recordValidTxns() is only needed if ACID tables are part of the query.  
> Otherwise it's just overhead.
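
A one-line sketch of the guard the description implies; isAcidQuery is a 
hypothetical predicate, not an existing helper:

{code}
// Sketch only: skip the valid-txn bookkeeping for non-ACID queries.
if (isAcidQuery) {     // hypothetical check: does the query touch ACID tables?
  recordValidTxns();   // otherwise this call is pure overhead
}
{code}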



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12441) Driver.acquireLocksAndOpenTxn() should only call recordValidTxns() when needed

2016-02-04 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-12441:
-
Attachment: HIVE-12441.1.patch

Patch 1, for testing.

> Driver.acquireLocksAndOpenTxn() should only call recordValidTxns() when needed
> --
>
> Key: HIVE-12441
> URL: https://issues.apache.org/jira/browse/HIVE-12441
> Project: Hive
>  Issue Type: Bug
>  Components: CLI, Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Wei Zheng
> Attachments: HIVE-12441.1.patch
>
>
> recordValidTxns() is only needed if ACID tables are part of the query.  
> Otherwise it's just overhead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-1608) use sequencefile as the default for storing intermediate results

2016-02-04 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132774#comment-15132774
 ] 

Aihua Xu commented on HIVE-1608:


It seems to make sense to switch to SequenceFile by default; it will save 
space. The newline characters in the intermediate text file are escaped, so we 
shouldn't have that issue anymore.

I will regenerate the baselines.

> use sequencefile as the default for storing intermediate results
> 
>
> Key: HIVE-1608
> URL: https://issues.apache.org/jira/browse/HIVE-1608
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.7.0
>Reporter: Namit Jain
>Assignee: Brock Noland
> Attachments: HIVE-1608.1.patch, HIVE-1608.patch
>
>
> The only argument for having a text file for storing intermediate results 
> seems to be better debuggability.
> But, tailing a sequence file is possible, and it should be more space 
> efficient



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12999) Tez: Vertex creation is slowed down when NN throttles IPCs

2016-02-04 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132768#comment-15132768
 ] 

Sergey Shelukhin commented on HIVE-12999:
-

+1

> Tez: Vertex creation is slowed down when NN throttles IPCs
> --
>
> Key: HIVE-12999
> URL: https://issues.apache.org/jira/browse/HIVE-12999
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: 1.2.0, 1.3.0, 2.0.0, 2.1.0
>Reporter: Gopal V
>Assignee: Gopal V
> Attachments: HIVE-12999.1.patch
>
>
> Tez vertex building has a decidedly slow path in the code, which is not 
> related to the DAG plan at all.
> The total number of RPC calls is not related to the total number of 
> operators, due to a bug in the DagUtils inner loops.
> {code}
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1877)
>   at 
> org.apache.hadoop.hive.ql.exec.Utilities.createTmpDirs(Utilities.java:3207)
>   at 
> org.apache.hadoop.hive.ql.exec.Utilities.createTmpDirs(Utilities.java:3170)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.DagUtils.createVertex(DagUtils.java:548)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.DagUtils.createVertex(DagUtils.java:1151)
>   at org.apache.hadoop.hive.ql.exec.tez.TezTask.build(TezTask.java:388)
>   at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:175)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12892) Add global change versioning to permanent functions in metastore

2016-02-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132715#comment-15132715
 ] 

Hive QA commented on HIVE-12892:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12785921/HIVE-12892.04.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 10036 tests 
executed
*Failed tests:*
{noformat}
TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hive.beeline.TestSchemaTool.testSchemaUpgrade
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6863/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6863/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6863/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12785921 - PreCommit-HIVE-TRUNK-Build

> Add global change versioning to permanent functions in metastore
> 
>
> Key: HIVE-12892
> URL: https://issues.apache.org/jira/browse/HIVE-12892
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12892.01.patch, HIVE-12892.02.patch, 
> HIVE-12892.03.patch, HIVE-12892.04.patch, HIVE-12892.nogen.patch, 
> HIVE-12892.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12963) LIMIT statement with SORT BY creates additional MR job with hardcoded only one reducer

2016-02-04 Thread Alina Abramova (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alina Abramova updated HIVE-12963:
--
Attachment: HIVE-12963.3.patch

> LIMIT statement with SORT BY creates additional MR job with hardcoded only 
> one reducer
> --
>
> Key: HIVE-12963
> URL: https://issues.apache.org/jira/browse/HIVE-12963
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.0.0, 1.2.1, 0.13
>Reporter: Alina Abramova
>Assignee: Alina Abramova
> Attachments: HIVE-12963.1.patch, HIVE-12963.2.patch, 
> HIVE-12963.3.patch
>
>
> I execute the query:
> hive> select age from test1 sort by age.age  limit 10;  
> Total jobs = 2
> Launching Job 1 out of 2
> Number of reduce tasks not specified. Estimated from input data size: 1
> Launching Job 2 out of 2
> Number of reduce tasks determined at compile time: 1
> When there is a large number of rows, the last stage of the job takes a 
> long time. I think we could either let the user choose the number of 
> reducers for the last job or avoid the extra MR job altogether.
> I observed the same behavior with this query:
> hive> create table new_test as select age from test1 group by age.age  limit 
> 10;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12885) LDAP Authenticator improvements

2016-02-04 Thread Chaoyu Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132555#comment-15132555
 ] 

Chaoyu Tang commented on HIVE-12885:


Chatted with Naveen offline in detail about the implementation. It seems that 
he has covered all the cases we have run into so far and addressed the known 
backward-compatibility issues.
+1

> LDAP Authenticator improvements
> ---
>
> Key: HIVE-12885
> URL: https://issues.apache.org/jira/browse/HIVE-12885
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Attachments: HIVE-12885.2.patch, HIVE-12885.3.patch, HIVE-12885.patch
>
>
> Currently Hive's LDAP Atn provider assumes certain defaults to keep its 
> configuration simple. 
> 1) One of the assumptions is the presence of a "distinguishedName" 
> attribute. In certain non-standard LDAP implementations, this attribute may 
> not be available. Since getNameInNamespace() returns the same value, LDAP 
> searches should be based on that API instead of the attribute.
> 2) It also assumes that the "user" value being passed in will be able to 
> bind to LDAP. However, certain LDAP implementations, by default, only allow 
> the full DN to be used; short user names are not permitted. We need to 
> support short names too when the Hive configuration only has "BaseDN" 
> specified (not userDNPatterns). So instead of hard-coding "uid" or "CN" as 
> keys for the short usernames, it is probably better to make this a 
> configurable parameter.
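A minimal sketch of point 2, assuming plain JNDI and a configurable attribute key; the guidKey, baseDN, and filter shape here are illustrative, not the actual patch:

{code}
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class ShortNameResolverSketch {
  // Resolve a short user name to a full DN using a configurable key
  // ("uid", "CN", ...) instead of a hard-coded one.
  static String findUserDn(DirContext ctx, String baseDN, String guidKey,
      String shortName) throws NamingException {
    SearchControls controls = new SearchControls();
    controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
    String filter = "(&(objectClass=person)(" + guidKey + "=" + shortName + "))";
    NamingEnumeration<SearchResult> results = ctx.search(baseDN, filter, controls);
    // getNameInNamespace() yields the full DN even when the server exposes
    // no "distinguishedName" attribute, which is point 1 above.
    return results.hasMore() ? results.next().getNameInNamespace() : null;
  }
}
{code}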



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9534) incorrect result set for query that projects a windowed aggregate

2016-02-04 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-9534:
---
Attachment: HIVE-9534.2.patch

> incorrect result set for query that projects a windowed aggregate
> -
>
> Key: HIVE-9534
> URL: https://issues.apache.org/jira/browse/HIVE-9534
> Project: Hive
>  Issue Type: Bug
>  Components: PTF-Windowing
>Reporter: N Campbell
>Assignee: Aihua Xu
> Attachments: HIVE-9534.1.patch, HIVE-9534.2.patch
>
>
> Result set returned by Hive has one row instead of 5
> {code}
> select avg(distinct tsint.csint) over () from tsint 
> create table  if not exists TSINT (RNUM int , CSINT smallint)
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
>  STORED AS TEXTFILE;
> 0|\N
> 1|-1
> 2|0
> 3|1
> 4|10
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9534) incorrect result set for query that projects a windowed aggregate

2016-02-04 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132527#comment-15132527
 ] 

Aihua Xu commented on HIVE-9534:



Code review: https://reviews.apache.org/r/43192/. Somehow I can't link to that.

> incorrect result set for query that projects a windowed aggregate
> -
>
> Key: HIVE-9534
> URL: https://issues.apache.org/jira/browse/HIVE-9534
> Project: Hive
>  Issue Type: Bug
>  Components: PTF-Windowing
>Reporter: N Campbell
>Assignee: Aihua Xu
> Attachments: HIVE-9534.1.patch
>
>
> Result set returned by Hive has one row instead of 5
> {code}
> select avg(distinct tsint.csint) over () from tsint 
> create table  if not exists TSINT (RNUM int , CSINT smallint)
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
>  STORED AS TEXTFILE;
> 0|\N
> 1|-1
> 2|0
> 3|1
> 4|10
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9534) incorrect result set for query that projects a windowed aggregate

2016-02-04 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-9534:
---
Attachment: HIVE-9534.1.patch

> incorrect result set for query that projects a windowed aggregate
> -
>
> Key: HIVE-9534
> URL: https://issues.apache.org/jira/browse/HIVE-9534
> Project: Hive
>  Issue Type: Bug
>  Components: PTF-Windowing
>Reporter: N Campbell
>Assignee: Aihua Xu
> Attachments: HIVE-9534.1.patch
>
>
> Result set returned by Hive has one row instead of 5
> {code}
> select avg(distinct tsint.csint) over () from tsint 
> create table  if not exists TSINT (RNUM int , CSINT smallint)
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
>  STORED AS TEXTFILE;
> 0|\N
> 1|-1
> 2|0
> 3|1
> 4|10
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12987) Add metrics for HS2 active users and SQL operations

2016-02-04 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HIVE-12987:
---
Attachment: HIVE-12987.2.patch

New patch 2, rebased to the latest trunk.

> Add metrics for HS2 active users and SQL operations
> ---
>
> Key: HIVE-12987
> URL: https://issues.apache.org/jira/browse/HIVE-12987
> Project: Hive
>  Issue Type: Task
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: HIVE-12987.1.patch, HIVE-12987.2.patch, 
> HIVE-12987.2.patch
>
>
> HIVE-12271 added metrics for all HS2 operations. Sometimes, users are also 
> interested in metrics just for SQL operations.
> It is useful to track active user count as well.
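A hedged sketch of what counting active SQL operations could look like with Hive's Metrics API; the counter name is made up for illustration and is not necessarily the name the patch uses:

{code}
import org.apache.hadoop.hive.common.metrics.common.Metrics;
import org.apache.hadoop.hive.common.metrics.common.MetricsFactory;

public class SqlOpMetricsSketch {
  // Bump a counter when a SQL operation starts, drop it when it ends.
  static void onSqlOperationStart() {
    Metrics metrics = MetricsFactory.getInstance();
    if (metrics != null) {
      metrics.incrementCounter("hs2_active_sql_operations"); // illustrative name
    }
  }

  static void onSqlOperationEnd() {
    Metrics metrics = MetricsFactory.getInstance();
    if (metrics != null) {
      metrics.decrementCounter("hs2_active_sql_operations");
    }
  }
}
{code}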



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12958) Make embedded Jetty server more configurable

2016-02-04 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-12958:

Release Note: 
A new property 'templeton.jetty.configuration' can be set in the WebHCat 
configuration file, pointing to an XML file from which the embedded Jetty 
server is configured.

<property>
  <name>templeton.jetty.configuration</name>
  <value></value>
  <description>The embedded jetty configuration file.</description>
</property>

We can follow 
https://wiki.eclipse.org/Jetty/Tutorial/Embedding_Jetty#Configuring_a_File_Server_with_XML
 to write the XML configuration file that updates the settings.

Here is an example:

<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd">
[the rest of the Jetty XML example was stripped in archiving; it configured the Server object and set a 65535-byte buffer size]

  was:
A new property 'templeton.jetty.configuration' can be set in the WebHCat 
configuration file, pointing to an XML file from which the embedded Jetty 
server is configured.

{noformat}
<property>
  <name>templeton.jetty.configuration</name>
  <value></value>
  <description>The embedded jetty configuration file.</description>
</property>
{noformat}

We can follow 
https://wiki.eclipse.org/Jetty/Tutorial/Embedding_Jetty#Configuring_a_File_Server_with_XML
 to write the XML configuration file that updates the settings.

Here is an example:

<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd">
[the rest of the Jetty XML example was stripped in archiving; it configured the Server object and set a 65535-byte buffer size]

> Make embedded Jetty server more configurable
> 
>
> Key: HIVE-12958
> URL: https://issues.apache.org/jira/browse/HIVE-12958
> Project: Hive
>  Issue Type: Improvement
>  Components: HCatalog
>Affects Versions: 2.1.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>  Labels: TODOC2.1
> Fix For: 2.1.0
>
> Attachments: HIVE-12958.1.patch, HIVE-12958.2.patch, 
> HIVE-12958.3.patch
>
>
> Currently you can't configure embedded Jetty within HCatalog. I propose to 
> support an XML configuration file, which Jetty already supports. A new 
> WebHCat property will be added to specify the configuration file location. 
> If the file doesn't exist, Hive falls back to the old behavior; if it does, 
> the configuration is loaded to configure the embedded Jetty server. 
> Some default parameters for Jetty may not be sufficient in some cases, such 
> as request/response buffer size. This improvement allows such changes to be made.
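For reference, a minimal sketch, assuming Jetty 9-era APIs, of how an embedded server can apply the kind of XML file described above; the method is illustrative and not the HIVE-12958 implementation itself:

{code}
import java.io.File;
import java.io.FileInputStream;

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.xml.XmlConfiguration;

public class JettyXmlSketch {
  // If the configured XML file exists, let Jetty's XmlConfiguration apply
  // its <Set>/<Call> elements to the Server; otherwise keep the defaults.
  static Server createServer(String xmlPath) throws Exception {
    Server server = new Server();
    if (xmlPath != null && new File(xmlPath).exists()) {
      try (FileInputStream in = new FileInputStream(xmlPath)) {
        new XmlConfiguration(in).configure(server);
      }
    }
    return server;
  }
}
{code}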



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12958) Make embedded Jetty server more configurable

2016-02-04 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-12958:

Release Note: 
A new property 'templeton.jetty.configuration' can be set in the WebHCat 
configuration file, pointing to an XML file from which the embedded Jetty 
server is configured.

{noformat}
<property>
  <name>templeton.jetty.configuration</name>
  <value></value>
  <description>The embedded jetty configuration file.</description>
</property>
{noformat}

We can follow 
https://wiki.eclipse.org/Jetty/Tutorial/Embedding_Jetty#Configuring_a_File_Server_with_XML
 to write the XML configuration file that updates the settings.

Here is an example:

<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd">
[the rest of the Jetty XML example was stripped in archiving; it configured the Server object and set a 65535-byte buffer size]

  was:
We can follow 
https://wiki.eclipse.org/Jetty/Tutorial/Embedding_Jetty#Configuring_a_File_Server_with_XML
 to write the XML configuration file that updates the settings.

Here is an example:

<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd">
[the rest of the Jetty XML example was stripped in archiving; it configured the Server object and set a 65535-byte buffer size]

> Make embedded Jetty server more configurable
> 
>
> Key: HIVE-12958
> URL: https://issues.apache.org/jira/browse/HIVE-12958
> Project: Hive
>  Issue Type: Improvement
>  Components: HCatalog
>Affects Versions: 2.1.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>  Labels: TODOC2.1
> Fix For: 2.1.0
>
> Attachments: HIVE-12958.1.patch, HIVE-12958.2.patch, 
> HIVE-12958.3.patch
>
>
> Currently you can't configure embedded Jetty within HCatalog. I propose to 
> support an XML configuration file, which Jetty already supports. A new 
> WebHCat property will be added to specify the configuration file location. 
> If the file doesn't exist, Hive falls back to the old behavior; if it does, 
> the configuration is loaded to configure the embedded Jetty server. 
> Some default parameters for Jetty may not be sufficient in some cases, such 
> as request/response buffer size. This improvement allows such changes to be made.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12958) Make embedded Jetty server more configurable

2016-02-04 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-12958:

Description: 
Currently you can't configure embedded Jetty within HCatalog. I propose to 
support an XML configuration file, which Jetty already supports. A new 
WebHCat property will be added to specify the configuration file location. 
If the file doesn't exist, Hive falls back to the old behavior; if it does, 
the configuration is loaded to configure the embedded Jetty server. 

Some default parameters for Jetty may not be sufficient in some cases, such as 
request/response buffer size. This improvement allows such changes to be made.

  was:
Currently you can't configure embedded jetty within HCatalog. Propose to 
support adding an xml configuration which Jetty already supports. A new 
Web-hcat configuration will be added to specify the configure file location. If 
the file doesn't exist, falls back to old behavior. If it exists, load such 
configuration to configure Jetty server. 

Some default parameters may not be sufficient such as request/response buffer 
size. This improvement allows to make such configuration change.


> Make embedded Jetty server more configurable
> 
>
> Key: HIVE-12958
> URL: https://issues.apache.org/jira/browse/HIVE-12958
> Project: Hive
>  Issue Type: Improvement
>  Components: HCatalog
>Affects Versions: 2.1.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-12958.1.patch, HIVE-12958.2.patch, 
> HIVE-12958.3.patch
>
>
> Currently you can't configure embedded Jetty within HCatalog. I propose to 
> support an XML configuration file, which Jetty already supports. A new 
> WebHCat property will be added to specify the configuration file location. 
> If the file doesn't exist, Hive falls back to the old behavior; if it does, 
> the configuration is loaded to configure the embedded Jetty server. 
> Some default parameters for Jetty may not be sufficient in some cases, such 
> as request/response buffer size. This improvement allows such changes to be made.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12958) Make embedded Jetty server more configurable

2016-02-04 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-12958:

Labels: TODOC2.1  (was: )

> Make embedded Jetty server more configurable
> 
>
> Key: HIVE-12958
> URL: https://issues.apache.org/jira/browse/HIVE-12958
> Project: Hive
>  Issue Type: Improvement
>  Components: HCatalog
>Affects Versions: 2.1.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>  Labels: TODOC2.1
> Fix For: 2.1.0
>
> Attachments: HIVE-12958.1.patch, HIVE-12958.2.patch, 
> HIVE-12958.3.patch
>
>
> Currently you can't configure embedded Jetty within HCatalog. I propose to 
> support an XML configuration file, which Jetty already supports. A new 
> WebHCat property will be added to specify the configuration file location. 
> If the file doesn't exist, Hive falls back to the old behavior; if it does, 
> the configuration is loaded to configure the embedded Jetty server. 
> Some default parameters for Jetty may not be sufficient in some cases, such 
> as request/response buffer size. This improvement allows such changes to be made.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9534) incorrect result set for query that projects a windowed aggregate

2016-02-04 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-9534:
---
Component/s: (was: SQL)
 PTF-Windowing

> incorrect result set for query that projects a windowed aggregate
> -
>
> Key: HIVE-9534
> URL: https://issues.apache.org/jira/browse/HIVE-9534
> Project: Hive
>  Issue Type: Bug
>  Components: PTF-Windowing
>Reporter: N Campbell
>Assignee: Aihua Xu
>
> Result set returned by Hive has one row instead of 5
> {code}
> select avg(distinct tsint.csint) over () from tsint 
> create table  if not exists TSINT (RNUM int , CSINT smallint)
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
>  STORED AS TEXTFILE;
> 0|\N
> 1|-1
> 2|0
> 3|1
> 4|10
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12958) Make embedded Jetty server more configurable

2016-02-04 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132486#comment-15132486
 ] 

Aihua Xu commented on HIVE-12958:
-

Those test failures are not related to the patch.

> Make embedded Jetty server more configurable
> 
>
> Key: HIVE-12958
> URL: https://issues.apache.org/jira/browse/HIVE-12958
> Project: Hive
>  Issue Type: Improvement
>  Components: HCatalog
>Affects Versions: 2.1.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-12958.1.patch, HIVE-12958.2.patch, 
> HIVE-12958.3.patch
>
>
> Currently you can't configure embedded Jetty within HCatalog. I propose to 
> support an XML configuration file, which Jetty already supports. A new 
> WebHCat property will be added to specify the configuration file location. 
> If the file doesn't exist, Hive falls back to the old behavior; if it does, 
> the configuration is loaded to configure the embedded Jetty server. 
> Some default parameters for Jetty may not be sufficient in some cases, such 
> as request/response buffer size. This improvement allows such changes to be made.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12958) Make embedded Jetty server more configurable

2016-02-04 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-12958:

Description: 
Currently you can't configure embedded Jetty within HCatalog. I propose to 
support adding an XML configuration, which Jetty already supports. A new 
WebHCat configuration will be added to specify the configuration file location. 
If the file doesn't exist, Hive falls back to the old behavior; if it does, 
such configuration is loaded to configure the Jetty server. 

Some default parameters may not be sufficient, such as request/response buffer 
size. This improvement allows making such configuration changes.

  was:
Currently you can't configure embedded jetty within HCatalog. Propose to 
support add an xml configuration which Jetty already supports. A new Web-hcat 
configuration will be added to specify the configure file location. If the file 
doesn't exist, falls back to old behavior. If it exists, load such 
configuration to configure Jetty server. 

Some default parameters may not be sufficient such as request/response buffer 
size. This improvement allows to make such configuration change.


> Make embedded Jetty server more configurable
> 
>
> Key: HIVE-12958
> URL: https://issues.apache.org/jira/browse/HIVE-12958
> Project: Hive
>  Issue Type: Improvement
>  Components: HCatalog
>Affects Versions: 2.1.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-12958.1.patch, HIVE-12958.2.patch, 
> HIVE-12958.3.patch
>
>
> Currently you can't configure embedded Jetty within HCatalog. I propose to 
> support adding an XML configuration, which Jetty already supports. A new 
> WebHCat configuration will be added to specify the configuration file location. 
> If the file doesn't exist, Hive falls back to the old behavior; if it does, 
> such configuration is loaded to configure the Jetty server. 
> Some default parameters may not be sufficient, such as request/response buffer 
> size. This improvement allows making such configuration changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12941) Unexpected result when using MIN() on struct with NULL in first field

2016-02-04 Thread Yongzhi Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132450#comment-15132450
 ] 

Yongzhi Chen commented on HIVE-12941:
-

From the behavior of primitive types, null has the lowest priority when 
computing the min and max values: we try to get the min or max from the 
non-null values first, and only return null when all the values are null. In 
effect, the algorithm treats null as the largest value for MIN and the 
smallest value for MAX.
But the original MIN code does not apply this rule recursively to other 
complex types, for example structs. It can handle a null struct, but not a 
struct with null fields ({null}). MAX works because, by default, null is 
treated as the minimum value by the compare function. The attached patch 
applies the rule to MIN as well. 
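To make the rule concrete, here is a minimal sketch (plain Java, not Hive's actual GenericUDAFMin code) of a comparison where null is treated as the largest value for MIN and the rule recurses into struct fields, modeled here as lists:

{code}
import java.util.List;

public class NullAwareMinSketch {
  // A negative result means a orders before b under MIN's ordering,
  // where null compares as the largest possible value.
  static int compareForMin(Object a, Object b) {
    if (a == null && b == null) return 0;
    if (a == null) return 1;    // null loses to any non-null value for MIN
    if (b == null) return -1;
    if (a instanceof List && b instanceof List) {  // struct as a field list
      List<?> la = (List<?>) a, lb = (List<?>) b;
      int n = Math.min(la.size(), lb.size());
      for (int i = 0; i < n; i++) {
        int c = compareForMin(la.get(i), lb.get(i)); // recurse into fields
        if (c != 0) return c;
      }
      return Integer.compare(la.size(), lb.size());
    }
    @SuppressWarnings("unchecked")
    Comparable<Object> ca = (Comparable<Object>) a;
    return ca.compareTo(b);
  }
}
{code}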


> Unexpected result when using MIN() on struct with NULL in first field
> -
>
> Key: HIVE-12941
> URL: https://issues.apache.org/jira/browse/HIVE-12941
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.1.0
>Reporter: Jan-Erik Hedbom
>Assignee: Yongzhi Chen
> Attachments: HIVE-12941.1.patch
>
>
> Using MIN() on a struct whose first field is NULL in some row yields NULL as the result.
> Example:
> select min(a) FROM (select 1 as a union all select 2 as a union all select 
> cast(null as int) as a) tmp;
> OK
> _c0
> 1
> As expected. But if we wrap it in a struct:
> select min(a) FROM (select named_struct("field",1) as a union all select 
> named_struct("field",2) as a union all select named_struct("field",cast(null 
> as int)) as a) tmp;
> OK
> _c0
> NULL
> Using MAX() works as expected for structs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12941) Unexpected result when using MIN() on struct with NULL in first field

2016-02-04 Thread Yongzhi Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongzhi Chen updated HIVE-12941:

Attachment: HIVE-12941.1.patch

> Unexpected result when using MIN() on struct with NULL in first field
> -
>
> Key: HIVE-12941
> URL: https://issues.apache.org/jira/browse/HIVE-12941
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.1.0
>Reporter: Jan-Erik Hedbom
>Assignee: Yongzhi Chen
> Attachments: HIVE-12941.1.patch
>
>
> Using MIN() on a struct whose first field is NULL in some row yields NULL as the result.
> Example:
> select min(a) FROM (select 1 as a union all select 2 as a union all select 
> cast(null as int) as a) tmp;
> OK
> _c0
> 1
> As expected. But if we wrap it in a struct:
> select min(a) FROM (select named_struct("field",1) as a union all select 
> named_struct("field",2) as a union all select named_struct("field",cast(null 
> as int)) as a) tmp;
> OK
> _c0
> NULL
> Using MAX() works as expected for structs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12984) spark tgz-s need to be deleted on mvn clean, as are other binary artifacts in the tree

2016-02-04 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132423#comment-15132423
 ] 

Xuefu Zhang commented on HIVE-12984:


I'm quite open to that. However, no such tarball is available in the public 
Maven repos. There might be one somewhere from the Spark project, but Hive 
needs a version (for tests) that doesn't include Hive artifacts.

> spark tgz-s need to be deleted on mvn clean, as are other binary artifacts in 
> the tree
> --
>
> Key: HIVE-12984
> URL: https://issues.apache.org/jira/browse/HIVE-12984
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12984.patch
>
>
> Currently, tgz files are downloaded and kept around forever. I noticed when 
> packaging the release (apparently the excludes in packaging files also didn't 
> work) that the initial src tar.gz was huge; regardless of that, I had 6 
> versions of Spark (1.2 thru 1.6 with one dot version) sitting there, and also 
> in every clone of Hive that I have.
> These should be switched to use normal means of artifact distribution (I 
> think I already filed a jira but I cannot find it now); meanwhile making sure 
> that mvn clean would remove them.
> I realize it could create some pain when running tests repeatedly on dev 
> machine unless "clean" is omitted from rebuilds; that is somewhat intentional 
> - it should be a good incentive to switch to maven for dependency management 
> instead of a bash script ;)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12205) Spark: unify spark statistics aggregation between local and remote spark client

2016-02-04 Thread Chinna Rao Lalam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinna Rao Lalam updated HIVE-12205:

Attachment: HIVE-12205.2.patch

Hi [~chengxiang li],

Thanks for the review. I have reworked the patch and uploaded a draft; it is 
not tested yet. Could you take a look and confirm this is the approach you 
are suggesting?

I have created a ticket on RB: [https://reviews.apache.org/r/43188/]

> Spark: unify spark statistics aggregation between local and remote spark 
> client
> --
>
> Key: HIVE-12205
> URL: https://issues.apache.org/jira/browse/HIVE-12205
> Project: Hive
>  Issue Type: Task
>  Components: Spark
>Affects Versions: 1.1.0
>Reporter: Xuefu Zhang
>Assignee: Chinna Rao Lalam
> Attachments: HIVE-12205.1.patch, HIVE-12205.2.patch
>
>
> In the classes {{LocalSparkJobStatus}} and {{RemoteSparkJobStatus}}, Spark 
> statistics aggregation is done similarly but in different code paths. Ideally, 
> we should have a unified approach to simplify maintenance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12976) MetaStoreDirectSql doesn't batch IN lists in all cases

2016-02-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132229#comment-15132229
 ] 

Hive QA commented on HIVE-12976:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12786101/HIVE-12976.02.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 10036 tests 
executed
*Failed tests:*
{noformat}
TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hadoop.hive.ql.TestTxnCommands2.testInitiatorWithMultipleFailedCompactions
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6862/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6862/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6862/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12786101 - PreCommit-HIVE-TRUNK-Build

> MetaStoreDirectSql doesn't batch IN lists in all cases
> --
>
> Key: HIVE-12976
> URL: https://issues.apache.org/jira/browse/HIVE-12976
> Project: Hive
>  Issue Type: Bug
>Reporter: Gopal V
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12976.01.patch, HIVE-12976.02.patch, 
> HIVE-12976.patch
>
>
> That means that some RDBMS products with arbitrary limits cannot run these 
> queries. I hope HBase metastore comes soon and delivers us from Oracle! For 
> now, though, we have to fix this.
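Independent of MetaStoreDirectSql's internals, a small sketch of the batching idea: split one long IN (...) list into several queries so no single statement exceeds a backend limit (Oracle, for instance, caps IN lists at 1000 elements). The baseSql shape is illustrative:

{code}
import java.util.ArrayList;
import java.util.List;

public class InClauseBatcher {
  // Produce one query string per chunk of at most batchSize ids.
  static List<String> buildBatchedQueries(String baseSql, List<Long> ids,
      int batchSize) {
    List<String> queries = new ArrayList<>();
    for (int i = 0; i < ids.size(); i += batchSize) {
      List<Long> chunk = ids.subList(i, Math.min(i + batchSize, ids.size()));
      StringBuilder in = new StringBuilder();
      for (Long id : chunk) {
        if (in.length() > 0) in.append(',');
        in.append(id);
      }
      queries.add(baseSql + " IN (" + in + ")");
    }
    return queries;
  }
}
{code}

The caller then unions the per-chunk results, which keeps every individual statement within the backend's limit.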



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11527) bypass HiveServer2 thrift interface for query results

2016-02-04 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132194#comment-15132194
 ] 

Takanobu Asanuma commented on HIVE-11527:
-

Thanks. I'd like to get Jing's advice. I also left some questions on RB about 
the result files.

> bypass HiveServer2 thrift interface for query results
> -
>
> Key: HIVE-11527
> URL: https://issues.apache.org/jira/browse/HIVE-11527
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Sergey Shelukhin
>Assignee: Takanobu Asanuma
> Attachments: HIVE-11527.WIP.patch
>
>
> Right now, HS2 reads query results and returns them to the caller via its 
> Thrift API.
> There should be an option for HS2 to return some pointer to the results (an 
> HDFS link?) and for the user to read the results directly off HDFS inside the 
> cluster, or via something like WebHDFS outside the cluster.
> Review board link: https://reviews.apache.org/r/40867



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12244) Refactoring code for avoiding of comparison of Strings and do comparison on Path

2016-02-04 Thread Alina Abramova (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alina Abramova updated HIVE-12244:
--
Attachment: HIVE-12244.5.patch

> Refactoring code for avoiding of comparison of Strings and do comparison on 
> Path
> 
>
> Key: HIVE-12244
> URL: https://issues.apache.org/jira/browse/HIVE-12244
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 0.13.0, 0.14.0, 1.0.0, 1.2.1
>Reporter: Alina Abramova
>Assignee: Alina Abramova
>Priority: Minor
>  Labels: patch
> Fix For: 1.2.1
>
> Attachments: HIVE-12244.1.patch, HIVE-12244.2.patch, 
> HIVE-12244.3.patch, HIVE-12244.4.patch, HIVE-12244.5.patch
>
>
> In Hive, a String is often used to represent a path, and this causes 
> recurring issues.
> We have to compare them with equals(), but comparing Strings is often not 
> correct in terms of comparing paths.
> I think if we use Path from org.apache.hadoop.fs we will avoid such problems 
> in the future.
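A tiny sketch of the pitfall, assuming Hadoop's Path is on the classpath; the locations are made up:

{code}
import org.apache.hadoop.fs.Path;

public class PathCompareSketch {
  public static void main(String[] args) {
    String s1 = "hdfs://nn:8020/warehouse/t1/";
    String s2 = "hdfs://nn:8020/warehouse/t1";
    // String comparison is sensitive to the trailing slash...
    System.out.println(s1.equals(s2));                      // false
    // ...while Path normalizes it away when parsing.
    System.out.println(new Path(s1).equals(new Path(s2)));  // true
  }
}
{code}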



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-5326) Operators && and || do not work

2016-02-04 Thread Maria Roy (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132092#comment-15132092
 ] 

Maria Roy commented on HIVE-5326:
-

I have analysed this issue and identified the required code changes as below 
(see the sketch after this list):
1) Modify 
/hive/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java to add 
the code below:
  a) system.registerGenericUDF("&&", GenericUDFOPAnd.class);
  b) add the operator && to the list HIVE_OPERATORS
2) Modify 
/hive/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/SqlFunctionConverter.java
 to register a duplicate function for the && operator.
I would appreciate additional input on this fix.
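A sketch of step 1a under stated assumptions: Registry is Hive's org.apache.hadoop.hive.ql.exec.Registry (FunctionRegistry builds its system registry similarly), and the alias registrations mirror how "and"/"or" are already registered. Illustrative only, not the final patch:

{code}
import org.apache.hadoop.hive.ql.exec.Registry;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPAnd;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPOr;

public class OperatorAliasSketch {
  public static void main(String[] args) {
    Registry registry = new Registry(true); // native/system registry, assumed ctor
    registry.registerGenericUDF("and", GenericUDFOPAnd.class);
    registry.registerGenericUDF("&&", GenericUDFOPAnd.class); // new alias
    registry.registerGenericUDF("or", GenericUDFOPOr.class);
    registry.registerGenericUDF("||", GenericUDFOPOr.class);  // new alias
  }
}
{code}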

> Operators && and || do not work
> ---
>
> Key: HIVE-5326
> URL: https://issues.apache.org/jira/browse/HIVE-5326
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Amareshwari Sriramadasu
>
> Though the documentation 
> https://cwiki.apache.org/Hive/languagemanual-udf.html says they are the same 
> as AND and OR, they do not even get parsed. Users get parse errors when they 
> are used. 
> hive> select key from src where key=a || key =b;
> FAILED: Parse Error: line 1:33 cannot recognize input near '|' 'key' '=' in 
> expression specification
> hive> select key from src where key=a && key =b;
> FAILED: Parse Error: line 1:33 cannot recognize input near '&' 'key' '=' in 
> expression specification



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12974) HiveServer2: Thrift SASL related exception when using custom PasswdAuthenticationProvider

2016-02-04 Thread Francisco Romero Bueno (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132034#comment-15132034
 ] 

Francisco Romero Bueno commented on HIVE-12974:
---

More research about the connections sent from HiveServer2 to HiveServer2: the 
data packets always contain the same 5 bytes (hexadecimal): 22 41 30 30 31.

Any ideas about these connections?

> HiveServer2: Thrift SASL related exception when using custom 
> PasswdAuthenticationProvider
> -
>
> Key: HIVE-12974
> URL: https://issues.apache.org/jira/browse/HIVE-12974
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, HiveServer2
>Affects Versions: 0.13.0
> Environment: HDP-2.1
>Reporter: Francisco Romero Bueno
>Priority: Critical
>
> I've created a custom implementation of the `PasswdAuthenticationProvider` 
> interface, based on OAuth2. I think the code is irrelevant to the problem 
> I'm experiencing; nevertheless, it can be found [here | 
> https://github.com/telefonicaid/fiware-cosmos/blob/master/cosmos-hive-auth-provider/src/main/java/com/telefonica/iot/cosmos/hive/authprovider/OAuth2AuthenticationProviderImpl.java].
> I've configured `hive-site.xml` with the following properties:
> {code}
> <property>
>    <name>hive.server2.authentication</name>
>    <value>CUSTOM</value>
> </property>
> <property>
>    <name>hive.server2.custom.authentication.class</name>
>    <value>
> com.telefonica.iot.cosmos.hive.authprovider.OAuth2AuthenticationProviderImpl
>    </value>
> </property>
> {code}
> Then I've restarted the Hive service and I've connected a JDBC based remote 
> client with success. This is an example of a successful run found in 
> `/var/log/hive/hiveserver2.log`:
> {code}
> 2016-02-01 11:52:44,515 INFO  [pool-5-thread-5]: 
> authprovider.HttpClientFactory (HttpClientFactory.java:(66)) - Setting 
> max total connections (500)
> 2016-02-01 11:52:44,515 INFO  [pool-5-thread-5]: 
> authprovider.HttpClientFactory (HttpClientFactory.java:(67)) - Setting 
> default max connections per route (100)
> 2016-02-01 11:52:44,799 INFO  [pool-5-thread-5]: 
> authprovider.HttpClientFactory 
> (OAuth2AuthenticationProviderImpl.java:Authenticate(65)) - Doing request: GET 
> https://account.lab.fiware.org/user?access_token=xx
>  HTTP/1.1
> 2016-02-01 11:52:44,800 INFO  [pool-5-thread-5]: 
> authprovider.HttpClientFactory 
> (OAuth2AuthenticationProviderImpl.java:Authenticate(76)) - Response received: 
> {"organizations": [], "displayName": "frb", "roles": [{"name": "provider", 
> "id": "106"}], "app_id": "", "email": 
> "f...@tid.es", "id": "frb"}
> 2016-02-01 11:52:44,801 INFO  [pool-5-thread-5]: 
> authprovider.HttpClientFactory 
> (OAuth2AuthenticationProviderImpl.java:Authenticate(104)) - User frb 
> authenticated
> 2016-02-01 11:52:44,868 INFO  [pool-5-thread-5]: thrift.ThriftCLIService 
> (ThriftCLIService.java:OpenSession(188)) - Client protocol version: 
> HIVE_CLI_SERVICE_PROTOCOL_V6
> 2016-02-01 11:52:44,871 INFO  [pool-5-thread-5]: session.SessionState 
> (SessionState.java:start(358)) - No Tez session required at this point. 
> hive.execution.engine=mr.
> 2016-02-01 11:52:44,873 INFO  [pool-5-thread-5]: session.SessionState 
> (SessionState.java:start(358)) - No Tez session required at this point. 
> hive.execution.engine=mr.
> {code}
> The problem is after that the following error appears in a recurrent manner:
> {code}
> 2016-02-01 11:52:48,227 ERROR [pool-5-thread-4]: server.TThreadPoolServer 
> (TThreadPoolServer.java:run(215)) - Error occurred during processing of 
> message.
> java.lang.RuntimeException: 
> org.apache.thrift.transport.TTransportException
>   at 
> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
>   at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:189)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.thrift.transport.TTransportException
>   at 
> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
>   at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
>   at 
> org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
>   at 
> org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
>   at 
> org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
>   at 
> org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport
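For context on the hook being used above, a minimal sketch of a PasswdAuthenticationProvider implementation; the validation is stubbed out and is not the reporter's actual OAuth2 code:

{code}
import javax.security.sasl.AuthenticationException;

import org.apache.hive.service.auth.PasswdAuthenticationProvider;

public class OAuth2AuthSketch implements PasswdAuthenticationProvider {
  @Override
  public void Authenticate(String user, String password)
      throws AuthenticationException {
    // In the linked provider, "password" carries an OAuth2 access token that
    // is validated against the identity endpoint; here we only stub it.
    if (user == null || password == null || password.isEmpty()) {
      throw new AuthenticationException("Invalid credentials for " + user);
    }
  }
}
{code}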

[jira] [Commented] (HIVE-12987) Add metrics for HS2 active users and SQL operations

2016-02-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132003#comment-15132003
 ] 

Hive QA commented on HIVE-12987:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12786043/HIVE-12987.2.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6861/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6861/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6861/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-6861/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 26268de HIVE-12908 : Improve dynamic partition loading III 
(Ashutosh Chauhan via Prasanth J)
+ git clean -f -d
Removing ql/src/java/org/apache/hadoop/hive/ql/io/NullScanFileSystem.java
Removing ql/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
Removing ql/src/test/queries/clientpositive/llap_nullscan.q
Removing ql/src/test/results/clientpositive/llap/llap_nullscan.q.out
Removing ql/src/test/results/clientpositive/tez/llap_nullscan.q.out
+ git checkout master
Already on 'master'
+ git reset --hard origin/master
HEAD is now at 26268de HIVE-12908 : Improve dynamic partition loading III 
(Ashutosh Chauhan via Prasanth J)
+ git merge --ff-only origin/master
Already up-to-date.
+ git gc
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12786043 - PreCommit-HIVE-TRUNK-Build

> Add metrics for HS2 active users and SQL operations
> ---
>
> Key: HIVE-12987
> URL: https://issues.apache.org/jira/browse/HIVE-12987
> Project: Hive
>  Issue Type: Task
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: HIVE-12987.1.patch, HIVE-12987.2.patch
>
>
> HIVE-12271 added metrics for all HS2 operations. Sometimes, users are also 
> interested in metrics just for SQL operations.
> It is useful to track active user count as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12950) get rid of the NullScan emptyFile madness (part 1 - at least for Tez and LLAP)

2016-02-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132001#comment-15132001
 ] 

Hive QA commented on HIVE-12950:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12785909/HIVE-12950.02.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 11 failed/errored test(s), 10038 tests 
executed
*Failed tests:*
{noformat}
TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_values_orig_table
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_llap_nullscan
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_llap_nullscan
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_optimize_nullscan
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_optimize_nullscan
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testFetchingPartitionsWithDifferentSchemas
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testGetPartitionSpecs_WithAndWithoutPartitionGrouping
org.apache.hadoop.hive.ql.TestTxnCommands2.testInitiatorWithMultipleFailedCompactions
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6860/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6860/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6860/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 11 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12785909 - PreCommit-HIVE-TRUNK-Build

> get rid of the NullScan emptyFile madness (part 1 - at least for Tez and LLAP)
> --
>
> Key: HIVE-12950
> URL: https://issues.apache.org/jira/browse/HIVE-12950
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-12950.01.patch, HIVE-12950.02.patch, 
> HIVE-12950.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12872) NoSuchMethodError exception Clause in Hive 1.1.1

2016-02-04 Thread fxliuwenjie (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15131976#comment-15131976
 ] 

fxliuwenjie commented on HIVE-12872:



[ 
https://issues.apache.org/jira/browse/HIVE-12872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15130724#comment-15130724
 ] 

Damour benoit commented on HIVE-12872:
--

Hello,

I had just the same error with hive-1.1.1 and hadoop-2.6.0
Changing to hive-1.2.1 (hadoop-2.6.0) solved it for me.

cheers

ps: I first mistakenly kept my $HIVE_HOME (in .bashrc) set to my hive-1.1.1 
home dir, and got the same error while running the hive-1.2.1/bin/hive client. 
So watch out and remember to update your $HIVE_HOME when upgrading.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


> NoSuchMethodError exception Clause in Hive 1.1.1
> 
>
> Key: HIVE-12872
> URL: https://issues.apache.org/jira/browse/HIVE-12872
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.1
> Environment: Hadoop 2.6.0
>Reporter: fxliuwenjie
>
> First, I created a table:
> hive> create external table beauties(id bigint, name string, height double) 
> partitioned by (nation string) row format delimited fields terminated by '\t' 
> location '/beauty';
> Then I loaded data into this table:
> hive> load data local inpath '/home/tmpdata/b.c' into table beauties 
> partition(nation='China');
> hive> load data local inpath '/home/tmpdata/b.j' into table beauties 
> partition(nation='Japan');
> Then I queried the loaded data:
> hive> select * from beauties;
> OK
> 1 lee 165.0 China
> 2 jzmb 167.0 Japan
> When I tried to run the query below, I hit the issue:
> hive> select * from beauties where nation = 'China';
> Exception in thread "main" 
> java.lang.NoSuchMethodError: 
> org.apache.hadoop.hive.ql.ppd.ExprWalkerInfo.getConvertedNode(Lorg/apache/hadoop/hive/ql/lib/Node;)Lorg/apache/hadoop/hive/ql/plan/ExprNodeDesc;
> at 
> org.apache.hadoop.hive.ql.ppd.ExprWalkerProcFactory$GenericFuncExprProcessor.process(ExprWalkerProcFactory.java:176)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:94)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:78)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:132)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:109)
> at 
> org.apache.hadoop.hive.ql.ppd.ExprWalkerProcFactory.extractPushdownPreds(ExprWalkerProcFactory.java:290)
> at 
> org.apache.hadoop.hive.ql.ppd.ExprWalkerProcFactory.extractPushdownPreds(ExprWalkerProcFactory.java:241)
> at 
> org.apache.hadoop.hive.ql.ppd.OpProcFactory$FilterPPD.process(OpProcFactory.java:418)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:94)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:78)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:132)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:109)
> at 
> org.apache.hadoop.hive.ql.ppd.PredicatePushDown.transform(PredicatePushDown.java:135)
> at 
> org.apache.hadoop.hive.ql.optimizer.Optimizer.optimize(Optimizer.java:182)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10207)
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:192).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)