[jira] [Reopened] (HIVE-6485) Downgrade to httpclient-4.2.5 in JDBC from httpclient-4.3.2
[ https://issues.apache.org/jira/browse/HIVE-6485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair reopened HIVE-6485: - Let's mark it as fixed only after HIVE-4764 goes in. (Alternatively, maybe mark it as a duplicate.) > Downgrade to httpclient-4.2.5 in JDBC from httpclient-4.3.2 > --- > > Key: HIVE-6485 > URL: https://issues.apache.org/jira/browse/HIVE-6485 > Project: Hive > Issue Type: Bug > Components: JDBC >Affects Versions: 0.13.0 >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta > Fix For: 0.13.0 > > Attachments: HIVE-6485.1.patch > > > Had upgraded to the new version while adding SSL over Http mode support for > HiveServer2. But that conflicts with httpclient-4.2.5 which is in hadoop > classpath. I don't have a good reason to use httpclient-4.3.2, so it's better > to match hadoop. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-4629) HS2 should support an API to retrieve query logs
[ https://issues.apache.org/jira/browse/HIVE-4629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925491#comment-13925491 ] Carl Steinbach commented on HIVE-4629: -- Does the new version of the patch address any of the API design issues I mentioned earlier? > HS2 should support an API to retrieve query logs > > > Key: HIVE-4629 > URL: https://issues.apache.org/jira/browse/HIVE-4629 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2 >Reporter: Shreepadma Venugopalan >Assignee: Shreepadma Venugopalan > Attachments: HIVE-4629-no_thrift.1.patch, HIVE-4629.1.patch, > HIVE-4629.2.patch > > > HiveServer2 should support an API to retrieve query logs. This is > particularly relevant because HiveServer2 supports async execution but > doesn't provide a way to report progress. Providing an API to retrieve query > logs will help report progress to the client. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6486) Support secure Subject.doAs() in HiveServer2 JDBC client.
[ https://issues.apache.org/jira/browse/HIVE-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925486#comment-13925486 ] Thejas M Nair commented on HIVE-6486: - [~rhbutani] I think it will be very valuable to have this patch committed to 0.13 as well. > Support secure Subject.doAs() in HiveServer2 JDBC client. > - > > Key: HIVE-6486 > URL: https://issues.apache.org/jira/browse/HIVE-6486 > Project: Hive > Issue Type: Improvement > Components: Authentication, HiveServer2, JDBC >Affects Versions: 0.11.0, 0.12.0 >Reporter: Shivaraju Gowda >Assignee: Shivaraju Gowda > Fix For: 0.13.0 > > Attachments: HIVE-6486.1.patch, HIVE-6486.2.patch, HIVE-6486.3.patch, > Hive_011_Support-Subject_doAS.patch, TestHive_SujectDoAs.java > > > HIVE-5155 addresses the problem of kerberos authentication in multi-user > middleware server using proxy user. In this mode the principal used by the > middleware server has privileges to impersonate selected users in > Hive/Hadoop. > This enhancement is to support Subject.doAs() authentication in Hive JDBC > layer so that the end user's Kerberos Subject is passed through by the > middleware server. With this improvement there won't be any additional setup in the > server to grant proxy privileges to some users and there won't be any need to > specify a proxy user in the JDBC client. This version should also be more > secure since it won't require principals with the privileges to impersonate > other users in Hive/Hadoop setup. > -- This message was sent by Atlassian JIRA (v6.2#6252)
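The Subject.doAs() flow described in HIVE-6486 can be sketched with plain JAAS classes from the JDK. This is a minimal illustration, not the patch itself: the empty Subject and the commented-out JDBC URL are placeholders for the end user's real Kerberos Subject and the HiveServer2 endpoint.

```java
import java.security.PrivilegedAction;
import javax.security.auth.Subject;

public class DoAsSketch {
    public static void main(String[] args) {
        // In the real scenario the Subject would carry the end user's
        // Kerberos credentials, typically obtained via a JAAS LoginContext.
        Subject endUser = new Subject();

        // Everything inside the action runs under the end user's credentials,
        // so no proxy-user privileges are needed on the server side.
        String result = Subject.doAs(endUser, (PrivilegedAction<String>) () -> {
            // Placeholder: the middleware would open the JDBC connection here, e.g.
            // DriverManager.getConnection("jdbc:hive2://host:10000/default;principal=...");
            return "executed with " + endUser.getPrincipals().size() + " principal(s)";
        });
        System.out.println(result);
    }
}
```

Run as-is with an empty Subject this prints `executed with 0 principal(s)`; with a Subject populated by a Kerberos login, the connection inside the action is authenticated as the end user.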
[jira] [Updated] (HIVE-6598) Importing the project into eclipse as maven project have some issues
[ https://issues.apache.org/jira/browse/HIVE-6598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chinna Rao Lalam updated HIVE-6598: --- Fix Version/s: 0.13.0 Affects Version/s: 0.13.0 Status: Patch Available (was: Open) > Importing the project into eclipse as maven project have some issues > > > Key: HIVE-6598 > URL: https://issues.apache.org/jira/browse/HIVE-6598 > Project: Hive > Issue Type: Bug >Affects Versions: 0.13.0 > Environment: Windows 8, Eclipse Kepler and Maven 3.1.1 >Reporter: Chinna Rao Lalam >Assignee: Chinna Rao Lalam > Fix For: 0.13.0 > > Attachments: HIVE-6598.patch > > > Importing the project into Eclipse as a Maven project throws these problems: > Plugin execution not covered by lifecycle configuration: > org.apache.maven.plugins:maven-antrun-plugin:1.7:run (execution: > setup-test-dirs, phase: process-test-resources) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6598) Importing the project into eclipse as maven project have some issues
[ https://issues.apache.org/jira/browse/HIVE-6598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chinna Rao Lalam updated HIVE-6598: --- Attachment: HIVE-6598.patch Added the plugin's configuration. > Importing the project into eclipse as maven project have some issues > > > Key: HIVE-6598 > URL: https://issues.apache.org/jira/browse/HIVE-6598 > Project: Hive > Issue Type: Bug > Environment: Windows 8, Eclipse Kepler and Maven 3.1.1 >Reporter: Chinna Rao Lalam >Assignee: Chinna Rao Lalam > Attachments: HIVE-6598.patch > > > Importing the project into Eclipse as a Maven project throws these problems: > Plugin execution not covered by lifecycle configuration: > org.apache.maven.plugins:maven-antrun-plugin:1.7:run (execution: > setup-test-dirs, phase: process-test-resources) -- This message was sent by Atlassian JIRA (v6.2#6252)
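For context, m2e "plugin execution not covered by lifecycle configuration" complaints are conventionally silenced with an org.eclipse.m2e:lifecycle-mapping entry in the POM. The sketch below shows the usual shape of such a fix for the maven-antrun-plugin execution named in the error; the actual content of HIVE-6598.patch may differ:

```xml
<!-- Placed under <build><pluginManagement><plugins> in the POM.
     Tells m2e (Eclipse only; ignored by command-line Maven) to skip
     the antrun executions it cannot map to its build lifecycle. -->
<plugin>
  <groupId>org.eclipse.m2e</groupId>
  <artifactId>lifecycle-mapping</artifactId>
  <version>1.0.0</version>
  <configuration>
    <lifecycleMappingMetadata>
      <pluginExecutions>
        <pluginExecution>
          <pluginExecutionFilter>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-antrun-plugin</artifactId>
            <versionRange>[1.7,)</versionRange>
            <goals>
              <goal>run</goal>
            </goals>
          </pluginExecutionFilter>
          <action>
            <ignore />
          </action>
        </pluginExecution>
      </pluginExecutions>
    </lifecycleMappingMetadata>
  </configuration>
</plugin>
```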
[jira] [Commented] (HIVE-6594) UnsignedInt128 addition does not increase internal int array count resulting in corrupted values during serialization
[ https://issues.apache.org/jira/browse/HIVE-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925477#comment-13925477 ] Hive QA commented on HIVE-6594: --- {color:green}Overall{color}: +1 all checks pass Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12633542/HIVE-6594.2.patch {color:green}SUCCESS:{color} +1 5375 tests passed Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1685/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1685/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12633542 > UnsignedInt128 addition does not increase internal int array count resulting > in corrupted values during serialization > - > > Key: HIVE-6594 > URL: https://issues.apache.org/jira/browse/HIVE-6594 > Project: Hive > Issue Type: Bug > Components: Query Processor >Affects Versions: 0.13.0 >Reporter: Remus Rusanu >Assignee: Remus Rusanu > Attachments: HIVE-6594.1.patch, HIVE-6594.2.patch > > > Discovered this while investigating why my fix for HIVE-6222 produced diffs. > I discovered that Decimal128.addDestructive does not adjust the internal > count when the number of relevant ints increases. Since this count is used > in the fast HiveDecimalWriter conversion code, the results are off. > The root cause is UnsignedDecimal128.differenceInternal does not do an > updateCount() on the result. -- This message was sent by Atlassian JIRA (v6.2#6252)
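To see why a stale word count corrupts serialization, here is a deliberately tiny model of the bug; it is not Hive's Decimal128/UnsignedInt128 code, just an illustration of the same failure mode: a magnitude stored as int words plus a count of words in use. An add carries into a second word, but the equivalent of updateCount() is skipped, so a serializer that trusts the count silently drops the high word.

```java
public class CountBugSketch {
    // A serializer that trusts `count` only looks at words[0..count).
    static int serializedNonZeroWords(int[] words, int count) {
        int nonZero = 0;
        for (int i = 0; i < count; i++) {
            if (words[i] != 0) nonZero++;
        }
        return nonZero;
    }

    public static void main(String[] args) {
        int[] v = {0xFFFFFFFF, 0, 0, 0};  // magnitude 2^32 - 1: one word in use
        int count = 1;

        // Add 1: the carry spills into a second word...
        long sum = (v[0] & 0xFFFFFFFFL) + 1L;
        v[0] = (int) sum;            // low word becomes 0
        v[1] = (int) (sum >>> 32);   // high word becomes 1
        // ...but we "forget" the equivalent of updateCount(), so count stays 1.

        // With the stale count the serializer sees only the zeroed low word.
        System.out.println("serialized non-zero words with stale count: "
                + serializedNonZeroWords(v, count));       // prints 0
        System.out.println("serialized non-zero words with correct count: "
                + serializedNonZeroWords(v, 2));           // prints 1
    }
}
```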
[jira] [Created] (HIVE-6598) Importing the project into eclipse as maven project have some issues
Chinna Rao Lalam created HIVE-6598: -- Summary: Importing the project into eclipse as maven project have some issues Key: HIVE-6598 URL: https://issues.apache.org/jira/browse/HIVE-6598 Project: Hive Issue Type: Bug Environment: Windows 8, Eclipse Kepler and Maven 3.1.1 Reporter: Chinna Rao Lalam Assignee: Chinna Rao Lalam Importing the project into Eclipse as a Maven project throws these problems: Plugin execution not covered by lifecycle configuration: org.apache.maven.plugins:maven-antrun-plugin:1.7:run (execution: setup-test-dirs, phase: process-test-resources) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-5179) Wincompat : change script tests from bash to sh
[ https://issues.apache.org/jira/browse/HIVE-5179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-5179: --- Resolution: Fixed Fix Version/s: 0.14.0 Status: Resolved (was: Patch Available) Committed to trunk. Thanks, Sushanth! > Wincompat : change script tests from bash to sh > --- > > Key: HIVE-5179 > URL: https://issues.apache.org/jira/browse/HIVE-5179 > Project: Hive > Issue Type: Sub-task > Components: Windows >Reporter: Sushanth Sowmyan >Assignee: Sushanth Sowmyan > Fix For: 0.14.0 > > Attachments: HIVE-5179.patch > > -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-4723) DDLSemanticAnalyzer.addTablePartsOutputs eats several exceptions
[ https://issues.apache.org/jira/browse/HIVE-4723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-4723: --- Resolution: Fixed Fix Version/s: 0.14.0 Status: Resolved (was: Patch Available) Committed to trunk. Thanks, Szehon! > DDLSemanticAnalyzer.addTablePartsOutputs eats several exceptions > > > Key: HIVE-4723 > URL: https://issues.apache.org/jira/browse/HIVE-4723 > Project: Hive > Issue Type: Bug >Affects Versions: 0.12.0 >Reporter: Brock Noland >Assignee: Szehon Ho > Fix For: 0.14.0 > > Attachments: HIVE-4723.1.patch, HIVE-4723.2.patch, HIVE-4723.3.patch, > HIVE-4723.4.patch, HIVE-4723.5.patch, HIVE-4723.5.patch, HIVE-4723.patch > > > I accidentally tried to archive a partition on a non-partitioned table. The > error message was bad, hive ate an exception, and NPE'ed. > {noformat} > 2013-06-09 16:36:12,628 ERROR parse.DDLSemanticAnalyzer > (DDLSemanticAnalyzer.java:addTablePartsOutputs(2899)) - Got HiveException > during obtaining list of partitions > 2013-06-09 16:36:12,628 ERROR ql.Driver (SessionState.java:printError(383)) - > FAILED: NullPointerException null > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.addTablePartsOutputs(DDLSemanticAnalyzer.java:2912) > at > org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.addTablePartsOutputs(DDLSemanticAnalyzer.java:2877) > at > org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeAlterTableArchive(DDLSemanticAnalyzer.java:2730) > at > org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeInternal(DDLSemanticAnalyzer.java:316) > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:277) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:433) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:337) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:902) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259) > at 
org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216) > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413) > at > org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:782) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) > at java.lang.reflect.Method.invoke(Method.java:597) > at org.apache.hadoop.util.RunJar.main(RunJar.java:156) > {noformat} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6568) Vectorized cast of decimal to string and timestamp produces incorrect result.
[ https://issues.apache.org/jira/browse/HIVE-6568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jitendra Nath Pandey updated HIVE-6568: --- Affects Version/s: 0.13.0 > Vectorized cast of decimal to string and timestamp produces incorrect result. > - > > Key: HIVE-6568 > URL: https://issues.apache.org/jira/browse/HIVE-6568 > Project: Hive > Issue Type: Bug > Components: Vectorization >Affects Versions: 0.13.0 >Reporter: Jitendra Nath Pandey >Assignee: Jitendra Nath Pandey > Attachments: HIVE-6568.1.patch, HIVE-6568.2.patch > > > A decimal value 1.23 with scale 5 is represented in string as 1.23000. This > behavior is different from HiveDecimal behavior. > The difference in cast to timestamp is due to more aggressive rounding in > vectorized expression. -- This message was sent by Atlassian JIRA (v6.2#6252)
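The string-formatting difference described in HIVE-6568 can be reproduced with java.math.BigDecimal as an analogy (this is not Hive's actual code path): a value carrying scale 5 keeps its trailing zeros when rendered, whereas stripping trailing zeros matches the shorter HiveDecimal-style rendering.

```java
import java.math.BigDecimal;

public class DecimalScaleSketch {
    public static void main(String[] args) {
        // 1.23 forced to scale 5: the scale is part of the value's representation.
        BigDecimal d = new BigDecimal("1.23").setScale(5);
        System.out.println(d.toPlainString());                       // 1.23000
        // Dropping trailing zeros yields the shorter canonical form.
        System.out.println(d.stripTrailingZeros().toPlainString());  // 1.23
    }
}
```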
[jira] [Commented] (HIVE-5179) Wincompat : change script tests from bash to sh
[ https://issues.apache.org/jira/browse/HIVE-5179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925413#comment-13925413 ] Ashutosh Chauhan commented on HIVE-5179: +1 > Wincompat : change script tests from bash to sh > --- > > Key: HIVE-5179 > URL: https://issues.apache.org/jira/browse/HIVE-5179 > Project: Hive > Issue Type: Sub-task > Components: Windows >Reporter: Sushanth Sowmyan >Assignee: Sushanth Sowmyan > Attachments: HIVE-5179.patch > > -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-4723) DDLSemanticAnalyzer.addTablePartsOutputs eats several exceptions
[ https://issues.apache.org/jira/browse/HIVE-4723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925406#comment-13925406 ] Ashutosh Chauhan commented on HIVE-4723: +1 > DDLSemanticAnalyzer.addTablePartsOutputs eats several exceptions > > > Key: HIVE-4723 > URL: https://issues.apache.org/jira/browse/HIVE-4723 > Project: Hive > Issue Type: Bug >Affects Versions: 0.12.0 >Reporter: Brock Noland >Assignee: Szehon Ho > Attachments: HIVE-4723.1.patch, HIVE-4723.2.patch, HIVE-4723.3.patch, > HIVE-4723.4.patch, HIVE-4723.5.patch, HIVE-4723.5.patch, HIVE-4723.patch > > > I accidentally tried to archive a partition on a non-partitioned table. The > error message was bad, hive ate an exception, and NPE'ed. > {noformat} > 2013-06-09 16:36:12,628 ERROR parse.DDLSemanticAnalyzer > (DDLSemanticAnalyzer.java:addTablePartsOutputs(2899)) - Got HiveException > during obtaining list of partitions > 2013-06-09 16:36:12,628 ERROR ql.Driver (SessionState.java:printError(383)) - > FAILED: NullPointerException null > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.addTablePartsOutputs(DDLSemanticAnalyzer.java:2912) > at > org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.addTablePartsOutputs(DDLSemanticAnalyzer.java:2877) > at > org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeAlterTableArchive(DDLSemanticAnalyzer.java:2730) > at > org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeInternal(DDLSemanticAnalyzer.java:316) > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:277) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:433) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:337) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:902) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216) > at 
org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413) > at > org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:782) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) > at java.lang.reflect.Method.invoke(Method.java:597) > at org.apache.hadoop.util.RunJar.main(RunJar.java:156) > {noformat} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-4629) HS2 should support an API to retrieve query logs
[ https://issues.apache.org/jira/browse/HIVE-4629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925394#comment-13925394 ] Gordon Wang commented on HIVE-4629: --- What is the status of this jira? Has anyone tried to rebase it onto the latest trunk? I think it is a useful feature, especially when doing some testing with HQL. > HS2 should support an API to retrieve query logs > > > Key: HIVE-4629 > URL: https://issues.apache.org/jira/browse/HIVE-4629 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2 >Reporter: Shreepadma Venugopalan >Assignee: Shreepadma Venugopalan > Attachments: HIVE-4629-no_thrift.1.patch, HIVE-4629.1.patch, > HIVE-4629.2.patch > > > HiveServer2 should support an API to retrieve query logs. This is > particularly relevant because HiveServer2 supports async execution but > doesn't provide a way to report progress. Providing an API to retrieve query > logs will help report progress to the client. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6531) Runtime errors in vectorized execution.
[ https://issues.apache.org/jira/browse/HIVE-6531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jitendra Nath Pandey updated HIVE-6531: --- Resolution: Fixed Fix Version/s: 0.14.0 0.13.0 Status: Resolved (was: Patch Available) Committed to branch-0.13 as well. > Runtime errors in vectorized execution. > --- > > Key: HIVE-6531 > URL: https://issues.apache.org/jira/browse/HIVE-6531 > Project: Hive > Issue Type: Bug >Reporter: Jitendra Nath Pandey >Assignee: Jitendra Nath Pandey > Fix For: 0.13.0, 0.14.0 > > Attachments: HIVE-6531.1.patch, HIVE-6531.2.patch, HIVE-6531.3.patch > > > There are a few runtime errors observed in some of the tpcds queries for > following reasons: > 1) VectorFileSinkOperator fails with LazyBinarySerde. > 2) Decimal128 and Unsigned128 don't serialize correctly. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6591) Importing a table containing hidden dirs fails
[ https://issues.apache.org/jira/browse/HIVE-6591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925374#comment-13925374 ] Hive QA commented on HIVE-6591: --- {color:green}Overall{color}: +1 all checks pass Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12633488/HIVE-6591.patch {color:green}SUCCESS:{color} +1 5375 tests passed Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1683/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1683/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12633488 > Importing a table containing hidden dirs fails > -- > > Key: HIVE-6591 > URL: https://issues.apache.org/jira/browse/HIVE-6591 > Project: Hive > Issue Type: Bug > Components: Import/Export >Affects Versions: 0.10.0, 0.11.0, 0.12.0 >Reporter: Ashutosh Chauhan >Assignee: Ashutosh Chauhan > Attachments: HIVE-6591.patch > > > hidden files should be ignored while exporting -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6481) Add .reviewboardrc file
[ https://issues.apache.org/jira/browse/HIVE-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925358#comment-13925358 ] Carl Steinbach commented on HIVE-6481: -- I updated the wiki with some instructions for using rbt. > Add .reviewboardrc file > --- > > Key: HIVE-6481 > URL: https://issues.apache.org/jira/browse/HIVE-6481 > Project: Hive > Issue Type: Improvement > Components: Build Infrastructure >Reporter: Carl Steinbach >Assignee: Carl Steinbach > Fix For: 0.13.0 > > Attachments: HIVE-6481.1.patch, HIVE-6481.2.patch > > > We should add a .reviewboardrc file to trunk in order to streamline the > review process. > Used in conjunction with RBTools this file makes posting a review request as > simple as executing the following command: > % rbt post -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6430) MapJoin hash table has large memory overhead
[ https://issues.apache.org/jira/browse/HIVE-6430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925357#comment-13925357 ] Hive QA commented on HIVE-6430: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12633496/HIVE-6430.patch {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 5373 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_bucket_num_reducers org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_infer_bucket_sort_bucketed_table {noformat} Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1682/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1682/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12633496 > MapJoin hash table has large memory overhead > > > Key: HIVE-6430 > URL: https://issues.apache.org/jira/browse/HIVE-6430 > Project: Hive > Issue Type: Improvement >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-6430.patch > > > Right now, in some queries, I see that storing e.g. 4 ints (2 for key and 2 > for row) can take several hundred bytes, which is ridiculous. I am reducing > the size of MJKey and MJRowContainer in other jiras, but in general we don't > need to have java hash table there. We can either use primitive-friendly > hashtable like the one from HPPC (Apache-licenced), or some variation, to map > primitive keys to single row storage structure without an object per row > (similar to vectorization). -- This message was sent by Atlassian JIRA (v6.2#6252)
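The idea floated in HIVE-6430, mapping primitive keys to row storage without an object per entry, can be sketched as a flat open-addressing table in the spirit of HPPC-style collections. This is an illustration of the technique only (no resizing or deletion, capacity must exceed the number of entries), not Hive's implementation:

```java
// A minimal open-addressing map from a primitive long key to an int offset:
// three flat arrays, no Object allocated per entry, linear probing on collision.
public class PrimitiveMapSketch {
    private final long[] keys;
    private final int[] values;
    private final boolean[] used;

    PrimitiveMapSketch(int capacity) {
        keys = new long[capacity];
        values = new int[capacity];
        used = new boolean[capacity];
    }

    private int slot(long key) {
        // Fold the long into a non-negative int, then probe linearly.
        int i = (int) ((key ^ (key >>> 32)) & 0x7FFFFFFF) % keys.length;
        while (used[i] && keys[i] != key) i = (i + 1) % keys.length;
        return i;
    }

    void put(long key, int value) {
        int i = slot(key);
        used[i] = true;
        keys[i] = key;
        values[i] = value;
    }

    int get(long key) {
        int i = slot(key);
        return used[i] ? values[i] : -1;  // -1 = not found
    }

    public static void main(String[] args) {
        PrimitiveMapSketch m = new PrimitiveMapSketch(64);
        m.put(42L, 7);
        m.put(106L, 9);  // 106 % 64 == 42 % 64, so this exercises probing
        System.out.println(m.get(42L) + " " + m.get(106L) + " " + m.get(5L));
    }
}
```

The point of the design is that an entry costs a few bytes in primitive arrays instead of the several-hundred-byte object graph the description complains about.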
Re: Proposal to switch to pull requests
I'm +1 on switching to git, but only if we can find a way to disable merge commits to trunk and feature branches. I'm -1 on switching to Github since, as far as I know, it only supports merge based workflows. On Sun, Mar 9, 2014 at 12:25 PM, Edward Capriolo wrote: > I do not think we want Pull Requests coming at us. Better way is let > someone open a git branch for the changes, then we review and merge the > branch. > > > On Sat, Mar 8, 2014 at 4:25 PM, Brock Noland wrote: > > > In my read of the Apache git - github integration blog post we cannot use > > pull requests as patches. Just that we'll be notified of them and could > > perhaps use them as code review. > > > > One additional item I think we should investigate is disabling merge > > commits on trunk and feature branches. > > On Mar 7, 2014 7:57 PM, "Edward Capriolo" wrote: > > > > > We need to keep patches in Jira I feel. We have gotten better on the > > > documentation front but having a patch in the jira is critical I feel. > We > > > must at least have a perma link to the changes. > > > > > > > > > On Fri, Mar 7, 2014 at 8:40 PM, Sergey Shelukhin < > ser...@hortonworks.com > > > >wrote: > > > > > > > +1 to git! > > > > > > > > > > > > On Fri, Mar 7, 2014 at 12:46 PM, Xuefu Zhang > > > wrote: > > > > > > > > > Switching to git from svn seems to be a proposal slightly different > > > from > > > > > that of switching to pull request from the head of the thread. > > > Personally > > > > > I'm +1 to git, but I think patches are very portable and widely > > adopted > > > > in > > > > > Hadoop ecosystem and we should keep the practice. Thus, +1 to that > > > also. > > > > > > > > > > --Xuefu > > > > > > > > > > > > > > > On Fri, Mar 7, 2014 at 12:27 PM, Gunther Hagleitner < > > > > > ghagleit...@hortonworks.com> wrote: > > > > > > > > > > > Once Prasad's loop finishes I'd like to add my +1 too. 
> > > > > > > > > > > > > > > > > > On Fri, Mar 7, 2014 at 11:44 AM, Vaibhav Gumashta < > > > > > > vgumas...@hortonworks.com > > > > > > > wrote: > > > > > > > > > > > > > +1 for moving to git! > > > > > > > > > > > > > > Thanks, > > > > > > > --Vaibhav > > > > > > > > > > > > > > > > > > > > > On Fri, Mar 7, 2014 at 9:46 AM, Prasad Mujumdar < > > > > pras...@cloudera.com > > > > > > > >wrote: > > > > > > > > > > > > > > > while (true) { > > > > > > > >+1 > > > > > > > > } > > > > > > > > > > > > > > > > +1 // another, just in case ;) > > > > > > > > > > > > > > > > thanks > > > > > > > > Prasad > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > On Fri, Mar 7, 2014 at 6:47 AM, kulkarni.swar...@gmail.com < > > > > > > > > kulkarni.swar...@gmail.com> wrote: > > > > > > > > > > > > > > > > > +1 > > > > > > > > > > > > > > > > > > > > > > > > > > > On Fri, Mar 7, 2014 at 1:05 AM, Thejas Nair < > > > > > the...@hortonworks.com> > > > > > > > > > wrote: > > > > > > > > > > > > > > > > > > > Should we start with moving our primary source code > > > repository > > > > > from > > > > > > > > > > svn to git ? I feel git is more powerful and easy to use > > > (once > > > > > you > > > > > > go > > > > > > > > > > past the learning curve!). > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > On Wed, Mar 5, 2014 at 7:39 AM, Brock Noland < > > > > br...@cloudera.com > > > > > > > > > > > > > > wrote: > > > > > > > > > > > Personally I prefer the Github workflow, but I believe > > > there > > > > > have > > > > > > > > been > > > > > > > > > > > some challenges with that since the source for apache > > > > projects > > > > > > must > > > > > > > > be > > > > > > > > > > > stored in apache source control (git or svn). 
> > > > > > > > > > > > > > > > > > > > > > Relevent: > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > https://blogs.apache.org/infra/entry/improved_integration_between_apache_and > > > > > > > > > > > > > > > > > > > > > > On Wed, Mar 5, 2014 at 9:19 AM, > > kulkarni.swar...@gmail.com > > > > > > > > > > > wrote: > > > > > > > > > > >> Hello, > > > > > > > > > > >> > > > > > > > > > > >> Since we have a nice mirrored git repository for > > hive[1], > > > > any > > > > > > > > specific > > > > > > > > > > >> reason why we can't switch to doing pull requests > > instead > > > of > > > > > > > > patches? > > > > > > > > > > IMHO > > > > > > > > > > >> pull requests are awesome for peer review plus it is > > also > > > > very > > > > > > > easy > > > > > > > > to > > > > > > > > > > keep > > > > > > > > > > >> track of JIRAs with open pull requests instead of > > looking > > > > for > > > > > > > JIRAs > > > > > > > > > in a > > > > > > > > > > >> "Patch Available" state. Also since they get updated > > > > > > > automatically, > > > > > > > > it > > > > > > > > > > is > > > > > > > > > > >> also very easy to see if a review comment made by a > > > reviewer > > > > > was > > > > > > > > > > addressed > > > > > > > > > > >> properly or not. > > > > > > > > > > >> > >
[jira] [Commented] (HIVE-6037) Synchronize HiveConf with hive-default.xml.template and support show conf
[ https://issues.apache.org/jira/browse/HIVE-6037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925323#comment-13925323 ] Lefty Leverenz commented on HIVE-6037: -- [~hagleitn] raised the question of whether Hive 0.13.0 should include this patch in the dev@hive thread "Timeline for the Hive 0.13 release?": {quote} Do we need to include HIVE-6037 in the release? I.e.: Is the current hive-default.xml.template very out of sync without it? It's a pretty large patch mostly aimed at making our lives easier, but doesn't impact the end user directly. Can we do that one trunk only? {quote} * [Timeline for the Hive 0.13 release? |http://mail-archives.apache.org/mod_mbox/hive-dev/201403.mbox/%3cCAGLR3Tw_sOf7bUcCRQWepjMh16jJL=xtyood5s38f6j9tys...@mail.gmail.com%3e] My reply: "if HIVE-6037 were committed to trunk right away then Navis's efforts wouldn't be lost so maybe we should consider doing that. But recently I've been telling people not to bother updating hive-default.xml.template because it's going to be generated from HiveConf.java soon, so I'd have to figure out which parameters aren't in the template file. That's less work than patching HiveConf.java with recent config params after committing HIVE-6037, though." 
> Synchronize HiveConf with hive-default.xml.template and support show conf > - > > Key: HIVE-6037 > URL: https://issues.apache.org/jira/browse/HIVE-6037 > Project: Hive > Issue Type: Improvement > Components: Configuration >Reporter: Navis >Assignee: Navis >Priority: Minor > Fix For: 0.13.0 > > Attachments: CHIVE-6037.3.patch.txt, HIVE-6037.1.patch.txt, > HIVE-6037.10.patch.txt, HIVE-6037.11.patch.txt, HIVE-6037.12.patch.txt, > HIVE-6037.14.patch.txt, HIVE-6037.15.patch.txt, HIVE-6037.16.patch.txt, > HIVE-6037.17.patch, HIVE-6037.2.patch.txt, HIVE-6037.4.patch.txt, > HIVE-6037.5.patch.txt, HIVE-6037.6.patch.txt, HIVE-6037.7.patch.txt, > HIVE-6037.8.patch.txt, HIVE-6037.9.patch.txt, HIVE-6037.patch > > > see HIVE-5879 -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6393) Support unqualified column references in Joining conditions
[ https://issues.apache.org/jira/browse/HIVE-6393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925315#comment-13925315 ] Lefty Leverenz commented on HIVE-6393: -- Good, although I'd say "Hive" instead of "we" and mention the release number (with a link to this jira). Or should it be "the query optimizer" instead of "we"? This can go in a version-info box at the end of the Join Syntax section, then a simple example can be added just for emphasis at the end of the Examples section. The syntax itself doesn't need any changes. Your second sentence says that the column names can't be identical, so the first example in the wikidoc can't use unqualified column references: "SELECT a.* FROM a JOIN b ON (a.id = b.id)". But is it more restrictive than that -- neither table can have an (unreferenced) column named the same as the other table's referenced column? Perhaps I'm overthinking this. I can put this in the wiki after you fine-tune it, unless you'd rather do it yourself. > Support unqualified column references in Joining conditions > --- > > Key: HIVE-6393 > URL: https://issues.apache.org/jira/browse/HIVE-6393 > Project: Hive > Issue Type: Improvement >Reporter: Harish Butani >Assignee: Harish Butani > Fix For: 0.13.0 > > Attachments: HIVE-6393.1.patch, HIVE-6393.2.patch, HIVE-6393.3.patch > > > Support queries of the form: > {noformat} > create table r1(a int); > create table r2(b); > select a, b > from r1 join r2 on a = b > {noformat} > This becomes more useful in old style syntax: > {noformat} > select a, b > from r1, r2 > where a = b > {noformat} -- This message was sent by Atlassian JIRA (v6.2#6252)
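A worked example of the ambiguity being discussed, using hypothetical tables in HiveQL (the column names are illustrative, not from the patch):

```sql
CREATE TABLE r1 (a INT, x INT);
CREATE TABLE r2 (b INT, x INT);

-- Fine: unqualified `a` resolves only to r1, and `b` only to r2.
SELECT a, b FROM r1 JOIN r2 ON a = b;

-- Ambiguous: unqualified `x` exists in both join inputs, so it would be
-- flagged and must be qualified instead, e.g. ON r1.x = r2.x.
-- SELECT a, b FROM r1 JOIN r2 ON x = x;
```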
[jira] [Commented] (HIVE-6575) select * fails on parquet table with map datatype
[ https://issues.apache.org/jira/browse/HIVE-6575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925309#comment-13925309 ] Hive QA commented on HIVE-6575: --- {color:green}Overall{color}: +1 all checks pass Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12633569/HIVE-6575.3.patch {color:green}SUCCESS:{color} +1 5374 tests passed Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1680/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1680/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12633569 > select * fails on parquet table with map datatype > - > > Key: HIVE-6575 > URL: https://issues.apache.org/jira/browse/HIVE-6575 > Project: Hive > Issue Type: Bug > Components: Serializers/Deserializers >Affects Versions: 0.13.0 >Reporter: Szehon Ho >Assignee: Szehon Ho > Labels: parquet > Attachments: HIVE-6575.2.patch, HIVE-6575.3.patch, HIVE-6575.patch > > > Create parquet table with map and run select * from parquet_table, returns > following exception: > {noformat} > FAILED: RuntimeException java.lang.ClassCastException: > org.apache.hadoop.hive.ql.io.parquet.serde.DeepParquetHiveMapInspector cannot > be cast to > org.apache.hadoop.hive.ql.io.parquet.serde.StandardParquetHiveMapInspector > {noformat} > However select from parquet_table seems to work, and thus joins will > work. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6440) sql std auth - add command to change owner of database
[ https://issues.apache.org/jira/browse/HIVE-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925308#comment-13925308 ] Lefty Leverenz commented on HIVE-6440: -- The wiki should document this with a release note here: * [Language Manual DDL: Alter Database |https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-AlterDatabase] [~thejas], shall I put it in or do you want to do it yourself? Presumably user|group|role are keywords and username is the name of the user or role. But why is username in brackets -- can it really be omitted? And what happens if someone specifies group? One more nit: does this also work for "alter schema"? > sql std auth - add command to change owner of database > -- > > Key: HIVE-6440 > URL: https://issues.apache.org/jira/browse/HIVE-6440 > Project: Hive > Issue Type: Sub-task > Components: Authorization >Reporter: Thejas M Nair >Assignee: Thejas M Nair > Fix For: 0.13.0 > > Attachments: HIVE-6440.1.patch, HIVE-6440.2.patch, HIVE-6440.3.patch > > > It should be possible to change the owner of a database once it is created. -- This message was sent by Atlassian JIRA (v6.2#6252)
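[Editor's note] For the wiki question above, the committed grammar is worth confirming before documenting, but the pattern discussed in this issue would presumably read as follows (the database and principal names here are made up for illustration):
{noformat}
-- assumed form; verify against the committed HIVE-6440 grammar
ALTER DATABASE reporting_db SET OWNER USER alice;
ALTER DATABASE reporting_db SET OWNER ROLE admins;
{noformat}
If "alter schema" is accepted as a synonym for "alter database" elsewhere in the DDL, the same form should apply, but that is exactly the kind of detail Lefty's question asks the assignee to confirm.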
[jira] [Commented] (HIVE-6393) Support unqualified column references in Joining conditions
[ https://issues.apache.org/jira/browse/HIVE-6393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925307#comment-13925307 ] Harish Butani commented on HIVE-6393: - Yes, there should be documentation for this. Here is a shot at it, please revise/edit: {noformat} Unqualified column references are now supported in join conditions. We attempt to resolve these against the inputs to a Join. If an unqualified column reference resolves to more than 1 table we will flag this as an ambiguous reference. {noformat} > Support unqualified column references in Joining conditions > --- > > Key: HIVE-6393 > URL: https://issues.apache.org/jira/browse/HIVE-6393 > Project: Hive > Issue Type: Improvement >Reporter: Harish Butani >Assignee: Harish Butani > Fix For: 0.13.0 > > Attachments: HIVE-6393.1.patch, HIVE-6393.2.patch, HIVE-6393.3.patch > > > Support queries of the form: > {noformat} > create table r1(a int); > create table r2(b); > select a, b > from r1 join r2 on a = b > {noformat} > This becomes more useful in old style syntax: > {noformat} > select a, b > from r1, r2 > where a = b > {noformat} -- This message was sent by Atlassian JIRA (v6.2#6252)
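[Editor's note] Using the tables from this issue's own description, a minimal pair of cases may make the proposed wiki text concrete (the ambiguous case and its outcome are inferred from the draft above, not from the patch):
{noformat}
create table r1(a int);
create table r2(b int);

-- resolves: `a` matches only r1, `b` matches only r2
select a, b from r1 join r2 on a = b;

-- if r2 also had a column named `a`, an unqualified `a` in the join
-- condition would resolve to more than one input and be rejected as
-- an ambiguous reference; qualify it (r1.a or r2.a) instead
{noformat}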
[jira] [Updated] (HIVE-6393) Support unqualified column references in Joining conditions
[ https://issues.apache.org/jira/browse/HIVE-6393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Harish Butani updated HIVE-6393: Fix Version/s: 0.13.0 > Support unqualified column references in Joining conditions > --- > > Key: HIVE-6393 > URL: https://issues.apache.org/jira/browse/HIVE-6393 > Project: Hive > Issue Type: Improvement >Reporter: Harish Butani >Assignee: Harish Butani > Fix For: 0.13.0 > > Attachments: HIVE-6393.1.patch, HIVE-6393.2.patch, HIVE-6393.3.patch > > > Support queries of the form: > {noformat} > create table r1(a int); > create table r2(b); > select a, b > from r1 join r2 on a = b > {noformat} > This becomes more useful in old style syntax: > {noformat} > select a, b > from r1, r2 > where a = b > {noformat} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6531) Runtime errors in vectorized execution.
[ https://issues.apache.org/jira/browse/HIVE-6531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925304#comment-13925304 ] Harish Butani commented on HIVE-6531: - +1 for 0.13 > Runtime errors in vectorized execution. > --- > > Key: HIVE-6531 > URL: https://issues.apache.org/jira/browse/HIVE-6531 > Project: Hive > Issue Type: Bug >Reporter: Jitendra Nath Pandey >Assignee: Jitendra Nath Pandey > Attachments: HIVE-6531.1.patch, HIVE-6531.2.patch, HIVE-6531.3.patch > > > There are a few runtime errors observed in some of the tpcds queries for > following reasons: > 1) VectorFileSinkOperator fails with LazyBinarySerde. > 2) Decimal128 and Unsigned128 don't serialize correctly. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6558) HiveServer2 Plain SASL authentication broken after hadoop 2.3 upgrade
[ https://issues.apache.org/jira/browse/HIVE-6558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925302#comment-13925302 ] Ashutosh Chauhan commented on HIVE-6558: Looks good to me. +1 cc: [~thejas] > HiveServer2 Plain SASL authentication broken after hadoop 2.3 upgrade > - > > Key: HIVE-6558 > URL: https://issues.apache.org/jira/browse/HIVE-6558 > Project: Hive > Issue Type: Bug > Components: Authentication, HiveServer2 >Affects Versions: 0.13.0 >Reporter: Prasad Mujumdar >Assignee: Prasad Mujumdar >Priority: Blocker > Attachments: HIVE-6558.2.patch > > > Java only includes Plain SASL client and not server. Hence HiveServer2 > includes a Plain SASL server implementation. Now Hadoop has its own Plain > SASL server [HADOOP-9020|https://issues.apache.org/jira/browse/HADOOP-9020] > which is part of Hadoop 2.3 > [release|http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-common/releasenotes.html]. > The two servers use different Sasl callbacks and the servers are registered > in java.security.Provider via static code. As a result the HiveServer2 > instance could be using Hadoop's Plain SASL server which breaks the > authentication. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6481) Add .reviewboardrc file
[ https://issues.apache.org/jira/browse/HIVE-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925300#comment-13925300 ] Lefty Leverenz commented on HIVE-6481: -- This could be documented in the wiki here: * [How To Contribute: Review Process |https://cwiki.apache.org/confluence/display/Hive/HowToContribute#HowToContribute-ReviewProcess] (I'm a newbie on the review board, so won't volunteer.) > Add .reviewboardrc file > --- > > Key: HIVE-6481 > URL: https://issues.apache.org/jira/browse/HIVE-6481 > Project: Hive > Issue Type: Improvement > Components: Build Infrastructure >Reporter: Carl Steinbach >Assignee: Carl Steinbach > Fix For: 0.13.0 > > Attachments: HIVE-6481.1.patch, HIVE-6481.2.patch > > > We should add a .reviewboardrc file to trunk in order to streamline the > review process. > Used in conjunction with RBTools this file makes posting a review request as > simple as executing the following command: > % rbt post -- This message was sent by Atlassian JIRA (v6.2#6252)
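[Editor's note] A .reviewboardrc is a small Python-syntax file read by RBTools. The exact keys Hive's file uses should be taken from the committed patch; a typical sketch (values here are assumptions, not the contents of HIVE-6481) looks like:
{noformat}
REVIEWBOARD_URL = 'https://reviews.apache.org'
REPOSITORY = 'hive-git'
BRANCH = 'trunk'
TARGET_GROUPS = 'hive'
{noformat}
With such a file in the repository root, `rbt post` picks up the server and repository automatically instead of requiring them on the command line.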
Re: Proposal to switch to pull requests
I do not think we want Pull Requests coming at us. Better way is let someone open a git branch for the changes, then we review and merge the branch. On Sat, Mar 8, 2014 at 4:25 PM, Brock Noland wrote: > In my read of the Apache git - github integration blog post we cannot use > pull requests as patches. Just that we'll be notified of them and could > perhaps use them as code review. > > One additional item I think we should investigate is disabling merge > commits on trunk and feature branches. > On Mar 7, 2014 7:57 PM, "Edward Capriolo" wrote: > > > We need to keep patches in Jira I feel. We have gotten better on the > > documentation front but having a patch in the jira is critical I feel. We > > must at least have a perma link to the changes. > > > > > > On Fri, Mar 7, 2014 at 8:40 PM, Sergey Shelukhin > >wrote: > > > > > +1 to git! > > > > > > > > > On Fri, Mar 7, 2014 at 12:46 PM, Xuefu Zhang > > wrote: > > > > > > > Switching to git from svn seems to be a proposal slightly different > > from > > > > that of switching to pull request from the head of the thread. > > Personally > > > > I'm +1 to git, but I think patches are very portable and widely > adopted > > > in > > > > Hadoop ecosystem and we should keep the practice. Thus, +1 to that > > also. > > > > > > > > --Xuefu > > > > > > > > > > > > On Fri, Mar 7, 2014 at 12:27 PM, Gunther Hagleitner < > > > > ghagleit...@hortonworks.com> wrote: > > > > > > > > > Once Prasad's loop finishes I'd like to add my +1 too. > > > > > > > > > > > > > > > On Fri, Mar 7, 2014 at 11:44 AM, Vaibhav Gumashta < > > > > > vgumas...@hortonworks.com > > > > > > wrote: > > > > > > > > > > > +1 for moving to git! 
> > > > > > > > > > > > Thanks, > > > > > > --Vaibhav > > > > > > > > > > > > > > > > > > On Fri, Mar 7, 2014 at 9:46 AM, Prasad Mujumdar < > > > pras...@cloudera.com > > > > > > >wrote: > > > > > > > > > > > > > while (true) { > > > > > > >+1 > > > > > > > } > > > > > > > > > > > > > > +1 // another, just in case ;) > > > > > > > > > > > > > > thanks > > > > > > > Prasad > > > > > > > > > > > > > > > > > > > > > > > > > > > > On Fri, Mar 7, 2014 at 6:47 AM, kulkarni.swar...@gmail.com < > > > > > > > kulkarni.swar...@gmail.com> wrote: > > > > > > > > > > > > > > > +1 > > > > > > > > > > > > > > > > > > > > > > > > On Fri, Mar 7, 2014 at 1:05 AM, Thejas Nair < > > > > the...@hortonworks.com> > > > > > > > > wrote: > > > > > > > > > > > > > > > > > Should we start with moving our primary source code > > repository > > > > from > > > > > > > > > svn to git ? I feel git is more powerful and easy to use > > (once > > > > you > > > > > go > > > > > > > > > past the learning curve!). > > > > > > > > > > > > > > > > > > > > > > > > > > > On Wed, Mar 5, 2014 at 7:39 AM, Brock Noland < > > > br...@cloudera.com > > > > > > > > > > > > wrote: > > > > > > > > > > Personally I prefer the Github workflow, but I believe > > there > > > > have > > > > > > > been > > > > > > > > > > some challenges with that since the source for apache > > > projects > > > > > must > > > > > > > be > > > > > > > > > > stored in apache source control (git or svn). 
> > > > > > > > > > > > > > > > > > > > Relevent: > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > https://blogs.apache.org/infra/entry/improved_integration_between_apache_and > > > > > > > > > > > > > > > > > > > > On Wed, Mar 5, 2014 at 9:19 AM, > kulkarni.swar...@gmail.com > > > > > > > > > > wrote: > > > > > > > > > >> Hello, > > > > > > > > > >> > > > > > > > > > >> Since we have a nice mirrored git repository for > hive[1], > > > any > > > > > > > specific > > > > > > > > > >> reason why we can't switch to doing pull requests > instead > > of > > > > > > > patches? > > > > > > > > > IMHO > > > > > > > > > >> pull requests are awesome for peer review plus it is > also > > > very > > > > > > easy > > > > > > > to > > > > > > > > > keep > > > > > > > > > >> track of JIRAs with open pull requests instead of > looking > > > for > > > > > > JIRAs > > > > > > > > in a > > > > > > > > > >> "Patch Available" state. Also since they get updated > > > > > > automatically, > > > > > > > it > > > > > > > > > is > > > > > > > > > >> also very easy to see if a review comment made by a > > reviewer > > > > was > > > > > > > > > addressed > > > > > > > > > >> properly or not. > > > > > > > > > >> > > > > > > > > > >> Thoughts? > > > > > > > > > >> > > > > > > > > > >> Thanks, > > > > > > > > > >> > > > > > > > > > >> [1] https://github.com/apache/hive > > > > > > > > > >> > > > > > > > > > >> -- > > > > > > > > > >> Swarnim > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > > > > > > > Apache MRUnit - Unit testing MapReduce - > > > > > http://mrunit.apache.org > > > > > > > > > > > > > > > > > > -- > > > > > > > > > CONFIDENTIALITY NOTICE > > > > > > > > > NOTICE: This message is intended for the use of the > > individual > > > or > > > > > > > entity >
[jira] [Commented] (HIVE-6587) allow specifying additional Hive classpath for Hadoop
[ https://issues.apache.org/jira/browse/HIVE-6587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925273#comment-13925273 ] Hive QA commented on HIVE-6587: --- {color:green}Overall{color}: +1 all checks pass Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12633474/HIVE-6587.patch {color:green}SUCCESS:{color} +1 5374 tests passed Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1679/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1679/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12633474 > allow specifying additional Hive classpath for Hadoop > - > > Key: HIVE-6587 > URL: https://issues.apache.org/jira/browse/HIVE-6587 > Project: Hive > Issue Type: Improvement >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Trivial > Attachments: HIVE-6587.patch > > > Allow users to add jars to hive's Hadoop classpath without explicitly > modifying their Hadoop classpath -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6568) Vectorized cast of decimal to string and timestamp produces incorrect result.
[ https://issues.apache.org/jira/browse/HIVE-6568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jitendra Nath Pandey updated HIVE-6568: --- Status: Open (was: Patch Available) > Vectorized cast of decimal to string and timestamp produces incorrect result. > - > > Key: HIVE-6568 > URL: https://issues.apache.org/jira/browse/HIVE-6568 > Project: Hive > Issue Type: Bug > Components: Vectorization >Reporter: Jitendra Nath Pandey >Assignee: Jitendra Nath Pandey > Attachments: HIVE-6568.1.patch, HIVE-6568.2.patch > > > A decimal value 1.23 with scale 5 is represented in string as 1.23000. This > behavior is different from HiveDecimal behavior. > The difference in cast to timestamp is due to more aggressive rounding in > vectorized expression. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6568) Vectorized cast of decimal to string and timestamp produces incorrect result.
[ https://issues.apache.org/jira/browse/HIVE-6568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jitendra Nath Pandey updated HIVE-6568: --- Attachment: HIVE-6568.2.patch Updated patch re-based against latest trunk. > Vectorized cast of decimal to string and timestamp produces incorrect result. > - > > Key: HIVE-6568 > URL: https://issues.apache.org/jira/browse/HIVE-6568 > Project: Hive > Issue Type: Bug > Components: Vectorization >Reporter: Jitendra Nath Pandey >Assignee: Jitendra Nath Pandey > Attachments: HIVE-6568.1.patch, HIVE-6568.2.patch > > > A decimal value 1.23 with scale 5 is represented in string as 1.23000. This > behavior is different from HiveDecimal behavior. > The difference in cast to timestamp is due to more aggressive rounding in > vectorized expression. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6568) Vectorized cast of decimal to string and timestamp produces incorrect result.
[ https://issues.apache.org/jira/browse/HIVE-6568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jitendra Nath Pandey updated HIVE-6568: --- Status: Patch Available (was: Open) > Vectorized cast of decimal to string and timestamp produces incorrect result. > - > > Key: HIVE-6568 > URL: https://issues.apache.org/jira/browse/HIVE-6568 > Project: Hive > Issue Type: Bug > Components: Vectorization >Reporter: Jitendra Nath Pandey >Assignee: Jitendra Nath Pandey > Attachments: HIVE-6568.1.patch, HIVE-6568.2.patch > > > A decimal value 1.23 with scale 5 is represented in string as 1.23000. This > behavior is different from HiveDecimal behavior. > The difference in cast to timestamp is due to more aggressive rounding in > vectorized expression. -- This message was sent by Atlassian JIRA (v6.2#6252)
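[Editor's note] The scale-versus-value distinction in this report can be illustrated with plain Python decimals (these are not Hive's internal types; the mapping to HiveDecimal behavior is an assumption based on the description above):

```python
from decimal import Decimal

# A value carrying scale 5 keeps its trailing zeros when stringified --
# the behavior the report attributes to the vectorized cast.
with_scale = Decimal("1.23").quantize(Decimal("1.00000"))  # scale 5
print(str(with_scale))              # 1.23000

# Trimming trailing zeros yields the shorter form the report implies
# HiveDecimal produces.
print(str(with_scale.normalize()))  # 1.23
```

The bug is thus a formatting disagreement, not a loss of numeric value: both forms compare equal as decimals.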
[jira] [Updated] (HIVE-6511) casting from decimal to tinyint, smallint, int and bigint generates different result when vectorization is on
[ https://issues.apache.org/jira/browse/HIVE-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jitendra Nath Pandey updated HIVE-6511: --- Affects Version/s: 0.13.0 > casting from decimal to tinyint,smallint, int and bigint generates different > result when vectorization is on > > > Key: HIVE-6511 > URL: https://issues.apache.org/jira/browse/HIVE-6511 > Project: Hive > Issue Type: Bug >Affects Versions: 0.13.0 >Reporter: Jitendra Nath Pandey >Assignee: Jitendra Nath Pandey > Fix For: 0.13.0, 0.14.0 > > Attachments: HIVE-6511.1.patch, HIVE-6511.2.patch, HIVE-6511.3.patch, > HIVE-6511.4.patch > > > select dc,cast(dc as int), cast(dc as smallint),cast(dc as tinyint) from > vectortab10korc limit 20 generates following result when vectorization is > enabled: > {code} > 4619756289662.078125 -1628520834 -16770 126 > 1553532646710.316406 -1245514442 -2762 54 > 3367942487288.360352 688127224 -776-8 > 4386447830839.337891 1286221623 12087 55 > -3234165331139.458008 -54957251 27453 61 > -488378613475.326172 1247658269 -16099 29 > -493942492598.691406 -21253559 -19895 73 > 3101852523586.039062 886135874 23618 66 > 2544105595941.381836 1484956709 -23515 37 > -3997512403067.0625 1102149509 30597 -123 > -1183754978977.589355 1655994718 31070 94 > 1408783849655.676758 34576568-26440 -72 > -2993175106993.426758 417098319 27215 79 > 3004723551798.100586 -1753555402 -8650 54 > 1103792083527.786133 -14511544 -28088 72 > 469767055288.485352 1615620024 26552 -72 > -1263700791098.294434 -980406074 12486 -58 > -4244889766496.484375 -1462078048 30112 -96 > -3962729491139.782715 1525323068 -27332 60 > NULL NULLNULLNULL > {code} > When vectorization is disabled, result looks like this: > {code} > 4619756289662.078125 -1628520834 -16770 126 > 1553532646710.316406 -1245514442 -2762 54 > 3367942487288.360352 688127224 -776-8 > 4386447830839.337891 1286221623 12087 55 > -3234165331139.458008 -54957251 27453 61 > -488378613475.326172 1247658269 -16099 29 > 
-493942492598.691406 -21253558 -19894 74 > 3101852523586.039062 886135874 23618 66 > 2544105595941.381836 1484956709 -23515 37 > -3997512403067.0625 1102149509 30597 -123 > -1183754978977.589355 1655994719 31071 95 > 1408783849655.676758 34576567-26441 -73 > -2993175106993.426758 417098319 27215 79 > 3004723551798.100586 -1753555402 -8650 54 > 1103792083527.786133 -14511545 -28089 71 > 469767055288.485352 1615620024 26552 -72 > -1263700791098.294434 -980406074 12486 -58 > -4244889766496.484375 -1462078048 30112 -96 > -3962729491139.782715 1525323069 -27331 61 > NULL NULLNULLNULL > {code} > This issue is visible only for certain decimal values. In above example, row > 7,11,12, and 15 generates different results. > vectortab10korc table schema: > {code} > t tinyint from deserializer > sismallintfrom deserializer > i int from deserializer > b bigint from deserializer > f float from deserializer > d double from deserializer > dcdecimal(38,18) from deserializer > boboolean from deserializer > s string from deserializer > s2string from deserializer > tstimestamp from deserializer > > # Detailed Table Information > Database: default > Owner:xyz > CreateTime: Tue Feb 25 21:54:28 UTC 2014 > LastAccessTime: UNKNOWN > Protect Mode: None > Retention:0 > Location: > hdfs://host1.domain.com:8020/apps/hive/warehouse/vectortab10korc > Table Type: MANAGED_TABLE > Table Parameters: > COLUMN_STATS_ACCURATE true > numFiles1 > numRows 1 > rawDataSize 0 > totalSize 344748 > transient_lastDdlTime 1393365281 > > # Storage Information
[jira] [Commented] (HIVE-6511) casting from decimal to tinyint, smallint, int and bigint generates different result when vectorization is on
[ https://issues.apache.org/jira/browse/HIVE-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925265#comment-13925265 ] Jitendra Nath Pandey commented on HIVE-6511: I have committed this. > casting from decimal to tinyint,smallint, int and bigint generates different > result when vectorization is on > > > Key: HIVE-6511 > URL: https://issues.apache.org/jira/browse/HIVE-6511 > Project: Hive > Issue Type: Bug >Affects Versions: 0.13.0 >Reporter: Jitendra Nath Pandey >Assignee: Jitendra Nath Pandey > Fix For: 0.13.0, 0.14.0 > > Attachments: HIVE-6511.1.patch, HIVE-6511.2.patch, HIVE-6511.3.patch, > HIVE-6511.4.patch > > > select dc,cast(dc as int), cast(dc as smallint),cast(dc as tinyint) from > vectortab10korc limit 20 generates following result when vectorization is > enabled: > {code} > 4619756289662.078125 -1628520834 -16770 126 > 1553532646710.316406 -1245514442 -2762 54 > 3367942487288.360352 688127224 -776-8 > 4386447830839.337891 1286221623 12087 55 > -3234165331139.458008 -54957251 27453 61 > -488378613475.326172 1247658269 -16099 29 > -493942492598.691406 -21253559 -19895 73 > 3101852523586.039062 886135874 23618 66 > 2544105595941.381836 1484956709 -23515 37 > -3997512403067.0625 1102149509 30597 -123 > -1183754978977.589355 1655994718 31070 94 > 1408783849655.676758 34576568-26440 -72 > -2993175106993.426758 417098319 27215 79 > 3004723551798.100586 -1753555402 -8650 54 > 1103792083527.786133 -14511544 -28088 72 > 469767055288.485352 1615620024 26552 -72 > -1263700791098.294434 -980406074 12486 -58 > -4244889766496.484375 -1462078048 30112 -96 > -3962729491139.782715 1525323068 -27332 60 > NULL NULLNULLNULL > {code} > When vectorization is disabled, result looks like this: > {code} > 4619756289662.078125 -1628520834 -16770 126 > 1553532646710.316406 -1245514442 -2762 54 > 3367942487288.360352 688127224 -776-8 > 4386447830839.337891 1286221623 12087 55 > -3234165331139.458008 -54957251 27453 61 > 
-488378613475.326172 1247658269 -16099 29 > -493942492598.691406 -21253558 -19894 74 > 3101852523586.039062 886135874 23618 66 > 2544105595941.381836 1484956709 -23515 37 > -3997512403067.0625 1102149509 30597 -123 > -1183754978977.589355 1655994719 31071 95 > 1408783849655.676758 34576567-26441 -73 > -2993175106993.426758 417098319 27215 79 > 3004723551798.100586 -1753555402 -8650 54 > 1103792083527.786133 -14511545 -28089 71 > 469767055288.485352 1615620024 26552 -72 > -1263700791098.294434 -980406074 12486 -58 > -4244889766496.484375 -1462078048 30112 -96 > -3962729491139.782715 1525323069 -27331 61 > NULL NULLNULLNULL > {code} > This issue is visible only for certain decimal values. In above example, row > 7,11,12, and 15 generates different results. > vectortab10korc table schema: > {code} > t tinyint from deserializer > sismallintfrom deserializer > i int from deserializer > b bigint from deserializer > f float from deserializer > d double from deserializer > dcdecimal(38,18) from deserializer > boboolean from deserializer > s string from deserializer > s2string from deserializer > tstimestamp from deserializer > > # Detailed Table Information > Database: default > Owner:xyz > CreateTime: Tue Feb 25 21:54:28 UTC 2014 > LastAccessTime: UNKNOWN > Protect Mode: None > Retention:0 > Location: > hdfs://host1.domain.com:8020/apps/hive/warehouse/vectortab10korc > Table Type: MANAGED_TABLE > Table Parameters: > COLUMN_STATS_ACCURATE true > numFiles1 > numRows 1 > rawDataSize 0 > totalSize 344748 > transient_lastDdlTime 1393365281
[jira] [Updated] (HIVE-6511) casting from decimal to tinyint, smallint, int and bigint generates different result when vectorization is on
[ https://issues.apache.org/jira/browse/HIVE-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jitendra Nath Pandey updated HIVE-6511: --- Resolution: Fixed Fix Version/s: 0.14.0 0.13.0 Status: Resolved (was: Patch Available) > casting from decimal to tinyint,smallint, int and bigint generates different > result when vectorization is on > > > Key: HIVE-6511 > URL: https://issues.apache.org/jira/browse/HIVE-6511 > Project: Hive > Issue Type: Bug >Affects Versions: 0.13.0 >Reporter: Jitendra Nath Pandey >Assignee: Jitendra Nath Pandey > Fix For: 0.13.0, 0.14.0 > > Attachments: HIVE-6511.1.patch, HIVE-6511.2.patch, HIVE-6511.3.patch, > HIVE-6511.4.patch > > > select dc,cast(dc as int), cast(dc as smallint),cast(dc as tinyint) from > vectortab10korc limit 20 generates following result when vectorization is > enabled: > {code} > 4619756289662.078125 -1628520834 -16770 126 > 1553532646710.316406 -1245514442 -2762 54 > 3367942487288.360352 688127224 -776-8 > 4386447830839.337891 1286221623 12087 55 > -3234165331139.458008 -54957251 27453 61 > -488378613475.326172 1247658269 -16099 29 > -493942492598.691406 -21253559 -19895 73 > 3101852523586.039062 886135874 23618 66 > 2544105595941.381836 1484956709 -23515 37 > -3997512403067.0625 1102149509 30597 -123 > -1183754978977.589355 1655994718 31070 94 > 1408783849655.676758 34576568-26440 -72 > -2993175106993.426758 417098319 27215 79 > 3004723551798.100586 -1753555402 -8650 54 > 1103792083527.786133 -14511544 -28088 72 > 469767055288.485352 1615620024 26552 -72 > -1263700791098.294434 -980406074 12486 -58 > -4244889766496.484375 -1462078048 30112 -96 > -3962729491139.782715 1525323068 -27332 60 > NULL NULLNULLNULL > {code} > When vectorization is disabled, result looks like this: > {code} > 4619756289662.078125 -1628520834 -16770 126 > 1553532646710.316406 -1245514442 -2762 54 > 3367942487288.360352 688127224 -776-8 > 4386447830839.337891 1286221623 12087 55 > -3234165331139.458008 -54957251 27453 61 > 
-488378613475.326172 1247658269 -16099 29 > -493942492598.691406 -21253558 -19894 74 > 3101852523586.039062 886135874 23618 66 > 2544105595941.381836 1484956709 -23515 37 > -3997512403067.0625 1102149509 30597 -123 > -1183754978977.589355 1655994719 31071 95 > 1408783849655.676758 34576567-26441 -73 > -2993175106993.426758 417098319 27215 79 > 3004723551798.100586 -1753555402 -8650 54 > 1103792083527.786133 -14511545 -28089 71 > 469767055288.485352 1615620024 26552 -72 > -1263700791098.294434 -980406074 12486 -58 > -4244889766496.484375 -1462078048 30112 -96 > -3962729491139.782715 1525323069 -27331 61 > NULL NULLNULLNULL > {code} > This issue is visible only for certain decimal values. In above example, row > 7,11,12, and 15 generates different results. > vectortab10korc table schema: > {code} > t tinyint from deserializer > sismallintfrom deserializer > i int from deserializer > b bigint from deserializer > f float from deserializer > d double from deserializer > dcdecimal(38,18) from deserializer > boboolean from deserializer > s string from deserializer > s2string from deserializer > tstimestamp from deserializer > > # Detailed Table Information > Database: default > Owner:xyz > CreateTime: Tue Feb 25 21:54:28 UTC 2014 > LastAccessTime: UNKNOWN > Protect Mode: None > Retention:0 > Location: > hdfs://host1.domain.com:8020/apps/hive/warehouse/vectortab10korc > Table Type: MANAGED_TABLE > Table Parameters: > COLUMN_STATS_ACCURATE true > numFiles1 > numRows 1 > rawDataSize 0 > totalSize 344748
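[Editor's note] The vectorized outputs in the first row of the report above are consistent with truncating the decimal toward zero and then wrapping into the target width with two's-complement arithmetic. This sketch is an illustration of that arithmetic, not Hive's actual implementation:

```python
def wrap(value, bits):
    """Truncate toward zero, then wrap into a signed `bits`-wide integer."""
    v = int(value) % (1 << bits)                     # keep the low `bits` bits
    return v - (1 << bits) if v >= (1 << (bits - 1)) else v

dc = 4619756289662.078125                            # first row of the report
print(wrap(dc, 32), wrap(dc, 16), wrap(dc, 8))
# -1628520834 -16770 126  -- matches the vectorized int/smallint/tinyint output
```

The rows that differ between the two modes (7, 11, 12, 15) all shift by exactly one unit in the narrow types, which points at a rounding-direction difference before the wrap rather than at the wrap itself.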
[jira] [Commented] (HIVE-6531) Runtime errors in vectorized execution.
[ https://issues.apache.org/jira/browse/HIVE-6531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925256#comment-13925256 ] Jitendra Nath Pandey commented on HIVE-6531: I have committed this to trunk. [~rhbutani] This is a serious bug failing many queries in vectorized mode. It will be good to have it fixed in 0.13. I can commit to branch if you agree. The same patch applies to 0.13. > Runtime errors in vectorized execution. > --- > > Key: HIVE-6531 > URL: https://issues.apache.org/jira/browse/HIVE-6531 > Project: Hive > Issue Type: Bug >Reporter: Jitendra Nath Pandey >Assignee: Jitendra Nath Pandey > Attachments: HIVE-6531.1.patch, HIVE-6531.2.patch, HIVE-6531.3.patch > > > There are a few runtime errors observed in some of the tpcds queries for > following reasons: > 1) VectorFileSinkOperator fails with LazyBinarySerde. > 2) Decimal128 and Unsigned128 don't serialize correctly. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6147) Support avro data stored in HBase columns
[ https://issues.apache.org/jira/browse/HIVE-6147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swarnim Kulkarni updated HIVE-6147: --- Attachment: HIVE-6147.5.patch.txt New patch rebased with master. > Support avro data stored in HBase columns > - > > Key: HIVE-6147 > URL: https://issues.apache.org/jira/browse/HIVE-6147 > Project: Hive > Issue Type: Bug > Components: HBase Handler >Affects Versions: 0.12.0 >Reporter: Swarnim Kulkarni >Assignee: Swarnim Kulkarni > Attachments: HIVE-6147.1.patch.txt, HIVE-6147.2.patch.txt, > HIVE-6147.3.patch.txt, HIVE-6147.3.patch.txt, HIVE-6147.4.patch.txt, > HIVE-6147.5.patch.txt > > > Presently, the HBase Hive integration supports querying only primitive data > types in columns. It would be nice to be able to store and query Avro objects > in HBase columns by making them visible as structs to Hive. This will allow > Hive to perform ad hoc analysis of HBase data which can be deeply structured. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6570) Hive variable substitution does not work with the "source" command
[ https://issues.apache.org/jira/browse/HIVE-6570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925238#comment-13925238 ] Hive QA commented on HIVE-6570: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12633453/HIVE-6570.1.patch {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5374 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_root_dir_external_table {noformat} Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1678/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1678/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12633453 > Hive variable substitution does not work with the "source" command > -- > > Key: HIVE-6570 > URL: https://issues.apache.org/jira/browse/HIVE-6570 > Project: Hive > Issue Type: Bug >Reporter: Anthony Hsu >Assignee: Anthony Hsu > Attachments: HIVE-6570.1.patch > > > The following does not work: > {code} > source ${hivevar:test-dir}/test.q; > {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
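[Editor's note] The substitution the bug report expects can be sketched in a few lines; the function name and regex here are illustrative, not Hive's VariableSubstitution implementation:

```python
import re

def substitute(command, hivevars):
    """Expand ${hivevar:name} references, leaving unknown names untouched."""
    return re.sub(r"\$\{hivevar:([^}]+)\}",
                  lambda m: hivevars.get(m.group(1), m.group(0)),
                  command)

print(substitute("source ${hivevar:test-dir}/test.q;",
                 {"test-dir": "/tmp/queries"}))
# source /tmp/queries/test.q;
```

The reported bug is that this expansion step is applied to most CLI commands but skipped on the path argument of `source`, so the literal `${hivevar:test-dir}` reaches the filesystem.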
[jira] [Commented] (HIVE-6568) Vectorized cast of decimal to string and timestamp produces incorrect result.
[ https://issues.apache.org/jira/browse/HIVE-6568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925213#comment-13925213 ] Hive QA commented on HIVE-6568: --- {color:red}Overall{color}: -1 no tests executed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12633436/HIVE-6568.1.patch Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1677/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1677/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ [[ -n '' ]] + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128' + M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128' + cd /data/hive-ptest/working/ + tee /data/hive-ptest/logs/PreCommit-HIVE-Build-1677/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ svn = \s\v\n ]] + [[ -n '' ]] + [[ -d apache-svn-trunk-source ]] + [[ ! -d apache-svn-trunk-source/.svn ]] + [[ ! -d apache-svn-trunk-source ]] + cd apache-svn-trunk-source + svn revert -R . 
Reverted 'common/src/java/org/apache/hadoop/hive/common/FileUtils.java' Reverted 'ql/src/java/org/apache/hadoop/hive/ql/io/BucketizedHiveInputFormat.java' ++ egrep -v '^X|^Performing status on external' ++ awk '{print $2}' ++ svn status --no-ignore + rm -rf target datanucleus.log ant/target shims/target shims/0.20/target shims/0.20S/target shims/0.23/target shims/aggregator/target shims/common/target shims/common-secure/target packaging/target hbase-handler/target testutils/target jdbc/target metastore/target itests/target itests/hcatalog-unit/target itests/test-serde/target itests/qtest/target itests/hive-unit/target itests/custom-serde/target itests/util/target hcatalog/target hcatalog/storage-handlers/hbase/target hcatalog/server-extensions/target hcatalog/core/target hcatalog/webhcat/svr/target hcatalog/webhcat/java-client/target hcatalog/hcatalog-pig-adapter/target hwi/target common/target common/src/gen service/target contrib/target serde/target beeline/target odbc/target cli/target ql/dependency-reduced-pom.xml ql/target ql/src/test/results/clientpositive/bucket_if_with_path_filter.q.out ql/src/test/queries/clientpositive/bucket_if_with_path_filter.q + svn update Fetching external item into 'hcatalog/src/test/e2e/harness' External at revision 1575712. At revision 1575712. + patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hive-ptest/working/scratch/build.patch + [[ -f /data/hive-ptest/working/scratch/build.patch ]] + chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh + /data/hive-ptest/working/scratch/smart-apply-patch.sh /data/hive-ptest/working/scratch/build.patch The patch does not appear to apply with p0, p1, or p2 + exit 1 ' {noformat} This message is automatically generated. ATTACHMENT ID: 12633436 > Vectorized cast of decimal to string and timestamp produces incorrect result. 
> - > > Key: HIVE-6568 > URL: https://issues.apache.org/jira/browse/HIVE-6568 > Project: Hive > Issue Type: Bug > Components: Vectorization >Reporter: Jitendra Nath Pandey >Assignee: Jitendra Nath Pandey > Attachments: HIVE-6568.1.patch > > > A decimal value 1.23 with scale 5 is represented in string as 1.23000. This > behavior is different from HiveDecimal behavior. > The difference in cast to timestamp is due to more aggressive rounding in > vectorized expression. -- This message was sent by Atlassian JIRA (v6.2#6252)
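The trailing-zero difference described above can be reproduced outside Hive. This is an illustrative Python sketch of the two formatting behaviors, not the actual vectorized code path:

```python
from decimal import Decimal

def padded_to_scale(value: Decimal, scale: int) -> str:
    # Pads out to the declared scale, as the report says the vectorized
    # cast does: 1.23 at scale 5 becomes "1.23000".
    return f"{value:.{scale}f}"

def trailing_zeros_stripped(value: Decimal) -> str:
    # Normalizes away trailing zeros, matching HiveDecimal-style output.
    return str(value.normalize())

v = Decimal("1.23")
print(padded_to_scale(v, 5))       # → 1.23000
print(trailing_zeros_stripped(v))  # → 1.23
```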
[jira] [Commented] (HIVE-6585) bucket map join fails in presence of _SUCCESS file
[ https://issues.apache.org/jira/browse/HIVE-6585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925212#comment-13925212 ] Hive QA commented on HIVE-6585: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12633425/HIVE-6585.patch {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5375 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_bucket_num_reducers {noformat} Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1675/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1675/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12633425 > bucket map join fails in presence of _SUCCESS file > -- > > Key: HIVE-6585 > URL: https://issues.apache.org/jira/browse/HIVE-6585 > Project: Hive > Issue Type: Bug > Components: File Formats >Affects Versions: 0.12.0, 0.13.0 >Reporter: Ashutosh Chauhan >Assignee: Ashutosh Chauhan > Attachments: HIVE-6585.patch > > > Reason is missing path filters. -- This message was sent by Atlassian JIRA (v6.2#6252)
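The fix the description points at, missing path filters, amounts to skipping Hadoop metadata files such as _SUCCESS when listing bucket files. A minimal Python sketch of that convention (the listing paths here are made up):

```python
import os

def is_hidden(path: str) -> bool:
    # Hadoop convention: names starting with '_' or '.' are metadata,
    # e.g. the _SUCCESS marker a completed MapReduce job leaves behind.
    name = os.path.basename(path)
    return name.startswith("_") or name.startswith(".")

listing = ["/warehouse/t/_SUCCESS",
           "/warehouse/t/000000_0",
           "/warehouse/t/000001_0"]
buckets = [p for p in listing if not is_hidden(p)]
print(buckets)  # → ['/warehouse/t/000000_0', '/warehouse/t/000001_0']
```

Without such a filter, the bucket map join counts _SUCCESS as a bucket file and the bucket-matching logic breaks.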
[jira] [Commented] (HIVE-6511) casting from decimal to tinyint,smallint, int and bigint generates different result when vectorization is on
[ https://issues.apache.org/jira/browse/HIVE-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925195#comment-13925195 ] Hive QA commented on HIVE-6511: --- {color:green}Overall{color}: +1 all checks pass Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12633312/HIVE-6511.4.patch {color:green}SUCCESS:{color} +1 5374 tests passed Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1674/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1674/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12633312 > casting from decimal to tinyint,smallint, int and bigint generates different > result when vectorization is on > > > Key: HIVE-6511 > URL: https://issues.apache.org/jira/browse/HIVE-6511 > Project: Hive > Issue Type: Bug >Reporter: Jitendra Nath Pandey >Assignee: Jitendra Nath Pandey > Attachments: HIVE-6511.1.patch, HIVE-6511.2.patch, HIVE-6511.3.patch, > HIVE-6511.4.patch > > > select dc,cast(dc as int), cast(dc as smallint),cast(dc as tinyint) from > vectortab10korc limit 20 generates following result when vectorization is > enabled: > {code} > 4619756289662.078125 -1628520834 -16770 126 > 1553532646710.316406 -1245514442 -2762 54 > 3367942487288.360352 688127224 -776 -8 > 4386447830839.337891 1286221623 12087 55 > -3234165331139.458008 -54957251 27453 61 > -488378613475.326172 1247658269 -16099 29 > -493942492598.691406 -21253559 -19895 73 > 3101852523586.039062 886135874 23618 66 > 2544105595941.381836 1484956709 -23515 37 > -3997512403067.0625 1102149509 30597 -123 > -1183754978977.589355 1655994718 31070 94 > 1408783849655.676758 34576568 -26440 -72 > -2993175106993.426758 417098319 27215 79 > 
3004723551798.100586 -1753555402 -8650 54 > 1103792083527.786133 -14511544 -28088 72 > 469767055288.485352 1615620024 26552 -72 > -1263700791098.294434 -980406074 12486 -58 > -4244889766496.484375 -1462078048 30112 -96 > -3962729491139.782715 1525323068 -27332 60 > NULL NULL NULL NULL > {code} > When vectorization is disabled, result looks like this: > {code} > 4619756289662.078125 -1628520834 -16770 126 > 1553532646710.316406 -1245514442 -2762 54 > 3367942487288.360352 688127224 -776 -8 > 4386447830839.337891 1286221623 12087 55 > -3234165331139.458008 -54957251 27453 61 > -488378613475.326172 1247658269 -16099 29 > -493942492598.691406 -21253558 -19894 74 > 3101852523586.039062 886135874 23618 66 > 2544105595941.381836 1484956709 -23515 37 > -3997512403067.0625 1102149509 30597 -123 > -1183754978977.589355 1655994719 31071 95 > 1408783849655.676758 34576567 -26441 -73 > -2993175106993.426758 417098319 27215 79 > 3004723551798.100586 -1753555402 -8650 54 > 1103792083527.786133 -14511545 -28089 71 > 469767055288.485352 1615620024 26552 -72 > -1263700791098.294434 -980406074 12486 -58 > -4244889766496.484375 -1462078048 30112 -96 > -3962729491139.782715 1525323069 -27331 61 > NULL NULL NULL NULL > {code} > This issue is visible only for certain decimal values. In the above example, rows > 7, 11, 12, and 15 generate different results. > vectortab10korc table schema: > {code} > t tinyint from deserializer > si smallint from deserializer > i int from deserializer > b bigint from deserializer > f float from deserializer > d double from deserializer > dc decimal(38,18) from deserializer > bo boolean from deserializer > s string from deserializer > s2 string from deserializer > ts timestamp from deserializer > > # Detailed Table Information > Database: default > Owner: xyz > CreateTime: Tue Feb 25 21:54:28 UTC 2014 > LastAccessTime: UNKNOWN > Protect Mode: None
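The off-by-one rows are consistent with the two code paths rounding differently before the narrowing cast: truncation toward zero versus a more aggressive floor-style rounding. A hedged Python sketch using row 7 from the output above; this models the arithmetic only, not the actual vectorized implementation:

```python
import math
from decimal import Decimal

def to_int32(n: int) -> int:
    # Wrap an arbitrary integer into a signed 32-bit value,
    # like a Java (int) narrowing cast.
    n &= 0xFFFFFFFF
    return n - 0x100000000 if n >= 0x80000000 else n

dc = Decimal("-493942492598.691406")  # row 7 from the report

truncated = to_int32(int(dc))       # truncation toward zero
floored = to_int32(math.floor(dc))  # floor-style rounding

print(truncated)  # → -21253558 (the non-vectorized result above)
print(floored)    # → -21253559 (the vectorized result above)
```

For positive fractional values the two roundings agree, which is why only certain (negative, fractional) decimals show the discrepancy.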
[jira] [Commented] (HIVE-5155) Support secure proxy user access to HiveServer2
[ https://issues.apache.org/jira/browse/HIVE-5155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925185#comment-13925185 ] Hive QA commented on HIVE-5155: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12633424/HIVE-5155.4.patch {color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 5375 tests executed *Failed tests:* {noformat} org.apache.hive.jdbc.TestSSL.testSSLConnectionWithProperty org.apache.hive.jdbc.TestSSL.testSSLConnectionWithURL org.apache.hive.jdbc.TestSSL.testSSLFetch org.apache.hive.service.cli.session.TestSessionHooks.testProxyUser {noformat} Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1673/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1673/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 4 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12633424 > Support secure proxy user access to HiveServer2 > --- > > Key: HIVE-5155 > URL: https://issues.apache.org/jira/browse/HIVE-5155 > Project: Hive > Issue Type: Improvement > Components: Authentication, HiveServer2, JDBC >Affects Versions: 0.12.0 >Reporter: Prasad Mujumdar >Assignee: Prasad Mujumdar > Attachments: HIVE-5155-1-nothrift.patch, HIVE-5155-noThrift.2.patch, > HIVE-5155-noThrift.4.patch, HIVE-5155-noThrift.5.patch, > HIVE-5155-noThrift.6.patch, HIVE-5155-noThrift.7.patch, > HIVE-5155-noThrift.8.patch, HIVE-5155.1.patch, HIVE-5155.2.patch, > HIVE-5155.3.patch, HIVE-5155.4.patch, ProxyAuth.java, ProxyAuth.out, > TestKERBEROS_Hive_JDBC.java > > > HiveServer2 can authenticate a client via Kerberos and impersonate > the connecting user with the underlying secure hadoop cluster. This becomes a gateway for > a remote client to access a secure hadoop cluster. This works fine when > the client obtains a Kerberos ticket and directly connects to HiveServer2. > There's another big use case for middleware tools where the end user wants to > access Hive via another server. For example, an Oozie action or Hue submitting > queries, or a BI tool server accessing HiveServer2. In these cases, the > third party server doesn't have the end user's Kerberos credentials and hence it > can't submit queries to HiveServer2 on behalf of the end user. > This ticket is for enabling proxy access to HiveServer2 for third party tools > on behalf of end users. There are two parts to the solution proposed in this > ticket: > 1) Delegation token based connection for Oozie (OOZIE-1457) > This is the common mechanism for Hadoop ecosystem components. Hive Remote > Metastore and HCatalog already support this. This is suitable for a tool like > Oozie that submits MR jobs as actions on behalf of its clients. Oozie > already uses a similar mechanism for Metastore/HCatalog access. 
> 2) Direct proxy access for privileged hadoop users > The delegation token implementation can be a challenge for non-hadoop > (especially non-java) components. This second part enables a privileged user > to directly specify an alternate session user during the connection. If the > connecting user has hadoop level privilege to impersonate the requested > userid, then HiveServer2 will run the session as that requested user. For > example, user Hue is allowed to impersonate user Bob (via core-site.xml proxy > user configuration). Then user Hue can connect to HiveServer2 and specify Bob > as session user via a session property. HiveServer2 will verify Hue's proxy > user privilege and then impersonate user Bob instead of Hue. This will enable > any third party tool to impersonate alternate userid without having to > implement delegation token connection. -- This message was sent by Atlassian JIRA (v6.2#6252)
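The privilege check described in part 2 can be sketched as follows. The configuration data and function name are hypothetical, standing in for Hadoop's core-site.xml proxy-user settings:

```python
# Hypothetical stand-in for core-site.xml proxy-user configuration:
# user "hue" may impersonate the listed users.
proxy_config = {
    "hue": {"bob", "alice"},
}

def may_impersonate(connecting_user: str, session_user: str) -> bool:
    # HiveServer2 runs the session as session_user only if the
    # connecting (authenticated) user holds the proxy privilege.
    return session_user in proxy_config.get(connecting_user, set())

print(may_impersonate("hue", "bob"))  # → True
print(may_impersonate("bob", "hue"))  # → False
```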
Re: Review Request 18185: Support Kerberos HTTP authentication for HiveServer2 running in http mode
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/18185/ --- (Updated March 9, 2014, 11:08 a.m.) Review request for hive and Thejas Nair. Bugs: HIVE-4764 https://issues.apache.org/jira/browse/HIVE-4764 Repository: hive-git Description --- Support Kerberos HTTP authentication for HiveServer2 running in http mode Diffs - itests/hive-unit/src/test/java/org/apache/hive/service/cli/thrift/TestThriftHttpCLIService.java 57fda94 jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java 4102d7a jdbc/src/java/org/apache/hive/jdbc/HttpBasicAuthInterceptor.java 66eba1b jdbc/src/java/org/apache/hive/jdbc/HttpKerberosRequestInterceptor.java PRE-CREATION pom.xml 0669728 service/src/java/org/apache/hive/service/auth/HiveAuthFactory.java d8ba3aa service/src/java/org/apache/hive/service/auth/HttpAuthUtils.java PRE-CREATION service/src/java/org/apache/hive/service/auth/HttpAuthenticationException.java PRE-CREATION service/src/java/org/apache/hive/service/auth/HttpCLIServiceUGIProcessor.java PRE-CREATION service/src/java/org/apache/hive/service/cli/CLIService.java 2b1e712 service/src/java/org/apache/hive/service/cli/session/SessionManager.java bfe0e7b service/src/java/org/apache/hive/service/cli/thrift/ThriftBinaryCLIService.java 6fbc847 service/src/java/org/apache/hive/service/cli/thrift/ThriftCLIService.java 26bda5a service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpCLIService.java a6ff6ce service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpServlet.java e77f043 shims/common-secure/src/main/java/org/apache/hadoop/hive/thrift/HadoopThriftAuthBridge20S.java dc89de1 shims/common/src/main/java/org/apache/hadoop/hive/shims/HadoopShims.java 9e9a60d shims/common/src/main/java/org/apache/hadoop/hive/thrift/HadoopThriftAuthBridge.java 03f4e51 Diff: https://reviews.apache.org/r/18185/diff/ Testing (updated) --- Using beeline in a kerberos setup. Thanks, Vaibhav Gumashta
[jira] [Resolved] (HIVE-6485) Downgrade to httpclient-4.2.5 in JDBC from httpclient-4.3.2
[ https://issues.apache.org/jira/browse/HIVE-6485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta resolved HIVE-6485. Resolution: Fixed Changes will get in as part of HIVE-4764. > Downgrade to httpclient-4.2.5 in JDBC from httpclient-4.3.2 > --- > > Key: HIVE-6485 > URL: https://issues.apache.org/jira/browse/HIVE-6485 > Project: Hive > Issue Type: Bug > Components: JDBC >Affects Versions: 0.13.0 >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta > Fix For: 0.13.0 > > Attachments: HIVE-6485.1.patch > > > Had upgraded to the new version while adding SSL over Http mode support for > HiveServer2. But that conflicts with httpclient-4.2.5 which is in hadoop > classpath. I don't have a good reason to use httpclient-4.3.2, so it's better > to match hadoop. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-4764) Support Kerberos HTTP authentication for HiveServer2 running in http mode
[ https://issues.apache.org/jira/browse/HIVE-4764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-4764: --- Status: Patch Available (was: Open) > Support Kerberos HTTP authentication for HiveServer2 running in http mode > - > > Key: HIVE-4764 > URL: https://issues.apache.org/jira/browse/HIVE-4764 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2 >Affects Versions: 0.13.0 >Reporter: Thejas M Nair >Assignee: Vaibhav Gumashta > Fix For: 0.13.0 > > Attachments: HIVE-4764.1.patch, HIVE-4764.2.patch, HIVE-4764.3.patch, > HIVE-4764.4.patch > > > Support Kerberos authentication for HiveServer2 running in http mode. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-4764) Support Kerberos HTTP authentication for HiveServer2 running in http mode
[ https://issues.apache.org/jira/browse/HIVE-4764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-4764: --- Attachment: HIVE-4764.4.patch Merging changes from HIVE-6485 in this patch itself. > Support Kerberos HTTP authentication for HiveServer2 running in http mode > - > > Key: HIVE-4764 > URL: https://issues.apache.org/jira/browse/HIVE-4764 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2 >Affects Versions: 0.13.0 >Reporter: Thejas M Nair >Assignee: Vaibhav Gumashta > Fix For: 0.13.0 > > Attachments: HIVE-4764.1.patch, HIVE-4764.2.patch, HIVE-4764.3.patch, > HIVE-4764.4.patch > > > Support Kerberos authentication for HiveServer2 running in http mode. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-4764) Support Kerberos HTTP authentication for HiveServer2 running in http mode
[ https://issues.apache.org/jira/browse/HIVE-4764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-4764: --- Status: Open (was: Patch Available) > Support Kerberos HTTP authentication for HiveServer2 running in http mode > - > > Key: HIVE-4764 > URL: https://issues.apache.org/jira/browse/HIVE-4764 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2 >Affects Versions: 0.13.0 >Reporter: Thejas M Nair >Assignee: Vaibhav Gumashta > Fix For: 0.13.0 > > Attachments: HIVE-4764.1.patch, HIVE-4764.2.patch, HIVE-4764.3.patch, > HIVE-4764.4.patch > > > Support Kerberos authentication for HiveServer2 running in http mode. -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: Review Request 18185: Support Kerberos HTTP authentication for HiveServer2 running in http mode
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/18185/ --- (Updated March 9, 2014, 11:04 a.m.) Review request for hive and Thejas Nair. Changes --- Merging changes from HIVE-6485 + some more refactoring. Bugs: HIVE-4764 https://issues.apache.org/jira/browse/HIVE-4764 Repository: hive-git Description --- Support Kerberos HTTP authentication for HiveServer2 running in http mode Diffs (updated) - itests/hive-unit/src/test/java/org/apache/hive/service/cli/thrift/TestThriftHttpCLIService.java 57fda94 jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java 4102d7a jdbc/src/java/org/apache/hive/jdbc/HttpBasicAuthInterceptor.java 66eba1b jdbc/src/java/org/apache/hive/jdbc/HttpKerberosRequestInterceptor.java PRE-CREATION pom.xml 0669728 service/src/java/org/apache/hive/service/auth/HiveAuthFactory.java d8ba3aa service/src/java/org/apache/hive/service/auth/HttpAuthUtils.java PRE-CREATION service/src/java/org/apache/hive/service/auth/HttpAuthenticationException.java PRE-CREATION service/src/java/org/apache/hive/service/auth/HttpCLIServiceUGIProcessor.java PRE-CREATION service/src/java/org/apache/hive/service/cli/CLIService.java 2b1e712 service/src/java/org/apache/hive/service/cli/session/SessionManager.java bfe0e7b service/src/java/org/apache/hive/service/cli/thrift/ThriftBinaryCLIService.java 6fbc847 service/src/java/org/apache/hive/service/cli/thrift/ThriftCLIService.java 26bda5a service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpCLIService.java a6ff6ce service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpServlet.java e77f043 shims/common-secure/src/main/java/org/apache/hadoop/hive/thrift/HadoopThriftAuthBridge20S.java dc89de1 shims/common/src/main/java/org/apache/hadoop/hive/shims/HadoopShims.java 9e9a60d shims/common/src/main/java/org/apache/hadoop/hive/thrift/HadoopThriftAuthBridge.java 03f4e51 Diff: https://reviews.apache.org/r/18185/diff/ Testing --- Thanks, Vaibhav Gumashta
[jira] [Commented] (HIVE-6147) Support avro data stored in HBase columns
[ https://issues.apache.org/jira/browse/HIVE-6147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925165#comment-13925165 ] Hive QA commented on HIVE-6147: --- {color:red}Overall{color}: -1 no tests executed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12633416/HIVE-6147.4.patch.txt Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1672/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1672/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ [[ -n '' ]] + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128' + M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128' + cd /data/hive-ptest/working/ + tee /data/hive-ptest/logs/PreCommit-HIVE-Build-1672/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ svn = \s\v\n ]] + [[ -n '' ]] + [[ -d apache-svn-trunk-source ]] + [[ ! -d apache-svn-trunk-source/.svn ]] + [[ ! -d apache-svn-trunk-source ]] + cd apache-svn-trunk-source + svn revert -R . 
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ArrayWritableObjectInspector.java' ++ awk '{print $2}' ++ egrep -v '^X|^Performing status on external' ++ svn status --no-ignore + rm -rf target datanucleus.log ant/target shims/target shims/0.20/target shims/0.20S/target shims/0.23/target shims/aggregator/target shims/common/target shims/common-secure/target packaging/target hbase-handler/target testutils/target jdbc/target metastore/target itests/target itests/hcatalog-unit/target itests/test-serde/target itests/qtest/target itests/hive-unit/target itests/custom-serde/target itests/util/target hcatalog/target hcatalog/storage-handlers/hbase/target hcatalog/server-extensions/target hcatalog/core/target hcatalog/webhcat/svr/target hcatalog/webhcat/java-client/target hcatalog/hcatalog-pig-adapter/target hwi/target common/target common/src/gen service/target contrib/target serde/target beeline/target odbc/target cli/target ql/dependency-reduced-pom.xml ql/target ql/src/test/results/clientnegative/parquet_timestamp.q.out ql/src/test/results/clientnegative/parquet_char.q.out ql/src/test/results/clientnegative/parquet_date.q.out ql/src/test/results/clientnegative/parquet_decimal.q.out ql/src/test/results/clientnegative/parquet_varchar.q.out ql/src/test/queries/clientnegative/parquet_char.q ql/src/test/queries/clientnegative/parquet_timestamp.q ql/src/test/queries/clientnegative/parquet_decimal.q ql/src/test/queries/clientnegative/parquet_date.q ql/src/test/queries/clientnegative/parquet_varchar.q + svn update Fetching external item into 'hcatalog/src/test/e2e/harness' External at revision 1575684. At revision 1575684. 
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hive-ptest/working/scratch/build.patch + [[ -f /data/hive-ptest/working/scratch/build.patch ]] + chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh + /data/hive-ptest/working/scratch/smart-apply-patch.sh /data/hive-ptest/working/scratch/build.patch The patch does not appear to apply with p0, p1, or p2 + exit 1 ' {noformat} This message is automatically generated. ATTACHMENT ID: 12633416 > Support avro data stored in HBase columns > - > > Key: HIVE-6147 > URL: https://issues.apache.org/jira/browse/HIVE-6147 > Project: Hive > Issue Type: Bug > Components: HBase Handler >Affects Versions: 0.12.0 >Reporter: Swarnim Kulkarni >Assignee: Swarnim Kulkarni > Attachments: HIVE-6147.1.patch.txt, HIVE-6147.2.patch.txt, > HIVE-6147.3.patch.txt, HIVE-6147.3.patch.txt, HIVE-6147.4.patch.txt > > > Presently, the HBase Hive integration supports querying only primitive data > types in columns. It would be nice to be able to store and query Avro objects > in HBase columns by making them visible as structs to Hive. This will allow > Hive to perform ad hoc analysis of HBase data which can be deeply structured. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6457) Ensure Parquet integration has good error messages for data types not supported
[ https://issues.apache.org/jira/browse/HIVE-6457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925163#comment-13925163 ] Hive QA commented on HIVE-6457: --- {color:green}Overall{color}: +1 all checks pass Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12633429/HIVE-6457.patch {color:green}SUCCESS:{color} +1 5379 tests passed Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1671/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1671/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12633429 > Ensure Parquet integration has good error messages for data types not > supported > --- > > Key: HIVE-6457 > URL: https://issues.apache.org/jira/browse/HIVE-6457 > Project: Hive > Issue Type: Improvement > Components: Serializers/Deserializers >Affects Versions: 0.13.0 >Reporter: Brock Noland >Assignee: Brock Noland > Labels: parquet > Attachments: HIVE-6457.patch, HIVE-6457.patch > > -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-4764) Support Kerberos HTTP authentication for HiveServer2 running in http mode
[ https://issues.apache.org/jira/browse/HIVE-4764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925148#comment-13925148 ] Hive QA commented on HIVE-4764: --- {color:red}Overall{color}: -1 no tests executed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12631360/HIVE-4764.3.patch Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1670/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1670/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ [[ -n '' ]] + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128' + M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128' + cd /data/hive-ptest/working/ + tee /data/hive-ptest/logs/PreCommit-HIVE-Build-1670/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ svn = \s\v\n ]] + [[ -n '' ]] + [[ -d apache-svn-trunk-source ]] + [[ ! -d apache-svn-trunk-source/.svn ]] + [[ ! -d apache-svn-trunk-source ]] + cd apache-svn-trunk-source + svn revert -R . 
++ egrep -v '^X|^Performing status on external' ++ awk '{print $2}' ++ svn status --no-ignore + rm -rf target datanucleus.log ant/target shims/target shims/0.20/target shims/0.20S/target shims/0.23/target shims/aggregator/target shims/common/target shims/common-secure/target packaging/target hbase-handler/target testutils/target jdbc/target metastore/target itests/target itests/hcatalog-unit/target itests/test-serde/target itests/qtest/target itests/hive-unit/target itests/custom-serde/target itests/util/target hcatalog/target hcatalog/storage-handlers/hbase/target hcatalog/server-extensions/target hcatalog/core/target hcatalog/webhcat/svr/target hcatalog/webhcat/java-client/target hcatalog/hcatalog-pig-adapter/target hwi/target common/target common/src/gen service/target contrib/target serde/target beeline/target odbc/target cli/target ql/dependency-reduced-pom.xml ql/target + svn update Fetching external item into 'hcatalog/src/test/e2e/harness' External at revision 1575672. At revision 1575672. + patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hive-ptest/working/scratch/build.patch + [[ -f /data/hive-ptest/working/scratch/build.patch ]] + chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh + /data/hive-ptest/working/scratch/smart-apply-patch.sh /data/hive-ptest/working/scratch/build.patch The patch does not appear to apply with p0, p1, or p2 + exit 1 ' {noformat} This message is automatically generated. ATTACHMENT ID: 12631360 > Support Kerberos HTTP authentication for HiveServer2 running in http mode > - > > Key: HIVE-4764 > URL: https://issues.apache.org/jira/browse/HIVE-4764 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2 >Affects Versions: 0.13.0 >Reporter: Thejas M Nair >Assignee: Vaibhav Gumashta > Fix For: 0.13.0 > > Attachments: HIVE-4764.1.patch, HIVE-4764.2.patch, HIVE-4764.3.patch > > > Support Kerberos authentication for HiveServer2 running in http mode. 
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6393) Support unqualified column references in Joining conditions
[ https://issues.apache.org/jira/browse/HIVE-6393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925144#comment-13925144 ] Lefty Leverenz commented on HIVE-6393: -- This needs wiki documentation and perhaps a release note. (Is the fix version 0.13?) Here's the Joins doc: * [Language Manual: Hive Joins |https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Joins] > Support unqualified column references in Joining conditions > --- > > Key: HIVE-6393 > URL: https://issues.apache.org/jira/browse/HIVE-6393 > Project: Hive > Issue Type: Improvement >Reporter: Harish Butani >Assignee: Harish Butani > Attachments: HIVE-6393.1.patch, HIVE-6393.2.patch, HIVE-6393.3.patch > > > Support queries of the form: > {noformat} > create table r1(a int); > create table r2(b int); > select a, b > from r1 join r2 on a = b > {noformat} > This becomes more useful in old style syntax: > {noformat} > select a, b > from r1, r2 > where a = b > {noformat} -- This message was sent by Atlassian JIRA (v6.2#6252)