[jira] [Commented] (HIVE-9593) ORC Reader should ignore unknown metadata streams
[ https://issues.apache.org/jira/browse/HIVE-9593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572266#comment-14572266 ] Lefty Leverenz commented on HIVE-9593: -- Thanks Prasanth. > ORC Reader should ignore unknown metadata streams > -- > > Key: HIVE-9593 > URL: https://issues.apache.org/jira/browse/HIVE-9593 > Project: Hive > Issue Type: Bug > Components: File Formats >Affects Versions: 0.11.0, 0.12.0, 0.13.1, 1.0.0, 1.2.0, 1.1.0 >Reporter: Gopal V >Assignee: Owen O'Malley > Fix For: 1.1.0 > > Attachments: HIVE-9593.no-autogen.patch, hive-9593.patch > > > ORC readers should ignore metadata streams which are non-essential additions > to the main data streams. > This will include additional indices, histograms or anything we add as an > optional stream. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
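The principle behind this fix is worth seeing in miniature: a forward-compatible reader treats any stream kind it does not recognize as skippable bytes rather than as an error. Below is a minimal Java sketch of that pattern; the StreamKind values and Stream holder are hypothetical stand-ins, not ORC's actual protobuf metadata or Hive's reader code.
{code:java}
// Minimal sketch of the behavior this issue asks for, not Hive's actual ORC
// reader: when walking a stripe's stream list, skip stream kinds the reader
// does not recognize instead of failing. StreamKind and Stream are invented.
import java.util.Arrays;
import java.util.EnumSet;
import java.util.List;

public class SkipUnknownStreams {
  enum StreamKind { DATA, LENGTH, DICTIONARY_DATA, ROW_INDEX, FUTURE_HISTOGRAM }

  static class Stream {
    final StreamKind kind;
    final long length;
    Stream(StreamKind kind, long length) { this.kind = kind; this.length = length; }
  }

  // Only the kinds this reader version understands are treated as essential.
  private static final EnumSet<StreamKind> KNOWN = EnumSet.of(
      StreamKind.DATA, StreamKind.LENGTH,
      StreamKind.DICTIONARY_DATA, StreamKind.ROW_INDEX);

  static void readStripe(List<Stream> streams) {
    long offset = 0;
    for (Stream s : streams) {
      if (KNOWN.contains(s.kind)) {
        System.out.printf("reading %s at offset %d (%d bytes)%n", s.kind, offset, s.length);
      } else {
        // An optional stream added by a newer writer: account for its bytes
        // and move on rather than throwing.
        System.out.printf("ignoring unknown stream at offset %d (%d bytes)%n", offset, s.length);
      }
      offset += s.length;
    }
  }

  public static void main(String[] args) {
    readStripe(Arrays.asList(
        new Stream(StreamKind.DATA, 1024),
        new Stream(StreamKind.FUTURE_HISTOGRAM, 128), // ignored, not fatal
        new Stream(StreamKind.ROW_INDEX, 64)));
  }
}
{code}
A newer writer can then append histograms or extra indices and older readers keep working, which is exactly the compatibility guarantee the issue asks for.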
[jira] [Commented] (HIVE-10841) [WHERE col is not null] does not work sometimes for queries with many JOIN statements
[ https://issues.apache.org/jira/browse/HIVE-10841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572263#comment-14572263 ] Laljo John Pullokkaran commented on HIVE-10841: --- [~apivovarov] I see that predicate is being pushed down with the patch. See the attached explain below: hive> explain select acct.ACC_N, acct.brn FROM L JOIN LA ON L.id = LA.loan_id JOIN FR ON L.id = FR.loan_id JOIN A ON LA.aid = A.id JOIN PI ON PI.id = LA.pi_id JOIN acct ON A.id = acct.aid and acct.brn is not null WHERE L.id = 4436; OK STAGE DEPENDENCIES: Stage-12 is a root stage Stage-9 depends on stages: Stage-12 Stage-0 depends on stages: Stage-9 STAGE PLANS: Stage: Stage-12 Map Reduce Local Work Alias -> Map Local Tables: a Fetch Operator limit: -1 acct Fetch Operator limit: -1 fr Fetch Operator limit: -1 l Fetch Operator limit: -1 pi Fetch Operator limit: -1 Alias -> Map Local Operator Tree: a TableScan alias: a filterExpr: id is not null (type: boolean) Statistics: Num rows: 1 Data size: 4 Basic stats: COMPLETE Column stats: NONE Filter Operator predicate: id is not null (type: boolean) Statistics: Num rows: 1 Data size: 4 Basic stats: COMPLETE Column stats: NONE HashTable Sink Operator keys: 0 _col5 (type: int) 1 id (type: int) 2 aid (type: int) acct TableScan alias: acct filterExpr: (brn is not null and aid is not null) (type: boolean) Statistics: Num rows: 3 Data size: 31 Basic stats: COMPLETE Column stats: NONE Filter Operator predicate: (brn is not null and aid is not null) (type: boolean) Statistics: Num rows: 1 Data size: 10 Basic stats: COMPLETE Column stats: NONE HashTable Sink Operator keys: 0 _col5 (type: int) 1 id (type: int) 2 aid (type: int) fr TableScan alias: fr filterExpr: (loan_id = 4436) (type: boolean) Statistics: Num rows: 1 Data size: 4 Basic stats: COMPLETE Column stats: NONE Filter Operator predicate: (loan_id = 4436) (type: boolean) Statistics: Num rows: 1 Data size: 4 Basic stats: COMPLETE Column stats: NONE HashTable Sink Operator keys: 0 4436 (type: int) 1 4436 (type: int) 2 4436 (type: int) l TableScan alias: l filterExpr: (id = 4436) (type: boolean) Statistics: Num rows: 1 Data size: 4 Basic stats: COMPLETE Column stats: NONE Filter Operator predicate: (id = 4436) (type: boolean) Statistics: Num rows: 1 Data size: 4 Basic stats: COMPLETE Column stats: NONE HashTable Sink Operator keys: 0 4436 (type: int) 1 4436 (type: int) 2 4436 (type: int) pi TableScan alias: pi filterExpr: id is not null (type: boolean) Statistics: Num rows: 1 Data size: 4 Basic stats: COMPLETE Column stats: NONE Filter Operator predicate: id is not null (type: boolean) Statistics: Num rows: 1 Data size: 4 Basic stats: COMPLETE Column stats: NONE HashTable Sink Operator keys: 0 _col6 (type: int) 1 id (type: int) Stage: Stage-9 Map Reduce Map Operator Tree: TableScan alias: la filterExpr: (((loan_id is not null and aid is not null) and pi_id is not null) and (loan_id = 4436)) (type: boolean) Statistics: Num rows: 1 Data size: 14 Basic stats: COMPLETE Column stats: NONE Filter Operator predicate: (((loan_id is not null and aid is not null) and pi_id is not null) and (loan_id = 4436)) (type: boolean) Statistics: Num rows: 1 Data size: 14 Basic stats: COMPLETE Column stats: NONE Map Join Operator condition map: Inner Join 0 to 1 Inner Join 0 to 2 keys: 0 4436 (type: int) 1 4436 (type: int) 2 4436 (type: int) outputColumnNames: _col5, _col6 Statistics: Num rows: 2 Data size: 8 Basic stats: COMPLETE Column stats: NONE Filter Operator predicate: _col5 is not null (type: boolean)
[jira] [Commented] (HIVE-10555) Improve windowing spec of range based windowing to support additional range formats
[ https://issues.apache.org/jira/browse/HIVE-10555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572250#comment-14572250 ] Lefty Leverenz commented on HIVE-10555: --- Doc note: Subtasks that need documentation have been marked with TODOC1.3 labels. > Improve windowing spec of range based windowing to support additional range > formats > --- > > Key: HIVE-10555 > URL: https://issues.apache.org/jira/browse/HIVE-10555 > Project: Hive > Issue Type: Improvement > Components: PTF-Windowing >Affects Versions: 1.3.0 >Reporter: Aihua Xu >Assignee: Aihua Xu > Fix For: 1.3.0 > > > Currently windowing functions only support the formats {{x preceding and > current}}, {{x preceding and y following}}, and {{current and y following}}. > Windowing of {{x preceding and y preceding}} and {{x following and y > following}} doesn't work properly. > The following functions should be supported: > first_value(), last_value(), sum(), avg(), count(), min(), max() -- This message was sent by Atlassian JIRA (v6.3.4#6332)
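To make the frame formats concrete, here is a hedged JDBC sketch that runs one long-supported frame and one of the newly supported {{x preceding and y preceding}} frames. The connection URL, credentials, and the over10k test table with columns ts, t, f (borrowed from the sibling issues below) are assumptions; adjust them to your cluster.
{code:java}
// Hedged illustration of the window frames discussed above, run over JDBC.
// Requires the hive-jdbc driver on the classpath; URL, user, and the over10k
// table are placeholders taken from the related issues in this thread.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class WindowFrameDemo {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.hive.jdbc.HiveDriver"); // no-op on JDBC 4+
    try (Connection conn = DriverManager.getConnection(
             "jdbc:hive2://localhost:10000/default", "hive", "");
         Statement stmt = conn.createStatement()) {
      // Long-supported frame: x preceding .. current row.
      run(stmt, "select ts, f, avg(f) over (partition by ts order by t "
          + "rows between 2 preceding and current row) from over10k limit 10");
      // Frame added by this improvement: x preceding .. y preceding.
      run(stmt, "select ts, f, first_value(f) over (partition by ts order by t "
          + "rows between 2 preceding and 1 preceding) from over10k limit 10");
    }
  }

  private static void run(Statement stmt, String sql) throws Exception {
    try (ResultSet rs = stmt.executeQuery(sql)) {
      while (rs.next()) {
        System.out.println(rs.getString(1) + "\t" + rs.getFloat(2)
            + "\t" + rs.getFloat(3));
      }
    }
  }
}
{code}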
[jira] [Commented] (HIVE-10834) Support First_value()/last_value() over x preceding and y preceding windowing
[ https://issues.apache.org/jira/browse/HIVE-10834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572246#comment-14572246 ] Lefty Leverenz commented on HIVE-10834: --- Doc note: This needs to be documented in the wiki for the 1.3.0 release. * [Windowing and Analytics -- WINDOW clause | https://cwiki.apache.org/confluence/display/Hive/LanguageManual+WindowingAndAnalytics#LanguageManualWindowingAndAnalytics-WINDOWclause] > Support First_value()/last_value() over x preceding and y preceding windowing > - > > Key: HIVE-10834 > URL: https://issues.apache.org/jira/browse/HIVE-10834 > Project: Hive > Issue Type: Sub-task > Components: PTF-Windowing >Reporter: Aihua Xu >Assignee: Aihua Xu > Labels: TODOC1.3 > Fix For: 1.3.0 > > Attachments: HIVE-10834.patch > > > Currently the following query > {noformat} > select ts, f, first_value(f) over (partition by ts order by t rows between 2 > preceding and 1 preceding) from over10k limit 100; > {noformat} > throws exception: > {noformat} > java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: > Hive Runtime Error while processing row (tag=0) > {"key":{"reducesinkkey0":"2013-03-01 > 09:11:58.703071","reducesinkkey1":-3},"value":{"_col3":0.83}} > at > org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:256) > at > org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:506) > at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:447) > at > org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:449) > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime > Error while processing row (tag=0) {"key":{"reducesinkkey0":"2013-03-01 > 09:11:58.703071","reducesinkkey1":-3},"value":{"_col3":0.83}} > at > org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:244) > ... 3 more > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Internal Error: > cannot generate all output rows for a Partition > at > org.apache.hadoop.hive.ql.udf.ptf.WindowingTableFunction.finishPartition(WindowingTableFunction.java:519) > at > org.apache.hadoop.hive.ql.exec.PTFOperator$PTFInvocation.finishPartition(PTFOperator.java:337) > at > org.apache.hadoop.hive.ql.exec.PTFOperator.process(PTFOperator.java:114) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837) > at > org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:88) > at > org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:235) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-10834) Support First_value()/last_value() over x preceding and y preceding windowing
[ https://issues.apache.org/jira/browse/HIVE-10834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lefty Leverenz updated HIVE-10834: -- Labels: TODOC1.3 (was: ) > Support First_value()/last_value() over x preceding and y preceding windowing > - > > Key: HIVE-10834 > URL: https://issues.apache.org/jira/browse/HIVE-10834 > Project: Hive > Issue Type: Sub-task > Components: PTF-Windowing >Reporter: Aihua Xu >Assignee: Aihua Xu > Labels: TODOC1.3 > Fix For: 1.3.0 > > Attachments: HIVE-10834.patch > > > Currently the following query > {noformat} > select ts, f, first_value(f) over (partition by ts order by t rows between 2 > preceding and 1 preceding) from over10k limit 100; > {noformat} > throws exception: > {noformat} > java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: > Hive Runtime Error while processing row (tag=0) > {"key":{"reducesinkkey0":"2013-03-01 > 09:11:58.703071","reducesinkkey1":-3},"value":{"_col3":0.83}} > at > org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:256) > at > org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:506) > at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:447) > at > org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:449) > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime > Error while processing row (tag=0) {"key":{"reducesinkkey0":"2013-03-01 > 09:11:58.703071","reducesinkkey1":-3},"value":{"_col3":0.83}} > at > org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:244) > ... 3 more > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Internal Error: > cannot generate all output rows for a Partition > at > org.apache.hadoop.hive.ql.udf.ptf.WindowingTableFunction.finishPartition(WindowingTableFunction.java:519) > at > org.apache.hadoop.hive.ql.exec.PTFOperator$PTFInvocation.finishPartition(PTFOperator.java:337) > at > org.apache.hadoop.hive.ql.exec.PTFOperator.process(PTFOperator.java:114) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837) > at > org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:88) > at > org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:235) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-10826) Support min()/max() functions over x preceding and y preceding windowing
[ https://issues.apache.org/jira/browse/HIVE-10826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lefty Leverenz updated HIVE-10826: -- Labels: TODOC1.3 (was: ) > Support min()/max() functions over x preceding and y preceding windowing > - > > Key: HIVE-10826 > URL: https://issues.apache.org/jira/browse/HIVE-10826 > Project: Hive > Issue Type: Sub-task > Components: PTF-Windowing >Reporter: Aihua Xu >Assignee: Aihua Xu > Labels: TODOC1.3 > Fix For: 1.3.0 > > Attachments: HIVE-10826.patch > > > Currently the query > {noformat} > select key, value, min(value) over (partition by key order by value rows > between 1 preceding and 1 preceding) from small; > {noformat} > doesn't work. It failed with > {noformat} > java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: > Hive Runtime Error while processing row (tag=0) > {"key":{"reducesinkkey0":"2"},"value":{"_col0":"500"}} > at > org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:256) > at > org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:506) > at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:447) > at > org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:449) > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime > Error while processing row (tag=0) > {"key":{"reducesinkkey0":"2"},"value":{"_col0":"500"}} > at > org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:244) > ... 3 more > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Internal Error: > cannot generate all output rows for a Partition > at > org.apache.hadoop.hive.ql.udf.ptf.WindowingTableFunction.finishPartition(WindowingTableFunction.java:520) > at > org.apache.hadoop.hive.ql.exec.PTFOperator$PTFInvocation.finishPartition(PTFOperator.java:337) > at > org.apache.hadoop.hive.ql.exec.PTFOperator.process(PTFOperator.java:114) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837) > at > org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:88) > at > org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:235) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10904) Use beeline-log4j.properties for migrated CLI [beeline-cli Branch]
[ https://issues.apache.org/jira/browse/HIVE-10904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572241#comment-14572241 ] Chinna Rao Lalam commented on HIVE-10904: - Thanks [~leftylev], linked this issue to HIVE-10810. > Use beeline-log4j.properties for migrated CLI [beeline-cli Branch] > -- > > Key: HIVE-10904 > URL: https://issues.apache.org/jira/browse/HIVE-10904 > Project: Hive > Issue Type: Sub-task >Affects Versions: beeline-cli-branch >Reporter: Chinna Rao Lalam >Assignee: Chinna Rao Lalam > Attachments: HIVE-10904.patch > > > The updated CLI prints logs on the console. Use beeline-log4j.properties to > redirect them to a file. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10904) Use beeline-log4j.properties for migrated CLI [beeline-cli Branch]
[ https://issues.apache.org/jira/browse/HIVE-10904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572231#comment-14572231 ] Lefty Leverenz commented on HIVE-10904: --- Should this be documented? If so, please link it to HIVE-10810 (Document Beeline/CLI changes). > Use beeline-log4j.properties for migrated CLI [beeline-cli Branch] > -- > > Key: HIVE-10904 > URL: https://issues.apache.org/jira/browse/HIVE-10904 > Project: Hive > Issue Type: Sub-task >Affects Versions: beeline-cli-branch >Reporter: Chinna Rao Lalam >Assignee: Chinna Rao Lalam > Attachments: HIVE-10904.patch > > > The updated CLI prints logs on the console. Use beeline-log4j.properties to > redirect them to a file. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-10905) Quit&Exit fails ending with ';' [beeline-cli Branch]
[ https://issues.apache.org/jira/browse/HIVE-10905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chinna Rao Lalam updated HIVE-10905: Attachment: HIVE-10905.1.patch Hi [~Ferd], rebased the patch. > Quit&Exit fails ending with ';' [beeline-cli Branch] > > > Key: HIVE-10905 > URL: https://issues.apache.org/jira/browse/HIVE-10905 > Project: Hive > Issue Type: Sub-task >Affects Versions: beeline-cli-branch >Reporter: Chinna Rao Lalam >Assignee: Chinna Rao Lalam > Attachments: HIVE-10905.1.patch, HIVE-10905.patch > > > In the old CLI, quit and exit expect a trailing ';'. > In the updated CLI, quit and exit work without a trailing ';', > but quit and exit ending with ';' throw an exception. Support quit and exit > ending with ';' for compatibility. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10921) Change trunk pom version to reflect the branch-1 split
[ https://issues.apache.org/jira/browse/HIVE-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572194#comment-14572194 ] Hive QA commented on HIVE-10921: {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12737415/HIVE-10921.patch {color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 8992 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_autogen_colalias org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_nondeterministic org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx_cbo_2 {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4166/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4166/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-4166/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 3 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12737415 - PreCommit-HIVE-TRUNK-Build > Change trunk pom version to reflect the branch-1 split > -- > > Key: HIVE-10921 > URL: https://issues.apache.org/jira/browse/HIVE-10921 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Fix For: 2.0.0 > > Attachments: HIVE-10921.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10905) Quit&Exit fails ending with ';' [beeline-cli Branch]
[ https://issues.apache.org/jira/browse/HIVE-10905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572174#comment-14572174 ] Ferdinand Xu commented on HIVE-10905: - Could you rebase your patch to the latest code of branch "beeline cli"? Thank you! > Quit&Exit fails ending with ';' [beeline-cli Branch] > > > Key: HIVE-10905 > URL: https://issues.apache.org/jira/browse/HIVE-10905 > Project: Hive > Issue Type: Sub-task >Affects Versions: beeline-cli-branch >Reporter: Chinna Rao Lalam >Assignee: Chinna Rao Lalam > Attachments: HIVE-10905.patch > > > In the old CLI, quit and exit expect a trailing ';'. > In the updated CLI, quit and exit work without a trailing ';', > but quit and exit ending with ';' throw an exception. Support quit and exit > ending with ';' for compatibility. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10905) Quit&Exit fails ending with ';' [beeline-cli Branch]
[ https://issues.apache.org/jira/browse/HIVE-10905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572168#comment-14572168 ] Ferdinand Xu commented on HIVE-10905: - LGTM +1 > Quit&Exit fails ending with ';' [beeline-cli Branch] > > > Key: HIVE-10905 > URL: https://issues.apache.org/jira/browse/HIVE-10905 > Project: Hive > Issue Type: Sub-task >Affects Versions: beeline-cli-branch >Reporter: Chinna Rao Lalam >Assignee: Chinna Rao Lalam > Attachments: HIVE-10905.patch > > > In the old CLI, quit and exit expect a trailing ';'. > In the updated CLI, quit and exit work without a trailing ';', > but quit and exit ending with ';' throw an exception. Support quit and exit > ending with ';' for compatibility. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10904) Use beeline-log4j.properties for migrated CLI [beeline-cli Branch]
[ https://issues.apache.org/jira/browse/HIVE-10904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572167#comment-14572167 ] Ferdinand Xu commented on HIVE-10904: - LGTM +1 > Use beeline-log4j.properties for migrated CLI [beeline-cli Branch] > -- > > Key: HIVE-10904 > URL: https://issues.apache.org/jira/browse/HIVE-10904 > Project: Hive > Issue Type: Sub-task >Affects Versions: beeline-cli-branch >Reporter: Chinna Rao Lalam >Assignee: Chinna Rao Lalam > Attachments: HIVE-10904.patch > > > The updated CLI prints logs on the console. Use beeline-log4j.properties to > redirect them to a file. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-10904) Use beeline-log4j.properties for migrated CLI [beeline-cli Branch]
[ https://issues.apache.org/jira/browse/HIVE-10904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ferdinand Xu updated HIVE-10904: Issue Type: Sub-task (was: Bug) Parent: HIVE-10511 > Use beeline-log4j.properties for migrated CLI [beeline-cli Branch] > -- > > Key: HIVE-10904 > URL: https://issues.apache.org/jira/browse/HIVE-10904 > Project: Hive > Issue Type: Sub-task >Affects Versions: beeline-cli-branch >Reporter: Chinna Rao Lalam >Assignee: Chinna Rao Lalam > Attachments: HIVE-10904.patch > > > The updated CLI prints logs on the console. Use beeline-log4j.properties to > redirect them to a file. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-10736) LLAP: HiveServer2 shutdown of cached tez app-masters is not clean
[ https://issues.apache.org/jira/browse/HIVE-10736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vikram Dixit K updated HIVE-10736: -- Attachment: HIVE-10736.2.patch > LLAP: HiveServer2 shutdown of cached tez app-masters is not clean > - > > Key: HIVE-10736 > URL: https://issues.apache.org/jira/browse/HIVE-10736 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2 >Reporter: Gopal V >Assignee: Vikram Dixit K > Attachments: HIVE-10736.1.patch, HIVE-10736.2.patch > > > The shutdown process throws concurrent modification exceptions and fails to > clean up the app masters per queue. > {code} > 2015-05-17 20:24:00,464 INFO [Thread-6()]: service.AbstractService > (AbstractService.java:stop(125)) - Service:OperationManager is stopped. > 2015-05-17 20:24:00,464 INFO [Thread-6()]: service.AbstractService > (AbstractService.java:stop(125)) - Service:SessionManager is stopped. > 2015-05-17 20:24:00,464 INFO [Thread-9()]: tez.TezSessionPoolManager > (TezSessionPoolManager.java:close(175)) - Closing tez session default? true > 2015-05-17 20:24:00,465 INFO [Thread-9()]: tez.TezSessionPoolManager > (TezSessionPoolManager.java:close(175)) - Closing tez session default? true > 2015-05-17 20:24:00,465 INFO [Thread-9()]: tez.TezSessionPoolManager > (TezSessionPoolManager.java:close(175)) - Closing tez session default? true > 2015-05-17 20:24:00,465 INFO [Thread-9()]: tez.TezSessionPoolManager > (TezSessionPoolManager.java:close(175)) - Closing tez session default? true > 2015-05-17 20:24:00,465 INFO [Thread-9()]: tez.TezSessionPoolManager > (TezSessionPoolManager.java:close(175)) - Closing tez session default? true > 2015-05-17 20:24:00,465 INFO [Thread-9()]: tez.TezSessionPoolManager > (TezSessionPoolManager.java:close(175)) - Closing tez session default? true > 2015-05-17 20:24:00,465 INFO [Thread-9()]: tez.TezSessionPoolManager > (TezSessionPoolManager.java:close(175)) - Closing tez session default? true > 2015-05-17 20:24:00,465 INFO [Thread-9()]: tez.TezSessionPoolManager > (TezSessionPoolManager.java:close(175)) - Closing tez session default? true > 2015-05-17 20:24:00,465 INFO [Thread-6()]: service.AbstractService > (AbstractService.java:stop(125)) - Service:CLIService is stopped. > 2015-05-17 20:24:00,465 INFO [Thread-6()]: service.AbstractService > (AbstractService.java:stop(125)) - Service:HiveServer2 is stopped. > 2015-05-17 20:24:00,465 INFO [Thread-6()]: tez.TezSessionState > (TezSessionState.java:close(332)) - Closing Tez Session > 2015-05-17 20:24:00,466 INFO [Thread-6()]: client.TezClient > (TezClient.java:stop(495)) - Shutting down Tez Session, > sessionName=HIVE-94cc629d-63bc-490a-a135-af85c0cc0f2e, > applicationId=application_1431919257083_0012 > 2015-05-17 20:24:00,570 ERROR [Thread-6()]: server.HiveServer2 > (HiveServer2.java:stop(322)) - Tez session pool manager stop had an error > during stop of HiveServer2. Shutting down HiveServer2 anyway. > java.util.ConcurrentModificationException > at > java.util.LinkedList$ListItr.checkForComodification(LinkedList.java:966) > at java.util.LinkedList$ListItr.next(LinkedList.java:888) > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.stop(TezSessionPoolManager.java:187) > at > org.apache.hive.service.server.HiveServer2.stop(HiveServer2.java:320) > at > org.apache.hive.service.server.HiveServer2$1.run(HiveServer2.java:107) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
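The trace is the textbook failure of iterating a collection that another shutdown thread is mutating. A common remedy, sketched below with invented names (this is not TezSessionPoolManager's actual code, and not necessarily the approach the attached patch takes), is to snapshot the list under the same lock that guards registration and close sessions from the snapshot:
{code:java}
// Illustrative sketch of avoiding the ConcurrentModificationException in the
// trace above: never iterate the live session list while other threads may
// add/remove entries; snapshot under the lock and close from the snapshot.
// SessionPool and Session are stand-ins, not Hive's actual classes.
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class SessionPool {
  interface Session { void close(boolean keepAlive) throws Exception; }

  private final List<Session> openSessions = new LinkedList<>();
  private final Object lock = new Object();

  public void register(Session s) {
    synchronized (lock) { openSessions.add(s); }
  }

  public void unregister(Session s) {
    synchronized (lock) { openSessions.remove(s); }
  }

  // Shutdown path: closing a session may call back into unregister(), which
  // would blow up a plain for-each over openSessions. Snapshot first.
  public void stop() {
    List<Session> snapshot;
    synchronized (lock) {
      snapshot = new ArrayList<>(openSessions);
      openSessions.clear();
    }
    for (Session s : snapshot) {
      try {
        s.close(false);
      } catch (Exception e) {
        // Best effort on shutdown: log and keep closing the rest.
        System.err.println("Failed to close session: " + e);
      }
    }
  }
}
{code}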
[jira] [Updated] (HIVE-10735) Cached plan race condition - VectorMapJoinCommonOperator has no closeOp()
[ https://issues.apache.org/jira/browse/HIVE-10735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gunther Hagleitner updated HIVE-10735: -- Issue Type: Bug (was: Sub-task) Parent: (was: HIVE-7926) > Cached plan race condition - VectorMapJoinCommonOperator has no closeOp() > - > > Key: HIVE-10735 > URL: https://issues.apache.org/jira/browse/HIVE-10735 > Project: Hive > Issue Type: Bug > Components: Vectorization >Reporter: Gopal V >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-10705.01.patch, HIVE-10705.02.patch > > > Looks like some state is mutated during execution across threads in LLAP. > Either we can't share the operator objects across threads, because they are > tied to the data objects per invocation or this is missing a closeOp() which > resets the common-setup between reuses. > {code} > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: > org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.ArrayIndexOutOfBoundsException > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerBigOnlyLongOperator.process(VectorMapJoinInnerBigOnlyLongOperator.java:380) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:850) > at > org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator.process(VectorFilterOperator.java:114) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:850) > at > org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:97) > at > org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:164) > at > org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:45) > ... 18 more > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.ArrayIndexOutOfBoundsException > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerLongOperator.process(VectorMapJoinInnerLongOperator.java:379) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:850) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.forwardBigTableBatch(VectorMapJoinGenerateResultOperator.java:599) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerBigOnlyGenerateResultOperator.generateHashMultiSetResultRepeatedAll(VectorMapJoinInnerBigOnlyGenerateResultOperator.java:304) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerBigOnlyGenerateResultOperator.finishInnerBigOnlyRepeated(VectorMapJoinInnerBigOnlyGenerateResultOperator.java:328) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerBigOnlyLongOperator.process(VectorMapJoinInnerBigOnlyLongOperator.java:201) > ... 24 more > Caused by: java.lang.ArrayIndexOutOfBoundsException > at > org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector.setVal(BytesColumnVector.java:152) > at > org.apache.hadoop.hive.ql.exec.vector.VectorDeserializeRow$StringReaderByValue.apply(VectorDeserializeRow.java:349) > at > org.apache.hadoop.hive.ql.exec.vector.VectorDeserializeRow.deserializeByValue(VectorDeserializeRow.java:688) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.generateHashMapResultSingleValue(VectorMapJoinGenerateResultOperator.java:177) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerGenerateResultOperator.finishInner(VectorMapJoinInnerGenerateResultOperator.java:201) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerLongOperator.process(VectorMapJoinInnerLongOperator.java:359) > ... 29 more > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
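As a rough illustration of the second option in the description, the sketch below shows what a closeOp() that resets per-invocation scratch state looks like. All names are invented; Hive's real Operator lifecycle has more to it, so treat this as the shape of the fix, not the patch itself:
{code:java}
// Hedged sketch of the "missing closeOp()" idea: an operator that lazily
// builds per-run scratch state must tear it down in closeOp(), otherwise a
// cached plan reused by another thread/fragment sees stale buffers. The
// class and field names are illustrative, not Hive's actual implementation.
public abstract class CachedOperator {
  // Per-invocation scratch state, built on the first process() call.
  private int[] scratchProjection;
  private boolean initialized;

  protected void ensureInitialized(int width) {
    if (!initialized) {
      scratchProjection = new int[width];
      initialized = true;
    }
  }

  public void process(Object batch) {
    ensureInitialized(16);
    // ... project columns through scratchProjection into the output batch ...
  }

  // Without this reset, the next run of a cached plan starts from whatever a
  // previous (possibly concurrent) run left behind -- the kind of shared-state
  // race that surfaces as the ArrayIndexOutOfBoundsException in the trace.
  protected void closeOp(boolean aborted) {
    scratchProjection = null;
    initialized = false;
  }
}
{code}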
[jira] [Updated] (HIVE-10735) Cached plan race condition - VectorMapJoinCommonOperator has no closeOp()
[ https://issues.apache.org/jira/browse/HIVE-10735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gunther Hagleitner updated HIVE-10735: -- Summary: Cached plan race condition - VectorMapJoinCommonOperator has no closeOp() (was: LLAP: Cached plan race condition - VectorMapJoinCommonOperator has no closeOp()) > Cached plan race condition - VectorMapJoinCommonOperator has no closeOp() > - > > Key: HIVE-10735 > URL: https://issues.apache.org/jira/browse/HIVE-10735 > Project: Hive > Issue Type: Sub-task > Components: Vectorization >Reporter: Gopal V >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-10705.01.patch, HIVE-10705.02.patch > > > Looks like some state is mutated during execution across threads in LLAP. > Either we can't share the operator objects across threads, because they are > tied to the data objects per invocation or this is missing a closeOp() which > resets the common-setup between reuses. > {code} > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: > org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.ArrayIndexOutOfBoundsException > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerBigOnlyLongOperator.process(VectorMapJoinInnerBigOnlyLongOperator.java:380) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:850) > at > org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator.process(VectorFilterOperator.java:114) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:850) > at > org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:97) > at > org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:164) > at > org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:45) > ... 18 more > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.ArrayIndexOutOfBoundsException > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerLongOperator.process(VectorMapJoinInnerLongOperator.java:379) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:850) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.forwardBigTableBatch(VectorMapJoinGenerateResultOperator.java:599) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerBigOnlyGenerateResultOperator.generateHashMultiSetResultRepeatedAll(VectorMapJoinInnerBigOnlyGenerateResultOperator.java:304) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerBigOnlyGenerateResultOperator.finishInnerBigOnlyRepeated(VectorMapJoinInnerBigOnlyGenerateResultOperator.java:328) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerBigOnlyLongOperator.process(VectorMapJoinInnerBigOnlyLongOperator.java:201) > ... 24 more > Caused by: java.lang.ArrayIndexOutOfBoundsException > at > org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector.setVal(BytesColumnVector.java:152) > at > org.apache.hadoop.hive.ql.exec.vector.VectorDeserializeRow$StringReaderByValue.apply(VectorDeserializeRow.java:349) > at > org.apache.hadoop.hive.ql.exec.vector.VectorDeserializeRow.deserializeByValue(VectorDeserializeRow.java:688) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.generateHashMapResultSingleValue(VectorMapJoinGenerateResultOperator.java:177) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerGenerateResultOperator.finishInner(VectorMapJoinInnerGenerateResultOperator.java:201) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerLongOperator.process(VectorMapJoinInnerLongOperator.java:359) > ... 29 more > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-10919) Windows: create table with JsonSerDe failed via beeline unless you add hcatalog core jar to classpath
[ https://issues.apache.org/jira/browse/HIVE-10919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gunther Hagleitner updated HIVE-10919: -- Fix Version/s: 1.3.0 > Windows: create table with JsonSerDe failed via beeline unless you add > hcatalog core jar to classpath > - > > Key: HIVE-10919 > URL: https://issues.apache.org/jira/browse/HIVE-10919 > Project: Hive > Issue Type: Bug >Reporter: Hari Sankar Sivarama Subramaniyan >Assignee: Hari Sankar Sivarama Subramaniyan > Fix For: 1.3.0 > > Attachments: HIVE-10919.1.patch > > > NO PRECOMMIT TESTS > Before we run HiveServer2 tests, we create table via beeline. > And 'create table' with JsonSerDe failed on Windows. It works on Linux: > {noformat} > 0: jdbc:hive2://localhost:10001> create external table all100kjson( > 0: jdbc:hive2://localhost:10001> s string, > 0: jdbc:hive2://localhost:10001> i int, > 0: jdbc:hive2://localhost:10001> d double, > 0: jdbc:hive2://localhost:10001> m map, > 0: jdbc:hive2://localhost:10001> bb array>, > 0: jdbc:hive2://localhost:10001> t timestamp) > 0: jdbc:hive2://localhost:10001> row format serde > 'org.apache.hive.hcatalog.data.JsonSerDe' > 0: jdbc:hive2://localhost:10001> WITH SERDEPROPERTIES > ('timestamp.formats'='yyyy-MM-dd\'T\'HH:mm:ss') > 0: jdbc:hive2://localhost:10001> STORED AS TEXTFILE > 0: jdbc:hive2://localhost:10001> location '/user/hcat/tests/data/all100kjson'; > Error: Error while processing statement: FAILED: Execution Error, return code > 1 from org.apache.hadoop.hive.ql.exec.DDLT > ask. Cannot validate serde: org.apache.hive.hcatalog.data.JsonSerDe > (state=08S01,code=1) > {noformat} > hive.log shows: > {noformat} > 2015-05-21 21:59:17,004 ERROR operation.Operation > (SQLOperation.java:run(209)) - Error running hive query: > org.apache.hive.service.cli.HiveSQLException: Error while processing > statement: FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.exec.DDLTask. Cannot validate serde: > org.apache.hive.hcatalog.data.JsonSerDe > at > org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:315) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:156) > at > org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:71) > at > org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:206) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) > at > org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:218) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Cannot validate > serde: org.apache.hive.hcatalog.data.JsonSerDe > at > org.apache.hadoop.hive.ql.exec.DDLTask.validateSerDe(DDLTask.java:3871) > at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4011) > at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:306) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1650) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1409) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1192) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1054) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:154) > ... 11 more > Caused by: java.lang.ClassNotFoundException: Class > org.apache.hive.hcatalog.data.JsonSerDe not found > at > org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2101) > at > org.apache.hadoop.hive.ql.exec.DDLTask.validateSerDe(DDLTask.java:3865) > ... 21 more > {noformat} > If you do add the hcatalog jar to classpath, it works: > {noformat}0: jdbc:hive2://localhost:10001> add jar > hdfs:///tmp/testjars/hive-hcatalog-core-1.2.0.2.3.0.0-2079.jar; > INFO : converting to local > hdfs:///tmp/testjars/hive-hcatalog-core-1.2.0.2.3.0.0-2079.jar > INFO : Added > [/C:/Users/hadoop/AppDat
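The root cause sits at the bottom of the trace: DDLTask validates the serde by loading its class, so the failure can be reproduced and diagnosed with a plain classpath probe. The standalone check below is illustrative only, not Hive's code:
{code:java}
// Small hedged repro of the failure path in the trace above: DDLTask
// validates the serde by loading its class, so if hive-hcatalog-core is not
// on HiveServer2's classpath the CREATE TABLE fails before anything runs.
public class SerdeClasspathCheck {
  public static void main(String[] args) {
    String serde = "org.apache.hive.hcatalog.data.JsonSerDe";
    try {
      Class.forName(serde, false, Thread.currentThread().getContextClassLoader());
      System.out.println("OK: " + serde + " is on the classpath");
    } catch (ClassNotFoundException e) {
      // Same root cause as 'Cannot validate serde' above; fix by putting the
      // hcatalog core jar on the server classpath or running 'add jar ...'.
      System.out.println("Missing: " + serde
          + " -- add hive-hcatalog-core to the classpath");
    }
  }
}
{code}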
[jira] [Resolved] (HIVE-10919) Windows: create table with JsonSerDe failed via beeline unless you add hcatalog core jar to classpath
[ https://issues.apache.org/jira/browse/HIVE-10919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gunther Hagleitner resolved HIVE-10919. --- Resolution: Fixed > Windows: create table with JsonSerDe failed via beeline unless you add > hcatalog core jar to classpath > - > > Key: HIVE-10919 > URL: https://issues.apache.org/jira/browse/HIVE-10919 > Project: Hive > Issue Type: Bug >Reporter: Hari Sankar Sivarama Subramaniyan >Assignee: Hari Sankar Sivarama Subramaniyan > Attachments: HIVE-10919.1.patch > > > NO PRECOMMIT TESTS > Before we run HiveServer2 tests, we create table via beeline. > And 'create table' with JsonSerDe failed on Windows. It works on Linux: > {noformat} > 0: jdbc:hive2://localhost:10001> create external table all100kjson( > 0: jdbc:hive2://localhost:10001> s string, > 0: jdbc:hive2://localhost:10001> i int, > 0: jdbc:hive2://localhost:10001> d double, > 0: jdbc:hive2://localhost:10001> m map, > 0: jdbc:hive2://localhost:10001> bb array>, > 0: jdbc:hive2://localhost:10001> t timestamp) > 0: jdbc:hive2://localhost:10001> row format serde > 'org.apache.hive.hcatalog.data.JsonSerDe' > 0: jdbc:hive2://localhost:10001> WITH SERDEPROPERTIES > ('timestamp.formats'='yyyy-MM-dd\'T\'HH:mm:ss') > 0: jdbc:hive2://localhost:10001> STORED AS TEXTFILE > 0: jdbc:hive2://localhost:10001> location '/user/hcat/tests/data/all100kjson'; > Error: Error while processing statement: FAILED: Execution Error, return code > 1 from org.apache.hadoop.hive.ql.exec.DDLT > ask. Cannot validate serde: org.apache.hive.hcatalog.data.JsonSerDe > (state=08S01,code=1) > {noformat} > hive.log shows: > {noformat} > 2015-05-21 21:59:17,004 ERROR operation.Operation > (SQLOperation.java:run(209)) - Error running hive query: > org.apache.hive.service.cli.HiveSQLException: Error while processing > statement: FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.exec.DDLTask. Cannot validate serde: > org.apache.hive.hcatalog.data.JsonSerDe > at > org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:315) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:156) > at > org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:71) > at > org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:206) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) > at > org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:218) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Cannot validate > serde: org.apache.hive.hcatalog.data.JsonSerDe > at > org.apache.hadoop.hive.ql.exec.DDLTask.validateSerDe(DDLTask.java:3871) > at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4011) > at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:306) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1650) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1409) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1192) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1054) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:154) > ... 11 more > Caused by: java.lang.ClassNotFoundException: Class > org.apache.hive.hcatalog.data.JsonSerDe not found > at > org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2101) > at > org.apache.hadoop.hive.ql.exec.DDLTask.validateSerDe(DDLTask.java:3865) > ... 21 more > {noformat} > If you do add the hcatalog jar to classpath, it works: > {noformat}0: jdbc:hive2://localhost:10001> add jar > hdfs:///tmp/testjars/hive-hcatalog-core-1.2.0.2.3.0.0-2079.jar; > INFO : converting to local > hdfs:///tmp/testjars/hive-hcatalog-core-1.2.0.2.3.0.0-2079.jar > INFO : Added > [/C:/Users/hadoop/AppData/Local/Temp/bc941dac-3bca-4287-
[jira] [Commented] (HIVE-10919) Windows: create table with JsonSerDe failed via beeline unless you add hcatalog core jar to classpath
[ https://issues.apache.org/jira/browse/HIVE-10919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572096#comment-14572096 ] Gunther Hagleitner commented on HIVE-10919: --- Committed to master. Thanks [~hsubramaniyan]. > Windows: create table with JsonSerDe failed via beeline unless you add > hcatalog core jar to classpath > - > > Key: HIVE-10919 > URL: https://issues.apache.org/jira/browse/HIVE-10919 > Project: Hive > Issue Type: Bug >Reporter: Hari Sankar Sivarama Subramaniyan >Assignee: Hari Sankar Sivarama Subramaniyan > Attachments: HIVE-10919.1.patch > > > NO PRECOMMIT TESTS > Before we run HiveServer2 tests, we create table via beeline. > And 'create table' with JsonSerDe failed on Windows. It works on Linux: > {noformat} > 0: jdbc:hive2://localhost:10001> create external table all100kjson( > 0: jdbc:hive2://localhost:10001> s string, > 0: jdbc:hive2://localhost:10001> i int, > 0: jdbc:hive2://localhost:10001> d double, > 0: jdbc:hive2://localhost:10001> m map, > 0: jdbc:hive2://localhost:10001> bb array>, > 0: jdbc:hive2://localhost:10001> t timestamp) > 0: jdbc:hive2://localhost:10001> row format serde > 'org.apache.hive.hcatalog.data.JsonSerDe' > 0: jdbc:hive2://localhost:10001> WITH SERDEPROPERTIES > ('timestamp.formats'='yyyy-MM-dd\'T\'HH:mm:ss') > 0: jdbc:hive2://localhost:10001> STORED AS TEXTFILE > 0: jdbc:hive2://localhost:10001> location '/user/hcat/tests/data/all100kjson'; > Error: Error while processing statement: FAILED: Execution Error, return code > 1 from org.apache.hadoop.hive.ql.exec.DDLT > ask. Cannot validate serde: org.apache.hive.hcatalog.data.JsonSerDe > (state=08S01,code=1) > {noformat} > hive.log shows: > {noformat} > 2015-05-21 21:59:17,004 ERROR operation.Operation > (SQLOperation.java:run(209)) - Error running hive query: > org.apache.hive.service.cli.HiveSQLException: Error while processing > statement: FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.exec.DDLTask. Cannot validate serde: > org.apache.hive.hcatalog.data.JsonSerDe > at > org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:315) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:156) > at > org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:71) > at > org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:206) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) > at > org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:218) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Cannot validate > serde: org.apache.hive.hcatalog.data.JsonSerDe > at > org.apache.hadoop.hive.ql.exec.DDLTask.validateSerDe(DDLTask.java:3871) > at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4011) > at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:306) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1650) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1409) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1192) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1054) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:154) > ... 11 more > Caused by: java.lang.ClassNotFoundException: Class > org.apache.hive.hcatalog.data.JsonSerDe not found > at > org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2101) > at > org.apache.hadoop.hive.ql.exec.DDLTask.validateSerDe(DDLTask.java:3865) > ... 21 more > {noformat} > If you do add the hcatalog jar to classpath, it works: > {noformat}0: jdbc:hive2://localhost:10001> add jar > hdfs:///tmp/testjars/hive-hcatalog-core-1.2.0.2.3.0.0-2079.jar; > INFO : converting to local > hdfs:///tmp/testjars/hive-hcatalog-core-1.2.0.2.3.0.0-2079
[jira] [Updated] (HIVE-7193) Hive should support additional LDAP authentication parameters
[ https://issues.apache.org/jira/browse/HIVE-7193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naveen Gangam updated HIVE-7193: Attachment: HIVE-7193.3.patch Incorporated suggestions from Chaoyu. Attaching a new patch. > Hive should support additional LDAP authentication parameters > - > > Key: HIVE-7193 > URL: https://issues.apache.org/jira/browse/HIVE-7193 > Project: Hive > Issue Type: Bug >Affects Versions: 0.10.0 >Reporter: Mala Chikka Kempanna >Assignee: Naveen Gangam > Attachments: HIVE-7193.2.patch, HIVE-7193.3.patch, HIVE-7193.patch, > LDAPAuthentication_Design_Doc.docx, LDAPAuthentication_Design_Doc_V2.docx > > > Currently hive has only the following authentication parameters for LDAP > authentication for hiveserver2: > <property><name>hive.server2.authentication</name><value>LDAP</value></property> > <property><name>hive.server2.authentication.ldap.url</name><value>ldap://our_ldap_address</value></property> > We need to include other LDAP properties as part of hive-LDAP authentication, > like below: > a group search base -> dc=domain,dc=com > a group search filter -> member={0} > a user search base -> dc=domain,dc=com > a user search filter -> sAMAccountName={0} > a list of valid user groups -> group1,group2,group3 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
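To see the flow those parameters would drive, here is a hedged plain-JNDI sketch: bind as the user, run the configured group search, and accept the login only if a valid group matches. Every DN, host, and filter value is a placeholder, and the final property names landed in later patches on this issue, so nothing here is Hive's actual API:
{code:java}
// Hedged sketch of the LDAP flow the proposed parameters describe. Pure JNDI;
// all DNs, the host, and the filter strings are placeholders, not Hive config.
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class LdapGroupCheck {
  public static void main(String[] args) throws Exception {
    Hashtable<String, String> env = new Hashtable<>();
    env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
    env.put(Context.PROVIDER_URL, "ldap://our_ldap_address");
    env.put(Context.SECURITY_AUTHENTICATION, "simple");
    env.put(Context.SECURITY_PRINCIPAL, "uid=alice,ou=people,dc=domain,dc=com");
    env.put(Context.SECURITY_CREDENTIALS, "secret");

    DirContext ctx = new InitialDirContext(env);   // the bind IS the login
    try {
      SearchControls sc = new SearchControls();
      sc.setSearchScope(SearchControls.SUBTREE_SCOPE);
      // "group search base" and "group search filter" from the description;
      // {0} is substituted with the authenticated user's DN.
      NamingEnumeration<SearchResult> groups = ctx.search(
          "dc=domain,dc=com", "(member={0})",
          new Object[]{"uid=alice,ou=people,dc=domain,dc=com"}, sc);
      boolean allowed = false;
      while (groups.hasMore()) {
        String group = groups.next().getNameInNamespace();
        // Compare against the configured list of valid user groups.
        allowed |= group.startsWith("cn=group1") || group.startsWith("cn=group2");
      }
      System.out.println(allowed ? "access granted" : "not in a valid group");
    } finally {
      ctx.close();
    }
  }
}
{code}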
[jira] [Commented] (HIVE-10736) LLAP: HiveServer2 shutdown of cached tez app-masters is not clean
[ https://issues.apache.org/jira/browse/HIVE-10736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572079#comment-14572079 ] Hive QA commented on HIVE-10736: {color:red}Overall{color}: -1 no tests executed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12737408/HIVE-10736.1.patch Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4165/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4165/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-4165/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]] + export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera + JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera + export PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin + PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128' + M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128' + cd /data/hive-ptest/working/ + tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-4165/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! -d apache-github-source-source ]] + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at aec98e9 HIVE-10705 Update tests for HIVE-9302 after removing binaries(Ferdinand Xu, reviewed by Hari Sankar Sivarama Subramaniyan) + git clean -f -d Removing ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java.orig Removing ql/src/test/queries/clientpositive/insertoverwrite_bucket.q Removing ql/src/test/results/clientpositive/insertoverwrite_bucket.q.out + git checkout master Already on 'master' + git reset --hard origin/master HEAD is now at aec98e9 HIVE-10705 Update tests for HIVE-9302 after removing binaries(Ferdinand Xu, reviewed by Hari Sankar Sivarama Subramaniyan) + git merge --ff-only origin/master Already up-to-date. + git gc + patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hive-ptest/working/scratch/build.patch + [[ -f /data/hive-ptest/working/scratch/build.patch ]] + chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh + /data/hive-ptest/working/scratch/smart-apply-patch.sh /data/hive-ptest/working/scratch/build.patch The patch does not appear to apply with p0, p1, or p2 + exit 1 ' {noformat} This message is automatically generated. ATTACHMENT ID: 12737408 - PreCommit-HIVE-TRUNK-Build > LLAP: HiveServer2 shutdown of cached tez app-masters is not clean > - > > Key: HIVE-10736 > URL: https://issues.apache.org/jira/browse/HIVE-10736 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2 >Reporter: Gopal V >Assignee: Vikram Dixit K > Attachments: HIVE-10736.1.patch > > > The shutdown process throws concurrent modification exceptions and fails to > clean up the app masters per queue. > {code} > 2015-05-17 20:24:00,464 INFO [Thread-6()]: service.AbstractService > (AbstractService.java:stop(125)) - Service:OperationManager is stopped. > 2015-05-17 20:24:00,464 INFO [Thread-6()]: service.AbstractService > (AbstractService.java:stop(125)) - Service:SessionManager is stopped. > 2015-05-17 20:24:00,464 INFO [Thread-9()]: tez.TezSessionPoolManager > (TezSessionPoolManager.java:close(175)) - Closing tez session default? true > 2015-05-17 20:24:00,465 INFO [Thread-9()]: tez.TezSessionPoolManager > (TezSessionPoolManager.java:close(175)) - Closing tez session default? true > 2015-05-17 20:24:00,465 INFO [Thread-9()]: tez.TezSessionPoolManager > (TezSessionPoolManager.java:close(175)) - Closing tez session default? true > 2015-05-17 20:24:00,465 INFO [Thread-9()]: tez.TezSessionPoolManager > (TezSessionPoolManager.java:close(175)) - Closing tez session default? true > 2015-05-17 20:24:00,465 INFO [Thread-9()]: tez.TezSessi
[jira] [Commented] (HIVE-10880) The bucket number is not respected in insert overwrite.
[ https://issues.apache.org/jira/browse/HIVE-10880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572076#comment-14572076 ] Hive QA commented on HIVE-10880: {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12737388/HIVE-10880.2.patch {color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 8993 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_autogen_colalias org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_nondeterministic org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx_cbo_2 {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4164/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4164/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-4164/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 3 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12737388 - PreCommit-HIVE-TRUNK-Build > The bucket number is not respected in insert overwrite. > --- > > Key: HIVE-10880 > URL: https://issues.apache.org/jira/browse/HIVE-10880 > Project: Hive > Issue Type: Bug >Affects Versions: 1.2.0 >Reporter: Yongzhi Chen >Assignee: Yongzhi Chen >Priority: Blocker > Attachments: HIVE-10880.1.patch, HIVE-10880.2.patch > > > When hive.enforce.bucketing is true, the bucket number defined in the table > is no longer respected in current master and 1.2. This is a regression. > Reproduce: > {noformat} > CREATE TABLE IF NOT EXISTS buckettestinput( > data string > ) > ROW FORMAT DELIMITED FIELDS TERMINATED BY ','; > CREATE TABLE IF NOT EXISTS buckettestoutput1( > data string > )CLUSTERED BY(data) > INTO 2 BUCKETS > ROW FORMAT DELIMITED FIELDS TERMINATED BY ','; > CREATE TABLE IF NOT EXISTS buckettestoutput2( > data string > )CLUSTERED BY(data) > INTO 2 BUCKETS > ROW FORMAT DELIMITED FIELDS TERMINATED BY ','; > Then I inserted the following data into the "buckettestinput" table > firstinsert1 > firstinsert2 > firstinsert3 > firstinsert4 > firstinsert5 > firstinsert6 > firstinsert7 > firstinsert8 > secondinsert1 > secondinsert2 > secondinsert3 > secondinsert4 > secondinsert5 > secondinsert6 > secondinsert7 > secondinsert8 > set hive.enforce.bucketing = true; > set hive.enforce.sorting=true; > insert overwrite table buckettestoutput1 > select * from buckettestinput where data like 'first%'; > set hive.auto.convert.sortmerge.join=true; > set hive.optimize.bucketmapjoin = true; > set hive.optimize.bucketmapjoin.sortedmerge = true; > select * from buckettestoutput1 a join buckettestoutput2 b on (a.data=b.data); > Error: Error while compiling statement: FAILED: SemanticException [Error > 10141]: Bucketed table metadata is not correct. Fix the metadata or don't use > bucketed mapjoin, by setting hive.enforce.bucketmapjoin to false. The number > of buckets for table buckettestoutput1 is 2, whereas the number of files is 1 > (state=42000,code=10141) > {noformat} > The related debug information related to insert overwrite: > {noformat} > 0: jdbc:hive2://localhost:1> insert overwrite table buckettestoutput1 > select * from buckettestinput where data like 'first%'insert overwrite table > buckettestoutput1 > 0: jdbc:hive2://localhost:1> ; > select * from buckettestinput where data like ' > first%'; > INFO : Number of reduce tasks determined at compile time: 2 > INFO : In order to change the average load for a reducer (in bytes): > INFO : set hive.exec.reducers.bytes.per.reducer= > INFO : In order to limit the maximum number of reducers: > INFO : set hive.exec.reducers.max= > INFO : In order to set a constant number of reducers: > INFO : set mapred.reduce.tasks= > INFO : Job running in-process (local Hadoop) > INFO : 2015-06-01 11:09:29,650 Stage-1 map = 86%, reduce = 100% > INFO : Ended Job = job_local107155352_0001 > INFO : Loading data to table default.buckettestoutput1 from > file:/user/hive/warehouse/buckettestoutput1/.hive-staging_hive_2015-06-01_11-09-28_166_3109203968904090801-1/-ext-1 > INFO : Table default.buckettestoutput1 stats: [numFiles=1, numRows=4, > totalSize=52, rawDataSize=48] > No rows affected (1.692 seconds) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HIVE-7292) Hive on Spark
[ https://issues.apache.org/jira/browse/HIVE-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dutianmin reassigned HIVE-7292: --- Assignee: dutianmin (was: Xuefu Zhang) > Hive on Spark > - > > Key: HIVE-7292 > URL: https://issues.apache.org/jira/browse/HIVE-7292 > Project: Hive > Issue Type: Improvement > Components: Spark >Reporter: Xuefu Zhang >Assignee: dutianmin > Labels: Spark-M1, Spark-M2, Spark-M3, Spark-M4, Spark-M5 > Attachments: Hive-on-Spark.pdf > > > Spark as an open-source data analytics cluster computing framework has gained > significant momentum recently. Many Hive users already have Spark installed > as their computing backbone. To take advantage of Hive, they still need to > have either MapReduce or Tez on their cluster. This initiative will provide > users a new alternative so that they can consolidate their backend. > Secondly, providing such an alternative further increases Hive's adoption, as > it exposes Spark users to a viable, feature-rich, de facto standard SQL tool > on Hadoop. > Finally, allowing Hive to run on Spark also has performance benefits. Hive > queries, especially those involving multiple reducer stages, will run faster, > thus improving the user experience, as Tez does. > This is an umbrella JIRA which will cover many coming subtasks. The design doc > will be attached here shortly, and will be on the wiki as well. Feedback from > the community is greatly appreciated! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-10903) Add hive.in.test for HoS tests [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-10903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rui Li updated HIVE-10903: -- Attachment: HIVE-10903.1.patch Tested against master. > Add hive.in.test for HoS tests [Spark Branch] > - > > Key: HIVE-10903 > URL: https://issues.apache.org/jira/browse/HIVE-10903 > Project: Hive > Issue Type: Test >Reporter: Rui Li >Assignee: Rui Li > Attachments: HIVE-10903.1-spark.patch, HIVE-10903.1.patch > > > Missing the property can make CBO fail to run during UT. There may be > other effects that should be identified here. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10903) Add hive.in.test for HoS tests [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-10903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572006#comment-14572006 ] Rui Li commented on HIVE-10903: --- Both MR and Tez have this flag in hive-site.xml for tests. The flag was introduced in HIVE-6204, and added to Tez's tests in HIVE-8014. > Add hive.in.test for HoS tests [Spark Branch] > - > > Key: HIVE-10903 > URL: https://issues.apache.org/jira/browse/HIVE-10903 > Project: Hive > Issue Type: Test >Reporter: Rui Li >Assignee: Rui Li > Attachments: HIVE-10903.1-spark.patch > > > Missing the property can make CBO fail to run during UT. There may be > other effects that should be identified here. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10872) LLAP: make sure tests pass
[ https://issues.apache.org/jira/browse/HIVE-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571996#comment-14571996 ] Hive QA commented on HIVE-10872: {color:red}Overall{color}: -1 no tests executed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12737384/HIVE-10872.02.patch Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4163/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4163/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-4163/ Messages: {noformat} This message was trimmed, see log for full details [WARNING] - javax.transaction.InvalidTransactionException [WARNING] - javax.transaction.xa.XAException [WARNING] - javax.transaction.Synchronization [WARNING] - javax.transaction.HeuristicRollbackException [WARNING] - javax.transaction.HeuristicCommitException [WARNING] - javax.transaction.xa.XAResource [WARNING] - javax.transaction.TransactionSynchronizationRegistry [WARNING] - javax.transaction.TransactionRolledbackException [WARNING] - 8 more... [WARNING] jsp-2.1-6.1.14.jar, jasper-runtime-5.5.12.jar define 43 overlappping classes: [WARNING] - org.apache.jasper.runtime.ProtectedFunctionMapper$2 [WARNING] - org.apache.jasper.runtime.JspFactoryImpl$PrivilegedGetPageContext [WARNING] - org.apache.jasper.runtime.PageContextImpl$3 [WARNING] - org.apache.jasper.runtime.PageContextImpl$4 [WARNING] - org.apache.jasper.runtime.JspFactoryImpl [WARNING] - org.apache.jasper.runtime.ProtectedFunctionMapper$1 [WARNING] - org.apache.jasper.runtime.PerThreadTagHandlerPool$PerThreadData [WARNING] - org.apache.jasper.JasperException [WARNING] - org.apache.jasper.util.FastDateFormat [WARNING] - org.apache.jasper.runtime.PageContextImpl$8 [WARNING] - 33 more... [WARNING] jsp-2.1-6.1.14.jar, jasper-compiler-5.5.12.jar define 143 overlappping classes: [WARNING] - org.apache.jasper.compiler.TagLibraryInfoImpl [WARNING] - org.apache.jasper.xmlparser.SymbolTable [WARNING] - org.apache.jasper.compiler.Generator$FragmentHelperClass$Fragment [WARNING] - org.apache.jasper.compiler.Generator$1TagHandlerPoolVisitor [WARNING] - org.apache.jasper.compiler.SmapStratum$LineInfo [WARNING] - org.apache.jasper.compiler.Node$AttributeGenerator [WARNING] - org.apache.jasper.compiler.ScriptingVariabler [WARNING] - org.apache.jasper.compiler.tagplugin.TagPlugin [WARNING] - org.apache.jasper.compiler.Node$JspAttribute [WARNING] - org.apache.jasper.compiler.Node [WARNING] - 133 more... 
[WARNING] commons-collections-3.2.1.jar, commons-beanutils-core-1.8.0.jar, commons-beanutils-1.7.0.jar define 10 overlappping classes: [WARNING] - org.apache.commons.collections.FastHashMap$EntrySet [WARNING] - org.apache.commons.collections.ArrayStack [WARNING] - org.apache.commons.collections.FastHashMap$1 [WARNING] - org.apache.commons.collections.FastHashMap$KeySet [WARNING] - org.apache.commons.collections.FastHashMap$CollectionView [WARNING] - org.apache.commons.collections.BufferUnderflowException [WARNING] - org.apache.commons.collections.Buffer [WARNING] - org.apache.commons.collections.FastHashMap$CollectionView$CollectionViewIterator [WARNING] - org.apache.commons.collections.FastHashMap$Values [WARNING] - org.apache.commons.collections.FastHashMap [WARNING] jsp-2.1-6.1.14.jar, jasper-compiler-5.5.12.jar, jasper-runtime-5.5.12.jar define 1 overlappping classes: [WARNING] - org.apache.jasper.compiler.Localizer [WARNING] jline-2.12.jar, leveldbjni-all-1.8.jar define 4 overlappping classes: [WARNING] - org.fusesource.hawtjni.runtime.JNIEnv [WARNING] - org.fusesource.hawtjni.runtime.PointerMath [WARNING] - org.fusesource.hawtjni.runtime.Library [WARNING] - org.fusesource.hawtjni.runtime.Callback [WARNING] maven-shade-plugin has detected that some .class files [WARNING] are present in two or more JARs. When this happens, only [WARNING] one single version of the class is copied in the uberjar. [WARNING] Usually this is not harmful and you can skeep these [WARNING] warnings, otherwise try to manually exclude artifacts [WARNING] based on mvn dependency:tree -Ddetail=true and the above [WARNING] output [WARNING] See http://docs.codehaus.org/display/MAVENUSER/Shade+Plugin [INFO] Attaching shaded artifact. [INFO] [INFO] --- maven-install-plugin:2.4:install (default-install) @ hive-jdbc --- [INFO] Installing /data/hive-ptest/working/apache-github-source-source/jdbc/target/hive-jdbc-1.3.0-SNAPSHOT.jar to /home/hiveptest/.m2/repository/org/apache/hive/hive-jdbc/1.3.0-SNAPSHOT/hive-jdbc-1.3.0-SNAPSHOT.jar [INFO] Installing /data/hive-ptest/working/apache-github-source-source/jdbc/pom.xml to /home/hiveptest/.m2/repository/org/apach
[jira] [Commented] (HIVE-10816) NPE in ExecDriver::handleSampling when submitted via child JVM
[ https://issues.apache.org/jira/browse/HIVE-10816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571995#comment-14571995 ] Rui Li commented on HIVE-10816: --- Hi [~navis], would you mind taking a look at this when you get time? > NPE in ExecDriver::handleSampling when submitted via child JVM > -- > > Key: HIVE-10816 > URL: https://issues.apache.org/jira/browse/HIVE-10816 > Project: Hive > Issue Type: Bug >Reporter: Rui Li >Assignee: Rui Li > Attachments: HIVE-10816.1.patch, HIVE-10816.1.patch > > > When {{hive.exec.submitviachild = true}}, parallel order by fails with NPE > and falls back to single-reducer mode. Stack trace: > {noformat} > 2015-05-25 08:41:04,446 ERROR [main]: mr.ExecDriver > (ExecDriver.java:execute(386)) - Sampling error > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.exec.mr.ExecDriver.handleSampling(ExecDriver.java:513) > at > org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:379) > at > org.apache.hadoop.hive.ql.exec.mr.ExecDriver.main(ExecDriver.java:750) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:497) > at org.apache.hadoop.util.RunJar.run(RunJar.java:221) > at org.apache.hadoop.util.RunJar.main(RunJar.java:136) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10907) Hive on Tez: Classcast exception in some cases with SMB joins
[ https://issues.apache.org/jira/browse/HIVE-10907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571986#comment-14571986 ] Hive QA commented on HIVE-10907: {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12737381/HIVE-10907.3.patch {color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 8992 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_autogen_colalias org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_nondeterministic org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx_cbo_2 {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4162/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4162/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-4162/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 3 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12737381 - PreCommit-HIVE-TRUNK-Build > Hive on Tez: Classcast exception in some cases with SMB joins > - > > Key: HIVE-10907 > URL: https://issues.apache.org/jira/browse/HIVE-10907 > Project: Hive > Issue Type: Bug >Reporter: Vikram Dixit K >Assignee: Vikram Dixit K > Attachments: HIVE-10907.1.patch, HIVE-10907.2.patch, > HIVE-10907.3.patch > > > In cases where there is a mix of Map side work and reduce side work, we get a > classcast exception because we assume homogeneity in the code. We need to fix > this correctly. For now this is a workaround. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
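A minimal, self-contained Java sketch of the failure mode HIVE-10907 describes: code that assumes every work unit in a plan is map-side work throws a ClassCastException as soon as a reduce-side unit appears. The MapWork/ReduceWork names below are local stand-ins for illustration only, not Hive's actual classes or the attached patch:
{code}
import java.util.Arrays;
import java.util.List;

public class HeterogeneousWorkDemo {
  static class BaseWork {}
  static class MapWork extends BaseWork {}
  static class ReduceWork extends BaseWork {}

  public static void main(String[] args) {
    // A plan mixing map-side and reduce-side work units.
    List<BaseWork> works = Arrays.asList(new MapWork(), new ReduceWork());

    for (BaseWork w : works) {
      // Assuming homogeneity -- casting every element with "(MapWork) w" --
      // fails with ClassCastException on the ReduceWork. Checking the
      // runtime type first handles the mixed case.
      if (w instanceof MapWork) {
        System.out.println("map-side work: " + w);
      } else {
        System.out.println("reduce-side work: " + w);
      }
    }
  }
}
{code}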
[jira] [Updated] (HIVE-10925) Non-static threadlocals in metastore code can potentially cause memory leak
[ https://issues.apache.org/jira/browse/HIVE-10925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-10925: Description: There are many places where non-static threadlocals are used. I can't seem to find a good logic for using them. However, they can potentially result in leaking objects if for example they are created in a long running thread every time the thread handles a new session. (was: There are many places where non-static threadlocals are used. I can't seem to find a good logic of using them. However, they can potentially result in leaking objects if for example they are created in a long running thread every time the thread handles a new session.) > Non-static threadlocals in metastore code can potentially cause memory leak > --- > > Key: HIVE-10925 > URL: https://issues.apache.org/jira/browse/HIVE-10925 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 0.11.0, 0.12.0, 0.14.0, 1.0.0, 1.2.0, 1.1.0, 0.13 >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta > > There are many places where non-static threadlocals are used. I can't seem to > find a good logic for using them. However, they can potentially result in > leaking objects if for example they are created in a long running thread > every time the thread handles a new session. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
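A minimal Java sketch of the leak pattern HIVE-10925 describes, assuming a pooled worker thread that constructs a new handler object per session (the class and field names are hypothetical):
{code}
public class SessionHandler {
  // Non-static: each SessionHandler instance registers its own entry in the
  // worker thread's ThreadLocalMap. If the pool thread outlives the handler
  // and a new handler is created per session, entries (and their values)
  // accumulate in the long-running thread.
  private final ThreadLocal<byte[]> perInstanceBuffer =
      ThreadLocal.withInitial(() -> new byte[1024 * 1024]);

  // Static: one ThreadLocal shared by all instances, so each thread holds
  // at most one entry for this field no matter how many sessions it serves.
  private static final ThreadLocal<byte[]> SHARED_BUFFER =
      ThreadLocal.withInitial(() -> new byte[1024 * 1024]);

  void handleSession() {
    byte[] leakyScratch = perInstanceBuffer.get();  // one entry per instance
    byte[] reusedScratch = SHARED_BUFFER.get();     // one entry per thread
    // ... do work; ideally call perInstanceBuffer.remove() when finished ...
  }
}
{code}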
[jira] [Updated] (HIVE-10905) Quit&Exit fails ending with ';' [beeline-cli Branch]
[ https://issues.apache.org/jira/browse/HIVE-10905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ferdinand Xu updated HIVE-10905: Issue Type: Sub-task (was: Bug) Parent: HIVE-10511 > Quit&Exit fails ending with ';' [beeline-cli Branch] > > > Key: HIVE-10905 > URL: https://issues.apache.org/jira/browse/HIVE-10905 > Project: Hive > Issue Type: Sub-task >Affects Versions: beeline-cli-branch >Reporter: Chinna Rao Lalam >Assignee: Chinna Rao Lalam > Attachments: HIVE-10905.patch > > > In the old CLI, quit and exit expect a trailing ';'. > In the updated CLI, quit and exit work without a trailing ';', > but quit and exit ending with ';' throw an exception. Support quit and exit > ending with ';' for compatibility. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
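One plausible shape of the compatibility fix (illustrative only, not the attached HIVE-10905.patch): normalize the command by stripping any trailing semicolons before matching quit/exit.
{code}
// Hypothetical helper: accept "quit", "exit", "quit;", "exit ;", etc.
static boolean isQuitOrExit(String line) {
  String cmd = line.trim();
  while (cmd.endsWith(";")) {
    cmd = cmd.substring(0, cmd.length() - 1).trim();
  }
  return cmd.equalsIgnoreCase("quit") || cmd.equalsIgnoreCase("exit");
}
{code}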
[jira] [Updated] (HIVE-10684) Fix the unit test failures for HIVE-7553 after HIVE-10674 removed the binary jar files
[ https://issues.apache.org/jira/browse/HIVE-10684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ferdinand Xu updated HIVE-10684: Fix Version/s: 1.2.1 > Fix the unit test failures for HIVE-7553 after HIVE-10674 removed the binary > jar files > -- > > Key: HIVE-10684 > URL: https://issues.apache.org/jira/browse/HIVE-10684 > Project: Hive > Issue Type: Bug > Components: Tests >Reporter: Ferdinand Xu >Assignee: Ferdinand Xu > Fix For: 1.2.1 > > Attachments: HIVE-10684.1.patch, HIVE-10684.2.patch, HIVE-10684.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-10705) Update tests for HIVE-9302 after removing binaries
[ https://issues.apache.org/jira/browse/HIVE-10705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ferdinand Xu updated HIVE-10705: Fix Version/s: 1.2.1 > Update tests for HIVE-9302 after removing binaries > -- > > Key: HIVE-10705 > URL: https://issues.apache.org/jira/browse/HIVE-10705 > Project: Hive > Issue Type: Bug >Reporter: Ferdinand Xu >Assignee: Ferdinand Xu > Fix For: 1.2.1 > > Attachments: HIVE-10705.1.patch, HIVE-10705.2.patch, > HIVE-10705.3.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10922) In HS2 doAs=false mode, file system related errors in one query causes other failures
[ https://issues.apache.org/jira/browse/HIVE-10922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571926#comment-14571926 ] Vikram Dixit K commented on HIVE-10922: --- My only concern would be that we have code like this in other places, and it can cause the same errors in HS2. Can you raise an enhancement JIRA to fix this correctly, either by creating and passing a new fs object for each thread, or by creating UGI objects as user hive when doAs is false and using FileSystem's closeAllForUGI instead? +1 for this quick fix. > In HS2 doAs=false mode, file system related errors in one query causes other > failures > - > > Key: HIVE-10922 > URL: https://issues.apache.org/jira/browse/HIVE-10922 > Project: Hive > Issue Type: Bug >Affects Versions: 1.0.0, 1.2.0, 1.1.0 >Reporter: Thejas M Nair >Assignee: Thejas M Nair > Attachments: HIVE-10922.1.patch > > > The Warehouse class has a few methods that close the file system object on errors. > With doAs=false, since all queries use the same HS2 ugi, the filesystem > object is shared across queries/threads. When the close on one filesystem > object gets called, the filesystem object used in other threads also > gets closed, and any files registered for deletion on exit get deleted. > There is also no close being done on the happy code path. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
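A minimal Java sketch of why a close in one query breaks the others, assuming HDFS as the default filesystem: Hadoop's FileSystem.get() caches instances keyed by scheme, authority, and UGI, so with doAs=false every thread gets the same object back.
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SharedFsClose {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Same UGI + same URI => the cache returns the SAME instance.
    FileSystem fs1 = FileSystem.get(conf); // "query 1"
    FileSystem fs2 = FileSystem.get(conf); // "query 2"
    System.out.println(fs1 == fs2);        // true: shared cached instance

    fs1.close(); // query 1's error path closes the shared object

    // On HDFS, query 2 now fails with "java.io.IOException: Filesystem
    // closed", and close() also processed any deleteOnExit() paths.
    fs2.exists(new Path("/tmp"));
  }
}
{code}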
[jira] [Commented] (HIVE-10909) Make TestFilterHooks robust
[ https://issues.apache.org/jira/browse/HIVE-10909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571921#comment-14571921 ] Hive QA commented on HIVE-10909: {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12737362/HIVE-10909.patch {color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 8992 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_autogen_colalias org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_nondeterministic org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx_cbo_2 org.apache.hive.spark.client.TestSparkClient.testRemoteClient {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4161/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4161/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-4161/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 4 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12737362 - PreCommit-HIVE-TRUNK-Build > Make TestFilterHooks robust > --- > > Key: HIVE-10909 > URL: https://issues.apache.org/jira/browse/HIVE-10909 > Project: Hive > Issue Type: Test > Components: Metastore, Tests >Affects Versions: 1.2.0 >Reporter: Hari Sankar Sivarama Subramaniyan >Assignee: Ashutosh Chauhan > Attachments: HIVE-10909.patch > > > Currently it fails sometimes when run in sequential order because of left > over state from previous tests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-6727) Table level stats for external tables are set incorrectly
[ https://issues.apache.org/jira/browse/HIVE-6727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571902#comment-14571902 ] Bing Li commented on HIVE-6727: --- Thank you, Ashutosh! > Table level stats for external tables are set incorrectly > - > > Key: HIVE-6727 > URL: https://issues.apache.org/jira/browse/HIVE-6727 > Project: Hive > Issue Type: Bug > Components: Statistics >Affects Versions: 0.13.0, 0.14.0, 0.13.1, 1.0.0, 1.2.0, 1.1.0 >Reporter: Harish Butani >Assignee: Bing Li > Fix For: 1.3.0 > > Attachments: HIVE-6727.2.patch, HIVE-6727.3.patch > > > If you do the following: > {code} > CREATE EXTERNAL TABLE anaylyze_external (a INT) LOCATION > 'data/files/ext_test'; > describe formatted anaylyze_external; > {code} > The table level stats are: > {noformat} > Table Parameters: > COLUMN_STATS_ACCURATE true > EXTERNAL TRUE > numFiles 0 > numRows 6 > rawDataSize 6 > totalSize 0 > {noformat} > numFiles and totalSize are always 0. > The issue is that > MetaStoreUtils.updateUnpartitionedTableStatsFast attempts to set table-level > stats from FileStatus, but it doesn't account for external tables; it always > calls Warehouse.getFileStatusesForUnpartitionedTable -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-10922) In HS2 doAs=false mode, file system related errors in one query causes other failures
[ https://issues.apache.org/jira/browse/HIVE-10922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-10922: - Attachment: HIVE-10922.1.patch > In HS2 doAs=false mode, file system related errors in one query causes other > failures > - > > Key: HIVE-10922 > URL: https://issues.apache.org/jira/browse/HIVE-10922 > Project: Hive > Issue Type: Bug >Affects Versions: 1.0.0, 1.2.0, 1.1.0 >Reporter: Thejas M Nair >Assignee: Thejas M Nair > Attachments: HIVE-10922.1.patch > > > The Warehouse class has a few methods that close the file system object on errors. > With doAs=false, since all queries use the same HS2 ugi, the filesystem > object is shared across queries/threads. When the close on one filesystem > object gets called, the filesystem object used in other threads also > gets closed, and any files registered for deletion on exit get deleted. > There is also no close being done on the happy code path. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10410) Apparent race condition in HiveServer2 causing intermittent query failures
[ https://issues.apache.org/jira/browse/HIVE-10410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571864#comment-14571864 ] Chaoyu Tang commented on HIVE-10410: [~Richard Williams] are you still working on the patch? > Apparent race condition in HiveServer2 causing intermittent query failures > -- > > Key: HIVE-10410 > URL: https://issues.apache.org/jira/browse/HIVE-10410 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 0.13.1 > Environment: CDH 5.3.3 > CentOS 6.4 >Reporter: Richard Williams > > On our secure Hadoop cluster, queries submitted to HiveServer2 through JDBC > occasionally trigger odd Thrift exceptions with messages such as "Read a > negative frame size (-2147418110)!" or "out of sequence response" in > HiveServer2's connections to the metastore. For certain metastore calls (for > example, showDatabases), these Thrift exceptions are converted to > MetaExceptions in HiveMetaStoreClient, which prevents RetryingMetaStoreClient > from retrying these calls and thus causes the failure to bubble out to the > JDBC client. > Note that as far as we can tell, this issue appears to only affect queries > that are submitted with the runAsync flag on TExecuteStatementReq set to true > (which, in practice, seems to mean all JDBC queries), and it appears to only > manifest when HiveServer2 is using the new HTTP transport mechanism. When > both these conditions hold, we are able to fairly reliably reproduce the > issue by spawning about 100 simple, concurrent hive queries (we have been > using "show databases"), two or three of which typically fail. However, when > either of these conditions do not hold, we are no longer able to reproduce > the issue. > Some example stack traces from the HiveServer2 logs: > {noformat} > 2015-04-16 13:54:55,486 ERROR hive.log: Got exception: > org.apache.thrift.transport.TTransportException Read a negative frame size > (-2147418110)! > org.apache.thrift.transport.TTransportException: Read a negative frame size > (-2147418110)! 
> at > org.apache.thrift.transport.TSaslTransport.readFrame(TSaslTransport.java:435) > at > org.apache.thrift.transport.TSaslTransport.read(TSaslTransport.java:414) > at > org.apache.thrift.transport.TSaslClientTransport.read(TSaslClientTransport.java:37) > at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84) > at > org.apache.hadoop.hive.thrift.TFilterTransport.readAll(TFilterTransport.java:62) > at > org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378) > at > org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297) > at > org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204) > at > org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_databases(ThriftHiveMetastore.java:600) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_databases(ThriftHiveMetastore.java:587) > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabases(HiveMetaStoreClient.java:837) > at > org.apache.sentry.binding.metastore.SentryHiveMetaStoreClient.getDatabases(SentryHiveMetaStoreClient.java:60) > at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:90) > at com.sun.proxy.$Proxy6.getDatabases(Unknown Source) > at > org.apache.hadoop.hive.ql.metadata.Hive.getDatabasesByPattern(Hive.java:1139) > at > org.apache.hadoop.hive.ql.exec.DDLTask.showDatabases(DDLTask.java:2445) > at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:364) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1554) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1321) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1139) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:962) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:957) > at > org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:145) > at > org.apache.hive.service.cli.operation.SQLOperation.access$000(SQLOperatio
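The "negative frame size" and "out of sequence response" errors above are characteristic of two threads interleaving requests on one non-thread-safe Thrift client. A generic Java sketch of the simplest mitigation, serializing access with a lock (illustrative only; this is not HiveServer2's actual fix):
{code}
import java.util.function.Function;

// Wraps any single-threaded client so that only one call is in flight at
// a time; concurrent callers block instead of corrupting the transport.
public class SerializedClient<C> {
  private final C client;

  public SerializedClient(C client) {
    this.client = client;
  }

  public synchronized <R> R call(Function<C, R> op) {
    return op.apply(client); // one RPC at a time on the shared transport
  }
}
{code}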
[jira] [Updated] (HIVE-10921) Change trunk pom version to reflect the branch-1 split
[ https://issues.apache.org/jira/browse/HIVE-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-10921: Affects Version/s: (was: 2.0.0) > Change trunk pom version to reflect the branch-1 split > -- > > Key: HIVE-10921 > URL: https://issues.apache.org/jira/browse/HIVE-10921 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Fix For: 2.0.0 > > Attachments: HIVE-10921.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-10921) Change trunk pom version to reflect the branch-1 split
[ https://issues.apache.org/jira/browse/HIVE-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-10921: Fix Version/s: 2.0.0 > Change trunk pom version to reflect the branch-1 split > -- > > Key: HIVE-10921 > URL: https://issues.apache.org/jira/browse/HIVE-10921 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Fix For: 2.0.0 > > Attachments: HIVE-10921.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10921) Change trunk pom version to reflect the branch-1 split
[ https://issues.apache.org/jira/browse/HIVE-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571857#comment-14571857 ] Sergey Shelukhin commented on HIVE-10921: - [~alangates] [~sushanth] can you guys take a look? > Change trunk pom version to reflect the branch-1 split > -- > > Key: HIVE-10921 > URL: https://issues.apache.org/jira/browse/HIVE-10921 > Project: Hive > Issue Type: Bug >Affects Versions: 2.0.0 >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-10921.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-10921) Change trunk pom version to reflect the branch-1 split
[ https://issues.apache.org/jira/browse/HIVE-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-10921: Attachment: HIVE-10921.patch Patch is a simple update for pom files > Change trunk pom version to reflect the branch-1 split > -- > > Key: HIVE-10921 > URL: https://issues.apache.org/jira/browse/HIVE-10921 > Project: Hive > Issue Type: Bug >Affects Versions: 2.0.0 >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-10921.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HIVE-10918) ORC fails to read table with a 38Gb ORC file
[ https://issues.apache.org/jira/browse/HIVE-10918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V resolved HIVE-10918. Resolution: Duplicate > ORC fails to read table with a 38Gb ORC file > > > Key: HIVE-10918 > URL: https://issues.apache.org/jira/browse/HIVE-10918 > Project: Hive > Issue Type: Bug > Components: File Formats >Affects Versions: 1.3.0 >Reporter: Gopal V > > {code} > hive> set mapreduce.input.fileinputformat.split.maxsize=1; > hive> set mapreduce.input.fileinputformat.split.maxsize=1; > hive> alter table lineitem concatenate; > .. > hive> dfs -ls /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem; > Found 12 items > -rwxr-xr-x 3 gopal supergroup 41368976599 2015-06-03 15:49 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/00_0 > -rwxr-xr-x 3 gopal supergroup 36226719673 2015-06-03 15:48 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/01_0 > -rwxr-xr-x 3 gopal supergroup 27544042018 2015-06-03 15:50 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/02_0 > -rwxr-xr-x 3 gopal supergroup 23147063608 2015-06-03 15:44 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/03_0 > -rwxr-xr-x 3 gopal supergroup 21079035936 2015-06-03 15:44 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/04_0 > -rwxr-xr-x 3 gopal supergroup 13813961419 2015-06-03 15:43 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/05_0 > -rwxr-xr-x 3 gopal supergroup 8155299977 2015-06-03 15:40 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/06_0 > -rwxr-xr-x 3 gopal supergroup 6264478613 2015-06-03 15:40 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/07_0 > -rwxr-xr-x 3 gopal supergroup 4653393054 2015-06-03 15:40 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/08_0 > -rwxr-xr-x 3 gopal supergroup 3621672928 2015-06-03 15:39 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/09_0 > -rwxr-xr-x 3 gopal supergroup 1460919310 2015-06-03 15:38 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/10_0 > -rwxr-xr-x 3 gopal supergroup 485129789 2015-06-03 15:38 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/11_0 > {code} > Errors without PPD > Suspicions about ORC stripe padding and stream offsets in the stream > information, when concatenating. > {code} > Caused by: java.io.EOFException: Read past end of RLE integer from compressed > stream Stream for column 1 kind DATA position: 1608840 length: 1608840 range: > 0 offset: 1608840 limit: 1608840 range 0 = 0 to 1608840 uncompressed: 36845 > to 36845 > at > org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.readValues(RunLengthIntegerReaderV2.java:56) > at > org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.next(RunLengthIntegerReaderV2.java:302) > at > org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.nextVector(RunLengthIntegerReaderV2.java:346) > at > org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$LongTreeReader.nextVector(TreeReaderFactory.java:582) > at > org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$StructTreeReader.nextVector(TreeReaderFactory.java:2026) > at > org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.nextBatch(RecordReaderImpl.java:1070) > ... 25 more > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HIVE-10917) ORC fails to read table with a 38Gb ORC file
[ https://issues.apache.org/jira/browse/HIVE-10917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V resolved HIVE-10917. Resolution: Duplicate > ORC fails to read table with a 38Gb ORC file > > > Key: HIVE-10917 > URL: https://issues.apache.org/jira/browse/HIVE-10917 > Project: Hive > Issue Type: Bug > Components: File Formats >Affects Versions: 1.3.0 >Reporter: Gopal V > > {code} > hive> set mapreduce.input.fileinputformat.split.maxsize=1; > hive> set mapreduce.input.fileinputformat.split.maxsize=1; > hive> alter table lineitem concatenate; > .. > hive> dfs -ls /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem; > Found 12 items > -rwxr-xr-x 3 gopal supergroup 41368976599 2015-06-03 15:49 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/00_0 > -rwxr-xr-x 3 gopal supergroup 36226719673 2015-06-03 15:48 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/01_0 > -rwxr-xr-x 3 gopal supergroup 27544042018 2015-06-03 15:50 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/02_0 > -rwxr-xr-x 3 gopal supergroup 23147063608 2015-06-03 15:44 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/03_0 > -rwxr-xr-x 3 gopal supergroup 21079035936 2015-06-03 15:44 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/04_0 > -rwxr-xr-x 3 gopal supergroup 13813961419 2015-06-03 15:43 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/05_0 > -rwxr-xr-x 3 gopal supergroup 8155299977 2015-06-03 15:40 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/06_0 > -rwxr-xr-x 3 gopal supergroup 6264478613 2015-06-03 15:40 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/07_0 > -rwxr-xr-x 3 gopal supergroup 4653393054 2015-06-03 15:40 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/08_0 > -rwxr-xr-x 3 gopal supergroup 3621672928 2015-06-03 15:39 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/09_0 > -rwxr-xr-x 3 gopal supergroup 1460919310 2015-06-03 15:38 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/10_0 > -rwxr-xr-x 3 gopal supergroup 485129789 2015-06-03 15:38 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/11_0 > {code} > Errors without PPD > Suspicions about ORC stripe padding and stream offsets in the stream > information, when concatenating. > {code} > Caused by: java.io.EOFException: Read past end of RLE integer from compressed > stream Stream for column 1 kind DATA position: 1608840 length: 1608840 range: > 0 offset: 1608840 limit: 1608840 range 0 = 0 to 1608840 uncompressed: 36845 > to 36845 > at > org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.readValues(RunLengthIntegerReaderV2.java:56) > at > org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.next(RunLengthIntegerReaderV2.java:302) > at > org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.nextVector(RunLengthIntegerReaderV2.java:346) > at > org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$LongTreeReader.nextVector(TreeReaderFactory.java:582) > at > org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$StructTreeReader.nextVector(TreeReaderFactory.java:2026) > at > org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.nextBatch(RecordReaderImpl.java:1070) > ... 25 more > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10841) [WHERE col is not null] does not work sometimes for queries with many JOIN statements
[ https://issues.apache.org/jira/browse/HIVE-10841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571849#comment-14571849 ] Alexander Pivovarov commented on HIVE-10841: LOG info for the queries with different JOIN operators order L, LA, FR, A, PI, acct - only 2 log messages from ppd.OpProcFactory contain "= 120" {code} explain select acct.ACC_N, acct.brn FROM L JOIN LA ON L.id = LA.loan_id JOIN FR ON L.id = FR.loan_id JOIN A ON LA.aid = A.id JOIN PI ON PI.id = LA.pi_id JOIN acct ON A.id = acct.aid WHERE L.id = 4436 and acct.brn = 120; 15/06/03 16:31:47 [main]: INFO ppd.OpProcFactory: Processing for FIL(25) 15/06/03 16:31:47 [main]: INFO ppd.OpProcFactory: Pushdown Predicates of FIL For Alias : acct 15/06/03 16:31:47 [main]: INFO ppd.OpProcFactory: (_col20 = 120) 15/06/03 16:31:47 [main]: INFO ppd.OpProcFactory: Processing for JOIN(24) 15/06/03 16:31:47 [main]: INFO ppd.OpProcFactory: Pushdown Predicates of JOIN For Alias : acct 15/06/03 16:31:47 [main]: INFO ppd.OpProcFactory: (VALUE._col19 = 120) Stage: Stage-9 Map Reduce Select Operator expressions: _col19 (type: int), 120 (type: int) {code} L, LA, FR, A, acct, PI - 8 log lines from ppd.OpProcFactory contain "= 120" {code} explain select acct.ACC_N, acct.brn FROM L JOIN LA ON L.id = LA.loan_id JOIN FR ON L.id = FR.loan_id JOIN A ON LA.aid = A.id JOIN acct ON A.id = acct.aid JOIN PI ON PI.id = LA.pi_id WHERE L.id = 4436 and acct.brn = 120; 15/06/03 15:45:25 [main]: INFO ppd.OpProcFactory: Processing for FIL(25) 15/06/03 15:45:39 [main]: INFO ppd.OpProcFactory: Pushdown Predicates of FIL For Alias : acct 15/06/03 15:45:39 [main]: INFO ppd.OpProcFactory: (_col20 = 120) 15/06/03 15:46:23 [main]: INFO ppd.OpProcFactory: Processing for JOIN(24) 15/06/03 15:46:23 [main]: INFO ppd.OpProcFactory: Pushdown Predicates of JOIN For Alias : acct 15/06/03 15:46:23 [main]: INFO ppd.OpProcFactory: (VALUE._col19 = 120) 15/06/03 15:46:26 [main]: INFO ppd.OpProcFactory: Processing for RS(21) 15/06/03 15:46:26 [main]: INFO ppd.OpProcFactory: Pushdown Predicates of RS For Alias : acct 15/06/03 15:46:26 [main]: INFO ppd.OpProcFactory: (_col20 = 120) 15/06/03 15:46:43 [main]: INFO ppd.OpProcFactory: Processing for FIL(20) 15/06/03 15:46:49 [main]: INFO ppd.OpProcFactory: Pushdown Predicates of FIL For Alias : acct 15/06/03 15:46:49 [main]: INFO ppd.OpProcFactory: (_col20 = 120) 15/06/03 15:46:52 [main]: INFO ppd.OpProcFactory: Processing for JOIN(19) 15/06/03 15:46:52 [main]: INFO ppd.OpProcFactory: Pushdown Predicates of JOIN For Alias : acct 15/06/03 15:46:52 [main]: INFO ppd.OpProcFactory: (VALUE._col1 = 120) 15/06/03 15:59:18 [main]: INFO ppd.OpProcFactory: Processing for RS(18) 15/06/03 15:59:18 [main]: INFO ppd.OpProcFactory: Pushdown Predicates of RS For Alias : acct 15/06/03 15:59:18 [main]: INFO ppd.OpProcFactory: (brn = 120) 15/06/03 15:59:19 [main]: INFO ppd.OpProcFactory: Processing for FIL(17) 15/06/03 15:59:50 [main]: INFO ppd.OpProcFactory: Pushdown Predicates of FIL For Alias : acct 15/06/03 15:59:50 [main]: INFO ppd.OpProcFactory: (brn = 120) 15/06/03 16:00:20 [main]: INFO ppd.OpProcFactory: Processing for TS(4) 15/06/03 16:00:20 [main]: INFO ppd.OpProcFactory: Pushdown Predicates of TS For Alias : acct 15/06/03 16:00:20 [main]: INFO ppd.OpProcFactory: aid is not null 15/06/03 16:00:20 [main]: INFO ppd.OpProcFactory: (brn = 120) 15/06/03 16:01:38 [main]: INFO optimizer.ConstantPropagateProcFactory: expr Const int 120 fold from Column[VALUE._col19] is removed. 
15/06/03 16:01:38 [main]: INFO optimizer.ColumnPrunerProcFactory: RS 21 oldColExprMap: {VALUE._col5=Column[_col5], VALUE._col4=Const int 4436, VALUE._col3=Column[_col3], VALUE._col2=Column[_col2], VALUE._col1=Column[_col1], VALUE._col0=Const int 4436, KEY.reducesinkkey0=Column[_col6], VALUE._col14=Column[_col15], VALUE._col13=Column[_col14], VALUE._col16=Column[_col17], VALUE._col15=Column[_col16], VALUE._col18=Column[_col19], VALUE._col9=Const int 4436, VALUE._col17=Column[_col18], VALUE._col8=Column[_col9], VALUE._col7=Column[_col8], VALUE._col19=Const int 120, VALUE._col6=Column[_col7], VALUE._col20=Column[_col21], VALUE._col11=Column[_col12], VALUE._col21=Column[_col22], VALUE._col12=Column[_col13], VALUE._col22=Column[_col23], VALUE._col10=Column[_col11]} 15/06/03 16:01:38 [main]: INFO optimizer.ColumnPrunerProcFactory: RS 18 oldColExprMap: {VALUE._col4=Column[ROW__ID], VALUE._col3=Column[INPUT__FILE__NAME], VALUE._col2=Column[BLOCK__OFFSET__INSIDE__FILE], VALUE._col1=Const int 120, VALUE._col0=Column[acc_n], KEY.reducesinkkey0=Column[aid]} STAGE PLANS: Stage: Stage-12 acct TableScan alias: acct Statistics: Num rows: 5 Data size: 63 Basic stats: COMPLETE Column stats: NONE Fi
[jira] [Resolved] (HIVE-10916) ORC fails to read table with a 38Gb ORC file
[ https://issues.apache.org/jira/browse/HIVE-10916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V resolved HIVE-10916. Resolution: Duplicate > ORC fails to read table with a 38Gb ORC file > > > Key: HIVE-10916 > URL: https://issues.apache.org/jira/browse/HIVE-10916 > Project: Hive > Issue Type: Bug > Components: File Formats >Affects Versions: 1.3.0 >Reporter: Gopal V > > {code} > hive> set mapreduce.input.fileinputformat.split.maxsize=1; > hive> set mapreduce.input.fileinputformat.split.maxsize=1; > hive> alter table lineitem concatenate; > .. > hive> dfs -ls /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem; > Found 12 items > -rwxr-xr-x 3 gopal supergroup 41368976599 2015-06-03 15:49 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/00_0 > -rwxr-xr-x 3 gopal supergroup 36226719673 2015-06-03 15:48 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/01_0 > -rwxr-xr-x 3 gopal supergroup 27544042018 2015-06-03 15:50 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/02_0 > -rwxr-xr-x 3 gopal supergroup 23147063608 2015-06-03 15:44 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/03_0 > -rwxr-xr-x 3 gopal supergroup 21079035936 2015-06-03 15:44 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/04_0 > -rwxr-xr-x 3 gopal supergroup 13813961419 2015-06-03 15:43 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/05_0 > -rwxr-xr-x 3 gopal supergroup 8155299977 2015-06-03 15:40 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/06_0 > -rwxr-xr-x 3 gopal supergroup 6264478613 2015-06-03 15:40 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/07_0 > -rwxr-xr-x 3 gopal supergroup 4653393054 2015-06-03 15:40 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/08_0 > -rwxr-xr-x 3 gopal supergroup 3621672928 2015-06-03 15:39 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/09_0 > -rwxr-xr-x 3 gopal supergroup 1460919310 2015-06-03 15:38 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/10_0 > -rwxr-xr-x 3 gopal supergroup 485129789 2015-06-03 15:38 > /apps/hive/warehouse/tpch_orc_flat_1000.db/lineitem/11_0 > {code} > Errors without PPD > Suspicions about ORC stripe padding and stream offsets in the stream > information, when concatenating. > {code} > Caused by: java.io.EOFException: Read past end of RLE integer from compressed > stream Stream for column 1 kind DATA position: 1608840 length: 1608840 range: > 0 offset: 1608840 limit: 1608840 range 0 = 0 to 1608840 uncompressed: 36845 > to 36845 > at > org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.readValues(RunLengthIntegerReaderV2.java:56) > at > org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.next(RunLengthIntegerReaderV2.java:302) > at > org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.nextVector(RunLengthIntegerReaderV2.java:346) > at > org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$LongTreeReader.nextVector(TreeReaderFactory.java:582) > at > org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$StructTreeReader.nextVector(TreeReaderFactory.java:2026) > at > org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.nextBatch(RecordReaderImpl.java:1070) > ... 25 more > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10919) Windows: create table with JsonSerDe failed via beeline unless you add hcatalog core jar to classpath
[ https://issues.apache.org/jira/browse/HIVE-10919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571844#comment-14571844 ] Sushanth Sowmyan commented on HIVE-10919: - +1. Thanks, Hari! > Windows: create table with JsonSerDe failed via beeline unless you add > hcatalog core jar to classpath > - > > Key: HIVE-10919 > URL: https://issues.apache.org/jira/browse/HIVE-10919 > Project: Hive > Issue Type: Bug >Reporter: Hari Sankar Sivarama Subramaniyan >Assignee: Hari Sankar Sivarama Subramaniyan > Attachments: HIVE-10919.1.patch > > > NO PRECOMMIT TESTS > Before we run HiveServer2 tests, we create a table via beeline. > And 'create table' with JsonSerDe failed on Windows. It works on Linux: > {noformat} > 0: jdbc:hive2://localhost:10001> create external table all100kjson( > 0: jdbc:hive2://localhost:10001> s string, > 0: jdbc:hive2://localhost:10001> i int, > 0: jdbc:hive2://localhost:10001> d double, > 0: jdbc:hive2://localhost:10001> m map, > 0: jdbc:hive2://localhost:10001> bb array>, > 0: jdbc:hive2://localhost:10001> t timestamp) > 0: jdbc:hive2://localhost:10001> row format serde > 'org.apache.hive.hcatalog.data.JsonSerDe' > 0: jdbc:hive2://localhost:10001> WITH SERDEPROPERTIES > ('timestamp.formats'='-MM-dd\'T\'HH:mm:ss') > 0: jdbc:hive2://localhost:10001> STORED AS TEXTFILE > 0: jdbc:hive2://localhost:10001> location '/user/hcat/tests/data/all100kjson'; > Error: Error while processing statement: FAILED: Execution Error, return code > 1 from org.apache.hadoop.hive.ql.exec.DDLT > ask. Cannot validate serde: org.apache.hive.hcatalog.data.JsonSerDe > (state=08S01,code=1) > {noformat} > hive.log shows: > {noformat} > 2015-05-21 21:59:17,004 ERROR operation.Operation > (SQLOperation.java:run(209)) - Error running hive query: > org.apache.hive.service.cli.HiveSQLException: Error while processing > statement: FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.exec.DDLTask. 
Cannot validate serde: > org.apache.hive.hcatalog.data.JsonSerDe > at > org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:315) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:156) > at > org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:71) > at > org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:206) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) > at > org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:218) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Cannot validate > serde: org.apache.hive.hcatalog.data.JsonSerDe > at > org.apache.hadoop.hive.ql.exec.DDLTask.validateSerDe(DDLTask.java:3871) > at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4011) > at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:306) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1650) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1409) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1192) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1054) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:154) > ... 11 more > Caused by: java.lang.ClassNotFoundException: Class > org.apache.hive.hcatalog.data.JsonSerDe not found > at > org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2101) > at > org.apache.hadoop.hive.ql.exec.DDLTask.validateSerDe(DDLTask.java:3865) > ... 21 more > {noformat} > If you do add the hcatalog jar to classpath, it works: > {noformat}0: jdbc:hive2://localhost:10001> add jar > hdfs:///tmp/testjars/hive-hcatalog-core-1.2.0.2.3.0.0-2079.jar; > INFO : converting to local > hdfs:///tmp/testjars/hive-hcatalog-core-1.2.0.2.3.0.0-2079.jar > INFO : Added > [/C:/Us
[jira] [Updated] (HIVE-10919) Windows: create table with JsonSerDe failed via beeline unless you add hcatalog core jar to classpath
[ https://issues.apache.org/jira/browse/HIVE-10919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sankar Sivarama Subramaniyan updated HIVE-10919: - Description: NO PRECOMMIT TESTS Before we run HiveServer2 tests, we create a table via beeline. And 'create table' with JsonSerDe failed on Windows. It works on Linux: {noformat} 0: jdbc:hive2://localhost:10001> create external table all100kjson( 0: jdbc:hive2://localhost:10001> s string, 0: jdbc:hive2://localhost:10001> i int, 0: jdbc:hive2://localhost:10001> d double, 0: jdbc:hive2://localhost:10001> m map, 0: jdbc:hive2://localhost:10001> bb array>, 0: jdbc:hive2://localhost:10001> t timestamp) 0: jdbc:hive2://localhost:10001> row format serde 'org.apache.hive.hcatalog.data.JsonSerDe' 0: jdbc:hive2://localhost:10001> WITH SERDEPROPERTIES ('timestamp.formats'='-MM-dd\'T\'HH:mm:ss') 0: jdbc:hive2://localhost:10001> STORED AS TEXTFILE 0: jdbc:hive2://localhost:10001> location '/user/hcat/tests/data/all100kjson'; Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLT ask. Cannot validate serde: org.apache.hive.hcatalog.data.JsonSerDe (state=08S01,code=1) {noformat} hive.log shows: {noformat} 2015-05-21 21:59:17,004 ERROR operation.Operation (SQLOperation.java:run(209)) - Error running hive query: org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Cannot validate serde: org.apache.hive.hcatalog.data.JsonSerDe at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:315) at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:156) at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:71) at org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:206) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:218) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Cannot validate serde: org.apache.hive.hcatalog.data.JsonSerDe at org.apache.hadoop.hive.ql.exec.DDLTask.validateSerDe(DDLTask.java:3871) at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4011) at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:306) at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160) at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88) at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1650) at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1409) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1192) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1054) at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:154) ... 
11 more Caused by: java.lang.ClassNotFoundException: Class org.apache.hive.hcatalog.data.JsonSerDe not found at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2101) at org.apache.hadoop.hive.ql.exec.DDLTask.validateSerDe(DDLTask.java:3865) ... 21 more {noformat} If you do add the hcatalog jar to classpath, it works: {noformat}0: jdbc:hive2://localhost:10001> add jar hdfs:///tmp/testjars/hive-hcatalog-core-1.2.0.2.3.0.0-2079.jar; INFO : converting to local hdfs:///tmp/testjars/hive-hcatalog-core-1.2.0.2.3.0.0-2079.jar INFO : Added [/C:/Users/hadoop/AppData/Local/Temp/bc941dac-3bca-4287-a490-8a65c2dac220_resources/hive-hcatalog-core-1.2 .0.2.3.0.0-2079.jar] to class path INFO : Added resources: [hdfs:///tmp/testjars/hive-hcatalog-core-1.2.0.2.3.0.0-2079.jar] No rows affected (0.304 seconds) 0: jdbc:hive2://localhost:10001> create external table all100kjson( 0: jdbc:hive2://localhost:10001> s string, 0: jdbc:hive2://localhost:10001> i int, 0: jdbc:hive2://localhost:10001> d double, 0: jdbc:hive2://localhost:10001> m map, 0: jdbc:hive2://localhost:10001> bb array>, 0: jdbc:hive2://localhost:10001> t timestamp) 0: jdbc:hive2://localhost:10001> row format serde 'or
[jira] [Updated] (HIVE-10919) Windows: create table with JsonSerDe failed via beeline unless you add hcatalog core jar to classpath
[ https://issues.apache.org/jira/browse/HIVE-10919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sankar Sivarama Subramaniyan updated HIVE-10919: - Attachment: HIVE-10919.1.patch cc-ing [~sushanth] for review. > Windows: create table with JsonSerDe failed via beeline unless you add > hcatalog core jar to classpath > - > > Key: HIVE-10919 > URL: https://issues.apache.org/jira/browse/HIVE-10919 > Project: Hive > Issue Type: Bug >Reporter: Hari Sankar Sivarama Subramaniyan >Assignee: Hari Sankar Sivarama Subramaniyan > Attachments: HIVE-10919.1.patch > > > Before we run HiveServer2 tests, we create a table via beeline. > And 'create table' with JsonSerDe failed on Windows. It works on Linux: > {noformat} > 0: jdbc:hive2://localhost:10001> create external table all100kjson( > 0: jdbc:hive2://localhost:10001> s string, > 0: jdbc:hive2://localhost:10001> i int, > 0: jdbc:hive2://localhost:10001> d double, > 0: jdbc:hive2://localhost:10001> m map, > 0: jdbc:hive2://localhost:10001> bb array>, > 0: jdbc:hive2://localhost:10001> t timestamp) > 0: jdbc:hive2://localhost:10001> row format serde > 'org.apache.hive.hcatalog.data.JsonSerDe' > 0: jdbc:hive2://localhost:10001> WITH SERDEPROPERTIES > ('timestamp.formats'='-MM-dd\'T\'HH:mm:ss') > 0: jdbc:hive2://localhost:10001> STORED AS TEXTFILE > 0: jdbc:hive2://localhost:10001> location '/user/hcat/tests/data/all100kjson'; > Error: Error while processing statement: FAILED: Execution Error, return code > 1 from org.apache.hadoop.hive.ql.exec.DDLT > ask. Cannot validate serde: org.apache.hive.hcatalog.data.JsonSerDe > (state=08S01,code=1) > {noformat} > hive.log shows: > {noformat} > 2015-05-21 21:59:17,004 ERROR operation.Operation > (SQLOperation.java:run(209)) - Error running hive query: > org.apache.hive.service.cli.HiveSQLException: Error while processing > statement: FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.exec.DDLTask. 
Cannot validate serde: > org.apache.hive.hcatalog.data.JsonSerDe > at > org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:315) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:156) > at > org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:71) > at > org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:206) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) > at > org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:218) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Cannot validate > serde: org.apache.hive.hcatalog.data.JsonSerDe > at > org.apache.hadoop.hive.ql.exec.DDLTask.validateSerDe(DDLTask.java:3871) > at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4011) > at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:306) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1650) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1409) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1192) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1054) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:154) > ... 11 more > Caused by: java.lang.ClassNotFoundException: Class > org.apache.hive.hcatalog.data.JsonSerDe not found > at > org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2101) > at > org.apache.hadoop.hive.ql.exec.DDLTask.validateSerDe(DDLTask.java:3865) > ... 21 more > {noformat} > If you do add the hcatalog jar to classpath, it works: > {noformat}0: jdbc:hive2://localhost:10001> add jar > hdfs:///tmp/testjars/hive-hcatalog-core-1.2.0.2.3.0.0-2079.jar; > INFO : converting to local > hdfs:///tmp/testjars/hive-hcatalog-core-1.2.0.2.3.0.0-2079.jar > INFO : Added > [/C:
[jira] [Updated] (HIVE-10736) LLAP: HiveServer2 shutdown of cached tez app-masters is not clean
[ https://issues.apache.org/jira/browse/HIVE-10736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vikram Dixit K updated HIVE-10736: -- Attachment: HIVE-10736.1.patch > LLAP: HiveServer2 shutdown of cached tez app-masters is not clean > - > > Key: HIVE-10736 > URL: https://issues.apache.org/jira/browse/HIVE-10736 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2 >Reporter: Gopal V >Assignee: Vikram Dixit K > Attachments: HIVE-10736.1.patch > > > The shutdown process throws concurrent modification exceptions and fails to > clean up the app masters per queue. > {code} > 2015-05-17 20:24:00,464 INFO [Thread-6()]: service.AbstractService > (AbstractService.java:stop(125)) - Service:OperationManager is stopped. > 2015-05-17 20:24:00,464 INFO [Thread-6()]: service.AbstractService > (AbstractService.java:stop(125)) - Service:SessionManager is stopped. > 2015-05-17 20:24:00,464 INFO [Thread-9()]: tez.TezSessionPoolManager > (TezSessionPoolManager.java:close(175)) - Closing tez session default? true > 2015-05-17 20:24:00,465 INFO [Thread-9()]: tez.TezSessionPoolManager > (TezSessionPoolManager.java:close(175)) - Closing tez session default? true > 2015-05-17 20:24:00,465 INFO [Thread-9()]: tez.TezSessionPoolManager > (TezSessionPoolManager.java:close(175)) - Closing tez session default? true > 2015-05-17 20:24:00,465 INFO [Thread-9()]: tez.TezSessionPoolManager > (TezSessionPoolManager.java:close(175)) - Closing tez session default? true > 2015-05-17 20:24:00,465 INFO [Thread-9()]: tez.TezSessionPoolManager > (TezSessionPoolManager.java:close(175)) - Closing tez session default? true > 2015-05-17 20:24:00,465 INFO [Thread-9()]: tez.TezSessionPoolManager > (TezSessionPoolManager.java:close(175)) - Closing tez session default? true > 2015-05-17 20:24:00,465 INFO [Thread-9()]: tez.TezSessionPoolManager > (TezSessionPoolManager.java:close(175)) - Closing tez session default? true > 2015-05-17 20:24:00,465 INFO [Thread-9()]: tez.TezSessionPoolManager > (TezSessionPoolManager.java:close(175)) - Closing tez session default? true > 2015-05-17 20:24:00,465 INFO [Thread-6()]: service.AbstractService > (AbstractService.java:stop(125)) - Service:CLIService is stopped. > 2015-05-17 20:24:00,465 INFO [Thread-6()]: service.AbstractService > (AbstractService.java:stop(125)) - Service:HiveServer2 is stopped. > 2015-05-17 20:24:00,465 INFO [Thread-6()]: tez.TezSessionState > (TezSessionState.java:close(332)) - Closing Tez Session > 2015-05-17 20:24:00,466 INFO [Thread-6()]: client.TezClient > (TezClient.java:stop(495)) - Shutting down Tez Session, > sessionName=HIVE-94cc629d-63bc-490a-a135-af85c0cc0f2e, > applicationId=application_1431919257083_0012 > 2015-05-17 20:24:00,570 ERROR [Thread-6()]: server.HiveServer2 > (HiveServer2.java:stop(322)) - Tez session pool manager stop had an error > during stop of HiveServer2. Shutting down HiveServer2 anyway. > java.util.ConcurrentModificationException > at > java.util.LinkedList$ListItr.checkForComodification(LinkedList.java:966) > at java.util.LinkedList$ListItr.next(LinkedList.java:888) > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.stop(TezSessionPoolManager.java:187) > at > org.apache.hive.service.server.HiveServer2.stop(HiveServer2.java:320) > at > org.apache.hive.service.server.HiveServer2$1.run(HiveServer2.java:107) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
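The ConcurrentModificationException in HIVE-10736 above is the classic pattern of closing sessions (which removes them from the pool's LinkedList) while iterating that same list. Below is a generic sketch of one common remedy, iterating over a snapshot; it illustrates the failure mode and is not the committed Hive fix.
{code}
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class SnapshotShutdown {
  static final List<String> sessionPool = new LinkedList<>();

  static void close(String session) {
    // Closing a session removes it from the pool, which is what
    // invalidates a live iterator over the same list.
    sessionPool.remove(session);
  }

  static void stop() {
    // Iterate over a snapshot so close() can mutate the real pool safely.
    for (String session : new ArrayList<>(sessionPool)) {
      close(session);
    }
  }

  public static void main(String[] args) {
    sessionPool.add("default-1");
    sessionPool.add("default-2");
    stop();
    System.out.println("remaining sessions: " + sessionPool.size());
  }
}
{code}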
[jira] [Resolved] (HIVE-10914) LLAP: fix hadoop-1 build for good by removing llap-server from hadoop-1 build
[ https://issues.apache.org/jira/browse/HIVE-10914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin resolved HIVE-10914. - Resolution: Fixed > LLAP: fix hadoop-1 build for good by removing llap-server from hadoop-1 build > - > > Key: HIVE-10914 > URL: https://issues.apache.org/jira/browse/HIVE-10914 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Fix For: llap > > > LLAP won't ever work with hadoop 1, so no point in building it -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10906) Value based UDAF function without orderby expression throws NPE
[ https://issues.apache.org/jira/browse/HIVE-10906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571819#comment-14571819 ] Hive QA commented on HIVE-10906: {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12737357/HIVE-10906.patch {color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 8993 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_autogen_colalias org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udaf_histogram_numeric org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_nondeterministic org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx_cbo_2 {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4160/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4160/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-4160/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 4 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12737357 - PreCommit-HIVE-TRUNK-Build > Value based UDAF function without orderby expression throws NPE > --- > > Key: HIVE-10906 > URL: https://issues.apache.org/jira/browse/HIVE-10906 > Project: Hive > Issue Type: Sub-task > Components: PTF-Windowing >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-10906.patch > > > The following query throws NPE. 
> {noformat} > select key, value, min(value) over (partition by key range between unbounded > preceding and current row) from small; > FAILED: NullPointerException null > 2015-06-03 13:48:09,268 ERROR [main]: ql.Driver > (SessionState.java:printError(957)) - FAILED: NullPointerException null > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.parse.WindowingSpec.validateValueBoundary(WindowingSpec.java:293) > at > org.apache.hadoop.hive.ql.parse.WindowingSpec.validateWindowFrame(WindowingSpec.java:281) > at > org.apache.hadoop.hive.ql.parse.WindowingSpec.validateAndMakeEffective(WindowingSpec.java:155) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genWindowingPlan(SemanticAnalyzer.java:11965) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:8910) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:8868) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9713) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9606) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:10079) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:327) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10090) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:208) > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:227) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:424) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:308) > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1124) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1172) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1061) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1051) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165) > at > org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376) > at > org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606
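Until HIVE-10906 is fixed, the NPE above can typically be sidestepped by giving the RANGE frame an explicit ORDER BY expression to validate against. A hedged JDBC sketch follows; the endpoint and credentials are placeholders, and the table small comes from the repro.
{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class WindowingWorkaround {
  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection(
            "jdbc:hive2://localhost:10000/default", "hive", "");
         Statement stmt = conn.createStatement()) {
      // The RANGE frame needs an expression to compute value boundaries
      // against; supplying ORDER BY value avoids the validateValueBoundary NPE path.
      try (ResultSet rs = stmt.executeQuery(
          "SELECT key, value, MIN(value) OVER ("
        + "PARTITION BY key ORDER BY value "
        + "RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) FROM small")) {
        while (rs.next()) {
          System.out.println(rs.getString(1) + "\t" + rs.getString(3));
        }
      }
    }
  }
}
{code}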
[jira] [Resolved] (HIVE-4402) Support UPDATE statement
[ https://issues.apache.org/jira/browse/HIVE-4402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thomas Friedrich resolved HIVE-4402. Resolution: Duplicate Insert, update, and delete in Hive were implemented in Hive 0.14.0 with HIVE-5317. > Support UPDATE statement > - > > Key: HIVE-4402 > URL: https://issues.apache.org/jira/browse/HIVE-4402 > Project: Hive > Issue Type: New Feature >Reporter: Bing Li > > It would be good if Hive could support the UPDATE statement like a common database, > e.g. updating a row in place (editing rows and saving them back): > > cmd: >update "DB2ADMIN"."EMP" set "SALARY"=? where "EMPNO"=? and "DEPTNO"=? and > "SALARY"=? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
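Since HIVE-4402 is resolved as a duplicate of HIVE-5317, a sketch of what the supported UPDATE path looks like in Hive 0.14+ may be useful: ACID DML requires a bucketed ORC table with the transactional property, plus the DbTxnManager on the server side. Endpoint, table, and values below are illustrative.
{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class AcidUpdateExample {
  public static void main(String[] args) throws Exception {
    // Also assumes hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
    // and ACID concurrency settings are enabled on the server.
    try (Connection conn = DriverManager.getConnection(
            "jdbc:hive2://localhost:10000/default", "hive", "");
         Statement stmt = conn.createStatement()) {
      // ACID DML in Hive 0.14+ requires a bucketed ORC table
      // with the transactional property set.
      stmt.execute("CREATE TABLE IF NOT EXISTS emp (empno INT, deptno INT, salary DOUBLE) "
          + "CLUSTERED BY (empno) INTO 2 BUCKETS STORED AS ORC "
          + "TBLPROPERTIES ('transactional'='true')");
      // The bucketing column (empno) cannot be updated, but other columns can.
      stmt.execute("UPDATE emp SET salary = 50000.0 WHERE empno = 7 AND deptno = 10");
    }
  }
}
{code}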
[jira] [Resolved] (HIVE-3217) Implement HiveDatabaseMetaData.getFunctions() to retrieve registered UDFs.
[ https://issues.apache.org/jira/browse/HIVE-3217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thomas Friedrich resolved HIVE-3217. Resolution: Duplicate The getFunctions method in HiveDatabaseMetaData was implemented for HS2 with HIVE-2935. > Implement HiveDatabaseMetaData.getFunctions() to retrieve registered UDFs. > --- > > Key: HIVE-3217 > URL: https://issues.apache.org/jira/browse/HIVE-3217 > Project: Hive > Issue Type: Improvement > Components: JDBC >Affects Versions: 0.9.0 >Reporter: Richard Ding >Assignee: Richard Ding > Attachments: HIVE-3217.patch > > > Hive JDBC support currently throws UnsupportedException when getFunctions() > is called. Hive CL provides a SHOW FUNCTIONS command to return the names of > all registered UDFs. By getting a SQL Statement from the connection, > getFunctions can execute( "SHOW FUNCTIONS") to retrieve all the registered > functions (including those registered through create temporary function). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
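With the HS2 implementation from HIVE-2935, the JDBC metadata call described above can be exercised directly. A minimal sketch, equivalent in spirit to running SHOW FUNCTIONS from the CLI; connection details are placeholders.
{code}
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class ListUdfs {
  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection(
            "jdbc:hive2://localhost:10000/default", "hive", "")) {
      DatabaseMetaData meta = conn.getMetaData();
      // Lists registered functions, including temporary UDFs.
      try (ResultSet rs = meta.getFunctions(null, null, "%")) {
        while (rs.next()) {
          System.out.println(rs.getString("FUNCTION_NAME"));
        }
      }
    }
  }
}
{code}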
[jira] [Updated] (HIVE-10551) OOM when running query_89 with vectorization on & hybridgrace=false
[ https://issues.apache.org/jira/browse/HIVE-10551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rajesh Balamohan updated HIVE-10551: Attachment: hive-10551.png > OOM when running query_89 with vectorization on & hybridgrace=false > --- > > Key: HIVE-10551 > URL: https://issues.apache.org/jira/browse/HIVE-10551 > Project: Hive > Issue Type: Bug >Reporter: Rajesh Balamohan >Assignee: Vikram Dixit K > Attachments: HIVE-10551-explain-plan.log, hive-10551.png, > hive_10551.png > > > - TPC-DS Query_89 @ 10 TB scale > - Trunk version of Hive + Tez 0.7.0-SNAPSHOT > - Additional settings ( "hive.vectorized.groupby.maxentries=1024 , > tez.runtime.io.sort.factor=200 tez.runtime.io.sort.mb=1800 > hive.tez.container.size=4096 ,hive.mapjoin.hybridgrace.hashtable=false" ) > Will attach the profiler snapshot asap. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10551) OOM when running query_89 with vectorization on & hybridgrace=false
[ https://issues.apache.org/jira/browse/HIVE-10551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571791#comment-14571791 ] Rajesh Balamohan commented on HIVE-10551: - [~vikram.dixit] - OOM happens in Map-1 (tried again with Apr 29 build commit:0cad50a193ba777f9271808f057caae674738817, i.e., close to the date on which this JIRA was reported). This happens irrespective of container reuse. Posting the stacktrace of the OOM (but it could be misleading since it didn't have enough space to allocate to the sorter) {noformat} Error: Failure while running task:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:157) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:137) at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:345) at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:179) at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:171) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:171) at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:167) at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.OutOfMemoryError: Java heap space at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) at java.nio.ByteBuffer.allocate(ByteBuffer.java:335) at org.apache.tez.runtime.library.common.sort.impl.PipelinedSorter.<init>(PipelinedSorter.java:152) at org.apache.tez.runtime.library.common.sort.impl.PipelinedSorter.<init>(PipelinedSorter.java:117) at org.apache.tez.runtime.library.output.OrderedPartitionedKVOutput.start(OrderedPartitionedKVOutput.java:143) at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:141) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:147) ... 14 more {noformat} > OOM when running query_89 with vectorization on & hybridgrace=false > --- > > Key: HIVE-10551 > URL: https://issues.apache.org/jira/browse/HIVE-10551 > Project: Hive > Issue Type: Bug >Reporter: Rajesh Balamohan >Assignee: Vikram Dixit K > Attachments: HIVE-10551-explain-plan.log, hive_10551.png > > > - TPC-DS Query_89 @ 10 TB scale > - Trunk version of Hive + Tez 0.7.0-SNAPSHOT > - Additional settings ( "hive.vectorized.groupby.maxentries=1024 , > tez.runtime.io.sort.factor=200 tez.runtime.io.sort.mb=1800 > hive.tez.container.size=4096 ,hive.mapjoin.hybridgrace.hashtable=false" ) > Will attach the profiler snapshot asap. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
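A back-of-envelope sketch of why the eager PipelinedSorter allocation in the stack above can exhaust the heap under the repro settings: the sorter allocates its full io.sort.mb buffer in the constructor, before the vectorized group-by has released anything. The Xmx-to-container ratio below is an assumption, not a value from this issue.
{code}
public class SorterHeadroom {
  public static void main(String[] args) {
    long containerMb = 4096;   // hive.tez.container.size from the repro
    double heapFraction = 0.8; // assumed Xmx-to-container ratio, not from this issue
    long heapMb = (long) (containerMb * heapFraction); // ~3276 MB usable heap
    long sortMb = 1800;        // tez.runtime.io.sort.mb from the repro
    // Whatever the group-by and join hash tables already hold must fit
    // in the remainder, alongside the framework's own overhead.
    System.out.println("headroom after sorter: " + (heapMb - sortMb) + " MB");
  }
}
{code}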
[jira] [Assigned] (HIVE-10914) LLAP: fix hadoop-1 build for good by removing llap-server from hadoop-1 build
[ https://issues.apache.org/jira/browse/HIVE-10914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-10914: --- Assignee: Sergey Shelukhin > LLAP: fix hadoop-1 build for good by removing llap-server from hadoop-1 build > - > > Key: HIVE-10914 > URL: https://issues.apache.org/jira/browse/HIVE-10914 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Fix For: llap > > > LLAP won't ever work with hadoop 1, so no point in building it -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-10914) LLAP: fix hadoop-1 build for good by removing llap-server from hadoop-1 build
[ https://issues.apache.org/jira/browse/HIVE-10914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-10914: Fix Version/s: llap > LLAP: fix hadoop-1 build for good by removing llap-server from hadoop-1 build > - > > Key: HIVE-10914 > URL: https://issues.apache.org/jira/browse/HIVE-10914 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Fix For: llap > > > LLAP won't ever work with hadoop 1, so no point in building it -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-10880) The bucket number is not respected in insert overwrite.
[ https://issues.apache.org/jira/browse/HIVE-10880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yongzhi Chen updated HIVE-10880: Attachment: HIVE-10880.2.patch > The bucket number is not respected in insert overwrite. > --- > > Key: HIVE-10880 > URL: https://issues.apache.org/jira/browse/HIVE-10880 > Project: Hive > Issue Type: Bug >Affects Versions: 1.2.0 >Reporter: Yongzhi Chen >Assignee: Yongzhi Chen >Priority: Blocker > Attachments: HIVE-10880.1.patch, HIVE-10880.2.patch > > > When hive.enforce.bucketing is true, the bucket number defined in the table > is no longer respected in current master and 1.2. This is a regression. > Reproduce: > {noformat} > CREATE TABLE IF NOT EXISTS buckettestinput( > data string > ) > ROW FORMAT DELIMITED FIELDS TERMINATED BY ','; > CREATE TABLE IF NOT EXISTS buckettestoutput1( > data string > )CLUSTERED BY(data) > INTO 2 BUCKETS > ROW FORMAT DELIMITED FIELDS TERMINATED BY ','; > CREATE TABLE IF NOT EXISTS buckettestoutput2( > data string > )CLUSTERED BY(data) > INTO 2 BUCKETS > ROW FORMAT DELIMITED FIELDS TERMINATED BY ','; > Then I inserted the following data into the "buckettestinput" table > firstinsert1 > firstinsert2 > firstinsert3 > firstinsert4 > firstinsert5 > firstinsert6 > firstinsert7 > firstinsert8 > secondinsert1 > secondinsert2 > secondinsert3 > secondinsert4 > secondinsert5 > secondinsert6 > secondinsert7 > secondinsert8 > set hive.enforce.bucketing = true; > set hive.enforce.sorting=true; > insert overwrite table buckettestoutput1 > select * from buckettestinput where data like 'first%'; > set hive.auto.convert.sortmerge.join=true; > set hive.optimize.bucketmapjoin = true; > set hive.optimize.bucketmapjoin.sortedmerge = true; > select * from buckettestoutput1 a join buckettestoutput2 b on (a.data=b.data); > Error: Error while compiling statement: FAILED: SemanticException [Error > 10141]: Bucketed table metadata is not correct. Fix the metadata or don't use > bucketed mapjoin, by setting hive.enforce.bucketmapjoin to false. The number > of buckets for table buckettestoutput1 is 2, whereas the number of files is 1 > (state=42000,code=10141) > {noformat} > The related debug information related to insert overwrite: > {noformat} > 0: jdbc:hive2://localhost:1> insert overwrite table buckettestoutput1 > select * from buckettestinput where data like 'first%'insert overwrite table > buckettestoutput1 > 0: jdbc:hive2://localhost:1> ; > select * from buckettestinput where data like ' > first%'; > INFO : Number of reduce tasks determined at compile time: 2 > INFO : In order to change the average load for a reducer (in bytes): > INFO : set hive.exec.reducers.bytes.per.reducer= > INFO : In order to limit the maximum number of reducers: > INFO : set hive.exec.reducers.max= > INFO : In order to set a constant number of reducers: > INFO : set mapred.reduce.tasks= > INFO : Job running in-process (local Hadoop) > INFO : 2015-06-01 11:09:29,650 Stage-1 map = 86%, reduce = 100% > INFO : Ended Job = job_local107155352_0001 > INFO : Loading data to table default.buckettestoutput1 from > file:/user/hive/warehouse/buckettestoutput1/.hive-staging_hive_2015-06-01_11-09-28_166_3109203968904090801-1/-ext-1 > INFO : Table default.buckettestoutput1 stats: [numFiles=1, numRows=4, > totalSize=52, rawDataSize=48] > No rows affected (1.692 seconds) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10880) The bucket number is not respected in insert overwrite.
[ https://issues.apache.org/jira/browse/HIVE-10880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571767#comment-14571767 ] Yongzhi Chen commented on HIVE-10880: - Attach second patch to fix the test failures. > The bucket number is not respected in insert overwrite. > --- > > Key: HIVE-10880 > URL: https://issues.apache.org/jira/browse/HIVE-10880 > Project: Hive > Issue Type: Bug >Affects Versions: 1.2.0 >Reporter: Yongzhi Chen >Assignee: Yongzhi Chen >Priority: Blocker > Attachments: HIVE-10880.1.patch > > > When hive.enforce.bucketing is true, the bucket number defined in the table > is no longer respected in current master and 1.2. This is a regression. > Reproduce: > {noformat} > CREATE TABLE IF NOT EXISTS buckettestinput( > data string > ) > ROW FORMAT DELIMITED FIELDS TERMINATED BY ','; > CREATE TABLE IF NOT EXISTS buckettestoutput1( > data string > )CLUSTERED BY(data) > INTO 2 BUCKETS > ROW FORMAT DELIMITED FIELDS TERMINATED BY ','; > CREATE TABLE IF NOT EXISTS buckettestoutput2( > data string > )CLUSTERED BY(data) > INTO 2 BUCKETS > ROW FORMAT DELIMITED FIELDS TERMINATED BY ','; > Then I inserted the following data into the "buckettestinput" table > firstinsert1 > firstinsert2 > firstinsert3 > firstinsert4 > firstinsert5 > firstinsert6 > firstinsert7 > firstinsert8 > secondinsert1 > secondinsert2 > secondinsert3 > secondinsert4 > secondinsert5 > secondinsert6 > secondinsert7 > secondinsert8 > set hive.enforce.bucketing = true; > set hive.enforce.sorting=true; > insert overwrite table buckettestoutput1 > select * from buckettestinput where data like 'first%'; > set hive.auto.convert.sortmerge.join=true; > set hive.optimize.bucketmapjoin = true; > set hive.optimize.bucketmapjoin.sortedmerge = true; > select * from buckettestoutput1 a join buckettestoutput2 b on (a.data=b.data); > Error: Error while compiling statement: FAILED: SemanticException [Error > 10141]: Bucketed table metadata is not correct. Fix the metadata or don't use > bucketed mapjoin, by setting hive.enforce.bucketmapjoin to false. The number > of buckets for table buckettestoutput1 is 2, whereas the number of files is 1 > (state=42000,code=10141) > {noformat} > The related debug information related to insert overwrite: > {noformat} > 0: jdbc:hive2://localhost:1> insert overwrite table buckettestoutput1 > select * from buckettestinput where data like 'first%'insert overwrite table > buckettestoutput1 > 0: jdbc:hive2://localhost:1> ; > select * from buckettestinput where data like ' > first%'; > INFO : Number of reduce tasks determined at compile time: 2 > INFO : In order to change the average load for a reducer (in bytes): > INFO : set hive.exec.reducers.bytes.per.reducer= > INFO : In order to limit the maximum number of reducers: > INFO : set hive.exec.reducers.max= > INFO : In order to set a constant number of reducers: > INFO : set mapred.reduce.tasks= > INFO : Job running in-process (local Hadoop) > INFO : 2015-06-01 11:09:29,650 Stage-1 map = 86%, reduce = 100% > INFO : Ended Job = job_local107155352_0001 > INFO : Loading data to table default.buckettestoutput1 from > file:/user/hive/warehouse/buckettestoutput1/.hive-staging_hive_2015-06-01_11-09-28_166_3109203968904090801-1/-ext-1 > INFO : Table default.buckettestoutput1 stats: [numFiles=1, numRows=4, > totalSize=52, rawDataSize=48] > No rows affected (1.692 seconds) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
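The SemanticException in the HIVE-10880 repro fires because the table declares 2 buckets but only 1 data file was written. Below is a small sketch of the same consistency check, counting data files in the table directory against the declared bucket count; the warehouse path is taken from the log above, and the check itself is illustrative, not Hive's implementation.
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BucketFileCheck {
  public static void main(String[] args) throws Exception {
    int declaredBuckets = 2; // INTO 2 BUCKETS from the DDL above
    // Assumes fs.defaultFS points at the filesystem holding the warehouse.
    Path tableDir = new Path("/user/hive/warehouse/buckettestoutput1");
    FileSystem fs = FileSystem.get(new Configuration());
    int files = 0;
    for (FileStatus st : fs.listStatus(tableDir)) {
      if (!st.getPath().getName().startsWith(".")) {
        files++; // skip hidden entries such as .hive-staging directories
      }
    }
    System.out.printf("declared buckets=%d, data files=%d%n", declaredBuckets, files);
  }
}
{code}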
[jira] [Resolved] (HIVE-3121) JDBC driver's getCatalogs() method returns schema/db information
[ https://issues.apache.org/jira/browse/HIVE-3121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Richard Ding resolved HIVE-3121. Resolution: Duplicate This was fixed as part of HIVE-2935 (HS2 implementation) > JDBC driver's getCatalogs() method returns schema/db information > > > Key: HIVE-3121 > URL: https://issues.apache.org/jira/browse/HIVE-3121 > Project: Hive > Issue Type: Bug > Components: JDBC >Affects Versions: 0.9.0 >Reporter: Carl Steinbach >Assignee: Richard Ding > Attachments: hive-3121.patch, hive-3121_1.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-10872) LLAP: make sure tests pass
[ https://issues.apache.org/jira/browse/HIVE-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-10872: Attachment: HIVE-10872.02.patch Disabling llap-server project for hadoop-1 before we decide on the course of action for this matter > LLAP: make sure tests pass > -- > > Key: HIVE-10872 > URL: https://issues.apache.org/jira/browse/HIVE-10872 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-10872.01.patch, HIVE-10872.02.patch, > HIVE-10872.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-10907) Hive on Tez: Classcast exception in some cases with SMB joins
[ https://issues.apache.org/jira/browse/HIVE-10907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vikram Dixit K updated HIVE-10907: -- Attachment: HIVE-10907.3.patch > Hive on Tez: Classcast exception in some cases with SMB joins > - > > Key: HIVE-10907 > URL: https://issues.apache.org/jira/browse/HIVE-10907 > Project: Hive > Issue Type: Bug >Reporter: Vikram Dixit K >Assignee: Vikram Dixit K > Attachments: HIVE-10907.1.patch, HIVE-10907.2.patch, > HIVE-10907.3.patch > > > In cases where there is a mix of Map side work and reduce side work, we get a > classcast exception because we assume homogeneity in the code. We need to fix > this correctly. For now this is a workaround. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-10894) LLAP: make sure the branch builds on hadoop-1: part 1 (non-llap)
[ https://issues.apache.org/jira/browse/HIVE-10894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-10894: Summary: LLAP: make sure the branch builds on hadoop-1: part 1 (non-llap) (was: LLAP: make sure the branch builds on hadoop-1) > LLAP: make sure the branch builds on hadoop-1: part 1 (non-llap) > > > Key: HIVE-10894 > URL: https://issues.apache.org/jira/browse/HIVE-10894 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Fix For: llap > > > for HIVE-10872 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10892) TestHCatClient should not accept external metastore param from -Dhive.metastore.uris
[ https://issues.apache.org/jira/browse/HIVE-10892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571717#comment-14571717 ] Sushanth Sowmyan commented on HIVE-10892: - The linked test failure is unrelated; will go ahead and commit. Thanks for the review, Thejas. > TestHCatClient should not accept external metastore param from > -Dhive.metastore.uris > > > Key: HIVE-10892 > URL: https://issues.apache.org/jira/browse/HIVE-10892 > Project: Hive > Issue Type: Bug > Components: Tests >Affects Versions: 1.2.0 >Reporter: Sushanth Sowmyan >Assignee: Sushanth Sowmyan >Priority: Minor > Attachments: HIVE-10892.patch > > > HIVE-10074 added the ability to specify a -Dhive.metastore.uris from the > commandline, so as to run the test against a deployed metastore. > However, because of the way HiveConf is written, this results in that > parameter always overriding any value specified in the conf passed into it > for instantiation, since it accepts System Var Overrides. This causes some > tests, notably those that attempt to connect between two metastores > (such as TestHCatClient#testPartitionRegistrationWithCustomSchema), to fail. > Fixing this in HiveConf is not a good idea, since that behaviour is desired > for HiveConf. Fixing this in HCatUtil.getHiveConf doesn't really work either, > since that is a utility wrapper on HiveConf, and is supposed to behave > similarly. Thus, the fix for this then becomes something to use in all our > testcases, where we instantiate Configuration objects. It seems more > appropriate to change the parameter we use to specify test parameters then, > than to change each config object. > Thus, we should change semantics for running this test against an external > metastore by specifying the override in a different parameter name, say > test.hive.metastore.uris, instead of hive.metastore.uris, which has a > specific meaning. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
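A sketch of the pattern the HIVE-10892 description proposes: read the external metastore URI from a test-only property so HiveConf's system-variable override of hive.metastore.uris cannot clobber per-test values. This illustrates the idea, not the attached patch.
{code}
import org.apache.hadoop.hive.conf.HiveConf;

public class TestMetastoreConf {
  // Test-only property name proposed in this issue; avoids colliding
  // with hive.metastore.uris, which HiveConf applies as a global override.
  private static final String TEST_URIS_PROP = "test.hive.metastore.uris";

  static HiveConf buildConf(String defaultUris) {
    HiveConf conf = new HiveConf();
    // Explicitly set the conf value after construction, so the choice of
    // URI is under the test's control rather than the JVM's -D flags.
    String uris = System.getProperty(TEST_URIS_PROP, defaultUris);
    conf.setVar(HiveConf.ConfVars.METASTOREURIS, uris);
    return conf;
  }

  public static void main(String[] args) {
    HiveConf conf = buildConf("thrift://localhost:9083");
    System.out.println(conf.getVar(HiveConf.ConfVars.METASTOREURIS));
  }
}
{code}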
[jira] [Resolved] (HIVE-10912) LLAP: Exception in InputInitializer when creating HiveSplitGenerator
[ https://issues.apache.org/jira/browse/HIVE-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin resolved HIVE-10912. - Resolution: Duplicate > LLAP: Exception in InputInitializer when creating HiveSplitGenerator > > > Key: HIVE-10912 > URL: https://issues.apache.org/jira/browse/HIVE-10912 > Project: Hive > Issue Type: Sub-task >Reporter: Siddharth Seth > > {code} > 2015-06-03 13:46:32,212 ERROR [Dispatcher thread: Central] exec.Utilities: > Failed to load plan: > hdfs://localhost:8020/tmp/hive/sseth/9c4ce145-f7f4-49c4-a615-28ce154f7f1d/hive_2015-06-03_13-46-29_283_23518 > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.exec.GlobalWorkMapFactory.get(GlobalWorkMapFactory.java:85) > at org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:389) > at org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:299) > at > org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.(HiveSplitGenerator.java:94) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:526) > at > org.apache.tez.common.ReflectionUtils.getNewInstance(ReflectionUtils.java:69) > at > org.apache.tez.common.ReflectionUtils.createClazzInstance(ReflectionUtils.java:98) > at > org.apache.tez.dag.app.dag.RootInputInitializerManager.createInitializer(RootInputInitializerManager.java:137) > at > org.apache.tez.dag.app.dag.RootInputInitializerManager.runInputInitializers(RootInputInitializerManager.java:114) > at > org.apache.tez.dag.app.dag.impl.VertexImpl.setupInputInitializerManager(VertexImpl.java:4422) > at > org.apache.tez.dag.app.dag.impl.VertexImpl.access$4300(VertexImpl.java:200) > at > org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.handleInitEvent(VertexImpl.java:3271) > at > org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.transition(VertexImpl.java:3221) > at > org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.transition(VertexImpl.java:3202) > at > org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385) > at > org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302) > at > org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46) > at > org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448) > at > org.apache.tez.state.StateMachineTez.doTransition(StateMachineTez.java:57) > at org.apache.tez.dag.app.dag.impl.VertexImpl.handle(VertexImpl.java:1850) > at org.apache.tez.dag.app.dag.impl.VertexImpl.handle(VertexImpl.java:199) > at > org.apache.tez.dag.app.DAGAppMaster$VertexEventDispatcher.handle(DAGAppMaster.java:2001) > at > org.apache.tez.dag.app.DAGAppMaster$VertexEventDispatcher.handle(DAGAppMaster.java:1987) > at org.apache.tez.common.AsyncDispatcher.dispatch(AsyncDispatcher.java:183) > at org.apache.tez.common.AsyncDispatcher$1.run(AsyncDispatcher.java:114) > at java.lang.Thread.run(Thread.java:745) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HIVE-10913) LLAP: cache QF counters have a wrong value for consumer time
[ https://issues.apache.org/jira/browse/HIVE-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin resolved HIVE-10913. - Resolution: Fixed > LLAP: cache QF counters have a wrong value for consumer time > > > Key: HIVE-10913 > URL: https://issues.apache.org/jira/browse/HIVE-10913 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Fix For: llap > > > Also not enough data -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-10913) LLAP: cache QF counters have a wrong value for consumer time
[ https://issues.apache.org/jira/browse/HIVE-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-10913: Summary: LLAP: cache QF counters have a wrong value for consumer time (was: LLAP: cache QF counters have a wrong counters ) > LLAP: cache QF counters have a wrong value for consumer time > > > Key: HIVE-10913 > URL: https://issues.apache.org/jira/browse/HIVE-10913 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Fix For: llap > > > Also not enough data -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10907) Hive on Tez: Classcast exception in some cases with SMB joins
[ https://issues.apache.org/jira/browse/HIVE-10907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571702#comment-14571702 ] Hive QA commented on HIVE-10907: {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12737356/HIVE-10907.1.patch {color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 8992 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_autogen_colalias org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_nondeterministic org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_tez_smb_1 org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx_cbo_2 {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4159/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4159/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-4159/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 4 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12737356 - PreCommit-HIVE-TRUNK-Build > Hive on Tez: Classcast exception in some cases with SMB joins > - > > Key: HIVE-10907 > URL: https://issues.apache.org/jira/browse/HIVE-10907 > Project: Hive > Issue Type: Bug >Reporter: Vikram Dixit K >Assignee: Vikram Dixit K > Attachments: HIVE-10907.1.patch, HIVE-10907.2.patch > > > In cases where there is a mix of Map side work and reduce side work, we get a > classcast exception because we assume homogeneity in the code. We need to fix > this correctly. For now this is a workaround. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-10913) LLAP: cache QF counters have a wrong counters
[ https://issues.apache.org/jira/browse/HIVE-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-10913: Fix Version/s: llap > LLAP: cache QF counters have a wrong counters > -- > > Key: HIVE-10913 > URL: https://issues.apache.org/jira/browse/HIVE-10913 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Fix For: llap > > > Also not enough data -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HIVE-10913) LLAP: cache QF counters have a wrong counters
[ https://issues.apache.org/jira/browse/HIVE-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-10913: --- Assignee: Sergey Shelukhin > LLAP: cache QF counters have a wrong counters > -- > > Key: HIVE-10913 > URL: https://issues.apache.org/jira/browse/HIVE-10913 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Fix For: llap > > > Also not enough data -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10909) Make TestFilterHooks robust
[ https://issues.apache.org/jira/browse/HIVE-10909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571695#comment-14571695 ] Thejas M Nair commented on HIVE-10909: -- +1 > Make TestFilterHooks robust > --- > > Key: HIVE-10909 > URL: https://issues.apache.org/jira/browse/HIVE-10909 > Project: Hive > Issue Type: Test > Components: Metastore, Tests >Affects Versions: 1.2.0 >Reporter: Hari Sankar Sivarama Subramaniyan >Assignee: Ashutosh Chauhan > Attachments: HIVE-10909.patch > > > Currently it fails sometimes when run in sequential order because of left > over state from previous tests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HIVE-9814) LLAP: JMX web-service end points for monitoring & metrics
[ https://issues.apache.org/jira/browse/HIVE-9814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin resolved HIVE-9814. Resolution: Fixed Fix Version/s: llap committed to branch > LLAP: JMX web-service end points for monitoring & metrics > - > > Key: HIVE-9814 > URL: https://issues.apache.org/jira/browse/HIVE-9814 > Project: Hive > Issue Type: Sub-task > Components: Diagnosability >Affects Versions: llap >Reporter: Gopal V >Assignee: Gopal V > Fix For: llap > > Attachments: HIVE-9814.patch, HIVE-9814.wip1.patch, > HIVE-9814.wip2.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10735) LLAP: Cached plan race condition - VectorMapJoinCommonOperator has no closeOp()
[ https://issues.apache.org/jira/browse/HIVE-10735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571663#comment-14571663 ] Matt McCline commented on HIVE-10735: - Don't know why the test results haven't been posted, but http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4156/testReport/ shows 3 failures unrelated to this change... > LLAP: Cached plan race condition - VectorMapJoinCommonOperator has no > closeOp() > --- > > Key: HIVE-10735 > URL: https://issues.apache.org/jira/browse/HIVE-10735 > Project: Hive > Issue Type: Sub-task > Components: Vectorization >Reporter: Gopal V >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-10705.01.patch, HIVE-10705.02.patch > > > Looks like some state is mutated during execution across threads in LLAP. > Either we can't share the operator objects across threads, because they are > tied to the data objects per invocation or this is missing a closeOp() which > resets the common-setup between reuses. > {code} > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: > org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.ArrayIndexOutOfBoundsException > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerBigOnlyLongOperator.process(VectorMapJoinInnerBigOnlyLongOperator.java:380) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:850) > at > org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator.process(VectorFilterOperator.java:114) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:850) > at > org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:97) > at > org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:164) > at > org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:45) > ... 18 more > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.ArrayIndexOutOfBoundsException > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerLongOperator.process(VectorMapJoinInnerLongOperator.java:379) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:850) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.forwardBigTableBatch(VectorMapJoinGenerateResultOperator.java:599) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerBigOnlyGenerateResultOperator.generateHashMultiSetResultRepeatedAll(VectorMapJoinInnerBigOnlyGenerateResultOperator.java:304) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerBigOnlyGenerateResultOperator.finishInnerBigOnlyRepeated(VectorMapJoinInnerBigOnlyGenerateResultOperator.java:328) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerBigOnlyLongOperator.process(VectorMapJoinInnerBigOnlyLongOperator.java:201) > ... 
24 more > Caused by: java.lang.ArrayIndexOutOfBoundsException > at > org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector.setVal(BytesColumnVector.java:152) > at > org.apache.hadoop.hive.ql.exec.vector.VectorDeserializeRow$StringReaderByValue.apply(VectorDeserializeRow.java:349) > at > org.apache.hadoop.hive.ql.exec.vector.VectorDeserializeRow.deserializeByValue(VectorDeserializeRow.java:688) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.generateHashMapResultSingleValue(VectorMapJoinGenerateResultOperator.java:177) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerGenerateResultOperator.finishInner(VectorMapJoinInnerGenerateResultOperator.java:201) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerLongOperator.process(VectorMapJoinInnerLongOperator.java:359) > ... 29 more > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-10907) Hive on Tez: Classcast exception in some cases with SMB joins
[ https://issues.apache.org/jira/browse/HIVE-10907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vikram Dixit K updated HIVE-10907: -- Attachment: HIVE-10907.2.patch > Hive on Tez: Classcast exception in some cases with SMB joins > - > > Key: HIVE-10907 > URL: https://issues.apache.org/jira/browse/HIVE-10907 > Project: Hive > Issue Type: Bug >Reporter: Vikram Dixit K >Assignee: Vikram Dixit K > Attachments: HIVE-10907.1.patch, HIVE-10907.2.patch > > > In cases where there is a mix of Map side work and reduce side work, we get a > classcast exception because we assume homogeneity in the code. We need to fix > this correctly. For now this is a workaround. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10735) LLAP: Cached plan race condition - VectorMapJoinCommonOperator has no closeOp()
[ https://issues.apache.org/jira/browse/HIVE-10735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571616#comment-14571616 ] Sergey Shelukhin commented on HIVE-10735: - +1 > LLAP: Cached plan race condition - VectorMapJoinCommonOperator has no > closeOp() > --- > > Key: HIVE-10735 > URL: https://issues.apache.org/jira/browse/HIVE-10735 > Project: Hive > Issue Type: Sub-task > Components: Vectorization >Reporter: Gopal V >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-10705.01.patch, HIVE-10705.02.patch > > > Looks like some state is mutated during execution across threads in LLAP. > Either we can't share the operator objects across threads, because they are > tied to the data objects per invocation or this is missing a closeOp() which > resets the common-setup between reuses. > {code} > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: > org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.ArrayIndexOutOfBoundsException > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerBigOnlyLongOperator.process(VectorMapJoinInnerBigOnlyLongOperator.java:380) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:850) > at > org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator.process(VectorFilterOperator.java:114) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:850) > at > org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:97) > at > org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:164) > at > org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:45) > ... 18 more > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.ArrayIndexOutOfBoundsException > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerLongOperator.process(VectorMapJoinInnerLongOperator.java:379) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:850) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.forwardBigTableBatch(VectorMapJoinGenerateResultOperator.java:599) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerBigOnlyGenerateResultOperator.generateHashMultiSetResultRepeatedAll(VectorMapJoinInnerBigOnlyGenerateResultOperator.java:304) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerBigOnlyGenerateResultOperator.finishInnerBigOnlyRepeated(VectorMapJoinInnerBigOnlyGenerateResultOperator.java:328) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerBigOnlyLongOperator.process(VectorMapJoinInnerBigOnlyLongOperator.java:201) > ... 
24 more > Caused by: java.lang.ArrayIndexOutOfBoundsException > at > org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector.setVal(BytesColumnVector.java:152) > at > org.apache.hadoop.hive.ql.exec.vector.VectorDeserializeRow$StringReaderByValue.apply(VectorDeserializeRow.java:349) > at > org.apache.hadoop.hive.ql.exec.vector.VectorDeserializeRow.deserializeByValue(VectorDeserializeRow.java:688) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.generateHashMapResultSingleValue(VectorMapJoinGenerateResultOperator.java:177) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerGenerateResultOperator.finishInner(VectorMapJoinInnerGenerateResultOperator.java:201) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerLongOperator.process(VectorMapJoinInnerLongOperator.java:359) > ... 29 more > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9664) Hive "add jar" command should be able to download and add jars from a repository
[ https://issues.apache.org/jira/browse/HIVE-9664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571603#comment-14571603 ] Anthony Hsu commented on HIVE-9664: --- Looks good. One question: Can you mix Ivy and other URIs in the same ADD JAR command? Something like: {code} ADD JARS ivy://... hdfs://...; {code} > Hive "add jar" command should be able to download and add jars from a > repository > > > Key: HIVE-9664 > URL: https://issues.apache.org/jira/browse/HIVE-9664 > Project: Hive > Issue Type: Improvement >Affects Versions: 0.14.0 >Reporter: Anant Nag >Assignee: Anant Nag > Labels: TODOC1.2, hive, patch > Fix For: 1.2.0 > > Attachments: HIVE-9664.4.patch, HIVE-9664.5.patch, HIVE-9664.patch, > HIVE-9664.patch, HIVE-9664.patch > > > Currently Hive's "add jar" command takes a local path to the dependency jar. > This clutters the local file-system as users may forget to remove this jar > later > It would be nice if Hive supported a Gradle like notation to download the jar > from a repository. > Example: add jar org:module:version > > It should also be backward compatible and should take jar from the local > file-system as well. > RB: https://reviews.apache.org/r/31628/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
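Whether Ivy and other URI schemes can be mixed in one ADD JAR command is exactly the open question in the HIVE-9664 comment above, so the sketch below issues them as separate statements, which works either way. The coordinates, paths, and endpoint are placeholders.
{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class AddJarSchemes {
  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection(
            "jdbc:hive2://localhost:10000/default", "hive", "");
         Statement stmt = conn.createStatement()) {
      // Ivy coordinates (org:module:version) resolved from a repository,
      // per the notation this issue adds.
      stmt.execute("ADD JAR ivy://org.example:example-udfs:1.0");
      // A plain filesystem URI issued as its own statement, sidestepping
      // the mixed-scheme question raised above.
      stmt.execute("ADD JAR hdfs:///tmp/jars/more-udfs.jar");
    }
  }
}
{code}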
[jira] [Commented] (HIVE-10735) LLAP: Cached plan race condition - VectorMapJoinCommonOperator has no closeOp()
[ https://issues.apache.org/jira/browse/HIVE-10735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571592#comment-14571592 ] Matt McCline commented on HIVE-10735: - Use of the thread-safe position has been internalized into the hash map Result class -- that is, as long as you pre-allocate a Result object per thread the usage is safe. > LLAP: Cached plan race condition - VectorMapJoinCommonOperator has no > closeOp() > --- > > Key: HIVE-10735 > URL: https://issues.apache.org/jira/browse/HIVE-10735 > Project: Hive > Issue Type: Sub-task > Components: Vectorization >Reporter: Gopal V >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-10705.01.patch, HIVE-10705.02.patch > > > Looks like some state is mutated during execution across threads in LLAP. > Either we can't share the operator objects across threads, because they are > tied to the data objects per invocation or this is missing a closeOp() which > resets the common-setup between reuses. > {code} > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: > org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.ArrayIndexOutOfBoundsException > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerBigOnlyLongOperator.process(VectorMapJoinInnerBigOnlyLongOperator.java:380) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:850) > at > org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator.process(VectorFilterOperator.java:114) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:850) > at > org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:97) > at > org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:164) > at > org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:45) > ... 18 more > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.ArrayIndexOutOfBoundsException > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerLongOperator.process(VectorMapJoinInnerLongOperator.java:379) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:850) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.forwardBigTableBatch(VectorMapJoinGenerateResultOperator.java:599) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerBigOnlyGenerateResultOperator.generateHashMultiSetResultRepeatedAll(VectorMapJoinInnerBigOnlyGenerateResultOperator.java:304) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerBigOnlyGenerateResultOperator.finishInnerBigOnlyRepeated(VectorMapJoinInnerBigOnlyGenerateResultOperator.java:328) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerBigOnlyLongOperator.process(VectorMapJoinInnerBigOnlyLongOperator.java:201) > ... 
24 more > Caused by: java.lang.ArrayIndexOutOfBoundsException > at > org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector.setVal(BytesColumnVector.java:152) > at > org.apache.hadoop.hive.ql.exec.vector.VectorDeserializeRow$StringReaderByValue.apply(VectorDeserializeRow.java:349) > at > org.apache.hadoop.hive.ql.exec.vector.VectorDeserializeRow.deserializeByValue(VectorDeserializeRow.java:688) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.generateHashMapResultSingleValue(VectorMapJoinGenerateResultOperator.java:177) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerGenerateResultOperator.finishInner(VectorMapJoinInnerGenerateResultOperator.java:201) > at > org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerLongOperator.process(VectorMapJoinInnerLongOperator.java:359) > ... 29 more > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
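The HIVE-10735 fix described above amounts to keeping mutable probe state in a per-thread, pre-allocated Result object rather than on the shared operator or hash table. A generic Java sketch of that pattern follows; the class names are illustrative, not Hive's.
{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PerThreadResult {
  // Shared, read-only hash table: safe to probe from many threads.
  static final Map<Long, String> hashMap = new ConcurrentHashMap<>();

  // Mutable cursor state lives in a Result each thread owns, so probes
  // from different threads never stomp on each other's read position.
  static final class Result {
    String value;
    int position; // per-thread cursor, not a field on the shared map
  }

  static final ThreadLocal<Result> result = ThreadLocal.withInitial(Result::new);

  static String lookup(long key) {
    Result r = result.get(); // pre-allocated, one per thread
    r.value = hashMap.get(key);
    r.position = 0;
    return r.value;
  }

  public static void main(String[] args) {
    hashMap.put(42L, "row");
    System.out.println(lookup(42L));
  }
}
{code}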
[jira] [Updated] (HIVE-9814) LLAP: JMX web-service end points for monitoring & metrics
[ https://issues.apache.org/jira/browse/HIVE-9814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-9814: --- Attachment: HIVE-9814.patch Minor changes to the patch, it works. [~gopalv] anything else you want to add before committing this? > LLAP: JMX web-service end points for monitoring & metrics > - > > Key: HIVE-9814 > URL: https://issues.apache.org/jira/browse/HIVE-9814 > Project: Hive > Issue Type: Sub-task > Components: Diagnosability >Affects Versions: llap >Reporter: Gopal V >Assignee: Gopal V > Attachments: HIVE-9814.patch, HIVE-9814.wip1.patch, > HIVE-9814.wip2.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-10909) Make TestFilterHooks robust
[ https://issues.apache.org/jira/browse/HIVE-10909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-10909: Attachment: HIVE-10909.patch Reset location for derby database at beginning of test run. > Make TestFilterHooks robust > --- > > Key: HIVE-10909 > URL: https://issues.apache.org/jira/browse/HIVE-10909 > Project: Hive > Issue Type: Test > Components: Metastore, Tests >Affects Versions: 1.2.0 >Reporter: Hari Sankar Sivarama Subramaniyan >Assignee: Ashutosh Chauhan > Attachments: HIVE-10909.patch > > > Currently it fails sometimes when run in sequential order because of left > over state from previous tests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
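A sketch of what resetting the derby location before the suite can look like: point javax.jdo.option.ConnectionURL at a fresh directory in a @BeforeClass hook, so no state from earlier tests leaks in. This illustrates the approach, not the attached patch.
{code}
import org.apache.hadoop.hive.conf.HiveConf;
import org.junit.BeforeClass;

public class TestFilterHooksStyle {
  @BeforeClass
  public static void resetMetastoreDb() {
    // Point the embedded derby metastore at a fresh directory so the
    // suite never sees leftover state from earlier test runs.
    String dbDir = System.getProperty("java.io.tmpdir")
        + "/metastore_db_" + System.nanoTime();
    System.setProperty(HiveConf.ConfVars.METASTORECONNECTURLKEY.varname,
        "jdbc:derby:;databaseName=" + dbDir + ";create=true");
  }
}
{code}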
[jira] [Commented] (HIVE-10898) CAST AS BIGINT produces wrong value
[ https://issues.apache.org/jira/browse/HIVE-10898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571579#comment-14571579 ] Thejas M Nair commented on HIVE-10898: -- note that hive-on-spark (hive with spark as execution engine) is different from spark-sql (spark project having parts of hive code copied into it). > CAST AS BIGINT produces wrong value > > > Key: HIVE-10898 > URL: https://issues.apache.org/jira/browse/HIVE-10898 > Project: Hive > Issue Type: Bug >Affects Versions: 0.13.1 > Environment: Hive running on Spark in Standalone mode >Reporter: Andrey Kurochkin >Priority: Critical > > Example Query: > SELECT CAST("775983671874188101" as BIGINT) > Produces: 775983671874188160L > Note: last 2 digits. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10898) CAST AS BIGINT produces wrong value
[ https://issues.apache.org/jira/browse/HIVE-10898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571576#comment-14571576 ] Thejas M Nair commented on HIVE-10898: -- spark-sql bug is not a hive bug, can you please open a jira for spark project ? > CAST AS BIGINT produces wrong value > > > Key: HIVE-10898 > URL: https://issues.apache.org/jira/browse/HIVE-10898 > Project: Hive > Issue Type: Bug >Affects Versions: 0.13.1 > Environment: Hive running on Spark in Standalone mode >Reporter: Andrey Kurochkin >Priority: Critical > > Example Query: > SELECT CAST("775983671874188101" as BIGINT) > Produces: 775983671874188160L > Note: last 2 digits. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
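The reported value is exactly what falls out if the string is routed through a double on its way to bigint, which supports this being a spark-sql conversion issue rather than anything Hive-specific. A self-contained demonstration in plain Java:
{code}
public class BigintCastPrecision {
  public static void main(String[] args) {
    String s = "775983671874188101";
    long direct = Long.parseLong(s);              // exact
    long viaDouble = (long) Double.parseDouble(s); // loses the low digits
    // A double has a 53-bit mantissa; values near 7.76e17 are spaced
    // 128 apart, so the literal rounds to the nearest representable one.
    System.out.println(direct);    // 775983671874188101
    System.out.println(viaDouble); // 775983671874188160 -- the reported value
  }
}
{code}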
[jira] [Resolved] (HIVE-10898) CAST AS BIGINT produces wrong value
[ https://issues.apache.org/jira/browse/HIVE-10898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair resolved HIVE-10898. -- Resolution: Invalid > CAST AS BIGINT produces wrong value > > > Key: HIVE-10898 > URL: https://issues.apache.org/jira/browse/HIVE-10898 > Project: Hive > Issue Type: Bug >Affects Versions: 0.13.1 > Environment: Hive running on Spark in Standalone mode >Reporter: Andrey Kurochkin >Priority: Critical > > Example Query: > SELECT CAST("775983671874188101" as BIGINT) > Produces: 775983671874188160L > Note: last 2 digits. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9814) LLAP: JMX web-service end points for monitoring & metrics
[ https://issues.apache.org/jira/browse/HIVE-9814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-9814: --- Attachment: (was: HIVE-9814.wip2.patch) > LLAP: JMX web-service end points for monitoring & metrics > - > > Key: HIVE-9814 > URL: https://issues.apache.org/jira/browse/HIVE-9814 > Project: Hive > Issue Type: Sub-task > Components: Diagnosability >Affects Versions: llap >Reporter: Gopal V >Assignee: Gopal V > Attachments: HIVE-9814.wip1.patch, HIVE-9814.wip2.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9814) LLAP: JMX web-service end points for monitoring & metrics
[ https://issues.apache.org/jira/browse/HIVE-9814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-9814: --- Attachment: (was: HIVE-9814.wip2.patch) > LLAP: JMX web-service end points for monitoring & metrics > - > > Key: HIVE-9814 > URL: https://issues.apache.org/jira/browse/HIVE-9814 > Project: Hive > Issue Type: Sub-task > Components: Diagnosability >Affects Versions: llap >Reporter: Gopal V >Assignee: Gopal V > Attachments: HIVE-9814.wip1.patch, HIVE-9814.wip2.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9814) LLAP: JMX web-service end points for monitoring & metrics
[ https://issues.apache.org/jira/browse/HIVE-9814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-9814: --- Attachment: HIVE-9814.wip2.patch > LLAP: JMX web-service end points for monitoring & metrics > - > > Key: HIVE-9814 > URL: https://issues.apache.org/jira/browse/HIVE-9814 > Project: Hive > Issue Type: Sub-task > Components: Diagnosability >Affects Versions: llap >Reporter: Gopal V >Assignee: Gopal V > Attachments: HIVE-9814.wip1.patch, HIVE-9814.wip2.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10068) LLAP: adjust allocation after decompression
[ https://issues.apache.org/jira/browse/HIVE-10068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571562#comment-14571562 ] Sergey Shelukhin commented on HIVE-10068: - Update from some test runs on TPC-DS and TPC-H queries: we waste around 15% of allocated memory due to buddy allocator granularity: {noformat} $ sed -E "s/.*ALLOCATED_BYTES=([0-9]+).*/\1/" lrfu1.log | awk '{s+=$1}END{print s}' 278162046976 $ sed -E "s/.*ALLOCATED_USED_BYTES=([0-9]+).*/\1/" lrfu1.log | awk '{s+=$1}END{print s}' 238565954908 {noformat} Some of that is obviously unavoidable, but some could be avoided by implementing this. However, it's not as bad as I expected (bad results can be seen on very small datasets where stripes/RGs are routinely smaller than the compression block size). > LLAP: adjust allocation after decompression > --- > > Key: HIVE-10068 > URL: https://issues.apache.org/jira/browse/HIVE-10068 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin > > We don't know the decompressed size of a compression buffer in ORC; all we know > is the file-level compression buffer size. For many files, compression > buffers can be smaller than that because of compact encoding, or because the > compression block ends for other reasons (different streams, etc.; "present" > streams, for example, are very small). > BuddyAllocator should be able to accept back parts of the allocated memory > (e.g. allocate 256KB with a minimum allocation of 32KB, decompress 45KB, return > the last 192KB as 64+128KB). For generality (this depends on the implementation), > we can make an API like "offer", and the allocator can decide to take back > however much it can. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
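(For reference, the two totals above imply 1 - 238565954908/278162046976, i.e. about 14% of allocated bytes unused, consistent with the quoted figure.) The example in the description can be made concrete: a 256KB allocation with a 32KB minimum chunk, of which only 45KB is used, keeps a 64KB prefix and can hand the tail back as buddy-aligned 64KB and 128KB chunks. The Java below is a hypothetical illustration of that chunking arithmetic, not the BuddyAllocator API:
{noformat}
import java.util.ArrayList;
import java.util.List;

public class OfferSketch {
    // Sizes are in KB for readability; assumes positive, power-of-two
    // allocated/minAlloc values. Returns the buddy-aligned chunk sizes
    // that follow the used prefix and could be offered back.
    static List<Integer> reclaimableChunks(int allocated, int used, int minAlloc) {
        // Round the used prefix up to the next power of two (>= minAlloc),
        // since buddy bookkeeping cannot represent a 45KB remainder.
        int keep = Integer.highestOneBit(used);
        if (keep < used) keep <<= 1;              // 45 -> 64
        if (keep < minAlloc) keep = minAlloc;
        List<Integer> chunks = new ArrayList<>();
        int offset = keep;
        while (offset < allocated) {
            // A buddy chunk at 'offset' must be aligned to its own size,
            // so the largest candidate is the lowest set bit of the offset.
            int size = Integer.lowestOneBit(offset);
            while (offset + size > allocated) size >>= 1;
            chunks.add(size);
            offset += size;
        }
        return chunks;
    }

    public static void main(String[] args) {
        System.out.println(reclaimableChunks(256, 45, 32)); // [64, 128]
    }
}
{noformat}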
[jira] [Updated] (HIVE-10906) Value based UDAF function without orderby expression throws NPE
[ https://issues.apache.org/jira/browse/HIVE-10906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-10906: Attachment: HIVE-10906.patch > Value based UDAF function without orderby expression throws NPE > --- > > Key: HIVE-10906 > URL: https://issues.apache.org/jira/browse/HIVE-10906 > Project: Hive > Issue Type: Sub-task > Components: PTF-Windowing >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-10906.patch > > > The following query throws NPE. > {noformat} > select key, value, min(value) over (partition by key range between unbounded > preceding and current row) from small; > FAILED: NullPointerException null > 2015-06-03 13:48:09,268 ERROR [main]: ql.Driver > (SessionState.java:printError(957)) - FAILED: NullPointerException null > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.parse.WindowingSpec.validateValueBoundary(WindowingSpec.java:293) > at > org.apache.hadoop.hive.ql.parse.WindowingSpec.validateWindowFrame(WindowingSpec.java:281) > at > org.apache.hadoop.hive.ql.parse.WindowingSpec.validateAndMakeEffective(WindowingSpec.java:155) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genWindowingPlan(SemanticAnalyzer.java:11965) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:8910) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:8868) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9713) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9606) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:10079) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:327) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10090) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:208) > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:227) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:424) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:308) > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1124) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1172) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1061) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1051) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165) > at > org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376) > at > org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at org.apache.hadoop.util.RunJar.run(RunJar.java:221) > at org.apache.hadoop.util.RunJar.main(RunJar.java:136) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
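The trace bottoms out in WindowingSpec.validateValueBoundary, which is consistent with a value-based (RANGE) frame being validated while the ORDER BY specification it needs is absent. The Java below is a self-contained sketch of the kind of guard such validation needs (hypothetical types and message, not the Hive classes above and not the attached patch):
{noformat}
import java.util.List;

public class WindowFrameCheck {
    // A RANGE frame measures distance along the ORDER BY expression, so an
    // absent ORDER BY should be rejected with a clear error, not dereferenced.
    static void validateValueBoundary(List<String> orderByExprs) {
        if (orderByExprs == null || orderByExprs.isEmpty()) {
            throw new IllegalArgumentException(
                "RANGE window frame requires an ORDER BY expression");
        }
        // ... further boundary checks would follow here ...
    }

    public static void main(String[] args) {
        // Mimics the failing query: RANGE frame, no ORDER BY. Fails with a
        // descriptive message instead of a bare NullPointerException.
        validateValueBoundary(null);
    }
}
{noformat}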
[jira] [Updated] (HIVE-10907) Hive on Tez: Classcast exception in some cases with SMB joins
[ https://issues.apache.org/jira/browse/HIVE-10907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vikram Dixit K updated HIVE-10907: -- Attachment: HIVE-10907.1.patch > Hive on Tez: Classcast exception in some cases with SMB joins > - > > Key: HIVE-10907 > URL: https://issues.apache.org/jira/browse/HIVE-10907 > Project: Hive > Issue Type: Bug >Reporter: Vikram Dixit K >Assignee: Vikram Dixit K > Attachments: HIVE-10907.1.patch > > > In cases where there is a mix of map-side work and reduce-side work, we get a > ClassCastException because the code assumes homogeneity. We need to fix > this correctly; for now, this is a workaround. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
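As a self-contained illustration of the homogeneity assumption described above (the toy types below merely stand in for Hive's work descriptors; they are not the actual classes involved), an unguarded downcast over a heterogeneous list fails on the first element of the other kind:
{noformat}
import java.util.Arrays;
import java.util.List;

public class SmbCastDemo {
    static class BaseWork {}
    static class MapWork extends BaseWork {}
    static class ReduceWork extends BaseWork {}

    public static void main(String[] args) {
        List<BaseWork> works = Arrays.asList(new MapWork(), new ReduceWork());
        for (BaseWork w : works) {
            // Throws ClassCastException on the ReduceWork element: the loop
            // assumes every entry is map-side work.
            MapWork mw = (MapWork) w;
        }
    }
}
{noformat}
Guarding the cast with instanceof, or keeping map-side and reduce-side work in separate collections, avoids the exception; the attached patch's actual approach is not shown here.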
[jira] [Updated] (HIVE-9814) LLAP: JMX web-service end points for monitoring & metrics
[ https://issues.apache.org/jira/browse/HIVE-9814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-9814: --- Attachment: HIVE-9814.wip2.patch > LLAP: JMX web-service end points for monitoring & metrics > - > > Key: HIVE-9814 > URL: https://issues.apache.org/jira/browse/HIVE-9814 > Project: Hive > Issue Type: Sub-task > Components: Diagnosability >Affects Versions: llap >Reporter: Gopal V >Assignee: Gopal V > Attachments: HIVE-9814.wip1.patch, HIVE-9814.wip2.patch, > HIVE-9814.wip2.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9814) LLAP: JMX web-service end points for monitoring & metrics
[ https://issues.apache.org/jira/browse/HIVE-9814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-9814: --- Attachment: HIVE-9814.wip2.patch rebased patch > LLAP: JMX web-service end points for monitoring & metrics > - > > Key: HIVE-9814 > URL: https://issues.apache.org/jira/browse/HIVE-9814 > Project: Hive > Issue Type: Sub-task > Components: Diagnosability >Affects Versions: llap >Reporter: Gopal V >Assignee: Gopal V > Attachments: HIVE-9814.wip1.patch, HIVE-9814.wip2.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HIVE-10896) LLAP: the return of the stuck DAG
[ https://issues.apache.org/jira/browse/HIVE-10896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin resolved HIVE-10896. - Resolution: Fixed > LLAP: the return of the stuck DAG > - > > Key: HIVE-10896 > URL: https://issues.apache.org/jira/browse/HIVE-10896 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Fix For: llap > > Attachments: HIVE-10896.patch > > > Mapjoin issue again - preempted task that is loading the hashtable -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-10906) Value based UDAF function without orderby expression throws NPE
[ https://issues.apache.org/jira/browse/HIVE-10906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-10906: Summary: Value based UDAF function without orderby expression throws NPE (was: Value based UDAF function throws NPE) > Value based UDAF function without orderby expression throws NPE > --- > > Key: HIVE-10906 > URL: https://issues.apache.org/jira/browse/HIVE-10906 > Project: Hive > Issue Type: Sub-task > Components: PTF-Windowing >Reporter: Aihua Xu >Assignee: Aihua Xu > > The following query throws NPE. > {noformat} > select key, value, min(value) over (partition by key range between unbounded > preceding and current row) from small; > FAILED: NullPointerException null > 2015-06-03 13:48:09,268 ERROR [main]: ql.Driver > (SessionState.java:printError(957)) - FAILED: NullPointerException null > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.parse.WindowingSpec.validateValueBoundary(WindowingSpec.java:293) > at > org.apache.hadoop.hive.ql.parse.WindowingSpec.validateWindowFrame(WindowingSpec.java:281) > at > org.apache.hadoop.hive.ql.parse.WindowingSpec.validateAndMakeEffective(WindowingSpec.java:155) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genWindowingPlan(SemanticAnalyzer.java:11965) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:8910) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:8868) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9713) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9606) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:10079) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:327) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10090) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:208) > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:227) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:424) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:308) > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1124) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1172) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1061) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1051) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165) > at > org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376) > at > org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at org.apache.hadoop.util.RunJar.run(RunJar.java:221) > at org.apache.hadoop.util.RunJar.main(RunJar.java:136) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HIVE-10906) Value based UDAF function throws NPE
[ https://issues.apache.org/jira/browse/HIVE-10906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu reassigned HIVE-10906: --- Assignee: Aihua Xu > Value based UDAF function throws NPE > > > Key: HIVE-10906 > URL: https://issues.apache.org/jira/browse/HIVE-10906 > Project: Hive > Issue Type: Sub-task > Components: PTF-Windowing >Reporter: Aihua Xu >Assignee: Aihua Xu > > The following query throws NPE. > {noformat} > select key, value, min(value) over (partition by key range between unbounded > preceding and current row) from small; > FAILED: NullPointerException null > 2015-06-03 13:48:09,268 ERROR [main]: ql.Driver > (SessionState.java:printError(957)) - FAILED: NullPointerException null > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.parse.WindowingSpec.validateValueBoundary(WindowingSpec.java:293) > at > org.apache.hadoop.hive.ql.parse.WindowingSpec.validateWindowFrame(WindowingSpec.java:281) > at > org.apache.hadoop.hive.ql.parse.WindowingSpec.validateAndMakeEffective(WindowingSpec.java:155) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genWindowingPlan(SemanticAnalyzer.java:11965) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:8910) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:8868) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9713) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9606) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:10079) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:327) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10090) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:208) > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:227) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:424) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:308) > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1124) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1172) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1061) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1051) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165) > at > org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376) > at > org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at org.apache.hadoop.util.RunJar.run(RunJar.java:221) > at org.apache.hadoop.util.RunJar.main(RunJar.java:136) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10898) CAST AS BIGINT produces wrong value
[ https://issues.apache.org/jira/browse/HIVE-10898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571435#comment-14571435 ] Andrey Kurochkin commented on HIVE-10898: - I have a consistent repro for this: API calls, spark-sql shell. How can I investigate it further? > CAST AS BIGINT produces wrong value > > > Key: HIVE-10898 > URL: https://issues.apache.org/jira/browse/HIVE-10898 > Project: Hive > Issue Type: Bug >Affects Versions: 0.13.1 > Environment: Hive running on Spark in Standalone mode >Reporter: Andrey Kurochkin >Priority: Critical > > Example Query: > SELECT CAST("775983671874188101" as BIGINT) > Produces: 775983671874188160L > Note: last 2 digits. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HIVE-10551) OOM when running query_89 with vectorization on & hybridgrace=false
[ https://issues.apache.org/jira/browse/HIVE-10551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vikram Dixit K reassigned HIVE-10551: - Assignee: Vikram Dixit K > OOM when running query_89 with vectorization on & hybridgrace=false > --- > > Key: HIVE-10551 > URL: https://issues.apache.org/jira/browse/HIVE-10551 > Project: Hive > Issue Type: Bug >Reporter: Rajesh Balamohan >Assignee: Vikram Dixit K > Attachments: HIVE-10551-explain-plan.log, hive_10551.png > > > - TPC-DS Query_89 @ 10 TB scale > - Trunk version of Hive + Tez 0.7.0-SNAPSHOT > - Additional settings ( "hive.vectorized.groupby.maxentries=1024, > tez.runtime.io.sort.factor=200, tez.runtime.io.sort.mb=1800, > hive.tez.container.size=4096, hive.mapjoin.hybridgrace.hashtable=false" ) > Will attach the profiler snapshot asap. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10551) OOM when running query_89 with vectorization on & hybridgrace=false
[ https://issues.apache.org/jira/browse/HIVE-10551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571430#comment-14571430 ] Vikram Dixit K commented on HIVE-10551: --- [~rajesh.balamohan] Is this OOM happening in the mapper? Is this when re-using a container that was used in the map phase? What is the Xmx value used for the container? Can you provide the heap dump for this, please? From the looks of it, the group-by operator probably decided to flush its results, and that could have resulted in the ReduceSinkOperator requesting the sort buffers, which could have pushed this over the edge. Can you also post the stack trace here for reference, please? > OOM when running query_89 with vectorization on & hybridgrace=false > --- > > Key: HIVE-10551 > URL: https://issues.apache.org/jira/browse/HIVE-10551 > Project: Hive > Issue Type: Bug >Reporter: Rajesh Balamohan > Attachments: HIVE-10551-explain-plan.log, hive_10551.png > > > - TPC-DS Query_89 @ 10 TB scale > - Trunk version of Hive + Tez 0.7.0-SNAPSHOT > - Additional settings ( "hive.vectorized.groupby.maxentries=1024, > tez.runtime.io.sort.factor=200, tez.runtime.io.sort.mb=1800, > hive.tez.container.size=4096, hive.mapjoin.hybridgrace.hashtable=false" ) > Will attach the profiler snapshot asap. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
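A rough back-of-envelope on the quoted settings (the ~80% container-to-Xmx ratio is an assumption for illustration, not from the report):
{noformat}
hive.tez.container.size                    = 4096 MB
assumed Xmx (~80% of container)            ~ 3276 MB
tez.runtime.io.sort.mb                     = 1800 MB
left for hashtables, group-by, headroom    ~ 1476 MB
{noformat}
With hive.mapjoin.hybridgrace.hashtable=false the map-join hashtable cannot spill, so a group-by flush that triggers sort-buffer allocation on top of a large resident hashtable could plausibly exhaust that remainder, matching the hypothesis above.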
[jira] [Commented] (HIVE-4239) Remove lock on compilation stage
[ https://issues.apache.org/jira/browse/HIVE-4239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571402#comment-14571402 ] Sergey Shelukhin commented on HIVE-4239: [~thejas] ping? > Remove lock on compilation stage > > > Key: HIVE-4239 > URL: https://issues.apache.org/jira/browse/HIVE-4239 > Project: Hive > Issue Type: Bug > Components: HiveServer2, Query Processor >Reporter: Carl Steinbach >Assignee: Sergey Shelukhin > Attachments: HIVE-4239.01.patch, HIVE-4239.02.patch, > HIVE-4239.03.patch, HIVE-4239.04.patch, HIVE-4239.05.patch, > HIVE-4239.06.patch, HIVE-4239.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10896) LLAP: the return of the stuck DAG
[ https://issues.apache.org/jira/browse/HIVE-10896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571404#comment-14571404 ] Sergey Shelukhin commented on HIVE-10896: - I'm just going to commit this, feel free to take a look later :) > LLAP: the return of the stuck DAG > - > > Key: HIVE-10896 > URL: https://issues.apache.org/jira/browse/HIVE-10896 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Fix For: llap > > Attachments: HIVE-10896.patch > > > Mapjoin issue again - preempted task that is loading the hashtable -- This message was sent by Atlassian JIRA (v6.3.4#6332)