[ https://issues.apache.org/jira/browse/HIVE-9371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14287934#comment-14287934 ]
Hive QA commented on HIVE-9371:
-------------------------------

{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12693767/HIVE-9371.1.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 7348 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_types
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udaf_histogram_numeric
{noformat}

Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2477/testReport
Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2477/console
Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2477/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12693767 - PreCommit-HIVE-TRUNK-Build

> Execution error for Parquet table and GROUP BY involving CHAR data type
> -----------------------------------------------------------------------
>
>                 Key: HIVE-9371
>                 URL: https://issues.apache.org/jira/browse/HIVE-9371
>             Project: Hive
>          Issue Type: Bug
>          Components: File Formats, Query Processor
>            Reporter: Matt McCline
>            Assignee: Ferdinand Xu
>            Priority: Critical
>         Attachments: HIVE-9371.1.patch, HIVE-9371.patch, HIVE-9371.patch
>
> Query fails involving the PARQUET table format, the CHAR data type, and GROUP BY.
> It probably fails for VARCHAR as well.
> {noformat}
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.hive.serde2.io.HiveCharWritable
> 	at org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:814)
> 	at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
> 	at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
> 	at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
> 	at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:95)
> 	at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:157)
> 	at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:493)
> 	... 10 more
> Caused by: java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.hive.serde2.io.HiveCharWritable
> 	at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableHiveCharObjectInspector.copyObject(WritableHiveCharObjectInspector.java:104)
> 	at org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:305)
> 	at org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.deepCopyElements(KeyWrapperFactory.java:150)
> 	at org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.deepCopyElements(KeyWrapperFactory.java:142)
> 	at org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.copyKey(KeyWrapperFactory.java:119)
> 	at org.apache.hadoop.hive.ql.exec.GroupByOperator.processHashAggr(GroupByOperator.java:827)
> 	at org.apache.hadoop.hive.ql.exec.GroupByOperator.processKey(GroupByOperator.java:739)
> 	at org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:809)
> 	...
> 16 more
> {noformat}
> Here is a q file:
> {noformat}
> SET hive.vectorized.execution.enabled=false;
>
> drop table char_2;
>
> create table char_2 (
>   key char(10),
>   value char(20)
> ) stored as parquet;
>
> insert overwrite table char_2 select * from src;
>
> select value, sum(cast(key as int)), count(*) numrows
> from src
> group by value
> order by value asc
> limit 5;
>
> explain select value, sum(cast(key as int)), count(*) numrows
> from char_2
> group by value
> order by value asc
> limit 5;
>
> -- should match the query from src
> select value, sum(cast(key as int)), count(*) numrows
> from char_2
> group by value
> order by value asc
> limit 5;
>
> select value, sum(cast(key as int)), count(*) numrows
> from src
> group by value
> order by value desc
> limit 5;
>
> explain select value, sum(cast(key as int)), count(*) numrows
> from char_2
> group by value
> order by value desc
> limit 5;
>
> -- should match the query from src
> select value, sum(cast(key as int)), count(*) numrows
> from char_2
> group by value
> order by value desc
> limit 5;
>
> drop table char_2;
> {noformat}

-- This message was sent by Atlassian JIRA
(v6.3.4#6332)
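Editor's note on the trace above: the exception is an ordinary Java ClassCastException. On the Parquet read path the GROUP BY key reaches WritableHiveCharObjectInspector.copyObject as an org.apache.hadoop.io.Text, while the inspector unconditionally casts it to HiveCharWritable. A minimal sketch of that failure mode, using hypothetical stand-in classes rather than the real Hadoop/Hive types (which are not reproduced here):

```java
// Hypothetical stand-ins for org.apache.hadoop.io.Text and
// org.apache.hadoop.hive.serde2.io.HiveCharWritable; the real classes
// live in hadoop-common and hive-serde.
class Text {
    final String value;
    Text(String value) { this.value = value; }
}

class HiveCharWritable {
    final String value;
    HiveCharWritable(String value) { this.value = value; }
}

public class CastDemo {
    // Mirrors the shape of WritableHiveCharObjectInspector.copyObject:
    // it casts whatever object the upstream reader produced, so a Text
    // arriving from the Parquet path triggers a ClassCastException.
    static Object copyObject(Object o) {
        HiveCharWritable hc = (HiveCharWritable) o; // throws if o is a Text
        return new HiveCharWritable(hc.value);
    }

    public static void main(String[] args) {
        copyObject(new HiveCharWritable("ok"));   // expected CHAR value: succeeds
        try {
            copyObject(new Text("from parquet")); // the buggy path in the trace
        } catch (ClassCastException e) {
            System.out.println("caught: " + e);
        }
    }
}
```

This only illustrates why the hash-aggregation key copy blows up; the actual fix belongs on the Hive side, where the Parquet reader or inspector must agree on producing a proper HiveCharWritable for CHAR columns.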