[ https://issues.apache.org/jira/browse/HIVE-9371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14285089#comment-14285089 ]
Hive QA commented on HIVE-9371:
-------------------------------

{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12693481/HIVE-9371.patch

Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2453/testReport
Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2453/console
Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2453/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-2453/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
++ awk '{print $2}'
++ egrep -v '^X|^Performing status on external'
++ svn status --no-ignore
svn: Error converting entry in directory 'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8
svn: Can't convert string from native encoding to 'UTF-8':
svn: artof?\228?\182?\180?\227?\132?\169?\233?\188?\190?\228?\184?\132?\231?\139?\156?\227?\128?\135war.txt"
+ rm -rf
+ svn update
svn: Error converting entry in directory 'hcatalog/src/test/e2e/templeton/inpdir' to UTF-8
svn: Can't convert string from native encoding to 'UTF-8':
svn: artof?\228?\182?\180?\227?\132?\169?\233?\188?\190?\228?\184?\132?\231?\139?\156?\227?\128?\135war.txt"
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12693481 - PreCommit-HIVE-TRUNK-Build

> Execution error for Parquet table and GROUP BY involving CHAR data type
> -----------------------------------------------------------------------
>
>                 Key: HIVE-9371
>                 URL: https://issues.apache.org/jira/browse/HIVE-9371
>             Project: Hive
>          Issue Type: Bug
>          Components: File Formats, Query Processor
>            Reporter: Matt McCline
>            Assignee: Ferdinand Xu
>            Priority: Critical
>         Attachments: HIVE-9371.patch
>
> Query fails involving the PARQUET table format, the CHAR data type, and GROUP BY.
> Probably fails for VARCHAR, too.
> {noformat}
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.hive.serde2.io.HiveCharWritable
>     at org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:814)
>     at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
>     at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
>     at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
>     at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:95)
>     at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:157)
>     at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:493)
>     ... 10 more
> Caused by: java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.hive.serde2.io.HiveCharWritable
>     at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableHiveCharObjectInspector.copyObject(WritableHiveCharObjectInspector.java:104)
>     at org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:305)
>     at org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.deepCopyElements(KeyWrapperFactory.java:150)
>     at org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.deepCopyElements(KeyWrapperFactory.java:142)
>     at org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.copyKey(KeyWrapperFactory.java:119)
>     at org.apache.hadoop.hive.ql.exec.GroupByOperator.processHashAggr(GroupByOperator.java:827)
>     at org.apache.hadoop.hive.ql.exec.GroupByOperator.processKey(GroupByOperator.java:739)
>     at org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:809)
>     ... 16 more
> {noformat}
>
> Here is a q file:
>
> {noformat}
> SET hive.vectorized.execution.enabled=false;
>
> drop table char_2;
>
> create table char_2 (
>   key char(10),
>   value char(20)
> ) stored as parquet;
>
> insert overwrite table char_2 select * from src;
>
> select value, sum(cast(key as int)), count(*) numrows
> from src
> group by value
> order by value asc
> limit 5;
>
> explain select value, sum(cast(key as int)), count(*) numrows
> from char_2
> group by value
> order by value asc
> limit 5;
>
> -- should match the query from src
> select value, sum(cast(key as int)), count(*) numrows
> from char_2
> group by value
> order by value asc
> limit 5;
>
> select value, sum(cast(key as int)), count(*) numrows
> from src
> group by value
> order by value desc
> limit 5;
>
> explain select value, sum(cast(key as int)), count(*) numrows
> from char_2
> group by value
> order by value desc
> limit 5;
>
> -- should match the query from src
> select value, sum(cast(key as int)), count(*) numrows
> from char_2
> group by value
> order by value desc
> limit 5;
>
> drop table char_2;
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
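For readers skimming the trace: the inner {{Caused by}} says the GROUP BY key-copy path received an {{org.apache.hadoop.io.Text}} where the char object inspector expected a {{HiveCharWritable}}, and the unchecked downcast in {{WritableHiveCharObjectInspector.copyObject()}} threw. A minimal, self-contained sketch of that failure shape, using stand-in classes rather than the real Hadoop/Hive types:

```java
// Stand-in types for illustration only; the real classes live in
// hadoop-common and hive-serde and carry much more machinery.
class Text {
    final String value;
    Text(String value) { this.value = value; }
}

class HiveCharWritable extends Text {
    HiveCharWritable(String value) { super(value); }
}

public class CastMismatchSketch {
    // Mirrors the shape of WritableHiveCharObjectInspector.copyObject():
    // it downcasts its argument, assuming a HiveCharWritable arrives.
    static Text copyObject(Object o) {
        return new HiveCharWritable(((HiveCharWritable) o).value);
    }

    public static void main(String[] args) {
        // If the read path supplies a plain Text for a char column,
        // the downcast throws, matching the stack trace above.
        try {
            copyObject(new Text("val_0"));
            System.out.println("unexpected: no exception");
        } catch (ClassCastException e) {
            System.out.println("ClassCastException: Text is not a HiveCharWritable");
        }
    }
}
```

This is a sketch of the failure pattern only; the actual fix belongs in the Parquet read path or the object inspector, not in the sketch.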