[
https://issues.apache.org/jira/browse/SPARK-17198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15444755#comment-15444755
]
tuming commented on SPARK-17198:
Yes, it works fine on Spark 2.0; I am using Spark 1.5.1.
I can get the column types with the "desc" command. They differ between Spark 2.0
and Spark 1.5.1.
Spark2.0:
spark-sql> desc orc_test;
col1    string    NULL
col2    string    NULL
spark-sql>
Spark1.5.1:
spark-sql> desc orc_test;
col1    string      NULL
col2    char(10)    NULL
I have looked into the source code and found that Spark 1.5.1 invokes the native
Hive command to execute the CREATE TABLE SQL (except in CTAS cases).
I have no idea whether this behavior comes from Hive or from Spark; the SQL
parser in Spark 2.0 is much different from the one in 1.5.1.
> ORC fixed char literal filter does not work
> ---
>
> Key: SPARK-17198
> URL: https://issues.apache.org/jira/browse/SPARK-17198
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 1.5.1
> Reporter: tuming
>
> I get a wrong result when I run the following query in Spark SQL:
> select * from orc_table where char_col = '5LZS';
> Table orc_table is an ORC-format table.
> Column char_col is defined as char(6).
> The Hive record reader returns a char(6) string to Spark, but Spark has no
> fixed-width char type: all fixed char attributes are converted to String by
> default. Meanwhile, the constant literal is parsed as a string Literal, so the
> equality comparison never returns true.
> For instance: '5LZS' == '5LZS  '.
> However, I get the correct result in Hive using the same data and SQL string,
> because Hive appends spaces to such constant literals. Please refer to:
> https://issues.apache.org/jira/browse/HIVE-11312
> I found no such patch for Spark.
>
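The mismatch described in the report can be sketched outside Spark with plain strings. This is a hypothetical illustration (the names `pad_char_literal` and the values are mine, not from the Spark or Hive source); it only shows why an unpadded literal can never equal a space-padded CHAR value, and what the HIVE-11312-style fix of padding the literal achieves:

```python
# A CHAR(6) value comes back from the ORC/Hive reader space-padded
# to its declared width, while the SQL literal stays unpadded.
char_col_value = "5LZS  "  # char(6) value "5LZS", padded to width 6
literal = "5LZS"           # parsed as a plain string literal

# Naive string equality on the filter never matches:
assert char_col_value != literal

# The HIVE-11312 approach: pad the constant literal to the column's
# declared width before comparing (hypothetical helper).
def pad_char_literal(lit: str, width: int) -> str:
    return lit.ljust(width)

# With the padded literal, the comparison succeeds:
assert char_col_value == pad_char_literal(literal, 6)
```

An alternative fix (which is effectively what later Spark versions do by treating CHAR as String end to end) is to trim trailing spaces from the column value instead of padding the literal; either way, both sides of the comparison must use the same convention.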
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)