[ 
https://issues.apache.org/jira/browse/SPARK-14066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15206280#comment-15206280
 ] 

KaiXinXIaoLei edited comment on SPARK-14066 at 3/23/16 1:19 AM:
----------------------------------------------------------------

In `org.apache.spark.sql.catalyst.analysis.HiveTypeCoercion.FunctionArgumentConversion`, I found the following cases in the value `findTightestCommonTypeOfTwo`:
{code}
    case (t1: IntegralType, t2: DecimalType) if t2.isWiderThan(t1) =>
      Some(t2)
    case (t1: DecimalType, t2: IntegralType) if t1.isWiderThan(t2) =>
      Some(t1)
{code}

In `array(0,0.2,0.3,1)`, the literal `0` is an int, which is treated as `DecimalType(10, 0)`, while the type of `0.2` is `DecimalType(1, 1)`. Since `DecimalType(1, 1)` has no integer digits, `t2.isWiderThan(t1)` is false, so neither case applies and the element types remain [int, decimal(1,1), decimal(1,1), int]. The query therefore fails.
 
So I think the behavior of `findTightestCommonTypeOfTwo` is not reasonable here. Thanks.
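A minimal sketch of why neither branch fires (this models the wideness rule in plain Scala rather than calling the actual Catalyst `DecimalType.isWiderThan`, so the class and field names below are illustrative assumptions):

```scala
// Simplified model of decimal "wideness": a decimal can hold every value of
// another decimal only if it has at least as many integer digits
// (precision - scale) AND at least as many fractional digits (scale).
case class Dec(precision: Int, scale: Int) {
  def isWiderThan(other: Dec): Boolean =
    (precision - scale) >= (other.precision - other.scale) &&
      scale >= other.scale
}

object WidthDemo extends App {
  val intAsDecimal = Dec(10, 0) // an int literal like 0, viewed as DecimalType(10, 0)
  val pointTwo     = Dec(1, 1)  // the literal 0.2

  // decimal(1,1) has 0 integer digits, so it cannot hold decimal(10,0) ...
  println(pointTwo.isWiderThan(intAsDecimal))  // false
  // ... and decimal(10,0) has 0 fractional digits, so it cannot hold decimal(1,1).
  println(intAsDecimal.isWiderThan(pointTwo))  // false
}
```

Because the check fails in both directions, neither `case` in the snippet above matches, no common type is chosen for the pair, and `array(...)` is left with mismatched element types.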


> Set "spark.sql.dialect=sql", there is a problem in running query "select 
> percentile(d,array(0,0.2,0.3,1))  as  a from t;"
> -------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-14066
>                 URL: https://issues.apache.org/jira/browse/SPARK-14066
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.5.1
>            Reporter: KaiXinXIaoLei
>
> In spark 1.5.1, I run "sh bin/spark-sql  --conf spark.sql.dialect=sql", and 
> run query "select percentile(d,array(0,0.2,0.3,1))  as  a from t". There is a 
> problem as follows.
> {code}
> spark-sql> select percentile(d,array(0,0.2,0.3,1))  as  a from t;
> 16/03/22 17:25:15 INFO HiveMetaStore: 0: get_table : db=default tbl=t
> 16/03/22 17:25:15 INFO audit: ugi=root  ip=unknown-ip-addr      cmd=get_table 
> : db=default tbl=t
> 16/03/22 17:25:16 ERROR SparkSQLDriver: Failed in [select 
> percentile(d,array(0,0.2,0.3,1))  as  a from t]
> org.apache.spark.sql.AnalysisException: cannot resolve 'array(0,0.2,0.3,1)' 
> due to data type mismatch: input to function array should all be the same 
> type, but it's [int, decimal(1,1), decimal(1,1), int];
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
