[ https://issues.apache.org/jira/browse/SPARK-5203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14372847#comment-14372847 ]

Cheng Lian commented on SPARK-5203:
-----------------------------------

Good point! [This line|https://github.com/apache/spark/blob/e5d2c37c68ac00a57c2542e62d1c5b4ca267c89e/sql/catalyst/src/main/scala/org/apache/spark/sql/types/dataTypes.scala#L926] is indeed a bug. We should apply the same rule as [here|https://github.com/apache/spark/pull/4004/files#diff-d33f6b266aab79a1708e888dc1a1caf3R339].
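
For context, the analyzer infers {{a*2}} as {{decimal(21,1)}} (multiplication uses precision p1+p2+1 = 10+10+1 and scale s1+s2 = 1+0, as seen in the plan below), so the union has to reconcile {{decimal(10,1)}} with {{decimal(21,1)}}. Roughly, the rule in that PR widens two fixed decimals by keeping the larger integral part and the larger scale. A minimal sketch of the idea, assuming the current {{DecimalType.Fixed}} / {{DecimalType.Unlimited}} API (not the exact code in the PR):

{code:scala}
import org.apache.spark.sql.types._

// Sketch: widen two decimal types for a set operation such as UNION.
// Keep enough integral digits (precision - scale) and enough fractional
// digits (scale) to represent either operand without loss.
def widerDecimal(d1: DecimalType, d2: DecimalType): DecimalType = (d1, d2) match {
  case (DecimalType.Fixed(p1, s1), DecimalType.Fixed(p2, s2)) =>
    val scale = math.max(s1, s2)
    val range = math.max(p1 - s1, p2 - s2) // integral digits
    DecimalType(range + scale, scale)
  case _ =>
    // If either side carries no precision info, fall back to an unlimited decimal.
    DecimalType.Unlimited
}
{code}

For the query in this ticket, that would widen {{decimal(10,1)}} and {{decimal(21,1)}} to {{decimal(21,1)}}, so both branches of the union can be cast to a common type instead of leaving the plan unresolved.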

> Union with different decimal types reports an error
> ----------------------------------------------------
>
>                 Key: SPARK-5203
>                 URL: https://issues.apache.org/jira/browse/SPARK-5203
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>            Reporter: guowei
>
> A test case like this:
> {code:sql}
> create table test (a decimal(10,1));
> select a from test union all select a*2 from test;
> {code}
> Exception thrown:
> {noformat}
> 15/01/12 16:28:54 ERROR SparkSQLDriver: Failed in [select a from test union all select a*2 from test]
> org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Unresolved attributes: *, tree:
> 'Project [*]
>  'Subquery _u1
>   'Union 
>    Project [a#1]
>     MetastoreRelation default, test, None
>    Project [CAST((CAST(a#2, DecimalType()) * CAST(CAST(2, DecimalType(10,0)), DecimalType())), DecimalType(21,1)) AS _c0#0]
>     MetastoreRelation default, test, None
>       at org.apache.spark.sql.catalyst.analysis.Analyzer$CheckResolution$$anonfun$1.applyOrElse(Analyzer.scala:85)
>       at org.apache.spark.sql.catalyst.analysis.Analyzer$CheckResolution$$anonfun$1.applyOrElse(Analyzer.scala:83)
>       at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:144)
>       at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:135)
>       at org.apache.spark.sql.catalyst.analysis.Analyzer$CheckResolution$.apply(Analyzer.scala:83)
>       at org.apache.spark.sql.catalyst.analysis.Analyzer$CheckResolution$.apply(Analyzer.scala:81)
>       at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1$$anonfun$apply$2.apply(RuleExecutor.scala:61)
>       at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1$$anonfun$apply$2.apply(RuleExecutor.scala:59)
>       at scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:51)
>       at scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:60)
>       at scala.collection.mutable.WrappedArray.foldLeft(WrappedArray.scala:34)
>       at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1.apply(RuleExecutor.scala:59)
>       at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1.apply(RuleExecutor.scala:51)
>       at scala.collection.immutable.List.foreach(List.scala:318)
>       at org.apache.spark.sql.catalyst.rules.RuleExecutor.apply(RuleExecutor.scala:51)
>       at org.apache.spark.sql.SQLContext$QueryExecution.analyzed$lzycompute(SQLContext.scala:410)
>       at org.apache.spark.sql.SQLContext$QueryExecution.analyzed(SQLContext.scala:410)
>       at org.apache.spark.sql.SQLContext$QueryExecution.withCachedData$lzycompute(SQLContext.scala:411)
>       at org.apache.spark.sql.SQLContext$QueryExecution.withCachedData(SQLContext.scala:411)
>       at org.apache.spark.sql.SQLContext$QueryExecution.optimizedPlan$lzycompute(SQLContext.scala:412)
>       at org.apache.spark.sql.SQLContext$QueryExecution.optimizedPlan(SQLContext.scala:412)
>       at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan$lzycompute(SQLContext.scala:417)
>       at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan(SQLContext.scala:415)
>       at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan$lzycompute(SQLContext.scala:421)
>       at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan(SQLContext.scala:421)
>       at org.apache.spark.sql.hive.HiveContext$QueryExecution.stringResult(HiveContext.scala:369)
>       at org.apache.spark.sql.hive.thriftserver.AbstractSparkSQLDriver.run(AbstractSparkSQLDriver.scala:58)
>       at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:275)
>       at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:423)
>       at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:211)
>       at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
> {noformat}


