[ https://issues.apache.org/jira/browse/SPARK-35795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

HonglunChen updated SPARK-35795:
--------------------------------
    Issue Type: Bug  (was: Improvement)

> Cannot resolve column when there is an unrecognized hint in a subquery
> ----------------------------------------------------------------------
>
>                 Key: SPARK-35795
>                 URL: https://issues.apache.org/jira/browse/SPARK-35795
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 3.0.0
>            Reporter: HonglunChen
>            Priority: Major
>
> {code:java}
> select id, name
> from (select /*+ BROADCAS(t1) */ t1.id, t2.name
>       from test_hint t1 inner join tmp_db.test_hint t2 on (t1.id = t2.id));
> {code}
>  
> Running the above SQL produces the following error:
> {code:java}
> Error: Error operating EXECUTE_STATEMENT: org.apache.spark.sql.AnalysisException: cannot resolve '`id`' given input columns: [__auto_generated_subquery_name.id, __auto_generated_subquery_name.name]; line 1 pos 7;
> 'Project ['id, 'name]
> +- SubqueryAlias __auto_generated_subquery_name
>    +- Project [id#64, name#67]
>       +- Join Inner, (id#64 = id#66)
>          :- SubqueryAlias t1
>          :  +- SubqueryAlias spark_catalog.tmp_db.test_hint
>          :     +- HiveTableRelation [`tmp_db`.`test_hint`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, Data Cols: [id#64, name#65], Partition Cols: []]
>          +- SubqueryAlias t2
>             +- SubqueryAlias spark_catalog.tmp_db.test_hint
>                +- HiveTableRelation [`tmp_db`.`test_hint`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, Data Cols: [id#66, name#67], Partition Cols: []]
>         at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
>         at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$$nestedInanonfun$checkAnalysis$1$2.applyOrElse(CheckAnalysis.scala:155)
>         at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$$nestedInanonfun$checkAnalysis$1$2.applyOrElse(CheckAnalysis.scala:152)
>         at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformUp$2(TreeNode.scala:341)
>         at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:73)
>         at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:341)
>         at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$transformExpressionsUp$1(QueryPlan.scala:104)
>         at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$1(QueryPlan.scala:116)
>         at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:73)
>         at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpression$1(QueryPlan.scala:116)
>         at org.apache.spark.sql.catalyst.plans.QueryPlan.recursiveTransform$1(QueryPlan.scala:127)
>         at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$3(QueryPlan.scala:132)
>         at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
>         at scala.collection.immutable.List.foreach(List.scala:392)
> {code}
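> 
> For reference, below is a minimal, self-contained reproduction sketch for spark-shell. It is hypothetical: it swaps the Hive tables `tmp_db`.`test_hint` from the report for local temporary views, but keeps the same query shape with the intentionally misspelled BROADCAS hint inside the subquery, which should hit the same analysis error.
> {code:scala}
> import org.apache.spark.sql.SparkSession
> 
> val spark = SparkSession.builder().appName("SPARK-35795-repro").getOrCreate()
> import spark.implicits._
> 
> // Stand-in for tmp_db.test_hint from the original report.
> Seq((1, "a"), (2, "b")).toDF("id", "name").createOrReplaceTempView("test_hint")
> 
> // The hint name BROADCAS is intentionally misspelled, i.e. unrecognized.
> spark.sql(
>   """select id, name from (
>     |  select /*+ BROADCAS(t1) */ t1.id, t2.name
>     |  from test_hint t1 inner join test_hint t2 on (t1.id = t2.id)
>     |)""".stripMargin).show()
> {code}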



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
