[ https://issues.apache.org/jira/browse/SPARK-34252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17272294#comment-17272294 ]
Terry Kim commented on SPARK-34252:
-----------------------------------

I am working on the fix.

> Subquery in aggregate's grouping expression fails the analysis check
> --------------------------------------------------------------------
>
>                 Key: SPARK-34252
>                 URL: https://issues.apache.org/jira/browse/SPARK-34252
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 3.1.0
>            Reporter: Terry Kim
>            Priority: Major
>
> To repro:
> {code:java}
> sql("create temporary view ta(a, b) as select 1, 2")
> sql("create temporary view tc(c, d) as select 1, 2")
> sql("select a, (select sum(d) from tc where a = c) sum_d from ta l1 group by 1, 2").show
> {code}
> fails with:
> {code:java}
> This method should not be called in the analyzer
> java.lang.RuntimeException: This method should not be called in the analyzer
>   at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.assertNotAnalysisRule(AnalysisHelper.scala:159)
>   at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.assertNotAnalysisRule$(AnalysisHelper.scala:155)
>   at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.assertNotAnalysisRule(LogicalPlan.scala:29)
>   at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformUp(AnalysisHelper.scala:179)
>   at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformUp$(AnalysisHelper.scala:178)
>   at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformUp(LogicalPlan.scala:29)
>   at org.apache.spark.sql.catalyst.analysis.EliminateView$.apply(view.scala:56)
>   at org.apache.spark.sql.catalyst.plans.logical.View.doCanonicalize(basicLogicalOperators.scala:485)
>   at org.apache.spark.sql.catalyst.plans.logical.View.doCanonicalize(basicLogicalOperators.scala:458)
>   at org.apache.spark.sql.catalyst.plans.QueryPlan.canonicalized$lzycompute(QueryPlan.scala:373)
>   at org.apache.spark.sql.catalyst.plans.QueryPlan.canonicalized(QueryPlan.scala:372)
>   at org.apache.spark.sql.catalyst.plans.logical.SubqueryAlias.doCanonicalize(basicLogicalOperators.scala:953)
>   at org.apache.spark.sql.catalyst.plans.logical.SubqueryAlias.doCanonicalize(basicLogicalOperators.scala:936)
>   at org.apache.spark.sql.catalyst.plans.QueryPlan.canonicalized$lzycompute(QueryPlan.scala:373)
>   at org.apache.spark.sql.catalyst.plans.QueryPlan.canonicalized(QueryPlan.scala:372)
>   at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$doCanonicalize$1(QueryPlan.scala:387)
>   at scala.collection.immutable.List.map(List.scala:293)
>   ...
> {code}
> This works fine in Spark 3.0; the issue appears to have been introduced by https://github.com/apache/spark/pull/30567, which added View.doCanonicalize().

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
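The stack trace shows the mechanism: `canonicalized` is a lazy val (`canonicalized$lzycompute`), and forcing it while an analyzer rule is running reaches `View.doCanonicalize()`, which calls `EliminateView` via `transformUp` and trips `AnalysisHelper`'s guard against plan transforms inside the analyzer. A minimal toy sketch of that interaction in plain Scala (none of these classes are Spark's actual implementation; the names merely mirror the frames above for illustration):

```scala
// Toy model of the failure mode: lazy canonicalization forcing a
// transform while an "analyzer" flag is set. Not Spark code.
object CanonicalizationToy {
  // Stands in for AnalysisHelper's "currently inside an analyzer rule" flag.
  private val inAnalyzer = new ThreadLocal[Boolean] {
    override def initialValue(): Boolean = false
  }

  trait Plan {
    def doCanonicalize(): Plan
    // Mirrors QueryPlan.canonicalized$lzycompute: computed on first access,
    // which may happen at an arbitrary point, e.g. during an analysis check.
    lazy val canonicalized: Plan = doCanonicalize()
    // Mirrors AnalysisHelper.transformUp's assertNotAnalysisRule guard.
    def transformUp(f: Plan => Plan): Plan = {
      if (inAnalyzer.get)
        throw new RuntimeException("This method should not be called in the analyzer")
      f(this)
    }
  }

  case object Leaf extends Plan {
    def doCanonicalize(): Plan = this // nothing to rewrite
  }

  // Mirrors View.doCanonicalize, which runs a rewrite rule via transformUp.
  case class View(child: Plan) extends Plan {
    def doCanonicalize(): Plan = transformUp(identity)
  }

  // Mirrors an analysis-phase check that compares plans semantically and
  // thereby forces canonicalization (as grouping-expression checks can).
  def runAnalyzerCheck(p: Plan): Unit = {
    inAnalyzer.set(true)
    try { p.canonicalized; () } finally inAnalyzer.set(false)
  }

  def main(args: Array[String]): Unit = {
    runAnalyzerCheck(Leaf) // fine: canonicalization needs no transform
    try runAnalyzerCheck(View(Leaf))
    catch { case e: RuntimeException => println(e.getMessage) }
  }
}
```

Under this toy model, the plain plan canonicalizes harmlessly, while the `View` wrapper reproduces the "should not be called in the analyzer" failure, matching the shape of the trace above.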