GitHub user xwu0226 opened a pull request:

    https://github.com/apache/spark/pull/12974

    [SPARK-14495][SQL] Fix resolution failure of HAVING clause with DISTINCT 
aggregate function

    #### Symptom:
    In the latest **branch 1.6**, when a `DISTINCT` aggregate function is 
used in the `HAVING` clause, the Analyzer throws an `AnalysisException` with a 
message like the following:
    ```
    resolved attribute(s) gid#558,id#559 missing from date#554,id#555 in 
operator !Expand [List(date#554, null, 0, if ((gid#558 = 1)) id#559 else 
null),List(date#554, id#555, 1, null)], [date#554,id#561,gid#560,if ((gid = 1)) 
id else null#562];
    ```
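    A minimal way to reproduce the failure in the 1.6 `spark-shell` (where 
`sqlContext` is predefined; the table and data below are illustrative, chosen 
to mirror the attributes in the error above):
    ```scala
    import sqlContext.implicits._

    // Any table with a grouping column and a distinct-counted column will do.
    val df = Seq(("2016-01-01", 1), ("2016-01-01", 2), ("2016-01-02", 1))
      .toDF("date", "id")
    df.registerTempTable("t")

    // Fails during analysis with "resolved attribute(s) gid#... missing from ...":
    sqlContext.sql(
      """SELECT date, COUNT(DISTINCT id)
        |FROM t
        |GROUP BY date
        |HAVING COUNT(DISTINCT id) > 0""".stripMargin).show()
    ```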
    #### Root cause:
    The problem is that the distinct aggregate in the `HAVING` condition is 
rewritten by the rule `DistinctAggregationRewriter` twice, which corrupts the 
resulting `Expand` operator.
    
    In the `ResolveAggregateFunctions` rule, when resolving 
```Filter(havingCondition, _: Aggregate)```, the `havingCondition` is resolved 
as an `Aggregate` in a nested run of the analyzer (by invoking 
`RuleExecutor.execute`). At this nested level of analysis, 
`DistinctAggregationRewriter` rewrites the distinct aggregate into an expanded 
two-layer aggregation, where the `aggregateExpressions` of the final 
`Aggregate` contain the resolved `gid` and the aggregate expression attributes 
(in the above case, `gid#558` and `id#559`).
    
    After the nested analyzer run completes, the resulting 
`aggregateExpressions` of the `havingCondition` are pushed down into the 
underlying `Aggregate` operator, and `DistinctAggregationRewriter` is executed 
a second time. The `projections` field of the new `Expand` operator is then 
populated from the already-rewritten `aggregateExpressions` mentioned above, 
so the attributes in its projection list (in the above case, `gid#558` and 
`id#559`) cannot be found in the underlying relation.
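    For intuition, a single application of the rewrite turns the aggregate 
into roughly the following two-layer plan, where `Expand` emits one projection 
per aggregate group and tags it with `gid` (schematic, eliding attribute ids):
    ```
    Aggregate [date], [date, count(if ((gid = 1)) id else null)]
      Aggregate [date, id, gid], [date, id, gid]
        Expand [List(date, null, 0), List(date, id, 1)], [date, id, gid]
          Relation[date, id]
    ```
    In the failing plan quoted above, the second run of the rule has pushed 
the final layer's `if ((gid#558 = 1)) id#559 else null` expression into the 
`Expand` projections themselves, where `gid#558` and `id#559` do not exist yet.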
    
    #### Solution:
    This PR retrofits part of 
[#11579](https://github.com/apache/spark/pull/11579): it moves 
`DistinctAggregationRewriter` to the beginning of the Optimizer, which 
guarantees that the rewrite happens only after all aggregate functions have 
been resolved, and thus avoids the resolution failure.
    With this change, the SQLConf property 
`spark.sql.specializeSingleDistinctAggPlanning` is no longer needed, so this 
PR also removes it.
    @cloud-fan @yhuai
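    A minimal sketch of the new placement (the class and batch names here are 
assumptions for illustration; see the diff for the actual change):
    ```scala
    import org.apache.spark.sql.catalyst.CatalystConf
    import org.apache.spark.sql.catalyst.analysis.DistinctAggregationRewriter
    import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
    import org.apache.spark.sql.catalyst.rules.RuleExecutor

    // Hypothetical optimizer skeleton: the rewrite runs exactly once, as the
    // first batch, so it only ever sees fully resolved aggregate functions.
    class SketchOptimizer(conf: CatalystConf) extends RuleExecutor[LogicalPlan] {
      override protected val batches = Seq(
        Batch("Distinct Aggregate Rewrite", Once, DistinctAggregationRewriter(conf))
        // ...followed by the pre-existing optimization batches, unchanged
      )
    }
    ```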
    
    #### How was this change tested
    New [test cases](https://github.com/xwu0226/spark/blob/f73428f94746d6d074baf6702589545bdbd11cad/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/AggregationQuerySuite.scala#L927-L988) 
are added to drive `DistinctAggregationRewriter` rewrites for multi-distinct 
aggregations involving a `HAVING` clause; the snippet below sketches the shape 
of the queries they cover.
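    Illustrative only (the table and column names are hypothetical, not the 
exact suite fixtures):
    ```scala
    // Several DISTINCT aggregates plus a HAVING clause over one of them:
    sqlContext.sql(
      """SELECT date, COUNT(DISTINCT id), SUM(DISTINCT value)
        |FROM agg_t
        |GROUP BY date
        |HAVING COUNT(DISTINCT id) > 0""".stripMargin).collect()
    ```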
    
    A follow-up PR will be submitted to add these test cases to the master 
(2.0) branch.
     

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/xwu0226/spark SPARK-14495_review

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/12974.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #12974
    
----
commit c51448d1173739dd592895b0902ab61d66da499d
Author: xin Wu <xi...@us.ibm.com>
Date:   2016-05-05T14:50:37Z

    move DistinctAggregateRewrite rule to optimizer

commit f73428f94746d6d074baf6702589545bdbd11cad
Author: xin Wu <xi...@us.ibm.com>
Date:   2016-05-07T02:23:30Z

    modify testcases and remove property 
spark.sql.specializeSingleDistinctAggPlanning

----

