GitHub user jiangxb1987 opened a pull request:

    https://github.com/apache/spark/pull/15672

    [SPARK-18148][SQL] Misleading Error Message for Aggregation Without Window/GroupBy

    ## What changes were proposed in this pull request?
    
    An aggregation without window/group-by expressions fails in `checkAnalysis`, but the error message is misleading; this patch generates a more specific error message for this case.
    
    For example,
    ```
    spark.read.load("/some-data")
      .withColumn("date_dt", to_date($"date"))
      .withColumn("year", year($"date_dt"))
      .withColumn("week", weekofyear($"date_dt"))
      .withColumn("user_count", count($"userId"))
      .withColumn("daily_max_in_week", max($"user_count").over(weeklyWindow))
    ```
    creates the following output:
    ```
    org.apache.spark.sql.AnalysisException: expression '`randomColumn`' is 
neither present in the group by, nor is it an aggregate function. Add to group 
by or wrap in first() (or first_value) if you don't care which value you get.;
    ```
    In the error message above, `randomColumn` doesn't appear in the query itself (it is actually added by the `withColumn` function), so the message doesn't give the user enough information to address the problem.
    
    ## How was this patch tested?
    
    Tested manually.
    
    Before:
    ```
    scala> spark.sql("select col, count(col) from tbl")
    org.apache.spark.sql.AnalysisException: expression 'tbl.`col`' is neither 
present in the group by, nor is it an aggregate function. Add to group by or 
wrap in first() (or first_value) if you don't care which value you get.;;
    ```
    
    After:
    ```
    scala> spark.sql("select col, count(col) from tbl")
    org.apache.spark.sql.AnalysisException: grouping expressions sequence is 
empty, and 'tbl.`col`' is not an aggregate function. Wrap '(count(col#231L) AS 
count(col)#239L)' in windowing function(s) or wrap 'tbl.`col`' in first() (or 
first_value) if you don't care which value you get.;;
    ```
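
    As a usage note (my own illustration, not output from this patch), either
    of the rewrites suggested by the new message passes analysis:
    ```
    scala> spark.sql("select col, count(col) from tbl group by col") // add to group by
    scala> spark.sql("select first(col), count(col) from tbl")       // or wrap in first()
    ```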

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/jiangxb1987/spark groupBy-empty

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/15672.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #15672
    
----
commit 494ef0931b6c8e0a4c0804f88a21b7dce7870750
Author: jiangxingbo <jiangxb1...@gmail.com>
Date:   2016-10-28T10:48:21Z

    improve error message when checkAnalysis fail on Aggregate operator.

commit 350b7a335038adbe8a5a766d7a77fa17db0dbdeb
Author: jiangxingbo <jiangxb1...@gmail.com>
Date:   2016-10-28T11:12:04Z

    better format.

----

