GitHub user dongjoon-hyun commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20023#discussion_r158155039
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala ---
    @@ -1526,15 +1526,15 @@ class SQLQuerySuite extends QueryTest with SharedSQLContext {
         checkAnswer(sql("select 10.300000000000000000 * 3.000000000000000000"),
           Row(BigDecimal("30.900000000000000000000000000000000000", new MathContext(38))))
         checkAnswer(sql("select 10.300000000000000000 * 3.0000000000000000000"),
    -      Row(null))
    --- End diff ---
    
    Two cases (2 and 3) in the email thread return `NULL`. If this is the only test case covering the previous `NULL`-returning behavior, can we add another one?
    ```
    Currently, Spark behaves as follows:
    
       1. It follows some rules taken from the initial Hive implementation;
       2. it returns NULL;
       3. it returns NULL.
    ```
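    
    For illustration only (this sketch is not from the PR): assuming the legacy rule types a product as `DecimalType(p1 + p2 + 1, s1 + s2)` bounded at 38 digits, a second `NULL`-return case could pair two literals of precision 20 and scale 19. The exact product would need precision 41 and scale 38, the bounded type becomes `DecimalType(38, 38)`, and the value `1` has no integer digit left, so the legacy arithmetic yields `NULL`.
    ```scala
    // Hypothetical extra test case (a sketch in the suite's style, not part of this PR):
    // both operands are DecimalType(20, 19), so the bounded product type is
    // DecimalType(38, 38), which cannot represent the value 1 -> NULL under
    // the previous behavior.
    checkAnswer(
      sql("select 1.0000000000000000000 * 1.0000000000000000000"),
      Row(null))
    ```
    The exact literals are interchangeable as long as the product's integral part no longer fits after the precision is bounded to 38.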

