Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21499#discussion_r193618762
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/types/DecimalType.scala ---
    @@ -161,13 +161,17 @@ object DecimalType extends AbstractDataType {
        * This method is used only when `spark.sql.decimalOperations.allowPrecisionLoss` is set to true.
        */
       private[sql] def adjustPrecisionScale(precision: Int, scale: Int): DecimalType = {
    -    // Assumptions:
    +    // Assumption:
         assert(precision >= scale)
    -    assert(scale >= 0)
     
         if (precision <= MAX_PRECISION) {
           // Adjustment only needed when we exceed max precision
           DecimalType(precision, scale)
    +    } else if (scale < 0) {
    +      // Decimal can have negative scale (SPARK-24468). In this case, we cannot allow a precision
    +      // loss since we would cause a loss of digits in the integer part.
    --- End diff ---
    
    OK, makes sense. Do we have an end-to-end test case for returning null?
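    
    For context, a minimal sketch (not from this PR) of what such an end-to-end check could look like is below. The object name and the overflowing query are illustrative assumptions, and a test that exercises the negative-scale branch specifically would need an operand whose decimal scale is actually negative:
    
        import org.apache.spark.sql.SparkSession
    
        // Hypothetical standalone check: multiply two decimals whose product needs
        // more than MAX_PRECISION (38) digits and confirm the result comes back as
        // null rather than as a value with silently dropped integer digits.
        object DecimalOverflowNullCheck {
          def main(args: Array[String]): Unit = {
            val spark = SparkSession.builder()
              .master("local[1]")
              .appName("decimal-overflow-null-check")
              .getOrCreate()
    
            // 1e19 * 1e19 = 1e38, which needs 39 digits, so it cannot fit into the
            // adjusted DECIMAL(38, 0) result type and should come back as null.
            val result = spark.sql(
              "SELECT CAST('10000000000000000000' AS DECIMAL(38, 0)) * " +
              "CAST('10000000000000000000' AS DECIMAL(38, 0)) AS product")
    
            result.show(false) // expected: a single row with a null product
            spark.stop()
          }
        }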


---
