GitHub user mgaido91 commented on the issue:

    https://github.com/apache/spark/pull/22494
  
    > If your argument is, picking a precise precision for literals is an 
individual feature and not related to #20023, I'm OK to create a new config 
for it.
    
    Yes, I think this is the better option. What I meant was this: imagine 
I am a Spark 2.3.0 user and I have `DECIMAL_OPERATIONS_ALLOW_PREC_LOSS` set 
to `false`. Before this patch, I can successfully run 
`select 1234567891 / (1.1 * 2 * 2 * 2 * 2)`. After this patch, the same 
query returns `null` instead, because the result overflows. So this patch 
"corrects" a regression from 2.2, but it introduces another one relative to 
2.3.0-2.3.1.
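    
    To make the scenario concrete, here is a minimal sketch of the 
reproduction. It assumes the SQL config key 
`spark.sql.decimalOperations.allowPrecisionLoss` (the key behind the 
`DECIMAL_OPERATIONS_ALLOW_PREC_LOSS` constant) and a local session:
    
    ```scala
    import org.apache.spark.sql.SparkSession
    
    object DecimalOverflowRepro {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .master("local[*]")
          .appName("decimal-overflow-repro")
          .getOrCreate()
    
        // The 2.3.x setup under discussion: precision loss disabled.
        spark.conf.set("spark.sql.decimalOperations.allowPrecisionLoss", "false")
    
        // On 2.3.0/2.3.1 this prints a numeric result; with this patch the
        // tighter literal precision makes the division overflow, so the
        // query yields null instead.
        spark.sql("select 1234567891 / (1.1 * 2 * 2 * 2 * 2)").show()
    
        spark.stop()
      }
    }
    ```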
    
    Using another config is therefore a better workaround IMO.

