Github user cloud-fan commented on the issue: https://github.com/apache/spark/pull/20023
I think for big data, throwing an exception at runtime is pretty bad. I think it's ok to be incompatible with the SQL standard in some cases and return null. Personally I feel it's ok to follow the SQL standard and truncate when precision is lost, but we should not follow the SQL standard and throw an exception on overflow; at least that should not be the default behavior.
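The design space the comment describes (truncate silently on precision loss, but choose between throwing and returning a sentinel on overflow) can be sketched with Python's `decimal` module. This is only an analogy for the Spark SQL behavior under discussion, not Spark's actual implementation; the precision and exponent limits below are arbitrary illustration values.

```python
from decimal import Decimal, localcontext, Overflow

# Precision loss: with a 4-digit context, the exact product 1234.5 is
# silently rounded to 4 significant digits -- the "truncate, don't
# throw" option the comment is fine with.
with localcontext() as ctx:
    ctx.prec = 4
    lossy = Decimal("123.45") * Decimal("10")
    print(lossy)  # rounded result, precision quietly lost

# Overflow: by default the decimal module raises, like the SQL
# standard; clearing the trap makes it return a sentinel (Infinity)
# instead -- analogous to returning null rather than failing the job.
with localcontext() as ctx:
    ctx.prec = 4
    ctx.Emax = 4  # force overflow for this illustration
    try:
        Decimal("9999") * Decimal("9999")
    except Overflow:
        print("overflow raised (SQL-standard-like default)")

    ctx.traps[Overflow] = False
    print(Decimal("9999") * Decimal("9999"))  # sentinel instead of an error
```

The point of the sketch is that both behaviors are a context/configuration choice, which matches the comment's position that throwing on overflow should at most be opt-in, not the default.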