[ https://issues.apache.org/jira/browse/SPARK-28512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16892692#comment-16892692 ]
Marco Gaido commented on SPARK-28512:
-------------------------------------

Thanks for pinging me [~maropu]. I don't think it is the same issue: SPARK-28470 deals only with cases where there is an overflow. Here there is no overflow; the value is simply not valid for the target type of the cast.

> New optional mode: throw runtime exceptions on casting failures
> ---------------------------------------------------------------
>
>                 Key: SPARK-28512
>                 URL: https://issues.apache.org/jira/browse/SPARK-28512
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 3.0.0
>            Reporter: Gengliang Wang
>            Priority: Major
>
> In popular DBMSs such as MySQL, PostgreSQL, and Oracle, runtime exceptions are thrown on invalid casts, e.g. cast('abc' as int).
> In Spark, the result is silently converted to null. This is by design, since we don't want a long-running job aborted by a casting failure. But there are scenarios where users want to be sure that all data conversions are correct, the way they would be with MySQL/PostgreSQL/Oracle.
> If the changes touch too much code, we can limit the new optional mode to table insertion first. By default the new behavior is disabled.

--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
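To illustrate the two behaviors being discussed, here is a minimal sketch (plain Python, not Spark code; the function name and the `strict_mode` flag are hypothetical, standing in for the proposed optional mode): the default path silently yields null on an invalid cast, while the strict path raises.

```python
def cast_to_int(value, strict_mode=False):
    # Hypothetical sketch of lenient vs. strict cast-to-int semantics.
    try:
        return int(value)
    except (ValueError, TypeError):
        if strict_mode:
            # Proposed optional mode: fail loudly, like MySQL/PostgreSQL/Oracle.
            raise ValueError(f"invalid input for cast to int: {value!r}")
        # Spark's default: the invalid value silently becomes null (None).
        return None

# Default (lenient) behavior: cast('abc' as int) -> null
assert cast_to_int("123") == 123
assert cast_to_int("abc") is None

# Strict behavior: the same cast raises instead of producing null.
try:
    cast_to_int("abc", strict_mode=True)
except ValueError:
    pass
```

The design tension is visible in the sketch: with the lenient default a single bad row cannot abort a long-running job, but bad data passes through undetected; the strict mode trades resilience for correctness guarantees.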