[ https://issues.apache.org/jira/browse/SPARK-39040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
XiDuo You updated SPARK-39040:
------------------------------
    Description: 
For example, the following query fails:

{code:java}
set spark.sql.ansi.enabled=true;
set spark.sql.optimizer.excludedRules=org.apache.spark.sql.catalyst.optimizer.ConstantFolding;
SELECT nanvl(1, 1/0 + 1/0);
{code}

{code:java}
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 4.0 failed 1 times, most recent failure: Lost task 0.0 in stage 4.0 (TID 4) (10.221.98.68 executor driver): org.apache.spark.SparkArithmeticException: divide by zero. To return NULL instead, use 'try_divide'. If necessary set spark.sql.ansi.enabled to false (except for ANSI interval type) to bypass this error.
== SQL(line 1, position 17) ==
select nanvl(1 , 1/0 + 1/0)
                 ^^^
	at org.apache.spark.sql.errors.QueryExecutionErrors$.divideByZeroError(QueryExecutionErrors.scala:151)
{code}

We should respect the evaluation order of conditional expressions, which always evaluate the predicate branch first, so the query above should not fail.

  was:
For example:

{code:java}
set spark.sql.ansi.enabled=true;
set spark.sql.optimizer.excludedRules=org.apache.spark.sql.catalyst.optimizer.ConstantFolding;
SELECT nanvl(1, 1/0 + 1/0);
{code}

We should respect the evaluation order of conditional expressions, which always evaluate the predicate branch first, so the query above should not fail.
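The failure mode can be illustrated outside of Spark. A minimal sketch, assuming a simplified model of the issue: `nanvl(x, y)` should only evaluate `y` when `x` is NaN, but a subexpression-elimination pass that hoists the duplicated `1/0` out of `y` forces it to be evaluated eagerly, which throws under ANSI semantics. The class and method names below are hypothetical, not Spark's actual `EquivalentExpressions` code:

```java
import java.util.function.DoubleSupplier;

public class NanvlSketch {
    // ANSI-style division: throws instead of returning NaN/Infinity.
    static double ansiDivide(double a, double b) {
        if (b == 0) throw new ArithmeticException("divide by zero");
        return a / b;
    }

    // Lazy nanvl: the fallback is evaluated only if the first argument is NaN.
    static double nanvl(double x, DoubleSupplier fallback) {
        return Double.isNaN(x) ? fallback.getAsDouble() : x;
    }

    public static void main(String[] args) {
        // Lazy evaluation: 1 is not NaN, so the fallback (and its
        // divisions by zero) never runs -- mirrors the expected behavior
        // of SELECT nanvl(1, 1/0 + 1/0).
        double ok = nanvl(1.0, () -> ansiDivide(1, 0) + ansiDivide(1, 0));
        System.out.println(ok); // prints 1.0

        // Eager evaluation: hoisting the common subexpression 1/0 out of
        // the fallback evaluates it unconditionally and fails, which is
        // the bug described in this ticket.
        try {
            double common = ansiDivide(1, 0); // hoisted 1/0
            nanvl(1.0, () -> common + common);
        } catch (ArithmeticException e) {
            System.out.println("fails eagerly: " + e.getMessage());
        }
    }
}
```

The fix direction described in the ticket corresponds to the lazy case: subexpression elimination must not pull work out of a conditional branch that the predicate may never reach.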
> Respect NaNvl in EquivalentExpressions for expression elimination
> -----------------------------------------------------------------
>
>                 Key: SPARK-39040
>                 URL: https://issues.apache.org/jira/browse/SPARK-39040
>             Project: Spark
>          Issue Type: Sub-task
>          Components: SQL
>    Affects Versions: 3.4.0
>            Reporter: XiDuo You
>            Priority: Major
>
> For example, the following query fails:
> {code:java}
> set spark.sql.ansi.enabled=true;
> set spark.sql.optimizer.excludedRules=org.apache.spark.sql.catalyst.optimizer.ConstantFolding;
> SELECT nanvl(1, 1/0 + 1/0);
> {code}
> {code:java}
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 4.0 failed 1 times, most recent failure: Lost task 0.0 in stage 4.0 (TID 4) (10.221.98.68 executor driver): org.apache.spark.SparkArithmeticException: divide by zero. To return NULL instead, use 'try_divide'. If necessary set spark.sql.ansi.enabled to false (except for ANSI interval type) to bypass this error.
> == SQL(line 1, position 17) ==
> select nanvl(1 , 1/0 + 1/0)
>                  ^^^
> 	at org.apache.spark.sql.errors.QueryExecutionErrors$.divideByZeroError(QueryExecutionErrors.scala:151)
> {code}
> We should respect the evaluation order of conditional expressions, which always evaluate the predicate branch first, so the query above should not fail.

--
This message was sent by Atlassian Jira
(v8.20.7#820007)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org