andygrove opened a new issue, #2021:
URL: https://github.com/apache/datafusion-comet/issues/2021

   ### Describe the bug
   
   As part of exploring writing unit tests for serde code in 
https://github.com/apache/datafusion-comet/issues/2020, I discovered that we 
currently have incorrect behavior for `try_add` (and probably for many other 
`try_` functions).
   
   The issue can be reproduced with the following test:
   
   ```scala
     test("try_add") {
       val data = Seq((Integer.MAX_VALUE, 1))
       withParquetTable(data, "tbl") {
         checkSparkAnswerAndOperator("SELECT try_add(_1, _2) FROM tbl")
       }
     }
   ```
   
   This fails with:
   
   ```
   == Results ==
   !== Correct Answer - 1 ==      == Spark Answer - 1 ==
    struct<try_add(_1, _2):int>   struct<try_add(_1, _2):int>
   ![null]                        [-2147483648]
   ```
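   For reference, the three `EvalMode` behaviors for integer addition on overflow can be sketched as follows. This is a simplified illustration of the Spark semantics, not Comet's actual code:

    ```scala
    // Simplified sketch of Spark's three eval-mode behaviors for integer
    // addition on overflow (illustrative only):
    def legacyAdd(a: Int, b: Int): Int = a + b             // LEGACY: wraps around
    def ansiAdd(a: Int, b: Int): Int = Math.addExact(a, b) // ANSI: throws on overflow
    def tryAdd(a: Int, b: Int): Option[Int] =              // TRY: null (None) on overflow
      try Some(Math.addExact(a, b))
      catch { case _: ArithmeticException => None }
    ```

   With `a = Integer.MAX_VALUE` and `b = 1`, `legacyAdd` produces `-2147483648` (the wrong answer Comet currently returns for `try_add`), `ansiAdd` throws, and `tryAdd` returns `None`, which matches the expected `null` above.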
   
   The issue is that we do not respect the `EvalMode` for this (and other) 
expressions. When serializing the expression we populate a `fail_on_error` flag 
based on `add.evalMode == EvalMode.ANSI`. However, we should really be 
serializing the `evalMode` and implementing all three distinct behaviors.
   
   In the short term, we should probably just fall back to Spark for expressions using 
`EvalMode.TRY`, since the current behavior is incorrect.
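   The lossy mapping and the proposed fix could look roughly like the sketch below. All names here (`EvalModeProto`, `serializeEvalMode`) are hypothetical, not the actual Comet serde API:

    ```scala
    // Hypothetical sketch: the current serde collapses three eval modes into
    // one boolean, so TRY is serialized identically to LEGACY and overflow
    // wraps instead of producing null.
    object EvalMode extends Enumeration {
      val LEGACY, TRY, ANSI = Value
    }

    // Hypothetical protobuf-side enum mirroring the three modes.
    object EvalModeProto extends Enumeration {
      val Legacy, Try, Ansi = Value
    }

    // Current (lossy) mapping: only ANSI is distinguished.
    def failOnError(evalMode: EvalMode.Value): Boolean =
      evalMode == EvalMode.ANSI

    // Proposed mapping: serialize the eval mode itself so the native side
    // can implement all three distinct behaviors.
    def serializeEvalMode(evalMode: EvalMode.Value): EvalModeProto.Value =
      evalMode match {
        case EvalMode.LEGACY => EvalModeProto.Legacy
        case EvalMode.TRY    => EvalModeProto.Try
        case EvalMode.ANSI   => EvalModeProto.Ansi
      }
    ```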
   
   
   
   
   ### Steps to reproduce
   
   _No response_
   
   ### Expected behavior
   
   _No response_
   
   ### Additional context
   
   _No response_


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscr...@datafusion.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

