[ https://issues.apache.org/jira/browse/SPARK-24834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16680170#comment-16680170 ]

Matt Cheah commented on SPARK-24834:
------------------------------------

[~srowen] - I know this is an old ticket, but I wanted to propose re-opening 
it and addressing it for Spark 3.0. My understanding is that this behavior is 
also inconsistent with other SQL systems such as MySQL and Postgres. Even 
though this would be a behavioral change, one could argue it is a correctness 
issue, given the behavior one would expect based on those other systems. 
Would it be reasonable to make the behavior change for Spark 3.0 and call it 
out in the release notes?

> Utils#nanSafeCompare{Double,Float} functions do not differ from normal Java 
> double/float comparison
> ---------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-24834
>                 URL: https://issues.apache.org/jira/browse/SPARK-24834
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.3.2
>            Reporter: Benjamin Duffield
>            Priority: Minor
>
> Utils.scala contains two functions `nanSafeCompareDoubles` and 
> `nanSafeCompareFloats` which purport to have special handling of NaN values 
> in comparisons.
> The handling in these functions does not appear to differ from 
> java.lang.Double.compare and java.lang.Float.compare - they seem to produce 
> identical output to the built-in Java comparison functions.
> I think it's clearer not to have these special Utils functions, and instead 
> just use the standard Java comparison functions.
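
For readers without Utils.scala at hand, here is a minimal, self-contained 
sketch of a NaN-safe comparator of the shape described above (reconstructed 
for illustration, not copied from Spark), plus a check that it agrees with 
java.lang.Double.compare across NaN, the infinities, and ordinary values:

```scala
// Sketch of a NaN-safe double comparator (illustrative reconstruction,
// not the actual Utils.scala source): NaN is treated as equal to NaN
// and greater than every non-NaN value.
object NanCompareCheck {
  def nanSafeCompareDoubles(x: Double, y: Double): Int = {
    val xIsNan = java.lang.Double.isNaN(x)
    val yIsNan = java.lang.Double.isNaN(y)
    if ((xIsNan && yIsNan) || (x == y)) 0
    else if (xIsNan) 1
    else if (yIsNan) -1
    else if (x > y) 1
    else -1
  }

  def main(args: Array[String]): Unit = {
    // -0.0 is deliberately left out of the samples; see the note below.
    val samples = Seq(Double.NaN, Double.NegativeInfinity, -1.0, 0.0, 1.0,
      Double.PositiveInfinity)
    for (x <- samples; y <- samples) {
      assert(nanSafeCompareDoubles(x, y) == java.lang.Double.compare(x, y),
        s"mismatch at ($x, $y)")
    }
    // The NaN semantics both comparators share: NaN equals NaN and
    // sorts above every other value, including positive infinity.
    assert(java.lang.Double.compare(Double.NaN, Double.NaN) == 0)
    assert(java.lang.Double.compare(Double.NaN, Double.PositiveInfinity) > 0)
    println("agrees with java.lang.Double.compare on all sampled pairs")
  }
}
```

One caveat worth flagging: java.lang.Double.compare falls back to a bit-level 
comparison (via doubleToLongBits) for the tie case and therefore orders -0.0 
before 0.0, whereas an ==-based branch like the one sketched above treats the 
two zeros as equal. Signed zeros are the one input class where the two 
approaches can diverge, which may be part of why swapping one for the other 
would be a behavioral change.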


