[jira] [Commented] (SPARK-35207) hash() and other hash builtins do not normalize negative zero

2021-05-10 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-35207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342207#comment-17342207
 ] 

Apache Spark commented on SPARK-35207:
--

User 'planga82' has created a pull request for this issue:
https://github.com/apache/spark/pull/32496

> hash() and other hash builtins do not normalize negative zero
> --------------------------------------------------------------
>
> Key: SPARK-35207
> URL: https://issues.apache.org/jira/browse/SPARK-35207
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.1.1
>Reporter: Tim Armstrong
>Priority: Major
>  Labels: correctness
>
> I would generally expect that {{x = y => hash( x ) = hash( y )}}. However, +0.0 
> and -0.0 hash to different values for floating-point types. 
> {noformat}
> scala> spark.sql("select hash(cast('0.0' as double)), hash(cast('-0.0' as double))").show
> +-------------------------+--------------------------+
> |hash(CAST(0.0 AS DOUBLE))|hash(CAST(-0.0 AS DOUBLE))|
> +-------------------------+--------------------------+
> |              -1670924195|                -853646085|
> +-------------------------+--------------------------+
> scala> spark.sql("select cast('0.0' as double) == cast('-0.0' as double)").show
> +--------------------------------------------+
> |(CAST(0.0 AS DOUBLE) = CAST(-0.0 AS DOUBLE))|
> +--------------------------------------------+
> |                                        true|
> +--------------------------------------------+
> {noformat}
> I'm not sure how likely this is to cause issues in practice, since only a 
> limited number of calculations can produce -0.0, and joining or aggregating on 
> floating-point keys is bad practice as a general rule, but I think it would 
> be safer if we normalised -0.0 to +0.0.
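
As a rough illustration of the normalisation the description asks for (a Scala sketch with a hypothetical helper name, not the actual Spark change): mapping -0.0 to +0.0 before the value reaches the hash function restores the {{x = y => hash(x) = hash(y)}} property for signed zeros.

{noformat}
// Hypothetical helper, not Spark's implementation. Under IEEE 754, -0.0 == 0.0,
// so this branch maps -0.0 to +0.0 and leaves every other value (including NaN) unchanged.
def normalizeNegativeZero(d: Double): Double =
  if (d == 0.0d) 0.0d else d

// Both zeros now feed identical bits to the hash function, so their hashes agree.
{noformat}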



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-35207) hash() and other hash builtins do not normalize negative zero

2021-05-03 Thread Pablo Langa Blanco (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-35207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17338698#comment-17338698
 ] 

Pablo Langa Blanco commented on SPARK-35207:


Hi [~tarmstrong] ,

I have read about signed zero and it's a little bit tricky. From what I have 
read in IEEE 754 and its implementation in Java, I understand the same as you: 
it's not consistent to have different hash values for 0 and -0. This applies to 
both double and float.

Here is an extract from IEEE 754: "The two zeros are distinguishable 
arithmetically only by either division-by-zero (producing appropriately signed 
infinities) or else by the CopySign function recommended by IEEE 754/854. 
Infinities, SNaNs, NaNs and Subnormal numbers necessitate four more special 
cases."

I will raise a PR with this.

Regards

> hash() and other hash builtins do not normalize negative zero
> --------------------------------------------------------------
>
> Key: SPARK-35207
> URL: https://issues.apache.org/jira/browse/SPARK-35207
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.1.1
>Reporter: Tim Armstrong
>Priority: Major
>  Labels: correctness
>
> I would generally expect that {{x = y => hash( x ) = hash( y )}}. However, +0.0 
> and -0.0 hash to different values for floating-point types. 
> {noformat}
> scala> spark.sql("select hash(cast('0.0' as double)), hash(cast('-0.0' as double))").show
> +-------------------------+--------------------------+
> |hash(CAST(0.0 AS DOUBLE))|hash(CAST(-0.0 AS DOUBLE))|
> +-------------------------+--------------------------+
> |              -1670924195|                -853646085|
> +-------------------------+--------------------------+
> scala> spark.sql("select cast('0.0' as double) == cast('-0.0' as double)").show
> +--------------------------------------------+
> |(CAST(0.0 AS DOUBLE) = CAST(-0.0 AS DOUBLE))|
> +--------------------------------------------+
> |                                        true|
> +--------------------------------------------+
> {noformat}
> I'm not sure how likely this is to cause issues in practice, since only a 
> limited number of calculations can produce -0.0, and joining or aggregating on 
> floating-point keys is bad practice as a general rule, but I think it would 
> be safer if we normalised -0.0 to +0.0.


