GitHub user cloud-fan opened a pull request:

    https://github.com/apache/spark/pull/23265

    [2.4][SPARK-26021][SQL][FOLLOWUP] only deal with NaN and -0.0 in UnsafeWriter

    backport https://github.com/apache/spark/pull/23239 to 2.4
    
    ---------
    
    ## What changes were proposed in this pull request?
    
    A followup of https://github.com/apache/spark/pull/23043
    
    There are 4 places where we need to deal with NaN and -0.0:
    1. Comparison expressions: `-0.0` and `0.0` should be treated as the same value, and different NaNs should be treated as the same value.
    2. Join keys: `-0.0` and `0.0` should be treated as the same value, and different NaNs should be treated as the same value.
    3. Grouping keys: `-0.0` and `0.0` should be assigned to the same group, and different NaNs should be assigned to the same group.
    4. Window partition keys: `-0.0` and `0.0` should be treated as the same value, and different NaNs should be treated as the same value.
    
    Case 1 is OK: our comparison operators already handle NaN and -0.0, and for struct/array/map we recursively compare the fields/elements.
    
    Cases 2, 3 and 4 are problematic, as they compare `UnsafeRow` binaries directly: different NaNs have different binary representations, and the same is true for -0.0 and 0.0.
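
    For illustration (this snippet is not part of the patch), a plain JVM check shows why byte-wise comparison disagrees with numeric equality here:

    ```java
    public class FloatBits {
        public static void main(String[] args) {
            // 0.0 and -0.0 compare equal numerically but differ in the sign bit:
            System.out.println(0.0 == -0.0);                                         // true
            System.out.println(Long.toHexString(Double.doubleToRawLongBits(0.0)));   // 0
            System.out.println(Long.toHexString(Double.doubleToRawLongBits(-0.0)));  // 8000000000000000

            // Two quiet NaNs with different payload bits are both NaN, yet not byte-equal:
            double nan1 = Double.longBitsToDouble(0x7ff8000000000001L);
            double nan2 = Double.longBitsToDouble(0x7ff8000000000002L);
            System.out.println(Double.isNaN(nan1) && Double.isNaN(nan2));  // true
            System.out.println(Double.doubleToRawLongBits(nan1)
                    == Double.doubleToRawLongBits(nan2));                  // false
        }
    }
    ```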
    
    To fix it, a simple solution is to normalize float/double values when building unsafe data (`UnsafeRow`, `UnsafeArrayData`, `UnsafeMapData`). Then we don't need to worry about it anymore.
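
    A minimal sketch of that normalization rule (the helper name is illustrative; the actual patch inlines this logic in `UnsafeWriter`'s write paths):

    ```java
    // Hypothetical helper showing the normalization rule applied before
    // the bytes are written into unsafe data.
    static double normalize(double value) {
        if (Double.isNaN(value)) {
            return Double.NaN;  // collapse every NaN bit pattern to the canonical NaN
        } else if (value == -0.0d) {
            return 0.0d;        // -0.0 == 0.0 is true, so this folds -0.0 into +0.0
        }
        return value;
    }
    ```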
    
    Following this direction, this PR moves the handling of NaN and -0.0 from `Platform` to `UnsafeWriter`, so that places like `UnsafeRow.setFloat` no longer handle them, which reduces the performance overhead. It also makes it easier to add comments in `UnsafeWriter` explaining why we do this.
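
    Concretely, the guarded write ends up shaped roughly like the sketch below (not the exact diff; the signature and `getBuffer` are assumed from `UnsafeWriter`'s existing API), while `UnsafeRow.setFloat` and `Platform.putFloat` stay plain, unguarded writes:

    ```java
    // Sketch of a write path inside UnsafeWriter after this PR.
    protected final void writeFloat(long offset, float value) {
        if (Float.isNaN(value)) {
            value = Float.NaN;  // canonical NaN bits
        } else if (value == -0.0f) {
            value = 0.0f;       // fold -0.0 into +0.0
        }
        Platform.putFloat(getBuffer(), offset, value);
    }
    ```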
    
    ## How was this patch tested?
    
    Existing tests.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/cloud-fan/spark minor

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/23265.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #23265
    
----
commit 6a837c019eaf7bc9907715a54778bfbb339f3342
Author: Wenchen Fan <wenchen@...>
Date:   2018-12-08T19:18:09Z

    [SPARK-26021][SQL][FOLLOWUP] only deal with NaN and -0.0 in UnsafeWriter
    
    A followup of https://github.com/apache/spark/pull/23043
    
    There are 4 places where we need to deal with NaN and -0.0:
    1. Comparison expressions: `-0.0` and `0.0` should be treated as the same value, and different NaNs should be treated as the same value.
    2. Join keys: `-0.0` and `0.0` should be treated as the same value, and different NaNs should be treated as the same value.
    3. Grouping keys: `-0.0` and `0.0` should be assigned to the same group, and different NaNs should be assigned to the same group.
    4. Window partition keys: `-0.0` and `0.0` should be treated as the same value, and different NaNs should be treated as the same value.
    
    Case 1 is OK: our comparison operators already handle NaN and -0.0, and for struct/array/map we recursively compare the fields/elements.
    
    Cases 2, 3 and 4 are problematic, as they compare `UnsafeRow` binaries directly: different NaNs have different binary representations, and the same is true for -0.0 and 0.0.
    
    To fix it, a simple solution is to normalize float/double values when building unsafe data (`UnsafeRow`, `UnsafeArrayData`, `UnsafeMapData`). Then we don't need to worry about it anymore.
    
    Following this direction, this PR moves the handling of NaN and -0.0 from `Platform` to `UnsafeWriter`, so that places like `UnsafeRow.setFloat` no longer handle them, which reduces the performance overhead. It also makes it easier to add comments in `UnsafeWriter` explaining why we do this.
    
    Existing tests.
    
    Closes #23239 from cloud-fan/minor.
    
    Authored-by: Wenchen Fan <wenc...@databricks.com>
    Signed-off-by: Dongjoon Hyun <dongj...@apache.org>

----


---
