GitHub user tejasapatil opened a pull request:

    https://github.com/apache/spark/pull/17184

    [SPARK-19843] [SQL] UTF8String => (int / long) conversion expensive for invalid inputs

    ## What changes were proposed in this pull request?
    
    Jira : https://issues.apache.org/jira/browse/SPARK-19843
    
    Added cheap checks to `UTF8String` to determine whether it can be converted to an int / long before attempting the expensive conversion. The logic is motivated by [LazyUtils.isNumberMaybe in Hive](https://github.com/apache/hive/blob/ff67cdda1c538dc65087878eeba3e165cf3230f4/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyUtils.java#L90).
    
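    The idea behind such a pre-check can be sketched as follows. This is an illustrative approximation, not the actual patch: the real code operates on `UTF8String`'s internal byte array, and Hive's `isNumberMaybe` differs in details (e.g. handling of decimal points). A `true` result only means the input *might* be a number and still needs the full parse; `false` lets the caller skip the parse entirely.

    ```java
    import java.nio.charset.StandardCharsets;

    public class NumberMaybe {
        /**
         * Cheap pre-check: returns false only if the bytes definitely cannot
         * be parsed as an int / long, so the expensive parse can be skipped.
         * A true ("maybe") result still requires a full parse to confirm
         * (the value could still overflow, for instance).
         */
        public static boolean isNumberMaybe(byte[] bytes) {
            if (bytes.length == 0) {
                return false;                  // empty string is never a number
            }
            int i = 0;
            if (bytes[0] == '+' || bytes[0] == '-') {
                i = 1;                         // optional leading sign
                if (bytes.length == 1) {
                    return false;              // a bare sign is not a number
                }
            }
            for (; i < bytes.length; i++) {
                byte b = bytes[i];
                if (b < '0' || b > '9') {
                    return false;              // any non-digit rules it out
                }
            }
            return true;                       // all digits: worth a real parse
        }

        public static void main(String[] args) {
            System.out.println(isNumberMaybe("123".getBytes(StandardCharsets.UTF_8)));   // true
            System.out.println(isNumberMaybe("-42".getBytes(StandardCharsets.UTF_8)));   // true
            System.out.println(isNumberMaybe("12abc".getBytes(StandardCharsets.UTF_8))); // false
        }
    }
    ```

    The point of the design is that the scan above touches each byte once with no allocation or exception handling, whereas a failed `parseInt`-style conversion pays for sign/overflow bookkeeping and error paths before it can reject the input.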
    ## How was this patch tested?
    
    - Added new unit tests
    - Ran a prod job which had conversion from string -> int and verified the outputs
    
    ## Performance
    
    Attached `UTF8StringBenchmark` with the PR. Here is the summary:
    ```
    conversion to int:                       Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
    ------------------------------------------------------------------------------------------------
    without check over valid integers               500 /  503         33.6          29.8       1.0X
    with check over valid integers                 1069 / 1072         15.7          63.7       0.5X
    without check over in-valid integers          34515 / 36096         0.5        2057.3       0.0X
    with check over in-valid integers               718 /  895         23.4          42.8       0.7X
    ```
    
    Note that this PR is a tradeoff: if the data contains no invalid inputs, conversion becomes roughly 2x slower. Based on the benchmark numbers, the check pays off once more than ~2.5% of the input cannot be converted to `int`. For my workloads, I saw an average 37% saving (best case: a 2.08x performance gain).
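    For intuition, the per-row numbers in the table give a back-of-the-envelope break-even point for the invalid-input fraction. This is a simplistic estimate (it ignores everything but per-row cost) but lands in the same ballpark as the ~2.5% figure above:

    ```java
    public class BreakEven {
        public static void main(String[] args) {
            // Per-row costs (ns) taken from the benchmark table above
            double extraPerValid   = 63.7 - 29.8;    // overhead the check adds on a valid int
            double savedPerInvalid = 2057.3 - 42.8;  // cost the check avoids on an invalid int
            // With invalid fraction p, the check wins once
            //   p * savedPerInvalid > (1 - p) * extraPerValid
            // i.e. p > extra / (extra + saved)
            double breakEven = extraPerValid / (extraPerValid + savedPerInvalid);
            System.out.printf("break-even invalid fraction ~ %.2f%%%n", breakEven * 100);
        }
    }
    ```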

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/tejasapatil/spark SPARK-19843_is_numeric_maybe

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/17184.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #17184
    
----
commit adcb405bc889345c662ddda1742d3ea95191570d
Author: Tejas Patil <tej...@fb.com>
Date:   2017-03-07T00:56:32Z

    [SPARK-19843] [SQL] UTF8String => (int / long) conversion expensive for invalid inputs

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org

Reply via email to