GitHub user wangmiao1981 opened a pull request:

    https://github.com/apache/spark/pull/14613

    [SPARK-16883][SparkR] SQL decimal type is not properly cast to number when collecting SparkDataFrame

    ## What changes were proposed in this pull request?
    
    
    registerTempTable(createDataFrame(iris), "iris")
    str(collect(sql("select cast('1' as double) as x, cast('2' as decimal) as y from iris limit 5")))
    
    'data.frame':       5 obs. of  2 variables:
     $ x: num  1 1 1 1 1
     $ y:List of 5
      ..$ : num 2
      ..$ : num 2
      ..$ : num 2
      ..$ : num 2
      ..$ : num 2
    
    The problem is that Spark reports the column type as `decimal(10, 0)` rather than plain `decimal`, so `decimal(10, 0)` is not matched by the existing type mapping and the column is collected as a list. It should be handled as `double`.
    
    As discussed in the JIRA thread, there are two potential fixes:
    1) Scala-side fix: add a new case when writing the object back. However, `spark.sql.types._` cannot be imported in Spark core due to dependency issues, so I have not found a way to do the type case match there.
    
    2) SparkR-side fix: add a helper function that detects special types such as `"decimal(10, 0)"` and replaces them with `double`, which is a PRIMITIVE type. The helper is written generically so that handling for other types can be added in the future; see the sketch below.
    
    I am opening this PR to discuss the pros and cons of both approaches. If we prefer the Scala-side fix, we first need to find a way to match the DecimalType and StructType cases in Spark core.
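    
    For concreteness, here is a minimal sketch of what the SparkR-side helper could look like. The function name `specialtypeshandle`, the regex, and the return convention are illustrative assumptions, not the final implementation:
    
        # Sketch only: map parameterized SQL types that SparkR's type map
        # does not cover onto a primitive type name; return NULL when the
        # type needs no special handling.
        specialtypeshandle <- function(type) {
          returntype <- NULL
          # "decimal", "decimal(10,0)", "decimal(10, 0)", ... all collapse
          # to an R double
          if (grepl("^decimal(\\([0-9]+, *[0-9]+\\))?$", type)) {
            returntype <- "double"
          }
          returntype
        }
    
    A caller on the deserialization path would then check `is.null(specialtypeshandle(type))` before falling back to the existing type map.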
    
    ## How was this patch tested?
    
    
    Manual test:
    > str(collect(sql("select cast('1' as double) as x, cast('2' as decimal) as y from iris limit 5")))
    'data.frame':       5 obs. of  2 variables:
     $ x: num  1 1 1 1 1
     $ y: num  2 2 2 2 2
    
    R unit tests (a hypothetical example is sketched below).
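    
    A hypothetical testthat case along these lines could guard against regressions (the test name and expectations are assumptions, not the committed test):
    
        # Sketch only: assumes an active SparkR session (sparkR.session()).
        test_that("collect() maps decimal columns to doubles", {
          df <- sql("select cast('2' as decimal) as y")
          local <- collect(df)
          # before the fix, $y came back as a list rather than a numeric vector
          expect_true(is.numeric(local$y))
          expect_equal(local$y, 2)
        })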
    


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/wangmiao1981/spark type

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/14613.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #14613
    
----
commit e95f5575018d15782917b9b3d679b4f6da345ee6
Author: wm...@hotmail.com <wm...@hotmail.com>
Date:   2016-08-11T23:15:15Z

    add a type check helper

----

