[ https://issues.apache.org/jira/browse/SPARK-20555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16282128#comment-16282128 ]

Shankar Kandaswamy edited comment on SPARK-20555 at 12/7/17 4:50 PM:
---------------------------------------------------------------------

[~gfeher]
May I know if the first issue has been resolved?

"1. DECIMAL(1) becomes BooleanType
In Oracle, a DECIMAL(1) can have values from -9 to 9."

I am using Spark 2.2.0, but I am still getting the Boolean value "false" when the 
source column is a NUMBER(1) containing 0. I want it read back as 0, without using 
a custom schema. Could you please advise?
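
For reference, here is a minimal way to check which mapping is in effect: read the 
table over JDBC and print the inferred schema. This is only a sketch; the connection 
URL, credentials, and table name are placeholders, and the table is assumed to have 
a NUMBER(1) column named FLAG.

{code:scala}
// Minimal schema check (placeholder connection details and table name).
val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:oracle:thin:@//dbhost:1521/service")
  .option("dbtable", "TEST_TABLE")
  .option("user", "scott")
  .option("password", "tiger")
  .load()

df.printSchema()
// With the SPARK-20555 fix in effect, the NUMBER(1) column FLAG should show as
// decimal(1,0). If it still shows as boolean, the running build predates the fix.
{code}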



> Incorrect handling of Oracle's decimal types via JDBC
> -----------------------------------------------------
>
>                 Key: SPARK-20555
>                 URL: https://issues.apache.org/jira/browse/SPARK-20555
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.1.0
>            Reporter: Gabor Feher
>            Assignee: Gabor Feher
>             Fix For: 2.1.2, 2.2.0
>
>
> When querying an Oracle database, Spark maps some Oracle numeric data types 
> to incorrect Catalyst data types:
> 1. DECIMAL(1) becomes BooleanType
> In Oracle, a DECIMAL(1) can have values from -9 to 9.
> In Spark now, values larger than 1 become the boolean value true.
> 2. DECIMAL(3,2) becomes IntegerType
> In Oracle, a DECIMAL(3,2) can have values like 1.23
> In Spark now, digits after the decimal point are dropped.
> 3. DECIMAL(10) becomes IntegerType
> In Oracle, a DECIMAL(10) can have the value 9999999999 (ten nines), which is 
> larger than 2^31 - 1, the maximum value an IntegerType can hold.
> Spark throws an exception: "java.sql.SQLException: Numeric Overflow"
> I think the best solution is to always keep Oracle's decimal types. (In 
> theory we could introduce a FloatType in some cases of #2, and fix #3 by only 
> introducing IntegerType for DECIMAL(9) and below. But in my opinion, that 
> would end up complicated and error-prone.)
> Note: I think the above problems were introduced as part of 
> https://github.com/apache/spark/pull/14377
> The main purpose of that PR seems to be converting Spark types to correct 
> Oracle types, and that part seems good to me. But it also adds the inverse 
> conversions, and as the above examples show, those cannot be done without 
> losing information.
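
For anyone still on a build without the fix, a possible stopgap is to register a 
custom JdbcDialect that keeps Oracle NUMERIC/DECIMAL columns as DecimalType. The 
sketch below is illustrative rather than the actual patch; the object name is made 
up, and it assumes a non-negative scale that does not exceed the precision.

{code:scala}
import java.sql.Types

import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects}
import org.apache.spark.sql.types._

// Workaround sketch: map Oracle NUMERIC/DECIMAL columns to DecimalType instead
// of BooleanType/IntegerType. A dialect added via registerDialect is consulted
// ahead of the built-in OracleDialect for URLs it can handle.
object KeepOracleDecimalsDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean =
    url.startsWith("jdbc:oracle")

  override def getCatalystType(
      sqlType: Int,
      typeName: String,
      size: Int,
      md: MetadataBuilder): Option[DataType] = sqlType match {
    case Types.NUMERIC | Types.DECIMAL if size > 0 =>
      // Spark's JDBC reader puts the column's scale into the metadata
      // builder before calling the dialect.
      val scale = if (md != null) md.build().getLong("scale").toInt else 0
      Some(DecimalType(math.min(size, DecimalType.MAX_PRECISION), scale))
    case _ => None // defer to the default mapping for everything else
  }
}

JdbcDialects.registerDialect(KeepOracleDecimalsDialect)
{code}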


