[ https://issues.apache.org/jira/browse/SPARK-7196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14522855#comment-14522855 ]

Liang-Chi Hsieh edited comment on SPARK-7196 at 5/1/15 7:04 AM:
----------------------------------------------------------------

[~kgeis] I can't reproduce this problem either. Would you mind providing more 
information, such as the schema of amounts? Are you using 
"org.apache.spark.sql.parquet" as your defaultDataSourceName?


was (Author: viirya):
[~kgeis] I can't reproduce this problem either. Would you mind providing more 
information, such as the schema of amounts? Are you using 
{code}"org.apache.spark.sql.parquet"{code} as {code}defaultDataSourceName{code}?

> decimal precision lost when loading DataFrame from JDBC
> -------------------------------------------------------
>
>                 Key: SPARK-7196
>                 URL: https://issues.apache.org/jira/browse/SPARK-7196
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.3.1
>            Reporter: Ken Geis
>            Assignee: Liang-Chi Hsieh
>             Fix For: 1.3.2, 1.4.0
>
>
> I have a decimal database field defined with precision 10 and scale 2 (i.e. ##########.##). 
> When I load it into Spark via sqlContext.jdbc(..), the corresponding field in the 
> DataFrame has type DecimalType with precisionInfo None. Because of that loss of 
> precision information, SPARK-4176 is triggered when I try to .saveAsTable(..).



