Do you mind trying out a build from the master branch?

1.5.3 is a bit old.
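
In the meantime, it would help to narrow down where the precision is dropped.
Something like this in the spark-shell (my_table / my_column are placeholders
for your actual names):

    // Type Spark infers for the Hive column -- should be decimal(38,18)
    sqlContext.table("my_table").printSchema()

    // Result type of the aggregate: if it shows an integer type or a decimal
    // with scale 0 instead of decimal(38,18), precision is being lost during
    // analysis rather than at read time
    sqlContext.sql("SELECT SUM(my_column) FROM my_table").printSchema()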

On Wed, Apr 20, 2016 at 5:25 AM, FangFang Chen <lulynn_2015_sp...@163.com>
wrote:

> I found that Spark SQL loses precision and handles the data as int according
> to some rounding rule. Below is the data returned by the Hive shell and by
> Spark SQL, running the same SQL against the same Hive table:
> Hive:
> 0.4
> 0.5
> 1.8
> 0.4
> 0.49
> 1.5
> Spark SQL:
> 1
> 2
> 2
> The rule seems to be: when the fractional part is < 0.5 the value is rounded
> down, and when it is >= 0.5 it is rounded up.
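>
> That pattern would be consistent with half-up rounding to scale 0. A quick
> check in the Scala shell, just to illustrate the suspected rounding (my own
> illustration, not Spark's actual code path):
>
>     import java.math.{BigDecimal, RoundingMode}
>     // 0.4 -> 0, 0.5 -> 1, 1.8 -> 2, 0.4 -> 0, 0.49 -> 0, 1.5 -> 2
>     Seq("0.4", "0.5", "1.8", "0.4", "0.49", "1.5")
>       .map(s => new BigDecimal(s).setScale(0, RoundingMode.HALF_UP))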
>
> Is this a bug or a configuration issue? Any suggestions would be appreciated.
> Thanks
>
> On 2016-04-20 18:45, FangFang Chen <lulynn_2015_sp...@163.com> wrote:
>
> The output is:
> Spark SQL: 6828127
> Hive: 6980574.1269
>
> On 2016-04-20 18:06, FangFang Chen <lulynn_2015_sp...@163.com> wrote:
>
> Hi all,
> Please give some suggestions. Thanks
>
> With the following SQL, Spark SQL and Hive give different results. The query
> sums a decimal(38,18) column:
> Select sum(column) from table;
> where the column is defined as decimal(38,18).
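>
> For reference, on the Spark side I run the statement from the spark-shell
> like this ("table" and "column" above are placeholders; here my_table /
> my_column stand in for the real identifiers):
>
>     // my_column is declared decimal(38,18) in the Hive metastore
>     sqlContext.sql("SELECT SUM(my_column) FROM my_table").show()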
>
> Spark version: 1.5.3
> Hive version: 2.0.0
>
