Hi Wenchen,

thanks for your email. I agree on adding documentation for the decimal type,
but I am not sure what you mean regarding the behavior when writing: we are
not performing any automatic casting before writing, and if we want to do
that, I think we need a design for it.
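
For illustration, an explicit cast before writing could look something like
this (just a sketch with the DataFrame API; "df" and its column "d" are
hypothetical, and the target type follows your decimal(20, 0) example):

    import org.apache.spark.sql.functions.col
    import org.apache.spark.sql.types.DecimalType

    // Assuming df has a negative-scale decimal column "d": rewrite it to a
    // non-negative scale before writing it out. Values that do not fit the
    // target precision become null, which is part of why a design is needed.
    df.withColumn("d", col("d").cast(DecimalType(20, 0)))
      .write.parquet("/path/to/output")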

I am not sure it makes sense to set a minimum scale either. That would break
backward compatibility (albeit only for very weird use cases), so I wouldn't
do that.
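
For reference, a negative scale just shifts the unscaled value left by powers
of ten, so even an extreme scale is representable; here is a quick
illustration with java.math.BigDecimal, whose precision/scale semantics
Spark's Decimal follows:

    import java.math.BigDecimal

    // value = unscaledValue * 10^(-scale), so a negative scale multiplies
    // by a power of ten: unscaled 3 with scale -2 is 300 (printed as 3E+2).
    val threeHundred = BigDecimal.valueOf(3, -2)
    println(threeHundred)            // 3E+2
    println(threeHundred.precision)  // 1
    println(threeHundred.scale)      // -2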

Thanks,
Marco

On Mon, Jan 7, 2019 at 05:53 Wenchen Fan <cloud0...@gmail.com> wrote:

> I think we need to do this for backward compatibility, and according to
> the discussion in the doc, the SQL standard allows negative scales.
>
> To do this, I think the PR should also include a doc for the decimal type,
> like the definition of precision and scale (this one
> <https://stackoverflow.com/questions/35435691/bigdecimal-precision-and-scale>
> looks pretty good), the result type of decimal operations, and the
> behavior when writing out decimals (e.g. we can cast decimal(1, -20) to
> decimal(20, 0) before writing).
>
> Another question is, shall we set a min scale? e.g. shall we allow
> decimal(1, -10000000)?
>
> On Thu, Oct 25, 2018 at 9:49 PM Marco Gaido <marcogaid...@gmail.com>
> wrote:
>
>> Hi all,
>>
>> a bit more than one month ago, I sent a proposal for properly handling
>> decimals with negative scales in our operations. This is a long-standing
>> problem in our codebase, as we derived our rules from Hive and SQL Server,
>> where negative scales are forbidden, while in Spark they are not.
>>
>> The discussion has been stale for a while now, and there have been no new
>> comments on the design doc:
>> https://docs.google.com/document/d/17ScbMXJ83bO9lx8hB_jeJCSryhT9O_HDEcixDq0qmPk/edit#heading=h.x7062zmkubwm
>>
>> So I am writing this e-mail to check whether there are further comments
>> on it, or whether we can go ahead with the PR.
>>
>> Thanks,
>> Marco
>>
>
