[ 
https://issues.apache.org/jira/browse/KYLIN-1345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15129475#comment-15129475
 ] 

Edward Zhang commented on KYLIN-1345:
-------------------------------------

[~honma] I looked into BigDecimalSumAggregator; it will not overflow up to the 
maximum precision of 38 digits. The likely place for the overflow is 
BigDecimalSerializer, which is instantiated with a specific DataType (possibly 
including precision/scale, e.g. 6,2), while the aggregated value can exceed 
that declared precision:

java.lang.IllegalArgumentException: '12350000001' exceeds the expected length 
for type decimal(6,2)

        at 
org.apache.kylin.metadata.datatype.BigDecimalSerializer.serialize(BigDecimalSerializer.java:58)
        at 
org.apache.kylin.measure.basic.TestBigDecimalSumAggregatorOverflow.testOverflow(TestBigDecimalSumAggregatorOverflow.java:41)
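To make the failure mode concrete, here is a minimal, self-contained sketch (not Kylin code; `fitsPrecision` is a hypothetical helper that mimics the serializer's length check): every addend fits decimal(6,2), yet the running sum grows past 6 digits of precision and would be rejected at serialization time.

```java
import java.math.BigDecimal;

public class DecimalOverflowSketch {

    // Hypothetical helper (not Kylin's API): does the value fit within the
    // declared total precision of a decimal(p,s) column?
    static boolean fitsPrecision(BigDecimal value, int precision) {
        return value.precision() <= precision;
    }

    public static void main(String[] args) {
        BigDecimal sum = BigDecimal.ZERO;
        // Each addend fits decimal(6,2), i.e. at most 6 digits in total...
        for (int i = 0; i < 100; i++) {
            sum = sum.add(new BigDecimal("9999.99"));
        }
        // ...but the running sum (999999.00) has 8 digits of precision,
        // so a serializer enforcing decimal(6,2) would throw, much like
        // the IllegalArgumentException in the trace above.
        System.out.println(sum + " fits decimal(6,2): "
                + fitsPrecision(sum, 6));
    }
}
```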

So we probably need to find the place where the DataType is specified; it 
likely comes from DictionaryInfo.

Please let me know if you have the detailed stack trace; otherwise I will have 
to reproduce it from a M/R job.

> measure type expansion when dealing sum of decimal metric
> ---------------------------------------------------------
>
>                 Key: KYLIN-1345
>                 URL: https://issues.apache.org/jira/browse/KYLIN-1345
>             Project: Kylin
>          Issue Type: Improvement
>            Reporter: hongbin ma
>            Assignee: Edward Zhang
>              Labels: newbie
>
> suppose a metric column "price" is of type decimal(6,2); the sum aggregate 
> of it might exceed the maximum of decimal(6,2). Currently, for metric 
> aggregators we inherit the column's type from Hive. We should consider 
> auto-expanding the decimal type to decimal(18,4) (just an example) for such 
> cases



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
