jiangjiangtian commented on PR #6954:
URL: https://github.com/apache/incubator-gluten/pull/6954#issuecomment-2311827645

   > Hi @jiangjiangtian, as I recall, this adjustment handles the situation where an arithmetic operation is performed between a decimal and a number. In that case, the number is converted to a decimal whose precision and scale, as reported by Spark, are (38, 18), which is inconsistent with the actual value. E.g., in the case you mentioned, `Decimal(0.00000001)` should have a precision and scale of (8, 8) instead of (38, 18). To produce accurate results, we need additional logic to extract the accurate precision and scale that native computing requires.
   > 
   > Perhaps you could help confirm if it is the case for the example you 
provided. Thanks.
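   To illustrate the (8, 8) vs. (38, 18) point above: a tight (precision, scale) can be derived from the literal's `BigDecimal` form. This is only a sketch with a hypothetical helper, not Gluten's or Spark's actual API:

```java
import java.math.BigDecimal;

public class DecimalLiteralSketch {
    // Hypothetical helper (for illustration only): derive a tight
    // (precision, scale) for a decimal literal from its string form,
    // instead of the default (38, 18) reported for a converted number.
    static int[] tightPrecisionScale(String s) {
        BigDecimal bd = new BigDecimal(s);
        int scale = Math.max(bd.scale(), 0);              // normalize negative scales to 0
        int precision = Math.max(bd.precision(), scale);  // DecimalType requires precision >= scale
        return new int[] { precision, scale };
    }

    public static void main(String[] args) {
        int[] ps = tightPrecisionScale("0.00000001");
        System.out.println(ps[0] + ", " + ps[1]); // prints "8, 8"
    }
}
```

   For `0.00000001`, `BigDecimal.precision()` is 1 (one significant digit) and `scale()` is 8, so taking `max(precision, scale)` yields the (8, 8) mentioned above.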
   
   @rui-mo Thanks! It seems that Spark itself doesn't have this adjustment logic, and I'm not sure why Spark doesn't need it.
   
   In my case, the type of the literal `0.00000001` is `Decimal(19, 8)`, and after the adjustment the type is still `Decimal(19, 8)`: the string representation of the decimal contains `.`, so it is not a valid long number, and the function falls through to returning the original precision and scale. Perhaps we should not return the original precision and scale at the end of the function.
https://github.com/apache/incubator-gluten/blob/main/gluten-core/src/main/scala/org/apache/gluten/utils/DecimalArithmeticUtil.scala#L110
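   The fall-through described above can be sketched like this (assumed behavior for illustration, not the exact code at the linked line):

```java
public class LongCheckSketch {
    // Sketch of the bail-out described in the comment (assumed behavior):
    // if the decimal's string form does not parse as a long (e.g. because
    // it contains a '.'), the adjustment keeps the original type.
    static boolean isValidLong(String s) {
        try {
            Long.parseLong(s);
            return true;
        } catch (NumberFormatException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isValidLong("0.00000001")); // prints "false": contains '.'
        System.out.println(isValidLong("100"));        // prints "true"
    }
}
```

   Since `0.00000001` fails this check, the function never recomputes the precision and scale and simply returns the original `(19, 8)`.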


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

