liukun4515 commented on code in PR #4400:
URL: https://github.com/apache/arrow-datafusion/pull/4400#discussion_r1035733041


##########
datafusion/expr/src/type_coercion/binary.rs:
##########
@@ -287,8 +287,8 @@ fn get_wider_decimal_type(
         (DataType::Decimal128(p1, s1), DataType::Decimal128(p2, s2)) => {
             // max(s1, s2) + max(p1-s1, p2-s2), max(s1, s2)
             let s = *s1.max(s2);
-            let range = (p1 - s1).max(p2 - s2);
-            Some(create_decimal_type(range + s, s))
+            let range = (*p1 as i8 - s1).max(*p2 as i8 - s2);

Review Comment:
   Using a negative scale is fine within the Arrow ecosystem.
   But in DataFusion (a SQL-level system), it's better to stay consistent with
other SQL systems, such as Spark, PostgreSQL, and MySQL.
   I have not seen negative scales used in any other SQL-level system.
   
   In the PG
   ```
   postgres=# create table test(c1 decimal(10,-1));
   ERROR:  NUMERIC scale -1 must be between 0 and precision 10
   LINE 1: create table test(c1 decimal(10,-1));
   ```
   
   In the Spark:
   
   ```
   spark-sql> create table test_d(c1 decimal(10,-1));
   Error in query:
   extraneous input '-' expecting INTEGER_VALUE(line 1, pos 34)
   
   == SQL ==
   create table test_d(c1 decimal(10,-1))
   ```
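   For context, here is a minimal, self-contained sketch of the widening rule in the hunk above: the result scale is `max(s1, s2)` and the integer-digit range is `max(p1 - s1, p2 - s2)`. The helper name `wider_decimal` is hypothetical (the real code builds an arrow `DataType::Decimal128`); the point is that with arrow's `u8` precision and `i8` scale, `p - s` underflows for a negative scale unless precision is cast to `i8` first, which is what the change does.
   
   ```rust
   // Hypothetical helper mirroring get_wider_decimal_type's arithmetic.
   // Precision is u8 and scale is i8, matching arrow-rs Decimal128(p, s).
   fn wider_decimal(p1: u8, s1: i8, p2: u8, s2: i8) -> (u8, i8) {
       // Result scale: the larger of the two scales.
       let s = s1.max(s2);
       // Integer-digit range: cast precision to i8 before subtracting,
       // so a negative scale widens the range instead of underflowing
       // the way unsigned `p1 - s1` would.
       let range = (p1 as i8 - s1).max(p2 as i8 - s2);
       ((range + s) as u8, s)
   }
   
   fn main() {
       // decimal(10, 2) vs decimal(12, 4):
       // s = 4, range = max(10-2, 12-4) = 8, precision = 12
       assert_eq!(wider_decimal(10, 2, 12, 4), (12, 4));
       // A negative scale still widens the integer range:
       // s = 1, range = max(10-(-2), 5-1) = 12, precision = 13
       assert_eq!(wider_decimal(10, -2, 5, 1), (13, 1));
       println!("ok");
   }
   ```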


