Github user travishegner commented on a diff in the pull request:

    https://github.com/apache/spark/pull/8780#discussion_r41172742
  
    --- Diff: 
sql/catalyst/src/main/scala/org/apache/spark/sql/types/DecimalType.scala ---
    @@ -140,7 +140,12 @@ object DecimalType extends AbstractDataType {
       }
     
       private[sql] def bounded(precision: Int, scale: Int): DecimalType = {
    -    DecimalType(min(precision, MAX_PRECISION), min(scale, MAX_SCALE))
    --- End diff ---
    
    I will take your word on the risk involved, as I am very new to this project.
    
    From a layman's perspective, it seems that doing some basic checks when 
instantiating the type would make it more robust. If I understand correctly, 
a `precision <= 0` is not allowed, so this patch returns a /default/ decimal. 
Similarly, a `scale > precision` is not allowed, so it returns a decimal with 
the scale truncated to the precision. My thinking is that this will catch 
unexpected inputs while still behaving predictably, and users who instantiate 
these decimals in the intended ways will still get the same type back.
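
    To illustrate, here is a minimal standalone sketch of the kind of guards 
I am describing (this is not the actual patch; the `DEFAULT_PRECISION` and 
`DEFAULT_SCALE` values and the simplified `DecimalType` case class are 
placeholders I made up for the example):

    import scala.math.min

    object BoundedDecimalSketch {
      // Bounds as discussed above; the defaults below are hypothetical.
      val MAX_PRECISION = 38
      val MAX_SCALE = 38
      val DEFAULT_PRECISION = 10 // placeholder default, not from the patch
      val DEFAULT_SCALE = 0      // placeholder default, not from the patch

      // Simplified stand-in for org.apache.spark.sql.types.DecimalType.
      case class DecimalType(precision: Int, scale: Int)

      def bounded(precision: Int, scale: Int): DecimalType = {
        if (precision <= 0) {
          // precision <= 0 is not allowed, so fall back to a default decimal
          DecimalType(DEFAULT_PRECISION, DEFAULT_SCALE)
        } else {
          val p = min(precision, MAX_PRECISION)
          // scale > precision is not allowed, so truncate scale to precision
          val s = min(min(scale, MAX_SCALE), p)
          DecimalType(p, s)
        }
      }

      def main(args: Array[String]): Unit = {
        println(bounded(-1, 5))  // DecimalType(10,0): fell back to the default
        println(bounded(10, 20)) // DecimalType(10,10): scale truncated
      }
    }

    Unexpected inputs get normalized instead of producing an invalid type, 
while well-formed inputs pass through unchanged.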

