Github user gczsjdy commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20010#discussion_r157929494
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala ---
    @@ -158,11 +169,6 @@ object TypeCoercion {
         findTightestCommonType(t1, t2)
           .orElse(findWiderTypeForDecimal(t1, t2))
           .orElse(stringPromotion(t1, t2))
    -      .orElse((t1, t2) match {
    -        case (ArrayType(et1, containsNull1), ArrayType(et2, containsNull2)) =>
    -          findWiderTypeForTwo(et1, et2).map(ArrayType(_, containsNull1 || containsNull2))
    -        case _ => None
    -      })
    --- End diff --
    
    My suggestion: we define a new function, e.g. `findWiderTypeForArray`. This new function would provide the 'findWider' functionality as a counterpart to the 'findTightest' array handling (which is basically the first commit in your PR). That way we won't break the original `findTightest` semantics, and the code stays clean.
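
    To illustrate the idea, here is a minimal self-contained sketch of what such a helper could look like. The type hierarchy and the `findTightestCommonType` / `findWiderTypeForTwo` stand-ins below are deliberately simplified stand-ins, not Spark's actual implementations; only the shape of the proposed `findWiderTypeForArray` mirrors the logic removed in the diff.

    ```scala
    object TypeWidening {
      // Simplified stand-in for Spark's DataType hierarchy.
      sealed trait DataType
      case object IntType extends DataType
      case object LongType extends DataType
      case object StringType extends DataType
      case class ArrayType(elementType: DataType, containsNull: Boolean) extends DataType

      // Stand-in for findTightestCommonType: stays purely "tightest",
      // with no array-widening case mixed in (only Int -> Long here).
      def findTightestCommonType(t1: DataType, t2: DataType): Option[DataType] =
        (t1, t2) match {
          case (a, b) if a == b                          => Some(a)
          case (IntType, LongType) | (LongType, IntType) => Some(LongType)
          case _                                         => None
        }

      // Stand-in for findWiderTypeForTwo: tries tightest first, then the
      // dedicated array helper, then a crude stringPromotion fallback.
      def findWiderTypeForTwo(t1: DataType, t2: DataType): Option[DataType] =
        findTightestCommonType(t1, t2)
          .orElse(findWiderTypeForArray(t1, t2))
          .orElse(Some(StringType))

      // The suggested new function: all array-widening logic lives here,
      // recursing on element types and OR-ing nullability, so the
      // "tightest" function's semantics are left untouched.
      def findWiderTypeForArray(t1: DataType, t2: DataType): Option[DataType] =
        (t1, t2) match {
          case (ArrayType(et1, n1), ArrayType(et2, n2)) =>
            findWiderTypeForTwo(et1, et2).map(ArrayType(_, n1 || n2))
          case _ => None
        }
    }
    ```

    For example, widening `ArrayType(IntType, containsNull = false)` against `ArrayType(LongType, containsNull = true)` would go through `findWiderTypeForArray`, widen the elements to `LongType`, and keep `containsNull = true`.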


---
