Github user bdrillard commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20010#discussion_r157901873

    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala ---
    @@ -158,11 +169,6 @@ object TypeCoercion {
           findTightestCommonType(t1, t2)
             .orElse(findWiderTypeForDecimal(t1, t2))
             .orElse(stringPromotion(t1, t2))
    -        .orElse((t1, t2) match {
    -          case (ArrayType(et1, containsNull1), ArrayType(et2, containsNull2)) =>
    -            findWiderTypeForTwo(et1, et2).map(ArrayType(_, containsNull1 || containsNull2))
    -          case _ => None
    -        })
    --- End diff --

    Sure, I think that's possible. In order to handle cases both with and without string promotion, I think it may be necessary to add a boolean parameter, and then to handle the cases where widening the elementType/keyType and valueType may result in `None`; see https://github.com/apache/spark/pull/20010/files#diff-383a8cdd0a9c58cae68e0a79295520a3R105

    To support the minor change in the function signature of `findTightestCommonType`, I have to do some refactoring in the tests. Let me know if you think there's a cleaner implementation, but this seems to help localize similar concerns into `findTightestCommonType`.
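As a rough illustration of the idea being discussed, here is a minimal, self-contained sketch of how a boolean parameter could thread string promotion through a recursive widening function, with the element-type widening returning `None` propagated outward. The `DataT` hierarchy and `Widen` object below are hypothetical stand-ins, not Spark's actual `org.apache.spark.sql.types` classes or the real `TypeCoercion` code.

```scala
// Hypothetical, simplified stand-ins for Spark's DataType hierarchy.
sealed trait DataT
case object IntT extends DataT
case object LongT extends DataT
case object StringT extends DataT
case class ArrayT(element: DataT, containsNull: Boolean) extends DataT

object Widen {
  // Sketch: a widening function with a boolean controlling whether string
  // promotion participates. It recurses into array element types; if the
  // elements cannot be widened, the whole result is None.
  def findTightestCommonType(
      t1: DataT, t2: DataT, promoteStrings: Boolean): Option[DataT] =
    (t1, t2) match {
      case _ if t1 == t2 => Some(t1)
      case (IntT, LongT) | (LongT, IntT) => Some(LongT)
      case (StringT, _) | (_, StringT) if promoteStrings => Some(StringT)
      case (ArrayT(e1, n1), ArrayT(e2, n2)) =>
        // Element widening may fail, so map over the Option rather than
        // assume success; containsNull is OR-ed as in the removed diff lines.
        findTightestCommonType(e1, e2, promoteStrings).map(ArrayT(_, n1 || n2))
      case _ => None
    }
}
```

With `promoteStrings = false`, widening `ArrayT(StringT, false)` against `ArrayT(IntT, false)` yields `None`, whereas passing `true` promotes the element type to `StringT`; this is the kind of case split the boolean parameter is meant to localize.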