GitHub user gatorsmile commented on a diff in the pull request:

    https://github.com/apache/spark/pull/18460#discussion_r143238804
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala ---
    @@ -100,6 +101,17 @@ object TypeCoercion {
         case (_: TimestampType, _: DateType) | (_: DateType, _: TimestampType) =>
           Some(TimestampType)
     
    +    case (t1 @ StructType(fields1), t2 @ StructType(fields2)) if t1.sameType(t2) =>
    +      Some(StructType(fields1.zip(fields2).map { case (f1, f2) =>
    +        // Since `t1.sameType(t2)` is true, two StructTypes have the same DataType
    +        // except `name` (in case of `spark.sql.caseSensitive=false`) and `nullable`.
    +        // - Different names: use a lower case name because findTightestCommonType is commutative.
    +        // - Different nullabilities: `nullable` is true iff one of them is nullable.
    +        val name = if (f1.name == f2.name) f1.name else f1.name.toLowerCase(Locale.ROOT)
    --- End diff --
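    
    (Aside, not part of the PR: a minimal standalone sketch of the top-level merge rule shown in the hunk above. `mergeSameTypeStructs` is a hypothetical helper, not the Spark API; it assumes `t1.sameType(t2)` already holds and does not recurse into nested struct fields.)
    
        import java.util.Locale
        import org.apache.spark.sql.types._
        
        object StructMergeSketch {
          // Replicates the two rules from the diff: lower-case the name when the two
          // sides disagree (so the merge is commutative), and make the field nullable
          // iff either side is nullable.
          def mergeSameTypeStructs(t1: StructType, t2: StructType): StructType =
            StructType(t1.fields.zip(t2.fields).map { case (f1, f2) =>
              val name = if (f1.name == f2.name) f1.name else f1.name.toLowerCase(Locale.ROOT)
              StructField(name, f1.dataType, nullable = f1.nullable || f2.nullable)
            })
        
          def main(args: Array[String]): Unit = {
            val t1 = StructType(Seq(StructField("A", IntegerType, nullable = false)))
            val t2 = StructType(Seq(StructField("a", IntegerType, nullable = true)))
            // Prints StructType(StructField(a,IntegerType,true)): lower-cased name, nullable.
            println(mergeSameTypeStructs(t1, t2))
          }
        }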
    
    Why is the output nested column name lower case? Does Hive behave like this?
    
    In addition, could you add one more test to check whether we also respect the case-sensitivity conf when we resolve queries that reference the nested column? A hypothetical repro of that scenario is sketched below.
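    
    (Aside, not part of the PR: a hypothetical repro of the scenario asked about above, assuming `spark.sql.caseSensitive=false` and struct field names that differ only in case across the two sides of a union; names like `NestedCaseSensitivityRepro` are made up for illustration.)
    
        import org.apache.spark.sql.SparkSession
        import org.apache.spark.sql.functions.struct
        
        object NestedCaseSensitivityRepro {
          def main(args: Array[String]): Unit = {
            val spark = SparkSession.builder()
              .master("local[1]")
              .config("spark.sql.caseSensitive", "false")
              .getOrCreate()
            import spark.implicits._
        
            val df1 = Seq(1).toDF("A").select(struct($"A").as("s")) // s: struct<A: int>
            val df2 = Seq(2).toDF("a").select(struct($"a").as("s")) // s: struct<a: int>
        
            // The question: once the coerced struct field name is lower-cased, does a
            // reference to the nested column with a different case, `s.A`, still resolve?
            df1.union(df2).select($"s.A").show()
        
            spark.stop()
          }
        }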


---
