Github user mgaido91 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22713#discussion_r225091990
  
    --- Diff: 
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/RemoveRedundantAliasAndProjectSuite.scala
 ---
    @@ -124,4 +124,11 @@ class RemoveRedundantAliasAndProjectSuite extends 
PlanTest with PredicateHelper
         val expected = Subquery(relation.select('a as "a", 'b).where('b < 
10).select('a).analyze)
         comparePlans(optimized, expected)
       }
    +
    +  test("SPARK-25691: RemoveRedundantProject works also with different 
cases") {
    +    val relation = LocalRelation('a.int, 'b.int)
    +    val query = relation.select('A, 'b).analyzeCaseInsensitive
    +    val optimized = Optimize.execute(query)
    +    comparePlans(optimized, relation)
    --- End diff --
    
    thanks for your comment. Then let me focus here only on the view topic; 
we can open separate tickets for the other changes later.
    
    > For instance, I don't think this is a valid case.
    
    I see the concern about the possible breaking change, so I agree with not 
introducing this. My point is: then we are saying that Spark is never really 
case-insensitive, even when the case-sensitivity option is set to false, 
aren't we? Shouldn't data sources write/read columns in a case-insensitive way 
when case insensitivity is enabled?
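
    To illustrate the point, here is a minimal sketch (a hypothetical 
resolver, not Spark's actual analyzer code) of how a case-sensitivity flag 
changes whether a column reference like `'A` matches an attribute named `a`:

    ```scala
    object ResolveDemo {
      // Resolve a column name against the available attribute names,
      // honoring a case-sensitivity flag (hypothetical helper for illustration).
      def resolve(columns: Seq[String], name: String, caseSensitive: Boolean): Option[String] =
        if (caseSensitive) columns.find(_ == name)
        else columns.find(_.equalsIgnoreCase(name))

      def main(args: Array[String]): Unit = {
        val cols = Seq("a", "b")
        // Case-sensitive resolution: 'A does not match column a.
        println(resolve(cols, "A", caseSensitive = true))
        // Case-insensitive resolution: 'A resolves to column a.
        println(resolve(cols, "A", caseSensitive = false))
      }
    }
    ```

    Under case-insensitive resolution the projection `select('A, 'b)` in the 
test above resolves to the relation's own attributes, which is why the 
project can be removed entirely.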

