GitHub user budde commented on the issue:

    https://github.com/apache/spark/pull/16797
  
    Bringing back schema inference is certainly a much cleaner option, although 
I imagine doing it the old way would negate the performance improvements 
brought by #14690 for any dataset not written by Spark 2.1.
    
    Ideally, I think we would infer the schema only from the pruned partition 
list for tables we can't read a case-sensitive schema for. Unless I'm mistaken, 
this would have to happen during optimization of the logical plan, after the 
PruneFileSourcePartitions rule has been applied. My thought is that we could 
write a rule that passes the pruned file list to the file format's 
inferSchema() method and replaces the HadoopFsRelation's dataSchema with the 
result. I'm not very familiar with Catalyst, though, so I'm not sure whether 
changing the relation's schema during optimization will cause problems.
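    
    As a rough sketch of what I have in mind (the rule name and 
`needsInference` check are placeholders, and the signatures are from the Spark 
2.1 codebase as I remember them, so the exact APIs may differ):
    
    ```scala
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
    import org.apache.spark.sql.catalyst.rules.Rule
    import org.apache.spark.sql.execution.datasources.{HadoopFsRelation, LogicalRelation}

    /**
     * Hypothetical optimizer rule that runs after PruneFileSourcePartitions
     * and re-infers the data schema from only the surviving files.
     */
    case class InferSchemaFromPrunedFiles(session: SparkSession)
      extends Rule[LogicalPlan] {

      // Placeholder: true when we can't trust the catalog schema for
      // case-sensitive resolution (e.g. a table written by pre-2.1 Spark).
      private def needsInference(relation: HadoopFsRelation): Boolean = true

      override def apply(plan: LogicalPlan): LogicalPlan = plan transform {
        case rel @ LogicalRelation(fsRelation: HadoopFsRelation, _, _)
            if needsInference(fsRelation) =>
          // After partition pruning, listFiles(Nil) should return only the
          // files in the surviving partitions.
          val prunedFiles = fsRelation.location.listFiles(Nil).flatMap(_.files)
          fsRelation.fileFormat
            .inferSchema(session, fsRelation.options, prunedFiles)
            .map { inferred =>
              // Open question: attributes were already resolved against the
              // old dataSchema, so swapping it here may cause problems.
              rel.copy(relation = fsRelation.copy(dataSchema = inferred)(session))
            }
            .getOrElse(rel)
      }
    }
    ```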
    
    There is [an open PR to add support for case-insensitive schemas to 
Parquet](https://github.com/apache/parquet-mr/pull/210), which would be helpful 
here since it would provide a way to avoid schema inference when your Parquet 
files have case-sensitive fields but you don't care about case sensitivity when 
querying. Unfortunately, that PR seems to be more or less abandoned.
    
    Pinging @mallman, the author of #14690, to see if he has any input on this.

