Github user kmanamcheri commented on the issue:

    https://github.com/apache/spark/pull/22614
  
    @gatorsmile I have added the config option and an additional test.
    
    Here's the new behavior (a brief usage sketch follows the list):
    - Setting spark.sql.metastorePartitionPruningFallback to 'false' will 
ALWAYS throw an exception if partition-predicate pushdown to the Hive 
metastore fails (i.e., Hive throws an exception). This is suggested for 
queries where you want to fail fast and you know you have a large number of 
partitions.
    - Setting spark.sql.metastorePartitionPruningFallback to 'true' (the 
default) will ALWAYS catch the exception from Hive and retry by fetching all 
partitions. However, to be helpful to users, Spark will read the directSql 
config value from Hive and log clear messages about the next steps to take.
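    
    As a minimal sketch of how the flag would be used (not code from this PR): 
the config key below is the one proposed here and may differ once merged, and 
the table name `events` and partition column `dt` are hypothetical.
    
        import org.apache.spark.sql.SparkSession
        
        object PruningFallbackExample {
          def main(args: Array[String]): Unit = {
            val spark = SparkSession.builder()
              .appName("PruningFallbackExample")
              .enableHiveSupport()
              // 'false': fail fast if the metastore rejects the pushed-down
              // partition predicate, instead of fetching all partitions
              .config("spark.sql.metastorePartitionPruningFallback", "false")
              .getOrCreate()
        
            // With fallback disabled, a query over a partitioned Hive table
            // whose predicate cannot be evaluated by the metastore raises an
            // exception rather than silently listing every partition.
            spark.sql("SELECT * FROM events WHERE dt = '2018-10-02'").show()
        
            spark.stop()
          }
        }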
    
    @dongjoon-hyun @mallman @vanzin If this looks good, can we move forward 
with merging? Thanks a lot for all the comments and discussion.

