Github user gatorsmile commented on the issue:

    https://github.com/apache/spark/pull/22614
  
    Based on my understanding, the FB team's solution is to retry the 
following call multiple times:
    ```
    getPartitionsByFilterMethod.invoke(hive, table, filter)
      .asInstanceOf[JArrayList[Partition]]
    ```
    
    This really depends on which errors actually cause 
`getPartitionsByFilterMethod` to fail. When many concurrent users share the 
same metastore, exponential backoff with retries is very reasonable, since 
most of the errors are likely caused by timeouts or similar transient issues. 
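    For illustration, a minimal sketch of such a retry loop could look like 
the following. The helper name, retry count, and initial delay are all 
illustrative, not from this PR:
    ```
    import scala.util.control.NonFatal

    // Hypothetical helper: retries `call` up to `maxRetries` times,
    // doubling the sleep between attempts (exponential backoff).
    def withExponentialBackoff[T](maxRetries: Int, initialDelayMs: Long)(call: => T): T = {
      def attempt(retriesLeft: Int, delayMs: Long): T =
        try call catch {
          case NonFatal(_) if retriesLeft > 0 =>
            Thread.sleep(delayMs)
            attempt(retriesLeft - 1, delayMs * 2)
        }
      attempt(maxRetries, initialDelayMs)
    }

    // Usage around the reflective call above:
    // val parts = withExponentialBackoff(maxRetries = 3, initialDelayMs = 100) {
    //   getPartitionsByFilterMethod.invoke(hive, table, filter)
    //     .asInstanceOf[JArrayList[Partition]]
    // }
    ```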
    
    If it still fails after the retries, I would suggest either failing fast 
or deciding based on the conf value of 
`spark.sql.hive.metastorePartitionPruning.fallback.enabled`
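    In other words, the overall flow could look roughly like this, reusing 
the sketch above; `fallbackEnabled` (read from that conf) and 
`fetchAllPartitionsAndPruneClientSide` are placeholder names I am using for 
illustration:
    ```
    import scala.util.control.NonFatal

    // Hypothetical wiring of the suggestion: retry with backoff first,
    // then either fall back or fail fast based on the conf value.
    try {
      withExponentialBackoff(maxRetries = 3, initialDelayMs = 100) {
        getPartitionsByFilterMethod.invoke(hive, table, filter)
          .asInstanceOf[JArrayList[Partition]]
      }
    } catch {
      case NonFatal(e) =>
        if (fallbackEnabled) {
          // spark.sql.hive.metastorePartitionPruning.fallback.enabled = true
          fetchAllPartitionsAndPruneClientSide(hive, table)
        } else {
          throw e  // fail fast
        }
    }
    ```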

