GitHub user gatorsmile commented on the issue:

    https://github.com/apache/spark/pull/16809
  
    Found the design doc: https://docs.google.com/document/d/1h5SzfC5UsvIrRpeLNDKSMKrKJvohkkccFlXo-GBAwQQ/edit?ts=574f717f#
    
    > An alternative is to support a new command REFRESH path that invalidates
    > and refreshes all the cached data (and the associated metadata) for any
    > dataframe that contains the given data source path. This acts as an
    > explicit hammer without modifying the default behavior. Given that it’s
    > fairly late to make significant changes in 2.0, this option might be less
    > intrusive to the default behavior.
    
    Should we revisit the expected default behavior in 2.2?
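
    For concreteness, here is a minimal sketch of what the REFRESH-by-path
    behavior looks like from user code, assuming a hypothetical Parquet
    directory at /tmp/events. It uses `spark.catalog.refreshByPath`, the
    Catalog API counterpart of the SQL `REFRESH "path"` statement in Spark 2.x:

    ```scala
    import org.apache.spark.sql.SparkSession

    object RefreshByPathSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("refresh-by-path-sketch")
          .master("local[*]")
          .getOrCreate()

        // Hypothetical data source path; any cached Dataset reading from it
        // is a candidate for invalidation.
        val path = "/tmp/events"

        val df = spark.read.parquet(path)
        df.cache()
        df.count() // materializes the cache

        // If an external writer replaces the files under `path`, the cached
        // data is now stale. Explicitly invalidate and refresh everything
        // derived from that path (SQL equivalent: REFRESH "/tmp/events"):
        spark.catalog.refreshByPath(path)

        df.count() // re-reads the updated files instead of the stale cache

        spark.stop()
      }
    }
    ```

    The appeal of the explicit command is that callers who never issue it see
    no change at all, which is what makes it the less intrusive option
    described in the quoted design doc.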

