Github user liancheng commented on the issue:

    https://github.com/apache/spark/pull/13989
  
    One concern of mine is that the analyzed plan, optimized plan, and executed 
(physical) plan stored in `QueryExecution` are all lazy vals, which means they 
won't be re-analyzed/optimized/planned after the metadata of the corresponding 
logical plan is refreshed.
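
    To make the caching behavior concrete, here is a self-contained sketch. 
`QueryExecutionSketch` and `tableSizeInBytes` are hypothetical stand-ins, not 
Spark's real `QueryExecution`; the point is only that a Scala lazy val is 
computed once on first access and then cached for the lifetime of the instance:

    ```scala
    object LazyValStaleness {
      // Stand-in for mutable table metadata, e.g. the file listing of table `A`.
      var tableSizeInBytes: Long = 1024L // small: the planner would broadcast

      class QueryExecutionSketch {
        // Computed at most once, then cached; mirrors the
        // analyzed/optimizedPlan/executedPlan lazy vals in `QueryExecution`.
        lazy val physicalPlan: String =
          if (tableSizeInBytes < 10L * 1024 * 1024) "BroadcastHashJoin"
          else "SortMergeJoin"
      }

      def main(args: Array[String]): Unit = {
        val qe = new QueryExecutionSketch
        println(qe.physicalPlan) // BroadcastHashJoin
        tableSizeInBytes = 100L * 1024 * 1024 * 1024 // the table grew
        println(qe.physicalPlan) // still BroadcastHashJoin: the cached value is stale
      }
    }
    ```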
    
    Say we construct a DataFrame `df` that joins a small table `A` with a large 
table `B`. After calling `df.write.parquet(...)`, the analyzed, optimized, and 
executed plans of `df` are all computed. Since `A` is small, the planner may 
decide to broadcast it, and this decision is baked into the physical plan.
    
    Next, we add a bunch of files into the directory where table `A` lives so 
that it becomes very large, then call `df.refresh()` to refresh the logical 
plan. Now, if we call `df.write.parquet(...)` again, the query will probably 
crash, since the physical plan is not refreshed and still assumes that `A` is 
small enough to be broadcast.
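
    For concreteness, the scenario above would look roughly like this (a hedged 
sketch: the paths and the `id` join column are placeholders, `spark` is a 
`SparkSession`, and `df.refresh()` stands for the refresh API discussed in this 
PR):

    ```scala
    val a = spark.read.parquet("/path/to/small_table_A") // small at planning time
    val b = spark.read.parquet("/path/to/large_table_B")
    val df = a.join(b, "id")

    df.write.parquet("/path/to/out1") // plans are computed; `A` gets broadcast

    // ... many files are added under /path/to/small_table_A, so `A` is now huge ...

    df.refresh()                      // refreshes metadata of the logical plan only
    df.write.parquet("/path/to/out2") // the cached physical plan still tries to
                                      // broadcast the now-huge `A`
    ```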

