[ https://issues.apache.org/jira/browse/SPARK-22616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16266995#comment-16266995 ]

Andreas Maier commented on SPARK-22616:
---------------------------------------

I don't see how simply adding an option "blocking" with a default value of 
"false" is a breaking API change. All existing code would behave as before; 
only new code that explicitly sets e.g. df.cache(blocking=True) would see 
different behaviour. Or am I wrong?

> df.cache() / df.persist() should have an option blocking like df.unpersist()
> ----------------------------------------------------------------------------
>
>                 Key: SPARK-22616
>                 URL: https://issues.apache.org/jira/browse/SPARK-22616
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark, Spark Core
>    Affects Versions: 2.2.0
>            Reporter: Andreas Maier
>            Priority: Minor
>
> The method dataframe.unpersist() has an option blocking, which allows for 
> eager unpersisting of a dataframe. On the other hand, the methods 
> dataframe.cache() and dataframe.persist() don't have a comparable option. An 
> (undocumented) workaround for this is to call dataframe.count() directly 
> after cache() or persist(). But for API consistency and convenience it would 
> make sense to also give cache() and persist() the option blocking. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
