Github user nchammas commented on the issue:

    https://github.com/apache/spark/pull/14579
  
    > So there is no chaining requirement, and it will only work in a `with` statement.
    
    @MLnick - Couldn't we also create a scenario (like @holdenk did earlier) 
where a user does something like this?
    
    ```python
    persisted_rdd = persisted(rdd)
    persisted_rdd.map(...).filter(...).count()
    ```
    
    This would break pipelining too, no?
    
    And I think the expectation would be that it does not break pipelining, 
because existing common context managers in Python don't require that they 
_must_ be used in a `with` block.
    
    For example, `f = open(file)` works fine, as does `s = requests.Session()`, 
and the resulting objects have the same behavior as they would inside a `with` 
block.
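    
    To make that concrete, here's a minimal sketch (not the PR's actual code; 
`Resource` is just a made-up stand-in for a file or `Session` object) of the 
pattern `open()` and `requests.Session()` follow: the constructor returns a 
fully usable object, and `__enter__`/`__exit__` only add automatic cleanup on 
top.
    
    ```python
    class Resource:
        """Made-up example; stands in for a file object or Session."""

        def __init__(self):
            self.closed = False   # the real setup work happens at construction

        def read(self):
            return "data"

        def close(self):
            self.closed = True

        def __enter__(self):
            return self           # hand back the same, already-usable object

        def __exit__(self, exc_type, exc_val, exc_tb):
            self.close()          # the `with` block only adds cleanup on exit


    # Works fine outside a `with` block...
    r = Resource()
    print(r.read())
    r.close()

    # ...and behaves identically inside one, with close() called automatically.
    with Resource() as r:
        print(r.read())
    ```
    
    Presumably the expectation for `persisted()` would be the same: the object 
it returns behaves identically whether or not it's wrapped in a `with` block.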

