Github user tgravescs commented on the issue:

    https://github.com/apache/spark/pull/18388
  
    
    I don't think we reject any requests at this point, so yes, you could still run into this issue. Generally I think limiting the number of blocks you fetch at once will also address it, but it's not guaranteed. By default we get one chunk per block, so again I think the most requests you would have buffered would be spark.reducer.maxBlocksInFlightPerAddress * the number of reducers connected.
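
    To make that bound concrete, here is a rough sketch in Scala; the numbers are hypothetical, and only spark.reducer.maxBlocksInFlightPerAddress is a real Spark config:

        import org.apache.spark.SparkConf

        // Cap how many blocks a reducer fetches from one address at a time.
        val conf = new SparkConf()
          .set("spark.reducer.maxBlocksInFlightPerAddress", "20")

        // With one chunk per block, the requests buffered on a single
        // shuffle server are bounded roughly by this product:
        val maxBlocksInFlightPerAddress = 20
        val reducersConnected = 100  // hypothetical cluster size
        val maxBufferedRequests = maxBlocksInFlightPerAddress * reducersConnected
        // => at worst ~2000 buffered requests in this made-up example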
    
    I'm still OK with adding something like what you have here as a last-resort mechanism. I recommend just closing the connections rather than changing the API, so we keep backwards compatibility. If there is some issue with that, then perhaps we can make it a config that is off by default, as you suggest, but I think that makes it much harder for the broader community to benefit from it.
    You mentioned a problem with just doing a close; what was the problem?
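
    To be concrete about the close-instead-of-API-change option, here is a minimal sketch; it is illustrative only, not Spark's actual handler, and OverloadGuardHandler / isOverloaded are made-up names. The server simply drops the channel when it decides it is overloaded, so the wire protocol stays unchanged and the client just sees a failed fetch it can retry:

        import io.netty.channel.{ChannelHandlerContext, ChannelInboundHandlerAdapter}

        // Hypothetical last-resort guard: close overloaded connections rather
        // than adding a new backpressure message to the protocol.
        class OverloadGuardHandler(isOverloaded: () => Boolean)
            extends ChannelInboundHandlerAdapter {

          override def channelRead(ctx: ChannelHandlerContext, msg: Any): Unit = {
            if (isOverloaded()) {
              ctx.close()  // the client sees a dropped connection and retries
            } else {
              ctx.fireChannelRead(msg)
            }
          }
        }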
    
    The flow-control part I mentioned above really only applies when the outbound buffers are taking too much memory. I think doing all three of these would be good.
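
    For the outbound-buffer case, Netty already exposes the hook we would need: once the write buffer crosses the channel's high water mark, isWritable() turns false until it drains below the low water mark. A sketch of that idea (the water-mark values and function names are made up):

        import io.netty.channel.{Channel, WriteBufferWaterMark}

        // Hypothetical limits: stop queuing once 1 MB is buffered outbound,
        // resume when it drains back under 512 KB.
        def configureBackpressure(channel: Channel): Unit = {
          channel.config.setWriteBufferWaterMark(
            new WriteBufferWaterMark(512 * 1024, 1024 * 1024))
        }

        def sendChunk(channel: Channel, chunk: AnyRef): Unit = {
          if (channel.isWritable) {
            channel.writeAndFlush(chunk)
          } else {
            // Above the high water mark: defer the write instead of
            // buffering yet more chunks in memory.
          }
        }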

