GitHub user sitalkedia commented on the issue:

    https://github.com/apache/spark/pull/17297
  
    cc @kayousterhout - This addresses your earlier comment about 
https://github.com/apache/spark/pull/12436 ignoring fetch failures from stale 
map output. I have addressed the issue by recording an epoch for each map 
output registered: if the failing task's epoch is smaller than the epoch of 
the map output, we can ignore the fetch failure. This also handles the epoch 
change triggered by executor loss, i.e. when the executor that produced a 
task's shuffle map output is gone, as pointed out by @mridulm.
    
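    Roughly, the check boils down to comparing the reporting task's epoch 
with the epoch recorded for the map output it failed to fetch. A minimal 
sketch of the idea (the method name below is illustrative, not the actual 
code in this PR):

```scala
// Sketch only: illustrative name, not the real MapOutputTracker/DAGScheduler code.
// Each registered map output carries the epoch at which it was registered; a
// fetch failure reported by a task launched before that epoch is stale and can
// be ignored rather than failing the stage again.
def shouldIgnoreFetchFailure(taskEpoch: Long, mapOutputEpoch: Long): Boolean =
  taskEpoch < mapOutputEpoch
```
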
    Let me know what you think of the approach. 

