Hi all,
We have set up dynamic resource allocation on Spark on YARN; we are
currently running Spark 1.5.
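For context, this is roughly how we enabled it in spark-defaults.conf (the
executor bounds below are illustrative, not our actual values):

    # dynamic allocation requires the external shuffle service
    spark.dynamicAllocation.enabled        true
    spark.shuffle.service.enabled          true
    # illustrative bounds only
    spark.dynamicAllocation.minExecutors   2
    spark.dynamicAllocation.maxExecutors   20

(The spark_shuffle auxiliary service is also registered in each
NodeManager's yarn-site.xml, since the shuffle service runs inside the
NodeManager process.)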
One executor tries to fetch shuffle data from the shuffle service on
another NodeManager; when that NodeManager crashes, the executor gets
stuck in that state until the crashed NodeManager is launched again.
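(If it matters: as far as I understand, the executor's fetch retries
against the dead shuffle service are governed by these properties, which I
believe default to the values shown; we have not tuned them ourselves:

    spark.shuffle.io.maxRetries   3
    spark.shuffle.io.retryWait    5s
)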

I just want to know whether Spark will resubmit the already-completed
tasks if the tasks executing later cannot find their shuffle output.

Thanks for any explanation.

-- 
Bing Jiang
