Github user LiShuMing commented on the issue:
https://github.com/apache/spark/pull/18905
See another approach to solving this problem:
https://github.com/apache/spark/pull/19032 — I will close this PR.
Thanks @jerryshao @tgravescs.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/18905
The recovery path returned by YARN is supposed to be reliable, and if it
isn't working then the NM itself shouldn't run. So in general you should just
use that if you want Spark to be able to recover.
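For context, the recovery path being discussed is the one YARN hands to auxiliary services (such as Spark's external shuffle service) when NodeManager recovery is enabled. A typical yarn-site.xml setup looks like the sketch below; the directory path is illustrative, not prescriptive:

```xml
<!-- yarn-site.xml: enable NodeManager recovery. Auxiliary services,
     including Spark's external shuffle service, receive this recovery
     path from the NM and persist their state under it. -->
<property>
  <name>yarn.nodemanager.recovery.enabled</name>
  <value>true</value>
</property>
<property>
  <!-- Illustrative path; must be on reliable local storage. -->
  <name>yarn.nodemanager.recovery.dir</name>
  <value>/var/lib/hadoop-yarn/nm-recovery</value>
</property>
```

The point above is that if this directory is broken, the NM itself is expected to fail rather than each auxiliary service working around it.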
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18905
I have two questions about the fix:
1. Is it a good idea to change the recovery path to another directory? Since
the recovery path is configured by the user or determined by YARN, maybe
YARN has so
Github user LiShuMing commented on the issue:
https://github.com/apache/spark/pull/18905
ping @jerryshao
I found a method for checking disks in Hadoop:
https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DiskChecker
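To illustrate the kind of validation that utility performs: a minimal, self-contained sketch approximating the checks in Hadoop's `DiskChecker.checkDir` (the class name and exact semantics here are an approximation, not a copy of the Hadoop implementation):

```java
import java.io.File;
import java.io.IOException;

public class DiskCheck {
    /**
     * Approximates Hadoop's DiskChecker.checkDir: the directory must
     * exist (or be creatable), actually be a directory, and be
     * readable, writable, and searchable. Throws IOException on any
     * failed check so callers can treat the disk as bad.
     */
    public static void checkDir(File dir) throws IOException {
        if (!dir.exists() && !dir.mkdirs()) {
            throw new IOException("Cannot create directory: " + dir);
        }
        if (!dir.isDirectory()) {
            throw new IOException("Not a directory: " + dir);
        }
        if (!dir.canRead()) {
            throw new IOException("Directory is not readable: " + dir);
        }
        if (!dir.canWrite()) {
            throw new IOException("Directory is not writable: " + dir);
        }
        if (!dir.canExecute()) {
            throw new IOException("Directory is not searchable: " + dir);
        }
    }

    public static void main(String[] args) throws IOException {
        File tmp = new File(System.getProperty("java.io.tmpdir"),
                            "disk-check-demo");
        checkDir(tmp); // throws if the directory is unusable
        System.out.println("OK: " + tmp.getPath());
    }
}
```

A check like this would let the service fail fast on a bad recovery directory instead of failing later on an I/O error.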
Github user LiShuMing commented on the issue:
https://github.com/apache/spark/pull/18905
Sorry, I have been busy recently; I will update it today...
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18905
@LiShuMing any update on this?
Github user LiShuMing commented on the issue:
https://github.com/apache/spark/pull/18905
@jerryshao Thanks for your replies! I will do the following:
1. "Is it good to change to other directories (is YARN internally relying
on it)?"
I think the recovery path (local variable