GitHub user CodingCat opened a pull request:
https://github.com/apache/spark/pull/35
[SPARK-1104] kill Process in workerThread
As reported by @pwendell in https://spark-project.atlassian.net/browse/SPARK-1104:
"Sometimes due to large shuffles executors will take a long
time shutting down. In particular this can happen if large numbers of shuffle
files are around (this will be alleviated by SPARK-1103, but nonetheless...).
The symptom is you have DEAD workers sitting around in the UI and the
existing workers keep trying to re-register but can't because they've been
assumed dead."
In this patch, I handle the InterruptedException in the workerThread of
ExecutorRunner and perform process.destroy() and process.waitFor() there, so
that these blocking calls stall only the workerThread instead of the Worker
actor.
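For illustration, here is a minimal sketch of the idea (not the actual Spark
source; ExecutorRunnerSketch and its members are hypothetical names): the
Worker actor's kill path only interrupts the thread, and the blocking
destroy()/waitFor() run inside the workerThread's InterruptedException
handler:

    // Hypothetical sketch: the actor never calls the blocking Process
    // methods itself; it just interrupts the thread that owns the process.
    class ExecutorRunnerSketch(builder: ProcessBuilder) {
      @volatile private var process: Process = _

      private val workerThread = new Thread("ExecutorRunner-worker") {
        override def run(): Unit = {
          try {
            process = builder.start()
            // Normal path: block here until the executor process exits.
            val exitCode = process.waitFor()
            // ... report exitCode back to the Worker ...
          } catch {
            case _: InterruptedException =>
              // Blocking cleanup happens on this thread, so a slow
              // shutdown never stalls the Worker actor.
              if (process != null) {
                process.destroy()
                process.waitFor()
              }
          }
        }
      }

      def start(): Unit = workerThread.start()

      // Called from the Worker actor: returns immediately.
      def kill(): Unit = workerThread.interrupt()
    }

Because kill() returns as soon as interrupt() is delivered, the Worker actor
stays responsive and can keep re-registering/heartbeating to the Master even
while a large shuffle cleanup is still in progress.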
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/CodingCat/spark SPARK-1104
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/35.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #35
----
commit 48a88d9c2cee13410c2a7a7891566fae6609fcd8
Author: CodingCat <[email protected]>
Date: 2014-02-27T15:22:30Z
kill Process in workerThread
----