Github user kanzhang commented on the pull request:
https://github.com/apache/spark/pull/306#issuecomment-40303520
test
Github user kanzhang commented on the pull request:
https://github.com/apache/spark/pull/306#issuecomment-40124236
Could someone review and merge this patch?
Github user kanzhang commented on the pull request:
https://github.com/apache/spark/pull/306#issuecomment-39409752
Code comments updated.
Any more review comments? Thx.
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/306#discussion_r11237180
--- Diff:
core/src/main/scala/org/apache/spark/deploy/worker/ExecutorRunner.scala ---
@@ -142,7 +142,7 @@ private[spark] class ExecutorRunner(
//
Github user kanzhang commented on a diff in the pull request:
https://github.com/apache/spark/pull/306#discussion_r11237143
--- Diff:
core/src/main/scala/org/apache/spark/deploy/worker/ExecutorRunner.scala ---
@@ -142,7 +142,7 @@ private[spark] class ExecutorRunner(
// l
Github user kanzhang commented on a diff in the pull request:
https://github.com/apache/spark/pull/306#discussion_r11236480
--- Diff:
core/src/main/scala/org/apache/spark/deploy/worker/ExecutorRunner.scala ---
@@ -142,7 +142,7 @@ private[spark] class ExecutorRunner(
// l
Github user kanzhang commented on the pull request:
https://github.com/apache/spark/pull/306#issuecomment-39404001
Thanks for your comments. I just did a test of the ctrl+c case using the Spark
Shell. When I used ctrl+c to exit the shell, both the Executor log and the Master
log showed that they
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/306#discussion_r11235592
--- Diff:
core/src/main/scala/org/apache/spark/deploy/worker/ExecutorRunner.scala ---
@@ -142,7 +142,7 @@ private[spark] class ExecutorRunner(
//
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/306#discussion_r11235339
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -290,10 +290,11 @@ private[spark] class Master(
appInfo.rem
Github user kanzhang commented on a diff in the pull request:
https://github.com/apache/spark/pull/306#discussion_r11235230
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -290,10 +290,11 @@ private[spark] class Master(
appInfo.remo
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/306#issuecomment-39400244
@kanzhang I think the tricky part is just here: a normally exited program
seems the same as one that was interrupted during its run, though I think
your PR c
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/306#issuecomment-39399935
Ah, I see, your test program SparkPi just does everything in the standard
way: it calls new SparkContext and then sc.stop(),
so everything seems straightforward.
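The standard lifecycle being referred to looks roughly like the sketch below;
the master URL, app name, and job are placeholders, not the actual SparkPi
source.

    import org.apache.spark.SparkContext

    object StandardLifecycle {
      def main(args: Array[String]): Unit = {
        // Create the context explicitly against a standalone master.
        val sc = new SparkContext("spark://host:7077", "StandardLifecycle")
        val sum = sc.parallelize(1 to 1000).reduce(_ + _)
        println(s"sum = $sum")
        // Explicit shutdown: the driver tells its executors to exit cleanly.
        sc.stop()
      }
    }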
Github user kanzhang commented on the pull request:
https://github.com/apache/spark/pull/306#issuecomment-39399983
That's ok. If the driver doesn't instruct the executor to shut down, the
process won't exit, right? The executor will eventually be KILLED by the Master
when the app disassociates.
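A tiny self-contained model of that kill path, using stand-in types rather
than Spark's actual Master code: once the app disassociates, every remaining
executor is marked KILLED.

    // Stand-in model, not the real Master.scala: on app disassociation,
    // the Master walks the app's executors and kills each one.
    object DisassociationModel {
      final case class Executor(id: Int, var killed: Boolean = false)

      def removeApplication(executors: Seq[Executor]): Unit =
        executors.foreach { exec =>
          // The real Master would send KillExecutor to the owning worker here.
          exec.killed = true
        }

      def main(args: Array[String]): Unit = {
        val execs = Seq(Executor(0), Executor(1))
        removeApplication(execs)
        assert(execs.forall(_.killed))
        println("all executors killed after app disassociation")
      }
    }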
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/306#issuecomment-39399719
So you are assuming that the user always explicitly calls sc.stop(), but
that's not always true; even the example program in Spark's documentation does
not do that.
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/306#discussion_r11234539
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -290,10 +290,11 @@ private[spark] class Master(
appInfo.rem
Github user kanzhang commented on a diff in the pull request:
https://github.com/apache/spark/pull/306#discussion_r11234468
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -290,10 +290,11 @@ private[spark] class Master(
appInfo.remo
Github user kanzhang commented on the pull request:
https://github.com/apache/spark/pull/306#issuecomment-39398593
When the driver sends StopExecutor to the executor (log msg "Driver commanded a
shutdown"), it also ends up here. So whether it is a normal exit depends on the
exit code. See my
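Concretely, the decision being described can be modeled as below; the
enumeration is a stand-in for Spark's ExecutorState, with EXITED being the
new state this PR proposes.

    // Stand-in model of the exit-code decision, not the real ExecutorRunner:
    // exit code 0 (driver-commanded shutdown) is a normal EXITED,
    // anything else is a FAILED.
    object ExitCodeModel {
      object ExecutorState extends Enumeration {
        val RUNNING, EXITED, FAILED, KILLED = Value
      }

      def stateFor(exitCode: Int): ExecutorState.Value =
        if (exitCode == 0) ExecutorState.EXITED else ExecutorState.FAILED

      def main(args: Array[String]): Unit = {
        println(stateFor(0))   // EXITED
        println(stateFor(137)) // FAILED (e.g., killed by signal)
      }
    }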
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/306#discussion_r11234306
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -290,10 +290,11 @@ private[spark] class Master(
appInfo.rem
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/306#discussion_r11233840
--- Diff:
core/src/main/scala/org/apache/spark/deploy/worker/ExecutorRunner.scala ---
@@ -142,7 +142,7 @@ private[spark] class ExecutorRunner(
//
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/306#issuecomment-39391986
Can one of the admins verify this patch?
GitHub user kanzhang opened a pull request:
https://github.com/apache/spark/pull/306
SPARK-1118
adding EXITED executor state and not relaunching normally exited executors
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/kanzhang/s
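The behavior in the title can be summarized with a small stand-in model (not
the actual diff, which is truncated above): an executor that finishes in the
new EXITED state is removed but never rescheduled, while abnormal finishes
still trigger a relaunch.

    // Stand-in model of the relaunch decision, not the actual Master patch.
    object RelaunchModel {
      sealed trait State
      case object EXITED extends State // normal exit: new state from this PR
      case object FAILED extends State
      case object KILLED extends State

      def shouldRelaunch(finalState: State): Boolean = finalState match {
        case EXITED => false // normally exited executors are not relaunched
        case _      => true  // failures are rescheduled on another worker
      }

      def main(args: Array[String]): Unit = {
        assert(!shouldRelaunch(EXITED))
        assert(shouldRelaunch(FAILED))
        println("relaunch decisions OK")
      }
    }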