Github user GraceH commented on the issue:
https://github.com/apache/spark/pull/7927
@sprite331 As I understand it, this patch tries to catch certain
exceptions when the user enables dynamic allocation. One quick workaround
is to disable dynamic allocation if possible.
Github user GraceH commented on the issue:
https://github.com/apache/spark/pull/14683
Thanks @srowen
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user GraceH commented on the issue:
https://github.com/apache/spark/pull/14683
@srowen, I have revised that accordingly.
---
Github user GraceH commented on the issue:
https://github.com/apache/spark/pull/14683
@srowen I have updated the patch accordingly. Please let me know your
comments; if anything is missing, please let me know.
---
Github user GraceH commented on the issue:
https://github.com/apache/spark/pull/14683
Sorry about my mistake. I will re-post one.
---
Github user GraceH commented on the issue:
https://github.com/apache/spark/pull/14683
Oops. @srowen, I thought the previous pull request had been closed without
being merged; that is why I re-posted it here.
Do you mean we just need the documentation here?
---
Github user GraceH commented on the issue:
https://github.com/apache/spark/pull/14683
@srowen Here we go. Please feel free to let me know your comments.
---
GitHub user GraceH opened a pull request:
https://github.com/apache/spark/pull/14683
[SPARK-16968]Add additional options in jdbc when creating a new table
## What changes were proposed in this pull request?
(Please fill in changes proposed in this fix)
In the PR, we
Github user GraceH commented on the issue:
https://github.com/apache/spark/pull/14559
Hi @srowen @rxin, sorry for the late response. I have added the documentation part.
https://github.com/GraceH/spark/commit/8360c2911b70aa628f8edba593e3764d3b07ca55
Shall I raise a new PR?
---
Github user GraceH commented on the issue:
https://github.com/apache/spark/pull/14559
Sure, both are OK with me. I will document those options.
---
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/14559#discussion_r74550111
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -423,6 +423,10 @@ final class DataFrameWriter[T] private[sql](ds
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/14559#discussion_r74546716
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -423,6 +423,10 @@ final class DataFrameWriter[T] private[sql](ds
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/14559#discussion_r74542628
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCOptions.scala
---
@@ -20,14 +20,21 @@ package
Github user GraceH commented on the issue:
https://github.com/apache/spark/pull/14559
Thanks all. I have added the unit test in JDBCWriterSuite. If there are any
further comments, please feel free to let me know.
BTW, alternatively we can point the user to check JDBCOptions for further
Github user GraceH commented on the issue:
https://github.com/apache/spark/pull/14559
@HyukjinKwon and @srowen, here is the initial proposal. Please let me know
your comments. I will refine it with a unit test later.
BTW, the readwriter.py calls the high-level API of jdbc(url
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/14559#discussion_r74278231
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -447,7 +447,11 @@ final class DataFrameWriter[T] private[sql](ds
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/14559#discussion_r74188842
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -447,7 +447,11 @@ final class DataFrameWriter[T] private[sql](ds
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/14559#discussion_r74187289
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -447,7 +447,11 @@ final class DataFrameWriter[T] private[sql](ds
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/14559#discussion_r74029250
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -447,7 +447,16 @@ final class DataFrameWriter[T] private[sql](ds
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/14559#discussion_r74028584
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -447,7 +447,16 @@ final class DataFrameWriter[T] private[sql](ds
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/14559#discussion_r74027475
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -447,7 +447,16 @@ final class DataFrameWriter[T] private[sql](ds
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/14559#discussion_r74026903
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -447,7 +447,16 @@ final class DataFrameWriter[T] private[sql](ds
GitHub user GraceH opened a pull request:
https://github.com/apache/spark/pull/14559
[SPARK-16968]Add additional options in jdbc when creating a new table
## What changes were proposed in this pull request?
In the PR, we just allow the user to add additional options when
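For context, the kind of usage this PR targets might look like the following sketch. The option name `createTableOptions`, the MySQL URL, and the table name are assumptions for illustration, not necessarily the final API:

```scala
// Hedged sketch: pass extra DDL clauses to the JDBC writer so they are
// appended to the CREATE TABLE statement when a new table is created.
import org.apache.spark.sql.SparkSession

object JdbcCreateTableOptionsExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("jdbc-options").getOrCreate()
    val df = spark.range(10).toDF("id")

    df.write
      .format("jdbc")
      .option("url", "jdbc:mysql://localhost:3306/test") // assumed endpoint
      .option("dbtable", "people")                       // assumed table name
      // Extra DDL appended when the table does not exist yet:
      .option("createTableOptions", "ENGINE=InnoDB DEFAULT CHARSET=utf8")
      .save()
  }
}
```

The key point is that the extra options only matter on the create path; writes to an existing table would ignore them.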
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/9796#issuecomment-165957323
@andreor14 thanks.
---
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/9796#issuecomment-164991082
I leave my thoughts under GraceH#2. Thanks.
---
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/9796#discussion_r47732666
--- Diff:
core/src/test/scala/org/apache/spark/deploy/StandaloneDynamicAllocationSuite.scala
---
@@ -386,17 +386,21 @@ class StandaloneDynamicAllocationSuite
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/9796#issuecomment-163839364
Thanks @zsxwing. The patch seems to pass all tests.
---
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/9796#issuecomment-159824727
retest this please
---
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/9796#discussion_r45939198
--- Diff:
core/src/test/scala/org/apache/spark/deploy/StandaloneDynamicAllocationSuite.scala
---
@@ -386,17 +386,21 @@ class StandaloneDynamicAllocationSuite
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/9796#discussion_r45939112
--- Diff:
core/src/test/scala/org/apache/spark/deploy/StandaloneDynamicAllocationSuite.scala
---
@@ -386,17 +386,21 @@ class StandaloneDynamicAllocationSuite
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/9796#discussion_r45937858
--- Diff:
core/src/test/scala/org/apache/spark/deploy/StandaloneDynamicAllocationSuite.scala
---
@@ -386,17 +386,21 @@ class StandaloneDynamicAllocationSuite
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/9796#issuecomment-159778011
Yes. The replacement is finished.
---
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/9796#issuecomment-159776246
I have added the test case
https://github.com/GraceH/spark/commit/2e4884c30d9edb0a366e9138cbad8772c5645c5d.
Please let me know your comments.
---
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/9796#issuecomment-159774039
@andrewor14 Yes, you are right. Meanwhile, it seems the original
implementation waits for a while to check whether the replacement is there.
According to you
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/9796#discussion_r45685308
--- Diff:
core/src/test/scala/org/apache/spark/deploy/StandaloneDynamicAllocationSuite.scala
---
@@ -395,8 +395,8 @@ class StandaloneDynamicAllocationSuite
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/9796#discussion_r45418928
--- Diff:
core/src/test/scala/org/apache/spark/deploy/StandaloneDynamicAllocationSuite.scala
---
@@ -395,8 +395,8 @@ class StandaloneDynamicAllocationSuite
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/9796#discussion_r45418211
--- Diff:
core/src/test/scala/org/apache/spark/deploy/StandaloneDynamicAllocationSuite.scala
---
@@ -395,8 +395,8 @@ class StandaloneDynamicAllocationSuite
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/9796#discussion_r45412931
--- Diff:
core/src/test/scala/org/apache/spark/deploy/StandaloneDynamicAllocationSuite.scala
---
@@ -395,8 +395,8 @@ class StandaloneDynamicAllocationSuite
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/9796#discussion_r45284000
--- Diff:
core/src/test/scala/org/apache/spark/deploy/StandaloneDynamicAllocationSuite.scala
---
@@ -395,8 +395,8 @@ class StandaloneDynamicAllocationSuite
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/9796#discussion_r45283718
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
---
@@ -462,7 +463,8 @@ class
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/9796#discussion_r45283688
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -408,7 +408,8 @@ private[spark] class ExecutorAllocationManager
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/9796#discussion_r45283707
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
---
@@ -450,7 +450,8 @@ class
GitHub user GraceH opened a pull request:
https://github.com/apache/spark/pull/9796
Return "false" if there is nothing to kill in killExecutors
In the discussion (SPARK-9552), we proposed a force kill in `killExecutors`.
But if there is nothing to kill, it will return
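The proposed semantics can be sketched roughly as follows; this is a simplified illustration, not the actual CoarseGrainedSchedulerBackend code, and the class and its fields are assumptions:

```scala
// Simplified sketch: killExecutors reports failure (false) when there is
// nothing to kill, instead of unconditionally signalling success.
class SchedulerBackendSketch(knownExecutors: Set[String]) {
  def killExecutors(executorIds: Seq[String]): Boolean = {
    // Only executors the backend actually knows about can be killed.
    val executorsToKill = executorIds.filter(knownExecutors.contains)
    if (executorsToKill.isEmpty) {
      false // nothing to kill: tell the caller the request had no effect
    } else {
      // ... here the real backend would ask the cluster manager to kill them ...
      true
    }
  }
}
```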
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/7888#issuecomment-157604487
@andrewor14 @vanzin Thanks all. I will follow that by creating a new patch
under SPARK-9552.
---
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/7888#issuecomment-157307254
@andrewor14 My bad. Since `val executors = getExecutorIds(sc)` is fetched
beforehand, we should not kill `executors.head` again and again (it
should be
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/7888#issuecomment-157227255
@vanzin Also, thanks for helping me clarify the thoughts on the
acknowledgement part.
---
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/7888#issuecomment-157224591
@andrewor14 That is really a good way to mock the busy status. Thanks a
lot; I learned a lot from that.
---
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/7888#discussion_r45008834
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
---
@@ -429,7 +433,13 @@ class
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/7888#discussion_r45008783
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
---
@@ -442,7 +452,7 @@ class
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/7888#discussion_r45008710
--- Diff:
core/src/test/scala/org/apache/spark/deploy/StandaloneDynamicAllocationSuite.scala
---
@@ -404,6 +404,33 @@ class StandaloneDynamicAllocationSuite
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/7888#discussion_r45008692
--- Diff:
core/src/test/scala/org/apache/spark/deploy/StandaloneDynamicAllocationSuite.scala
---
@@ -455,6 +482,19 @@ class StandaloneDynamicAllocationSuite
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/7888#discussion_r45008522
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala ---
@@ -87,8 +87,8 @@ private[spark] class TaskSchedulerImpl
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/7888#issuecomment-156315327
@vanzin and @andrewor14, please let me know your further inputs. Sorry
for the several rounds of amendments.
---
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/7888#discussion_r44746716
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -1489,7 +1489,7 @@ class SparkContext(config: SparkConf) extends Logging
with
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/7888#issuecomment-156296415
@vanzin My bad. I changed the code a little bit, as below. Only force ==
true changes the semantics, i.e., it returns false when
`executorsToKill.isEmpty
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/7888#issuecomment-156109836
@andrewor14, @vanzin
1. I have changed `sparkcontext.killExecutors` to use `force = true`.
2. And kept the current public APIs.
3. Added a simple unit test to test
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/7888#issuecomment-156078896
@vanzin After changing the semantics of `killExecutors()`, certain unit
tests fail. Since the original expectation is that even when
`executorsToKill.isEmpty`, it will
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/7888#discussion_r44612985
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala ---
@@ -341,7 +344,10 @@ private[spark] class TaskSchedulerImpl
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/7888#discussion_r44612998
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala ---
@@ -88,7 +88,8 @@ private[spark] class TaskSchedulerImpl(
val
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/7888#discussion_r44612916
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
---
@@ -419,17 +420,32 @@ class
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/7888#discussion_r44612960
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
---
@@ -410,8 +410,9 @@ class
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/7888#issuecomment-155963074
@andrewor14 Here is the problem: since we didn't provide a public API with
force control, it is impossible to add `force = true` into
`b.killExecutors(executorIds)`
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/7888#issuecomment-155689636
@vanzin @andrewor14 I have changed the code accordingly. Please let me know
your comments. Meanwhile, I will try to add unit tests.
---
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/7888#discussion_r44501893
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -509,6 +511,13 @@ private[spark] class ExecutorAllocationManager
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/7888#discussion_r44501678
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -509,6 +511,13 @@ private[spark] class ExecutorAllocationManager
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/7888#discussion_r44499652
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -509,6 +511,13 @@ private[spark] class ExecutorAllocationManager
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/7888#discussion_r44497839
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -509,6 +511,13 @@ private[spark] class ExecutorAllocationManager
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/7888#discussion_r44496393
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -509,6 +511,13 @@ private[spark] class ExecutorAllocationManager
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/7888#discussion_r44493155
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -509,6 +511,13 @@ private[spark] class ExecutorAllocationManager
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/7888#discussion_r44490668
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -509,6 +511,13 @@ private[spark] class ExecutorAllocationManager
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/7888#issuecomment-155602453
Thanks @andrewor14. I will clean up the API, and meanwhile add some
unit tests.
---
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/7888#discussion_r44482609
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -1489,7 +1493,7 @@ class SparkContext(config: SparkConf) extends Logging
with
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/7888#discussion_r44482560
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
---
@@ -442,6 +458,7 @@ class
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/7888#issuecomment-155436527
@vanzin I have changed the patch according to your comments. The only thing
left is the return value of `killExecutor`. Please help me understand your thoughts.
---
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/7888#issuecomment-155409327
@vanzin Sorry, I missed one important thing. `ExecutorAllocationClient`
defines the `killExecutors()` API for both SparkContext and
CoarseGrainedSchedulerBackend. It
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/7888#issuecomment-155333294
@vanzin Got your point. I will follow that by eliminating the secondary
option in the public API. Thanks for the confirmation.
---
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/7888#discussion_r44375390
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -509,6 +511,13 @@ private[spark] class ExecutorAllocationManager
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/7888#issuecomment-155274058
@vanzin Regarding that public API, if it is not necessary to enable force
control, I will remove that option. Basically, it is an additional option
with a default value
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/7888#issuecomment-154929972
Thanks @vanzin for the comments. I will change the code accordingly.
---
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/7888#issuecomment-154322010
Thanks @andrewor14.
Hi @vanzin, let me give you a quick brief about the patch and its goal.
There is a bug in dynamic allocation. Since some of the
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/7888#issuecomment-151875147
@andrewor14 I have tried to rebase the original proposal onto the latest
master branch. Please let me know if you have any further questions or
concerns. Thanks a lot.
---
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/7888#issuecomment-138573813
@andrewor14 I have pushed another proposal. Please let me know your
comments.
* The SparkContext allows the end user to set the `force` control while
killExecutor(s
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/7888#discussion_r38890521
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
---
@@ -413,25 +413,38 @@ class
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/7888#issuecomment-138161276
@andrewor14 Thanks for the comments.
Regarding #1, very good point. That's why I try to return false if
force-killing fails. This is the simples
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/7888#discussion_r38828767
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
---
@@ -413,25 +413,38 @@ class
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/7888#discussion_r38828688
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
---
@@ -413,25 +413,38 @@ class
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/7888#issuecomment-138149073
@andrewor14 Thanks for the feedback. I will take a look at your comments
and revise the code accordingly. If there is any concern, I will let you know.
---
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/8128#discussion_r37046803
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/basicOperators.scala ---
@@ -224,6 +225,56 @@ case class Limit(limit: Int, child: SparkPlan
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/8128#discussion_r37045748
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/basicOperators.scala ---
@@ -224,6 +225,56 @@ case class Limit(limit: Int, child: SparkPlan
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/8128#discussion_r37044861
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/basicOperators.scala ---
@@ -224,6 +225,56 @@ case class Limit(limit: Int, child: SparkPlan
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/7927#discussion_r36596179
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -590,10 +590,21 @@ private[spark] class BlockManager(
private def
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/7927#discussion_r36372781
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -590,10 +590,21 @@ private[spark] class BlockManager(
private def
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/7927#discussion_r36271550
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -592,8 +592,14 @@ private[spark] class BlockManager(
val locations
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/7888#issuecomment-127855211
It seems the test failure is not related to this PR.
---
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/7888#issuecomment-127445692
@CodingCat Sorry for the ambiguous wording in the description. In general,
the patch aims to fix the false-killing bug in dynamic allocation. At the
same time, we
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/7888#discussion_r36149140
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
---
@@ -413,25 +413,38 @@ class
Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/7888#issuecomment-127444561
@CodingCat What I mean is to add force control to the `killExecutors` API.
Currently, dynamic allocation uses that API with force=false (I suppose we
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/7888#discussion_r36148284
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -264,10 +264,10 @@ private[spark] class ExecutorAllocationManager
Github user GraceH commented on a diff in the pull request:
https://github.com/apache/spark/pull/7888#discussion_r36148319
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
---
@@ -413,25 +413,38 @@ class
GitHub user GraceH opened a pull request:
https://github.com/apache/spark/pull/7888
Add force control for killExecutors to avoid falsely killing busy
executors
With dynamic allocation, busy executors are sometimes falsely killed.
Some executors with
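The `force` semantics being proposed can be sketched as follows. This is an illustrative simplification, not Spark's actual implementation; the class and its `busyExecutors` field are assumptions:

```scala
// Hedged sketch of the `force` flag: without force, busy executors (those
// with running tasks) are filtered out so dynamic allocation cannot
// falsely kill them; with force, the caller kills them regardless.
class KillWithForceSketch(busyExecutors: Set[String]) {
  def killExecutors(executorIds: Seq[String], force: Boolean): Boolean = {
    val executorsToKill =
      if (force) executorIds
      else executorIds.filterNot(busyExecutors.contains) // protect busy ones
    if (executorsToKill.isEmpty) {
      false // nothing was actually killed
    } else {
      // ... request the cluster manager to kill executorsToKill ...
      true
    }
  }
}
```

Under this sketch, dynamic allocation would keep calling with `force = false`, while an explicit user request via SparkContext could pass `force = true`.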