Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/11205
I guess the issue still exists; let me verify it again, and if it does I will bring the PR up to date. Thanks
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19396
IMO it might be better to throw an exception instead of shifting to another shuffle. Since the user explicitly wants to use the external shuffle service, we should let them know about the issue so they can fix it
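A fail-fast check of the kind suggested above could be sketched as follows. This is purely illustrative: the names `ShuffleConfig`, `externalShuffleEnabled`, and the exception type are made up for the example, not actual Spark code.

```scala
// Hypothetical sketch: fail fast when the external shuffle service is
// explicitly requested but unreachable, instead of silently falling back.
case class ShuffleConfig(externalShuffleEnabled: Boolean)

class ExternalShuffleUnavailableException(msg: String) extends RuntimeException(msg)

def resolveShuffleService(conf: ShuffleConfig, serviceReachable: Boolean): String = {
  if (conf.externalShuffleEnabled && !serviceReachable) {
    // Surface the problem to the user rather than switching shuffles behind their back.
    throw new ExternalShuffleUnavailableException(
      "External shuffle service was requested but is not reachable; " +
        "please check that the service is running on every node.")
  } else if (conf.externalShuffleEnabled) {
    "external"
  } else {
    "internal"
  }
}
```

The point of the design choice is that a silent fallback hides a misconfiguration the user asked for explicitly; an exception makes the failure visible and actionable.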
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19398
AFAIK, this is a by-design choice to manually create the event log directory.
---
-
To unsubscribe, e-mail: reviews-unsubscr
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19358
Merging to master and branch 2.2. Thanks!
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19287
Generally it looks fine to me.
CC @markhamstra @squito , would you please help to review it? Thanks
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19287#discussion_r141510861
--- Diff: core/src/main/scala/org/apache/spark/scheduler/TaskInfo.scala ---
@@ -74,6 +81,10 @@ class TaskInfo(
gettingResultTime = time
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19358
LGTM.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19338
LGTM, merging to master.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19346
OK, let me merge to the master branch.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19338#discussion_r141329721
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/BlacklistIntegrationSuite.scala
---
@@ -115,8 +115,9 @@ class BlacklistIntegrationSuite extends
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19274
This is because it is the only way to guarantee the ordering of data in a Kafka partition when mapping to a Spark partition. Maybe some other users took this as an assumption when writing their code
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19338
Jenkins, retest this please.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19184
After discussing with @mridulm offline: though the patch here cannot address the issue of `getSortedIterator` - which uses a PriorityQueue - it somehow solves the problem of `getIterator
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19338
There's one related test failure; can you please check?
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19184
Hi @mridulm , sorry for the late response. I agree with you that the scenario is different between here and shuffle, but the underlying structure and solution for spilling data are the same, so
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19338#discussion_r141235801
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -670,9 +670,12 @@ private[spark] class TaskSetManager
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19338#discussion_r141235459
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -670,9 +670,12 @@ private[spark] class TaskSetManager
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19358
Would you please add a [MESOS] tag to your PR title, like other PRs do.
Thanks.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19358
ok to test.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19338#discussion_r141234485
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -670,9 +670,12 @@ private[spark] class TaskSetManager
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19338#discussion_r141226978
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -671,8 +671,10 @@ private[spark] class TaskSetManager
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19338#discussion_r141226456
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetBlacklist.scala ---
@@ -61,6 +61,16 @@ private[scheduler] class TaskSetBlacklist(val conf
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19263
I see, thanks for the explanation.
@vanzin would you please help to review this PR, thanks!
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19274
Yes, I understand your scenario, but my concern is that your proposal is quite scenario-specific; it may well serve your scenario, but somehow it breaks the design purpose of KafkaRDD. From my
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19263
Hi @michaelmalak , the history server only shows the last state of the application before it finished, and cached blocks can be evicted/unpersisted in the middle of the application. So you probably cannot
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19274
Hi @loneknightpy , thinking a bit about your PR, I think this can also be done on the user side. The user could create several threads in one task (RDD#mapPartitions) to consume the records concurrently
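The user-side pattern described above - processing one partition's records with multiple threads inside a single task - can be sketched roughly like this. The thread count and the `process` function are placeholders for the example; the same body would be passed to `rdd.mapPartitions { iter => ... }`.

```scala
import java.util.concurrent.{Executors, TimeUnit}
import java.util.concurrent.atomic.AtomicLong

// Hypothetical sketch of multi-threaded record processing inside one task.
// The iterator itself is not thread-safe, so records are handed out from the
// single task thread and only `process` runs concurrently on the pool.
def processPartitionConcurrently[T](iter: Iterator[T], numThreads: Int = 4)
    (process: T => Unit): Long = {
  val pool = Executors.newFixedThreadPool(numThreads)
  val processed = new AtomicLong(0)
  iter.foreach { record =>
    pool.submit(new Runnable {
      override def run(): Unit = {
        process(record)
        processed.incrementAndGet()
      }
    })
  }
  pool.shutdown()
  pool.awaitTermination(10, TimeUnit.MINUTES)
  processed.get()
}
```

Note that concurrent consumption gives up the per-partition ordering guarantee discussed earlier for KafkaRDD, which is exactly the trade-off under debate in this thread.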
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19338#discussion_r141002261
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetBlacklist.scala ---
@@ -61,6 +61,8 @@ private[scheduler] class TaskSetBlacklist(val conf
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19338#discussion_r141001713
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -671,8 +671,9 @@ private[spark] class TaskSetManager
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19287#discussion_r140996395
--- Diff: core/src/main/scala/org/apache/spark/scheduler/TaskInfo.scala ---
@@ -74,6 +81,10 @@ class TaskInfo(
gettingResultTime = time
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19287#discussion_r140995846
--- Diff: core/src/main/scala/org/apache/spark/scheduler/TaskInfo.scala ---
@@ -66,6 +66,13 @@ class TaskInfo(
*/
var finishTime: Long = 0
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19346
LGTM. @gatorsmile , would you please take a look at this PR? Is it good for you?
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19346
Please fix the title.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19346
ok to test.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18015
There are still comments left unaddressed.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19303
IIUC, if there are no cores left, requesting new executors should be a no-op, am I right? So there should be no problem even without your fix?
From your patch, it looks like you're putting
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19315
@animenon can you please fix the PR title like other PRs do. Also, is this only for better readability, or does it fix any other issue? IMO the previous text is more readable than your
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18015
Yes, I'm fine with it. @ajbozarth would you please take another look at this PR? Thanks.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18015
Jenkins, retest this please.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18015#discussion_r140416046
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/AllExecutionsPage.scala
---
@@ -61,7 +59,37 @@ private[ui] class AllExecutionsPage
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19074
@loneknightpy can you please elaborate more on the issue?
I believe you brought in this remote-resources support in #18078. It hasn't supported cluster mode from the beginning. Also your
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19074#discussion_r140159631
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -366,7 +376,7 @@ object SparkSubmit extends CommandLineUtils
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19074#discussion_r140159376
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -376,8 +386,8 @@ object SparkSubmit extends CommandLineUtils
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19074#discussion_r140159253
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -366,7 +376,7 @@ object SparkSubmit extends CommandLineUtils
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19274
Will this break the assumption that one Kafka partition will map to one
Spark partition?
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19160
Thanks all for your review, let me merge to master.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19277
Strictly speaking, this line `new BufferedInputStream(fs.open(log))` will also throw an exception; shouldn't you try-catch
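A defensive version of that open, with the try-catch the comment asks about, might look like the sketch below. It uses plain `java.io` streams instead of the actual Hadoop `FileSystem` API, so `openLog` and its behavior are illustrative only, not the code under review.

```scala
import java.io.{BufferedInputStream, FileInputStream, IOException, InputStream}

// Illustrative sketch: wrap the stream open in try-catch so a missing or
// unreadable log file surfaces as a handled error instead of an uncaught one.
def openLog(path: String): Option[InputStream] = {
  try {
    Some(new BufferedInputStream(new FileInputStream(path)))
  } catch {
    case _: IOException =>
      // In a real listener this would be logged and the entry skipped.
      None
  }
}
```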
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19145
@klion26 , this is not a problem specific to Spark Streaming or Structured Streaming; any Spark application will run into it. This is basically a YARN problem and looks hard
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19277#discussion_r139867429
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -351,11 +351,11 @@ private[spark] object EventLoggingListener
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19277#discussion_r139867369
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -351,11 +351,11 @@ private[spark] object EventLoggingListener
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19285
ok to test.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19160#discussion_r139861892
--- Diff:
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/ExternalShuffleClient.java
---
@@ -117,6 +118,12 @@ public void
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19160#discussion_r139861341
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -115,6 +115,7 @@ private[spark] class Executor(
if (!isLocal
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19160#discussion_r139861303
--- Diff:
core/src/main/scala/org/apache/spark/deploy/ExternalShuffleServiceSource.scala
---
@@ -19,19 +19,19 @@ package org.apache.spark.deploy
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19160#discussion_r139860969
--- Diff:
core/src/main/scala/org/apache/spark/network/netty/NettyBlockTransferService.scala
---
@@ -18,11 +18,14 @@
package
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19160#discussion_r139860924
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -248,6 +251,16 @@ private[spark] class BlockManager(
logInfo(s
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19130
Hi @cloud-fan , the main purpose of `spark.yarn.dist.forceDownloadSchemes` is to explicitly use Spark's own logic to handle remote resources instead of relying on Hadoop. For example
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19130#discussion_r139608374
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -385,4 +385,14 @@ package object config {
.checkValue(v
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19145
And based on your fix:
1. It looks like you don't have a retention mechanism, which will potentially introduce a memory leak.
2. I don't see your logic to avoid requesting new containers
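A minimal retention mechanism of the kind point 1 asks for - a size-bounded map that evicts its oldest entries - could be sketched like this. This is purely illustrative and not the structure used in the PR; the cap value is an assumption.

```scala
import java.util.{LinkedHashMap => JLinkedHashMap, Map => JMap}

// Illustrative sketch: cap the tracked entries so the map cannot grow without
// bound and leak memory over a long-running application. LinkedHashMap in
// insertion order evicts the oldest entry once the cap is exceeded.
def boundedMap[K, V](maxEntries: Int): JMap[K, V] =
  new JLinkedHashMap[K, V](16, 0.75f, /* accessOrder = */ false) {
    override def removeEldestEntry(eldest: JMap.Entry[K, V]): Boolean =
      size() > maxEntries
  }
```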
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19145
> But if we restart the RM, then the lost containers in the NM will be reported to the RM as lost again because of recovery
Since you already enabled RM and NM recovery, IIUC the fail
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19211#discussion_r139606603
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/AsyncEventQueue.scala ---
@@ -0,0 +1,196 @@
+/*
+ * Licensed to the Apache Software
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19210
LGTM, merging to master.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19074
@loneknightpy did you open a new JIRA for this issue?
AFAIK, downloading resources to local disk has not been supported in cluster mode from the beginning; would you please elaborate
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19130#discussion_r139577257
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -385,4 +385,13 @@ package object config {
.checkValue(v
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19130#discussion_r139577191
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -367,6 +368,54 @@ object SparkSubmit extends CommandLineUtils with
Logging
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19130#discussion_r139576893
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -367,6 +368,54 @@ object SparkSubmit extends CommandLineUtils with
Logging
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19130#discussion_r139576814
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -367,6 +368,54 @@ object SparkSubmit extends CommandLineUtils with
Logging
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19145
Did you enable RM or NM recovery? Can you please clarify?
Normally, if we assume there are 2 containers running on this NM, then after 10 minutes the RM will detect the failure of the NM
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19160
@zsxwing @jiangxb1987 would you please help to review this PR when you have
time, thanks a lot.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19263
@michaelmior would you please follow the instructions (https://spark.apache.org/contributing.html) to update the PR title and create a corresponding JIRA, thanks
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19145
Hi @klion26 , sorry for the late response. Let's understand the problem first: would you please describe your problem in detail and how to reproduce your issue
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19210
LGTM, let me retest this again.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19210
Jenkins, retest this please.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19171
OK, it seems the tests passed; let me merge to the master branch.
Please note that such a trivial fix usually doesn't require a JIRA; also please think carefully about the necessity of such fixes
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19233
IIUC streaming DRA seems to be obsolete code. Long ago when I played with it there were some bugs, but it seems not many users used this feature. I'm not sure if we really need to put effort
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19130#discussion_r139053961
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -367,6 +368,54 @@ object SparkSubmit extends CommandLineUtils with
Logging
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19171
ok to test.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19171#discussion_r139053438
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -988,6 +988,12 @@ private[spark] class BlockManager
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19210
BTW, can you please create a JIRA and fix the PR title like other PRs do.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19133
This is not a necessary fix. We usually don't make such changes when they don't really fix anything.
Github user jerryshao closed the pull request at:
https://github.com/apache/spark/pull/19227
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19227
I see, so based on your comments:
1. Mesos should not honor the principal/keytab configuration. Instead of renaming them, we should remove the `MESOS` here
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19227
> I don't think Mesos honors it (and it shouldn't be, since IIRC it hasn't
implemented long-lived app support yet).
Current Spark on Mesos code actually hon
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19210#discussion_r139045423
--- Diff:
core/src/main/scala/org/apache/spark/metrics/sink/GraphiteSink.scala ---
@@ -69,7 +69,7 @@ private[spark] class GraphiteSink(val property
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19227
The purpose of changing the configuration names is that these configurations are not only used by YARN mode in `SparkSubmit`; Mesos and local mode will also honor them, which is why I renamed them. What do
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19130#discussion_r139041065
--- Diff:
core/src/test/scala/org/apache/spark/deploy/SparkSubmitSuite.scala ---
@@ -897,6 +897,76 @@ class SparkSubmitSuite
sysProps
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19227
Hi @vanzin thanks a lot for your comments. Would you please elaborate more? I'm not sure I really understand your comment. Based on this PR, I don't think I ship the keytab around
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19132
This cannot be cleanly merged to 2.2, so it will only land on the master branch.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19132
LGTM, merging to master, if possible to 2.2.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19210#discussion_r138879962
--- Diff:
core/src/main/scala/org/apache/spark/metrics/sink/GraphiteSink.scala ---
@@ -69,7 +69,7 @@ private[spark] class GraphiteSink(val property
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19130#discussion_r138827462
--- Diff: core/src/main/scala/org/apache/spark/deploy/DependencyUtils.scala
---
@@ -123,6 +123,11 @@ private[deploy] object DependencyUtils
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19210
Jenkins, retest this please.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19132#discussion_r138805063
--- Diff:
core/src/main/scala/org/apache/spark/status/api/v1/AllStagesResource.scala ---
@@ -69,7 +70,8 @@ private[v1] object AllStagesResource
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19130#discussion_r138801682
--- Diff:
core/src/test/scala/org/apache/spark/deploy/SparkSubmitSuite.scala ---
@@ -897,6 +897,80 @@ class SparkSubmitSuite
sysProps
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19130#discussion_r138801550
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -367,6 +368,53 @@ object SparkSubmit extends CommandLineUtils with
Logging
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19130#discussion_r138791246
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -367,6 +368,53 @@ object SparkSubmit extends CommandLineUtils with
Logging
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19210
@HyukjinKwon would you please help to trigger the Jenkins? Thanks!
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19132#discussion_r138789492
--- Diff:
core/src/main/scala/org/apache/spark/status/api/v1/OneStageResource.scala ---
@@ -81,7 +83,8 @@ private[v1] class OneStageResource(ui: SparkUI
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19132#discussion_r138789213
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/UIData.scala ---
@@ -97,6 +97,7 @@ private[spark] object UIData {
var memoryBytesSpilled
GitHub user jerryshao opened a pull request:
https://github.com/apache/spark/pull/19227
[SPARK-20060][CORE] Support accessing secure Hadoop cluster in standalone
client mode
## What changes were proposed in this pull request?
This PR leverages the facility of SPARK-16742
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19135
So it somehow reflects that CPU core contention is the main issue for memory pre-occupation, am I right?
AFAIK from our customers, we usually don't allocate so many cores to one executor