Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18329#discussion_r123442164
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala
---
@@ -264,12 +281,12 @@ final class DataStreamWriter[T
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18385
Can you please point out where we change from "spark.ssl.[namespace].port"
to "spark.ssl.port"?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18385
Sorry, I don't get your point.
> But I still have a question, whether "spark.ssl.[namespace].port" should be
modified to "spark.ssl.port" to meet the naming style o
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18385
It is indeed used in the Spark code (`SSLOptions`); for example, the ssl port
is used to specify the https port for the live and history UI. Also,
`needClientAuth` is used for mutual authentication. I'm not sure
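For context, `SSLOptions` resolves settings hierarchically: `spark.ssl.*` entries act as defaults that a namespace such as `spark.ssl.ui.*` can override, which is why both forms appear in the discussion. A minimal illustrative configuration (the keystore path and port value here are made up):

```properties
# Global SSL defaults, consumed by SSLOptions
spark.ssl.enabled          true
spark.ssl.keyStore         /path/to/keystore.jks
# Request mutual (client certificate) authentication
spark.ssl.needClientAuth   true
# Namespace-specific override: https port for the web UI
spark.ssl.ui.port          4480
```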
Github user jerryshao closed the pull request at:
https://github.com/apache/spark/pull/18213
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18213
Thanks @tgravescs @vanzin for your comments; I think they are quite valid, so
I will close this PR.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/11994
@jiangxb1987 yes, I can work on this if you could help review.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18322
@jiangxb1987 can you please help review this PR? It is a simple code
improvement that avoids some unnecessary code execution when the remaining
cores are not enough for one executor.
I don't
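The early-exit idea described here — stop scheduling work on a worker once its remaining cores cannot host even one more executor — can be sketched outside of Spark. This is a hypothetical illustration, not the actual `Master` scheduling code:

```scala
// Illustrative sketch of the early-exit check discussed above.
object AllocationSketch {
  /** How many executors fit on a worker, given the per-executor core request. */
  def executorsAssignable(freeCores: Int, coresPerExecutor: Int): Int =
    if (coresPerExecutor <= 0 || freeCores < coresPerExecutor) 0 // too few cores left: bail out early
    else freeCores / coresPerExecutor

  def main(args: Array[String]): Unit = {
    println(executorsAssignable(2, 3)) // worker with 2 free cores, 3 requested per executor: 0
    println(executorsAssignable(7, 3)) // 7 free cores: 2 executors fit
  }
}
```

Skipping a worker as soon as `freeCores < coresPerExecutor` avoids walking the rest of the assignment loop for that worker.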
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17113
Jenkins, retest this please.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/9518#discussion_r123420182
--- Diff:
core/src/main/scala/org/apache/spark/metrics/sink/StatsdReporter.scala ---
@@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/9518#discussion_r123425292
--- Diff:
core/src/main/scala/org/apache/spark/metrics/sink/StatsdReporter.scala ---
@@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/9518#discussion_r123424723
--- Diff:
core/src/main/scala/org/apache/spark/metrics/sink/StatsdReporter.scala ---
@@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18329#discussion_r123418793
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala
---
@@ -264,12 +281,12 @@ final class DataStreamWriter[T
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18329#discussion_r123417606
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala
---
@@ -235,6 +237,21 @@ final class DataStreamWriter[T] private
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18329#discussion_r123416837
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala
---
@@ -17,8 +17,10 @@
package
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18322#discussion_r123415327
--- Diff: core/src/main/scala/org/apache/spark/SparkConf.scala ---
@@ -543,6 +545,42 @@ class SparkConf(loadDefaults: Boolean) extends
Cloneable
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18322#discussion_r123415688
--- Diff:
core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala ---
@@ -258,23 +256,7 @@ private[deploy] class SparkSubmitArguments(args
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18322#discussion_r123413523
--- Diff:
core/src/test/scala/org/apache/spark/deploy/master/MasterSuite.scala ---
@@ -704,6 +707,43 @@ class MasterSuite extends SparkFunSuite
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18322#discussion_r122927744
--- Diff: core/src/main/scala/org/apache/spark/SparkConf.scala ---
@@ -543,6 +543,30 @@ class SparkConf(loadDefaults: Boolean) extends
Cloneable
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18322#discussion_r122927162
--- Diff: core/src/main/scala/org/apache/spark/SparkConf.scala ---
@@ -543,6 +543,30 @@ class SparkConf(loadDefaults: Boolean) extends
Cloneable
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18322#discussion_r122926762
--- Diff:
core/src/test/scala/org/apache/spark/deploy/master/MasterSuite.scala ---
@@ -704,6 +707,43 @@ class MasterSuite extends SparkFunSuite
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18213
@vanzin, how about the current changes? I set the default maxAttempts to 1.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18322#discussion_r122921443
--- Diff:
core/src/test/scala/org/apache/spark/deploy/master/MasterSuite.scala ---
@@ -704,6 +707,43 @@ class MasterSuite extends SparkFunSuite
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18322#discussion_r122920139
--- Diff: core/src/main/scala/org/apache/spark/SparkConf.scala ---
@@ -543,6 +543,30 @@ class SparkConf(loadDefaults: Boolean) extends
Cloneable
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18351
Using modification time may have some issues; please see the comment:
> // Use loading time as lastUpdated since some filesystems don't update modifiedTime
> // each time f
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18351
Looking at the
[code](https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala#L458),
`lastUpdated` will still be increased even
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18351
@fjh100456 IMO, abrupt abort of an application without an application end time
is not the normal case. For most of the incomplete applications,
"currTimeInMs - startTime" as
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18351
@guoxiaolongzte this column is already hidden in the master code.
@fjh100456 would you please explain more about this?
>the application of the exception abort will alw
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18351
I'm not familiar with JS; I'm wondering if we could use `currTimeInMs -
startTime` as the **Duration** instead of "0". Not sure if it is easy to do in
JS.
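The fallback being proposed — display the elapsed time when no end event was recorded — can be sketched in Scala. This is illustrative only (the `-1` sentinel for a missing end time is an assumption, and the real rendering happens in the history page's JS):

```scala
// Illustrative sketch of the proposed duration fallback.
object DurationSketch {
  /** Duration to display: real duration if the app ended, elapsed time otherwise. */
  def displayDuration(startTime: Long, endTime: Long, currTimeInMs: Long): Long =
    if (endTime > 0) endTime - startTime // application recorded an end event
    else currTimeInMs - startTime        // still running or aborted: show elapsed time so far

  def main(args: Array[String]): Unit = {
    println(displayDuration(1000L, 2500L, 9999L)) // completed app: 1500
    println(displayDuration(1000L, -1L, 4000L))   // no end event: 3000, instead of "0"
  }
}
```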
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17620
@jiangxb1987 according to what @lvdongr described, it seems there's an issue
in the state transition for a recovered master:
> This happened at the time the previous master leader remove the d
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18235#discussion_r122707282
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -858,19 +844,33 @@ object SparkSubmit extends CommandLineUtils
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18213
Thanks @vanzin, a valid concern; actually it is hard for the AM to
differentiate several different scenarios and treat them with different
approaches. So your suggestion is only to set max attempts to 1
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18230
@vanzin "reload" here meanings retrieving back `SparkConf` from checkpoint
file and using this retrieved `SparkConf` to create `SparkContext` when
restarting streaming a
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18322
@eatoncys can you please add a unit test in `MasterSuite` to verify your
new code if possible?
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18322#discussion_r122610588
--- Diff:
core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala ---
@@ -278,6 +278,14 @@ private[deploy] class SparkSubmitArguments(args
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18322
I see, I understand your changes now.
IMO, because the user specifically requests 3 cores per executor (as an
example), it is not good to allocate 1 executor with only 1 core; this may
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18322
If you don't see any issue here, what's the problem you met?
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18322
According to my test, the current `Master` code handles this situation
correctly; did you see any issue here without your fix?
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18321#discussion_r122365295
--- Diff:
core/src/test/scala/org/apache/spark/deploy/master/MasterSuite.scala ---
@@ -214,7 +214,7 @@ class MasterSuite extends SparkFunSuite
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18308
> I wonder if whether executor is completely gone or whether executor is
still there but has no cached RDD, if both scenarios return false.
Yes, that's the case, we cannot differenti
GitHub user jerryshao opened a pull request:
https://github.com/apache/spark/pull/18321
[SPARK-12552][FOLLOWUP] Fix flaky test for
"o.a.s.deploy.master.MasterSuite.master correctly recover the application"
## What changes were proposed in this pull request?
Due
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18308#discussion_r122129884
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -432,8 +432,10 @@ private[spark] class ExecutorAllocationManager
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18308#discussion_r122129357
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -432,8 +432,10 @@ private[spark] class ExecutorAllocationManager
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18308#discussion_r122129269
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -432,8 +432,10 @@ private[spark] class ExecutorAllocationManager
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18308
LGTM. BTW, can you please complete the PR description? Thanks.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18290
@jiangxb1987, I think using an environment variable to check the number of
same-node workers connected is potentially vulnerable; a user can manually
start workers one by one without setting
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18290#discussion_r121892743
--- Diff: core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala
---
@@ -742,6 +742,17 @@ private[deploy] object Worker extends Logging
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18290#discussion_r121888174
--- Diff: core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala
---
@@ -742,6 +742,17 @@ private[deploy] object Worker extends Logging
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18290#discussion_r121887546
--- Diff: core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala
---
@@ -742,6 +742,17 @@ private[deploy] object Worker extends Logging
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/10506
Jenkins, retest this please.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/10506#discussion_r121610617
--- Diff:
core/src/test/scala/org/apache/spark/deploy/master/MasterSuite.scala ---
@@ -134,6 +138,79 @@ class MasterSuite extends SparkFunSuite
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/10506#discussion_r121596180
--- Diff:
core/src/test/scala/org/apache/spark/deploy/master/MasterSuite.scala ---
@@ -134,6 +138,71 @@ class MasterSuite extends SparkFunSuite
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/10506
>It would be great if we can add test framework to verify the states and
statistics on the condition of Driver/Executor Lost/Join/Relaunch
@jiangxb1987 can you explain more about what
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18230
I don't agree with your point of view; there are already some potential
issues regarding internal configurations: either they will potentially lead
to unexpected state, or they're
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/10506
@jiangxb1987, to reproduce this issue, you can:
1. Enable standalone HA, for example
"spark.deploy.recoveryMode FILESYSTEM" and "spark.deploy.recoveryDire
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/10506#discussion_r121041958
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -367,7 +367,7 @@ private[deploy] class Master
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18213#discussion_r121039957
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala
---
@@ -744,9 +746,23 @@ object ApplicationMaster
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18230
From my understanding, these two configurations will also point to an expired
timestamp, since each started application will reset it, so there is no need
to checkpoint them.
I think all the Spark
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18210#discussion_r120848724
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/BlacklistTracker.scala ---
@@ -336,9 +336,9 @@ private[scheduler] object BlacklistTracker
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18213
@mridulm, the exit code of pyspark or R is really user-defined; the user
could exit with any code, for example `sys.exit(100)`, so potentially it
could overlap.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18230
I guess "spark.yarn.credentials.renewalTime" and
"spark.yarn.credentials.updateTime" should also be excluded.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18210#discussion_r120816560
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/BlacklistTracker.scala ---
@@ -336,9 +336,9 @@ private[scheduler] object BlacklistTracker
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18210#discussion_r120816418
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/BlacklistTracker.scala ---
@@ -336,9 +336,9 @@ private[scheduler] object BlacklistTracker
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18230
@saturday-shi would you please update the title to track SPARK-19688?
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17882#discussion_r120809352
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnSchedulerBackend.scala
---
@@ -176,16 +179,6 @@ private[spark
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18213#discussion_r120808171
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala
---
@@ -229,8 +229,17 @@ private[spark] class
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18213
For a Streaming application, I just treat it as a normal Spark application:
if it fails internally, the AM will unregister itself; if it is an external
issue, then the AM will not unregister
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18213#discussion_r120804119
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala
---
@@ -229,8 +229,17 @@ private[spark] class
GitHub user jerryshao opened a pull request:
https://github.com/apache/spark/pull/18235
[SPARK-21012][Submit] Add glob support for resources adding to Spark
Current "--jars (spark.jars)", "--files (spark.files)", "--py-files
(spark.submit.
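With glob support, the listed options could accept wildcard patterns instead of exhaustive file lists, e.g. (paths here are illustrative):

```properties
spark.jars   hdfs:///deps/jars/*.jar
spark.files  /etc/spark/extra/*.conf
```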
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17882
CC @mridulm , can you please review this PR?
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17882#discussion_r120352440
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnSchedulerBackend.scala
---
@@ -176,16 +179,6 @@ private[spark
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17882#discussion_r120351695
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnSchedulerBackend.scala
---
@@ -68,6 +68,8 @@ private[spark
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17113
Hi @squito ,
For the 1st point, I tested manually in a real cluster, but I'm not sure how
to make it happen in a UT. If you think it is necessary to add a UT for this
issue, then I
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18213
CC @mridulm, can you please help to review?
GitHub user jerryshao opened a pull request:
https://github.com/apache/spark/pull/18213
[SPARK-20996][YARN] Better handling AM reattempt based on exit code in yarn
mode
## What changes were proposed in this pull request?
Yarn provides max attempt configuration
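The idea in this PR — count an AM failure toward YARN's max-attempts only when a reattempt could plausibly change the outcome — can be sketched with hypothetical exit-code classes. The codes and names below are assumptions for illustration, not Spark's actual constants:

```scala
// Illustrative sketch only; exit codes and names here are assumptions.
object AmRetrySketch {
  val Success = 0
  val UserClassFailure = 15 // hypothetical: deterministic failure in user code

  /** Should this AM exit count toward another YARN attempt? */
  def shouldReattempt(exitCode: Int): Boolean = exitCode match {
    case Success | UserClassFailure => false // retrying won't change the outcome
    case _ => true                           // e.g. preemption or node loss: worth retrying
  }
}
```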
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18192
The change should be safe, but usually we don't do such code structure
refactoring alone without a strong reason, so I'm neutral on this change.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18192
@zhengcanbin can you please clarify the benefit of your changes? I don't
see a big difference here.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18201
Yes, that's right.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18201
Jenkins, retest this please.
GitHub user jerryshao opened a pull request:
https://github.com/apache/spark/pull/18201
[SPARK-20981][SparkSubmit]Add new configuration spark.jars.repositories as
equivalence of --repositories
## What changes were proposed in this pull request?
In our use case of launching
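As the title says, the new setting mirrors the CLI flag, so a repository list can live in configuration instead of on every command line (the URL below is illustrative):

```properties
# Equivalent to: spark-submit --repositories https://repo.example.com/maven ...
spark.jars.repositories  https://repo.example.com/maven
```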
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18130
There's a [JIRA](https://issues.apache.org/jira/browse/SPARK-20650)
planning to remove this `JobProgressListener`, so I'd suggest not changing
this deprecated code unnecessarily.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18129#discussion_r119620209
--- Diff:
resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/ClientSuite.scala
---
@@ -122,6 +122,7 @@ class ClientSuite extends
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17617
Thanks @jiangxb1987 @cloud-fan @ueshin for your review!
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18170#discussion_r119520479
--- Diff:
core/src/main/resources/org/apache/spark/ui/static/historypage-template.html ---
@@ -20,17 +20,17
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/10506
Sure, I will bring this up to date.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17617#discussion_r119517532
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala
---
@@ -143,14 +144,30 @@ class SparkHadoopUtil extends Logging
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17617#discussion_r119517540
--- Diff:
core/src/test/scala/org/apache/spark/metrics/InputOutputMetricsSuite.scala ---
@@ -319,6 +319,37 @@ class InputOutputMetricsSuite extends
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17113
Jenkins, retest this please.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18129
CC @vanzin to take a review.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18129#discussion_r119277116
--- Diff:
resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/ClientSuite.scala
---
@@ -116,15 +116,16 @@ class ClientSuite extends
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17617#discussion_r119276245
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala
---
@@ -143,14 +144,18 @@ class SparkHadoopUtil extends Logging
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17617#discussion_r119275824
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala
---
@@ -143,14 +144,18 @@ class SparkHadoopUtil extends Logging
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18041
@srowen, this issue existed when reading from the metrics.properties conf
file; I think we should fix this part. As for the SparkConf part, I don't
think it is necessary to fix it.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17113#discussion_r119260232
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala ---
@@ -54,7 +54,7 @@ import org.apache.spark.util.{AccumulatorV2
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17617
@jiangxb1987 the UT I wrote couldn't actually reflect this issue, so I've
just updated the UT; please review, thanks!
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17113
Sorry @tgravescs, I didn't test executor killing in a real cluster. There was
a bug in it, so I pushed a commit to fix it. Thanks for your review.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18124
I'm not sure how this could happen; "SPARK_YARN_STAGING_DIR" is a
Spark-internal environment variable which should not be empty unless you
deliberately unset it.
In th
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17617#discussion_r118809133
--- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala ---
@@ -251,7 +251,13 @@ class HadoopRDD[K, V](
null
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17617#discussion_r118808801
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala
---
@@ -142,14 +143,18 @@ class SparkHadoopUtil extends Logging
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17113
Jenkins, retest this please.