Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5236#discussion_r28261106
--- Diff: core/src/main/scala/org/apache/spark/util/AkkaUtils.scala ---
@@ -78,8 +78,10 @@ private[spark] object AkkaUtils extends Logging {
Github user ilganeli commented on a diff in the pull request:
https://github.com/apache/spark/pull/5236#discussion_r28263636
--- Diff: core/src/test/scala/org/apache/spark/util/UtilsSuite.scala ---
@@ -35,7 +36,64 @@ import org.apache.hadoop.fs.Path
import
Github user hlin09 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5493#discussion_r28263584
--- Diff: R/pkg/inst/tests/test_rdd.R ---
@@ -141,7 +141,8 @@ test_that(PipelinedRDD support actions: cache(),
persist(), unpersist(), checkp
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5463#issuecomment-92454492
[Test build #30177 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30177/consoleFull)
for PR 5463 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5350#issuecomment-92456850
[Test build #30178 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30178/consoleFull)
for PR 5350 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5236#issuecomment-92456832
[Test build #30179 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30179/consoleFull)
for PR 5236 at commit
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/5236#issuecomment-92458362
I just took a quick look at the existing configs that have units. It
appears that none of them are documented, fortunately. I think this patch is
more or less ready
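(As a side note, a minimal Scala sketch of what "configs that have units" means in practice; the property names below are only illustrative duration-style settings, not a list taken from this PR:)
```scala
import org.apache.spark.SparkConf

// Illustrative only: duration-style properties written with an explicit unit
// suffix such as "ms" or "s", which is the kind of config under discussion.
val conf = new SparkConf()
  .set("spark.network.timeout", "120s")
  .set("spark.locality.wait", "3000ms")
```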
Github user MechCoder commented on the pull request:
https://github.com/apache/spark/pull/5330#issuecomment-92464457
@jkbradley fixed!
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/3314#issuecomment-92464346
correct, that was the last proposal anyway. @scwf were you going to make
those changes or did you have concerns or other ideas?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5459#issuecomment-92473419
[Test build #30188 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30188/consoleFull)
for PR 5459 at commit
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5236#discussion_r28260637
--- Diff: core/src/main/scala/org/apache/spark/SparkConf.scala ---
@@ -174,6 +174,35 @@ class SparkConf(loadDefaults: Boolean) extends
Cloneable with
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5493#discussion_r28261741
--- Diff: R/pkg/inst/tests/test_rdd.R ---
@@ -141,7 +141,8 @@ test_that(PipelinedRDD support actions: cache(),
persist(), unpersist(), checkp
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5463#issuecomment-92445769
[Test build #30169 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30169/consoleFull)
for PR 5463 at commit
Github user ilganeli commented on a diff in the pull request:
https://github.com/apache/spark/pull/5236#discussion_r28264329
--- Diff: core/src/test/scala/org/apache/spark/util/UtilsSuite.scala ---
@@ -35,7 +36,64 @@ import org.apache.hadoop.fs.Path
import
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5236#issuecomment-92453159
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5459#issuecomment-92453291
[Test build #30173 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30173/consoleFull)
for PR 5459 at commit
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5236#discussion_r28266570
--- Diff:
network/common/src/main/java/org/apache/spark/network/util/JavaUtils.java ---
@@ -121,4 +125,66 @@ private static boolean isSymlink(File file)
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/5450#discussion_r28267250
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/clustering/PowerIterationClustering.scala
---
@@ -38,7 +43,63 @@ import
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/5290#discussion_r28267224
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -379,6 +381,33 @@ class Analyzer(catalog: Catalog,
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/5354#issuecomment-92468412
I think I am wondering two things -- why is it necessary to include some of
these things like commons-beanutils, when they are already part of the
dependency graph of
Github user concretevitamin commented on the pull request:
https://github.com/apache/spark/pull/5495#issuecomment-92477984
Jenkins, test this please.
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/5042#issuecomment-92442370
Actually, on giving this a closer look I'm not sure whether this faithfully
respects all of the possible Parquet configurations for controlling
OutputCommitter
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5236#discussion_r28261343
--- Diff: core/src/main/scala/org/apache/spark/SparkConf.scala ---
@@ -174,6 +174,35 @@ class SparkConf(loadDefaults: Boolean) extends
Cloneable with Logging
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5463#issuecomment-92445803
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user ilganeli commented on a diff in the pull request:
https://github.com/apache/spark/pull/5236#discussion_r28263954
--- Diff: docs/configuration.md ---
@@ -429,10 +441,10 @@ Apart from these, the following properties are also
available, and may be useful
</tr>
<tr>
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5236#discussion_r28265566
--- Diff: docs/configuration.md ---
@@ -35,8 +35,20 @@ val conf = new SparkConf()
val sc = new SparkContext(conf)
{% endhighlight %}
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/5321#issuecomment-92457444
Looks good.
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/1063#issuecomment-92457478
The biggest reason for the divergence is that this API is much lighter weight
(you can define functions in a single line, inline with the rest of your
program). We can
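(A hedged Scala sketch of what "defining functions in a single line, inline" can look like; it assumes the comparison is with Spark SQL's UDF registration, and the function and table names are made up:)
```scala
// Assumes an existing SQLContext named sqlContext and a registered table "people".
// The point is only that the function is defined inline, in one line, rather than
// as a separately packaged class.
sqlContext.udf.register("strLen", (s: String) => s.length)
val lengths = sqlContext.sql("SELECT strLen(name) FROM people")
```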
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/5455#issuecomment-92458011
@MechCoder I think you need to update the matrix SerDe in PythonMLlibAPI.
Could you double check?
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/5290#discussion_r28266955
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -379,6 +381,33 @@ class Analyzer(catalog: Catalog,
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/4995#issuecomment-92464041
Here is the command I ran:
```
sc.parallelize(1 to 10).map(_ =>
org.apache.hadoop.hive.ql.session.SessionState.get().getCurrentDatabase()).collect()
```
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5330#issuecomment-92466609
[Test build #30185 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30185/consoleFull)
for PR 5330 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5173#issuecomment-92466565
[Test build #30186 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30186/consoleFull)
for PR 5173 at commit
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5493#discussion_r28270144
--- Diff: R/pkg/inst/tests/test_rdd.R ---
@@ -141,7 +141,8 @@ test_that(PipelinedRDD support actions: cache(),
persist(), unpersist(), checkp
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/5459#issuecomment-92472629
Jenkins, retest this please.
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5236#discussion_r28261212
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -1011,6 +1013,22 @@ private[spark] object Utils extends Logging {
}
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5236#discussion_r28262232
--- Diff: docs/configuration.md ---
@@ -429,10 +441,10 @@ Apart from these, the following properties are also
available, and may be useful
</tr>
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/5491#issuecomment-92451595
> I can easily imagine Spark applications running in jobserver mode that sit
> idle for a long time between active jobs.

I can see that, but I wonder if a
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5236#issuecomment-92452711
[Test build #30176 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30176/consoleFull)
for PR 5236 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5463#issuecomment-92452706
[Test build #30175 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30175/consoleFull)
for PR 5463 at commit
Github user ilganeli commented on a diff in the pull request:
https://github.com/apache/spark/pull/5236#discussion_r28266158
--- Diff:
network/common/src/main/java/org/apache/spark/network/util/JavaUtils.java ---
@@ -121,4 +125,66 @@ private static boolean isSymlink(File file)
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5236#issuecomment-92462131
[Test build #30184 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30184/consoleFull)
for PR 5236 at commit
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/5442#discussion_r28268272
--- Diff: docs/programming-guide.md ---
@@ -576,6 +660,34 @@ before the `reduce`, which would cause `lineLengths`
to be saved in memory after
</div>
Github user piaozhexiu commented on the pull request:
https://github.com/apache/spark/pull/5321#issuecomment-92467104
Thank you @tgravescs !
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5236#discussion_r28262452
--- Diff: docs/configuration.md ---
@@ -35,8 +35,20 @@ val conf = new SparkConf()
val sc = new SparkContext(conf)
{% endhighlight %}
Github user squito commented on the pull request:
https://github.com/apache/spark/pull/5491#issuecomment-92447578
@vanzin as a counterpoint -- I can easily imagine Spark applications running
in jobserver mode that sit idle for a long time between active jobs.
Can we
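(If the idle-application concern maps to dynamic allocation, which is an assumption since the PR itself isn't quoted here, a minimal sketch of the relevant settings would look like this:)
```scala
import org.apache.spark.SparkConf

// Illustrative only: with dynamic allocation, executors idle past the timeout
// are released and requested again when new jobs arrive, so a long-idle
// jobserver-style application holds few cluster resources between jobs.
val conf = new SparkConf()
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.dynamicAllocation.minExecutors", "0")
  .set("spark.dynamicAllocation.executorIdleTimeout", "60") // seconds
```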
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5434#issuecomment-92456712
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5434#issuecomment-92456696
[Test build #30174 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30174/consoleFull)
for PR 5434 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/5321
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/5330#discussion_r28267768
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/tree/model/treeEnsembleModels.scala
---
@@ -166,6 +158,60 @@ class GradientBoostedTreesModel(
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/5330#discussion_r28267765
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/tree/model/treeEnsembleModels.scala
---
@@ -131,34 +131,26 @@ class GradientBoostedTreesModel(
Github user concretevitamin commented on the pull request:
https://github.com/apache/spark/pull/5495#issuecomment-92479065
Hey @pwendell I can't seem to summon Jenkins now :(
Github user jkbradley commented on the pull request:
https://github.com/apache/spark/pull/5330#issuecomment-92481316
LGTM once tests pass. Thanks!
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4145#issuecomment-92481656
[Test build #30189 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30189/consoleFull)
for PR 4145 at commit
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/5447#issuecomment-92483319
This may seem a silly question, but why not just use `File.toURI`? It does
handle Windows paths robustly
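(For illustration, a small sketch of the suggestion; the path is made up:)
```scala
import java.io.File

// Illustrative only: File.toURI normalizes separators and percent-encodes
// characters such as spaces, so a Windows path becomes a well-formed file: URI.
val uri = new File("C:\\Users\\alice\\my data.txt").toURI
// On a Windows JVM this yields something like file:/C:/Users/alice/my%20data.txt
println(uri)
```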
Github user hlin09 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5493#discussion_r28260573
--- Diff: R/pkg/inst/tests/test_rdd.R ---
@@ -141,7 +141,7 @@ test_that(PipelinedRDD support actions: cache(),
persist(), unpersist(), checkp
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5236#issuecomment-92453148
[Test build #30176 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30176/consoleFull)
for PR 5236 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5459#issuecomment-92453329
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4435#issuecomment-92460625
[Test build #30182 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30182/consoleFull)
for PR 4435 at commit
Github user jkbradley commented on the pull request:
https://github.com/apache/spark/pull/5330#issuecomment-92460453
@MechCoder Thanks a lot for working through all of these tweaks with me!
The updates look good except for those 2 items
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/4015#discussion_r28270424
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala ---
@@ -43,6 +43,68 @@ import org.apache.spark.util.Utils
import
GitHub user hlin09 opened a pull request:
https://github.com/apache/spark/pull/5495
[Minor][SparkR] Minor refactor and removes redundancy related to
cleanClosure.
1. Only use `cleanClosure` in the creation of RRDDs. Normally, users and
developers do not need to call `cleanClosure` in
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5236#discussion_r28265238
--- Diff: docs/configuration.md ---
@@ -35,8 +35,20 @@ val conf = new SparkConf()
val sc = new SparkContext(conf)
{% endhighlight %}
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5463#issuecomment-92454126
[Test build #30177 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30177/consoleFull)
for PR 5463 at commit
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5236#discussion_r28265905
--- Diff:
network/common/src/main/java/org/apache/spark/network/util/JavaUtils.java ---
@@ -121,4 +125,66 @@ private static boolean isSymlink(File file)
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5350#issuecomment-92459624
[Test build #30181 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30181/consoleFull)
for PR 5350 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/5450
Github user calvinjia commented on the pull request:
https://github.com/apache/spark/pull/5354#issuecomment-92459698
@srowen Thanks for the feedback!
The number of dependencies increased because transitive dependencies were
promoted to avoid issues with being unable to
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5350#issuecomment-92459633
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4435#issuecomment-92462673
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5442#issuecomment-92462452
[Test build #30183 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30183/consoleFull)
for PR 5442 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5495#issuecomment-92473254
Can one of the admins verify this patch?
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5463#issuecomment-92454499
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5236#discussion_r28265326
--- Diff: docs/running-on-yarn.md ---
@@ -48,7 +48,7 @@ Most of the configs are the same for Spark on YARN as for
other deployment modes
</tr>
<tr>
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5350#issuecomment-92457149
[Test build #30178 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30178/consoleFull)
for PR 5350 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5236#issuecomment-92457121
[Test build #30179 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30179/consoleFull)
for PR 5236 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5350#issuecomment-92457158
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5236#issuecomment-92457128
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5463#issuecomment-92457831
[Test build #30180 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30180/consoleFull)
for PR 5463 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5350#issuecomment-92459319
[Test build #30181 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30181/consoleFull)
for PR 5350 at commit
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/3314#issuecomment-92459285
Ah I see. I misunderstood what was being proposed. The proposal is not to
have a global range, but a local range for each of the existing port configs,
correct?
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/5450#issuecomment-92459332
LGTM. Merged into master. Thanks!
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5173#issuecomment-92486933
[Test build #30186 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30186/consoleFull)
for PR 5173 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5173#issuecomment-92486953
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5442#issuecomment-92488896
[Test build #30183 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30183/consoleFull)
for PR 5442 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5442#issuecomment-92488912
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5330#issuecomment-92491539
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5330#issuecomment-92491526
[Test build #30185 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30185/consoleFull)
for PR 5330 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5383#issuecomment-92501509
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5383#issuecomment-92501504
[Test build #30195 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30195/consoleFull)
for PR 5383 at commit
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/4764#issuecomment-92509606
Here is the JIRA: SPARK-4366. Unless you think you will have something in
the next day or two, would you mind closing this JIRA? I'd like to keep the PR
queue to only
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/5390#issuecomment-92509822
Thanks! Merged to master.
Github user jkbradley commented on the pull request:
https://github.com/apache/spark/pull/5431#issuecomment-92509721
Should this method in Params be made abstract?
```
def validate(paramMap: ParamMap): Unit = {}
```
I just realized we haven't been using it, and making
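(A hedged sketch of the two alternatives being asked about; the `ParamMap` stub below is only a stand-in so the snippet is self-contained, not the real class:)
```scala
// Stand-in for the real ParamMap type, just so this sketch compiles on its own.
class ParamMap

trait ParamsWithDefaultValidate {
  // Current form: a concrete no-op, so implementations can silently skip validation.
  def validate(paramMap: ParamMap): Unit = {}
}

trait ParamsWithAbstractValidate {
  // Abstract form: every concrete implementation must supply a validate().
  def validate(paramMap: ParamMap): Unit
}
```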
GitHub user rtreffer opened a pull request:
https://github.com/apache/spark/pull/5498
Export driver quirks
Make it possible to (temporarily) override the driver quirks. This
can be used to overcome problems with specific schemas or to
add new JDBC driver support on the fly.
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4688#discussion_r28284272
--- Diff:
core/src/main/scala/org/apache/spark/deploy/ExecutorDelegationTokenUpdater.scala
---
@@ -0,0 +1,106 @@
+/*
+ * Licensed to the Apache
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/5497#issuecomment-92514199
@yhuai this is a really cool improvement; it will definitely improve
performance a lot. I have some comments about future improvements (of
course we can
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/5236#issuecomment-92514243
Looking strong. I think that's all the comments addressed now. Going once,
going twice?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5499#issuecomment-92515928
[Test build #30199 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30199/consoleFull)
for PR 5499 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5173#issuecomment-92517118
[Test build #30193 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30193/consoleFull)
for PR 5173 at commit
Github user harishreedharan commented on a diff in the pull request:
https://github.com/apache/spark/pull/4688#discussion_r28285855
--- Diff:
core/src/main/scala/org/apache/spark/executor/CoarseGrainedExecutorBackend.scala
---
@@ -164,6 +174,9 @@ private[spark] object