Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3249#issuecomment-65758205
[Test build #24177 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24177/consoleFull)
for PR 3249 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3249#issuecomment-65758275
[Test build #24177 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24177/consoleFull)
for PR 3249 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3249#issuecomment-65758277
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user suyanNone commented on a diff in the pull request:
https://github.com/apache/spark/pull/3574#discussion_r21359660
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -1089,15 +1089,17 @@ private[spark] class BlockManager(
val info
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2853#issuecomment-65759456
[Test build #541 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/541/consoleFull)
for PR 2853 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3249#issuecomment-65760473
[Test build #24178 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24178/consoleFull)
for PR 3249 at commit
Github user tsingfu commented on the pull request:
https://github.com/apache/spark/pull/3501#issuecomment-65760992
@marmbrus Do we need to do anything more?
If this implementation is quite complex, we may consider reducing the number
of supported comment styles (only support -- as comment
Github user varunsaxena commented on the pull request:
https://github.com/apache/spark/pull/3562#issuecomment-65761296
@rxin, I will just summarize the configuration defaults I have
used. I put a value of 100 in the initial pull request with the intention of having
a further
Github user maji2014 commented on the pull request:
https://github.com/apache/spark/pull/3553#issuecomment-65761712
@pwendell any idea about this title?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project
Github user varunsaxena commented on a diff in the pull request:
https://github.com/apache/spark/pull/3562#discussion_r21361036
--- Diff: docs/configuration.md ---
@@ -777,6 +777,16 @@ Apart from these, the following properties are also
available, and may be useful
</td>
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3618#issuecomment-65764171
Can one of the admins verify this patch?
GitHub user saucam opened a pull request:
https://github.com/apache/spark/pull/3618
SPARK-4762: Add support for tuples in 'where in' clause query
Currently, in the WHERE IN clause the filter is applied only to a single
column. We can enhance it to accept filters on multiple
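The multi-column IN semantics this PR describes can be sketched with plain Scala collections; the row data and column names below are hypothetical illustrations, not the actual Hive parser change:

```scala
// WHERE (name, dept) IN (("alice", "eng"), ("bob", "sales")),
// expressed as a tuple-membership filter over in-memory rows.
val rows = Seq(("alice", "eng"), ("bob", "sales"), ("carol", "eng"))
val wanted = Set(("alice", "eng"), ("bob", "sales"))

// The whole (name, dept) tuple is tested for membership, not a single column.
val filtered = rows.filter(wanted.contains)
println(filtered) // List((alice,eng), (bob,sales))
```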
Github user saucam commented on the pull request:
https://github.com/apache/spark/pull/3618#issuecomment-65764552
@pwendell this PR requires a change in the Hive parser, for which I created
a PR against Hive trunk here: https://github.com/apache/hive/pull/25
Can you please
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/3348#issuecomment-65766741
I have rebased with master. Please review.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3249#issuecomment-65767090
[Test build #24178 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24178/consoleFull)
for PR 3249 at commit
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/3215#issuecomment-65767093
@tgravescs @andrewor14 do you feel comfortable merging this now that 1.2 is
out the door?
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3249#issuecomment-65767098
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/3618#issuecomment-65767357
@saucam mind tagging this PR as [SQL]?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3215#issuecomment-65767819
[Test build #24180 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24180/consoleFull)
for PR 3215 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3215#issuecomment-65768623
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
GitHub user ankurdave opened a pull request:
https://github.com/apache/spark/pull/3619
[SPARK-4763] All-pairs shortest paths algorithm for GraphX
Computes unweighted all-pairs shortest paths, returning an RDD containing
the shortest-path distance between all pairs of reachable
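A minimal single-machine sketch of unweighted all-pairs shortest paths (repeated BFS over an adjacency map); the distributed GraphX implementation in this PR computes the same distances as an RDD, so this is only an illustration of the semantics:

```scala
// BFS from one source over an adjacency map: vertex -> distance.
def bfs(adj: Map[Int, Seq[Int]], src: Int): Map[Int, Int] = {
  var dist = Map(src -> 0)
  var frontier = Set(src)
  var d = 0
  while (frontier.nonEmpty) {
    d += 1
    // Expand the frontier; keep only vertices not yet visited.
    val next = frontier.flatMap(u => adj.getOrElse(u, Nil)).filterNot(dist.contains)
    next.foreach(v => dist += v -> d)
    frontier = next
  }
  dist
}

// All pairs of reachable vertices: (src, dst) -> shortest-path distance.
def allPairs(adj: Map[Int, Seq[Int]]): Map[(Int, Int), Int] =
  adj.keys.flatMap(s => bfs(adj, s).map { case (t, dd) => (s, t) -> dd }).toMap

val g = Map(1 -> Seq(2), 2 -> Seq(3), 3 -> Seq.empty[Int])
println(allPairs(g)((1, 3))) // 2
```

Unreachable pairs are simply absent from the result, matching the "all pairs of reachable vertices" wording above.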
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3619#issuecomment-65769987
[Test build #24181 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24181/consoleFull)
for PR 3619 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3215#issuecomment-65776868
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3215#issuecomment-65776860
[Test build #24180 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24180/consoleFull)
for PR 3215 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3619#issuecomment-65778564
[Test build #24181 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24181/consoleFull)
for PR 3619 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3619#issuecomment-65778569
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/3574#issuecomment-65779235
@JoshRosen
Thread A enters removeBlock() and gets the info for blockId1.
Thread B enters dropFromMemory() and gets the info for blockId1.
Now threads A and B both want to get
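A common remedy for races like the one described above is to lock the per-block info object and re-check, under the lock, that it is still the registered one. The sketch below uses simplified names and is not the actual BlockManager code:

```scala
import scala.collection.concurrent.TrieMap

final class BlockInfo
val blockInfo = new TrieMap[String, BlockInfo]()
blockInfo.put("blockId1", new BlockInfo)
@volatile var removals = 0

def removeIfCurrent(id: String): Unit =
  blockInfo.get(id).foreach { info =>
    info.synchronized {
      // Re-check under the lock: the other thread may have removed it already.
      if (blockInfo.get(id).contains(info)) {
        blockInfo.remove(id)
        removals += 1
      }
    }
  }

// Two threads race, mimicking removeBlock() vs. dropFromMemory();
// only one of them performs the removal.
val threads = Seq.fill(2)(new Thread(() => removeIfCurrent("blockId1")))
threads.foreach(_.start())
threads.foreach(_.join())
println(removals) // 1
```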
Github user tsliwowicz commented on the pull request:
https://github.com/apache/spark/pull/2914#issuecomment-65783784
Seems like an issue with Jenkins
Github user tsliwowicz commented on the pull request:
https://github.com/apache/spark/pull/2854#issuecomment-65783822
Seems like an issue with Jenkins
GitHub user CrazyJvm opened a pull request:
https://github.com/apache/spark/pull/3620
do you mean inadvertently?
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/CrazyJvm/spark streaming-foreachRDD
Alternatively you can review
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3620#issuecomment-65791786
[Test build #24182 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24182/consoleFull)
for PR 3620 at commit
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/3620#issuecomment-65791847
Correct, but this is really trivial.
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/3597#issuecomment-65792502
See my comments in https://issues.apache.org/jira/browse/SPARK-4734 as to
why I don't think this is a good idea. In particular, this solution clearly has
the potential to
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3587#discussion_r21372796
--- Diff: core/src/main/scala/org/apache/spark/api/java/JavaUtils.scala ---
@@ -32,7 +33,65 @@ private[spark] object JavaUtils {
def
Github user brdw commented on the pull request:
https://github.com/apache/spark/pull/2872#issuecomment-65801064
I'd love to see this as well. We have a strict vpc policy.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3620#issuecomment-65802460
[Test build #24182 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24182/consoleFull)
for PR 3620 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3620#issuecomment-65802466
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
GitHub user liancheng opened a pull request:
https://github.com/apache/spark/pull/3621
[SPARK-4761][SQL] Enables Kryo by default in Spark SQL Thrift server
Enables Kryo and disables reference tracking by default in Spark SQL Thrift
server. Configurations explicitly defined by users
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/3619#discussion_r21377222
--- Diff: graphx/src/main/scala/org/apache/spark/graphx/Pregel.scala ---
@@ -139,6 +146,14 @@ object Pregel extends Logging {
// get to send
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3621#issuecomment-65804482
[Test build #24183 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24183/consoleFull)
for PR 3621 at commit
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3587#discussion_r21377699
--- Diff: core/src/main/scala/org/apache/spark/api/java/JavaUtils.scala ---
@@ -32,7 +33,65 @@ private[spark] object JavaUtils {
def
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3587#issuecomment-65805992
[Test build #24184 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24184/consoleFull)
for PR 3587 at commit
Github user zsxwing commented on the pull request:
https://github.com/apache/spark/pull/3587#issuecomment-65806925
LGTM
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/3409#issuecomment-65812731
I'm personally not a fan of executorLauncher. Cluster mode also launches
executors and users shouldn't really have to know executorLauncher = client
mode. If you
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3621#issuecomment-65814668
[Test build #24183 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24183/consoleFull)
for PR 3621 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3621#issuecomment-65814677
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/2676#discussion_r21382056
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -641,6 +641,7 @@ class SparkContext(config: SparkConf) extends Logging {
Github user changetip commented on the pull request:
https://github.com/apache/spark/pull/2872#issuecomment-65814859
Hi mvj101, dreid93 sent you a Bitcoin tip worth 1 lunch (21,255
bits/$8.00), and I'm here to deliver it **[collect your tip at
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/2676#issuecomment-65814785
I was waiting for clarification from @pwendell on my question about his
comment.
Github user dreid93 commented on the pull request:
https://github.com/apache/spark/pull/2872#issuecomment-65814791
I'll buy anyone willing to take care of this merge lunch via @ChangeTip :)
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3587#issuecomment-65819459
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3587#issuecomment-65819448
[Test build #24184 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24184/consoleFull)
for PR 3587 at commit
Github user ryan-williams commented on a diff in the pull request:
https://github.com/apache/spark/pull/2848#discussion_r21386174
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -412,6 +403,48 @@ private[spark] object Utils extends Logging {
}
Github user ryan-williams commented on a diff in the pull request:
https://github.com/apache/spark/pull/2848#discussion_r21386186
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -412,6 +403,48 @@ private[spark] object Utils extends Logging {
}
Github user ryan-williams commented on a diff in the pull request:
https://github.com/apache/spark/pull/2848#discussion_r21386332
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -412,6 +403,48 @@ private[spark] object Utils extends Logging {
}
Github user ryan-williams commented on a diff in the pull request:
https://github.com/apache/spark/pull/2848#discussion_r21386398
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -412,6 +403,48 @@ private[spark] object Utils extends Logging {
}
Github user ryan-williams commented on a diff in the pull request:
https://github.com/apache/spark/pull/2848#discussion_r21386447
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -412,6 +403,48 @@ private[spark] object Utils extends Logging {
}
Github user ryan-williams commented on a diff in the pull request:
https://github.com/apache/spark/pull/2848#discussion_r21386501
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -412,6 +403,48 @@ private[spark] object Utils extends Logging {
}
Github user ryan-williams commented on a diff in the pull request:
https://github.com/apache/spark/pull/2848#discussion_r21386652
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -412,6 +403,48 @@ private[spark] object Utils extends Logging {
}
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/3409#issuecomment-65826773
@tgravescs that makes sense. clientmode.am sounds good to me.
Github user ryan-williams commented on a diff in the pull request:
https://github.com/apache/spark/pull/2848#discussion_r21386945
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -412,6 +403,48 @@ private[spark] object Utils extends Logging {
}
Github user ryan-williams commented on the pull request:
https://github.com/apache/spark/pull/2848#issuecomment-65827581
Thanks for the review pass, @JoshRosen.
As I mentioned in some of the comments, this was attempting to shoehorn 4
basically-identical blocks of code from
Github user koertkuipers commented on a diff in the pull request:
https://github.com/apache/spark/pull/2963#discussion_r21387829
--- Diff: core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala
---
@@ -460,6 +461,63 @@ class PairRDDFunctions[K, V](self: RDD[(K, V)])
Github user koertkuipers commented on the pull request:
https://github.com/apache/spark/pull/2963#issuecomment-65828969
Hey @zsxwing,
In Scala Seq the order in which the values get processed in foldLeft is
well defined.
But can we make any assumptions at all about the
Github user ryan-williams commented on a diff in the pull request:
https://github.com/apache/spark/pull/2848#discussion_r21389032
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -412,6 +403,48 @@ private[spark] object Utils extends Logging {
}
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/3621#issuecomment-65832481
Awesome, thanks Cheng. This is great. I forgot we can still modify the
SparkConf before we pass it to the SparkContext constructor.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3621
Github user tsudukim closed the pull request at:
https://github.com/apache/spark/pull/3280
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/3280#issuecomment-65834528
Thank you! @JoshRosen
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3617#issuecomment-65834619
ok to test
Github user ryan-williams commented on the pull request:
https://github.com/apache/spark/pull/2848#issuecomment-65834877
OK @JoshRosen I fixed and cleaned things up.
* the two overloaded `maybeMoveFile` signatures are more distinctly named
(`downloadStreamAndMove` and
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3617#issuecomment-65835734
[Test build #24185 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24185/consoleFull)
for PR 3617 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3523#issuecomment-65836128
[Test build #24186 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24186/consoleFull)
for PR 3523 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3617#issuecomment-65836379
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3617#issuecomment-65836374
[Test build #24185 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24185/consoleFull)
for PR 3617 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3523#issuecomment-65837247
[Test build #24187 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24187/consoleFull)
for PR 3523 at commit
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/3591#issuecomment-65837669
I'm not sure which is better, but I tend to think we should not submit this upstream.
It would be a good idea if this were made from the latest sbt script, but
unfortunately this is made
GitHub user kayousterhout opened a pull request:
https://github.com/apache/spark/pull/3622
[SPARK-4765] Make GC time always shown in UI.
This commit removes the GC time for each task from the set of
optional, additional metrics, and instead always shows it for
each task.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3622#issuecomment-65840120
[Test build #24188 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24188/consoleFull)
for PR 3622 at commit
Github user ankurdave commented on a diff in the pull request:
https://github.com/apache/spark/pull/3619#discussion_r21393556
--- Diff: graphx/src/main/scala/org/apache/spark/graphx/Pregel.scala ---
@@ -139,6 +146,14 @@ object Pregel extends Logging {
// get to send
Github user ankurdave commented on the pull request:
https://github.com/apache/spark/pull/2631#issuecomment-65841397
Due to https://issues.apache.org/jira/browse/SPARK-4672, we now support
checkpointing graphs (by checkpointing their constituent vertices and edges)
with the same
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3570#issuecomment-65844299
[Test build #24189 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24189/consoleFull)
for PR 3570 at commit
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2853#issuecomment-65845790
LGTM. Since this is code-cleanup and not a bugfix, I'm only going to merge
this into `master` (1.3.0). Thanks!
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/2853
Github user petervandenabeele commented on the pull request:
https://github.com/apache/spark/pull/3517#issuecomment-65846515
More problematic (and sorry I had not seen that before) ... there already
_is_ an example file named `people.txt` with a different format:
```
$
GitHub user holdenk opened a pull request:
https://github.com/apache/spark/pull/3623
SPARK-4767: Add support for launching in a specified placement group to
spark_ec2
Placement groups are cool and all the cool kids are using them. Let's add
support for them to spark_ec2.py because
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3523#issuecomment-65847289
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3523#issuecomment-65847280
[Test build #24186 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24186/consoleFull)
for PR 3523 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3623#issuecomment-65847864
[Test build #24190 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24190/consoleFull)
for PR 3623 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3523#issuecomment-65848971
[Test build #24187 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24187/consoleFull)
for PR 3523 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3523#issuecomment-65848977
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3622#issuecomment-65851401
[Test build #24188 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24188/consoleFull)
for PR 3622 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3622#issuecomment-65851408
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user kayousterhout commented on the pull request:
https://github.com/apache/spark/pull/3622#issuecomment-65852060
MIMA tests pass locally; I rebased this on master to see if that makes the
tests pass
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3622#issuecomment-65852692
[Test build #24191 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24191/consoleFull)
for PR 3622 at commit
GitHub user sryza opened a pull request:
https://github.com/apache/spark/pull/3624
SPARK-4770. [DOC] [YARN] spark.scheduler.minRegisteredResourcesRatio documented default is incorrect for YARN
You can merge this pull request into a Git repository by running:
$ git pull
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3624#issuecomment-65854478
[Test build #24192 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24192/consoleFull)
for PR 3624 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3570#issuecomment-65855809
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3570#issuecomment-65855799
[Test build #24189 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24189/consoleFull)
for PR 3570 at commit
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/3625
[SPARK-4740] [WIP] Create multiple concurrent connections between two peer
nodes in Netty.
Need to add test cases.
You can merge this pull request into a Git repository by running:
$ git