Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3050#issuecomment-65286730
[Test build #24052 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24052/consoleFull)
for PR 3050 at commit
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/3470#discussion_r21183040
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/sources/interfaces.scala ---
@@ -37,7 +37,7 @@ import
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/3470#discussion_r21183110
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/sources/interfaces.scala ---
@@ -37,7 +37,7 @@ import
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3543#issuecomment-65288815
[Test build #24053 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24053/consoleFull)
for PR 3543 at commit
GitHub user brennonyork opened a pull request:
https://github.com/apache/spark/pull/3559
[SPARK-4616][Core] - SPARK_CONF_DIR is not effective in spark-submit
By default `SPARK_CONF_DIR` cannot be set from the
`spark-submit` script, but a Spark properties file can be.
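A minimal sketch of the lookup order such a fix implies (the function and fallback names here are illustrative, not the actual spark-submit code):

```scala
// Illustrative sketch only: prefer an explicitly set SPARK_CONF_DIR,
// falling back to the conf/ directory under SPARK_HOME.
def resolveConfDir(env: Map[String, String]): String =
  env.get("SPARK_CONF_DIR")
    .orElse(env.get("SPARK_HOME").map(home => home + "/conf"))
    .getOrElse("conf")
```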
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3559#issuecomment-65291188
Can one of the admins verify this patch?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3517#issuecomment-65291586
/cc @marmbrus, since this is a SQL change.
GitHub user tsudukim opened a pull request:
https://github.com/apache/spark/pull/3560
[SPARK-4701] Typo in sbt/sbt
Fixed the typo.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/tsudukim/spark feature/SPARK-4701
Alternatively
GitHub user brennonyork opened a pull request:
https://github.com/apache/spark/pull/3561
[SPARK-4298][Core] - The spark-submit cannot read Main-Class from Manifest.
Resolves a bug where the `Main-Class` from a .jar file wasn't being read in
properly. This was caused by the fact
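For context, reading `Main-Class` from a jar's manifest is done with the standard `java.util.jar` API; a minimal sketch (not the actual spark-submit code path):

```scala
import java.util.jar.{Attributes, JarFile}

// Sketch: open the jar, read its manifest, and pull out Main-Class.
// Returns None when the jar has no manifest or no Main-Class attribute.
def mainClassOf(jarPath: String): Option[String] = {
  val jar = new JarFile(jarPath)
  try Option(jar.getManifest)
    .flatMap(m => Option(m.getMainAttributes.getValue(Attributes.Name.MAIN_CLASS)))
  finally jar.close()
}
```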
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3558#issuecomment-65292413
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3558#issuecomment-65292399
[Test build #24051 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24051/consoleFull)
for PR 3558 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3560#issuecomment-65292562
[Test build #24054 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24054/consoleFull)
for PR 3560 at commit
Github user petervandenabeele commented on the pull request:
https://github.com/apache/spark/pull/3517#issuecomment-65292817
Thx @JoshRosen for your follow-up.
I locally verified a squashed version of my 2 commits. The squashed change
is now very limited, affecting 6
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3561#issuecomment-65292887
Can one of the admins verify this patch?
Github user ilganeli closed the pull request at:
https://github.com/apache/spark/pull/3518
GitHub user ilganeli reopened a pull request:
https://github.com/apache/spark/pull/3518
[SPARK-3694] RDD and Task serialization debugging output
Hi all - in addition to what was explicitly requested in the original JIRA,
I also added the ability to have a trace of the serialization
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3552
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3560#issuecomment-65295163
[Test build #24055 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24055/consoleFull)
for PR 3560 at commit
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3555#issuecomment-65295402
Please add a test to `HiveQuerySuite`.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3401
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3401#issuecomment-65295741
Thanks! Merged to master and 1.2.
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/3508#issuecomment-65295994
Yeah I think what we want from this patch is the following behavior:
```
System.exit(X) = success
Uncaught exception = fail
sc.stop = success
```
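That mapping can be sketched as follows (the type and names are illustrative, not the patch's actual code):

```scala
// Illustrative model of how the application's end state maps to a verdict.
sealed trait AppEnd
final case class ExitCalled(status: Int) extends AppEnd // System.exit(X)
case object UncaughtException extends AppEnd
case object ContextStopped extends AppEnd               // sc.stop()

def succeeded(end: AppEnd): Boolean = end match {
  case ExitCalled(_)     => true  // System.exit(X) = success
  case UncaughtException => false // uncaught exception = fail
  case ContextStopped    => true  // sc.stop = success
}
```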
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3526
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3050#issuecomment-65299414
[Test build #24052 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24052/consoleFull)
for PR 3050 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3050#issuecomment-65299421
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/3541#issuecomment-65301702
@davies when you can have multiple executors per host, or an executor
restarted on a host after failure, this can manifest ... please refer to the
comments that
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3526#issuecomment-65303090
Thanks, merged to master and 1.2
Github user ankurdave commented on the pull request:
https://github.com/apache/spark/pull/3549#issuecomment-65303586
ok to test
Github user kayousterhout commented on the pull request:
https://github.com/apache/spark/pull/3541#issuecomment-65303921
@mridulm I think maybe what @davies was asking (which I'm also wondering
about) was: why would a task fail on executor A but not on executor B, if
they're both on
Github user ankurdave commented on the pull request:
https://github.com/apache/spark/pull/3544#issuecomment-65304224
ok to test
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3549#issuecomment-65304304
[Test build #24056 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24056/consoleFull)
for PR 3549 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3543#issuecomment-65304557
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3543#issuecomment-65304552
[Test build #24053 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24053/consoleFull)
for PR 3543 at commit
Github user markhamstra commented on the pull request:
https://github.com/apache/spark/pull/3548#issuecomment-65304935
https://github.com/apache/spark/pull/3550 doesn't address SPARK-2424; so if
we want to handle that issue in 1.2, then we still need a PR for it.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3544#issuecomment-65305050
[Test build #24057 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24057/consoleFull)
for PR 3544 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3560#issuecomment-65305205
[Test build #24054 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24054/consoleFull)
for PR 3560 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3560#issuecomment-65305216
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/3543#discussion_r21192103
--- Diff: docs/configuration.md ---
@@ -664,6 +665,24 @@ Apart from these, the following properties are also
available, and may be useful
/td
/tr
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/3543#issuecomment-65306213
LGTM. Could you rephrase the title to just say [SPARK-4229] instead of
Closes blah, to follow the convention used for PR titles?
Github user koeninger commented on a diff in the pull request:
https://github.com/apache/spark/pull/3543#discussion_r21192361
--- Diff: docs/configuration.md ---
@@ -664,6 +665,24 @@ Apart from these, the following properties are also
available, and may be useful
/td
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3550#issuecomment-65307121
One idea for testing this: comment out the line in the
`DisassociationEvent` handler that removes the application, then check that a
killed application is eventually
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3560#issuecomment-65307833
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3560#issuecomment-65307820
[Test build #24055 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24055/consoleFull)
for PR 3560 at commit
Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/3541#issuecomment-65309912
Note: I am ignoring deterministic failure reasons here (which will fail on
any host and usually point to a bug in user or Spark code).
Task failure could be due to
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/3541#issuecomment-65312874
@mridulm In the case of an executor restarted on failure, the executor id
will change, so putting the previous executor id in the blacklist does make
sense. It only helps when
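The blacklist idea under discussion can be sketched roughly as follows (names are illustrative; the actual scheduler code is considerably more involved):

```scala
import scala.collection.mutable

// Sketch of a per-task executor blacklist: after a task fails on an
// executor, avoid rescheduling that same task on that executor.
class TaskBlacklist {
  private val failedOn = mutable.Map.empty[Long, mutable.Set[String]]

  def recordFailure(taskId: Long, executorId: String): Unit =
    failedOn.getOrElseUpdate(taskId, mutable.Set.empty[String]) += executorId

  def canRunOn(taskId: Long, executorId: String): Boolean =
    !failedOn.get(taskId).exists(_.contains(executorId))
}
```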
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/3538#discussion_r21195689
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/api/java/DataType.java ---
@@ -82,6 +82,11 @@
*/
public static final ShortType
GitHub user varunsaxena opened a pull request:
https://github.com/apache/spark/pull/3562
[SPARK-4688] Have a single shared network timeout in Spark
You can merge this pull request into a Git repository by running:
$
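The fallback scheme such a change implies can be sketched as follows (the per-subsystem property name and the 120-second default are illustrative assumptions, not taken from the PR):

```scala
// Sketch: each subsystem's timeout defaults to a single shared
// spark.network.timeout when no subsystem-specific value is set.
def timeoutSeconds(conf: Map[String, String], key: String): Int =
  conf.get(key)
    .orElse(conf.get("spark.network.timeout"))
    .map(_.toInt)
    .getOrElse(120) // illustrative hard default
```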
Github user jacek-lewandowski commented on the pull request:
https://github.com/apache/spark/pull/2739#issuecomment-65314387
I still have got one test failing:
```
[info] InputOutputMetricsSuite:
[info] - input metrics when reading text file with single split (34
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3562#issuecomment-65314923
Can one of the admins verify this patch?
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/3562#discussion_r21196545
--- Diff: core/src/main/scala/org/apache/spark/util/AkkaUtils.scala ---
@@ -65,7 +65,8 @@ private[spark] object AkkaUtils extends Logging {
val
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/3562#discussion_r21196524
--- Diff:
network/common/src/main/java/org/apache/spark/network/util/TransportConf.java
---
@@ -37,7 +37,8 @@ public boolean preferDirectBufs() {
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/3562#discussion_r21196604
--- Diff:
core/src/main/scala/org/apache/spark/storage/BlockManagerMasterActor.scala ---
@@ -53,7 +53,9 @@ class BlockManagerMasterActor(val isLocal: Boolean,
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3538
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3538#issuecomment-65315929
Thanks for fixing this! I've merged to master and 1.2 (I fixed the
indentation while merging).
Github user varunsaxena commented on the pull request:
https://github.com/apache/spark/pull/3562#issuecomment-65316168
What should the default spark.network.timeout value be? I
have kept it at 100 sec. Should it be different?
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3443#issuecomment-65316695
Thanks for fixing this! I've merged to master and 1.2.
Github user JoshRosen closed the pull request at:
https://github.com/apache/spark/pull/2186
Github user JoshRosen closed the pull request at:
https://github.com/apache/spark/pull/2951
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3443
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3549#issuecomment-65316886
[Test build #24056 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24056/consoleFull)
for PR 3549 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3549#issuecomment-65316895
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3528#issuecomment-65317312
Thanks for fixing this! Merged to master and 1.2
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3528
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3544#issuecomment-65317441
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3544#issuecomment-65317431
[Test build #24057 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24057/consoleFull)
for PR 3544 at commit
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3556#issuecomment-65317492
Can you please add a test to `HiveQuerySuite`?
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/3547#discussion_r21197678
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveContext.scala ---
@@ -416,6 +416,8 @@ object HiveContext {
case (bin: Array[Byte],
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3547#issuecomment-65317727
Thanks for tracking this down! I'm going to go ahead and merge now so we
can include this in 1.2. Can you please open a follow-up PR to remove the
unnecessary
Github user varunsaxena commented on a diff in the pull request:
https://github.com/apache/spark/pull/3562#discussion_r21197916
--- Diff: core/src/main/scala/org/apache/spark/util/AkkaUtils.scala ---
@@ -65,7 +65,8 @@ private[spark] object AkkaUtils extends Logging {
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3547
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3431#issuecomment-65318247
Cool feature :) I wanted to include this in the first cut but ran out of
time. It's too late for 1.2 but I'll try and review this soon.
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/3545#issuecomment-65318496
I talked to @tdas and this is fine, but even with this, we should figure
out why f is capturing its outer this way and remove that since it is expensive
for serialization.
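The pitfall being described, a closure that captures its enclosing instance and drags it into serialization, looks roughly like this (class and field names are illustrative, not the code under review):

```scala
import java.io.{ByteArrayOutputStream, ObjectOutputStream}

// Illustrative only: `bad` calls a member method, so the lambda captures
// `this`, and the 1 MB field rides along when the closure is serialized.
class Transformer extends Serializable {
  val bigState: Array[Byte] = new Array[Byte](1 << 20)
  private def inc(x: Int): Int = x + 1

  def bad: Int => Int = x => inc(x)  // captures the whole Transformer

  def good: Int => Int = {
    val g: Int => Int = x => x + 1   // local value, no outer capture
    g
  }
}

def serializedSize(obj: AnyRef): Int = {
  val buf = new ByteArrayOutputStream()
  val out = new ObjectOutputStream(buf)
  out.writeObject(obj); out.close()
  buf.size
}
```

Copying needed members into local vals before building the closure is the usual way to break such a capture.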
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3046#issuecomment-65318463
[Test build #24058 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24058/consoleFull)
for PR 3046 at commit
Github user varunsaxena commented on the pull request:
https://github.com/apache/spark/pull/3562#issuecomment-65320054
Made the changes as per review. Also updated configuration.md
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/2739#issuecomment-65321102
If you don't believe it's your fault, it will be much easier to help if you
create the new PR and an admin triggers a Jenkins job to test it. Then we can
see whether it's
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2739#issuecomment-65322205
Jenkins, test this please.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2739#issuecomment-65322917
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24059/consoleFull)
for PR 2739 at commit
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/3461#issuecomment-65322960
@jkbradley minor: Shall we merge RF and GBT into a single section called
tree ensembles (random forests and gradient-boosted trees), on the same level
as decision trees?
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2739#issuecomment-65323531
Oh yeah - this is still against 1.1. @jacek-lewandowski can you open a new
PR and close this one?
GitHub user scwf opened a pull request:
https://github.com/apache/spark/pull/3563
[SQL] Remove unnecessary case in HiveContext.toHiveString
a follow up of #3547
/cc @marmbrus
You can merge this pull request into a Git repository by running:
$ git pull
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/3547#issuecomment-65324177
Sure, opened #3563 to delete it.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3563#issuecomment-65324471
[Test build #24060 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24060/consoleFull)
for PR 3563 at commit
Github user scwf commented on a diff in the pull request:
https://github.com/apache/spark/pull/3470#discussion_r21201664
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/sources/interfaces.scala ---
@@ -37,7 +37,7 @@ import
GitHub user nchammas opened a pull request:
https://github.com/apache/spark/pull/3564
[SPARK-3431] [WIP] Parallelize test execution
This is currently a work in progress to experiment with various options for
parallelizing tests.
You can merge this pull request into a Git
Github user Lewuathe commented on the pull request:
https://github.com/apache/spark/pull/3554#issuecomment-65327154
@sryza There is no additional process. You can build the docs with the
instructions below:
https://github.com/apache/spark/blob/master/docs/README.md
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3564#issuecomment-65327428
[Test build #24061 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24061/consoleFull)
for PR 3564 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3564#issuecomment-65327960
[Test build #24062 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24062/consoleFull)
for PR 3564 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3046#issuecomment-65328235
[Test build #24058 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24058/consoleFull)
for PR 3046 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3046#issuecomment-65328241
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/3524#issuecomment-65329606
I don't understand this; why would it be giving 3 * currentMemory? The
problem is that myMemoryThreshold is what the memory manager thought we were
using so far (what it
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/3524#issuecomment-65329936
BTW while it's true that it will ask for more than twice myMemoryThreshold,
that's actually intentional. At the beginning, when a collection ramps up, its
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2739#issuecomment-65330247
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24059/consoleFull)
for PR 2739 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2739#issuecomment-65330260
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user ryan-williams commented on the pull request:
https://github.com/apache/spark/pull/3524#issuecomment-65330404
OK, I think I understand the intention. You want to increase the amount
that the memory pool thinks you have from `myMemoryThreshold` to
`2*currentMemory`, hence
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/3524#issuecomment-65330532
Oh yes, that's true, we changed it to 0. Anyway this comment change sounds
fine. Maybe you can also make sure that the ScalaDoc of
ShuffleMemoryManager.tryToAcquire says
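The ramp-up logic under discussion can be sketched as follows (a simplification of the spill check; the `tryToAcquire` parameter stands in for the shared memory manager, and the names are illustrative):

```scala
// Sketch: when a collection's estimated size crosses its current threshold,
// request enough from the shared pool to double the collection, and spill
// only if the grant falls short. Returns (shouldSpill, newThreshold).
def shouldSpill(currentMemory: Long,
                myMemoryThreshold: Long,
                tryToAcquire: Long => Long): (Boolean, Long) = {
  if (currentMemory < myMemoryThreshold) (false, myMemoryThreshold)
  else {
    val amountToRequest = 2 * currentMemory - myMemoryThreshold
    val granted = tryToAcquire(amountToRequest)
    val newThreshold = myMemoryThreshold + granted
    (currentMemory >= newThreshold, newThreshold)
  }
}
```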
Github user Lewuathe commented on a diff in the pull request:
https://github.com/apache/spark/pull/3562#discussion_r21203749
--- Diff:
core/src/main/scala/org/apache/spark/network/nio/ConnectionManager.scala ---
@@ -81,7 +81,8 @@ private[nio] class ConnectionManager(
Github user ryan-williams commented on the pull request:
https://github.com/apache/spark/pull/3524#issuecomment-65330854
OK, I'll re-work this and the JIRA to reflect all of this, thanks Matei
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3563#issuecomment-65331599
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3563#issuecomment-65331593
[Test build #24060 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24060/consoleFull)
for PR 3563 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3564#issuecomment-65332926
[Test build #24061 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24061/consoleFull)
for PR 3564 at commit