Github user tmyklebu commented on the pull request:
https://github.com/apache/spark/pull/493#issuecomment-41883891
Something wrong with Jenkins? Looks like it hit some sort of OOM condition?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/609#issuecomment-41883608
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/609#issuecomment-41883609
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/14603/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/608#issuecomment-41883581
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/475#issuecomment-41883582
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/475#issuecomment-41883584
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/14602/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/608#issuecomment-41883583
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/14601/
---
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/601#discussion_r12179554
--- Diff: docs/cluster-overview.md ---
@@ -118,21 +118,25 @@ If you are ever unclear where configuration options
are coming from. fine-graine
information ca
Github user nishkamravi2 commented on the pull request:
https://github.com/apache/spark/pull/492#issuecomment-41882773
--driver-java-options would only work through the spark-submit script. If
the developer invokes yarn.deploy.Client directly (as has been common practice
thus far), they woul
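The mechanics being discussed can be illustrated with a small shell sketch: --driver-java-options hands all driver JVM flags to spark-submit as a single quoted word. The helper function and flag values below are hypothetical stand-ins for illustration, not actual Spark code:

```shell
# show_argv is a hypothetical stand-in for spark-submit's argument parser;
# the JVM flags shown are placeholder values.
show_argv() { for a in "$@"; do echo "arg: $a"; done; }

# All driver JVM options travel as ONE shell word:
show_argv --driver-java-options "-Dlog4j.debug=true -Xmx2g"
# prints:
#   arg: --driver-java-options
#   arg: -Dlog4j.debug=true -Xmx2g
```

A program invoked directly (such as yarn.deploy.Client) never passes through this wrapper, which is why the flag only helps when spark-submit is used.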
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/607#issuecomment-41882183
I would think the right thing would be to enclose the entire executor in a
doAs (or save the UGI and keep using it, as this patch is doing, if the former
isn't straightforwa
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/609#issuecomment-41882158
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/609#issuecomment-41882150
Merged build triggered.
---
GitHub user pwendell opened a pull request:
https://github.com/apache/spark/pull/609
SPARK-1691: Support quoted arguments inside of spark-submit.
This is a fairly straightforward fix. The bug was reported by @vanzin and
the fix was proposed by @deanwampler and myself. Please take a
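The class of bug SPARK-1691 addresses can be sketched in plain shell: a wrapper script that expands its arguments as unquoted $@ re-splits a quoted argument on whitespace, while "$@" preserves it as one token. The functions below are illustrative stand-ins, not the actual spark-submit code:

```shell
# count_args reports how many arguments it receives; the two launchers
# are hypothetical stand-ins for a broken vs. fixed wrapper script.
count_args() { echo "$#"; }
broken_launcher() { count_args $@; }    # unquoted: re-splits on whitespace
fixed_launcher()  { count_args "$@"; }  # quoted: preserves each argument

broken_launcher "--conf with spaces"   # prints 3
fixed_launcher  "--conf with spaces"   # prints 1
```

The fix amounts to making sure every argument hand-off inside the launcher scripts uses the quoted form.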
Github user aarondav commented on the pull request:
https://github.com/apache/spark/pull/607#issuecomment-41882098
A second downside, following Patrick's idea, is that all FileSystems
created during the execution of the same task will share the same UGI and thus
may reuse the cached v
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/493#issuecomment-41881988
Build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/493#issuecomment-41881989
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/14600/
---
Github user aarondav commented on the pull request:
https://github.com/apache/spark/pull/607#issuecomment-41881386
One downside to closing the FileSystems after task completion is you lose
all benefits of having a FileSystem cache, which could still be significant if
the FileSystem in
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/475#issuecomment-41880801
Merged build started.
---
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/475#discussion_r12178932
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/tree/DecisionTree.scala ---
@@ -72,7 +74,28 @@ class DecisionTree (private val strategy: Strategy)
ex
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/475#issuecomment-41880798
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/608#issuecomment-41880197
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/608#issuecomment-41880210
Merged build started.
---
GitHub user liancheng opened a pull request:
https://github.com/apache/spark/pull/608
[SPARK-1678][SPARK-1679] In-memory compression bug fix and made compression
configurable, disabled by default
In-memory compression is now configurable in `SparkConf` by the
`spark.sql.inMemoryCom
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/599#discussion_r12178082
--- Diff: bagel/src/main/scala/org/apache/spark/bagel/package-info.java ---
@@ -0,0 +1,21 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
Github user tmyklebu commented on the pull request:
https://github.com/apache/spark/pull/493#issuecomment-41877934
Note that the new code doesn't actually work if you supply user and product
partitioners that have different numbers of partitions. However, it can be
straightforwardly
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/493#issuecomment-41877761
Build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/493#issuecomment-41877766
Build started.
---
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/607#issuecomment-41875941
I agree we need to make sure it doesn't do that, but if it's creating
filesystems specific to the UGI (supposedly the source of the leak), then closing them
after the task is finis
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/607#issuecomment-41875269
@tgravescs that could be a solution, but should make sure it doesn't
interfere with other tasks that are also using that filesystem. I.e. if task 1
runs and then calls th
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/607#issuecomment-41875084
From reading HDFS-3545, it sounds like we should just call
FileSystem.closeAllForUGI after the task runs. I'll talk to Daryn tomorrow
about it just to make sure.
Ha
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/607#discussion_r12175232
--- Diff: docs/configuration.md ---
@@ -679,6 +679,17 @@ Apart from these, the following properties are also
available, and may be useful
Set a speci
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/607#issuecomment-41872687
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/14599/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/607#issuecomment-41872686
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/591#issuecomment-41871148
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/14597/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/591#issuecomment-41871263
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/591#issuecomment-41871264
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/14598/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/591#issuecomment-41871146
Build finished.
---
Github user aarondav commented on the pull request:
https://github.com/apache/spark/pull/607#issuecomment-41870766
@tgraves @sryza @pwendell Please take a look if possible. I know next to
nothing about Hadoop security, but this problem bit us pretty hard recently. As
far as my underst
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/607#issuecomment-41870614
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/475#issuecomment-41870303
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/14596/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/475#issuecomment-41870302
Merged build finished. All automated tests passed.
---
GitHub user aarondav opened a pull request:
https://github.com/apache/spark/pull/607
SPARK-1676: Cache Hadoop UGIs by default to prevent FileSystem leak
UserGroupInformation objects (UGIs) are used for Hadoop security. A
relatively recent PR (#29) makes Spark always use UGIs when ex
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/607#issuecomment-41870618
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/606#issuecomment-41869561
Can one of the admins verify this patch?
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/591#issuecomment-41869562
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/591#issuecomment-41869567
Merged build started.
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/601#issuecomment-41869512
Hey @sryza thanks a bunch for this. Looking good. I built it locally and
read through the doc.
I noticed a few other issues with the doc that you can choose to ad
GitHub user xiliu82 opened a pull request:
https://github.com/apache/spark/pull/606
Add support for clock offset
We need to reset the clock to run Spark Streaming on old data for testing,
debugging, etc.
You can merge this pull request into a Git repository by running:
$ git p
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/591#issuecomment-41869340
Build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/591#issuecomment-41869350
Build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/605#issuecomment-41869295
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/605#issuecomment-41869298
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/14595/
---
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/601#discussion_r12173948
--- Diff: docs/running-on-yarn.md ---
@@ -12,12 +12,14 @@ was added to Spark in version 0.6.0, and improved in
0.7.0 and 0.8.0.
We need a consolidated Sp
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/601#discussion_r12173921
--- Diff: docs/running-on-yarn.md ---
@@ -12,12 +12,14 @@ was added to Spark in version 0.6.0, and improved in
0.7.0 and 0.8.0.
We need a consolidated Sp
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/601#discussion_r12173906
--- Diff: docs/running-on-yarn.md ---
@@ -12,12 +12,14 @@ was added to Spark in version 0.6.0, and improved in
0.7.0 and 0.8.0.
We need a consolidated Sp
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/524#issuecomment-41869016
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/14594/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/524#issuecomment-41869015
Merged build finished. All automated tests passed.
---
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/601#discussion_r12173726
--- Diff: docs/running-on-yarn.md ---
@@ -47,83 +49,42 @@ System Properties:
# Launching Spark on YARN
Ensure that HADOOP_CONF_DIR or YARN_CONF
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/601#discussion_r12173709
--- Diff: docs/running-on-yarn.md ---
@@ -12,12 +12,14 @@ was added to Spark in version 0.6.0, and improved in
0.7.0 and 0.8.0.
We need a consolidated Sp
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/601#discussion_r12173720
--- Diff: docs/running-on-yarn.md ---
@@ -12,12 +12,14 @@ was added to Spark in version 0.6.0, and improved in
0.7.0 and 0.8.0.
We need a consolidated Sp
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/601#discussion_r12173690
--- Diff: docs/cluster-overview.md ---
@@ -118,21 +118,25 @@ If you are ever unclear where configuration options
are coming from. fine-graine
information
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/601#discussion_r12173623
--- Diff: docs/cluster-overview.md ---
@@ -118,21 +118,25 @@ If you are ever unclear where configuration options
are coming from. fine-graine
information
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/475#issuecomment-41868295
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/475#issuecomment-41868301
Merged build started.
---
Github user manishamde commented on a diff in the pull request:
https://github.com/apache/spark/pull/475#discussion_r12173509
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/tree/configuration/Strategy.scala
---
@@ -35,6 +35,9 @@ import
org.apache.spark.mllib.tree.configura
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/603#issuecomment-41867972
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/603#issuecomment-41867974
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/14593/
---
Github user manishamde commented on a diff in the pull request:
https://github.com/apache/spark/pull/475#discussion_r12173284
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/tree/DecisionTree.scala ---
@@ -72,7 +74,28 @@ class DecisionTree (private val strategy: Strategy)
ex
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/605#issuecomment-41867507
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/605#issuecomment-41867498
Merged build triggered.
---
GitHub user aarondav opened a pull request:
https://github.com/apache/spark/pull/605
SPARK-1689 AppClient should indicate app is dead() when removed
Previously, we indicated disconnected(), which keeps the application in a
limbo state where it has no executors but thinks it will get
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/524#issuecomment-41866936
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/524#issuecomment-41866926
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/475#issuecomment-41866840
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/14592/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/475#issuecomment-41866839
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/604#issuecomment-41866556
Can one of the admins verify this patch?
---
GitHub user sarutak opened a pull request:
https://github.com/apache/spark/pull/604
Modified BlockFetchIterator to handle fetch failure
I think that when an Executor holding block(s) to be fetched is lost, the fetch
from that Executor will fail and a re-fetch from another Executor will oc
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/571#issuecomment-41866376
No, I have strong feelings against "org.apache.spark.examples.streaming.*".
This is entirely inconsistent with the rest of the API.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/603#issuecomment-41865620
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/603#issuecomment-41865606
Merged build triggered.
---
GitHub user andrewor14 opened a pull request:
https://github.com/apache/spark/pull/603
[SPARK-1688] PySpark throws unhelpful exception when pyspark is not found
Currently, if pyspark cannot be loaded for any reason, Spark throws a
random `java.io.EOFException` when trying to read fr
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/539#issuecomment-41864970
Ah, I see what you mean. I don't see a clean way to make "--name" work in
client mode; SparkSubmit could call System.setProperty("spark.app.name"), but
that (i) looks hacky
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/482#issuecomment-41864569
test this please.
By the way, you can ignore Travis, as it is broken right now; Jenkins is the
source of truth.
---
Github user manishamde commented on the pull request:
https://github.com/apache/spark/pull/475#issuecomment-41864396
@mengxr I added a unit test.
I also changed the default maxBins setting to 100 in the example. People
will use the examples to test the tree algorithms locally
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/482#issuecomment-41864406
Can you re-test this? It passed the unit tests locally, but the CI
reports a timeout or something.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/475#issuecomment-41864283
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/475#issuecomment-41864274
Merged build triggered.
---
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/492#issuecomment-41862363
I think this is (now) already available as "--driver-java-options" (works
both for yarn-client and yarn-cluster modes).
---
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/561#discussion_r12170141
--- Diff:
yarn/common/src/main/scala/org/apache/spark/deploy/yarn/ClientBase.scala ---
@@ -328,22 +327,22 @@ trait ClientBase extends Logging {
// If
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/561#discussion_r12170125
--- Diff:
yarn/common/src/main/scala/org/apache/spark/deploy/yarn/ClientBase.scala ---
@@ -328,22 +327,22 @@ trait ClientBase extends Logging {
// If
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/601#discussion_r12169839
--- Diff: docs/running-on-yarn.md ---
@@ -47,83 +49,42 @@ System Properties:
# Launching Spark on YARN
Ensure that HADOOP_CONF_DIR or YARN_CONF_D
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/601#discussion_r12169776
--- Diff: docs/cluster-overview.md ---
@@ -118,21 +118,25 @@ If you are ever unclear where configuration options
are coming from. fine-graine
information c
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/30#issuecomment-41853864
The command I used is: mvn -Dyarn.version=2.4.0 -Dhadoop.version=2.4.0
-Pyarn package -DskipTests
I'll try doing a clean build with the 2.2.0 version to see
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/602#issuecomment-41852339
Merged build finished. All automated tests passed.
---
Github user pierre-borckmans commented on the pull request:
https://github.com/apache/spark/pull/600#issuecomment-41852359
One last idea.
How about this sbt plugin?
https://github.com/sbt/sbt-buildinfo
It could definitely do the trick.
We would still need something equ
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/602#issuecomment-41852340
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/14591/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/602#issuecomment-41848807
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/14590/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/602#issuecomment-41848806
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/602#issuecomment-41847778
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/602#issuecomment-41847763
Merged build triggered.
---