Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/125#issuecomment-37379933
@pwendell thoughts?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/92#issuecomment-37380473
Thanks, merged.
---
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/86#issuecomment-37381283
Updated patch takes review comments from @mridulm and @pwendell into
account.
spark.max.cores is now correctly handled. Jars passed in with --more-jars
are not
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10507644
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -130,6 +130,16 @@ class SparkContext(
val isLocal = (master == "local" ||
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/86#issuecomment-37382160
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/86#issuecomment-37382161
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/86#issuecomment-37382223
One or more automated tests failed
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13127/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/86#issuecomment-3738
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/120#issuecomment-37385546
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/120#issuecomment-37389760
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13128/
---
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/113#issuecomment-37394958
@pwendell Yes that's the thing. While the repo was down I could still build
the whole project from an empty repo. For artifacts like paho, where it's not
found in the 3
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/113#issuecomment-37405632
Yes the build failed for me with the error I put above.
Note that this pr would need to have the maven build updated too.
---
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/120#discussion_r10516035
--- Diff: docs/running-on-yarn.md ---
@@ -60,11 +60,11 @@ The command to launch the Spark application on the
cluster is as follows:
--jar
Github user qqsun8819 commented on the pull request:
https://github.com/apache/spark/pull/110#issuecomment-37407753
modify patch according to @aarondav 's review
---
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/120#issuecomment-37411628
I'm getting a compile error building this against hadoop 0.23:
[ERROR]
yarn/alpha/src/main/scala/org/apache/spark/deploy/yarn/ExecutorLauncher.scala:231:
Folks,
I just want to point something out...
I haven't had time yet to sort it out and to think it through enough to give a
valuable, rigorous explanation of -- even though, intuitively, I feel they are
a lot === need spark people or time to move forward.
But here is the thing regarding *flatMap*.
Actually, it
On Wed, Mar 12, 2014 at 3:06 PM, andy petrella andy.petre...@gmail.comwrote:
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/120#issuecomment-37415976
I fixed the above compile error and tried to run but the executors return
the following error:
Unknown/unsupported param List(--num-executor, 2)
Usage:
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/91#discussion_r10521879
--- Diff: core/pom.xml ---
@@ -17,274 +17,260 @@
--
<project xmlns="http://maven.apache.org/POM/4.0.0">
GitHub user tgravescs opened a pull request:
https://github.com/apache/spark/pull/127
[SPARK-1232] Fix the hadoop 0.23 yarn build
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/tgravescs/spark SPARK-1232
Alternatively you can
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/127#issuecomment-37423955
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/127#issuecomment-37423957
Merged build started.
---
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/91#discussion_r10524194
--- Diff: core/pom.xml ---
@@ -17,274 +17,260 @@
--
<project xmlns="http://maven.apache.org/POM/4.0.0">
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/127#issuecomment-37426702
Oh, you beat me to it. +1.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/127#issuecomment-37431117
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/128#issuecomment-37431342
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/129#issuecomment-37432091
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/129#issuecomment-37432092
Merged build started.
---
GitHub user tgravescs opened a pull request:
https://github.com/apache/spark/pull/129
[SPARK-1233] Fix running hadoop 0.23 due to java.lang.NoSuchFieldException:
DEFAULT_M...
...APREDUCE_APPLICATION_CLASSPATH
You can merge this pull request into a Git repository by running:
$
Github user aarondav commented on the pull request:
https://github.com/apache/spark/pull/110#issuecomment-37432422
This looks good to me, but I will leave this PR for a little longer in case
anyone wants to raise questions about changing the behavior here.
---
Should we try to deprecate these types of configs for 1.0.0? We can start
by accepting both and giving a warning if you use the old one, and then
actually remove them in the next minor release. I think
spark.speculation.enabled=true is better than spark.speculation=true,
and if we decide to use
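The migration path proposed above (accept both the old and new key, warn on the old one, remove it later) can be sketched with a small lookup helper. This is a hypothetical illustration in plain Python, not Spark code; only the key names `spark.speculation` / `spark.speculation.enabled` come from the comment.

```python
import warnings

# Hypothetical map of deprecated config keys to their replacements,
# following the renaming proposed in the comment above.
DEPRECATED_KEYS = {
    "spark.speculation": "spark.speculation.enabled",
}

def get_conf(conf: dict, key: str):
    """Look up `key`, falling back to its deprecated alias with a warning."""
    if key in conf:
        return conf[key]
    # During the transition, accept the old name but nudge users forward.
    for old, new in DEPRECATED_KEYS.items():
        if new == key and old in conf:
            warnings.warn(f"{old} is deprecated; use {new} instead",
                          DeprecationWarning)
            return conf[old]
    return None

# A config still using the old flat key keeps working:
print(get_conf({"spark.speculation": "true"}, "spark.speculation.enabled"))
```

Removing an entry from `DEPRECATED_KEYS` in a later release then turns the old key into a plain miss, which is the "actually remove them" step.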
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/129#issuecomment-37434030
+1. Sorry again.
---
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10527985
--- Diff: docs/configuration.md ---
@@ -393,6 +394,16 @@ Apart from these, the following properties are also
available, and may be useful
/td
/tr
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/119#issuecomment-37434570
Hey @sryza thanks for the review. I responded to your comment and also
added a unit test and a doc change to clarify the behavior wrt threads.
---
Github user hsaputra commented on the pull request:
https://github.com/apache/spark/pull/125#issuecomment-37435981
Hi @ScrapCodes,
As @aarondav mentioned, hopefully we do not need to have the RAT docs and
jars in the Spark source.
I missed the part about why we do not want
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10528875
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -130,6 +130,16 @@ class SparkContext(
val isLocal = (master == "local" ||
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/119#issuecomment-37437138
@mateiz do you mind taking a look at this? Also, how would you feel about
turning this on by default? I think in pretty much every case we'd want the
jars added to be
Github user aarondav commented on the pull request:
https://github.com/apache/spark/pull/127#issuecomment-37437658
This was removed by accident in #91. Looks good to me.
---
Github user aarondav commented on the pull request:
https://github.com/apache/spark/pull/129#issuecomment-37438275
Looks like this was introduced in #102. Looks good to me.
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/127#issuecomment-37438389
Aha! I _knew_ that I added this originally. I'll merge this.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/127
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/119#issuecomment-37439012
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/129#issuecomment-37438972
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13131/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/128#issuecomment-37438959
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13130/
---
If your project
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/128#issuecomment-37438988
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/119#issuecomment-37439159
One or more automated tests failed
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13133/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/119#issuecomment-37439158
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/128#issuecomment-37438986
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/128#issuecomment-37438958
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/119#issuecomment-37439013
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/128#issuecomment-37440253
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/130#issuecomment-37440531
Can one of the admins verify this patch?
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/125#issuecomment-37441481
Right now we manually run RAT before making releases - but the proposal
here was to run it every time a PR is created. That will be much better since
we will catch
Github user manishamde commented on a diff in the pull request:
https://github.com/apache/spark/pull/79#discussion_r10531612
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/tree/DecisionTree.scala ---
@@ -0,0 +1,1055 @@
+/*
+ * Licensed to the Apache Software
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10531841
--- Diff: core/src/test/scala/org/apache/spark/TestUtils.scala ---
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10531878
--- Diff: core/src/test/scala/org/apache/spark/TestUtils.scala ---
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10531950
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -767,6 +781,20 @@ class SparkContext(
case _ =>
path
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10532081
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -130,6 +130,18 @@ class SparkContext(
val isLocal = (master == "local" ||
Github user aarondav commented on the pull request:
https://github.com/apache/spark/pull/129#issuecomment-37445337
Merged into master. Thanks!
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/119#issuecomment-37446754
Merged build triggered.
---
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/126#discussion_r10533790
--- Diff: core/src/main/scala/org/apache/spark/Dependency.scala ---
@@ -49,9 +49,28 @@ class ShuffleDependency[K, V](
@transient rdd: RDD[_ <: Product2[K,
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/119#issuecomment-37450691
About turning this on by default, I'm afraid it will mess up uses of Spark
inside a servlet container or similar. Maybe we can keep it off at first.
---
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10535551
--- Diff: core/src/test/scala/org/apache/spark/TestUtils.scala ---
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/119#issuecomment-37453908
Merged build finished.
---
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10536660
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -130,6 +130,18 @@ class SparkContext(
val isLocal = (master == "local" ||
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/128#issuecomment-37459180
The Jenkins failure seems unrelated to this change. Can someone kick it again
perhaps?
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/117#issuecomment-37461163
Merged build started.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/44
---
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/126#discussion_r10540258
--- Diff: core/src/main/scala/org/apache/spark/MapOutputTracker.scala ---
@@ -50,23 +54,26 @@ private[spark] class
MapOutputTrackerMasterActor(tracker:
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/126#discussion_r10540325
--- Diff: core/src/main/scala/org/apache/spark/ContextCleaner.scala ---
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/126#discussion_r10540432
--- Diff: core/src/main/scala/org/apache/spark/ContextCleaner.scala ---
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/126#discussion_r10540569
--- Diff: core/src/main/scala/org/apache/spark/MapOutputTracker.scala ---
@@ -181,15 +186,49 @@ private[spark] class MapOutputTracker(conf:
SparkConf)
Github user yaoshengzhe commented on a diff in the pull request:
https://github.com/apache/spark/pull/120#discussion_r10540808
--- Diff:
yarn/common/src/main/scala/org/apache/spark/deploy/yarn/ClientArguments.scala
---
@@ -67,24 +67,39 @@ class ClientArguments(val args:
Hi,
I am new to Spark and would like to contribute. I wanted to assign a task
to myself, but it looks like I do not have permission. What is the process if I
want to work on a JIRA?
Thanks
Sujeet
Github user yaoshengzhe commented on the pull request:
https://github.com/apache/spark/pull/120#issuecomment-37468388
From the code, it looks like this patch is only for renaming. Is it
really important to use YARN's jargon instead of master/worker? I think
master/worker is clear
+1. I agree to keep the old ones only for backward-compatibility purposes.
On Wed, Mar 12, 2014 at 12:38 PM, Evan Chan e...@ooyala.com wrote:
+1.
Not just for Typesafe Config, but if we want to consider hierarchical
configs like JSON rather than flat key mappings, it is necessary. It
is
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/117#issuecomment-37470402
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/117#issuecomment-37470409
One or more automated tests failed
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13136/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/128#issuecomment-37470401
Merged build finished.
---
One solution for typesafe config is to use
spark.speculation = true
Typesafe will recognize the key as a string rather than a path, so the name
will actually be "spark.speculation", so you need to handle this
contingency when passing the config options to spark (stripping the
quotes from the
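The quote-stripping contingency described above can be sketched as a tiny normalization pass. Typesafe Config treats a quoted key such as "spark.speculation" as one literal key (the dot is not a path separator), so the surrounding quotes survive into the key name; the helper below is a hypothetical plain-Python sketch of removing them, not Typesafe Config's API.

```python
def normalize_key(key: str) -> str:
    """Strip the surrounding double quotes that a quoted HOCON key keeps,
    so '"spark.speculation"' becomes 'spark.speculation'."""
    if len(key) >= 2 and key.startswith('"') and key.endswith('"'):
        return key[1:-1]
    return key

# Keys as they might come out of a parsed config, one quoted, one plain:
raw = {'"spark.speculation"': "true", "spark.cores.max": "4"}
cleaned = {normalize_key(k): v for k, v in raw.items()}
```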
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/126#discussion_r10543160
--- Diff: core/src/main/scala/org/apache/spark/ContextCleaner.scala ---
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/126#discussion_r10543349
--- Diff: core/src/main/scala/org/apache/spark/MapOutputTracker.scala ---
@@ -181,15 +186,49 @@ private[spark] class MapOutputTracker(conf:
SparkConf)
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/129#issuecomment-37474387
@tgravescs mind closing this? Somehow the auto-close didn't work
---
Github user yaoshengzhe commented on the pull request:
https://github.com/apache/spark/pull/126#issuecomment-37475144
@Tathagata, I strongly disagree with putting cleanup logic in finalize.
Finalizers are unpredictable, often dangerous, and generally unnecessary, e.g.
there is a severe
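The alternative to finalizers being argued for here is explicit, reference-triggered cleanup (the approach a cleaner thread with weak references takes) rather than `finalize()`. As a language-neutral sketch, Python's `weakref.finalize` shows the same idea: cleanup fires when the object becomes unreachable or on demand, and runs exactly once. All names below are hypothetical stand-ins, not Spark's ContextCleaner.

```python
import weakref

class ShuffleState:
    """Hypothetical stand-in for per-shuffle state that must be cleaned up."""
    def __init__(self, shuffle_id: int):
        self.shuffle_id = shuffle_id

cleaned = []

def register_for_cleanup(obj: ShuffleState):
    # weakref.finalize runs the callback once the object is collected,
    # without relying on __del__/finalize() semantics.
    return weakref.finalize(obj, cleaned.append, obj.shuffle_id)

state = ShuffleState(7)
finalizer = register_for_cleanup(state)
del state     # last strong reference gone; CPython fires the callback here
finalizer()   # explicit call is safe: a finalizer only ever runs once
```

Unlike a Java finalizer, this gives a deterministic hook that can also be invoked eagerly, which is why cleaner-thread designs prefer it.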
Someone with proper karma needs to add you to the contributor list in the
JIRA.
Cos
On Wed, Mar 12, 2014 at 02:19PM, Sujeet Varakhedi wrote:
In the meantime, you don't need to wait for the task to be assigned to you
to start work. If you're worried about someone else picking it up, you can
drop a short comment on the JIRA saying that you're working on it.
On Wed, Mar 12, 2014 at 3:25 PM, Konstantin Boudnik c...@apache.org wrote:
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/120#issuecomment-37477894
The goal isn't to replace Spark's names with YARN's names, but rather to be
consistent with the terminology used in the rest of Spark. Master refers to
the service in
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/97#issuecomment-37479507
takeOrdered should always return the smallest elements according to the
ordering, so it's not the same as top. For example takeOrdered(2) on [1,2,3,4]
should return [1,2].
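The semantic distinction drawn above - `takeOrdered` returns the smallest n elements under the ordering, while `top` returns the largest - can be mimicked in plain Python with `heapq` (an illustration of the contract only, not Spark's implementation):

```python
import heapq

def take_ordered(xs, n):
    """Smallest n elements in ascending order (takeOrdered semantics)."""
    return heapq.nsmallest(n, xs)

def top(xs, n):
    """Largest n elements in descending order (top semantics)."""
    return heapq.nlargest(n, xs)

data = [1, 2, 3, 4]
print(take_ordered(data, 2))  # [1, 2]
print(top(data, 2))           # [4, 3]
```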
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/93#issuecomment-37479585
Looks good, thanks! I'll merge this.
---
Github user aarondav commented on a diff in the pull request:
https://github.com/apache/spark/pull/101#discussion_r10546476
--- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala ---
@@ -165,12 +174,29 @@ class HadoopRDD[K, V](
override def
Github user aarondav commented on a diff in the pull request:
https://github.com/apache/spark/pull/101#discussion_r10546480
--- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala ---
@@ -165,12 +174,29 @@ class HadoopRDD[K, V](
override def
Github user aarondav commented on a diff in the pull request:
https://github.com/apache/spark/pull/101#discussion_r10546472
--- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala ---
@@ -165,12 +174,29 @@ class HadoopRDD[K, V](
override def
Github user aarondav commented on a diff in the pull request:
https://github.com/apache/spark/pull/101#discussion_r10546478
--- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala ---
@@ -165,12 +174,29 @@ class HadoopRDD[K, V](
override def
Github user aarondav commented on a diff in the pull request:
https://github.com/apache/spark/pull/101#discussion_r10546468
--- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala ---
@@ -34,6 +39,7 @@ import org.apache.spark.broadcast.Broadcast
import
Github user aarondav commented on a diff in the pull request:
https://github.com/apache/spark/pull/101#discussion_r10546475
--- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala ---
@@ -165,12 +174,29 @@ class HadoopRDD[K, V](
override def
I think Kevin's point is somewhat different: there's no question that Sbt can
be integrated into the Maven ecosystem - mostly the repositories and artifact
management, of course.
However, Sbt is a niche build tool and is unlikely to be widely supported by
engineering teams or IT organizations. Sbt
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/93
---
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/113#issuecomment-37482277
Ok I pushed a new version with Maven build changes as well. This is ready
to be merged from my perspective.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/113#issuecomment-37482660
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/113#issuecomment-37482662
Merged build started.
---