GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/4043
SPARK-5217 Spark UI should report pending stages during job execution on
AllStagesPage.
![pending_stages](https://cloud.githubusercontent.com/assets/992952/5738019/70ee913e-9c0d-11e4-9970
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/3707#issuecomment-67115834
Hi @brennonyork
I was trying this patch out. It seemed good overall. I felt it would be good
to print some info messages that indicate what is happening
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/3341#issuecomment-63939765
Looks good.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/3402
[SPARK-4377] Fixed serialization issue by switching to the Akka-provided
serializer.
... - there is no way around this for deserializing actorRef(s).
You can merge this pull request into a Git
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/3402#issuecomment-63950396
@JoshRosen Please take a look and see if this fix works for us.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/3402#issuecomment-63951951
Even after this fix, someone can run into the same errors if they, say, build
Spark with Scala 2.10, run the master first, and then try to recover it with
Spark built
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/3364
[BUILD] Mvn with zinc support; this script will auto-download zinc and set it up
for you.
You just have to run Maven as mvn/mvn with the other params as usual.
for example
mvn/mvn clean
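The PR itself is not shown here, so the following is only a minimal sketch of what a zinc-aware mvn/mvn wrapper like the one described could look like. The port, download directory, and function names are assumptions, and the final delegation is echoed rather than executed so the sketch stays self-contained:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a zinc-aware Maven wrapper (mvn/mvn); not the
# PR's actual script. Assumes zinc's default nailgun port 3030.
ZINC_PORT="${ZINC_PORT:-3030}"
ZINC_DIR="${ZINC_DIR:-/tmp/zinc}"

zinc_running() {
  # Probe the local zinc port; succeeds only if a server is listening.
  (exec 3<>"/dev/tcp/127.0.0.1/${ZINC_PORT}") 2>/dev/null
}

run_mvn() {
  if ! zinc_running; then
    # A full implementation would download zinc into $ZINC_DIR if it is
    # missing and start it here before compiling.
    echo "zinc not running: would download/start it in ${ZINC_DIR}" >&2
  fi
  # A real wrapper would exec the actual Maven; here we just echo the
  # command that would be run, with the user's arguments unchanged.
  echo "mvn $*"
}

run_mvn clean
```

The appeal of this shape is that users keep their usual Maven invocation and only the wrapper path changes.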
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/3364#issuecomment-63624974
@pwendell Please take a look.
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/3339#discussion_r20502874
--- Diff: docs/building-spark.md ---
@@ -113,7 +113,17 @@ mvn -Pyarn -Phive -Phive-thriftserver-0.12.0
-Phadoop-2.4 -Dhadoop.version=2.4.0
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/3339#issuecomment-63466308
Jenkins, retest this please.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/3339#issuecomment-63602415
You are absolutely right about it. I just did not think of it that way.
Github user ScrapCodes closed the pull request at:
https://github.com/apache/spark/pull/3339
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/3282#issuecomment-63270880
Looks good.
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/3310
SPARK-4445, Don't display storage level in toDebugString unless RDD is
persisted.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/3339
Corrected build instructions for scala 2.11 in building-spark.md.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/ScrapCodes/spark-1 patch-3
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/3282#issuecomment-63261469
You can also relocate the algebird example back to example/src.
Github user ScrapCodes closed the pull request at:
https://github.com/apache/spark/pull/3201
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/3201#issuecomment-63022589
So basically our GenerateMimaIgnore adds all protected methods to the
mima ignores automatically, so we are not checking binary compatibility on them
anyway. The reason we
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/3239#discussion_r20347028
--- Diff: docs/building-spark.md ---
@@ -113,9 +113,9 @@ mvn -Pyarn -Phive -Phive-thriftserver-0.12.0
-Phadoop-2.4 -Dhadoop.version=2.4.0
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/3239#discussion_r20348481
--- Diff: docs/building-spark.md ---
@@ -113,9 +113,9 @@ mvn -Pyarn -Phive -Phive-thriftserver-0.12.0
-Phadoop-2.4 -Dhadoop.version=2.4.0
Github user ScrapCodes closed the pull request at:
https://github.com/apache/spark/pull/3111
Github user ScrapCodes closed the pull request at:
https://github.com/apache/spark/pull/3181
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/3228#issuecomment-62847896
Currently there is no way around passing -Pscala-2.10/-Pscala-2.11 for
Maven users. We can either document this, or I am not sure if there is another
way
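As an illustration of the documentation being suggested, the helper below only composes the command line a Maven user would run; the profile names follow the -Pscala-2.10 / -Pscala-2.11 convention from the comment, while the helper itself and the extra flags are assumptions for the example:

```shell
# Hypothetical helper: build the mvn command line for a chosen Scala
# version, using the -Pscala-<version> profile naming from the thread.
scala_build_cmd() {
  local scala_version="$1"; shift
  echo "mvn -Pscala-${scala_version} $*"
}

scala_build_cmd 2.11 -DskipTests clean package
```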
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/3239#issuecomment-62852292
@sryza I was assuming that we create a profile for kafka only, not the other
submodules.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/3201#issuecomment-62516494
I really don't know why this was reported against the PR in question, but
doing a cleanup of lib_managed before running mima helped when I tested.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/3201#issuecomment-62520405
No, not yet. I just thought it fixed things, but it did not; it seemed to
work randomly. Sorry about that.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/3201#issuecomment-62522535
One question: do we expect `protected def getPartitions: Array[Partition]`
of `RDD.scala` to be excluded from mima checks? Not sure why it is landing in
mima
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/3201#issuecomment-62523657
If this is expected, then it is okay to merge this patch; if not, then I have
to fix these bugs.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/771#issuecomment-62524905
Did I miss something?
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/3201#issuecomment-62507722
This should fix it, though I haven't verified it; it takes long to build and so on.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2947#issuecomment-62309276
Looks good then.
On Nov 9, 2014 8:24 AM, Adam Pingel notificati...@github.com wrote:
Algebird 0.8.1 for Scala 2.11 is on the central repo:
http
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/3181
[WIP] Scala 2.11
Trying to test things on jenkins.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/ScrapCodes/spark-1 scala-2.11-prashant
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/3111#issuecomment-62342768
Closing this in favour of #3159
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2947#issuecomment-62245429
I could not find an artifact for Scala 2.11 (when I last checked). If they are
going to be available soon, then we can hold this until that happens.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/3111#issuecomment-61952127
I really have no clue how to solve the jline dependency issue.
We use jline 0.9.94 (jline 1) in hive-cli and hive-thriftserver, and the Scala
REPL uses jline 2.12
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/3111#issuecomment-62108614
The only workaround I can think of is shading jline inside hive-cli the way
we do it for akka.
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/3111
[WIP] Scala 2.11 support.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/ScrapCodes/spark-1 scala-2.11-full
Alternatively you can review
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/3111#issuecomment-61802259
@pwendell take a look, I am planning to do the docs in a different PR.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/3111#issuecomment-61804569
To do: fix the jline dependency issue for Scala 2.11, maybe by creating
different build paths
Github user ScrapCodes closed the pull request at:
https://github.com/apache/spark/pull/2615
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2615#issuecomment-60885128
Jenkins, retest this please. (I might get lucky this time; looks like the
compilation failure is random.)
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2615#issuecomment-60739486
Jenkins, retest this please.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2615#issuecomment-60562052
@pwendell I tried your reproducer after changing a few things, so I am not
sure whether I fixed it accidentally or I could not reproduce at all.
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/2959
SPARK-3962 Marked scope as provided for external projects.
Somehow the maven-shade-plugin gets stuck in an infinite loop of creating the effective pom.
You can merge this pull request into a Git repository
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2959#issuecomment-60582511
I was still trying to debug what is causing it. Any help would be
appreciated.
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/2846#discussion_r19374351
--- Diff: project/SparkBuild.scala ---
@@ -111,7 +110,7 @@ object SparkBuild extends PomBuild {
lazy val MavenCompile = config(m2r) extend(Compile
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/2615#discussion_r19324374
--- Diff: conf/spark-env.sh.template ---
@@ -3,6 +3,9 @@
# This file is sourced when running various Spark programs.
# Copy it as spark-env.sh
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/2615#discussion_r19324476
--- Diff: bin/compute-classpath.sh ---
@@ -36,6 +34,18 @@ else
CLASSPATH=$CLASSPATH:$FWDIR/conf
fi
+if [ -z $SCALA_VERSION
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2615#issuecomment-60350175
Jenkins, retest this please.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2615#issuecomment-60350893
Not sure why the compiler is consistently crashing here. The build
compilation (and tests) pass locally for both Scala 2.10 and Scala 2.11.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/771#issuecomment-60352013
Jenkins, retest this please.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2615#issuecomment-60352347
So now everything went fine with the same flags that Jenkins used and failed
with. Looks like a Jenkins env issue?
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2878#issuecomment-60352593
It already has a warning
https://github.com/apache/spark/commit/9f7a095184d6c7a9b1bbac55efcc3d878f876768.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2878#issuecomment-60352745
But that is related to the client itself for spark-submit.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2615#issuecomment-60357463
Yes it does.
On Oct 24, 2014 1:24 PM, Patrick Wendell notificati...@github.com wrote:
Jenkins, test this please. @ScrapCodes https://github.com/ScrapCodes
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/2615#discussion_r19265089
--- Diff: dev/change-version-to-2.10.sh ---
@@ -0,0 +1,20 @@
+#!/usr/bin/env bash
+
+#
+# Licensed to the Apache Software Foundation (ASF
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/2615#discussion_r19322477
--- Diff: conf/spark-env.sh.template ---
@@ -3,6 +3,9 @@
# This file is sourced when running various Spark programs.
# Copy it as spark-env.sh
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/2615#discussion_r19322483
--- Diff: dev/change-version-to-2.11.sh ---
@@ -0,0 +1,20 @@
+#!/usr/bin/env bash
+
+#
+# Licensed to the Apache Software Foundation (ASF
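The dev/change-version-to-2.11.sh script under review is only shown as a truncated diff above, so here is a rough sketch of what a script with that name plausibly does: rewrite the Scala binary-version suffix in the poms. The sed expression and the sample pom content are illustrative assumptions, demonstrated on a temporary file rather than a real source tree:

```shell
# Sketch: swap the _2.10 artifact suffix for _2.11, as the script's
# name implies. Shown against a throwaway pom fragment.
workdir="$(mktemp -d)"
cat > "${workdir}/pom.xml" <<'EOF'
<artifactId>spark-core_2.10</artifactId>
EOF

# Rewrite the Scala suffix on artifactId lines in place (GNU sed -i).
sed -i -e 's/\(artifactId.*\)_2\.10/\1_2.11/g' "${workdir}/pom.xml"

cat "${workdir}/pom.xml"
```

A real script would apply the same substitution across every pom.xml in the repository, e.g. via `find . -name pom.xml`.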
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/2921
SPARK-3812 Build changes to publish effective pom.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/ScrapCodes/spark-1 build-changes
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2673#issuecomment-60041268
Not sure which is better! Creating an empty project with just a pom file
in it, or depending on a random jar from Maven Central? I prefer the first approach
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/771#discussion_r19197267
--- Diff:
core/src/main/scala/org/apache/spark/deploy/master/PersistenceEngine.scala ---
@@ -26,35 +30,58 @@ package org.apache.spark.deploy.master
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2615#issuecomment-60058177
Hey Jason,
Thanks for offering to help. In short, I need help from scala-internals so
that it is possible to customize user-supplied wrappers. I tried my bit
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2615#issuecomment-60193446
Sure @retronym and @som-snytt, I will give it another try soon.
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/2615#discussion_r19259888
--- Diff: project/SparkBuild.scala ---
@@ -90,13 +90,21 @@ object SparkBuild extends PomBuild {
profiles
}
- override val
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/2615#discussion_r19259910
--- Diff: project/SparkBuild.scala ---
@@ -110,7 +118,7 @@ object SparkBuild extends PomBuild {
lazy val MavenCompile = config(m2r) extend(Compile
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/2615#discussion_r19260085
--- Diff: bin/compute-classpath.sh ---
@@ -132,6 +138,9 @@ if [[ $SPARK_TESTING == 1 ]]; then
CLASSPATH=$CLASSPATH:$FWDIR/sql/catalyst/target/scala
Github user ScrapCodes closed the pull request at:
https://github.com/apache/spark/pull/2079
Github user ScrapCodes closed the pull request at:
https://github.com/apache/spark/pull/1905
Github user ScrapCodes closed the pull request at:
https://github.com/apache/spark/pull/2357
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/2615#discussion_r19260306
--- Diff: bin/compute-classpath.sh ---
@@ -20,7 +20,7 @@
# This script computes Spark's classpath and prints it to stdout; it's
used by both the run
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/2615#discussion_r19260373
--- Diff: dev/change-version-to-2.11.sh ---
@@ -0,0 +1,20 @@
+#!/usr/bin/env bash
+
+#
+# Licensed to the Apache Software Foundation (ASF
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/2615#discussion_r19260463
--- Diff: bin/compute-classpath.sh ---
@@ -20,7 +20,7 @@
# This script computes Spark's classpath and prints it to stdout; it's
used by both the run
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/2615#discussion_r19260520
--- Diff: bin/compute-classpath.sh ---
@@ -20,7 +20,7 @@
# This script computes Spark's classpath and prints it to stdout; it's
used by both the run
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2520#issuecomment-59887766
Makes sense to me, this is the only way to solve this problem. I am okay
with this patch.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2673#issuecomment-59889967
You were right @pwendell. I was just imagining things.
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/2673#discussion_r19134094
--- Diff: pom.xml ---
@@ -248,7 +248,19 @@
/snapshots
/pluginRepository
/pluginRepositories
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/2673#discussion_r19134114
--- Diff: pom.xml ---
@@ -994,6 +1006,34 @@
plugins
plugin
groupIdorg.apache.maven.plugins/groupId
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2673#issuecomment-59907791
Yeah I verified the resultant dependency tree as shown in gists above.
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/2673#discussion_r19140246
--- Diff: pom.xml ---
@@ -248,7 +248,19 @@
/snapshots
/pluginRepository
/pluginRepositories
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/2877
[BUILD] Fixed resolver for scalastyle plugin and upgrade sbt version.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/ScrapCodes/spark-1
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/2878
[SPARK-4032] Deprecate YARN alpha support in Spark 1.2
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/ScrapCodes/spark-1
SPARK-4032
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2673#issuecomment-59736352
This is the gist of the dependency tree for artifacts published by this patch.
https://gist.github.com/ScrapCodes/a5857e57d828b4b787ff
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2615#issuecomment-59761949
Hey @pwendell, I have updated this patch to include the effective pom changes
so that you can try it out. Also, I think this is ready for review!
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/2615#discussion_r19086365
--- Diff: core/pom.xml ---
@@ -264,6 +284,10 @@
scopetest/scope
/dependency
dependency
+ groupIdcom.twitter/groupId
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2673#issuecomment-59469813
We can definitely install other modules; I am afraid the resultant
(effective) pom(s) will carry a reference to the parent pom. Let me try that
out.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/771#issuecomment-59493430
@aarondav So you are asking for another `read()` method, which end users
can override, so that we can have a default implementation for
`readPersistedData
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2673#issuecomment-59331286
Even if we do this trick and shade something in all artifacts, what about
spark-parent? Since we don't build a jar there, the shading plugin throws NPEs.
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/2803#discussion_r18876515
--- Diff: project/MimaExcludes.scala ---
@@ -50,7 +50,11 @@ object MimaExcludes
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2784#issuecomment-59163637
Thanks, LGTM.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2784#issuecomment-59163981
Minor: your PR title looks misleading! :)
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2782#issuecomment-59008076
Hey Patrick, you are right about that. We can make TaskContext an interface
if we only allow TaskContextHelper.get() instead of TaskContext.get(). And then
maybe I
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2784#issuecomment-59013875
Hey Aaron,
I increased the interval because it is just noise anyway! We don't intend
to use Akka's failure detector, because we have our own heartbeat tracking
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2795#issuecomment-59021297
Jenkins, test this please.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2795#issuecomment-59021625
Nice fix, LGTM. This is required by Scala 2.11 too. @pwendell, take a look?
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/2520#discussion_r18821425
--- Diff: project/SparkBuild.scala ---
@@ -170,6 +178,24 @@ object SparkBuild extends PomBuild {
}
+object YARNCommon {
+ lazy
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/771#issuecomment-59026836
I just read your proposal; somehow I had missed this update. Forgive me for
the late reply; your proposal appears to be good. I will take a stab at it
soon
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2796#issuecomment-59027255
Yeah, docs should not be in gitignore. LGTM. (Running Jenkins appears to be
wasteful.)
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2799#issuecomment-59028558
Jenkins, test this please.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2799#issuecomment-59028701
LGTM.
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/2520#discussion_r18822347
--- Diff: project/SparkBuild.scala ---
@@ -170,6 +178,24 @@ object SparkBuild extends PomBuild {
}
+object YARNCommon {
+ lazy