Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/180#issuecomment-35968583
Hmm okay I'm still wondering if there is a path for in-memory data where
two copies are created:
Cache manager runs:
```
val elements
```
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/180#discussion_r10022071
--- Diff: core/src/main/scala/org/apache/spark/CacheManager.scala ---
@@ -71,10 +71,21 @@ private[spark] class CacheManager(blockManager
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/180#discussion_r10021460
--- Diff: core/src/main/scala/org/apache/spark/CacheManager.scala ---
@@ -71,10 +71,21 @@ private[spark] class CacheManager(blockManager
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/180#issuecomment-35966452
@kellrott - ah I see, this has just moved the copy from one location to
another... I retract my comment.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. To do so, please top-post your response.
If your project does not have this feature enabled and wishes so, or if the
feature is enabled but not working, please contact infrastructure at
infrastructure.apache.org or file a JIRA ticket with INFRA.
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/180#discussion_r10021400
--- Diff: core/src/main/scala/org/apache/spark/storage/MemoryStore.scala ---
@@ -65,17 +65,19 @@ private class MemoryStore(blockManager
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/180#issuecomment-35966227
Hey @kellrott - I started to do a review on this focused on the tests and
smaller stuff. But I realized, this makes a fairly major change to the block
manager
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/180#discussion_r10020928
--- Diff:
core/src/test/scala/org/apache/spark/storage/LargeIteratorSuite.scala ---
@@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/180#discussion_r10020917
--- Diff:
core/src/test/scala/org/apache/spark/storage/LargeIteratorSuite.scala ---
@@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/180#discussion_r10020880
--- Diff:
core/src/test/scala/org/apache/spark/storage/LargeIteratorSuite.scala ---
@@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/180#discussion_r10020710
--- Diff:
core/src/main/scala/org/apache/spark/serializer/JavaSerializer.scala ---
@@ -25,7 +25,22 @@ import org.apache.spark.SparkConf
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/180#discussion_r10020704
--- Diff:
core/src/main/scala/org/apache/spark/serializer/JavaSerializer.scala ---
@@ -25,7 +25,22 @@ import org.apache.spark.SparkConf
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/180#discussion_r10020702
--- Diff:
core/src/main/scala/org/apache/spark/serializer/JavaSerializer.scala ---
@@ -25,7 +25,22 @@ import org.apache.spark.SparkConf
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/539#discussion_r10019661
--- Diff:
core/src/main/java/org/apache/spark/api/java/function/DoubleFunction.java ---
@@ -15,13 +15,10 @@
* limitations under the License
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/538#issuecomment-35954077
Hey @tgraves do you mind taking a quick look? I was hoping to merge this
one.
---
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/553#issuecomment-35954097
@tgravescs mind taking a look here as well?
---
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/180#discussion_r10011961
--- Diff:
core/src/test/scala/org/apache/spark/storage/LargeIteratorSuite.scala ---
@@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/638#issuecomment-35856428
We should put this fix in 0.9 as well once it's ready to merge.
---
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/638#issuecomment-35856419
Thanks a lot for tracking this down, fixing it, and adding tests! I added
some minor style comments, modulo those comments LGTM.
---
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/638#discussion_r9980747
--- Diff:
core/src/test/scala/org/apache/spark/rdd/PairRDDFunctionsSuite.scala ---
@@ -330,4 +335,74 @@ class PairRDDFunctionsSuite extends FunSuite
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/638#discussion_r9980751
--- Diff:
core/src/test/scala/org/apache/spark/rdd/PairRDDFunctionsSuite.scala ---
@@ -330,4 +335,74 @@ class PairRDDFunctionsSuite extends FunSuite
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/638#discussion_r9980734
--- Diff:
core/src/test/scala/org/apache/spark/rdd/PairRDDFunctionsSuite.scala ---
@@ -26,6 +26,11 @@ import com.google.common.io.Files
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/638#discussion_r9980719
--- Diff: core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala
---
@@ -617,6 +617,10 @@ class PairRDDFunctions[K: ClassTag, V: ClassTag
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/638#discussion_r9980723
--- Diff: core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala
---
@@ -617,6 +617,10 @@ class PairRDDFunctions[K: ClassTag, V: ClassTag
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/638#issuecomment-35851045
Jenkins, test this please.
---
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/635#issuecomment-35848534
@coderxiang btw - it might be something where we make it a private API so
it can be used inside of Spark if other packages need this to do broadcast
joins. It
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/635#issuecomment-35841766
Hey @coderxiang - this is interesting functionality but I'm -1 on including
it in the standard API. The main reason is that this will perform poorly on
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/626#issuecomment-35841445
Hey @CodingCat this approach has a few drawbacks. First, it will mean a
pretty bad regression for some users. For instance, say that a user is calling
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/570#issuecomment-35840761
@srowen thanks for this clean-up. I'm going to merge this into master.
---
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/627#issuecomment-35824436
Hey @berngp - I can't reproduce this problem. There is already a line in
`spark-classpath` that adds this correctly:
```
else
# Else use
```
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/632#issuecomment-35823177
Jenkins, test this please.
---
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/570#issuecomment-35821527
hey @srowen this is looking good. My main observation was I think we should
add the slf4j-log4j bridge as a compile dependency in core in the maven build
and
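As a hedged sketch of the kind of Maven declaration being suggested (the artifact name `slf4j-log4j12` is the standard slf4j-to-log4j binding; the version shown is illustrative only, not taken from this thread):

```xml
<!-- Sketch only: binds the slf4j API to a log4j 1.x backend.
     Version number is illustrative, not from the pull request. -->
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-log4j12</artifactId>
  <version>1.7.5</version>
</dependency>
```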
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/570#discussion_r9974017
--- Diff: project/SparkBuild.scala ---
@@ -236,13 +236,15 @@ object SparkBuild extends Build {
publishLocalBoth <<= Seq(publishLo
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/570#discussion_r9973984
--- Diff: project/SparkBuild.scala ---
@@ -268,9 +272,9 @@ object SparkBuild extends Build {
"it.unimi.dsi" %
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/570#discussion_r9973958
--- Diff: bagel/pom.xml ---
@@ -51,6 +51,11 @@
scalacheck_${scala.binary.version}
test
+
+ org.slf4j
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/570#discussion_r9973960
--- Diff: bagel/pom.xml ---
@@ -51,6 +51,11 @@
scalacheck_${scala.binary.version}
test
+
+ org.slf4j
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/614#discussion_r9972298
--- Diff: repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala ---
@@ -876,7 +876,14 @@ class SparkILoop(in0: Option[BufferedReader
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/614#issuecomment-35809765
Jenkins, test this please.
---
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/614#discussion_r9972286
--- Diff: repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala ---
@@ -876,7 +876,14 @@ class SparkILoop(in0: Option[BufferedReader
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/614#issuecomment-35809685
// @scrapcodes - mind reviewing this?
---
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/634#discussion_r9972227
--- Diff: docker/spark-test/base/Dockerfile ---
@@ -25,8 +25,8 @@ RUN apt-get update
# install a few other useful packages plus Open Jdk 7
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/631#issuecomment-35796348
Thanks @mengxr. Merged into master and 0.9
---
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/624#issuecomment-35793996
Hmm... github API having issues so it'll have to wait.
---
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/624#issuecomment-35793977
Thanks @andrewor14 I'll merge this.
---
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/523#issuecomment-35793875
I realize now I back-ported this already:
https://git-wip-us.apache.org/repos/asf?p=incubator-spark.git;a=commit;h=5edbd175e07dc9704b1babb9c5e8d97fb644be65
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/523#issuecomment-35793620
@JoshRosen should we pick this into the 0.9 branch?
---
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/570#issuecomment-35783848
@srowen - hey just wondering, why are you including the jul and jcl
bindings? Are there random dependencies that are outputting logs through this?
---
Github user pwendell closed the pull request at:
https://github.com/apache/incubator-spark/pull/573
---
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/624#issuecomment-35776248
@andrewor14 Looks good to me. This adds a few function calls in the `next`
path... would you mind doing some local regression testing just to make sure
there
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/629#issuecomment-35775112
@rxin would it make sense to backport this? Seems like a bug fix
---
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/624#discussion_r9959040
--- Diff:
core/src/main/scala/org/apache/spark/util/collection/ExternalAppendOnlyMap.scala
---
@@ -302,12 +304,14 @@ private[spark] class
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/624#discussion_r9958879
--- Diff:
core/src/main/scala/org/apache/spark/util/collection/ExternalAppendOnlyMap.scala
---
@@ -323,13 +327,12 @@ private[spark] class
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/624#discussion_r9958856
--- Diff:
core/src/main/scala/org/apache/spark/util/collection/ExternalAppendOnlyMap.scala
---
@@ -323,13 +327,12 @@ private[spark] class
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/625#discussion_r9956570
--- Diff: core/src/main/scala/org/apache/spark/deploy/ClientArguments.scala
---
@@ -115,3 +110,7 @@ private[spark] class ClientArguments(args
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/625#discussion_r9955905
--- Diff: core/src/test/scala/org/apache/spark/deploy/ClientSuite.scala ---
@@ -0,0 +1,17 @@
+package org.apache.spark.deploy
--- End diff
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/625#discussion_r9953789
--- Diff: core/src/main/scala/org/apache/spark/deploy/ClientArguments.scala
---
@@ -115,3 +110,7 @@ private[spark] class ClientArguments(args
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/625#issuecomment-35706454
Jenkins, test this please.
---
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/625#issuecomment-35705998
@aarondav fixed the nit, waiting for tests.
---
GitHub user pwendell opened a pull request:
https://github.com/apache/incubator-spark/pull/625
SPARK-: URL Validation Throws Error for HDFS URL's
Fixes an error where HDFS URL's cause an exception. Should be merged into
master and 0.9.
You can merge this pull requ
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/622#issuecomment-35691749
@ahirreddy Could you create a JIRA for this? Thanks
---
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/402#discussion_r9934007
--- Diff: core/src/main/scala/org/apache/spark/api/java/JavaRDDLike.scala
---
@@ -73,7 +74,7 @@ trait JavaRDDLike[T, This <: JavaRDDLike[T, T
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/402#discussion_r9933178
--- Diff: core/src/main/scala/org/apache/spark/api/java/JavaRDDLike.scala
---
@@ -73,7 +74,7 @@ trait JavaRDDLike[T, This <: JavaRDDLike[T, T
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/585#discussion_r9933131
--- Diff: project/MimaBuild.scala ---
@@ -0,0 +1,105 @@
+import com.typesafe.tools.mima.plugin.MimaKeys.{binaryIssueFilters,
previousArtifact
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/623#issuecomment-35687583
Thanks aaron looks good. I'll merge this into master and 0.9.
---
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/615#issuecomment-35684716
Hm actually sorry that was totally wrong. Who uses this script externally
at all? Why don't we just _not_ document this...
---
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/615#issuecomment-35684474
I think SPARK_CLIENT_MEMORY isn't so hot either because most often
`spark-class` isn't used to run a client, it's most often used by users t
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/612#issuecomment-35681623
@guojc - hey since Andrew may propose a slightly different fix, I want to
make sure you are credited with this in our release notes. Are you Jiacheng
Guo
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/621#issuecomment-35680133
Jenkins, test this please.
---
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/615#issuecomment-35581371
It looks like there is a separate variable called `amMemory` that deals
with this in YARN. The command for launching that JVM gets set-up in
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/615#issuecomment-35581271
@sryza - I don't think this is relevant to the YARN codepath. AFAIK YARN
doesn't use the ./spark-class script to launch the YARN application mast
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/192#issuecomment-35580209
See SPARK-1110... I took down some notes there relevant to this:
https://spark-project.atlassian.net/browse/SPARK-1110
---
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/585#discussion_r9890431
--- Diff: project/MimaBuild.scala ---
@@ -0,0 +1,105 @@
+import com.typesafe.tools.mima.plugin.MimaKeys.{binaryIssueFilters,
previousArtifact
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/585#discussion_r9890236
--- Diff: project/MimaBuild.scala ---
@@ -0,0 +1,105 @@
+import com.typesafe.tools.mima.plugin.MimaKeys.{binaryIssueFilters,
previousArtifact
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/585#discussion_r9890048
--- Diff: project/MimaBuild.scala ---
@@ -0,0 +1,115 @@
+import com.typesafe.tools.mima.plugin.MimaKeys.{binaryIssueFilters,
previousArtifact
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/585#discussion_r9890019
--- Diff: project/MimaBuild.scala ---
@@ -0,0 +1,115 @@
+import com.typesafe.tools.mima.plugin.MimaKeys.{binaryIssueFilters,
previousArtifact
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/585#discussion_r9889925
--- Diff: project/MimaBuild.scala ---
@@ -0,0 +1,105 @@
+import com.typesafe.tools.mima.plugin.MimaKeys.{binaryIssueFilters,
previousArtifact
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/618#issuecomment-35567163
Thanks guys I put this in master and 0.9.
---
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/618#issuecomment-35516252
LGTM pending a small fix -- @aarondav want to take a look?
---
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/incubator-spark/pull/618#discussion_r9870730
--- Diff: docs/index.md ---
@@ -19,7 +19,7 @@ Spark uses [Simple Build Tool](http://www.scala-sbt.org),
which is bundled with
sbt/sbt
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/585#issuecomment-35471000
It turns out our issues are shortcomings of the existing MIMA tool. The
visibility of classes at the bytecode level is different than the visibility
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/617#issuecomment-35459569
Thanks, this is nice. LGTM.
---
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/616#issuecomment-35459440
Hey so I looked at it more closely, we need to look on a case-by-case basis
where this is used. Some of the cases should be changed to 2.10.3 and others
should
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/616#issuecomment-35457956
I'm looking into it with @aarondav right now... just want to make sure this
doesn't break anything.
---
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/616#issuecomment-35457734
I reverted this in master and 0.9
---
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/616#issuecomment-35457550
@aarondav this needs to be reverted... it will break everywhere we tell
people how to depend on Spark:
scala-programming-guide.md:artifactId
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/585#issuecomment-35416994
@ScrapCodes hm... I noticed a bigger issue that things which are package
private are not being filtered in MIMA and this is leading to a bunch of the
false
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/585#discussion_r9834615
Let's just exclude the entire `ExternalAppendOnlyMap` class. Then you can
add a comment that says "Was made private in 1.0"
---
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/585#discussion_r9834629
Then you can add a comment that says "Scheduler is not considered a public
API"
---
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/585#discussion_r9834543
I think you can do it with excludePackage:
https://github.com/typesafehub/migration-manager/blob/master/core/src/main/scala/com/typesafe/tools/mima/core
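As a hedged sketch of the kind of exclusion being discussed (filter names are from the migration-manager plugin's API; the exact entries in Spark's `MimaBuild.scala` are assumptions, not quoted from the thread):

```scala
import com.typesafe.tools.mima.core._

// Sketch only: suppress binary-compatibility checks for an internal
// package and for one class. The names below are illustrative.
val ignoredABIProblems = Seq(
  // Matches every class under org.apache.spark.deploy
  ProblemFilters.excludePackage("org.apache.spark.deploy"),
  // Was made private in 1.0
  ProblemFilters.exclude[MissingClassProblem](
    "org.apache.spark.util.collection.ExternalAppendOnlyMap")
)
```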
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/585#discussion_r9834337
is it possible to exclude the entire `org.apache.spark.deploy` package?
---
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/299#issuecomment-35353411
@velvia took a pass on this. This will be a useful feature. One question
about the need to explicitly pass classloaders.
---
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/299#discussion_r9812862
It would be great if it were possible to not modify the way akka
initializes... but there may be some reason I'm missing why we need to.
---
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/299#discussion_r9812824
I'm a bit confused on this one... if we set the context class loader to be
the new one in SparkContext, and akka's default is to get the context cla
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/299#discussion_r9812781
Mind formatting this to meet the style guide:
```
private[spark] val classLoader =
if (conf.getBoolean("spark.driver.add-dynamic-jars"
```
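For illustration, a sketch of how that snippet might be laid out to satisfy the style guide being referenced (the default value and the loader construction are assumptions, not taken from the pull request):

```scala
// Sketch only: two-space indentation, with the if/else expression
// indented under the val it initializes.
private[spark] val classLoader =
  if (conf.getBoolean("spark.driver.add-dynamic-jars", false)) {
    // Hypothetical: a loader that can pick up jars added at runtime
    new java.net.URLClassLoader(Array.empty[java.net.URL], getClass.getClassLoader)
  } else {
    getClass.getClassLoader
  }
```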
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/299#discussion_r9812713
Makes sense, I think it's fine to just change the local:// semantics a bit.
---
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/299#discussion_r9812694
I also think it's fine to ditch this and just include it in the config
docs... it's up to you though... I just realized after seeing this there is
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/299#discussion_r9812596
Now that I see this here, I wonder if maybe it just makes sense to note
this in the configuration doc below rather than here. It's really only relevant
to p
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/299#discussion_r9812541
To be consistent with the other naming this should be `camelCase` rather
than `lower-with-hyphens`.
Also, how about `spark.driver.addJars`. It seems to
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/603#issuecomment-35328014
@nicklan @mateiz as an alternative, it might be simplest to just download a
Tachyon package (e.g. don't use maven to deal with this). And have a nice
gra
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/603#issuecomment-35316947
@mateiz @nicklan - I think it's fine to ask Tachyon to publish this stuff
in their jars. Having Tachyon code inside of the Spark codebase is not good f
Github user pwendell commented on the pull request:
https://github.com/apache/incubator-spark/pull/585#issuecomment-35139456
Hey @ScrapCodes - this is a great start! To give some background, we
actually want to use this and reject pull requests that cause Mima errors. That
means we