Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2014#discussion_r17472638
--- Diff: README.md ---
@@ -66,78 +69,24 @@ Many of the example programs print usage help if no
params are given.
## Running Tests
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2346#issuecomment-55484675
@rxin (Yeah wasn't sure how to handle the continuation indent, feel free to
change it.) I didn't add it to `SparkContext` because I figured the purpose of
the change
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2386#issuecomment-55526164
Again, still looks like a duplicate of
https://github.com/apache/spark/pull/1875
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2357#issuecomment-55570721
- If you bind a plugin to the `install` phase and declare it before
`maven-install-plugin`, will it happen to respect the ordering?
- This is arguably something that can
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2393#discussion_r17533627
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/TestHive.scala
---
@@ -41,7 +49,27 @@ import org.apache.spark.sql.SQLConf
import
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2391#issuecomment-55571126
Can you explain this patch? What problem does it solve and why? There is no
JIRA here either.
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2357#issuecomment-55572928
I see, I thought you mentioned above that running a plugin before `install`
would work. It sounds like there is some internal state of the plugin you need
to modify, OK
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2014#issuecomment-55598514
@andrewor14 @nchammas @pwendell Humble ping on this one, I think it's good
to go, and probably helps head off some build questions going forward.
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2014#discussion_r17546414
--- Diff: docs/building-spark.md ---
@@ -159,4 +160,13 @@ then ship it over to the cluster. We are investigating
the exact cause
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2014#issuecomment-55609958
Yes, the build already warns if zinc is not being used.
To keep this scoped, I suggest that could be handled separately if more
docs were desired about zinc
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/2399
SPARK-2932 [STREAMING] Move MasterFailureTest out of main source directory
(HT @vanzin) Whatever the reason was for having this test class in `main`,
if there is one, appears to be moot. This may
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2014#issuecomment-55661991
@pwendell I changed to `sbt/sbt`, and @markhamstra I took the liberty of
adding a note on `zinc` while we're at it.
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2014#issuecomment-55664791
@markhamstra Nice one, change coming up...
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/2403
SPARK-2745 [STREAMING] Add Java friendly methods to Duration class
@tdas is this what you had in mind for this JIRA? I saw this one and
thought it would be easy to take care of, and helpful as I use
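If this is the intended direction, the API might look like the following
minimal sketch; the `Durations` object and its method names are illustrative
assumptions, not taken from the PR itself:
```
object Durations {
  // Hypothetical Java-friendly factories; Duration takes milliseconds.
  def milliseconds(ms: Long): Duration = new Duration(ms)
  def seconds(s: Long): Duration = new Duration(s * 1000L)
  def minutes(m: Long): Duration = new Duration(m * 60000L)
}
```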
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2395#issuecomment-55684771
Hang on, that file is versioned in the repo. I don't think you want to
ignore it! Not without deciding it should be deleted.
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2395#issuecomment-55705179
I was commenting on a comment, suggesting to also ignore conf/slaves. It is
not in the PR so LGTM.
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2014#issuecomment-55717325
@pwendell no I believe that the user still has to install the gem. I did at
least. Yes this is GTG from my end.
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2393#issuecomment-55728033
@chenghao-intel For example, in `FileServerSuite`:
```
override def beforeAll() {
  super.beforeAll()
  tmpDir = Files.createTempDir()
```
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2408#discussion_r17598920
--- Diff: core/src/main/scala/org/apache/spark/network/ManagedBuffer.scala
---
@@ -66,8 +67,13 @@ final class FileSegmentManagedBuffer(val file: File, val
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2408#discussion_r17598955
--- Diff:
core/src/main/scala/org/apache/spark/storage/ShuffleBlockFetcherIterator.scala
---
@@ -111,13 +112,21 @@ final class ShuffleBlockFetcherIterator
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2408#discussion_r17600258
--- Diff:
core/src/main/scala/org/apache/spark/storage/ShuffleBlockFetcherIterator.scala
---
@@ -111,13 +112,21 @@ final class ShuffleBlockFetcherIterator
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2403#issuecomment-55777417
@mateiz Will do. There's one catch. Since `Duration` has an accessor named
`milliseconds`, and has a private accessor called `millis` from the
constructor, I can't create
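The clash being described, for illustration: in Scala, a parameterless
accessor and a same-named method cannot coexist in one class. A minimal
sketch with simplified definitions:
```
class Duration(private val millis: Long) {
  def milliseconds: Long = millis
  // A second definition with the same name is a "defined twice" compile
  // error, so a Java-friendly method needs a different name or home:
  // def milliseconds(): Long = millis
}
```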
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2328#discussion_r17620855
--- Diff: pom.xml ---
@@ -888,7 +888,7 @@
<plugin>
  <groupId>org.scalatest</groupId>
  <artifactId>scalatest-maven-plugin</artifactId>
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2403#issuecomment-55825501
@mateiz Ah of course. I overlooked the obvious somehow. I'm looking at why
MIMA binary checks fail to see if it has a point or not now.
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2419#issuecomment-55858790
@derrickburns I think these notes can go in code comments? (They each
generate their own email too.)
This is also a big-bang change covering several issues, some
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2403#issuecomment-55864499
I'm not sure what to make of the MIMA errors:
* the type hierarchy of object org.apache.spark.streaming.Duration has
changed in new version. Missing types
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2463#issuecomment-56239570
Is the goal here just to make the recursive calls take fewer stack frames
and make it harder to overflow? I got the impression there was an infinite
recursion lurking
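For context, the usual way to make recursion use constant stack in Scala is
a tail-recursive rewrite, which the compiler turns into a loop; a generic
sketch, not code from this PR:
```
import scala.annotation.tailrec

// @tailrec verifies the call is in tail position, so stack depth stays
// constant regardless of input size.
@tailrec
def sum(xs: List[Int], acc: Long = 0L): Long = xs match {
  case Nil => acc
  case head :: tail => sum(tail, acc + head)
}
```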
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2480#discussion_r17845815
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -1270,13 +1270,12 @@ abstract class RDD[T: ClassTag](
* doCheckpoint() is called
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2399#issuecomment-56368310
@pwendell `compute-classpath` will put test classes on the classpath for
`spark-submit` et al if `SPARK_TESTING=1`, which is set by test invocations.
This is how
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/2508
[SPARK-3356] [DOCS] Document when RDD elements' ordering within partitions
is nondeterministic
As suggested by @mateiz , and because it came up on the mailing list again
last week, this attempts
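A minimal sketch of the behavior being documented, assuming an existing
SparkContext `sc`:
```
// After a shuffle, the order of elements within each partition is not
// guaranteed, so indices derived from that order can differ across runs.
val data = sc.parallelize(1 to 1000).repartition(8)
val indexed = data.zipWithIndex()  // index depends on within-partition order
```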
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2514#discussion_r17954025
--- Diff:
core/src/main/scala/org/apache/spark/util/collection/ExternalSorter.scala ---
@@ -152,7 +152,7 @@ private[spark] class ExternalSorter[K, V, C
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2514#discussion_r17957564
--- Diff:
core/src/main/scala/org/apache/spark/util/collection/ExternalSorter.scala ---
@@ -152,7 +152,7 @@ private[spark] class ExternalSorter[K, V, C
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2508#issuecomment-56651085
@mateiz Got it. On the zip methods, I want to capture the key point from
https://issues.apache.org/jira/browse/SPARK-3098 , that the ordering is not
only not guaranteed
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2216#issuecomment-56787307
MIMA is complaining because a method is added to trait `JavaDStreamLike`. I
think it should just be suppressed, as there's no guarantee to callers that
this trait won't
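Such suppressions typically go in `project/MimaExcludes.scala`; a sketch with
a placeholder method name, since the actual method is not named in this
excerpt:
```
import com.typesafe.tools.mima.core._

// Exclude the newly added trait method from binary-compatibility checks.
// "newMethod" is a placeholder, not the method from this PR.
ProblemFilters.exclude[MissingMethodProblem](
  "org.apache.spark.streaming.api.java.JavaDStreamLike.newMethod")
```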
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2508#issuecomment-56829748
@mateiz Yeah, there's no mention of zip methods in the programming guide,
so if the groupBy method note isn't so valuable, I think there's probably no
useful note
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2474#issuecomment-56871752
Yes, as I recall, Hadoop 1 + S3 requires jets3t 0.7 to work correctly. (Or
else we would have also updated it to 0.9). I also believe that 3.x and 4.x of
the HTTP
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2474#issuecomment-56932682
@JoshRosen It looks like HttpClient 4.1.3 comes in from Thrift via Hive:
```
mvn dependency:tree
...
[INFO] org.apache.spark:spark-hive_2.10:jar:1.2.0
```
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2545#issuecomment-56949709
@397090770 I think you are accidentally opening pull requests. Can you
close these please?
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2546#discussion_r18089631
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -153,13 +153,18 @@ private[history] class FsHistoryProvider(conf
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2558#issuecomment-57053829
No, `building-spark.html` is the new URL of the page. This should not be
changed. The project site, however, does need to be rebuilt soon.
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2455#discussion_r18124375
--- Diff:
core/src/main/scala/org/apache/spark/util/random/RandomSampler.scala ---
@@ -53,56 +81,237 @@ trait RandomSampler[T, U] extends Pseudorandom
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2455#discussion_r18124373
--- Diff:
core/src/main/scala/org/apache/spark/util/random/RandomSampler.scala ---
@@ -43,9 +46,34 @@ trait RandomSampler[T, U] extends Pseudorandom
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2455#discussion_r18124383
--- Diff:
core/src/main/scala/org/apache/spark/util/random/RandomSampler.scala ---
@@ -53,56 +81,237 @@ trait RandomSampler[T, U] extends Pseudorandom
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2455#discussion_r18124388
--- Diff:
core/src/main/scala/org/apache/spark/util/random/RandomSampler.scala ---
@@ -53,56 +81,237 @@ trait RandomSampler[T, U] extends Pseudorandom
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2455#discussion_r18124397
--- Diff:
core/src/main/scala/org/apache/spark/util/random/RandomSampler.scala ---
@@ -53,56 +81,237 @@ trait RandomSampler[T, U] extends Pseudorandom
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2455#discussion_r18124403
--- Diff:
core/src/main/scala/org/apache/spark/util/random/RandomSampler.scala ---
@@ -53,56 +81,237 @@ trait RandomSampler[T, U] extends Pseudorandom
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/2564
SPARK-2548 [STREAMING] JavaRecoverableWordCount is missing
Here's my attempt to re-port `RecoverableNetworkWordCount` to Java,
following the example of its Scala and Java siblings. I fixed a few
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2564#issuecomment-57095584
Jenkins, test this please.
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/2575
SPARK-2626 [DOCS] Stop SparkContext in all examples
Call SparkContext.stop() in all examples (and touch up minor nearby code
style issues while at it)
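The change itself is mechanical; the pattern applied across the examples
amounts to the following sketch (the app name is illustrative):
```
import org.apache.spark.{SparkConf, SparkContext}

object ExampleApp {
  def main(args: Array[String]) {
    val sc = new SparkContext(new SparkConf().setAppName("ExampleApp"))
    // ... example logic ...
    sc.stop()  // the call this PR adds to each example
  }
}
```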
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2575#discussion_r18148901
--- Diff:
examples/src/main/scala/org/apache/spark/examples/GroupByTest.scala ---
@@ -44,11 +44,11 @@ object GroupByTest {
arr1(i
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2575#discussion_r18148941
--- Diff: examples/src/main/java/org/apache/spark/examples/JavaSparkPi.java
---
@@ -61,5 +60,7 @@ public Integer call(Integer integer, Integer integer2
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2579#discussion_r18166753
--- Diff: docs/running-on-yarn.md ---
@@ -159,7 +159,7 @@ For example:
lib/spark-examples*.jar \
10
-The above starts
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2455#discussion_r18202539
--- Diff:
core/src/main/scala/org/apache/spark/util/random/RandomSampler.scala ---
@@ -43,9 +46,34 @@ trait RandomSampler[T, U] extends Pseudorandom
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2455#discussion_r18202776
--- Diff:
core/src/main/scala/org/apache/spark/util/random/RandomSampler.scala ---
@@ -53,56 +81,237 @@ trait RandomSampler[T, U] extends Pseudorandom
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2596#issuecomment-57356921
It looks like the `search.maven.org` URL now just redirects to the
`repo1.maven.org` URL:
```
$ curl http://search.maven.org/remotecontent?filepath=org
```
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/2601
SPARK-3744 [STREAMING] FlumeStreamSuite will fail during port contention
Since it looked quite easy, I took the liberty of making a quick PR that
just uses `Utils.startServiceOnPort` to fix
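`Utils.startServiceOnPort` retries the bind on successive ports, which is
what sidesteps the contention; a simplified sketch of that pattern (the real
signature in `Utils` differs):
```
import java.net.BindException

// Try startPort, then successive ports, until the service binds or
// retries run out.
def startServiceOnPort[T](startPort: Int, maxRetries: Int)(
    startService: Int => T): (T, Int) = {
  for (offset <- 0 to maxRetries) {
    val port = startPort + offset
    try {
      return (startService(port), port)
    } catch {
      case _: BindException => // port in use; try the next one
    }
  }
  throw new BindException(s"Could not bind a service in $maxRetries retries")
}
```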
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2611#discussion_r18267520
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -1470,6 +1472,7 @@ private[spark] object Utils extends Logging {
return
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2564#issuecomment-57449006
@tdas No problem, text removed. I tested the Java example using the
instructions in the javadoc, and that worked. I was lazy, and didn't try it on
a cluster and try
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2623#discussion_r18326718
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -1437,7 +1437,13 @@ private[spark] object Utils extends Logging {
val
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2623#discussion_r18332145
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -1437,7 +1437,13 @@ private[spark] object Utils extends Logging {
val
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2623#discussion_r18334033
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -1437,7 +1437,13 @@ private[spark] object Utils extends Logging {
val
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2640#issuecomment-57771950
I don't agree that users should be directed to build with SBT. Maven is the
default, but it's probably best not to make a specific recommendation at all.
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2638#issuecomment-57791903
A particular instance of Spark will be built for a particular version of
Hadoop and/or YARN. It is not at this point a universal binary anyway, and so,
I do not think
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2637#issuecomment-57792726
+1, makes more sense for sure. Open a JIRA for this as well?
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2643#issuecomment-57808284
Some changes here make lines longer than 100 characters, which contradicts
the scalastyle rules for the project. I am not sure that moving comments around
to a previous
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2640#issuecomment-57808436
Oh I'm sorry, my poor brain read the diff the wrong way around. I agree
with the change!
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2629#issuecomment-57936350
Since this is a doc change only, the test failure must be spurious and I
think it's ignorable. (Although you might break your long line across two lines
if you change
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/2662
SPARK-3794 [CORE] Building spark core fails due to inadvertent dependency
on Commons IO
Remove references to Commons IO FileUtils and replace them with a pure Java
version, which doesn't need to traverse
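A pure-Java-API replacement for Commons IO's `FileUtils.deleteDirectory` can
be quite small; an illustrative sketch, not the exact code in this PR:
```
import java.io.{File, IOException}

// Delete children depth-first, then the file or directory itself.
// listFiles() can return null on I/O error, hence the Option wrapper.
def deleteRecursively(file: File): Unit = {
  if (file.isDirectory) {
    Option(file.listFiles()).getOrElse(Array.empty[File]).foreach(deleteRecursively)
  }
  if (!file.delete() && file.exists()) {
    throw new IOException("Failed to delete: " + file.getAbsolutePath)
  }
}
```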
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2660#issuecomment-57937489
I think this may be subsumed in https://github.com/apache/spark/pull/2662
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2662#discussion_r18440969
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -710,18 +708,20 @@ private[spark] object Utils extends Logging {
* Determines
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2667#discussion_r18447056
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/evaluation/RankingMetrics.scala ---
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2667#discussion_r18447076
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/evaluation/RankingMetrics.scala ---
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2667#discussion_r18447094
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/evaluation/RankingMetrics.scala ---
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2667#discussion_r18447346
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/evaluation/RankingMetrics.scala ---
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2667#discussion_r18447381
--- Diff:
mllib/src/test/scala/org/apache/spark/mllib/evaluation/RankingMetricsSuite.scala
---
@@ -0,0 +1,49 @@
+/*
+ * Licensed to the Apache
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2667#discussion_r18447409
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/evaluation/RankingMetrics.scala ---
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2667#discussion_r18447432
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/evaluation/RankingMetrics.scala ---
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/2670
SPARK-3811 [CORE] More robust / standard Utils.deleteRecursively,
Utils.createTempDir
I noticed a few issues with how temp directories are created and deleted:
*Minor*
* Guava's
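The entry is cut off, but on the creation side, a common way to be more
robust than a single attempt is to retry with fresh random names; a sketch
under the assumption that this is the direction of the change:
```
import java.io.{File, IOException}
import java.util.UUID

// Retry a bounded number of times, with a fresh random name each attempt.
def createTempDir(root: String = System.getProperty("java.io.tmpdir")): File = {
  val maxAttempts = 10
  for (_ <- 1 to maxAttempts) {
    val dir = new File(root, "spark-" + UUID.randomUUID.toString)
    if (dir.mkdirs()) {
      return dir
    }
  }
  throw new IOException(s"Failed to create a temp directory under $root")
}
```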
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2667#discussion_r18468077
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/evaluation/RankingMetrics.scala ---
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2667#discussion_r18469803
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/evaluation/RankingMetrics.scala ---
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2670#discussion_r18473543
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -251,15 +265,8 @@ private[spark] object Utils extends Logging {
} catch
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2670#discussion_r18474437
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -666,15 +673,27 @@ private[spark] object Utils extends Logging {
*/
def
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2670#discussion_r18474544
--- Diff:
core/src/test/scala/org/apache/spark/rdd/PairRDDFunctionsSuite.scala ---
@@ -381,14 +382,13 @@ class PairRDDFunctionsSuite extends FunSuite
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2670#discussion_r18474568
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -666,15 +673,27 @@ private[spark] object Utils extends Logging {
*/
def
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2682#issuecomment-58103371
Is it the Hadoop 2.2 test code that needs these things, or Spark test code?
I am guessing it is the former, and this is because test deps aren't transitive
but happen
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2691#issuecomment-58146963
This seems quite heavyweight compared to Patrick's suggestion of just using
a static object. Why the need for custom logic to load classes? (which even
opens up security
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2523#discussion_r18541629
--- Diff: docs/graphx-programming-guide.md ---
@@ -620,7 +620,7 @@ more senior followers of each user.
import org.apache.spark.graphx.util.GraphGenerators
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/1508#issuecomment-49590837
It's the MIMA test that fails, since the method signature is changed. It's
possible to keep and deprecate the existing method of course. Should we just do
that, or OK
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1425#discussion_r15216588
--- Diff:
mllib/src/test/scala/org/apache/spark/mllib/util/TestingUtils.scala ---
@@ -18,28 +18,90 @@
package org.apache.spark.mllib.util
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/1425#issuecomment-49713754
Is it possible to support syntax like `0.3 +- 0.1` for absolute error, and
`0.3 +- 10%` for relative error? Seems like the kind of crazy thing that Scala
just might
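Scala can express that with an implicit conversion defining a `+-` operator;
a hedged sketch of such a DSL, with hypothetical names:
```
object ToleranceSketch {
  // An expected value with an absolute tolerance, and a comparison.
  case class Approx(expected: Double, eps: Double) {
    def ~==(actual: Double): Boolean = math.abs(actual - expected) <= eps
  }
  // Enables writing: 0.3 +- 0.1
  implicit class DoubleTolerance(val expected: Double) extends AnyVal {
    def +-(eps: Double): Approx = Approx(expected, eps)
  }
}
```
With `import ToleranceSketch._`, a test can then assert
`(0.3 +- 0.1) ~== 0.25`.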
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/1529#issuecomment-49745431
Interesting, I don't see any such error, and haven't as far as I can
remember. I'm on OS X. The change is probably harmless anyway but what is your
configuration
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/1425#issuecomment-49853321
@mengxr Sure, maybe the % syntax isn't helpful. I just mean two different
operators or methods of some kind. Why bother with these issues instead of
making two methods
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/1547
SPARK-2646. log4j initialization not quite compatible with log4j 2.x
The logging code that handles log4j initialization leads to a stack
overflow error when used with log4j 2.x, which has just been
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1608#discussion_r15438411
--- Diff: external/hbase/pom.xml ---
@@ -0,0 +1,217 @@
+<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1608#discussion_r15438415
--- Diff: external/hbase/pom.xml ---
@@ -0,0 +1,217 @@
+<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1608#discussion_r15438420
--- Diff: external/hbase/pom.xml ---
@@ -0,0 +1,217 @@
+<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1608#discussion_r15438418
--- Diff: external/hbase/pom.xml ---
@@ -0,0 +1,217 @@
+<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1608#discussion_r15438426
--- Diff: external/hbase/pom.xml ---
@@ -0,0 +1,217 @@
+<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1608#discussion_r15438429
--- Diff:
external/hbase/src/main/scala/org/apache/spark/hbase/HBaseContext.scala ---
@@ -0,0 +1,544 @@
+/*
+ * Licensed to the Apache Software
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1608#discussion_r15438438
--- Diff:
external/hbase/src/main/scala/org/apache/spark/hbase/HBaseContext.scala ---
@@ -0,0 +1,544 @@
+/*
+ * Licensed to the Apache Software