Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/908#issuecomment-44497341
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15279/
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/908#issuecomment-44497340
Merged build finished. All automated tests passed.
---
Github user laserson commented on the pull request:
https://github.com/apache/spark/pull/866#issuecomment-44497355
Will add tests tomorrow morning...
---
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/187#issuecomment-44497685
Merged this, thanks!
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/187
---
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/876#issuecomment-44497883
This functionality doesn't fit the definition of a Partitioner as used in
Spark (which requires it to consistently return the same partition for each
key), so it would be
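The requirement mateiz cites can be shown with a minimal sketch. The class below is illustrative only, not Spark's actual `org.apache.spark.Partitioner`:

```scala
// A partitioner must be a pure function of the key: the same key always
// maps to the same partition, or shuffles and joins would misroute records.
class ModuloPartitioner(val numPartitions: Int) {
  def getPartition(key: Any): Int = {
    val h = key.hashCode % numPartitions
    if (h < 0) h + numPartitions else h // keep the result non-negative
  }
}

val p = new ModuloPartitioner(4)
assert(p.getPartition("spark") == p.getPartition("spark")) // deterministic
assert(p.getPartition(-17) >= 0 && p.getPartition(-17) < 4)
```

Anything stateful or random in `getPartition` breaks this contract, which is why the proposed functionality does not fit.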
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/860#discussion_r13170390
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -329,8 +329,26 @@ private[spark] class BlockManager(
* never deletes
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/866#issuecomment-44498510
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15282/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/866#issuecomment-44498507
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/854#issuecomment-44498506
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/854#issuecomment-44498508
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15281/
---
GitHub user liancheng opened a pull request:
https://github.com/apache/spark/pull/909
[SPARK-1959] String NULL shouldn't be interpreted as null value
JIRA issue: [SPARK-1959](https://issues.apache.org/jira/browse/SPARK-1959)
You can merge this pull request into a Git repository by
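The distinction SPARK-1959 draws can be sketched in a few lines. This is illustrative only, not Spark SQL's actual parsing code:

```scala
// The four-character string "NULL" is ordinary data; only a genuinely
// absent value should become None.
def parseField(raw: String): Option[String] = Option(raw)

assert(parseField(null).isEmpty)            // a real null is missing data
assert(parseField("NULL").contains("NULL")) // the string "NULL" is data
```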
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/909#issuecomment-44498854
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/909#issuecomment-44498868
Merged build started.
---
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/685#discussion_r13170960
--- Diff: mllib/src/main/scala/org/apache/spark/mllib/util/MLUtils.scala ---
@@ -180,7 +180,39 @@ object MLUtils {
}
/**
- * ::
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/685#discussion_r13170957
--- Diff: mllib/src/main/scala/org/apache/spark/mllib/util/MLUtils.scala ---
@@ -180,7 +180,39 @@ object MLUtils {
}
/**
- * ::
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/685#discussion_r13171027
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/util/NumericParser.scala ---
@@ -0,0 +1,121 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/685#discussion_r13171126
--- Diff: python/pyspark/mllib/util.py ---
@@ -160,6 +157,40 @@ def saveAsLibSVMFile(data, dir):
lines.saveAsTextFile(dir)
+
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/685#issuecomment-44500939
@mengxr made a few more small comments, but this looks good to merge once
those are fixed.
---
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/685#issuecomment-44500971
BTW I also prefer the non-JSON format, it's a bit clearer and I don't think
there are huge benefits to JSON here.
---
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/735#issuecomment-44501123
Hey one other thought, is there a reason to have the max this low? It might
be good to make it even higher to deal with the odd large object (e.g. people
working with
Github user cloud-fan commented on the pull request:
https://github.com/apache/spark/pull/860#issuecomment-44503345
@mateiz That's a good idea! I have moved the lazy iterator into
`BlockManager.dataDeserialize`. Thanks for your comments!
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/909#issuecomment-44503402
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/909#issuecomment-44503403
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15284/
---
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/910
[WIP] Enable mima on spark-core.
I am not very sure if it was intentional or an oversight.
Just wanted to see the jenkins reaction to this.
You can merge this pull request into a Git
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/910#issuecomment-44511559
@pwendell Take a look !
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/910#issuecomment-44511717
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/910#issuecomment-44511731
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-44513813
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-44513802
Merged build triggered.
---
GitHub user guowei2 opened a pull request:
https://github.com/apache/spark/pull/911
[external/kafka] Receive Kafka messages with multiple consumers
It seems KafkaUtils works with only one consumer;
this patch makes it possible to receive Kafka messages with multiple consumers.
You can merge this pull
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/911#issuecomment-44515432
Can one of the admins verify this patch?
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/910#issuecomment-44515815
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-44518322
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-44518323
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15286/
---
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-44518455
Not sure why the build is marked as failed.
---
Github user jerryshao commented on the pull request:
https://github.com/apache/spark/pull/911#issuecomment-44525900
I think this can be done by the user at the application level; I'm not sure
if it is suitable to change it at the API layer. Also, this is not only
Kafka's problem; input streams like
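The application-level pattern suggested here can be sketched in plain Scala, with iterators standing in for Kafka streams (no Spark or Kafka dependency; all names are illustrative): open one stream per consumer, then union them.

```scala
// One stream per consumer, then merge them; analogous to creating several
// input DStreams and unioning them in the application.
def consumerStream(id: Int, msgs: Seq[String]): Iterator[String] =
  msgs.iterator.map(m => s"consumer-$id:$m")

val streams = (1 to 3).map(i => consumerStream(i, Seq("a", "b")))
val unioned = streams.reduce(_ ++ _)
assert(unioned.size == 6) // 3 consumers x 2 messages each
```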
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/905#issuecomment-44533025
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15287/
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/910#issuecomment-44545232
@ScrapCodes hey I actually noticed this a while back, but I decided not to
change it yet until 1.0 comes out (since we had many chances). Could you just
add this change
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/889
---
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/912
SPARK-1935: Explicitly add commons-codec 1.5 as a dependency (for
branch-0.9).
This is for branch 0.9.
You can merge this pull request into a Git repository by running:
$ git pull
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/912#issuecomment-44552106
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/912#issuecomment-44552092
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/877#issuecomment-44552096
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/877#issuecomment-44552118
Merged build started.
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/912#issuecomment-44553143
LGTM pending tests.
---
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/911#discussion_r13189924
--- Diff:
external/kafka/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala
---
@@ -70,7 +92,7 @@ object KafkaUtils {
kafkaParams:
Github user douglaz commented on the pull request:
https://github.com/apache/spark/pull/813#issuecomment-4455
I'll take a look at the python interface soon.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/913#issuecomment-44555062
Merged build triggered.
---
GitHub user marmbrus opened a pull request:
https://github.com/apache/spark/pull/913
[SQL] SPARK-1964 Add timestamp to hive metastore type parser.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/marmbrus/spark timestampMetastore
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/913#issuecomment-44555072
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/912#issuecomment-4428
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/877#issuecomment-44556900
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/877#issuecomment-44556903
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15289/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/913#issuecomment-44562888
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/913#issuecomment-44562890
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15290/
---
Github user cmccabe commented on the pull request:
https://github.com/apache/spark/pull/850#issuecomment-44564644
I tested this by running Spark on YARN against Hadoop 2.4 on a 4 node
cluster. I used the Pi example job.
---
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/776#discussion_r13195998
--- Diff:
core/src/main/scala/org/apache/spark/rdd/ParallelCollectionRDD.scala ---
@@ -128,18 +137,17 @@ private object ParallelCollectionRDD {
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/860#issuecomment-44566608
Merged build started.
---
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/860#issuecomment-44566086
Jenkins, this is ok to test
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/860#issuecomment-44566588
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/655#issuecomment-44567226
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/655#issuecomment-44567239
Merged build started.
---
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/560#issuecomment-44569275
I rebased and cleaned up the code some more. I think it's in pretty good
shape now, and the tests are much better.
Tested on yarn client / cluster, with and
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/860#issuecomment-44571555
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15291/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/860#issuecomment-44571554
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/655#issuecomment-44572188
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15292/
---
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/781#issuecomment-44569747
Ping.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/655#issuecomment-44572185
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/914#issuecomment-44584725
Merged build started.
---
Github user laserson commented on the pull request:
https://github.com/apache/spark/pull/866#issuecomment-44589138
@mateiz test added
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/866#issuecomment-44589254
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/915#issuecomment-44590372
Can one of the admins verify this patch?
---
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/915#issuecomment-44590733
As Patrick alluded to, those imports are on purpose, to direct logging from
things like Pig that use JCL to the logging that the rest of the app uses.
Because things like
GitHub user dorx opened a pull request:
https://github.com/apache/spark/pull/916
SPARK-1939 Refactor takeSample method in RDD to use ScaSRS
Modified the takeSample method in RDD to use the ScaSRS sampling technique
to improve performance. Added a private method that computes
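The idea behind a ScaSRS-style `takeSample` can be sketched as follows. This is a toy stand-in, not the PR's code: oversample with a Bernoulli trial whose acceptance probability sits slightly above `num/total` (ScaSRS bounds this so one pass almost surely yields enough candidates), then trim to exactly `num`.

```scala
import scala.util.Random

def takeSampleSketch[T](data: Seq[T], num: Int, seed: Long): Seq[T] = {
  val rng = new Random(seed)
  // Oversampling factor of 1.2 is illustrative; ScaSRS derives a tighter
  // bound from the desired failure probability.
  val fraction = math.min(1.0, 1.2 * num.toDouble / data.size)
  val candidates = data.filter(_ => rng.nextDouble() < fraction)
  val pool = if (candidates.size >= num) candidates else data // rare fallback
  rng.shuffle(pool.toList).take(num)
}

val sample = takeSampleSketch(1 to 1000, 10, seed = 42L)
assert(sample.size == 10)
assert(sample.forall(x => x >= 1 && x <= 1000))
```

The win over naive approaches is that a single filter pass replaces repeated resampling when the first draw comes up short.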
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/758#issuecomment-44593628
First merge as a committer :)
Thanks for doing this!
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/866#issuecomment-44594312
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15294/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/914#issuecomment-44594309
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/866#issuecomment-44594310
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/914#issuecomment-44594311
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15293/
---
Github user ankurdave commented on the pull request:
https://github.com/apache/spark/pull/905#issuecomment-44595125
Thanks! Merged.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/905
---
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/916#issuecomment-44595987
Jenkins, test this please.
---
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/916#issuecomment-44595991
Jenkins, add to whitelist.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/916#issuecomment-44596376
Merged build triggered.
---
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/916#discussion_r13209687
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -402,10 +411,11 @@ abstract class RDD[T: ClassTag](
}
if (num
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/916#discussion_r13209716
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -421,6 +431,22 @@ abstract class RDD[T: ClassTag](
Utils.randomizeInPlace(samples,
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/916#discussion_r13209738
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -421,6 +431,22 @@ abstract class RDD[T: ClassTag](
Utils.randomizeInPlace(samples,
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/916#discussion_r13209758
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -421,6 +431,22 @@ abstract class RDD[T: ClassTag](
Utils.randomizeInPlace(samples,
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/916#discussion_r13209770
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -421,6 +431,22 @@ abstract class RDD[T: ClassTag](
Utils.randomizeInPlace(samples,
Github user dorx commented on a diff in the pull request:
https://github.com/apache/spark/pull/916#discussion_r13209778
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -402,10 +411,11 @@ abstract class RDD[T: ClassTag](
}
if (num
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/916#discussion_r13209810
--- Diff:
core/src/main/scala/org/apache/spark/util/random/RandomSampler.scala ---
@@ -70,7 +70,7 @@ class BernoulliSampler[T](lb: Double, ub: Double,
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/916#discussion_r13209847
--- Diff: core/src/test/scala/org/apache/spark/rdd/RDDSuite.scala ---
@@ -494,56 +495,84 @@ class RDDSuite extends FunSuite with
SharedSparkContext {
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/916#discussion_r13209832
--- Diff: core/src/test/scala/org/apache/spark/rdd/RDDSuite.scala ---
@@ -22,6 +22,7 @@ import scala.reflect.ClassTag
import org.scalatest.FunSuite
Github user dorx commented on a diff in the pull request:
https://github.com/apache/spark/pull/916#discussion_r13209893
--- Diff:
core/src/main/scala/org/apache/spark/util/random/RandomSampler.scala ---
@@ -70,7 +70,7 @@ class BernoulliSampler[T](lb: Double, ub: Double,
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/916#discussion_r13210003
--- Diff: core/pom.xml ---
@@ -68,6 +68,10 @@
<artifactId>commons-lang3</artifactId>
</dependency>
<dependency>
+
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/916#discussion_r13210001
--- Diff: core/src/test/scala/org/apache/spark/rdd/RDDSuite.scala ---
@@ -494,56 +495,84 @@ class RDDSuite extends FunSuite with
SharedSparkContext {
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/916#discussion_r13210016
--- Diff: pom.xml ---
@@ -246,6 +246,11 @@
<version>1.5</version>
</dependency>
<dependency>
+
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/916#discussion_r13210082
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -421,6 +431,22 @@ abstract class RDD[T: ClassTag](
Utils.randomizeInPlace(samples,
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/916#discussion_r13210149
--- Diff: core/src/test/scala/org/apache/spark/rdd/RDDSuite.scala ---
@@ -494,56 +495,84 @@ class RDDSuite extends FunSuite with
SharedSparkContext {