Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4688#discussion_r28284858
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala
---
@@ -17,18 +17,22 @@
package org.apache.spark.deploy
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4688#discussion_r28284852
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala
---
@@ -17,18 +17,22 @@
package org.apache.spark.deploy
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/5499#issuecomment-92515941
@liancheng Where should I add the test? At first, I thought
`NullableColumnBuilderSuite` was the place. But why does
`NullableColumnBuilderSuite` not really use those real
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5495#issuecomment-92515969
[Test build #30200 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30200/consoleFull)
for PR 5495 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5493#issuecomment-92515980
[Test build #30201 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30201/consoleFull)
for PR 5493 at commit
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4688#discussion_r28285532
--- Diff:
core/src/main/scala/org/apache/spark/deploy/ExecutorDelegationTokenUpdater.scala
---
@@ -0,0 +1,106 @@
+/*
+ * Licensed to the Apache
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5173#issuecomment-92517129
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/5208#discussion_r28287371
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/joins/SortMergeJoin.scala
---
@@ -0,0 +1,172 @@
+/*
+ * Licensed to the Apache
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5495#issuecomment-92537882
[Test build #30200 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30200/consoleFull)
for PR 5495 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5495#issuecomment-92537908
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5499#issuecomment-92537454
[Test build #30199 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30199/consoleFull)
for PR 5499 at commit
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/4688#issuecomment-92537887
Hi @harishreedharan - could you add some more documentation for this? The
high level architecture here may be hard for users to see. Here are some places
you might
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/5431#discussion_r28290532
--- Diff: mllib/src/main/scala/org/apache/spark/ml/param/params.scala ---
@@ -179,52 +179,96 @@ trait Params extends Identifiable with Serializable {
GitHub user ilganeli opened a pull request:
https://github.com/apache/spark/pull/5501
[SPARK-6703][Core][WIP] Provide a way to discover existing SparkContext's
I've added a `getOrCreate` method to the `SparkContext` companion object
that allows one to either retrieve a previously
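The get-or-create pattern this PR summary describes can be sketched as follows. This is a minimal, hypothetical illustration of the pattern only; `ActiveContext` and its string payload are stand-ins invented here, not the actual `SparkContext` code from the PR:

```scala
// Minimal sketch of a getOrCreate pattern: return the registered instance
// if one exists, otherwise build one and register it.
// ActiveContext is a hypothetical stand-in for SparkContext's companion object.
object ActiveContext {
  private var active: Option[String] = None

  def getOrCreate(create: => String): String = synchronized {
    active.getOrElse {
      val created = create // only evaluated when nothing is registered yet
      active = Some(created)
      created
    }
  }
}
```

The by-name `create` parameter means the (potentially expensive) constructor only runs when no instance exists; a second call returns the first instance unchanged.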
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/5493#issuecomment-92539716
LGTM. Thanks @hlin09 - Merging this
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/5459#issuecomment-92540457
Jenkins, retest this please.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5500#issuecomment-92539907
[Test build #30206 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30206/consoleFull)
for PR 5500 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5501#issuecomment-92547798
[Test build #30207 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30207/consoleFull)
for PR 5501 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5173#issuecomment-92547739
[Test build #30205 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30205/consoleFull)
for PR 5173 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5173#issuecomment-92547751
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user yu-iskw commented on the pull request:
https://github.com/apache/spark/pull/5267#issuecomment-92547494
@freeman-lab, do you know any good evaluations of a hierarchical clustering
algorithm other than the Within Set Sum of Squared Errors (WSSSE)? For example, I know
Silhouette
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/5480#discussion_r28293883
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -414,6 +414,10 @@ class SQLQuerySuite extends QueryTest with
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5431#issuecomment-92549813
[Test build #30212 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30212/consoleFull)
for PR 5431 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5431#issuecomment-92552866
[Test build #30210 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30210/consoleFull)
for PR 5431 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5431#issuecomment-92552876
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/5140#issuecomment-92552856
Ok, LGTM. I'm merging this into master, thanks.
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/5491#issuecomment-92555754
> where the app itself would explicitly keep the log's mod time updated

All I mean here is that `EventLoggingListener` could from time to time just
call
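The suggestion here — having the event logging side periodically refresh the log file's modification time so a reader can tell a live application from a dead one — could look roughly like this. A sketch under stated assumptions: the class name, the file-based log path, and the 60-second interval are all invented for illustration; this is not the code from the PR:

```scala
import java.nio.file.{Files, Path}
import java.nio.file.attribute.FileTime
import java.util.concurrent.{Executors, TimeUnit}

// Periodically "touch" the event log so its mtime shows the app is alive.
// The 60-second default interval is an illustrative assumption.
class LogToucher(logPath: Path, intervalSec: Long = 60) {
  private val scheduler = Executors.newSingleThreadScheduledExecutor()

  // Set the log file's modification time to now.
  def touch(): Unit =
    Files.setLastModifiedTime(logPath, FileTime.fromMillis(System.currentTimeMillis()))

  def start(): Unit =
    scheduler.scheduleAtFixedRate(new Runnable { def run(): Unit = touch() },
      intervalSec, intervalSec, TimeUnit.SECONDS)

  def stop(): Unit = scheduler.shutdown()
}
```

A history-server-style reader could then treat any log whose mtime is older than a few intervals as belonging to a finished (or dead) application.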
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2342#issuecomment-92564304
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2342#issuecomment-92564279
[Test build #30211 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30211/consoleFull)
for PR 2342 at commit
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/4723#discussion_r28296922
--- Diff: python/pyspark/streaming/kafka.py ---
@@ -70,7 +71,103 @@ def createStream(ssc, zkQuorum, groupId, topics,
kafkaParams={},
except
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4688#discussion_r28286363
--- Diff:
core/src/main/scala/org/apache/spark/executor/CoarseGrainedExecutorBackend.scala
---
@@ -74,6 +77,13 @@ private[spark] class
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/731#discussion_r28288122
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -524,52 +524,25 @@ private[master] class Master(
}
/**
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/731#discussion_r28288632
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -582,32 +555,63 @@ private[master] class Master(
pos = (pos
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/731#discussion_r28288621
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -582,32 +555,63 @@ private[master] class Master(
pos = (pos
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5140#discussion_r28289277
--- Diff: sbin/stop-slaves.sh ---
@@ -29,10 +29,4 @@ if [ -e $sbin/../tachyon/bin/tachyon ]; then
$sbin/slaves.sh cd $SPARK_HOME \;
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/5431#discussion_r28290818
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/param/shared/SharedParamsCodeGen.scala
---
@@ -0,0 +1,169 @@
+/*
+ * Licensed to the Apache
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/5459#issuecomment-92541203
I had triggered several runs in a row, waiting for the results. If they all
passed, we could merge this.
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/5431#discussion_r28290778
--- Diff: mllib/src/main/scala/org/apache/spark/ml/param/params.scala ---
@@ -325,7 +379,7 @@ class ParamMap private[ml] (private val map:
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/5431#discussion_r28290695
--- Diff: mllib/src/main/scala/org/apache/spark/ml/param/params.scala ---
@@ -179,52 +179,96 @@ trait Params extends Identifiable with Serializable {
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5459#issuecomment-92541148
[Test build #663 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/663/consoleFull)
for PR 5459 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5501#issuecomment-92541131
[Test build #30207 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30207/consoleFull)
for PR 5501 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5459#issuecomment-92541260
[Test build #664 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/664/consoleFull)
for PR 5459 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5459#issuecomment-92541125
[Test build #30208 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30208/consoleFull)
for PR 5459 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5432#issuecomment-92541143
[Test build #30209 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30209/consoleFull)
for PR 5432 at commit
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/5085#issuecomment-92546252
@vanzin @nishkamravi2 When I try to run `bin/spark-shell --master
local-cluster[2,1,512]`, my executors keep failing, complaining that the Scala
classes are not found.
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/5487#issuecomment-92552525
Thanks, merged to master.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/5487
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/5491#issuecomment-92566099
@vanzin Erhhh, it seems like another solution, but there are a few questions:
1. It adds logic to the event logger (more code and more actions).
2. It increases the
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/5497#discussion_r28286461
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlSerializer2.scala
---
@@ -0,0 +1,378 @@
+/*
+ * Licensed to the Apache
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5383#issuecomment-92520820
[Test build #30202 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30202/consoleFull)
for PR 5383 at commit
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/5208#discussion_r28287180
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/Exchange.scala ---
@@ -33,7 +32,11 @@ import org.apache.spark.util.MutablePair
* ::
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5430#issuecomment-92527921
[Test build #30203 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30203/consoleFull)
for PR 5430 at commit
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/4506#issuecomment-92527942
Thanks! Merged to master.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/5481
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/731#discussion_r28288516
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -582,32 +555,63 @@ private[master] class Master(
pos = (pos
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/5354#issuecomment-92532627
That ship may have sailed, for better or worse. Yes, we have to be careful
about bringing things into core, so I'm glad to see the exclusions, but I think
there are simpler
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/5236
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/5236#issuecomment-92532459
Ok, LGTM. I'm merging this into master. It's great to see things on your
todo list getting ticked off by others in the community. Thanks @ilganeli
@srowen @vanzin!
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/5431#discussion_r28289697
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/classification/Classifier.scala ---
@@ -17,15 +17,16 @@
package
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/5431#discussion_r28289673
--- Diff: mllib/src/test/scala/org/apache/spark/ml/param/ParamsSuite.scala
---
@@ -78,23 +81,42 @@ class ParamsSuite extends FunSuite {
}
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/5459#issuecomment-92540777
Alright, one last time. If this passes, I'd suggest re-enabling the test,
so that if it's still flaky we can look at logs and figure out why. (I worked
with Sean and have
Github user calvinjia commented on the pull request:
https://github.com/apache/spark/pull/5354#issuecomment-92540562
For the first point, the conflict was in `httpclient` which was not
resolved correctly by the parent, leading to this iteration of the PR.
For the second
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/5493
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5431#issuecomment-92544002
[Test build #30210 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30210/consoleFull)
for PR 5431 at commit
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/5085#issuecomment-92548540
Looks like @davies filed a JIRA for this already:
https://issues.apache.org/jira/browse/SPARK-6890
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/5247#issuecomment-92551194
You are correct, thanks for clarifying. Query planning was not the right
phrase, but really my point was that ideally the logic in DataFrame would
handle only ensuring
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/5491#issuecomment-92551078
Okay, I made an observation on my cluster: the thrift server started at
21:01:32 and hadn't done anything since then. Its event log's modification time
is
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/4027#issuecomment-92553493
By the way, just an update on this. @pwendell and I think we should use the
same approach on Mesos as we do on YARN and will do so on standalone mode as in
#731.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5500#issuecomment-92553426
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/5485
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5500#issuecomment-92553404
[Test build #30206 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30206/consoleFull)
for PR 5500 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5459#issuecomment-92556316
[Test build #663 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/663/consoleFull)
for PR 5459 at commit
GitHub user vanzin opened a pull request:
https://github.com/apache/spark/pull/5504
[SPARK-6890] [core] Fix launcher lib work with SPARK_PREPEND_CLASSES.
The fix for SPARK-6406 broke the case where sub-processes are launched
when SPARK_PREPEND_CLASSES is set, because the code
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/5504#issuecomment-92570963
I tested on Linux with and without SPARK_PREPEND_CLASSES. I'll try it on
Windows tomorrow just to make sure I didn't break anything there.
/cc @davies @andrewor14
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/5247#issuecomment-92573420
Yeah, it's good to go directly from `RunnableCommand` to `LocalTableScan`;
I am updating this.
Github user harishreedharan commented on a diff in the pull request:
https://github.com/apache/spark/pull/4688#discussion_r28286340
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/AMDelegationTokenRenewer.scala
---
@@ -0,0 +1,211 @@
+/*
+ * Licensed to the
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/5354#issuecomment-92521916
In this case, `tachyon` includes `thrift` (shaded) and `httpclient`
(unshaded). Adding a direct dependency on `httpclient` shouldn't do anything.
Shading hasn't changed
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/5208#discussion_r28286721
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlan.scala ---
@@ -72,6 +72,12 @@ abstract class SparkPlan extends
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/5481#issuecomment-92527536
Thanks, merged to master.
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/731#discussion_r28288173
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -524,52 +524,25 @@ private[master] class Master(
}
/**
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5173#issuecomment-92531906
[Test build #30205 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30205/consoleFull)
for PR 5173 at commit
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/5431#discussion_r28289791
--- Diff: mllib/src/main/scala/org/apache/spark/ml/param/params.scala ---
@@ -55,58 +49,42 @@ class Param[T] (
*/
def -(value: T): ParamPair[T]
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5499#issuecomment-92537484
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5502#issuecomment-92541715
Can one of the admins verify this patch?
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/5431#discussion_r28290995
--- Diff: mllib/src/main/scala/org/apache/spark/ml/param/params.scala ---
@@ -55,58 +49,42 @@ class Param[T] (
*/
def -(value: T):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5459#issuecomment-92549076
[Test build #665 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/665/consoleFull)
for PR 5459 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/4996
Github user li-zhihui closed the pull request at:
https://github.com/apache/spark/pull/5451
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5431#issuecomment-92565573
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5431#issuecomment-92565550
[Test build #30212 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30212/consoleFull)
for PR 5431 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5504#issuecomment-92572834
[Test build #30214 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30214/consoleFull)
for PR 5504 at commit
Github user rtreffer commented on the pull request:
https://github.com/apache/spark/pull/5498#issuecomment-92521352
Added a ticket: https://issues.apache.org/jira/browse/SPARK-6888
Will add that to the commit after some sleep .zZzZzZ
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/5208#issuecomment-92526609
I think this is getting pretty close! If you run into trouble implementing
some of my suggestions on the `SparkPlan` interfaces for ordering please let me
know and I
Github user ilganeli commented on a diff in the pull request:
https://github.com/apache/spark/pull/5236#discussion_r28288002
--- Diff: docs/running-on-yarn.md ---
@@ -48,9 +48,9 @@ Most of the configs are the same for Spark on YARN as for
other deployment modes
</tr>
<tr>
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5236#issuecomment-92530750
[Test build #30204 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30204/consoleFull)
for PR 5236 at commit
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/5431#discussion_r28290419
--- Diff: mllib/src/main/scala/org/apache/spark/ml/param/params.scala ---
@@ -179,52 +179,96 @@ trait Params extends Identifiable with Serializable {
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/5431#discussion_r28290414
--- Diff: mllib/src/main/scala/org/apache/spark/ml/param/params.scala ---
@@ -179,52 +179,96 @@ trait Params extends Identifiable with Serializable {
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5383#issuecomment-92543221
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
GitHub user mandar2812 opened a pull request:
https://github.com/apache/spark/pull/5503
[MLLIB][WIP] SPARK-4638: Kernels feature for MLlib
1) Class hierarchy for SVM kernels, with unit tests.
2) Entropy-based subset selection for low-rank approximation of large
kernel matrices,
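A kernel class hierarchy of the kind this PR description mentions usually centers on one small trait plus concrete kernel implementations. The sketch below is hypothetical — the names and the RBF parameterization are chosen here for illustration, not taken from PR 5503:

```scala
// A kernel maps two feature vectors to a similarity score.
trait Kernel {
  def evaluate(x: Array[Double], y: Array[Double]): Double
}

// Gaussian (RBF) kernel: k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)).
class RBFKernel(sigma: Double) extends Kernel {
  override def evaluate(x: Array[Double], y: Array[Double]): Double = {
    require(x.length == y.length, "vectors must have the same dimension")
    val sqDist = x.zip(y).map { case (a, b) => (a - b) * (a - b) }.sum
    math.exp(-sqDist / (2.0 * sigma * sigma))
  }
}
```

Identical vectors evaluate to 1.0, and similarity decays toward 0 as the vectors move apart; new kernels slot in by extending the trait.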
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5236#issuecomment-92547091
Test PASSed.
Refer to this link for build results (access rights to CI server needed):