Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/17451
sorry for being slow on this simple fix (my first pyspark/py4j commit :P)
thank you very much for the help all the way through!
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/17451
Jenkins, retest this please.
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/17451
ran `./python/run-tests --modules=pyspark-mllib,pyspark-ml --parallelism=4` and it passed on my local Mac.
Jenkins, retest this please
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/17451
Jenkins, retest this please
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/17451
```
>>> from pyspark.ml.feature import Word2Vec
>>> sent = ("a b " * 100 + "a c " * 10).split(" ")
>>> doc = spark.createDataFrame([(sent,), (sent,)], ["sentence"])
```
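For reference, a minimal sketch of how that snippet might continue to exercise the local lookup (the fit parameters and the exact return shape of `findSynonymsArray` are illustrative assumptions, not output copied from the PR):
```
>>> # fit a tiny model on the toy corpus above, then query synonyms locally
>>> model = Word2Vec(vectorSize=5, seed=42, inputCol="sentence",
...                  outputCol="vectors").fit(doc)
>>> model.findSynonymsArray("a", 2)  # a plain Python list of (word, similarity) tuples
```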
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/17451
`self._java_obj.findSynonymsArray` is a much nicer and more elegant solution
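A sketch of what such a wrapper might look like inside `Word2VecModel` in `python/pyspark/ml/feature.py`, assuming the Scala side exposes `findSynonymsArray` returning an array of `(String, Double)` pairs; py4j surfaces each Scala `Tuple2` as an object whose fields are read by calling `_1()`/`_2()`:
```
def findSynonymsArray(self, word, num):
    # strings pass straight through to the JVM; vectors need conversion first
    if not isinstance(word, str):
        word = _convert_to_vector(word)
    tuples = self._java_obj.findSynonymsArray(word, num)
    # unpack each py4j-wrapped Scala Tuple2 into a plain Python tuple
    return [(t._1(), t._2()) for t in tuples]
```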
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/17451
ping :)
I'll have some time this weekend, and can work on any further
comments/reviews, thanks!
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/17451
Jenkins, retest this please
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/17451
thanks Nick, I also checked the Py4J docs at https://www.py4j.org/advanced_topics.html , and a Scala `Tuple` does not seem to be supported directly:
`Py4J allows elements to be modified (like a real Java
Github user keypointt commented on a diff in the pull request:
https://github.com/apache/spark/pull/17451#discussion_r125339579
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Word2Vec.scala ---
@@ -274,6 +274,29 @@ class Word2VecModel private[ml
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/17451
Oh now I got you, will do. Thank you :)
Sent from my iPhone at Canada Place 🇨🇦
On Sat, Jul 1, 2017 at 4:39 PM Holden Karau wrote:
> *@holdenk* commented
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/17451
@holdenk thank you and Happy Canada 150 to you too, it's today!
Github user keypointt commented on a diff in the pull request:
https://github.com/apache/spark/pull/17451#discussion_r125158303
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Word2Vec.scala ---
@@ -274,6 +274,29 @@ class Word2VecModel private[ml
Github user keypointt commented on a diff in the pull request:
https://github.com/apache/spark/pull/17451#discussion_r125158304
--- Diff: python/pyspark/ml/feature.py ---
@@ -2869,6 +2871,20 @@ def findSynonyms(self, word, num):
word = _convert_to_vector(word
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/17451
@holdenk do you mind having a look? I have some free time this weekend and can work on it
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/17451
ping :) @MLnick
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/17451
Jenkins, retest this please
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/17451
thank you @MLnick
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/17451
a quick question: is there a better way to run specific tests?
For ml.tests, every time I'm running the test on the whole ml module with
`./python/run-tests --parallelism=4 --modules=pyspark-ml`
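One narrower option is to target a single module's doctests directly rather than the full run (a sketch; wiring a `spark` session into the doctest globals this way is an assumption about how the pyspark suites are set up):
```
# run only pyspark.ml.feature's doctests instead of the whole pyspark-ml module
import doctest
from pyspark.sql import SparkSession
import pyspark.ml.feature

spark = SparkSession.builder.master("local[2]").appName("doctests").getOrCreate()
doctest.testmod(pyspark.ml.feature, extraglobs={"spark": spark}, verbose=False)
spark.stop()
```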
Github user keypointt commented on a diff in the pull request:
https://github.com/apache/spark/pull/17451#discussion_r123950209
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Word2Vec.scala ---
@@ -274,6 +274,31 @@ class Word2VecModel private[ml
Github user keypointt commented on a diff in the pull request:
https://github.com/apache/spark/pull/17451#discussion_r123940294
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Word2Vec.scala ---
@@ -274,6 +274,31 @@ class Word2VecModel private[ml
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/17451
no worries Holden, totally understood
thank you for the input and I'll try it out
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/18120
Oh sorry, I missed it, thanks for the heads up.
On Sat, May 27, 2017 at 2:16 PM Yan Facai (颜发才) wrote:
> Hi, @keypointt <https://github.com/keypointt> .
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/18120
Hi Facai, you exposed the API `getMaxDepth()` in `TreeEnsembleModel`, but the rest of the changes are in comments and are not being run.
I'd suggest you put the tests in the test module and run them there.
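For instance, a minimal sketch of a runnable test (the estimator choice, file placement, and fixture style are illustrative assumptions; it only presumes the getter this PR exposes):
```
# hypothetical test, e.g. for python/pyspark/ml/tests.py
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.linalg import Vectors

def test_get_max_depth(spark):
    df = spark.createDataFrame(
        [(1.0, Vectors.dense(0.0)), (0.0, Vectors.dense(1.0))],
        ["label", "features"])
    model = RandomForestClassifier(maxDepth=3, numTrees=2).fit(df)
    assert model.getMaxDepth() == 3  # the getter exposed by this PR
```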
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/17451
I added a helper function `findSynonymsTuple()`,
but in the Python terminal I don't know how to access the tuple, could you
please help here? thanks a lot @holdenk
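For the record, the usual py4j answer (assuming `findSynonymsTuple()` hands back Scala `Tuple2` objects) is that the tuple fields are methods on the Python side, not attributes:
```
>>> pairs = model._java_obj.findSynonymsTuple("a", 2)  # helper added in this PR
>>> pairs[0]._1(), pairs[0]._2()  # call _1()/_2(); pairs[0]._1 alone is only a bound method
```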
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/17451
thank you Holden and Joseph, I'm on it now :)
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/17451
hi @MLnick, I'm stuck trying to add test cases for Python.
I tried the code chunk below in the pyspark terminal via `./bin/pyspark`:
```
from pyspark.ml.feature import Word2Vec
```
GitHub user keypointt opened a pull request:
https://github.com/apache/spark/pull/17451
[SPARK-19866][ML][PySpark] Add local version of Word2Vec findSynonyms for
spark.ml: Python API
https://issues.apache.org/jira/browse/SPARK-19866
## What changes were proposed in this pull request?
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/17207
ping @felixcheung
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/17207
Jenkins, retest this please
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/17207
Jenkins retest it please
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/17207
sorry for being vague, I updated the PR description just now
yes, it's for the R model summary: the R wrapper returns `maxDepth` to the R model,
as mentioned in the comment at
GitHub user keypointt opened a pull request:
https://github.com/apache/spark/pull/17207
[SPARK-19282][ML][SparkR] Expose model param "maxDepth" for R MLlib tree
ensemble
## What changes were proposed in this pull request?
Expose model param `maxDepth` for R
Github user keypointt commented on a diff in the pull request:
https://github.com/apache/spark/pull/16643#discussion_r97123399
--- Diff: core/src/main/resources/org/apache/spark/ui/static/spark-dag-viz.js ---
@@ -363,6 +364,27 @@ function resizeSvg(svg
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/16643
sorry...tricky word "interpret" for a non-native English speaker...
Github user keypointt commented on a diff in the pull request:
https://github.com/apache/spark/pull/16643#discussion_r97097843
--- Diff: core/src/main/resources/org/apache/spark/ui/static/spark-dag-viz.js ---
@@ -363,6 +364,27 @@ function resizeSvg(svg
Github user keypointt closed the pull request at:
https://github.com/apache/spark/pull/15353
Github user keypointt commented on a diff in the pull request:
https://github.com/apache/spark/pull/16643#discussion_r96896268
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala ---
@@ -19,7 +19,7 @@ package org.apache.spark.ui.jobs
import
GitHub user keypointt opened a pull request:
https://github.com/apache/spark/pull/16643
[SPARK-17724][Streaming][WebUI] Unevaluated new lines in tooltip in DAG
Visualization of a job
https://issues.apache.org/jira/browse/SPARK-17724
## What changes were proposed in this pull request?
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/12195
thank you @jkbradley for the detailed explanation :)
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/12195
I've also updated the description of this PR
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/12195
I'll update the PR description if it looks good now
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/12195
The main reason for deleting those, even though they are not duplicates, is the name; the actual code differs:
* StreamingLinearRegression
https://github.com/apache/spark/blob/master
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/15353
sorry, I was on vacation the last 2 weeks. Working on it now; please allow me some time to get it done :)
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/15353
Got it, thanks Andrew
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/15353
@felixcheung nice catch, you are right and a more general fix is needed
then, working on it
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/15353
I just gave it a quick manual visual check and it works for me on both Chrome and Safari; @andrewor14 could you explain more about how it's not working?
And I'll dig more to find the cause.
GitHub user keypointt opened a pull request:
https://github.com/apache/spark/pull/15353
[SPARK-17724][WebUI][Streaming] Unevaluated new lines in tooltip in DAG
Visualization of a job
https://issues.apache.org/jira/browse/SPARK-17724
## What changes were proposed in this pull request?
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/15191
oh I see...sorry...I'll close this one
Github user keypointt closed the pull request at:
https://github.com/apache/spark/pull/15191
GitHub user keypointt opened a pull request:
https://github.com/apache/spark/pull/15191
[SPARK-17628][Streaming][Examples] change name "StreamingExamples" to be
"StreamingExamplesUtils", more descriptive
https://issues.apache.org/jira/browse/SPARK-17628
Github user keypointt commented on a diff in the pull request:
https://github.com/apache/spark/pull/14467#discussion_r79881217
--- Diff: core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala ---
@@ -880,42 +883,70 @@ private class PythonAccumulatorParam(@transient
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/15121
I just noticed that at http://spark.apache.org/research.html
> Spark offers an abstraction called resilient distributed datasets (RDDs)
to support these applications efficiently
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/15121
Just to be consistent with the URL at http://spark.apache.org/research.html
> Resilient Distributed Datasets: A Fault-Tolerant Abstraction for
In-Memory Cluster Computing. Matei Zaharia
GitHub user keypointt opened a pull request:
https://github.com/apache/spark/pull/15121
[SPARK-17567] Use valid url to Spark RDD paper
https://issues.apache.org/jira/browse/SPARK-17567
## What changes were proposed in this pull request?
Documentation
(http
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/14993
hi @yanboliang it looks good to me, but I don't have the right to merge to
master; maybe you have to ping the other reviewers :p
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/15015
how about adding a type check? `expect_equal(typeof(summary$weights), "list")`
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/15015
There is already a test `expect_equal(length(summary$weights), 64)`
https://github.com/apache/spark/blob/master/R/pkg/inst/tests/testthat/test_mllib.R#L371
is it enough, or should we
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/14993
I vote for appending a random_sequence, and I believe there will almost
certainly be no collision for this random_sequence
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/15015
hi @felixcheung this is just a minor patch to SPARK-16445, where the
`@return` description is not accurate and is fixed here. Do you mind having a look?
Thanks a lot :)
GitHub user keypointt opened a pull request:
https://github.com/apache/spark/pull/15015
[SPARK-16445][MLlib][SparkR] Fix @return description for sparkR mlp
summary() method
## What changes were proposed in this pull request?
Fix summary() method's `@return` description
Github user keypointt commented on a diff in the pull request:
https://github.com/apache/spark/pull/13584#discussion_r77273428
--- Diff: mllib/src/main/scala/org/apache/spark/ml/r/RWrapperUtils.scala ---
@@ -35,13 +35,37 @@ object RWrapperUtils extends Logging
Github user keypointt commented on a diff in the pull request:
https://github.com/apache/spark/pull/13584#discussion_r77272735
--- Diff: mllib/src/main/scala/org/apache/spark/ml/r/RWrapperUtils.scala ---
@@ -35,13 +35,37 @@ object RWrapperUtils extends Logging
Github user keypointt commented on a diff in the pull request:
https://github.com/apache/spark/pull/13584#discussion_r77271690
--- Diff: mllib/src/main/scala/org/apache/spark/ml/r/RWrapperUtils.scala ---
@@ -35,13 +35,37 @@ object RWrapperUtils extends Logging
Github user keypointt commented on a diff in the pull request:
https://github.com/apache/spark/pull/13584#discussion_r77098961
--- Diff: mllib/src/main/scala/org/apache/spark/ml/r/RWrapperUtils.scala ---
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user keypointt commented on a diff in the pull request:
https://github.com/apache/spark/pull/14856#discussion_r76842594
--- Diff: R/pkg/inst/tests/testthat/test_mllib.R ---
@@ -99,6 +99,10 @@ test_that("spark.glm summary", {
expect_match(out[2], "Dev
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/14836
Thanks Sean, I got what you mean. I'll avoid such PRs in the future, thanks
a lot for the tips.
Github user keypointt commented on a diff in the pull request:
https://github.com/apache/spark/pull/14836#discussion_r76646997
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala ---
@@ -1452,7 +1452,6 @@ class DataFrameSuite extends QueryTest with
Github user keypointt commented on a diff in the pull request:
https://github.com/apache/spark/pull/14856#discussion_r76552881
--- Diff: R/pkg/R/mllib.R ---
@@ -171,7 +172,8 @@ predict_internal <- function(object, newData) {
#' @note spark.glm since 2.0.0
#
Github user keypointt commented on a diff in the pull request:
https://github.com/apache/spark/pull/13584#discussion_r76543182
--- Diff: mllib/src/test/scala/org/apache/spark/ml/feature/RFormulaSuite.scala ---
@@ -54,9 +54,6 @@ class RFormulaSuite extends SparkFunSuite with
Github user keypointt commented on a diff in the pull request:
https://github.com/apache/spark/pull/13584#discussion_r76543133
--- Diff: mllib/src/test/scala/org/apache/spark/ml/r/RWrapperUtilsSuite.scala ---
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software
GitHub user keypointt opened a pull request:
https://github.com/apache/spark/pull/14856
[SPARK-17241][SparkR][MLlib] SparkR spark.glm should have configurable
regularization parameter
https://issues.apache.org/jira/browse/SPARK-17241
## What changes were proposed in this pull request?
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/14848
sure I'll change SQLQuerySuite too
GitHub user keypointt opened a pull request:
https://github.com/apache/spark/pull/14848
[SPARK-17276][Core][Test] Stop env params output on Jenkins job page
https://issues.apache.org/jira/browse/SPARK-17276
## What changes were proposed in this pull request?
When
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/14836
Jenkins retest this please
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/14836
I just noticed that there are many other places where the 'PropSpec' style is
used `with Matchers`, while I guess the Spark project mainly uses the `FunSuite`
style
Github user keypointt commented on a diff in the pull request:
https://github.com/apache/spark/pull/14836#discussion_r76502969
--- Diff: core/src/test/scala/org/apache/spark/AccumulatorSuite.scala ---
@@ -171,7 +171,7 @@ class AccumulatorSuite extends SparkFunSuite with Matchers
Github user keypointt commented on a diff in the pull request:
https://github.com/apache/spark/pull/14836#discussion_r76502012
--- Diff: core/src/test/scala/org/apache/spark/AccumulatorSuite.scala ---
@@ -100,7 +100,7 @@ class AccumulatorSuite extends SparkFunSuite with Matchers
Github user keypointt commented on a diff in the pull request:
https://github.com/apache/spark/pull/14836#discussion_r76491325
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -1238,7 +1238,7 @@ class SparkContext(config: SparkConf) extends Logging with
GitHub user keypointt opened a pull request:
https://github.com/apache/spark/pull/14836
[MINOR][MLlib][SQL] Clean up unused variables and unused import
## What changes were proposed in this pull request?
Clean up unused variables and unused import statements, unnecessary
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/13584
sure I'll try to scan through all the mllib algorithms
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/13584
I'm not sure, I guess this one is skipped and not important anymore?
I can close it if it's not going to be merged
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/14447
hi @felixcheung anything else I should do here?
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/14447
I did test locally this time, thanks Felix
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/14447
hi @felixcheung , I'm not sure what this means or how to get rid of it:
```
* checking CRAN incoming feasibility ...Warning: bad markup (extra space?)
at spark.mlp.Rd:68:77
```
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/14447
hi @felixcheung, after merging master in, I found that `spark.kmeans`,
`spark.naiveBayes`, `spark.survreg`, `spark.isoreg` don't have the `#' @param
... Additional parameters ...
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/14447
sure, will rebase
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/14447
@felixcheung sure, no problem
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/14489
got it, thanks @shivaram
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/14489
hi @shivaram what is the warning message? I can fix it here
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/14447
hi @felixcheung I got the lint-r error below when running it on my laptop, after merging with master:
```
Warning message:
In readLines(filename) :
incomplete final line found on
```
Github user keypointt commented on a diff in the pull request:
https://github.com/apache/spark/pull/14681#discussion_r75167761
--- Diff: streaming/src/test/scala/org/apache/spark/streaming/ui/StreamingJobProgressListenerSuite.scala ---
@@ -68,6 +68,7 @@ class
GitHub user keypointt opened a pull request:
https://github.com/apache/spark/pull/14681
[SPARK-17038][Streaming] fix metrics retrieval source of 'lastReceivedBatch'
https://issues.apache.org/jira/browse/SPARK-17038
## What changes were proposed in this pull request?
Github user keypointt commented on a diff in the pull request:
https://github.com/apache/spark/pull/14447#discussion_r74871845
--- Diff: R/pkg/R/mllib.R ---
@@ -414,6 +421,94 @@ setMethod("predict", signature(object = "KMeansModel"),
return(d
Github user keypointt commented on a diff in the pull request:
https://github.com/apache/spark/pull/14447#discussion_r74710214
--- Diff: R/pkg/R/mllib.R ---
@@ -414,6 +421,94 @@ setMethod("predict", signature(object = "KMeansModel"),
return(d
Github user keypointt commented on a diff in the pull request:
https://github.com/apache/spark/pull/14447#discussion_r74709917
--- Diff: R/pkg/R/mllib.R ---
@@ -414,6 +421,94 @@ setMethod("predict", signature(object = "KMeansModel"),
return(d
Github user keypointt commented on a diff in the pull request:
https://github.com/apache/spark/pull/14447#discussion_r74675584
--- Diff: R/pkg/R/mllib.R ---
@@ -433,15 +433,15 @@ setMethod("predict", signature(object = "KMeansModel"),
#'
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/14609
hi @srowen , the 2 builds above are using exactly the same code, but one failed and
the other passed. I checked the failure reason and it's
`org.apache.spark.sql.hive.HiveSparkSubmitSuite`, different
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/14609
Thanks a lot Sean. Actually, I ran these tests on my laptop and they all
pass. Also, it built several times on Spark Jenkins, and every time it
fails at different suites (like random failure
GitHub user keypointt opened a pull request:
https://github.com/apache/spark/pull/14610
[SPARK-17026][MLlib][Tests] remove identical values subtraction
https://issues.apache.org/jira/browse/SPARK-17026
## What changes were proposed in this pull request?
I'