Github user BoleynSu commented on a diff in the pull request:
https://github.com/apache/spark/pull/18836#discussion_r131316095
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/joins/SortMergeJoinExec.scala
---
@@ -82,7 +82,7 @@ case class SortMergeJoinExec(
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r131315665
--- Diff:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLCLIDriver.scala
---
@@ -157,12 +168,8 @@ private[hive]
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r131314418
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveUtils.scala
---
@@ -404,6 +404,13 @@ private[spark] object HiveUtils extends Logging {
Github user nchammas commented on the issue:
https://github.com/apache/spark/pull/18820
> I don't think we should allow user to change field nullability while
doing replace.
Why not? As long as we correctly update the schema from non-nullable to
nullable, it seems OK to me.
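The rule being debated above can be sketched in plain Python. This is a hypothetical illustration, not Spark's actual API: `Field` and `can_replace` are invented names modeling the idea that widening nullability (non-nullable to nullable) is safe, because every existing value still satisfies the new schema, while narrowing is not, since existing nulls would violate it.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Field:
    """Minimal stand-in for a schema field (hypothetical, not Spark's StructField)."""
    name: str
    dtype: str
    nullable: bool


def can_replace(old: Field, new: Field) -> bool:
    """Allow a replacement only if it can never invalidate existing data."""
    if old.name != new.name or old.dtype != new.dtype:
        return False
    # Widening (False -> True) is safe; narrowing (True -> False) is rejected,
    # because rows containing nulls would no longer satisfy the schema.
    return new.nullable or not old.nullable


print(can_replace(Field("a", "int", False), Field("a", "int", True)))   # widening: True
print(can_replace(Field("a", "int", True), Field("a", "int", False)))   # narrowing: False
```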
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18820
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80229/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18820
Merged build finished. Test PASSed.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user raajay commented on the issue:
https://github.com/apache/spark/pull/18690
I understand. My previous comment was just a clarification of your
question: "I'm not sure how does this code work in your changes?". I will close
this PR. The JIRA is already closed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18820
**[Test build #80229 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80229/testReport)**
for PR 18820 at commit
Github user raajay closed the pull request at:
https://github.com/apache/spark/pull/18690
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18742
**[Test build #80234 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80234/testReport)**
for PR 18742 at commit
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/18831#discussion_r131313635
--- Diff: R/pkg/tests/fulltests/test_mllib_regression.R ---
@@ -173,6 +173,14 @@ test_that("spark.glm summary", {
expect_equal(stats$df.residual,
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/18829
I understand what you mean. These metrics URLs do not need to be displayed
in the Web UI.
Important metrics, such as 'aliveWorkers', can instead be surfaced in the
Web UI header.
---
Github user ajbozarth commented on the issue:
https://github.com/apache/spark/pull/18829
I'm not personally saying these metrics need to be in the Web UI, I'm just
saying that if you think they're important enough to surface this way then they
should be important enough to you to
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/18829
I accept your comments, thank you.
Surfacing these important metrics in the Web UI is not a small workload,
but I will try to do it.
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131311220
--- Diff: sql/core/src/test/resources/sql-tests/inputs/group-by-ordinal.sql
---
@@ -52,8 +52,19 @@ select count(a), a from (select 1 as a) tmp group by 2
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131311047
--- Diff: sql/core/src/test/resources/sql-tests/inputs/group-by-ordinal.sql
---
@@ -52,8 +52,19 @@ select count(a), a from (select 1 as a) tmp group by 2
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18813
ping @cloud-fan Could you take a look at this when you have time? Thanks.
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131309985
--- Diff: sql/core/src/test/resources/sql-tests/inputs/group-by-ordinal.sql
---
@@ -52,8 +52,19 @@ select count(a), a from (select 1 as a) tmp group by 2
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18840
**[Test build #80233 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80233/testReport)**
for PR 18840 at commit
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/18840#discussion_r131309601
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/EventTimeWatermarkSuite.scala
---
@@ -391,6 +391,30 @@ class EventTimeWatermarkSuite
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131309398
--- Diff: sql/core/src/test/resources/sql-tests/inputs/group-by-ordinal.sql
---
@@ -52,8 +52,19 @@ select count(a), a from (select 1 as a) tmp group by 2
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/18840
ok to test
---
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131309163
--- Diff: sql/core/src/test/resources/sql-tests/inputs/group-by-ordinal.sql
---
@@ -52,8 +52,19 @@ select count(a), a from (select 1 as a) tmp group by 2
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131309076
--- Diff: sql/core/src/test/resources/sql-tests/inputs/group-by-ordinal.sql
---
@@ -52,8 +52,19 @@ select count(a), a from (select 1 as a) tmp group by 2
Github user facaiy commented on the issue:
https://github.com/apache/spark/pull/18764
Test failures in pyspark.ml.tests with python2.6, but I don't have the
environment.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18742
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80230/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18742
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18742
**[Test build #80230 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80230/testReport)**
for PR 18742 at commit
Github user ueshin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18664#discussion_r131308903
--- Diff: python/pyspark/sql/tests.py ---
@@ -3036,6 +3052,9 @@ def test_toPandas_arrow_toggle(self):
pdf = df.toPandas()
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131308502
--- Diff: sql/core/src/test/resources/sql-tests/inputs/group-by-ordinal.sql
---
@@ -52,8 +52,19 @@ select count(a), a from (select 1 as a) tmp group by
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131308421
--- Diff: sql/core/src/test/resources/sql-tests/inputs/group-by-ordinal.sql
---
@@ -52,8 +52,19 @@ select count(a), a from (select 1 as a) tmp group by
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18668
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18668
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80227/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18668
**[Test build #80227 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80227/testReport)**
for PR 18668 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18840
Can one of the admins verify this patch?
---
GitHub user joseph-torres opened a pull request:
https://github.com/apache/spark/pull/18840
[SPARK-21565] Propagate metadata in attribute replacement.
## What changes were proposed in this pull request?
Propagate metadata in attribute replacement during streaming execution.
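The bug class this PR addresses can be illustrated with a minimal Python sketch. This is a hypothetical model, not Spark's Catalyst code: `Attribute` and `replace_attribute` are invented names showing the idea that when an attribute is replaced during planning, its metadata (for example event-time watermark information) must be copied onto the replacement, or downstream operators silently lose it.

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Attribute:
    """Hypothetical stand-in for a plan attribute carrying a metadata map."""
    name: str
    metadata: dict = field(default_factory=dict)


def replace_attribute(old: Attribute, new_name: str) -> Attribute:
    # Propagate the metadata onto the replacement instead of dropping it,
    # which is the failure mode the PR title describes.
    return Attribute(new_name, dict(old.metadata))


ts = Attribute("timestamp", {"spark.watermarkDelayMs": 10_000})
renamed = replace_attribute(ts, "eventTime")
print(renamed.name)      # the attribute was replaced
print(renamed.metadata)  # ...but its metadata survived
```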
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18779
**[Test build #80232 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80232/testReport)**
for PR 18779 at commit
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18779
LGTM
---
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18779
ok, thanks @viirya
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18839
**[Test build #80231 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80231/testReport)**
for PR 18839 at commit
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18839
Some tests on the string form of the plan might fail. Let's see ...
---
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/18839
[SPARK-21634][SQL] Change OneRowRelation from a case object to case class
## What changes were proposed in this pull request?
OneRowRelation is the only plan that is a case object, which causes
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18779
@10110346 As using `resolveOperators` can fix the whole bug, let's do that
and simplify the whole change. Sorry for the confusion.
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131305760
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/SubstituteUnresolvedOrdinals.scala
---
@@ -1,54 +0,0 @@
-/*
- *
Github user ajbozarth commented on the issue:
https://github.com/apache/spark/pull/18829
I think if we really want these metrics in the UI, we should look at adding
them to the UI in some way rather than as a link to a JSON dump. I am not a fan
of JSON dumps as part of a UI in general, I
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18742
**[Test build #80230 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80230/testReport)**
for PR 18742 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18742
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80226/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18742
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18742
**[Test build #80226 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80226/testReport)**
for PR 18742 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18820
**[Test build #80229 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80229/testReport)**
for PR 18820 at commit
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18820
ok to test
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18746
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80228/
Test PASSed.
---
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/18829
The HBase Web UI exposes metrics, so the Spark Web UI should offer the same
function. This is just my opinion.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18746
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18746
**[Test build #80228 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80228/testReport)**
for PR 18746 at commit
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/18779
Do we need to backport this fix to branch-2.2? I think the answer depends
on the backport decision. If not, I'm with your suggestion (keep this issue as a
blocker for branch-2.3).
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17980
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17980
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80224/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17980
**[Test build #80224 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80224/testReport)**
for PR 17980 at commit
Github user mpjlu commented on the issue:
https://github.com/apache/spark/pull/18832
Thanks @sethah .
I strongly think we should update the comment, or just delete it as this
PR does.
Another reason: there are three kinds of features: categorical, ordered
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131300093
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/SubstituteUnresolvedOrdinals.scala
---
@@ -1,54 +0,0 @@
-/*
- *
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18395
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80222/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18395
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18746
**[Test build #80228 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80228/testReport)**
for PR 18746 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18395
**[Test build #80222 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80222/testReport)**
for PR 18395 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18640
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80221/
Test PASSed.
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131299271
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/SubstituteUnresolvedOrdinals.scala
---
@@ -1,54 +0,0 @@
-/*
- *
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18640
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18640
**[Test build #80221 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80221/testReport)**
for PR 18640 at commit
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/18815
Ok. So the master, worker, and executor logs can be displayed in the Web UI?
---
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131298714
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/SubstituteUnresolvedOrdinals.scala
---
@@ -1,54 +0,0 @@
-/*
- *
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18668
**[Test build #80227 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80227/testReport)**
for PR 18668 at commit
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18690
So I think if you want to connect your custom sink to the Spark metrics
system, then you should at least follow what Spark and the codahale metrics
library do. Adding a feature to Spark specifically
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18742
**[Test build #80226 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80226/testReport)**
for PR 18742 at commit
Github user ajbozarth commented on the issue:
https://github.com/apache/spark/pull/18815
Ok, then I'm really confused: if the logs we're talking about can already be
viewed in the UI, why do we need to display their location on the system?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18460
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18460
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80223/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18838
Can one of the admins verify this patch?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18460
**[Test build #80223 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80223/testReport)**
for PR 18460 at commit
GitHub user liu-zhaokun opened a pull request:
https://github.com/apache/spark/pull/18838
[SPARK-21632] There is no need to make attempts for createDirectory if the
dir had existed
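The PR itself targets the Scala-side directory-creation utility, but the idea it proposes is easy to illustrate in Python: creating a directory should succeed without any retry attempts when the directory already exists. This sketch only demonstrates that idempotent pattern; it is not the PR's code.

```python
import os
import tempfile

# Build a path under a fresh temp directory so the example is self-contained.
path = os.path.join(tempfile.mkdtemp(), "spark-local-dir")

os.makedirs(path, exist_ok=True)  # first call actually creates the directory
os.makedirs(path, exist_ok=True)  # second call is a no-op: no error, no retries

print(os.path.isdir(path))  # True
```

With `exist_ok=True`, the "already exists" case is handled directly instead of being treated as a failure that triggers further creation attempts.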
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18829
I tend to agree with @ajbozarth; since we already have APIs to access the
metrics dump in JSON format, this doesn't look necessary. Also, directly
displaying such a JSON dump on the
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18779
Well, maybe we should revisit this after #17770 gets merged, because after
that we won't go through analyzed plans anymore.
At that time, we can simply solve all the issues by making
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131294567
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/SubstituteUnresolvedOrdinals.scala
---
@@ -1,54 +0,0 @@
-/*
- *
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/18815
Why can the executor log be displayed in the Web UI?
I think the master, worker, and executor logs are the same in this respect;
they can all be displayed in the Web UI.
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18746#discussion_r131293810
--- Diff: python/pyspark/ml/tests.py ---
@@ -1957,6 +1988,40 @@ def test_chisquaretest(self):
self.assertTrue(all(field in fieldNames for
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/18829
The JSON metrics information is very complete; the UI currently cannot show
a lot of it, but this information is also very important for application
developers.
I
Github user jkbradley commented on the issue:
https://github.com/apache/spark/pull/18746
@ajaysaini725 Is there a JIRA for this PR? Please tag this PR in the title.
---
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131292299
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/SubstituteUnresolvedOrdinals.scala
---
@@ -1,54 +0,0 @@
-/*
- *
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18742#discussion_r131284503
--- Diff: python/pyspark/ml/tests.py ---
@@ -1957,6 +1964,46 @@ def test_chisquaretest(self):
self.assertTrue(all(field in fieldNames for
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18742#discussion_r131285744
--- Diff: mllib/src/main/scala/org/apache/spark/ml/util/ReadWrite.scala ---
@@ -471,3 +471,24 @@ private[ml] object MetaAlgorithmReadWrite {
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18742#discussion_r131286314
--- Diff: python/pyspark/ml/util.py ---
@@ -61,20 +66,74 @@ def _randomUID(cls):
@inherit_doc
-class MLWriter(object):
+class
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18742#discussion_r131287820
--- Diff: python/pyspark/ml/util.py ---
@@ -237,6 +300,13 @@ def _load_java_obj(cls, clazz):
java_obj = getattr(java_obj, name)
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18742#discussion_r131288629
--- Diff: python/pyspark/ml/util.py ---
@@ -237,6 +300,13 @@ def _load_java_obj(cls, clazz):
java_obj = getattr(java_obj, name)
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18742#discussion_r131288896
--- Diff: python/pyspark/ml/util.py ---
@@ -283,3 +353,143 @@ def numFeatures(self):
Returns the number of features the model was trained on.
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18742#discussion_r131288360
--- Diff: python/pyspark/ml/util.py ---
@@ -283,3 +353,143 @@ def numFeatures(self):
Returns the number of features the model was trained on.
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18742#discussion_r131287028
--- Diff: python/pyspark/ml/util.py ---
@@ -61,20 +66,74 @@ def _randomUID(cls):
@inherit_doc
-class MLWriter(object):
+class
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18742#discussion_r131288786
--- Diff: python/pyspark/ml/util.py ---
@@ -283,3 +353,143 @@ def numFeatures(self):
Returns the number of features the model was trained on.
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18742#discussion_r131288351
--- Diff: python/pyspark/ml/util.py ---
@@ -283,3 +353,143 @@ def numFeatures(self):
Returns the number of features the model was trained on.
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18742#discussion_r131290424
--- Diff: python/pyspark/ml/util.py ---
@@ -283,3 +353,143 @@ def numFeatures(self):
Returns the number of features the model was trained on.
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18742#discussion_r131288910
--- Diff: python/pyspark/ml/util.py ---
@@ -283,3 +353,143 @@ def numFeatures(self):
Returns the number of features the model was trained on.