Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18659#discussion_r139366789
--- Diff: python/pyspark/sql/functions.py ---
@@ -2142,18 +2159,26 @@ def udf(f=None, returnType=StringType()):
| 8| JOHN DOE|
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19145
Hi @klion26, sorry for the late response. Let's understand the problem first: would you please describe your problem in detail and explain how to reproduce it?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19259
Merged build finished. Test PASSed.
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19259
**[Test build #81870 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81870/testReport)**
for PR 19259 at commit
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/18704#discussion_r139364602
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/compression/compressionSchemes.scala
---
@@ -169,6 +267,125 @@
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/19264
---
Github user ConeyLiu commented on the issue:
https://github.com/apache/spark/pull/19135
Hi @cloud-fan, thanks for reviewing. The code has been updated; please take a look.
---
Github user michaelmior commented on the issue:
https://github.com/apache/spark/pull/19263
Whoops. Sorry about that. I opened the PR via the CLI so I didn't see the
pointer on the web interface. I should have known better though. Updated!
---
GitHub user cloud-fan opened a pull request:
https://github.com/apache/spark/pull/19265
[SPARK-22047][flaky test] HiveExternalCatalogVersionsSuite
## What changes were proposed in this pull request?
This PR tries to download Spark for each test run, to make sure each test
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18659#discussion_r139365685
--- Diff: python/pyspark/sql/functions.py ---
@@ -2142,18 +2159,26 @@ def udf(f=None, returnType=StringType()):
| 8| JOHN DOE|
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19230
**[Test build #81872 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81872/testReport)**
for PR 19230 at commit
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19210
LGTM, let me retest this again.
---
Github user WeichenXu123 commented on the issue:
https://github.com/apache/spark/pull/19152
@marktab You should close merged PR. Thanks!
---
Github user maver1ck commented on the issue:
https://github.com/apache/spark/pull/19234
I checked with some samples, and code with floats can trigger errors.
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/19135
retest this please
---
Github user klion26 commented on the issue:
https://github.com/apache/spark/pull/19145
Hi @jerryshao, thank you for your reply.
# Problem
The problem is that long-running jobs on **YARN with HA** will end up requesting more executors than needed.
# How to
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18853
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18853
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81871/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19219
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81874/
Test FAILed.
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19263
@michaelmior would you please follow the instructions
(https://spark.apache.org/contributing.html) to update the PR title and create a
corresponding JIRA, thanks!
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19160
@zsxwing @jiangxb1987 would you please help to review this PR when you have
time, thanks a lot.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19230
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19230
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81872/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19259
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81870/
Test PASSed.
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19210
Jenkins, retest this please.
---
Github user original-brownbear commented on the issue:
https://github.com/apache/spark/pull/19254
@srowen rebased against `master` to pick up the test ignore
(https://github.com/apache/spark/commit/894a7561de2c2ff01fe7fcc5268378161e9e5643);
should be good to retest now :)
---
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18853
retest this please.
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/19230
retest this please
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/19265
ok to test
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19219
**[Test build #81874 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81874/testReport)**
for PR 19219 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18853
**[Test build #81871 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81871/testReport)**
for PR 18853 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19219
Merged build finished. Test FAILed.
---
Github user fjh100456 commented on the issue:
https://github.com/apache/spark/pull/19218
Encounter two problems:
1. I tried to fix it in the order of 'compression' > 'parquet.compression'
> 'spark.sql.parquet.compression.codec', but found 'parquet.compression' may
come from a
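The precedence order described above could be sketched like this (a hypothetical helper, not the actual Spark implementation; only the option and conf names are taken from the comment, and "snappy" is assumed as the final fallback since it is Spark's documented default Parquet codec):

```scala
// Hypothetical sketch of the precedence:
// 'compression' > 'parquet.compression' > 'spark.sql.parquet.compression.codec'.
def resolveParquetCompression(
    tableOptions: Map[String, String],
    sessionConf: Map[String, String]): String = {
  tableOptions.get("compression")                              // highest priority
    .orElse(tableOptions.get("parquet.compression"))           // Parquet option
    .orElse(sessionConf.get("spark.sql.parquet.compression.codec")) // session conf
    .getOrElse("snappy")                                       // assumed default
}
```

With this sketch, a table-level `compression` option always wins, regardless of what the session conf says.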
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19135
**[Test build #81878 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81878/testReport)**
for PR 19135 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19210
**[Test build #81875 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81875/testReport)**
for PR 19210 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15544
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81881/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15544
**[Test build #81881 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81881/testReport)**
for PR 15544 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15544
Merged build finished. Test FAILed.
---
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/19266
I thought that if this limit depends heavily on the JVM implementation, it would be
better to put the limit as a global constant somewhere (e.g., `ARRAY_INT_MAX` in
`spark.util.Utils` or other places)? As another
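The proposal above could look like the following minimal sketch (the object and method names are hypothetical, not actual Spark code; the margin of 8 elements is an assumption mirroring the headroom JVMs commonly reserve for array headers):

```scala
// Hypothetical sketch of the proposed global constant; not actual Spark code.
// Most JVMs refuse to allocate arrays all the way up to Int.MaxValue, so a
// small safety margin is subtracted.
object ArrayLimits {
  val MAX_ARRAY_LENGTH: Int = Int.MaxValue - 8

  // Validate a requested allocation size against the shared limit.
  def checkedLength(requested: Long): Int = {
    require(requested >= 0 && requested <= MAX_ARRAY_LENGTH,
      s"Cannot allocate an array of $requested elements (limit: $MAX_ARRAY_LENGTH)")
    requested.toInt
  }
}
```

Centralizing the constant this way means every allocation site fails with the same message instead of each caller hard-coding its own JVM-dependent limit.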
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18924
**[Test build #81885 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81885/testReport)**
for PR 18924 at commit
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18853#discussion_r139465565
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -352,11 +374,16 @@ object TypeCoercion {
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/19256
It looks good, but the actual code should be very simple if you write it the
Scala way
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19210
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81875/
Test PASSed.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/19230
Thanks! Merged to master.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19250
**[Test build #81888 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81888/testReport)**
for PR 19250 at commit
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18887#discussion_r139467580
--- Diff:
core/src/test/scala/org/apache/spark/deploy/history/FsHistoryProviderSuite.scala
---
@@ -624,7 +639,9 @@ class FsHistoryProviderSuite extends
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18887#discussion_r139467662
--- Diff:
core/src/test/scala/org/apache/spark/deploy/history/HistoryServerSuite.scala ---
@@ -74,6 +76,7 @@ class HistoryServerSuite extends SparkFunSuite
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19222
**[Test build #81889 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81889/testReport)**
for PR 19222 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19264
**[Test build #81873 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81873/testReport)**
for PR 19264 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19264
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81873/
Test PASSed.
---
Github user WeichenXu123 commented on the issue:
https://github.com/apache/spark/pull/17819
ok to test.
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19145
Did you enable RM or NM recovery? Can you please clarify?
Normally, if we assume there are 2 containers running on this NM, after
10 minutes the RM will detect the failure of the NM and
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/19234
Seems fine to me too as is. @maver1ck, I think you could take out `[WIP]`
and let it be merged.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19265
**[Test build #81880 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81880/testReport)**
for PR 19265 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19265
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19265
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81879/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18853
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81876/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18853
Merged build finished. Test PASSed.
---
Github user a10y commented on a diff in the pull request:
https://github.com/apache/spark/pull/18945#discussion_r139450187
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -1810,17 +1810,20 @@ def _to_scala_map(sc, jm):
return sc._jvm.PythonUtils.toScalaMap(jm)
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19254
**[Test build #3925 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3925/testReport)**
for PR 19254 at commit
Github user WeichenXu123 commented on the issue:
https://github.com/apache/spark/pull/19229
That doesn't look like the reason; maybe the issue is somewhere else. Let me run
the test later. Thanks!
But there are some small issues in the test.
Don't include the data-generation time:
```
val start =
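The benchmarking point above can be sketched as follows (hypothetical code, not from the PR): materialize the input before taking the start timestamp, so data generation is excluded from the measurement.

```scala
// Hypothetical sketch: time only the operation under test, not the setup.
def timed[T](body: => T): (T, Long) = {
  val start = System.nanoTime()
  val result = body
  (result, System.nanoTime() - start)
}

val data: Array[Int] = (1 to 1000000).toArray           // data generation: NOT timed
val (total, elapsedNs) = timed(data.map(_.toLong).sum)  // only this is measured
```

Taking `start` after the input is fully built is what keeps the generation cost out of the reported numbers.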
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/19211#discussion_r139458935
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/LiveListenerBus.scala ---
@@ -65,53 +60,76 @@ private[spark] class LiveListenerBus(conf: SparkConf)
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/19211#discussion_r139458812
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/LiveListenerBus.scala ---
@@ -65,53 +60,76 @@ private[spark] class LiveListenerBus(conf: SparkConf)
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18853#discussion_r139464231
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -925,6 +925,12 @@ object SQLConf {
.intConf
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/19266
Yeah, agreed, it could be some global constant. I don't think it should be
configurable. Ideally it would be determined from the JVM, but I don't know a way
to do that.
In many cases, assuming
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18887#discussion_r139468324
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -720,19 +633,67 @@ private[history] class
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/19238
I can see the value, but it does not perform well in most cases if we use a
JDBC connection. Instead of adding the extra dialect to upstream, could you
please add Hive as a separate data source?
GitHub user Taaffy opened a pull request:
https://github.com/apache/spark/pull/19268
Incorrect Metric reported in MetricsReporter.scala
The current implementation of processingRate-total uses the wrong metric: it
mistakenly uses inputRowsPerSecond instead of processedRowsPerSecond
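The mix-up can be illustrated with a minimal sketch (the case class and function names are hypothetical, not the actual MetricsReporter code; only the two field names come from the PR description):

```scala
// Hypothetical sketch of the bug: the processingRate-total gauge should
// report processedRowsPerSecond, not inputRowsPerSecond.
final case class StreamingProgress(
    inputRowsPerSecond: Double,
    processedRowsPerSecond: Double)

// Corrected gauge: reads the processed rate.
def processingRateTotal(p: StreamingProgress): Double =
  p.processedRowsPerSecond // was mistakenly p.inputRowsPerSecond
```

The two rates diverge whenever the query cannot keep up with its input, which is exactly when the gauge matters.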
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/19211#discussion_r139458303
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/LiveListenerBus.scala ---
@@ -65,53 +60,76 @@ private[spark] class LiveListenerBus(conf: SparkConf)
Github user maver1ck commented on the issue:
https://github.com/apache/spark/pull/19234
OK. It passed all tests, so let's merge it.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19211
**[Test build #81884 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81884/testReport)**
for PR 19211 at commit
Github user akopich commented on the issue:
https://github.com/apache/spark/pull/18924
Ping @jkbradley.
Thank you @WeichenXu123 once again for the comment! Please have a look.
---
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18887#discussion_r139468045
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -720,19 +633,67 @@ private[history] class
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18887#discussion_r139468080
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -720,19 +633,67 @@ private[history] class
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18924
**[Test build #81885 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81885/testReport)**
for PR 18924 at commit
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18887#discussion_r139468509
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -742,53 +703,150 @@ private[history] object FsHistoryProvider {
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18924
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18924
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81885/
Test FAILed.
---
GitHub user juanrh opened a pull request:
https://github.com/apache/spark/pull/19267
[WIP][SPARK-20628][CORE] Blacklist nodes when they transition to
DECOMMISSIONING state in YARN
## What changes were proposed in this pull request?
Dynamic cluster configurations where cluster
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/19261
I think we should not do it, because no DB vendor does it.
---
Github user kevinyu98 commented on the issue:
https://github.com/apache/spark/pull/12646
Can we retest this? The unknown return code is not related to the code.
Thanks.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18704
**[Test build #81883 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81883/testReport)**
for PR 18704 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/12646
**[Test build #81886 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81886/testReport)**
for PR 12646 at commit
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/19196
retest this please
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/19230
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19196
**[Test build #81887 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81887/testReport)**
for PR 19196 at commit
Github user WeichenXu123 commented on the issue:
https://github.com/apache/spark/pull/19229
@viirya I ran the code, and you're right: most of the time is spent on the
executedPlan generation (the old version of the code). Thanks!
But can you add a benchmark comparison with the `RDD.aggregate` version?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/19261
What does this even mean?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18887
**[Test build #81890 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81890/testReport)**
for PR 18887 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19135
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19135
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81878/
Test PASSed.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/12646
retest this please
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18853#discussion_r139464749
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -925,6 +925,12 @@ object SQLConf {
.intConf
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18853#discussion_r139464467
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -925,6 +925,12 @@ object SQLConf {
.intConf
Github user WeichenXu123 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18924#discussion_r139470472
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/clustering/LDAOptimizer.scala ---
@@ -462,31 +462,44 @@ final class OnlineLDAOptimizer extends
Github user WeichenXu123 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18924#discussion_r139467949
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/clustering/LDAOptimizer.scala ---
@@ -462,31 +462,44 @@ final class OnlineLDAOptimizer extends
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19267
Can one of the admins verify this patch?
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/19268
Please make a JIRA @Taaffy
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17819
@WeichenXu123 Do you mean we keep both inputCol and inputCols in
`Bucketizer`?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19230
**[Test build #81877 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81877/testReport)**
for PR 19230 at commit