Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18559
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79312/
Test FAILed.
---
GitHub user gengliangwang opened a pull request:
https://github.com/apache/spark/pull/18560
Revise rand comparison in BatchEvalPythonExecSuite
## What changes were proposed in this pull request?
Revise rand comparison in BatchEvalPythonExecSuite
In
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17758#discussion_r126074450
--- Diff: sql/core/src/test/resources/sql-tests/inputs/create.sql ---
@@ -0,0 +1,23 @@
+-- Catch case-sensitive name duplication
+SET
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18559
Merged build finished. Test FAILed.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18559
**[Test build #79312 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79312/testReport)**
for PR 18559 at commit
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17758#discussion_r126074238
--- Diff: sql/core/src/test/resources/sql-tests/inputs/create.sql ---
@@ -0,0 +1,23 @@
+-- Catch case-sensitive name duplication
+SET
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18559
Merged build finished. Test FAILed.
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17758#discussion_r126073404
--- Diff: sql/core/src/test/resources/sql-tests/inputs/create.sql ---
@@ -0,0 +1,23 @@
+-- Catch case-sensitive name duplication
+SET
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18559
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79311/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18559
**[Test build #79311 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79311/testReport)**
for PR 18559 at commit
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18444
Thanks for asking @ueshin. Sounds OK to me too. I currently have some
pending review comments for minor nits. Let me finish mine today.
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18559#discussion_r126072754
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -2638,4 +2638,17 @@ class SQLQuerySuite extends QueryTest with
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/18559#discussion_r126072760
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -2638,4 +2638,17 @@ class SQLQuerySuite extends QueryTest with
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/18559#discussion_r126072118
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -2638,4 +2638,17 @@ class SQLQuerySuite extends QueryTest with
Github user dongjoon-hyun closed the pull request at:
https://github.com/apache/spark/pull/18557
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/18557
Yep. It's officially totally internal.
What I meant by `performance issue` is that a 3rd party can still use it, and
there might be a performance gap between `float` and `double`.
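The `float` vs `double` gap mentioned here is easy to see in storage terms alone (an illustrative sketch, not a measurement of Spark's `ColumnVector`): a single-precision value takes half the space of a double-precision one, so falling back to `double` roughly doubles a column's memory footprint.

```python
# Illustrative only: compare per-element storage of single- vs double-precision
# values using Python's array module (4 vs 8 bytes on typical platforms).
import array

floats = array.array("f", [0.0] * 1_000_000)   # single precision
doubles = array.array("d", [0.0] * 1_000_000)  # double precision

print(floats.itemsize, doubles.itemsize)       # typically: 4 8
print(doubles.itemsize / floats.itemsize)      # typically: 2.0
```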
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18559#discussion_r126071406
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -2638,4 +2638,17 @@ class SQLQuerySuite extends QueryTest with
Github user ueshin commented on the issue:
https://github.com/apache/spark/pull/18444
LGTM, pending Jenkins.
@HyukjinKwon, @holdenk, Do you have any other concerns?
---
Github user koertkuipers commented on the issue:
https://github.com/apache/spark/pull/609
@ganeshm25 It seems to work in newer Spark versions. I haven't tried it in
Spark 1.4.2. However, it's still very tricky to get right and I would prefer a
simpler solution.
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18462#discussion_r126071195
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -1007,6 +1007,10 @@ class Dataset[T] private[sql](
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/18559#discussion_r126071223
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -2638,4 +2638,17 @@ class SQLQuerySuite extends QueryTest with
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18558
---
Github user ganeshm25 commented on the issue:
https://github.com/apache/spark/pull/609
@koertkuipers I am trying to run multiple driver-java-options with Spark
1.4.2 inside a bash script. Is there a solution you found for this?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18444
**[Test build #79314 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79314/testReport)**
for PR 18444 at commit
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/18558
thanks, merging to master!
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/18388
We didn't change `spark.shuffle.io.numConnectionsPerPeer`. Our biggest
cluster has 6000 `NodeManager`s. There are 50 executors running on the same
host at the same time.
---
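The connection counts at this scale can be sketched with a back-of-envelope model (an illustrative assumption, not Spark's exact accounting): if each host may open up to `spark.shuffle.io.numConnectionsPerPeer` connections to every peer host, fan-out grows linearly with cluster size.

```python
# Hypothetical back-of-envelope estimate of shuffle connection fan-out.
# Model assumption: up to `num_connections_per_peer` connections per peer host.
def max_outbound_connections(num_peer_hosts: int,
                             num_connections_per_peer: int = 1) -> int:
    return num_peer_hosts * num_connections_per_peer

# Cluster described above: 6000 NodeManagers, with the default
# spark.shuffle.io.numConnectionsPerPeer = 1.
print(max_outbound_connections(6000))  # 6000
```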
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18425
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/18425
thanks, merging to master!
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/18388
@cloud-fan
To be honest, it's a little bit tricky to reject "open blocks" by closing
the connection. The subsequent reconnection will surely have extra cost. In
the current change we are relying
Github user ueshin commented on the issue:
https://github.com/apache/spark/pull/18444
Jenkins, retest this please.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18553
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/18557
`ColumnVector` is totally internal in Spark 2.2, so there won't be a
3rd-party Spark library issue.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18558
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18558
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79309/
Test PASSed.
---
Github user ueshin commented on the issue:
https://github.com/apache/spark/pull/18553
Thanks for reviewing! merging to master.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18558
**[Test build #79309 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79309/testReport)**
for PR 18558 at commit
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/18388
> there are 200K+ connections and 3.5M blocks(FileSegmentManagedBuffer)
being fetched.
Did you use a large `spark.shuffle.io.numConnectionsPerPeer`? If not, the
number of connections seems
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/18557
BTW, thank you for the swift reviews and feedback on my PR. :)
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/18388
Analyzing the heap dump, there are 200K+ connections and 3.5M
blocks (`FileSegmentManagedBuffer`) being fetched. Yes, flow control is a good
idea. But I still think it makes much sense to control
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/18557
I know that 'there is no usage of this API internally in Spark 2.2', but
it's only for 2.2.0.
My reason was that any 3rd-party Spark library cannot use `ColumnVector` for
the `float` type in Spark
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18307
**[Test build #79313 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79313/testReport)**
for PR 18307 at commit
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/18307
retest this please
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/18557
I've changed the ticket type from `bug` to `improvement`; adding a new API
is not fixing a bug.
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17633#discussion_r126068180
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveShim.scala ---
@@ -589,18 +590,43 @@ private[client] class Shim_v0_13 extends
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/18557
We have not seen any failures in test suites. And [there is no usage of
this API](https://github.com/apache/spark/pull/17836#discussion_r114488839) in
Spark 2.2.
Does this omission cause any
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17633#discussion_r126067892
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveShim.scala ---
@@ -589,18 +590,43 @@ private[client] class Shim_v0_13 extends
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/16697
LGTM, pending tests
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18556
Thank you @cloud-fan!
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17633#discussion_r126067471
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveShim.scala ---
@@ -589,18 +590,40 @@ private[client] class Shim_v0_13 extends
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18559
**[Test build #79312 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79312/testReport)**
for PR 18559 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18556
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18288
---
Github user facaiy commented on the issue:
https://github.com/apache/spark/pull/18554
I'm not familiar with R; I used grep to search for "OneVsRest" and got
nothing. Hence it seems that nothing needs to be done for the R part.
---
Github user facaiy commented on the issue:
https://github.com/apache/spark/pull/18523
@SparkQA test again, please.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/18557
Hi, @kiszk .
I think this is a bug fix of `ColumnVector` as described in
[SPARK-20566](https://issues.apache.org/jira/browse/SPARK-20566).
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/18556
LGTM, merging to master!
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/18556#discussion_r126066952
--- Diff: mllib/src/main/scala/org/apache/spark/mllib/util/MLUtils.scala ---
@@ -102,6 +104,25 @@ object MLUtils extends Logging {
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18559
LGTM
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/18559#discussion_r126066595
--- Diff:
sql/core/src/test/resources/sql-tests/results/string-functions.sql.out ---
@@ -30,20 +30,20 @@ abc
-- !query 3
EXPLAIN EXTENDED
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/18558
LGTM pending jenkins, also cc @rxin
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18559
**[Test build #79311 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79311/testReport)**
for PR 18559 at commit
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18559#discussion_r126066489
--- Diff:
sql/core/src/test/resources/sql-tests/results/string-functions.sql.out ---
@@ -30,20 +30,20 @@ abc
-- !query 3
EXPLAIN EXTENDED
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18559#discussion_r126066311
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala
---
@@ -751,15 +751,17 @@ class AstBuilder extends
GitHub user cloud-fan opened a pull request:
https://github.com/apache/spark/pull/18559
[SPARK-21335][SQL] support un-aliased subquery
## What changes were proposed in this pull request?
Un-aliased subqueries have been supported by Spark SQL for a long time. Their
semantics were not
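For readers unfamiliar with the feature, this is the shape at issue: a subquery in the `FROM` clause with no alias. The sketch below runs it against SQLite purely to show the syntax; the PR concerns Spark SQL's semantics for it.

```python
# Demonstrate the un-aliased FROM-clause subquery form using SQLite
# (an illustration of the SQL shape only; not Spark SQL itself).
import sqlite3

conn = sqlite3.connect(":memory:")
rows = conn.execute(
    "SELECT * FROM (SELECT 1 AS x UNION ALL SELECT 2)"  # no alias on subquery
).fetchall()
print(rows)  # [(1,), (2,)]
```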
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/18559
cc @rxin @viirya
---
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/18557
@dongjoon-hyun Is there any reason to backport this to previous versions?
This is because we had such [a
discussion](https://github.com/apache/spark/pull/17836#pullrequestreview-35957231).
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/18557
Hi, @cloud-fan .
This is the backport for #17836 .
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18557
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18557
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79306/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18557
**[Test build #79306 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79306/testReport)**
for PR 18557 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16697
**[Test build #79310 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79310/testReport)**
for PR 16697 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18465
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79308/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18465
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18465
**[Test build #79308 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79308/testReport)**
for PR 18465 at commit
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/16697
retest this please
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18558
**[Test build #79309 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79309/testReport)**
for PR 18558 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18553
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18553
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79304/
Test PASSed.
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18558
cc @cloud-fan This removes the writeTime metrics.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18553
**[Test build #79304 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79304/testReport)**
for PR 18553 at commit
GitHub user viirya opened a pull request:
https://github.com/apache/spark/pull/18558
[SPARK-20703][SQL][FOLLOW-UP] Associate metrics with data writes onto
DataFrameWriter operations
## What changes were proposed in this pull request?
Remove time metrics since it seems no
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18556
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79307/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18556
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18556
**[Test build #79307 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79307/testReport)**
for PR 18556 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18465
**[Test build #79308 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79308/testReport)**
for PR 18465 at commit
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18465
(simply rebased)
---
Github user jinxing64 closed the pull request at:
https://github.com/apache/spark/pull/18482
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18159#discussion_r126056717
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormatWriter.scala
---
@@ -314,21 +339,40 @@ object FileFormatWriter
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/18482
Sure, I will update the document soon.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18556
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79305/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18556
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18556
**[Test build #79305 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79305/testReport)**
for PR 18556 at commit
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/18556#discussion_r126053740
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/source/libsvm/LibSVMRelation.scala ---
@@ -89,18 +93,14 @@ private[libsvm] class LibSVMFileFormat
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18556
**[Test build #79307 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79307/testReport)**
for PR 18556 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18509
---
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/18509
Thanks! Merging to master.
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/18556#discussion_r126051332
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/source/libsvm/LibSVMRelation.scala ---
@@ -89,18 +93,17 @@ private[libsvm] class LibSVMFileFormat
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18557
**[Test build #79306 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79306/testReport)**
for PR 18557 at commit
Github user facaiy commented on a diff in the pull request:
https://github.com/apache/spark/pull/18556#discussion_r126050849
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/source/libsvm/LibSVMRelation.scala ---
@@ -89,18 +93,17 @@ private[libsvm] class LibSVMFileFormat extends
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/18557
[SPARK-20566][SQL][BRANCH-2.2] ColumnVector should support `appendFloats`
for array
## What changes were proposed in this pull request?
This PR aims to add a missing `appendFloats`
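As a rough illustration of what a batch `appendFloats`-style operation does (a hypothetical sketch in Python; the names, signature, and layout are not Spark's `ColumnVector` API): copy `count` values from a source array starting at `offset` into a growable column buffer, instead of appending one element at a time.

```python
# Hypothetical sketch of a batch float-append on a growable column buffer.
# Not Spark's ColumnVector API; names and signature are illustrative.
import array

class FloatColumn:
    def __init__(self) -> None:
        self._data = array.array("f")  # contiguous single-precision storage

    def append_floats(self, src, offset: int, count: int) -> int:
        """Append `count` floats from `src` starting at `offset`;
        return the new element count."""
        self._data.extend(src[offset:offset + count])
        return len(self._data)

col = FloatColumn()
col.append_floats([1.0, 2.0, 3.0, 4.0], offset=1, count=2)
print(list(col._data))  # [2.0, 3.0]
```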