Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17953#discussion_r116157223
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala
---
@@ -1504,6 +1504,7 @@ class AstBuilder extends
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17959
Merged to master/2.2. Please send a follow-up PR to address @gatorsmile's
comments, thanks!
---
If your project is set up for it, you can reply to this email and have your
reply appear on
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17711#discussion_r116156736
--- Diff: sql/core/src/test/resources/sql-tests/inputs/operators.sql ---
@@ -32,3 +32,11 @@ select 1 - 2;
select 2 * 5;
select 5 % 3;
select
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17711
@maropu The solution using `tailrec` looks more straightforward. Could you
submit the PR based on that? Thanks!
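For readers following along, here is a minimal generic sketch of the `tailrec` style being discussed (illustrative only, not the actual Optimizer change from the PR): rewriting a deep recursion as a tail recursion so the Scala compiler turns it into a loop and long inputs cannot overflow the stack.

```scala
import scala.annotation.tailrec

// Generic sketch only, not the Spark Optimizer code: @tailrec makes the
// compiler verify the call is in tail position and compile it to a loop.
object TailrecSketch {
  @tailrec
  def sum(xs: List[Int], acc: Long = 0L): Long = xs match {
    case Nil    => acc
    case h :: t => sum(t, acc + h) // tail call: safe for very long lists
  }

  def main(args: Array[String]): Unit = {
    // A list this long would risk a StackOverflowError with naive recursion.
    println(sum(List.fill(1000000)(1)))
  }
}
```

If the recursive call were not in tail position (e.g. `h + sum(t)`), the `@tailrec` annotation would turn that into a compile error rather than a silent stack risk.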
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17711#discussion_r116156556
--- Diff: sql/core/src/test/resources/sql-tests/inputs/operators.sql ---
@@ -32,3 +32,11 @@ select 1 - 2;
select 2 * 5;
select 5 % 3;
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17959
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17959
How about `HiveTableScanExec`?
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17959#discussion_r116156087
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/DataSourceScanExec.scala
---
@@ -519,8 +519,18 @@ case class FileSourceScanExec(
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/17948#discussion_r116155996
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1003,18 +1003,32 @@ class Analyzer(
*/
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14582
We could ask the mailing list if you feel strongly about this. For
example, the `from_json` function was also brought to the mailing list before
getting merged.
I think we should not add
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/17948#discussion_r116155780
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1003,18 +1003,32 @@ class Analyzer(
*/
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17948
LGTM too. : )
---
Github user ghoto commented on a diff in the pull request:
https://github.com/apache/spark/pull/17940#discussion_r116155652
--- Diff: mllib/src/main/scala/org/apache/spark/mllib/linalg/Matrices.scala
---
@@ -992,7 +992,20 @@ object Matrices {
new DenseMatrix(dm.rows,
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17959
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76843/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17959
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17959
**[Test build #76843 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76843/testReport)**
for PR 17959 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17957
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17957
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76842/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17957
**[Test build #76842 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76842/testReport)**
for PR 17957 at commit
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/16985
@cloud-fan: I have made the suggested change(s).
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17956
LGTM except for a minor comment.
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/17956#discussion_r116154314
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/json/JsonSuite.scala
---
@@ -1988,4 +1988,47 @@ class JsonSuite extends
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17052
Yea, I just wanted to check if it is in progress in any way. Thanks for
your input.
---
Github user lalinsky commented on the issue:
https://github.com/apache/spark/pull/14582
You were asking for more interest in the feature; there was no way I could
answer that. :)
Regarding the change itself, the system can already auto-cast an integer to a
timestamp, but not a
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17711
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17711
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76840/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17711
**[Test build #76840 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76840/testReport)**
for PR 17711 at commit
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/17052
@HyukjinKwon Sorry! I have been busy for a while. Let me resolve this
conflict.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17956
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76841/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17956
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17956
**[Test build #76841 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76841/testReport)**
for PR 17956 at commit
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17935#discussion_r116153501
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SubquerySuite.scala
---
@@ -868,6 +868,29 @@ class SubquerySuite extends QueryTest with
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17948
**[Test build #76849 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76849/testReport)**
for PR 17948 at commit
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17960#discussion_r116153263
--- Diff: sql/core/src/test/resources/sql-tests/inputs/limit.sql ---
@@ -1,23 +1,27 @@
-- limit on various data types
-select * from
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17948#discussion_r116153034
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1003,18 +1003,31 @@ class Analyzer(
*/
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17644#discussion_r116152814
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -871,6 +886,23 @@ private[hive] object HiveClientImpl {
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17948#discussion_r116152643
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1003,18 +1003,31 @@ class Analyzer(
*/
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17959
LGTM
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17948
**[Test build #76848 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76848/testReport)**
for PR 17948 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17960
**[Test build #76847 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76847/testReport)**
for PR 17960 at commit
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/17960
[SPARK-20719] [SQL] Support LIMIT ALL
### What changes were proposed in this pull request?
`LIMIT ALL` is the same as omitting the `LIMIT` clause. It is supported by
both PostgreSQL and
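The semantics described in the PR summary can be sketched like this (an illustrative model only, not the actual parser change; `applyLimit` is a hypothetical helper): `LIMIT ALL` behaves exactly like an absent `LIMIT` clause.

```scala
// Hypothetical sketch of the LIMIT ALL semantics: None models LIMIT ALL /
// an omitted LIMIT clause; Some(n) models LIMIT n.
object LimitAllSketch {
  def applyLimit[A](rows: Seq[A], limit: Option[Int]): Seq[A] =
    limit.fold(rows)(rows.take)

  def main(args: Array[String]): Unit = {
    val rows = Seq(1, 2, 3, 4, 5)
    assert(applyLimit(rows, None) == rows)         // LIMIT ALL: every row
    assert(applyLimit(rows, Some(2)) == Seq(1, 2)) // LIMIT 2: first two rows
    println("ok")
  }
}
```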
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17948#discussion_r116151791
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1003,18 +1003,30 @@ class Analyzer(
*/
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17948
LGTM
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17948#discussion_r116150714
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1003,18 +1003,30 @@ class Analyzer(
*/
Github user ConeyLiu commented on the issue:
https://github.com/apache/spark/pull/17936
Here are the cluster test results. `RDD.cartesian` is used in the Spark MLlib
ALS algorithm, and was compared against the latest Spark master branch.
Environments: Spark on Yarn with 9 executors (10 cores
Github user mridulm commented on the issue:
https://github.com/apache/spark/pull/16697
Yes. If inside, you are right: only the first will be logged!
---
Github user CodingCat commented on the issue:
https://github.com/apache/spark/pull/16697
you mean outside of
https://github.com/apache/spark/pull/16697/files#diff-ca0fe05a42fd5edcab8a1bdaa8e58db9R210?
---
Github user mridulm commented on a diff in the pull request:
https://github.com/apache/spark/pull/17942#discussion_r116148995
--- Diff: core/src/main/scala/org/apache/spark/scheduler/Task.scala ---
@@ -115,26 +115,33 @@ private[spark] abstract class Task[T](
case t:
Github user mridulm commented on a diff in the pull request:
https://github.com/apache/spark/pull/17942#discussion_r116148841
--- Diff: core/src/main/scala/org/apache/spark/util/taskListeners.scala ---
@@ -55,14 +55,16 @@ class TaskCompletionListenerException(
extends
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17956
LGTM
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17906
thanks, merging to master/2.2/2.1/2.0!
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17906
---
Github user wzhfy commented on the issue:
https://github.com/apache/spark/pull/17959
also cc @gatorsmile
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17956
**[Test build #76846 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76846/testReport)**
for PR 17956 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17958
**[Test build #76845 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76845/testReport)**
for PR 17958 at commit
Github user tdas commented on the issue:
https://github.com/apache/spark/pull/17958
@zsxwing @marmbrus
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17958
**[Test build #76844 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76844/testReport)**
for PR 17958 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17959
**[Test build #76843 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76843/testReport)**
for PR 17959 at commit
Github user wzhfy commented on the issue:
https://github.com/apache/spark/pull/17959
cc @cloud-fan @hvanhovell
---
GitHub user wzhfy opened a pull request:
https://github.com/apache/spark/pull/17959
[SPARK-20718][SQL] FileSourceScanExec with different filter orders should
be the same after canonicalization
## What changes were proposed in this pull request?
Since `constraints` in
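The canonicalization idea in the PR title can be sketched in miniature (illustrative only; `Scan` and `canonicalized` here are hypothetical stand-ins, not Spark's `QueryPlan` code): two scans whose filters differ only in order should compare equal once the filters are put into a canonical order.

```scala
// Hypothetical miniature of plan canonicalization: sort the filter
// predicates so that filter order no longer affects plan equality.
object CanonicalizeSketch {
  case class Scan(table: String, filters: Seq[String]) {
    def canonicalized: Scan = copy(filters = filters.sorted)
  }

  def main(args: Array[String]): Unit = {
    val a = Scan("t", Seq("x > 1", "y < 2"))
    val b = Scan("t", Seq("y < 2", "x > 1"))
    assert(a != b)                             // raw plans differ by order
    assert(a.canonicalized == b.canonicalized) // canonical forms match
    println("ok")
  }
}
```

This is the property the PR title asks for: canonicalization should erase semantically irrelevant differences such as predicate ordering.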
GitHub user tdas opened a pull request:
https://github.com/apache/spark/pull/17958
[SPARK-20716][SS] StateStore.abort() should not throw exceptions
## What changes were proposed in this pull request?
StateStore.abort() should do a best effort attempt to clean up temporary
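The "best effort" contract described above can be sketched as follows (a hypothetical illustration, not the actual `StateStore` code): `abort()` runs each cleanup step, logs failures instead of propagating them, and keeps going so later steps still run.

```scala
// Hypothetical best-effort abort: cleanup failures are swallowed and
// logged, never thrown, so callers can always abort safely.
object BestEffortAbort {
  def abort(cleanups: Seq[() => Unit]): Unit =
    cleanups.foreach { step =>
      try step()
      catch { case e: Exception => println(s"ignored during abort: ${e.getMessage}") }
    }

  def main(args: Array[String]): Unit = {
    var laterStepRan = false
    abort(Seq(
      () => throw new RuntimeException("temp file delete failed"),
      () => laterStepRan = true // still runs despite the earlier failure
    ))
    assert(laterStepRan)
    println("abort completed without throwing")
  }
}
```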
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17887
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17887
Thanks @cloud-fan @gatorsmile
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17887
thanks, merging to master/2.2!
---
Github user tdas commented on the issue:
https://github.com/apache/spark/pull/17954
@marmbrus @zsxwing
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17956#discussion_r116145813
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JacksonParser.scala
---
@@ -127,13 +126,15 @@ class JacksonParser(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17957
**[Test build #76842 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76842/testReport)**
for PR 17957 at commit
Github user tdas commented on the issue:
https://github.com/apache/spark/pull/17957
@marmbrus
---
GitHub user tdas opened a pull request:
https://github.com/apache/spark/pull/17957
[SPARK-20717][SS] Minor tweaks to the MapGroupsWithState behavior
## What changes were proposed in this pull request?
Timeout and state data are two independent entities and should be
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17858
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76839/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17858
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17858
**[Test build #76839 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76839/testReport)**
for PR 17858 at commit
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17222#discussion_r116145107
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/AggregationQuerySuite.scala
---
@@ -20,16 +20,19 @@ package
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17955
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76838/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17955
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17955
**[Test build #76838 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76838/testReport)**
for PR 17955 at commit
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17222#discussion_r116144809
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/UDFRegistration.scala ---
@@ -491,20 +491,42 @@ class UDFRegistration private[sql]
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17956#discussion_r116144531
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JacksonParser.scala
---
@@ -127,13 +126,15 @@ class JacksonParser(
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17956
cc @NathanHowell, @cloud-fan and @viirya.
(I just want to note this will not change any input/output, just the
exception type, and it avoids an additional conversion attempt.)
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17956
**[Test build #76841 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76841/testReport)**
for PR 17956 at commit
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/17956
[SPARK-18772][SQL] Avoid unnecessary conversion try for special floats in
JSON and add related tests
## What changes were proposed in this pull request?
This PR is based on
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17942#discussion_r116143769
--- Diff: core/src/main/scala/org/apache/spark/scheduler/Task.scala ---
@@ -115,26 +115,33 @@ private[spark] abstract class Task[T](
case t:
Github user mallman commented on the issue:
https://github.com/apache/spark/pull/17633
Hey guys. Just a quick update. I made good progress on implementing
multi-version testing today; however, it's not quite ready. I'm going to be on
leave from tomorrow through the rest of next week,
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/17711
I quickly brushed up the Optimizer code based on your advice:
Using `Stack`:
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17887
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76837/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17887
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17887
**[Test build #76837 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76837/testReport)**
for PR 17887 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17711
**[Test build #76840 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76840/testReport)**
for PR 17711 at commit
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17942#discussion_r116143097
--- Diff: core/src/main/scala/org/apache/spark/util/taskListeners.scala ---
@@ -55,14 +55,16 @@ class TaskCompletionListenerException(
extends
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17711#discussion_r116142572
--- Diff: sql/core/src/test/resources/sql-tests/inputs/operators.sql ---
@@ -32,3 +32,11 @@ select 1 - 2;
select 2 * 5;
select 5 % 3;
select
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/14719
@HyukjinKwon Thanks for pinging me! I still think this issue should be
fixed, but I didn't notice @nsyca's last comment. I'll consider the problem
he mentioned soon.
---
Github user mridulm commented on the issue:
https://github.com/apache/spark/pull/16697
`onDropEvent` is invoked for every dropped event, not just the first.
If all you need is a way to find out what the dropped events were, simply
enable trace logging for the class after
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17906
@cloud-fan Spark 2.0 and Spark 2.1 have the same issue. I have updated
the affected versions in the JIRA. Thanks!
---
Github user JeremyNixon commented on the issue:
https://github.com/apache/spark/pull/13621
I ran the Keras experiment with code up at
[GitHub](https://github.com/JeremyNixon/autoencoder) if anyone wants to build on this
or replicate it.
Running Seth's example on
Github user heary-cao commented on the issue:
https://github.com/apache/spark/pull/17869
@HyukjinKwon
I would like to suggest modifying ALSCleanerSuite in another PR.
---
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/17948
@cloud-fan Could you check this? Thanks!
---
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17952
What you said was right. I have modified it as requested. I have
tested it manually, and it is OK.
---
Github user hhbyyh commented on a diff in the pull request:
https://github.com/apache/spark/pull/17940#discussion_r116139174
--- Diff: mllib/src/main/scala/org/apache/spark/mllib/linalg/Matrices.scala
---
@@ -992,7 +992,20 @@ object Matrices {
new DenseMatrix(dm.rows,
Github user hhbyyh commented on a diff in the pull request:
https://github.com/apache/spark/pull/17940#discussion_r116139610
--- Diff:
mllib/src/test/scala/org/apache/spark/mllib/linalg/MatricesSuite.scala ---
@@ -46,6 +46,26 @@ class MatricesSuite extends SparkFunSuite {
Github user hhbyyh commented on a diff in the pull request:
https://github.com/apache/spark/pull/17940#discussion_r116139038
--- Diff: mllib/src/main/scala/org/apache/spark/mllib/linalg/Matrices.scala
---
@@ -992,7 +992,20 @@ object Matrices {
new DenseMatrix(dm.rows,