Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16579
Merged build finished. Test PASSed.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16579
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71824/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16579
**[Test build #71824 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71824/testReport)**
for PR 16579 at commit
[`7879201`](https://github.com/apache/spark/commit/7
Github user ouyangxiaochen commented on the issue:
https://github.com/apache/spark/pull/16638
I am sorry that I didn't grasp the key points of your question. In Hive, if
there are data files under the specified path while creating an external table,
then Hive will identify the files as
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16642
**[Test build #71829 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71829/testReport)**
for PR 16642 at commit
[`c200b98`](https://github.com/apache/spark/commit/c2
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16566
**[Test build #71828 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71828/testReport)**
for PR 16566 at commit
[`d36c23a`](https://github.com/apache/spark/commit/d3
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16642#discussion_r97262909
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/sources/PartitionedWriteSuite.scala
---
@@ -92,6 +111,16 @@ class PartitionedWriteSuite extends Que
Github user windpiger commented on a diff in the pull request:
https://github.com/apache/spark/pull/16642#discussion_r97262157
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/sources/PartitionedWriteSuite.scala
---
@@ -92,6 +96,47 @@ class PartitionedWriteSuite extends Quer
Github user windpiger commented on a diff in the pull request:
https://github.com/apache/spark/pull/16642#discussion_r97262179
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/sources/PartitionedWriteSuite.scala
---
@@ -92,6 +96,47 @@ class PartitionedWriteSuite extends Quer
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16642
**[Test build #71827 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71827/testReport)**
for PR 16642 at commit
[`aff53dc`](https://github.com/apache/spark/commit/af
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/16521
Made one pass. Looks good overall. Just some nits.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/1
**[Test build #71826 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71826/testReport)**
for PR 1 at commit
[`d1a2d6c`](https://github.com/apache/spark/commit/d1
Github user wangmiao1981 commented on a diff in the pull request:
https://github.com/apache/spark/pull/1#discussion_r97260863
--- Diff: R/pkg/R/mllib_clustering.R ---
@@ -225,10 +225,12 @@ setMethod("spark.kmeans", signature(data =
"SparkDataFrame", formula = "formula"
Github user admackin commented on the issue:
https://github.com/apache/spark/pull/16652
I've addressed all the problems, I think: code style is now fixed,
MLTestingUtils is patched (and I verified that all MLlib test cases still
pass), and I added a test case for zero-valued labels
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16638
Please keep updating your PR description. For example, this PR does not
rely on `manual tests`. In addition, you also need to summarize what this PR
did. List more details to help reviewers understand.
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16638
Let me rephrase it. If the directory specified in the `LOCATION` spec
contains other files, how does Hive behave?
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16638
First, please change the PR title to `[SPARK-19115] [SQL] Supporting Create
External Table Like Location`
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16645
**[Test build #71825 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71825/testReport)**
for PR 16645 at commit
[`c55a1f9`](https://github.com/apache/spark/commit/c5
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16671
The connectors from some DBMS vendors use the UNLOAD utility, which
performs much better, and build the RDD in the connectors.
Normally, JDBC is not a good option for large table fetches
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/16654
Metrics evaluate the clustering though; the details of the algorithm are
irrelevant. This still clusters points in a continuous space so you can measure
WSSSE.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16579
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71821/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16579
Merged build finished. Test PASSed.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16594
:-) No perfect solution, but we should use the [metric
prefix](https://en.wikipedia.org/wiki/Metric_prefix) when the number is huge.
---
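The metric-prefix suggestion above can be sketched as follows. This is a minimal, self-contained illustration, not Spark's actual implementation; the function name `metric_prefix` is hypothetical.

```python
def metric_prefix(n: float) -> str:
    """Render a large number compactly with a metric prefix (K, M, G, ...)."""
    prefixes = ["", "K", "M", "G", "T", "P", "E"]
    i = 0
    while abs(n) >= 1000.0 and i < len(prefixes) - 1:
        n /= 1000.0
        i += 1
    # One decimal place for scaled values, none for small ones
    return f"{n:.1f}{prefixes[i]}" if i else f"{n:.0f}"

print(metric_prefix(123))            # 123
print(metric_prefix(8_500_000))      # 8.5M
print(metric_prefix(1_234_567_890))  # 1.2G
```

A formatter like this would keep `sizeInBytes` readable in explain output without changing the underlying statistics.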
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16594
SQL Server has three ways to show the plan: graphical plans, text plans, and
XML plans. Actually, it is pretty advanced. When using the text plans, users
can set the output formats:
1. SH
Github user actuaryzhang commented on the issue:
https://github.com/apache/spark/pull/16675
@yanboliang Thanks. Seems to have passed tests.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/16659
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16579
**[Test build #71824 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71824/testReport)**
for PR 16579 at commit
[`7879201`](https://github.com/apache/spark/commit/78
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/16659
thanks, merging to master!
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16579
The only failure is irrelevant to this PR.
```
[info] - set spark.sql.warehouse.dir *** FAILED *** (5 minutes, 0 seconds)
[info] Timeout of './bin/spark-submit' '--class'
'org.apa
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16668#discussion_r97254989
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3406,3 +3406,28 @@ setMethod("randomSplit",
}
sapply(sdfs, dataFrame)
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16579
Retest this please.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16594
As of MySQL 5.7.3, the EXPLAIN statement is changed so that the effect of
the EXTENDED keyword is always enabled.
```
mysql> EXPLAIN EXTENDED
-> SELECT t1.a, t1.a IN (SELECT t2.a
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16579
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71822/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16579
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16579
**[Test build #71822 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71822/testReport)**
for PR 16579 at commit
[`7879201`](https://github.com/apache/spark/commit/7
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16659
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71818/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16659
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16659
**[Test build #71818 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71818/testReport)**
for PR 16659 at commit
[`0753ee6`](https://github.com/apache/spark/commit/0
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16594
PostgreSQL has [a few different options in the EXPLAIN
command](https://www.postgresql.org/docs/9.3/static/sql-explain.html):
```
EXPLAIN SELECT * FROM foo WHERE i = 4;
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16552#discussion_r97253775
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
---
@@ -1353,6 +1353,15 @@ class HiveDDLSuite
sql("INS
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16594
DB2 has a tool to format the contents of the EXPLAIN tables. Below is an
example of the output with explanation:
![screenshot 2017-01-22 21 05
45](https://cloud.githubusercontent.com/ass
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16344
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16344
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71823/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16344
**[Test build #71823 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71823/testReport)**
for PR 16344 at commit
[`54da2cb`](https://github.com/apache/spark/commit/5
Github user windpiger commented on the issue:
https://github.com/apache/spark/pull/16672
In Hive:
1. reading a table with a non-existent path throws no exception and returns 0 rows
2. reading a table with a path lacking permission throws a runtime exception
```
FAILED: SemanticException org.ap
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16659
LGTM pending test.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16587
Thanks! Merging to master.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16579
LGTM pending test
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/16669
thanks, merging to master!
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16675
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16675
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71820/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16675
**[Test build #71820 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71820/testReport)**
for PR 16675 at commit
[`97b0a1c`](https://github.com/apache/spark/commit/9
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/16669
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/16587
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16594
Let us do some research on how the other RDBMSs do it. For example,
Oracle:
```
SQL> explain plan for select * from product;
Explained.
SQL> select * from table(dbms_xplan.
Github user wzhfy commented on the issue:
https://github.com/apache/spark/pull/16594
@rxin Can we add a flag to enable or disable it? Currently there's no other
way to see size and row count except debugging.
---
Github user djvulee commented on the issue:
https://github.com/apache/spark/pull/16671
@HyukjinKwon One assumption behind this design is that the specified column
has an index in most real-world scenarios, so the table-scan cost is not very high.
What I observed is that most large tabl
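The range-based partitioning under discussion can be sketched in miniature. This is a hypothetical standalone helper (the name `partition_predicates` is invented), loosely modeled on how a JDBC reader might split a numeric column's `[lower, upper]` range into per-partition WHERE clauses; it is not Spark's actual code.

```python
def partition_predicates(column: str, lower: int, upper: int, num_parts: int) -> list[str]:
    """Split [lower, upper] on `column` into num_parts WHERE clauses."""
    stride = (upper - lower) // num_parts or 1
    preds = []
    bound = lower
    for i in range(num_parts):
        # First partition is open below, last is open above, to catch outliers
        lo = f"{column} >= {bound}" if i > 0 else None
        bound += stride
        hi = f"{column} < {bound}" if i < num_parts - 1 else None
        preds.append(" AND ".join(p for p in (lo, hi) if p) or "1=1")
    return preds

for p in partition_predicates("id", 0, 1000, 4):
    print(p)
```

Note the even stride: if the indexed column's values are skewed, equal-width ranges produce unequal partitions, which is the problem this PR thread debates.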
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/16587
LGTM
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/16579
LGTM
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16594
sorry this explain plan makes no sense -- it is impossible to read.
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16579#discussion_r97250719
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -982,6 +982,33 @@ class SQLQuerySuite extends QueryTest with
SharedS
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16675
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71819/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16675
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16675
**[Test build #71819 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71819/testReport)**
for PR 16675 at commit
[`c2b4132`](https://github.com/apache/spark/commit/c
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/16579#discussion_r97250587
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -982,6 +982,33 @@ class SQLQuerySuite extends QueryTest with
SharedSQLConte
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/16636
Ideally, the table schema must be specified or inferred before saving to the
metastore. However, for Hive serde tables, we have to save it to the metastore
first and let the Hive metastore infer the schema
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16579#discussion_r97250343
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -982,6 +982,33 @@ class SQLQuerySuite extends QueryTest with
SharedS
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16579#discussion_r97250196
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -982,6 +982,33 @@ class SQLQuerySuite extends QueryTest with
SharedS
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/16579#discussion_r97250113
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -982,6 +982,33 @@ class SQLQuerySuite extends QueryTest with
SharedSQLConte
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16579#discussion_r97249959
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -982,6 +982,33 @@ class SQLQuerySuite extends QueryTest with
SharedS
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/16579#discussion_r97249854
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -982,6 +982,33 @@ class SQLQuerySuite extends QueryTest with
SharedSQLConte
Github user actuaryzhang commented on the issue:
https://github.com/apache/spark/pull/16344
@yanboliang Thanks so much for your detailed review. Your suggestions make
lots of sense and I have included all of them in the new commit. Let me know if
there is any other change needed.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16671
FWIW, I am negative about this approach too. It does not look like a good
solution to require full table scans to resolve skew between partitions.
As said, it is not good for a large table. Then
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16344
**[Test build #71823 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71823/testReport)**
for PR 16344 at commit
[`54da2cb`](https://github.com/apache/spark/commit/54
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16579#discussion_r97249644
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -982,6 +982,33 @@ class SQLQuerySuite extends QueryTest with
SharedS
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/16579#discussion_r97249538
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -982,6 +982,33 @@ class SQLQuerySuite extends QueryTest with
SharedSQLConte
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16579#discussion_r97249317
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -982,6 +982,33 @@ class SQLQuerySuite extends QueryTest with
SharedS
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16579
**[Test build #71822 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71822/testReport)**
for PR 16579 at commit
[`7879201`](https://github.com/apache/spark/commit/78
Github user yanboliang commented on the issue:
https://github.com/apache/spark/pull/16675
Looks good, I'll merge if it passes test. Thanks.
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16579#discussion_r97249218
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -982,6 +982,33 @@ class SQLQuerySuite extends QueryTest with
SharedS
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/16579#discussion_r97249076
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -982,6 +982,33 @@ class SQLQuerySuite extends QueryTest with
SharedSQLConte
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16579
**[Test build #71821 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71821/testReport)**
for PR 16579 at commit
[`7061cd9`](https://github.com/apache/spark/commit/70
Github user wzhfy commented on the issue:
https://github.com/apache/spark/pull/16594
@hvanhovell I've updated the description which shows a simple example.
The explained plan will become hard to read when joining many tables and
sizeInBytes is computed in the simple way (non-c
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16579
Thank you, @viirya .
I noticed that `spark.sessionState.conf.clear()` is useless. I removed that.
---
Github user actuaryzhang commented on the issue:
https://github.com/apache/spark/pull/16675
@yanboliang Thanks for the quick response. How about the new commit, where
I just change the value from `getFamily` to lower case when necessary, i.e., in
the calculation of p-value and dispers
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16675
**[Test build #71820 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71820/testReport)**
for PR 16675 at commit
[`97b0a1c`](https://github.com/apache/spark/commit/97
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/16579#discussion_r97248522
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -982,6 +982,33 @@ class SQLQuerySuite extends QueryTest with
SharedSQLConte
Github user yanboliang commented on the issue:
https://github.com/apache/spark/pull/16675
@actuaryzhang I think the change is not appropriate; the function
```getFamily``` should return the raw value that users specified. This is
why I didn't change them in #16516. Thanks.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16579
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71816/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16579
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16579
**[Test build #71816 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71816/testReport)**
for PR 16579 at commit
[`387ab59`](https://github.com/apache/spark/commit/3
Github user actuaryzhang commented on the issue:
https://github.com/apache/spark/pull/16675
I would prefer that `getFamily` returns lower case values directly, because
using `getFamily.toLowerCase` can get very cumbersome and I use this a lot in
another PR #16344. If we want to keep
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16675
**[Test build #71819 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71819/testReport)**
for PR 16675 at commit
[`c2b4132`](https://github.com/apache/spark/commit/c2
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16636#discussion_r97247351
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala
---
@@ -1527,6 +1527,21 @@ class DDLSuite extends QueryTest with
Github user koertkuipers commented on the issue:
https://github.com/apache/spark/pull/16479
I will just copy the conversion code over for now, thanks.
---
GitHub user actuaryzhang opened a pull request:
https://github.com/apache/spark/pull/16675
[SPARK-19155][ML] make getFamily case insensitive
## What changes were proposed in this pull request?
This is a supplement to PR #16516, which did not make the value from
`getFamily` case insensitive.
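The case-insensitive matching this PR proposes can be sketched as a tiny standalone helper. The function name `normalize_family` and the family list are illustrative assumptions, not Spark's actual API.

```python
# Hypothetical list of supported GLM family names, stored in canonical lowercase
SUPPORTED_FAMILIES = ("gaussian", "binomial", "poisson", "gamma", "tweedie")

def normalize_family(family: str) -> str:
    """Match a user-supplied family name case-insensitively,
    returning the canonical lowercase form."""
    lowered = family.lower()
    if lowered not in SUPPORTED_FAMILIES:
        raise ValueError(f"Unsupported family: {family}")
    return lowered

print(normalize_family("Binomial"))  # binomial
```

Lowercasing once at the validation boundary avoids sprinkling `toLowerCase` calls through downstream code, which is the cumbersomeness the thread complains about.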
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16659
**[Test build #71818 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71818/testReport)**
for PR 16659 at commit
[`0753ee6`](https://github.com/apache/spark/commit/07
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16344
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16344
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71817/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16579
Merged build finished. Test PASSed.
---