Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18975#discussion_r136509156
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveDirCommand.scala
---
@@ -0,0 +1,134 @@
+/*
+ * Licensed to t
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19090
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81305/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19090
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19090
**[Test build #81305 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81305/testReport)**
for PR 19090 at commit
[`26fc756`](https://github.com/apache/spark/commit/2
Github user rednaxelafx commented on a diff in the pull request:
https://github.com/apache/spark/pull/19082#discussion_r136506452
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/HashAggregateExec.scala
---
@@ -244,6 +246,92 @@ case class HashAggregateExe
Github user rednaxelafx commented on a diff in the pull request:
https://github.com/apache/spark/pull/19082#discussion_r136506046
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/HashAggregateExec.scala
---
@@ -244,6 +246,92 @@ case class HashAggregateExe
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18975#discussion_r136508091
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveDirCommand.scala
---
@@ -0,0 +1,134 @@
+/*
+ * Licensed to t
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18975#discussion_r136508055
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveDirCommand.scala
---
@@ -0,0 +1,134 @@
+/*
+ * Licensed to t
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/18697
shouldn't we fix `ProjectExec.outputPartitioning`?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18869
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81306/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18869
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18869
**[Test build #81306 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81306/testReport)**
for PR 18869 at commit
[`b64c9e6`](https://github.com/apache/spark/commit/b
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18975#discussion_r136506540
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveDirCommand.scala
---
@@ -0,0 +1,134 @@
+/*
+ * Licensed to t
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19102
Can one of the admins verify this patch?
---
Github user lgrcyanny commented on the issue:
https://github.com/apache/spark/pull/19079
Hi @vanzin, I have submitted a PR based on the master branch; please review it, thank you
https://github.com/apache/spark/pull/19102
---
GitHub user lgrcyanny opened a pull request:
https://github.com/apache/spark/pull/19102
[SPARK-21859][CORE] Fix SparkFiles.get failed on driver in yarn-cluster and
yarn-client mode
## What changes were proposed in this pull request?
when using SparkFiles.get for a file on the driver in ya
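(For context, a minimal, hypothetical sketch of the driver-side pattern this PR is about; the file name and path below are made up, and this is not code from the PR.)
```scala
import org.apache.spark.{SparkConf, SparkContext, SparkFiles}

object SparkFilesDriverExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("spark-files-example"))
    // Ship a local file to the working directories of the driver and executors.
    sc.addFile("/tmp/config.txt")
    // SPARK-21859 reports that this driver-side lookup fails in yarn-cluster/yarn-client mode.
    val localPath = SparkFiles.get("config.txt")
    println(s"Resolved local path on the driver: $localPath")
    sc.stop()
  }
}
```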
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18869
@gatorsmile Right. Isn't it too verbose if we describe that map is not supported in
each description?
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18869
`map` is not supported, right?
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18975#discussion_r136502487
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/InsertSuite.scala ---
@@ -534,4 +534,115 @@ class InsertIntoHiveTableSuite extends QueryTest
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/19082#discussion_r136500779
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/HashAggregateExec.scala
---
@@ -244,6 +246,92 @@ case class HashAggregateExec(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18931
**[Test build #81307 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81307/testReport)**
for PR 18931 at commit
[`1101b2c`](https://github.com/apache/spark/commit/11
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18975#discussion_r136499419
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/InsertSuite.scala ---
@@ -534,4 +534,115 @@ class InsertIntoHiveTableSuite extends QueryTest
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18975#discussion_r136497740
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSourceStrategy.scala
---
@@ -140,6 +141,10 @@ case class DataSourceAnal
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18647
Thank you @felixcheung and @holdenk.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18647
Merged to master.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18647
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18647
I double-checked that these **`_split_schema_abstract`**,
**`_parse_field_abstract`**, **`_parse_schema_abstract`** and
**`_infer_schema_type`** are not used in a public API.
Under `./python
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18999
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18999
Thank you @viirya, @felixcheung, @rxin and @ueshin.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18999
Merged to master.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/19090
Looks ok given the examples & syntax - https://ss64.com/nt/cmd.html and
https://technet.microsoft.com/en-us/library/cc771320(v=ws.11).aspx and my
manual tests.
I think here is the very
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18869
**[Test build #81306 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81306/testReport)**
for PR 18869 at commit
[`b64c9e6`](https://github.com/apache/spark/commit/b6
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18869
retest this please.
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18869
@gatorsmile @HyukjinKwon #18818 is merged now. Can this PR go ahead?
As what #18818 did is to allow structs and arrays to be input expressions for
predicates, it currently looks like we don't have e
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19090
**[Test build #81305 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81305/testReport)**
for PR 19090 at commit
[`26fc756`](https://github.com/apache/spark/commit/26
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/19090
ok to test
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18931#discussion_r136492055
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/WholeStageCodegenExec.scala
---
@@ -149,14 +149,144 @@ trait CodegenSupport extends SparkPl
Github user kiszk commented on a diff in the pull request:
https://github.com/apache/spark/pull/18931#discussion_r136491920
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/WholeStageCodegenExec.scala
---
@@ -149,14 +149,144 @@ trait CodegenSupport extends SparkPla
Github user kiszk commented on a diff in the pull request:
https://github.com/apache/spark/pull/18966#discussion_r136491409
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -769,16 +769,27 @@ class CodegenContext {
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/19077
@jerryshao @JoshRosen yes, it would not generally be arbitrary-sized
allocations. Basically, we allocate memory in multiples of 4 or 8 bytes; even
so, I think this change is still beneficial.
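(To illustrate the allocation pattern being discussed — buffers requested in sizes that round to the same multiple of 8 — here is a minimal, hypothetical pooling sketch; it is not the actual HeapMemoryAllocator change in PR 19077, and all names are made up.)
```scala
import scala.collection.mutable

// Hypothetical sketch of size-bucketed buffer pooling; not the real HeapMemoryAllocator.
class PooledAllocator {
  private val pools = mutable.Map.empty[Long, mutable.ListBuffer[Array[Long]]]

  // Round the requested size (in bytes) up to the next multiple of 8, so
  // close-in-size requests (e.g. 57..64 bytes) all share one pool bucket.
  private def alignedSize(size: Long): Long = ((size + 7) / 8) * 8

  def allocate(size: Long): Array[Long] = {
    val key = alignedSize(size)
    pools.get(key) match {
      case Some(bucket) if bucket.nonEmpty => bucket.remove(bucket.length - 1)
      case _ => new Array[Long]((key / 8).toInt)
    }
  }

  def release(buffer: Array[Long]): Unit = {
    val key = buffer.length.toLong * 8
    pools.getOrElseUpdate(key, mutable.ListBuffer.empty[Array[Long]]) += buffer
  }
}
```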
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18999
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18999
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81303/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18999
**[Test build #81303 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81303/testReport)**
for PR 18999 at commit
[`f2608ab`](https://github.com/apache/spark/commit/f
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/18704
@cloud-fan Resolved conflict, could you please review?
---
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/19082#discussion_r136490017
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/HashAggregateExec.scala
---
@@ -244,6 +246,92 @@ case class HashAggregateExec(
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19077#discussion_r136489896
--- Diff:
common/unsafe/src/main/java/org/apache/spark/unsafe/memory/HeapMemoryAllocator.java
---
@@ -47,23 +47,29 @@ private boolean shouldPool(long size
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/19077#discussion_r136487281
--- Diff:
common/unsafe/src/main/java/org/apache/spark/unsafe/memory/HeapMemoryAllocator.java
---
@@ -47,23 +47,29 @@ private boolean shouldPool(long siz
Github user JoshRosen commented on the issue:
https://github.com/apache/spark/pull/19077
Just curious: do you know where we are allocating these close-in-size
chunks of memory? I understand the motivation, but I'm just curious to know what's
causing this pattern. I think the original idea
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19100
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81300/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19100
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19100
**[Test build #81300 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81300/testReport)**
for PR 19100 at commit
[`7dbd810`](https://github.com/apache/spark/commit/7
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19101
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19101
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81302/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19101
**[Test build #81302 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81302/testReport)**
for PR 19101 at commit
[`3ee18ce`](https://github.com/apache/spark/commit/3
Github user zhengruifeng commented on the issue:
https://github.com/apache/spark/pull/17014
@WeichenXu123 Sounds good. And since adding `handlePersistence` as an
`ml.Param` may influence many algorithms (more than those in this PR), I think we may
need more discussion @MLnick @yanboliang
Github user WeichenXu123 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16774#discussion_r136482755
--- Diff:
mllib/src/test/scala/org/apache/spark/ml/tuning/CrossValidatorSuite.scala ---
@@ -120,6 +120,33 @@ class CrossValidatorSuite
}
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/19060#discussion_r136475026
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/sources/DataSourceSuite.scala ---
@@ -0,0 +1,151 @@
+/*
+ * Licensed to the Apache Soft
Github user WeichenXu123 commented on the issue:
https://github.com/apache/spark/pull/17014
@smurching Yes, this should be added as an `ml.Param`; we should not add it as
an argument.
@zhengruifeng Would you mind updating the PR according to our discussion
above?
Make `handle
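(A hedged sketch of what exposing such a flag as an `ml.Param` could look like; the trait name, doc string, and default below are hypothetical and not taken from the PR.)
```scala
import org.apache.spark.ml.param.{BooleanParam, Params}

// Hypothetical shared-param trait; not code from the PR under discussion.
trait HasHandlePersistence extends Params {
  final val handlePersistence: BooleanParam = new BooleanParam(
    this, "handlePersistence",
    "whether to cache the input dataset if it is not already persisted")

  setDefault(handlePersistence -> true)

  final def getHandlePersistence: Boolean = $(handlePersistence)
}
```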
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/15334
@gatorsmile Hi Sean, I tried Apache Drill after looking through their
documentation, and they are able to encode interval data into Parquet.
```
0: jdbc:drill:zk=local> CREATE TABLE
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18647
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18647
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81304/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18647
**[Test build #81304 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81304/testReport)**
for PR 18647 at commit
[`83228cb`](https://github.com/apache/spark/commit/8
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/19099#discussion_r136479931
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -2663,4 +2664,31 @@ class SQLQuerySuite extends QueryTest with
SharedSQLCo
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/19099#discussion_r136479686
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -79,11 +79,11 @@ abstract class Optimizer(sessionCatalo
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18647
**[Test build #81304 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81304/testReport)**
for PR 18647 at commit
[`83228cb`](https://github.com/apache/spark/commit/83
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18999
**[Test build #81303 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81303/testReport)**
for PR 18999 at commit
[`f2608ab`](https://github.com/apache/spark/commit/f2
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18999
retest this please
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18647
retest this please
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/19080#discussion_r136477108
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/physical/partitioning.scala
---
@@ -284,24 +241,17 @@ case class RangePartition
Github user janewangfb commented on the issue:
https://github.com/apache/spark/pull/18975
Jenkins test please!
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/19080#discussion_r136476025
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/physical/partitioning.scala
---
@@ -284,24 +241,17 @@ case class RangePartition
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19100
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81301/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19060
**[Test build #81298 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81298/testReport)**
for PR 19060 at commit
[`104f24c`](https://github.com/apache/spark/commit/1
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19060
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19060
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81298/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19100
**[Test build #81301 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81301/testReport)**
for PR 19100 at commit
[`7954c0b`](https://github.com/apache/spark/commit/7
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19101
**[Test build #81302 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81302/testReport)**
for PR 19101 at commit
[`3ee18ce`](https://github.com/apache/spark/commit/3e
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19100
Merged build finished. Test FAILed.
---
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/19101
[SPARK-21884] [BACKPORT-2.2] [SPARK-21477] [SQL] Mark LocalTableScanExec's
input data transient
This PR is to backport https://github.com/apache/spark/pull/18686 for
resolving the issue in http
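(The title refers to the general technique of marking driver-only data `@transient` so it is not serialized along with the plan; the generic illustration below is hypothetical and is not the actual LocalTableScanExec code.)
```scala
// Generic illustration of the @transient technique named in the title; not the Spark change itself.
case class LocalScanLikeNode(name: String, @transient rows: Seq[Long]) extends Serializable {
  // Computed on the driver at construction time; survives serialization even though `rows` does not.
  val numRows: Int = rows.size
}
```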
Github user sitalkedia commented on the issue:
https://github.com/apache/spark/pull/18317
ping @zsxwing !
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18975
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18975
**[Test build #81297 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81297/testReport)**
for PR 18975 at commit
[`e2db5e1`](https://github.com/apache/spark/commit/e
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18975
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81297/
Test FAILed.
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/19050
ping @cloud-fan @hvanhovell Do you have time to review this? Thanks.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19100
**[Test build #81301 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81301/testReport)**
for PR 19100 at commit
[`7954c0b`](https://github.com/apache/spark/commit/79
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/19100#discussion_r136473016
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/OptimizeMetadataOnlyQuerySuite.scala
---
@@ -117,4 +117,12 @@ class OptimizeMetadataOnl
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19100
**[Test build #81300 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81300/testReport)**
for PR 19100 at commit
[`7dbd810`](https://github.com/apache/spark/commit/7d
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/19100#discussion_r136472066
--- Diff: sql/core/src/test/resources/sql-tests/results/cross-join.sql.out
---
@@ -128,6 +128,7 @@ two 2 two 2 one 1 two
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/19100
@cloud-fan
---
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/19100
[SPARK-21891] [SQL] Add TBLPROPERTIES to DDL statement: CREATE TABLE USING
## What changes were proposed in this pull request?
Add `TBLPROPERTIES` to the DDL statement `CREATE TABLE USING`.
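(For reference, the DDL shape being added; the table name and properties below are made up for illustration.)
```scala
import org.apache.spark.sql.SparkSession

object TblPropertiesExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("tblproperties-example").getOrCreate()
    // CREATE TABLE ... USING with TBLPROPERTIES, the syntax proposed in SPARK-21891.
    spark.sql(
      """CREATE TABLE events (id INT, name STRING)
        |USING parquet
        |TBLPROPERTIES ('owner' = 'data-team', 'retention.days' = '30')""".stripMargin)
    // Inspect the stored properties.
    spark.sql("SHOW TBLPROPERTIES events").show(truncate = false)
    spark.stop()
  }
}
```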
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/19078
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/19072
---
Github user jkbradley commented on the issue:
https://github.com/apache/spark/pull/19078
LGTM
Merging with master
Thanks!
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/18686
Thank you so much!
---
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/19072#discussion_r136470877
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/classification/LogisticRegression.scala
---
@@ -1473,6 +1473,17 @@ sealed trait LogisticRegressionSumm
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18686
Sure. Will do it.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/18686
Yes. This is the fix.
@gatorsmile and @cloud-fan, can we have this in branch-2.2, too?
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/19094
I'm closing this issue. Thank you again.
---
Github user dongjoon-hyun closed the pull request at:
https://github.com/apache/spark/pull/19094
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19099
Merged build finished. Test FAILed.
---