Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16777
LGTM
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16777
Thanks! Merging to master.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16868
ok to test
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16868
Yes, the location can be the same as or different from that of the original table.
LGTM pending test
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16841
OK to test
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16915
OK to test
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16915#discussion_r100905620
--- Diff:
sql/core/src/test/resources/sql-tests/results/subquery/in-subquery/in-with-cte.sql.out
---
@@ -0,0 +1,368 @@
+-- Automatically generated
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16841
test this please
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16841#discussion_r100907233
--- Diff:
sql/core/src/test/resources/sql-tests/results/subquery/in-subquery/in-multiple-columns.sql.out
---
@@ -0,0 +1,178 @@
+-- Automatically
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16841#discussion_r100907556
--- Diff:
sql/core/src/test/resources/sql-tests/inputs/subquery/in-subquery/in-multiple-columns.sql
---
@@ -0,0 +1,127 @@
+-- A test suite for
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16891#discussion_r100910764
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCWriteSuite.scala ---
@@ -75,7 +75,7 @@ class JDBCWriteSuite extends SharedSQLContext
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16891#discussion_r100910994
--- Diff:
external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/OracleIntegrationSuite.scala
---
@@ -149,4 +172,16 @@ class
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16915
ok to test
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16868
Thanks! Merging to master.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16878
Thanks! Merging to master.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16869
Now, we can also remove the generated sqlgen files.
---
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/16921
[SPARK-19589][SQL] Removal of SQLGEN files
### What changes were proposed in this pull request?
SQLGen is removed. Thus, the generated files should be removed too.
### How was this
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16739
Let me rewrite the test cases in Scala.
```scala
val df = spark.range(0, 1, 1, 5)
assert(df.rdd.getNumPartitions == 5)
assert(df.coalesce(3
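The snippet above is cut off by the archive mid-assertion. A minimal, self-contained sketch of the same idea (checking partition counts after `range`, `coalesce`, and `repartition`) might look as follows; the continuation of the truncated `df.coalesce(3` call and the `repartition` check are assumptions, not the author's exact test:

```scala
// Hedged sketch, not the original test case: illustrates how the requested
// partition count of spark.range interacts with coalesce and repartition.
// Assumes a local-mode SparkSession.
import org.apache.spark.sql.SparkSession

object CoalesceSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[4]")
      .appName("coalesce-sketch")
      .getOrCreate()

    // range(start, end, step, numPartitions): ask for 5 partitions
    val df = spark.range(0, 1, 1, 5)
    assert(df.rdd.getNumPartitions == 5)

    // coalesce narrows the partition count without a shuffle
    assert(df.coalesce(3).rdd.getNumPartitions == 3)

    // repartition performs a shuffle and can also grow the count
    assert(df.repartition(7).rdd.getNumPartitions == 7)

    spark.stop()
  }
}
```

Since the original comment is truncated, the exact assertions the author intended may differ from those shown here.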
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16921
cc @hvanhovell @jiangxb1987
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16919
test this please
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16919
LGTM. Merging to master!
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16891#discussion_r101097830
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCWriteSuite.scala ---
@@ -349,4 +349,17 @@ class JDBCWriteSuite extends SharedSQLContext
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16891#discussion_r101097919
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCWriteSuite.scala ---
@@ -349,4 +349,17 @@ class JDBCWriteSuite extends SharedSQLContext
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16891
I ran the docker tests on my local machine. Now, finally, all the tests
pass! :)
---
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/16933
[SPARK-19601] [SQL] Fix CollapseRepartition rule to preserve
shuffle-enabled Repartition
### What changes were proposed in this pull request?
When users use the shuffle-enabled `repartition
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16739
The issue is fixed in https://github.com/apache/spark/pull/16933. If that
one is merged first, I will fix the test case in this PR. Thanks! : )
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16925
A late LGTM. : )
Normally, we do not encourage users to define hints inside a view. Users
can add a BROADCAST hint when they define a persistent view. That will be
stored in the
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16931#discussion_r101338203
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/sources/BucketedWriteSuite.scala
---
@@ -169,19 +169,20 @@ class BucketedWriteSuite extends
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16931
Another late LGTM
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16933#discussion_r101341238
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/dsl/package.scala ---
@@ -374,6 +374,9 @@ package object dsl {
case
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16933#discussion_r101341551
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -563,23 +563,27 @@ object CollapseProject extends
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16933#discussion_r101342124
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/CollapseRepartitionSuite.scala
---
@@ -32,6 +32,18 @@ class
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16933#discussion_r101346606
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/CollapseRepartitionSuite.scala
---
@@ -43,15 +55,44 @@ class
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16938
We need to define a consistent rule in the Catalog for how to handle the scenario
when the to-be-created directory already exists. So far, in most DDL scenarios,
when trying to create a directory but it
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16674
A late LGTM : )
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16915
retest this please
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16841
retest this please
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16841
test this please
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16672
LGTM
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16672
Thanks! Merging to master.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16841
ok to test
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16915
LGTM. Merging to master.
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16386#discussion_r101460391
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/json/JsonSuite.scala
---
@@ -1764,4 +1769,117 @@ class JsonSuite extends
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16386#discussion_r101460558
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/json/JsonSuite.scala
---
@@ -1802,4 +1806,118 @@ class JsonSuite extends
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16841
Thanks! Merging to master.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16841
LGTM
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16841#discussion_r101462455
--- Diff:
sql/core/src/test/resources/sql-tests/inputs/subquery/in-subquery/in-multiple-columns.sql
---
@@ -0,0 +1,127 @@
+-- A test suite for
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16954
cc @hvanhovell I did the internal review. It is ready for your review.
Thanks!
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16776
LGTM
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16776
Thanks! Merging to master.
Please continue to finish the work
https://issues.apache.org/jira/browse/SPARK-19573
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16956
I like this change.
To run unit test cases in your local environment, below are some common
commands when I develop Spark SQL
- To do the local style check
dev/lint-scala
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16956#discussion_r101587263
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala
---
@@ -645,17 +645,21 @@ class AstBuilder extends
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16962#discussion_r101653599
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/SaveIntoDataSourceCommand.scala
---
@@ -0,0 +1,52
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16962#discussion_r101655120
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -573,6 +575,21 @@ final class DataFrameWriter[T] private[sql](ds
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16962#discussion_r101655346
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/util/DataFrameCallbackSuite.scala
---
@@ -159,4 +161,56 @@ class DataFrameCallbackSuite extends
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16962
LGTM except three minor comments.
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16962#discussion_r101666288
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/SaveIntoDataSourceCommand.scala
---
@@ -0,0 +1,52
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16962
LGTM
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16938
One more case:
5. `CREATE TABLE` or `CTAS` without the location spec: if the default path
exists, should we succeed or fail?
After we finish the TABLE-level DDLs, we also need to
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16956
LGTM
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16330
The extra flag is needed only when using Hive metastore. How about renaming
the flag to `spark.sql.hive.metastore.default.derby.dir`? The value cannot
be changed at runtime. Thus, the best
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16330
In the Hive metastore execution client, we face the same issue. See the
[code](https://github.com/apache/spark/blob/master/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveUtils.scala#L379-L383
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16938
@windpiger Thank you for your efforts! What you did above needs to be
written as test cases. Could you do that in a separate PR?
In addition, all the cases you tried are only for hive
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16938
Could you check the behaviors for both data source tables and hive serde
tables? Later, we also need to check the behaviors of InMemoryCatalog for data
source tables without enabling Hive
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16290#discussion_r101883075
--- Diff: R/pkg/R/sparkR.R ---
@@ -376,6 +377,12 @@ sparkR.session <- function(
overrideEnvs(sparkConfigMap, param
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16898
Sorry, I am late. Will review it tonight. Thanks!
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16898#discussion_r101887954
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormatWriter.scala
---
@@ -119,23 +130,45 @@ object FileFormatWriter
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16898#discussion_r101888000
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormatWriter.scala
---
@@ -108,9 +107,21 @@ object FileFormatWriter
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16898#discussion_r101888059
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormatWriter.scala
---
@@ -287,31 +320,16 @@ object FileFormatWriter
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16898#discussion_r101888508
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormatWriter.scala
---
@@ -119,23 +130,45 @@ object FileFormatWriter
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/16988
[SPARK-19658] [SQL] Set NumPartitions of RepartitionByExpression In Analyzer
### What changes were proposed in this pull request?
Currently, if `NumPartitions` is not set in
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16933#discussion_r101911156
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/CollapseRepartitionSuite.scala
---
@@ -43,15 +55,44 @@ class
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16956
retest this please
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16988
cc @cloud-fan
---
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/16994
[SPARK-15453] [SQL] [Follow-up] FileSourceScanExec to extract
`outputOrdering` information
### What changes were proposed in this pull request?
`outputOrdering` is also dependent on whether
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16994
cc @tejasapatil @cloud-fan
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16956
Thanks! Merging to master.
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16994#discussion_r101941261
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/sources/BucketedReadSuite.scala ---
@@ -240,6 +240,7 @@ class BucketedReadSuite extends QueryTest
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16981#discussion_r101947505
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala
---
@@ -482,6 +482,15 @@ case class JsonTuple
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16981#discussion_r101947601
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JacksonUtils.scala
---
@@ -55,4 +60,24 @@ object JacksonUtils
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16981
Could you add SQL test cases to SQLQueryTestSuite?
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16981#discussion_r101947987
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala
---
@@ -482,6 +482,15 @@ case class JsonTuple
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16981#discussion_r101948378
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/JsonFunctionsSuite.scala ---
@@ -174,4 +174,44 @@ class JsonFunctionsSuite extends QueryTest with
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16981#discussion_r101948552
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/JsonFunctionsSuite.scala ---
@@ -174,4 +174,44 @@ class JsonFunctionsSuite extends QueryTest with
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16981#discussion_r101948866
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala
---
@@ -482,6 +482,15 @@ case class JsonTuple
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16726#discussion_r101952060
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveTableScanSuite.scala
---
@@ -166,13 +166,11 @@ class HiveTableScanSuite
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16726#discussion_r101952068
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveTableScanSuite.scala
---
@@ -166,13 +166,11 @@ class HiveTableScanSuite
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16626#discussion_r101958020
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -174,6 +177,79 @@ case class AlterTableRenameCommand
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16626#discussion_r101958045
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -174,6 +177,79 @@ case class AlterTableRenameCommand
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16626#discussion_r101958137
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCSuite.scala
---
@@ -71,8 +71,20 @@ class JDBCSuite extends SparkFunSuite
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16626#discussion_r101958217
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/sources/TableScanSuite.scala ---
@@ -416,4 +416,21 @@ class TableScanSuite extends DataSourceTest
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16726#discussion_r101958369
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -251,11 +251,11 @@ private[hive] class
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16626#discussion_r101958919
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -174,6 +177,79 @@ case class AlterTableRenameCommand
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16626#discussion_r101959389
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -504,15 +504,15 @@ private[spark] class HiveExternalCatalog
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16626#discussion_r101960044
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -563,35 +574,47 @@ private[spark] class HiveExternalCatalog
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16626#discussion_r101960271
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -523,18 +523,29 @@ private[spark] class HiveExternalCatalog
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16626#discussion_r101960398
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -504,15 +504,15 @@ private[spark] class HiveExternalCatalog
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16626#discussion_r101961067
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -563,35 +574,47 @@ private[spark] class HiveExternalCatalog
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16626#discussion_r101961354
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -563,35 +574,47 @@ private[spark] class HiveExternalCatalog
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16626
We also need to test support for `InMemoryCatalog`. Please do not add a
test case yet. I think I really need to finish
https://github.com/apache/spark/pull/16592 ASAP. It will make everyone