Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15664#discussion_r94094625
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -211,6 +211,55 @@ object JdbcUtils extends
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15664#discussion_r94094718
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -568,10 +617,9 @@ object JdbcUtils extends
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15664#discussion_r94094799
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -568,10 +617,9 @@ object JdbcUtils extends
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15664#discussion_r94095175
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -568,10 +617,9 @@ object JdbcUtils extends
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15664#discussion_r94098761
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -568,10 +617,9 @@ object JdbcUtils extends
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15664#discussion_r94099004
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -211,6 +211,55 @@ object JdbcUtils extends
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15664#discussion_r94101576
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -211,6 +211,55 @@ object JdbcUtils extends
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16233
> 4. Add `AnalysisContext` to enable us to still support a view created
with CTE/Windows query.
What is `AnalysisContext`?
---
If your project is set up for it, you can reply
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16233
> Note this is compatible with the views defined by older versions of
Spark (before 2.2), which have empty defaultDatabase and all the relations in
viewText have database part defi
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/16437
[SPARK-19028] [SQL] Fixed non-thread-safe functions used in SessionCatalog
### What changes were proposed in this pull request?
Fixed non-thread-safe functions used in SessionCatalog
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16437
cc @zsxwing @cloud-fan @yhuai
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16437
A related PR: https://github.com/apache/spark/pull/12915
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/16438
[SPARK-19029] [SQL] Remove databaseName from SimpleCatalogRelation
### What changes were proposed in this pull request?
Remove the useless `databaseName` from `SimpleCatalogRelation`
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16233#discussion_r94202164
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -845,6 +846,7 @@ private[hive] class HiveClientImpl
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16233#discussion_r94202380
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicLogicalOperators.scala
---
@@ -377,6 +378,39 @@ case class
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16233#discussion_r94202491
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -2224,6 +2316,16 @@ object EliminateSubqueryAliases
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16233#discussion_r94202502
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -658,6 +719,21 @@ class Analyzer
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16233#discussion_r94202754
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -2224,6 +2316,16 @@ object EliminateSubqueryAliases
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16233#discussion_r94202769
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -658,6 +719,21 @@ class Analyzer
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16233#discussion_r94202897
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalogSuite.scala
---
@@ -465,6 +465,35 @@ class SessionCatalogSuite
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16233#discussion_r94202865
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalogSuite.scala
---
@@ -465,6 +465,35 @@ class SessionCatalogSuite
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16233
Could you also add a test case for verifying the error behaviors?
For example, in the definition of a nested view, how the Analyzer behaves when
the dependent databases or views are
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16233
Generally, the solution also looks ok to me. I think the test case coverage
needs to be improved.
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15819#discussion_r94204568
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/InsertIntoHiveTableSuite.scala
---
@@ -76,6 +76,8 @@ class InsertIntoHiveTableSuite extends
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15819
Is it possible to backport the test cases in
https://github.com/apache/spark/pull/16399?
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15664#discussion_r94207873
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcRelationProvider.scala
---
@@ -57,26 +57,28 @@ class
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15664#discussion_r94207926
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -108,14 +108,32 @@ object JdbcUtils extends
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16422#discussion_r94208116
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -300,10 +300,21 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16422#discussion_r94208490
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -300,10 +300,21 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16422
What is the behavior of DESC COLUMN for the complex/nested type (map,
struct, array)?
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15664
LGTM
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15664
Merging to master. Thanks!
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16404
LGTM cc @rxin
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16320
LGTM cc @cloud-fan
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16401#discussion_r94251460
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
---
@@ -95,6 +96,29 @@ abstract class LogicalPlan
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16401#discussion_r94251558
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -642,6 +642,13 @@ object SQLConf {
.doubleConf
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16401#discussion_r94251780
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/estimation/EstimationSuite.scala
---
@@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16401#discussion_r94253091
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
---
@@ -95,6 +96,29 @@ abstract class LogicalPlan
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16422#discussion_r94267919
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -300,10 +300,21 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16422
To get the column names and types, we do not need `DESC COLUMN`.
For retrieving the statistics, each vendor has different ways. Normally,
users can access the statistics from the
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16422
After rethinking it, `DESC EXTENDED/FORMATTED COLUMN` discloses the
data patterns/statistics info. This info is pretty sensitive. Not all
users should be allowed to access it
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16296
In the original `Create Hive Serde Table` command, users are allowed to
specify the serde properties for `ROW FORMAT SERDE`. It sounds like the unified
Create Table command is missing such a
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16437
I think we need to backport it.
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16437
@markhamstra The JIRA is updated.
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16401#discussion_r94280370
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/estimation/EstimationSuite.scala
---
@@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/16448
[SPARK-19048] [SQL] Delete Partition Location when Dropping Managed
Partitioned Tables in InMemoryCatalog
### What changes were proposed in this pull request?
The data in the managed table
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16448#discussion_r94286874
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
---
@@ -199,6 +199,52 @@ class HiveDDLSuite
assert
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16448#discussion_r94286870
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -400,13 +400,12 @@ case class AlterTableSerDePropertiesCommand
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16320
Happy New Year!
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16337#discussion_r94287611
--- Diff:
sql/core/src/test/resources/sql-tests/results/subquery/in-subquery/simple-in.sql.out
---
@@ -0,0 +1,213 @@
+-- Automatically generated by
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16447
All the recent PRs failed at the same test case:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70775/testReport/org.apache.spark.sql.streaming
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16304#discussion_r94287885
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/EventTimeWatermarkSuite.scala
---
@@ -124,6 +137,33 @@ class WatermarkSuite extends
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16449
It sounds like our watermarkTime delay calculation causes this issue. Below
are two typical cases:
Case 1: setting the watermark delay to a 1-month interval:
```Scala
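// The original snippet was truncated here. As a hedged sketch (assumed names:
// a streaming Dataset `events` with an `eventTime` timestamp column), Case 1
// would set a month-based watermark delay roughly like this:
events
  .withWatermark("eventTime", "1 month")
  .groupBy(window($"eventTime", "5 minutes"))
  .count()
// The watermark advances to max(eventTime) - delay; converting a calendar
// interval such as "1 month" into a fixed millisecond delay is where a
// discrepancy against calendar arithmetic can arise, which appears to be
// the issue discussed above.
```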
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16449
Yeah, agree. This is a bug in the test cases. Thanks!
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16448
retest this please
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16438
cc @cloud-fan @yhuai
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/14702#discussion_r94339262
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/script/ScriptTransformationExec.scala
---
@@ -0,0 +1,334 @@
+/*
+ * Licensed to
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15819
retest this please
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15819#discussion_r94341135
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala
---
@@ -54,6 +63,63 @@ case class InsertIntoHiveTable
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15819#discussion_r94341815
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/client/VersionsSuite.scala ---
@@ -216,5 +218,37 @@ class VersionsSuite extends SparkFunSuite
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16401#discussion_r94344029
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
---
@@ -95,6 +96,29 @@ abstract class LogicalPlan
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16401
LGTM, except one comment.
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16448
cc @cloud-fan @yhuai
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15819#discussion_r94352298
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/client/VersionsSuite.scala ---
@@ -216,5 +218,37 @@ class VersionsSuite extends SparkFunSuite
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16337#discussion_r94353849
--- Diff:
sql/core/src/test/resources/sql-tests/results/subquery/in-subquery/simple-in.sql.out
---
@@ -0,0 +1,176 @@
+-- Automatically generated by
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16337#discussion_r94353809
--- Diff:
sql/core/src/test/resources/sql-tests/results/subquery/in-subquery/simple-in.sql.out
---
@@ -0,0 +1,176 @@
+-- Automatically generated by
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16404
Found a bug filed in JIRA:
https://issues.apache.org/jira/browse/SPARK-19035. This PR does not resolve
it.
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15819#discussion_r94360173
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/client/VersionsSuite.scala ---
@@ -216,5 +218,37 @@ class VersionsSuite extends SparkFunSuite
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15819#discussion_r94360355
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/client/VersionsSuite.scala ---
@@ -216,5 +218,37 @@ class VersionsSuite extends SparkFunSuite
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16337#discussion_r94360417
--- Diff:
sql/core/src/test/resources/sql-tests/results/subquery/in-subquery/simple-in.sql.out
---
@@ -0,0 +1,176 @@
+-- Automatically generated by
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16320
Could you please add a test case?
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16422#discussion_r94361604
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -300,10 +300,21 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16320
The test case coverage in the suite `CSVInferSchemaSuite.scala` looks
random. I am afraid future code changes could easily break the existing
type inference rules. Could you improve it in a
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15880
Just for your reference, below is the conversion chart of MS SQL Server.
It includes both implicit and explicit conversion rules.
![screenshot 2017-01-02 23 18
56](https
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16320
@dongjoon-hyun Could you submit a backport PR to 2.1? I am unable to merge
this PR to 2.1. Thanks!
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16448
Thanks! Merging to master/2.1
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16404
```Scala
sql("select a + rand() from testData2 group by a, a + rand()").explain(true)
```
After merging this PR, I am afraid we might hit a common
misunderstanding
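One way users typically make the intent explicit (a hedged sketch, not the PR's change; `testData2` is the table from the snippet above) is to compute the non-deterministic expression once in a subquery and group by its alias, so only a single `rand()` is evaluated per row:
```Scala
// Sketch: materialize a + rand() once, then group by the alias, so the
// SELECT list and GROUP BY refer to the same computed value.
sql("""
  SELECT expr
  FROM (SELECT a + rand() AS expr FROM testData2)
  GROUP BY expr
""").explain(true)
```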
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16460#discussion_r94518349
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/InsertIntoHadoopFsRelationCommand.scala
---
@@ -74,12 +69,29 @@ case class
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16460#discussion_r94532732
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/InsertIntoHadoopFsRelationCommand.scala
---
@@ -152,4 +190,29 @@ case class
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16460#discussion_r94533881
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -473,22 +473,26 @@ case class DataSource
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16404
DB2 has such a limit. See the error message `SQL -583`:
http://www.ibm.com/support/knowledgecenter/SSEPGG_10.5.0/com.ibm.db2.luw.messages.sql.doc/doc/msql00583n.html
> The rout
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16404
Oracle allows it. It sounds like they treat `(username ||
dbms_random.string('a', 10))` in aggregate and group-by as the same expression.
```SQL
SQL> sel
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16404
MySQL treats them differently...
```SQL
mysql> select c1, concat(rand(), c1) from t1 group by c1;
+--+--+
| c1 | concat(rand(),
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16463
LGTM. Thanks, merging to 2.1!
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16463
Could you please close it and open one for branch 2.0? Thanks!
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16460#discussion_r94658875
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -473,22 +473,26 @@ case class DataSource
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16460#discussion_r94659408
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -473,22 +473,26 @@ case class DataSource
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16460#discussion_r94670491
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/InsertIntoHadoopFsRelationCommand.scala
---
@@ -74,12 +69,30 @@ case class
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16460
LGTM except one minor comment.
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16296#discussion_r94699687
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/HiveOptions.scala
---
@@ -0,0 +1,90 @@
+/*
+ * Licensed to the Apache
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16296#discussion_r94700700
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala ---
@@ -18,14 +18,79 @@
package org.apache.spark.sql.hive
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16472
Thanks! Merged to Spark 2.0.
Could you please close it?
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16422
Column-level security can block users from accessing specific columns, but
this command, `DESC EXTENDED/FORMATTED COLUMN`, might not be part of the
design/solution.
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16337#discussion_r94718082
--- Diff:
sql/core/src/test/resources/sql-tests/results/subquery/in-subquery/in-group-by.sql.out
---
@@ -0,0 +1,357 @@
+-- Automatically generated
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16337
I compared the results and confirmed they are consistent. LGTM
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15819
retest this please
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15819#discussion_r94719028
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/client/VersionsSuite.scala ---
@@ -216,5 +219,37 @@ class VersionsSuite extends SparkFunSuite
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15819#discussion_r94719123
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala
---
@@ -54,6 +63,63 @@ case class InsertIntoHiveTable
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15819#discussion_r94719375
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/client/VersionsSuite.scala ---
@@ -216,5 +219,37 @@ class VersionsSuite extends SparkFunSuite
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15819
retest this please