Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12734#issuecomment-215198362
@liancheng The last commit adds a new test.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12734#discussion_r61318397
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -264,9 +265,16 @@ class SparkSqlAstBuilder(conf: SQLConf) extends
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12734#issuecomment-215191068
oh, I cannot change it. @liancheng will change the title after he gets up :)
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12734#issuecomment-215188645
Yea. https://issues.apache.org/jira/browse/SPARK-14954 is the jira.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12734#discussion_r61305260
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -264,9 +265,16 @@ class SparkSqlAstBuilder(conf: SQLConf) extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12734#discussion_r61303166
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -264,9 +265,16 @@ class SparkSqlAstBuilder(conf: SQLConf) extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12734#discussion_r61302579
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -264,9 +265,16 @@ class SparkSqlAstBuilder(conf: SQLConf) extends
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12734#issuecomment-215164079
For `DataFrameWriter`, can we do `sortBy` without using `bucketBy`?
---
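The `sortBy` question above can be made concrete with a short sketch. In the `DataFrameWriter` API of that era, `sortBy` is only valid together with `bucketBy`; the snippet below is a minimal illustration, assuming a live `SparkSession` named `spark`, a DataFrame `df`, and made-up column/table names:

```scala
// Sketch only: assumes a DataFrame `df` with columns `key` and `value`.
df.write
  .bucketBy(8, "key")   // hash-partition rows into 8 buckets by `key`
  .sortBy("value")      // sort rows within each bucket by `value`
  .saveAsTable("bucketed_sorted")

// By contrast, sortBy without bucketBy is rejected at write time
// with an AnalysisException ("sortBy must be used together with bucketBy"):
// df.write.sortBy("value").saveAsTable("sorted_only")
```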
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12724#issuecomment-215159444
test this please
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12724#issuecomment-215102775
test this please
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12728#issuecomment-214991635
LGTM
---
Repository: spark
Updated Branches:
refs/heads/master b2a456064 -> d73d67f62
[SPARK-14944][SPARK-14943][SQL] Remove HiveConf from HiveTableScanExec,
HiveTableReader, and ScriptTransformation
## What changes were proposed in this pull request?
This patch removes HiveConf from
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12727#issuecomment-214985783
Merging to master!
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12727#issuecomment-214985763
LGTM
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12714#discussion_r61207500
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala
---
@@ -1391,4 +1393,99 @@ class SQLQuerySuite extends QueryTest
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12724#issuecomment-214969720
test this please
---
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/12724
[SPARK-14783] [SPARK-14786] [BRANCH-1.6] Preserve full exception stacktrace
in IsolatedClientLoader and Remove hive-cli dependency from hive subproject
This PR is the branch-1.6 version
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12703#discussion_r61176555
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala
---
@@ -112,3 +116,107 @@ case class ExplainCommand(
("
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12689#issuecomment-214905568
lgtm
---
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/12714
[SPARK-14130] [SQL] Throw exceptions for ALTER TABLE ADD/REPLACE/CHANGE
COLUMN, ALTER TABLE SET FILEFORMAT, DBFS, and transaction related commands
## What changes were proposed in this pull request?
Repository: spark
Updated Branches:
refs/heads/master 92f66331b -> 5cb03220a
[SPARK-14912][SQL] Propagate data source options to Hadoop configuration
## What changes were proposed in this pull request?
We currently have no way for users to propagate options to the underlying
library that
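The change described above lets per-read options reach the Hadoop `Configuration` seen by file-based sources. A hedged sketch of the user-facing side, assuming a live `SparkSession` named `spark` (the option key below is hypothetical, not from the patch):

```scala
// Sketch only: assumes SparkSession `spark`; "mylib.buffer.size" is a
// made-up key standing in for a setting the underlying library reads.
val df = spark.read
  .format("text")
  // After SPARK-14912, options like this are merged into the Hadoop
  // Configuration handed to the underlying input format.
  .option("mylib.buffer.size", "65536")
  .load("/path/to/data")
```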
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12688#issuecomment-214829104
Thanks. Merging to master!
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12679#issuecomment-214827797
Thanks!
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12687#issuecomment-214825127
[Uploading nativeCommand.txt…]()
---
Github user yhuai closed the pull request at:
https://github.com/apache/spark/pull/12687
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12688#issuecomment-214622278
LGTM
---
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/12687
[SQL] Log native commands to find the number of tests that will be affected
if we do not delegate any statements to Hive
For Spark 2.0, Spark will handle all queries and statements. This PR aims
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12672#issuecomment-214591014
LGTM. Let's fix the test and get it in.
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12672#issuecomment-214554637
Maybe also include a sanity check to make sure it can be created and try
some basic functions?
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12585#issuecomment-214508281
LGTM
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12659#issuecomment-214488360
LGTM
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12659#discussion_r60971154
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -622,40 +620,44 @@ class SessionCatalog
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12662#issuecomment-214481742
Just a note. The test in that file is
```
create table test (a int) stored as inputformat
'org.apache.hadoop.hive.ql.io.RCFileInputFormat' outputformat
Repository: spark
Updated Branches:
refs/heads/master c7758ba38 -> 88e54218d
[SPARK-14892][SQL][TEST] Disable the HiveCompatibilitySuite test case for
INPUTDRIVER and OUTPUTDRIVER.
What changes were proposed in this pull request?
Disable the test case involving INPUTDRIVER and
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12662#issuecomment-214481012
LGTM Merging to master.
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12643#issuecomment-214030487
LGTM. Do we have tests that create tables when `spark.sql.caseSensitive`
is true?
---
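A test along the lines asked about above could look like the following minimal sketch, assuming a live `SparkSession` named `spark`; the table and column names are illustrative:

```scala
// Sketch only: names are made up.
spark.conf.set("spark.sql.caseSensitive", "true")
// With case sensitivity on, `id` and `ID` are distinct columns; with it
// off, the duplicate-column check should reject this statement.
spark.sql("CREATE TABLE tbl (id INT, ID INT)")
spark.conf.set("spark.sql.caseSensitive", "false")
```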
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/12654
[SPARK-14885] [SQL] When creating a CatalogColumn, we should use the
catalogString of a DataType object.
## What changes were proposed in this pull request?
Right now, the data type field
CreateMetastoreDataSource and CreateMetastoreDataSourceAsSelect are not
Hive-specific. So, this PR moves them from sql/hive to sql/core. Also, I am
adding `Command` suffix to these two classes.
## How was this patch tested?
Existing tests.
Author: Yin Huai <yh...@databricks.com>
Closes #12645 from yhuai/moveCreateDataSource.
Project: http:
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12645#issuecomment-213892106
OK. Thanks. Will send out a follow-up pr.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12645#discussion_r60839199
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala
---
@@ -0,0 +1,452 @@
+/*
+ * Licensed
Github user yhuai closed the pull request at:
https://github.com/apache/spark/pull/9702
---
Github user yhuai closed the pull request at:
https://github.com/apache/spark/pull/12410
---
Github user yhuai closed the pull request at:
https://github.com/apache/spark/pull/12367
---
Github user yhuai closed the pull request at:
https://github.com/apache/spark/pull/12372
---
Repository: spark
Updated Branches:
refs/heads/master e3c1366bb -> 162e12b08
[SPARK-14877][SQL] Remove HiveMetastoreTypes class
## What changes were proposed in this pull request?
It is unnecessary as DataType.catalogString largely replaces the need for this
class.
## How was this patch
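The `DataType.catalogString` mentioned above renders a type in the metastore's type syntax, which is what `HiveMetastoreTypes` used to produce. A small sketch (the exact rendered string is indicative, not quoted from the patch):

```scala
import org.apache.spark.sql.types._

// catalogString renders a DataType in Hive metastore type syntax,
// which is why a separate HiveMetastoreTypes class became redundant.
val t = StructType(Seq(
  StructField("id", IntegerType),
  StructField("tags", ArrayType(StringType))
))
t.catalogString  // e.g. "struct<id:int,tags:array<string>>"
```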
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12644#issuecomment-213849631
Thanks! Merging to master!
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12645#discussion_r60835970
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala
---
@@ -0,0 +1,436 @@
+/*
+ * Licensed
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12645#discussion_r60835975
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala
---
@@ -0,0 +1,436 @@
+/*
+ * Licensed
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/12645
[SPARK-14879] [SQL] Move CreateMetastoreDataSource and
CreateMetastoreDataSourceAsSelect to sql/core
## What changes were proposed in this pull request?
CreateMetastoreDataSource
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12644#issuecomment-213842068
LGTM
---
Repository: spark
Updated Branches:
refs/heads/master 890abd127 -> e3c1366bb
[SPARK-14865][SQL] Better error handling for view creation.
## What changes were proposed in this pull request?
This patch improves error handling in view creation. CreateViewCommand itself
will analyze the view SQL
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12633#issuecomment-213823889
LGTM. Merging to master. Thanks!
---
[SPARK-14872][SQL] Restructure command package
## What changes were proposed in this pull request?
This patch restructures sql.execution.command package to break the commands
into multiple files, in some logical organization: databases, tables, views,
functions.
I also renamed
Repository: spark
Updated Branches:
refs/heads/master fddd3aee0 -> 5c8a0ec99
http://git-wip-us.apache.org/repos/asf/spark/blob/5c8a0ec9/sql/core/src/main/scala/org/apache/spark/sql/execution/basicPhysicalOperators.scala
--
Repository: spark
Updated Branches:
refs/heads/master ee6b209a9 -> fddd3aee0
[SPARK-14871][SQL] Disable StatsReportListener to declutter output
## What changes were proposed in this pull request?
Spark SQL inherited from Shark to use the StatsReportListener. Unfortunately
this clutters the
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12636#issuecomment-213817228
lgtm
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12635#issuecomment-213817153
lgtm
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12589#discussion_r60822234
--- Diff:
repl/scala-2.10/src/main/scala/org/apache/spark/repl/SparkILoop.scala ---
@@ -1026,21 +1025,7 @@ class SparkILoop
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12589#discussion_r60800495
--- Diff:
repl/scala-2.10/src/main/scala/org/apache/spark/repl/SparkILoop.scala ---
@@ -1026,21 +1025,7 @@ class SparkILoop
Github user yhuai closed the pull request at:
https://github.com/apache/spark/pull/12332
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12332#issuecomment-213499097
ok. I am closing it. I will submit a new PR later once I update it.
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12602#issuecomment-213295778
LGTM
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12527#issuecomment-213230661
lgtm
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12527#discussion_r60686408
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileScanRDD.scala
---
@@ -131,4 +134,23 @@ class FileScanRDD
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12588#issuecomment-213224429
Seems the failed test
(https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/56629/testReport/org.apache.spark.sql.hive.execution/SQLQuerySuite
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12594#issuecomment-213209658
LGTM
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12588#issuecomment-213188904
LGTM
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12584#issuecomment-213154008
LGTM
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12081#issuecomment-213145668
@gatorsmile Just left a few comments. For the third item in description
(`Third, the property value of java.io.tmpdi...`), seems the explanation is not
finished
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12081#discussion_r60665413
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
---
@@ -348,4 +363,99 @@ class HiveDDLSuite extends QueryTest
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12081#discussion_r60663960
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -52,7 +52,10 @@ abstract class NativeDDLCommand(val sql: String
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12081#discussion_r60663912
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -121,8 +123,11 @@ class SessionCatalog
Repository: spark
Updated Branches:
refs/heads/master 8e1bb0456 -> a2e8d4fdd
[SPARK-13643][SQL] Implement SparkSession
## What changes were proposed in this pull request?
After removing most of `HiveContext` in
8fc267ab3322e46db81e725a5cb1adb5a71b2b4d we can now move existing functionality
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12553#issuecomment-213117687
LGTM. Thanks. Merging to master.
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12564#issuecomment-213114916
LGTM
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12584#issuecomment-213114625
LGTM
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12580#issuecomment-213091125
@srowen @rxin @JoshRosen How's this version?
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12580#issuecomment-213090150
@JoshRosen https://issues.apache.org/jira/browse/SPARK-14818 is the jira
for updating mima exclusions.
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12580#issuecomment-213085351
@srowen I am moving this into sql.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12527#discussion_r60643219
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileScanRDD.scala
---
@@ -131,4 +134,23 @@ class FileScanRDD
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12580#discussion_r60635076
--- Diff: project/SparkBuild.scala ---
@@ -254,7 +254,7 @@ object SparkBuild extends PomBuild {
val mimaProjects = allProjects.filterNot { x
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/12580
[SPARK-14807] Create a compatibility module
## What changes were proposed in this pull request?
This PR creates a compatibility module, which will host HiveContext in
Spark 2.0 (moving
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12561#issuecomment-213045905
LGTM
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12567#issuecomment-213045030
LGTM
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12527#issuecomment-213037505
Just want to add a note. For that test case, we have a join that only
shuffles one side of the input, so we have both preferred locations of original
input files as well
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12566#issuecomment-213029923
The current changes look good!
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12564#issuecomment-212773354
https://github.com/apache/spark/pull/12564/commits/d8137838dde92a6ad00e17cbd5aa2745881583d0
looks good
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12561#issuecomment-212763121
https://github.com/apache/spark/pull/12561/commits/c878aef1ea7bb61e29961253a2e3510954b57348
looks good
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12558#issuecomment-212758646
lgtm
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12556#issuecomment-212751532
LGTM
---
Repository: spark
Updated Branches:
refs/heads/master 90933e2af -> 804581411
[SPARK-14782][SPARK-14778][SQL] Remove HiveConf dependency from
HiveSqlAstBuilder
## What changes were proposed in this pull request?
The patch removes HiveConf dependency from HiveSqlAstBuilder. This is required
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12550#issuecomment-212732856
Merging to master.
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12550#issuecomment-212732569
LGTM
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12553#issuecomment-212707420
Thank you! This one looks good. Seems MIMA is complaining about the following.
```
[error] * method getSchema(java.lang.Class)scala.collection.Seq in class
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12554#issuecomment-212700785
Thanks! LGTM
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12412#discussion_r60515328
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/commands.scala ---
@@ -329,3 +335,134 @@ case class CreateMetastoreDataSourceAsSelect
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12412#discussion_r60515202
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/HiveSqlParser.scala
---
@@ -234,6 +240,25 @@ class HiveSqlAstBuilder(hiveConf: HiveConf
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12543#issuecomment-212670225
LGTM
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12538#issuecomment-212651536
LGTM
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12540#issuecomment-212651385
LGTM!
---