Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13019#discussion_r62796071
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/encoders/RowEncoderSuite.scala
---
@@ -143,21 +143,35 @@ class RowEncoderSuite extends
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12616#issuecomment-218323375
LGTM pending jenkins
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Repository: spark
Updated Branches:
refs/heads/master 86475520f -> d9ca9fd3e
[SPARK-14837][SQL][STREAMING] Added support in file stream source for reading
new files added to subdirs
## What changes were proposed in this pull request?
Currently, file stream source can only find new files if
Repository: spark
Updated Branches:
refs/heads/branch-2.0 f021f3460 -> d8c2da9a4
[SPARK-14837][SQL][STREAMING] Added support in file stream source for reading
new files added to subdirs
## What changes were proposed in this pull request?
Currently, file stream source can only find new files
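Spark's actual fix lives in `FileStreamSource.scala` (see the diff comments below); as a minimal, language-neutral sketch of the idea, with hypothetical names, the source must walk the whole directory tree each batch and diff against the set of files already seen, rather than globbing only the top level:

```python
import os

def find_new_files(root, seen):
    """Recursively scan `root` and return paths not yet in `seen`.

    Mirrors the idea in SPARK-14837: a listing of only the top-level
    directory misses files dropped into newly created subdirectories,
    so walk the entire tree and track what has been seen.
    """
    new_files = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if path not in seen:
                seen.add(path)
                new_files.append(path)
    return new_files
```

A second call after a subdirectory appears returns only the files inside it.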
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12616#issuecomment-218324187
Thanks! Merging to master and branch 2.0.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13013#discussion_r62742240
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala
---
@@ -192,6 +192,11 @@ case class
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13013#issuecomment-218275313
Thanks! It's great to have those documented!
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13013#discussion_r62741975
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala
---
@@ -192,6 +192,11 @@ case class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13013#discussion_r62741785
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala
---
@@ -192,6 +192,11 @@ case class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13013#discussion_r62740848
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -444,8 +447,10 @@ final class DataFrameWriter private[sql](df:
DataFrame
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13013#discussion_r62740941
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -351,6 +351,9 @@ final class DataFrameWriter private[sql](df: DataFrame
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13013#discussion_r62740925
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -444,8 +447,10 @@ final class DataFrameWriter private[sql](df:
DataFrame
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13013#discussion_r62740666
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -351,6 +351,9 @@ final class DataFrameWriter private[sql](df: DataFrame
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12616#discussion_r62739836
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSourceSuite.scala
---
@@ -444,6 +445,79 @@ class FileStreamSourceSuite extends
Repository: spark
Updated Branches:
refs/heads/branch-2.0 5a4a188fe -> 0ab195886
[SPARK-14986][SQL] Return correct result for empty LATERAL VIEW OUTER
## What changes were proposed in this pull request?
A Generate with the `outer` flag enabled should always return one or more rows
for every
Repository: spark
Updated Branches:
refs/heads/master 89f73f674 -> d28c67544
[SPARK-14986][SQL] Return correct result for empty LATERAL VIEW OUTER
## What changes were proposed in this pull request?
A Generate with the `outer` flag enabled should always return one or more rows
for every
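The semantics being fixed can be sketched outside Spark: with the `outer` flag, a generator that produces no output still emits one row, with a null generated value, instead of dropping the input row. A minimal Python sketch (names hypothetical, not Spark's `Generate` operator):

```python
def outer_explode(rows, key):
    """Sketch of `LATERAL VIEW OUTER explode(...)` semantics: each
    element of row[key] yields one output row; an empty or missing
    collection still yields one row, with None as the generated value.
    Without OUTER, that input row would produce no output at all.
    """
    out = []
    for row in rows:
        values = row.get(key) or []
        if not values:  # OUTER: keep the row, null out the generator column
            out.append({**row, "col": None})
        else:
            for v in values:
                out.append({**row, "col": v})
    return out
```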
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12906#issuecomment-218269756
Thank you! Merging to master and branch 2.0.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12616#discussion_r62737973
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FileStreamSource.scala
---
@@ -33,12 +35,14 @@ import
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12616#discussion_r62738019
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FileStreamSource.scala
---
@@ -97,21 +102,30 @@ class FileStreamSource
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12616#discussion_r62737571
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FileStreamSource.scala
---
@@ -97,21 +102,30 @@ class FileStreamSource
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12616#discussion_r62737506
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FileStreamSource.scala
---
@@ -97,21 +102,30 @@ class FileStreamSource
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12906#issuecomment-218231074
Let's ask jenkins to have one more test run.
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12906#issuecomment-218230981
LGTM
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12906#issuecomment-218230958
test this please
---
Repository: spark
Updated Branches:
refs/heads/master aab99d31a -> 8a12580d2
[SPARK-14127][SQL] "DESC <table>": Extracts schema information from table
properties for data source tables
## What changes were proposed in this pull request?
This is a follow-up of #12934 and #12844. This PR adds a set
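For data source tables, Spark stores the schema JSON split across several table properties to stay under the metastore's per-value size limit (keys of the form `spark.sql.sources.schema.numParts` and `spark.sql.sources.schema.part.N`). Reassembling it looks roughly like this Python sketch; the property names follow Spark's convention, the function name is hypothetical:

```python
def read_schema_json(props):
    """Reassemble a schema JSON string that was split across table
    properties. Returns None if the table has no stored schema parts.
    """
    num_parts = props.get("spark.sql.sources.schema.numParts")
    if num_parts is None:
        return None  # not a data source table with a split schema
    parts = [props["spark.sql.sources.schema.part.%d" % i]
             for i in range(int(num_parts))]
    return "".join(parts)
```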
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13025#issuecomment-218204902
LGTM. Merging to master and branch 2.0
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13025#discussion_r62699825
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -489,9 +486,83 @@ private[sql] object DDLUtils {
case
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13025#discussion_r62698532
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -489,9 +486,83 @@ private[sql] object DDLUtils {
case
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12618#issuecomment-218048161
Yea, let's be consistent on what we allow for table and db names. We have a
utility function `CreateDataSourceTableUtils.validateName`, which does the same
check as hive
Repository: spark
Updated Branches:
refs/heads/branch-2.0 6a5ec08ea -> 1bcbf6157
[SPARK-15025][SQL] fix duplicate of PATH key in datasource table options
## What changes were proposed in this pull request?
The issue is that when the user provides the path option with uppercase "PATH"
key,
Repository: spark
Updated Branches:
refs/heads/master 3323d0f93 -> 980bba0dc
[SPARK-15025][SQL] fix duplicate of PATH key in datasource table options
## What changes were proposed in this pull request?
The issue is that when the user provides the path option with uppercase "PATH"
key,
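The bug class here is a case-sensitive options map: a user-supplied "PATH" and an internally added "path" land as two distinct entries. Folding keys on insertion prevents the duplicate; a minimal sketch (not Spark's actual `CaseInsensitiveMap`, which is Scala and immutable):

```python
class CaseInsensitiveOptions(dict):
    """Options map that folds keys to lower case, so "PATH" and "path"
    cannot coexist as two distinct entries (the duplicate-key bug
    described above)."""

    def __setitem__(self, key, value):
        super().__setitem__(key.lower(), value)

    def __getitem__(self, key):
        return super().__getitem__(key.lower())

    def __contains__(self, key):
        return super().__contains__(key.lower())
```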
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12804#issuecomment-218026918
LGTM. Merging to master and branch 2.0.
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12804#issuecomment-218001757
test this please
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12939#discussion_r62572000
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/package.scala ---
@@ -162,7 +162,8 @@ package object util {
def
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12204#issuecomment-217986292
How about we revisit it after we release 2.0? For a `CatalogTable`, we
should always set schema, partition columns, bucketing columns, and sorting
columns no matter
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12988#issuecomment-217981494
@hvanhovell How about we also add the regression tests to
`LogicalPlanToSQLSuite` (in a separate PR)? It will be good to reduce the
dependency on HiveCompatibilitySuite
Repository: spark
Updated Branches:
refs/heads/branch-2.0 40d24686a -> bf53b96b5
[SPARK-15173][SQL] DataFrameWriter.insertInto should work with datasource table
stored in hive
When we parse `CREATE TABLE USING`, we should build a `CreateTableUsing` plan
with the `managedIfNoPath` set to
Repository: spark
Updated Branches:
refs/heads/master c3e23bc0c -> 2adb11f6d
[SPARK-15173][SQL] DataFrameWriter.insertInto should work with datasource table
stored in hive
When we parse `CREATE TABLE USING`, we should build a `CreateTableUsing` plan
with the `managedIfNoPath` set to true.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12949#discussion_r62561309
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -239,8 +239,13 @@ case class DataSource
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12949#issuecomment-217970297
Thanks! Merging to master and branch 2.0.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12949#discussion_r62560894
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -239,8 +239,13 @@ case class DataSource
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12945#issuecomment-217942790
If possible, can you open a PR to just add the new API and deprecate the
old API?
---
Repository: spark
Updated Branches:
refs/heads/master b1e01fd51 -> 671b382a8
[SPARK-14127][SQL] Makes 'DESC [EXTENDED|FORMATTED] <table>' support data
source tables
## What changes were proposed in this pull request?
This is a follow-up of PR #12844. It makes the newly updated
Repository: spark
Updated Branches:
refs/heads/branch-2.0 29bc8d2ec -> de6afc887
[SPARK-14127][SQL] Makes 'DESC [EXTENDED|FORMATTED] <table>' support data
source tables
## What changes were proposed in this pull request?
This is a follow-up of PR #12844. It makes the newly updated
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12934#issuecomment-217937962
LGTM. Merging to master and branch 2.0.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12804#discussion_r62541948
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/MetastoreDataSourcesSuite.scala
---
@@ -1033,4 +1035,23 @@ class MetastoreDataSourcesSuite
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12871#issuecomment-217788838
Thanks!
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12871#issuecomment-217787634
The PR looks good. Let's resolve the conflicts and get it in.
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12871#issuecomment-217787572
btw, what's the reason for having
https://github.com/apache/spark/pull/12871/commits/aefade3924b52ab05f26d9a8af4f63555e243b24?
(where do we turn the string to its lower
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12871#discussion_r62454941
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/InMemoryCatalog.scala
---
@@ -73,6 +79,8 @@ class InMemoryCatalog extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12949#discussion_r62454544
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -239,8 +239,13 @@ case class DataSource
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12947#discussion_r62420950
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ExistingRDD.scala ---
@@ -224,7 +236,9 @@ private[sql] case class BatchedDataSourceScanExec
Repository: spark
Updated Branches:
refs/heads/branch-2.0 22f9f5f97 -> dc1562e97
[SPARK-14997][SQL] Fixed FileCatalog to return correct set of files when there
is no partitioning scheme in the given paths
## What changes were proposed in this pull request?
Let's say there are JSON files in
Repository: spark
Updated Branches:
refs/heads/master e20cd9f4c -> f7b7ef416
[SPARK-14997][SQL] Fixed FileCatalog to return correct set of files when there
is no partitioning scheme in the given paths
## What changes were proposed in this pull request?
Let's say there are JSON files in the
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12856#issuecomment-217571587
Thanks! Merging to master and 2.0.
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12856#issuecomment-217544431
LGTM! Those tests are awesome!
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12856#discussion_r62382369
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileCatalogSuite.scala
---
@@ -0,0 +1,68 @@
+/*
+ * Licensed
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12856#discussion_r62382277
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileCatalogSuite.scala
---
@@ -0,0 +1,68 @@
+/*
+ * Licensed
Repository: spark
Updated Branches:
refs/heads/master 76ad04d9a -> 5c8fad7b9
[SPARK-15108][SQL] Describe Permanent UDTF
What changes were proposed in this pull request?
When describing a UDTF, the command returns a wrong result. The command is unable
to find the function, which has been
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12885#issuecomment-217525623
LGTM. Merging to master and branch 2.0.
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12885#issuecomment-217525644
Thanks!
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12885#discussion_r62371164
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/functions.scala
---
@@ -115,22 +116,23 @@ case class DescribeFunction
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12885#discussion_r62370858
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/functions.scala
---
@@ -115,22 +116,23 @@ case class DescribeFunction
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12947#issuecomment-217510156
@clockfly This PR does not truncate those long strings caused by long
paths, right?
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12897#issuecomment-217507341
@hvanhovell Thank you for looking at it! I have closed the jira.
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12897#issuecomment-217506700
@hvanhovell +1. Let's not make any change.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12949#discussion_r62354994
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -239,8 +239,13 @@ case class DataSource
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12897#issuecomment-217491293
@hvanhovell What is the behavior of 1.6? Does 1.6 treat `L` as a suffix for
a bigint literal?
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12856#discussion_r62296892
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningAwareFileCatalog.scala
---
@@ -61,7 +61,31 @@ abstract class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12856#discussion_r62296796
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningAwareFileCatalog.scala
---
@@ -61,7 +61,31 @@ abstract class
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12871#issuecomment-217369151
looks pretty good. Left a few comments.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12871#discussion_r62296164
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/catalog/ExternalCatalogSuite.scala
---
@@ -488,6 +491,79 @@ abstract class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12871#discussion_r62296175
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/catalog/ExternalCatalogSuite.scala
---
@@ -488,6 +491,79 @@ abstract class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12871#discussion_r62295951
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/InMemoryCatalog.scala
---
@@ -272,9 +362,29 @@ class InMemoryCatalog extends
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-217337836
@gatorsmile I am not sure we should ban dropping multiple partitions in a
single call, which is a useful command.
I just took a look at our implementation
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12781#discussion_r62271545
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/ShowCreateTableSuite.scala ---
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12934#issuecomment-217308885
LGTM. Can we also convert the
schema/partitionColumn/bucketingColumn/sortingColumn strings to the
ColumnColumn to display them nicely?
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12856#discussion_r62143147
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningAwareFileCatalog.scala
---
@@ -61,7 +61,31 @@ abstract class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12856#discussion_r62142868
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/sources/HadoopFsRelationTest.scala
---
@@ -486,7 +488,151 @@ abstract class HadoopFsRelationTest
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12856#discussion_r62142699
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/sources/HadoopFsRelationTest.scala
---
@@ -486,7 +488,143 @@ abstract class HadoopFsRelationTest
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12856#discussion_r62142173
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningAwareFileCatalog.scala
---
@@ -61,7 +61,31 @@ abstract class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12856#discussion_r62142196
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningAwareFileCatalog.scala
---
@@ -61,7 +61,31 @@ abstract class
Repository: spark
Updated Branches:
refs/heads/master 8fb1463d6 -> ef55e46c9
[SPARK-14993][SQL] Fix Partition Discovery Inconsistency when Input is a Path
to Parquet File
What changes were proposed in this pull request?
When we load a dataset, if we set the path to ```/path/a=1```, we
Repository: spark
Updated Branches:
refs/heads/branch-2.0 d90359d63 -> 689b0fc81
[SPARK-14993][SQL] Fix Partition Discovery Inconsistency when Input is a Path
to Parquet File
What changes were proposed in this pull request?
When we load a dataset, if we set the path to ```/path/a=1```,
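Partition discovery parses `key=value` directory segments from file paths relative to the user-supplied base path; the inconsistency arises over whether a supplied path like `/path/a=1` is itself treated as a partition directory. A hedged sketch of the segment parsing (function name hypothetical, not Spark's `PartitioningUtils`):

```python
def parse_partition_values(path, base_path):
    """Parse key=value partition directories from `path`, relative to
    `base_path`. E.g. base /data with path /data/a=1/b=2/part-0.parquet
    gives {"a": "1", "b": "2"}; segments are only treated as partition
    directories *below* the base path, which is the distinction at the
    heart of the inconsistency described above.
    """
    rel = path[len(base_path):].strip("/")
    values = {}
    for segment in rel.split("/"):
        if "=" in segment:
            key, _, value = segment.partition("=")
            values[key] = value
    return values
```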
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12828#issuecomment-217056363
LGTM. Merging to master and branch 2.0
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12885#issuecomment-217055817
I do not think we need to do anything when we switch the database.
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12872#issuecomment-217054995
Merged to master and branch 2.0.
---
Repository: spark
Updated Branches:
refs/heads/branch-2.0 fa3c5507f -> d90359d63
[SPARK-6339][SQL] Supports CREATE TEMPORARY VIEW tableIdentifier AS query
## What changes were proposed in this pull request?
This PR supports the new SQL syntax CREATE TEMPORARY VIEW.
Like:
```
CREATE TEMPORARY VIEW
```
Repository: spark
Updated Branches:
refs/heads/master fa79d346e -> 8fb1463d6
[SPARK-6339][SQL] Supports CREATE TEMPORARY VIEW tableIdentifier AS query
## What changes were proposed in this pull request?
This PR supports the new SQL syntax CREATE TEMPORARY VIEW.
Like:
```
CREATE TEMPORARY VIEW
```
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12872#issuecomment-217046086
LGTM
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12872#discussion_r62136126
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/views.scala ---
@@ -71,29 +87,59 @@ case class CreateViewCommand(
require
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12872#discussion_r62136118
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/views.scala ---
@@ -71,29 +87,59 @@ case class CreateViewCommand(
require
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12804#issuecomment-217039037
ok to test.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12872#discussion_r62131182
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/views.scala ---
@@ -71,29 +87,59 @@ case class CreateViewCommand(
require
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12885#discussion_r62130890
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/functions.scala
---
@@ -129,7 +129,16 @@ case class DescribeFunction
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12897#discussion_r62130296
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/ExpressionParserSuite.scala
---
@@ -73,8 +73,11 @@ class ExpressionParserSuite
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12897#discussion_r62128679
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala
---
@@ -800,7 +800,18 @@ class AstBuilder extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12897#discussion_r62128718
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala
---
@@ -800,7 +800,18 @@ class AstBuilder extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12897#discussion_r62121313
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/ExpressionParserSuite.scala
---
@@ -73,8 +73,11 @@ class ExpressionParserSuite
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12890#discussion_r62119745
--- Diff: repl/scala-2.11/src/main/scala/org/apache/spark/repl/Main.scala
---
@@ -71,35 +71,32 @@ object Main extends Logging