Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13829#discussion_r68131571
--- Diff:
sql/catalyst/src/main/java/org/apache/spark/sql/catalyst/expressions/codegen/BufferHolder.java
---
@@ -55,6 +60,11 @@ public BufferHolder
GitHub user clockfly opened a pull request:
https://github.com/apache/spark/pull/13829
[SPARK-16071][SQL] Checks size limit when doubling the array size in
BufferHolder
## What changes were proposed in this pull request?
This PR checks the size limit when doubling the
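The growth check described in SPARK-16071 can be sketched as follows. This is an illustrative snippet with hypothetical names (`BufferGrow`, `grownCapacity`, `MAX_ARRAY_SIZE`), not Spark's actual BufferHolder code: the point is to clamp the doubled capacity at the JVM's array-size limit instead of letting `2 * length` overflow to a negative int.

```java
// Illustrative sketch of an overflow-safe "double the array" policy;
// names are hypothetical, not Spark's actual BufferHolder implementation.
class BufferGrow {
    // Roughly the largest array the JVM can allocate.
    static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;

    // New capacity after growing: double the current size (or take the
    // requested size if larger), clamped to the limit. Rejects requests
    // that can never fit in a single JVM array.
    static int grownCapacity(int current, int needed) {
        if (needed > MAX_ARRAY_SIZE) {
            throw new IllegalArgumentException(
                "Cannot grow: requested size exceeds the array size limit");
        }
        long target = Math.max((long) current * 2, (long) needed);
        return (int) Math.min(target, (long) MAX_ARRAY_SIZE);
    }
}
```

Doing the arithmetic in `long` before clamping is what makes the check safe: `current * 2` as an `int` can wrap negative near the limit.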
Github user clockfly commented on the issue:
https://github.com/apache/spark/pull/13749
retest this please.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13749#discussion_r67597567
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -435,26 +435,25 @@ case class DataSource
GitHub user clockfly opened a pull request:
https://github.com/apache/spark/pull/13749
[SPARK-16034][SQL][WIP] Checks the partition columns when calling
dataFrame.write.mode("append").saveAsTable
## What changes were proposed in this pull request?
DataFrameWri
GitHub user clockfly opened a pull request:
https://github.com/apache/spark/pull/13743
[SPARK-15916][SQL] JDBC filter push down should respect operator precedence
## What changes were proposed in this pull request?
This PR fixes the problem that the precedence order is
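The precedence problem fixed in SPARK-15916 can be illustrated with a minimal sketch: when composing compiled filter fragments into a WHERE clause, each operand is parenthesized so SQL's precedence rules (AND binds tighter than OR) cannot regroup the expression. The helper names below are hypothetical, not Spark's JDBCRDD code.

```java
// Hypothetical helpers showing the parenthesization rule; Spark's real
// JDBC filter compilation lives in JDBCRDD and differs in detail.
class JdbcFilterSql {
    // Wrap each compiled operand in parentheses so SQL precedence
    // cannot reassociate the pushed-down filter tree.
    static String and(String left, String right) {
        return "(" + left + ") AND (" + right + ")";
    }

    static String or(String left, String right) {
        return "(" + left + ") OR (" + right + ")";
    }
}
```

Without the parentheses, a tree meaning `(a = 1 OR b = 2) AND c = 3` would be emitted as `a = 1 OR b = 2 AND c = 3`, which the database parses as `a = 1 OR (b = 2 AND c = 3)`.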
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13640#discussion_r67556081
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCSuite.scala
---
@@ -233,6 +233,10 @@ class JDBCSuite extends SparkFunSuite
assert
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13640#discussion_r67555308
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala
---
@@ -305,7 +305,7 @@ private[sql] class JDBCRDD
Github user clockfly commented on the issue:
https://github.com/apache/spark/pull/13710
+1
Github user clockfly commented on the issue:
https://github.com/apache/spark/pull/13651
@hvanhovell @cloud-fan Thanks! Updated.
Github user clockfly commented on the issue:
https://github.com/apache/spark/pull/13524
This seems to be a blocking issue.
I have created a PR at https://github.com/apache/spark/pull/13651; please
help review.
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/8785#discussion_r66919638
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRelation.scala
---
@@ -27,6 +27,8 @@ import org.apache.spark.sql
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13622#discussion_r66915601
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/parquetSuites.scala ---
@@ -676,6 +676,46 @@ class ParquetSourceSuite extends
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13592#discussion_r66894492
--- Diff: docs/sql-programming-guide.md ---
@@ -12,130 +12,130 @@ title: Spark SQL and DataFrames
Spark SQL is a Spark module for structured data
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13592#discussion_r66894292
--- Diff: docs/sql-programming-guide.md ---
@@ -12,130 +12,130 @@ title: Spark SQL and DataFrames
Spark SQL is a Spark module for structured data
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13592#discussion_r66894132
--- Diff: docs/sql-programming-guide.md ---
@@ -12,130 +12,130 @@ title: Spark SQL and DataFrames
Spark SQL is a Spark module for structured data
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13592#discussion_r66893943
--- Diff: docs/sql-programming-guide.md ---
@@ -12,130 +12,130 @@ title: Spark SQL and DataFrames
Spark SQL is a Spark module for structured data
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13592#discussion_r66893918
--- Diff: docs/sql-programming-guide.md ---
@@ -12,130 +12,130 @@ title: Spark SQL and DataFrames
Spark SQL is a Spark module for structured data
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13592#discussion_r66893224
--- Diff: docs/sql-programming-guide.md ---
@@ -12,130 +12,130 @@ title: Spark SQL and DataFrames
Spark SQL is a Spark module for structured data
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13592#discussion_r66892238
--- Diff: docs/sql-programming-guide.md ---
@@ -12,130 +12,130 @@ title: Spark SQL and DataFrames
Spark SQL is a Spark module for structured data
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13592#discussion_r66891847
--- Diff: docs/sql-programming-guide.md ---
@@ -12,130 +12,130 @@ title: Spark SQL and DataFrames
Spark SQL is a Spark module for structured data
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13592#discussion_r66891502
--- Diff: docs/sql-programming-guide.md ---
@@ -12,130 +12,130 @@ title: Spark SQL and DataFrames
Spark SQL is a Spark module for structured data
Github user clockfly commented on the issue:
https://github.com/apache/spark/pull/11079
@thomastechs
We should use `df.select("`a.c`")` to select a column with name "a.c".
The reason is that df.select can also be used to select a nested
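The selection rule described here can be sketched as a tiny resolver: a backtick-quoted name is one literal column, while an unquoted dotted name is a nested-field path. This is an illustrative approximation with a hypothetical helper, not Spark's actual name resolution.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Illustrative approximation of the quoting rule; not Spark's parser.
class ColumnRef {
    // A backtick-quoted name refers to a single literal column, even if it
    // contains dots; an unquoted dotted name is a nested-field path.
    static List<String> nameParts(String name) {
        if (name.length() >= 2 && name.startsWith("`") && name.endsWith("`")) {
            return Collections.singletonList(name.substring(1, name.length() - 1));
        }
        return Arrays.asList(name.split("\\."));
    }
}
```

So `` `a.c` `` resolves to one top-level column named "a.c", while `a.c` resolves to field `c` inside struct column `a`.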
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13637#discussion_r66886314
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala ---
@@ -736,6 +736,290 @@ class SQLContext private[sql](val sparkSession
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13637#discussion_r66885663
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala ---
@@ -736,6 +736,290 @@ class SQLContext private[sql](val sparkSession
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13651#discussion_r66885516
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/arithmetic.scala
---
@@ -213,7 +213,7 @@ case class Multiply(left
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13651#discussion_r66882617
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -2847,4 +2847,15 @@ class SQLQuerySuite extends QueryTest with
Github user clockfly commented on the issue:
https://github.com/apache/spark/pull/13632
@cloud-fan Updated.
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13651#discussion_r66881813
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/arithmetic.scala
---
@@ -213,7 +213,7 @@ case class Multiply(left
GitHub user clockfly opened a pull request:
https://github.com/apache/spark/pull/13651
[SPARK-15776][SQL] Divide Expression inside Aggregation function is casted
to wrong type
## What changes were proposed in this pull request?
This PR fixes the problem that Divide
Github user clockfly commented on the issue:
https://github.com/apache/spark/pull/13524
@Sephiroth-Lin
I think you can use a simpler case in the description of this PR.
Such as:
```
select sum(4/3)
```
The expected result is
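For context on the repro above: Spark SQL's `/` performs fractional division, so a correctly typed plan evaluates `4/3` as roughly 1.333, while a Divide that is wrongly cast to an integral type truncates to 1. A minimal illustration of the two semantics (plain Java with hypothetical names, not Spark code):

```java
// Plain Java, not Spark code: the two division semantics at play.
class DivideCast {
    // Fractional semantics: the intended behavior of SQL `/` in Spark.
    static double asFractional(int a, int b) {
        return (double) a / b;
    }

    // Integral semantics: what a wrongly typed Divide degrades to.
    static int asIntegral(int a, int b) {
        return a / b;
    }
}
```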
Github user clockfly commented on the issue:
https://github.com/apache/spark/pull/13400
Looks good to me except one minor test issue.
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13400#discussion_r66839361
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala
---
@@ -1537,6 +1537,35 @@ class SQLQuerySuite extends QueryTest
GitHub user clockfly opened a pull request:
https://github.com/apache/spark/pull/13637
[SPARK-15914][SQL] Add deprecated method back to SQLContext for backward
compatibility
## What changes were proposed in this pull request?
Revert partial changes in SPARK-12600, and add
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13585#discussion_r66741254
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala ---
@@ -65,15 +65,20 @@ private[hive] trait HiveStrategies
Github user clockfly commented on the issue:
https://github.com/apache/spark/pull/13623
Looks good! +1
GitHub user clockfly opened a pull request:
https://github.com/apache/spark/pull/13632
[SPARK-15910][SQL] Check schema consistency when using Kryo encoder to
convert DataFrame to Dataset
## What changes were proposed in this pull request?
This PR enforces schema check when
Github user clockfly commented on the issue:
https://github.com/apache/spark/pull/13531
Looks good!
Github user clockfly commented on the issue:
https://github.com/apache/spark/pull/13605
Should we also use DataFrame for SparkSession.range?
Github user clockfly commented on the issue:
https://github.com/apache/spark/pull/13605
How about SparkSession.range? It returns a Dataset[Long], but the encoder's
schema (value: Long) does not match the logical plan's schema (id: Long).
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13585#discussion_r66569556
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/planning/patterns.scala
---
@@ -92,6 +92,36 @@ object PhysicalOperation extends
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13531#discussion_r66538048
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileSourceStrategySuite.scala
---
@@ -340,6 +340,40 @@ class
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13531#discussion_r66533473
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/fileSourceInterfaces.scala
---
@@ -298,6 +309,28 @@ trait FileFormat
Github user clockfly commented on the issue:
https://github.com/apache/spark/pull/13529
@liancheng @cloud-fan
About EmbedSerializerInFilter, I have noted in the source code:
`TODO: Remove this after we completely fix SPARK-15632 by adding
optimization rules for typed
Github user clockfly commented on the issue:
https://github.com/apache/spark/pull/13535
@liancheng updated.
Github user clockfly commented on the issue:
https://github.com/apache/spark/pull/13414
@hvanhovell Thanks for the review.
Updated.
GitHub user clockfly opened a pull request:
https://github.com/apache/spark/pull/13535
[SPARK-15792][SQL] Allows operator to change the verbosity in explain output
## What changes were proposed in this pull request?
This PR allows customization of verbosity in explain
GitHub user clockfly opened a pull request:
https://github.com/apache/spark/pull/13529
[SPARK-15632][SQL]Typed Filter should NOT change the Dataset schema
## What changes were proposed in this pull request?
This PR makes sure the typed Filter doesn't change the Da
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13442#discussion_r65929214
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/text/TextFileFormat.scala
---
@@ -84,6 +85,21 @@ class TextFileFormat extends
Github user clockfly commented on the issue:
https://github.com/apache/spark/pull/13470
Updated.
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13471#discussion_r65612208
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/LocalTableScanExec.scala
---
@@ -48,6 +48,14 @@ private[sql] case class
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13471#discussion_r65612007
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LocalRelation.scala
---
@@ -57,7 +57,13 @@ case class LocalRelation
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13470#discussion_r65611639
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/trees/TreeNode.scala
---
@@ -427,13 +427,21 @@ abstract class TreeNode[BaseType
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13470#discussion_r65611456
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/trees/TreeNode.scala
---
@@ -427,13 +427,21 @@ abstract class TreeNode[BaseType
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13470#discussion_r65610587
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/trees/TreeNode.scala
---
@@ -427,13 +427,21 @@ abstract class TreeNode[BaseType
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13470#discussion_r65609097
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/trees/TreeNode.scala
---
@@ -427,13 +427,21 @@ abstract class TreeNode[BaseType
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13470#discussion_r65607416
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/trees/TreeNode.scala
---
@@ -427,13 +427,21 @@ abstract class TreeNode[BaseType
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13470#discussion_r65606416
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/trees/TreeNode.scala
---
@@ -427,13 +427,21 @@ abstract class TreeNode[BaseType
Github user clockfly commented on the issue:
https://github.com/apache/spark/pull/13471
cc @cloud-fan
Github user clockfly commented on the issue:
https://github.com/apache/spark/pull/13471
This depends on PR https://github.com/apache/spark/pull/13470
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13470#discussion_r65600672
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/trees/TreeNode.scala
---
@@ -427,13 +427,21 @@ abstract class TreeNode[BaseType
GitHub user clockfly opened a pull request:
https://github.com/apache/spark/pull/13471
[SPARK-15734][SQL] Avoids printing internal row in explain output
## What changes were proposed in this pull request?
This PR avoids printing internal rows in explain output for some
Github user clockfly commented on the issue:
https://github.com/apache/spark/pull/13470
@liancheng Please take a look.
GitHub user clockfly opened a pull request:
https://github.com/apache/spark/pull/13470
[SPARK-15733][SQL] Makes the explain output less verbose by hiding some
verbose output like None, null, empty List, and etc.
## What changes were proposed in this pull request?
This PR
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13451#discussion_r65572608
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -317,17 +317,19 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user clockfly commented on the issue:
https://github.com/apache/spark/pull/13414
@hvanhovell Probably we can talk more face to face next week.
Github user clockfly commented on the issue:
https://github.com/apache/spark/pull/13451
Thanks @gatorsmile
Github user clockfly commented on the issue:
https://github.com/apache/spark/pull/13451
cc @cloud-fan @liancheng @andrewor14
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13451#discussion_r65483550
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala
---
@@ -1399,52 +1399,6 @@ class SQLQuerySuite extends QueryTest
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13451#discussion_r65483261
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/MetastoreDataSourcesSuite.scala
---
@@ -1104,4 +1104,22 @@ class MetastoreDataSourcesSuite
GitHub user clockfly opened a pull request:
https://github.com/apache/spark/pull/13451
[SPARK-15711][SQL][WIP]Ban CREATE TEMPORARY TABLE USING AS SELECT
## What changes were proposed in this pull request?
This PR bans syntax like `CREATE TEMPORARY TABLE USING AS SELECT
Github user clockfly commented on the issue:
https://github.com/apache/spark/pull/13433
@cloud-fan updated.
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13433#discussion_r65400811
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/QueryPlan.scala
---
@@ -264,7 +264,7 @@ abstract class QueryPlan[PlanType
GitHub user clockfly opened a pull request:
https://github.com/apache/spark/pull/13433
[SPARK-15692][SQL]Improves the explain output of several physical plans by
displaying embedded logical plan in tree style
## What changes were proposed in this pull request?
Improves the
Github user clockfly commented on the pull request:
https://github.com/apache/spark/pull/13363
retest this please
Github user clockfly commented on the pull request:
https://github.com/apache/spark/pull/13414
@hvanhovell
I updated the description, please check whether it makes more sense now.
Github user clockfly commented on the pull request:
https://github.com/apache/spark/pull/13363
retest this please
Github user clockfly commented on the pull request:
https://github.com/apache/spark/pull/13414
@hvanhovell
> create temp table ... using statement describes the access to a physical
storage; which in my book is a table.
We still allow `create table using...`, w
GitHub user clockfly opened a pull request:
https://github.com/apache/spark/pull/13414
[SPARK-15674][SQL] Deprecates "CREATE TEMPORARY TABLE USING...", uses
"CREATE TEMPORARY VIEW USING..." instead
## What changes were proposed in this pull request?
Github user clockfly commented on the pull request:
https://github.com/apache/spark/pull/13367#issuecomment-222316956
Looks good! +1
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13363#discussion_r64974684
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/trees/TreeNode.scala
---
@@ -424,23 +424,50 @@ abstract class TreeNode[BaseType
Github user clockfly commented on the pull request:
https://github.com/apache/spark/pull/13363#issuecomment-66569
@cloud-fan updated
Github user clockfly commented on the pull request:
https://github.com/apache/spark/pull/13363#issuecomment-59687
Some useful improvements from PR https://github.com/apache/spark/pull/13271
will be carried over in new PRs.
GitHub user clockfly opened a pull request:
https://github.com/apache/spark/pull/13363
[SPARK-15495][SQL][WIP] Improve the explain output for Aggregation operator
## What changes were proposed in this pull request?
This PR improves the explain output of the Aggregation operator
Github user clockfly closed the pull request at:
https://github.com/apache/spark/pull/13271
Github user clockfly commented on the pull request:
https://github.com/apache/spark/pull/13271#issuecomment-222025534
retest this please
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/1#discussion_r64812858
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1795,7 +1795,9 @@ class Analyzer(
def apply
Github user clockfly commented on the pull request:
https://github.com/apache/spark/pull/13271#issuecomment-221969548
@rxin
I have moved the changes about innerChildren to another PR. I also have
updated the description based on your comment (add SQL query).
Please
GitHub user clockfly opened a pull request:
https://github.com/apache/spark/pull/1
[SPARK-13445][SQL]Improves error message and add test coverage for Window
function
## What changes were proposed in this pull request?
Add more verbose error message when order by clause
Github user clockfly commented on the pull request:
https://github.com/apache/spark/pull/13271#issuecomment-221728609
@davies, want to take a look at this?
GitHub user clockfly opened a pull request:
https://github.com/apache/spark/pull/13306
[SPARK-12988][SQL] Can't drop top level columns that contain dots
## What changes were proposed in this pull request?
This PR builds on @dilipbiswal's work at
https://github.
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13269#discussion_r64617059
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/KeyValueGroupedDataset.scala ---
@@ -42,17 +42,9 @@ class KeyValueGroupedDataset[K, V] private[sql
Github user clockfly commented on the pull request:
https://github.com/apache/spark/pull/13271#issuecomment-221337168
@rxin Thanks, I will make another try to simplify this.
GitHub user clockfly opened a pull request:
https://github.com/apache/spark/pull/13271
[SPARK-15495][SQL][WIP] Improve the explain output
## What changes were proposed in this pull request?
Improve the output of explain:
Now, it looks like this:
```
scala>
Github user clockfly commented on the pull request:
https://github.com/apache/spark/pull/13166#issuecomment-219981113
+1
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13127#discussion_r63451732
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/client/VersionsSuite.scala ---
@@ -130,109 +124,343 @@ class VersionsSuite extends
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13127#discussion_r63451609
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveShim.scala ---
@@ -265,6 +343,47 @@ private[client] class Shim_v0_12 extends Shim
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13127#discussion_r63450897
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/client/VersionsSuite.scala ---
@@ -130,109 +124,343 @@ class VersionsSuite extends
GitHub user clockfly opened a pull request:
https://github.com/apache/spark/pull/13127
[SPARK-15334][SQL] HiveClient facade not compatible with Hive 0.12
## What changes were proposed in this pull request?
Fixed the following compatibility issues:
1
GitHub user clockfly opened a pull request:
https://github.com/apache/spark/pull/13098
[SPARK-15171][SQL] Update unit tests to remove the references to deprecated
method dataset.registerTempTable
## What changes were proposed in this pull request?
Update the unit test code
201 - 300 of 326 matches