Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13818
ok to test
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13542
also, can you try `--conf` and see if it works?
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13542
Can you provide more information on the root cause? It is still not clear
why it does not work.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13987
LGTM
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13977#discussion_r69010595
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1836,13 +1836,15 @@ class Analyzer
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13977#discussion_r69004659
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1836,13 +1836,15 @@ class Analyzer
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/13977
[SPARK-16301] [SQL] The analyzer rule for resolving using joins should
respect the case sensitivity setting.
## What changes were proposed in this pull request?
The analyzer rule for resolving
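The case-sensitivity concern in this PR title can be sketched outside Spark. The function and names below are invented for illustration and are not the Analyzer's actual code:

```python
# Hypothetical sketch: resolving a join column name with and without
# case sensitivity. Not Spark's actual Analyzer implementation.
def resolve_column(name, columns, case_sensitive):
    """Return the single matching column, honoring the case-sensitivity setting."""
    if case_sensitive:
        matches = [c for c in columns if c == name]
    else:
        matches = [c for c in columns if c.lower() == name.lower()]
    if len(matches) != 1:
        raise ValueError("cannot resolve column %r among %r" % (name, columns))
    return matches[0]
```

With case-insensitive resolution, `resolve_column("ID", ["id", "value"], False)` finds `id`; with the case-sensitive setting the same lookup fails, which is the distinction the rule needs to respect.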
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13880#discussion_r68853188
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala
---
@@ -171,14 +171,6 @@ case class InsertIntoHiveTable
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13939#discussion_r68843139
--- Diff:
sql/hive/compatibility/src/test/scala/org/apache/spark/sql/hive/execution/HiveWindowFunctionQuerySuite.scala
---
@@ -569,6 +572,7 @@ class
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13860
seems fine to have that method.
---
Repository: spark
Updated Branches:
refs/heads/branch-2.0 4c5e16f58 -> e68872f2e
[SPARK-16181][SQL] outer join with isNull filter may return wrong result
## What changes were proposed in this pull request?
The root cause is: the output attributes of outer join are derived from its
children,
Repository: spark
Updated Branches:
refs/heads/master 0923c4f56 -> 1f2776df6
[SPARK-16181][SQL] outer join with isNull filter may return wrong result
## What changes were proposed in this pull request?
The root cause is: the output attributes of outer join are derived from its
children,
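The failure mode described in this commit message can be illustrated with a toy attribute model (all names here are invented, not Catalyst's): if an outer join's output reuses a child's non-nullable attribute verbatim, an `IS NULL` filter over it can be constant-folded away even though outer joins introduce nulls.

```python
# Toy model of the bug: an outer join should mark columns from the other
# side as nullable, because unmatched rows produce nulls. If the output
# attribute keeps the child's nullable=False flag, an optimizer may fold
# `col IS NULL` to a constant False and silently drop rows.
class Attr:
    def __init__(self, name, nullable):
        self.name = name
        self.nullable = nullable

def fold_is_null(attr):
    """Return a constant when nullability proves the answer; None otherwise."""
    if not attr.nullable:
        return False   # "can never be null" -> filter is folded to False
    return None        # unknown at compile time; keep the filter

# Buggy: the join output reuses the child's attribute unchanged.
child_attr = Attr("b.x", nullable=False)
buggy_output = child_attr
# Fixed: an outer join's output attribute must be marked nullable.
fixed_output = Attr(child_attr.name, nullable=True)
```

With the buggy output attribute the filter folds to False; with the corrected nullability the filter survives to run against the actual data.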
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13884
LGTM. Merging to master and branch 2.0.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/10943
How about we close this PR since https://github.com/apache/spark/pull/13306
has been merged?
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13939
test this please
---
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/13938
[SPARK-15863][SQL][DOC][FOLLOW-UP] Update SQL programming guide.
## What changes were proposed in this pull request?
This PR makes several updates to SQL programming guide.
You can merge
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13931
@davies @ericl want to take a look?
---
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/13931
[SPARK-16224] [SQL] [PYSPARK] SparkSession builder's configs need to be set
to the existing Scala SparkContext's SparkConf
## What changes were proposed in this pull request?
When we create
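The idea in this PR title can be sketched with stand-in classes. Everything below is a hypothetical toy, not PySpark's real `SparkSession.builder` API:

```python
# Illustrative-only sketch: when the builder finds an existing context,
# its pending options should be copied onto that context's conf instead
# of being silently dropped. Class and method names are invented.
class FakeConf:
    def __init__(self):
        self._settings = {}
    def set(self, key, value):
        self._settings[key] = value
    def get(self, key, default=None):
        return self._settings.get(key, default)

class FakeBuilder:
    def __init__(self):
        self._options = {}
    def config(self, key, value):
        self._options[key] = value
        return self
    def get_or_create(self, existing_conf):
        # The fix's essence: propagate builder options to the existing conf.
        for key, value in self._options.items():
            existing_conf.set(key, value)
        return existing_conf

conf = FakeConf()
FakeBuilder().config("spark.some.option", "42").get_or_create(conf)
```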
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13907
With your PR, if users specify `ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.orc.OrcSerde'`, will we convert?
---
Repository: spark
Updated Branches:
refs/heads/master d48935400 -> 5f8de2160
[SQL][MINOR] Simplify data source predicate filter translation.
## What changes were proposed in this pull request?
This is a small patch to rewrite the predicate filter translation in
DataSourceStrategy. The
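Predicate translation of the kind this patch simplifies is essentially a recursive walk over the expression tree. The sketch below uses a made-up tuple encoding, not Catalyst's expression classes:

```python
# Hedged sketch of translating a boolean expression tree into data source
# filters. Untranslatable AND-conjuncts can be dropped safely (the source
# filter only needs to accept a superset of the rows), but an OR with an
# untranslatable side cannot be pushed down at all.
# The ("and", ...) / ("eq", ...) encoding is invented for illustration.
def translate(expr):
    op = expr[0]
    if op == "and":
        left, right = translate(expr[1]), translate(expr[2])
        if left and right:
            return ("And", left, right)
        return left or right   # keep whichever side translated, if any
    if op == "or":
        left, right = translate(expr[1]), translate(expr[2])
        return ("Or", left, right) if left and right else None
    if op == "eq":
        return ("EqualTo", expr[1], expr[2])
    return None  # untranslatable leaf (e.g. a UDF call)
```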
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13889
Merging to master.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13889
LGTM
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13868#discussion_r68441144
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -691,7 +692,8 @@ private[sql] class SQLConf extends Serializable
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13701
Sorry, I am not sure I get it. We can set the row group size to a small
value; then it will not be hard to create a Parquet file having multiple row
groups.
---
Repository: spark
Updated Branches:
refs/heads/branch-2.0 3d8d95644 -> 3ccdd6b9c
[SPARK-13709][SQL] Initialize deserializer with both table and partition
properties when reading partitioned tables
## What changes were proposed in this pull request?
When reading partitions of a partitioned
Repository: spark
Updated Branches:
refs/heads/master cc6778ee0 -> 2d2f607bf
[SPARK-13709][SQL] Initialize deserializer with both table and partition
properties when reading partitioned tables
## What changes were proposed in this pull request?
When reading partitions of a partitioned Hive
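The essence of this commit can be sketched as a property merge. The function name is invented for illustration, not Hive's or Spark's actual API:

```python
# Hedged sketch of the fix's idea: build the deserializer's properties by
# starting from the table properties and overlaying the partition's own,
# so partition-specific settings win over table-level defaults.
def deserializer_properties(table_props, partition_props):
    props = dict(table_props)       # table-level defaults first
    props.update(partition_props)   # partition values override them
    return props
```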
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13865
Thanks. Merging to master and branch 2.0.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13884#discussion_r68355286
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -688,6 +688,14 @@ object FoldablePropagation extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13884#discussion_r68355194
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala
---
@@ -1541,4 +1541,13 @@ class DataFrameSuite extends QueryTest
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13701
Thank you for the testing. Can you also test the case where a file contains
multiple row groups and we avoid scanning the unneeded ones?
Also, since it is not fixing a critical bug, let's
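Row-group skipping of the kind asked about can be illustrated with per-group min/max statistics. This is a simplified sketch, not Parquet's actual reader API:

```python
# Simplified sketch of row-group pruning: for a predicate like
# `x > threshold`, only row groups whose max statistic exceeds the
# threshold can possibly contain matching rows; the rest are skipped
# without being read. Not the real Parquet API.
def row_groups_to_scan(stats, threshold):
    """stats: list of (min_value, max_value) pairs, one per row group."""
    return [i for i, (_, max_value) in enumerate(stats) if max_value > threshold]
```

For example, with groups whose value ranges are (0, 5), (6, 10), and (11, 20), the predicate `x > 10` needs only the last group.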
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13701#discussion_r68352913
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileSourceStrategy.scala
---
@@ -85,8 +85,15 @@ private[sql] object
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13701#discussion_r68352884
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileSourceStrategy.scala
---
@@ -85,8 +85,15 @@ private[sql] object
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13865
lgtm
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13865#discussion_r68352336
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/QueryPartitionSuite.scala ---
@@ -65,4 +68,77 @@ class QueryPartitionSuite extends QueryTest
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13865#discussion_r68352322
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/QueryPartitionSuite.scala ---
@@ -65,4 +68,77 @@ class QueryPartitionSuite extends QueryTest
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13865#discussion_r68352282
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/QueryPartitionSuite.scala ---
@@ -65,4 +68,77 @@ class QueryPartitionSuite extends QueryTest
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13720#discussion_r68343766
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -522,7 +523,7 @@ case class DescribeTableCommand(table
Repository: spark
Updated Branches:
refs/heads/branch-2.0 6cb24de99 -> 05677bb5a
[SPARK-15443][SQL] Fix 'explain' for streaming Dataset
## What changes were proposed in this pull request?
- Fix the `explain` command for streaming Dataset/DataFrame. E.g.,
```
== Parsed Logical Plan ==
Repository: spark
Updated Branches:
refs/heads/master 91b1ef28d -> 0e4bdebec
[SPARK-15443][SQL] Fix 'explain' for streaming Dataset
## What changes were proposed in this pull request?
- Fix the `explain` command for streaming Dataset/DataFrame. E.g.,
```
== Parsed Logical Plan ==
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13815
LGTM. Merging to master and branch 2.0.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13815#discussion_r68286177
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala
---
@@ -98,7 +100,12 @@ case class ExplainCommand
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13815#discussion_r68286111
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkStrategies.scala ---
@@ -307,6 +307,16 @@ private[sql] abstract class SparkStrategies
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13815#discussion_r68285294
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala
---
@@ -98,7 +100,12 @@ case class ExplainCommand
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13815#discussion_r68284723
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/IncrementalExecution.scala
---
@@ -37,6 +37,7 @@ class IncrementalExecution
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13815#discussion_r68284519
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSourceSuite.scala
---
@@ -592,6 +592,37 @@ class FileStreamSourceSuite extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13815#discussion_r68281045
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/IncrementalExecution.scala
---
@@ -37,6 +37,7 @@ class IncrementalExecution
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13815#discussion_r68280796
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala
---
@@ -98,7 +100,12 @@ case class ExplainCommand
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13463#discussion_r68095799
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/ListingFileCatalog.scala
---
@@ -83,40 +83,10 @@ class ListingFileCatalog
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13845
Looks good. @JoshRosen can you take a quick look as well?
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13831
@vanzin Thank you for the PR. I probably will not be able to review it
until we get 2.0 out. Will take a look after the release.
---
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/13830
[SPARK-16121] ListingFileCatalog does not list in parallel anymore
## What changes were proposed in this pull request?
Seems the fix of SPARK-14959 breaks the parallel partitioning discovery
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13463#discussion_r67968229
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileSourceStrategySuite.scala
---
@@ -490,6 +491,7 @@ class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13463#discussion_r67956952
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/ListingFileCatalog.scala
---
@@ -83,40 +83,10 @@ class ListingFileCatalog
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13463#discussion_r67955807
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/ListingFileCatalog.scala
---
@@ -83,40 +83,10 @@ class ListingFileCatalog
Repository: spark
Updated Branches:
refs/heads/branch-2.0 0d7e1d11d -> afa14b71b
[SPARK-16002][SQL] Sleep when no new data arrives to avoid 100% CPU usage
## What changes were proposed in this pull request?
Add a configuration to allow people to set a minimum polling delay when no new
data
Repository: spark
Updated Branches:
refs/heads/master f4a3d45e3 -> c399c7f0e
[SPARK-16002][SQL] Sleep when no new data arrives to avoid 100% CPU usage
## What changes were proposed in this pull request?
Add a configuration to allow people to set a minimum polling delay when no new
data
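The configurable polling delay described in this commit amounts to a back-off in the poll loop. The names below are illustrative only, not the streaming engine's actual code:

```python
import time

# Illustrative sketch: sleep for a minimum polling delay whenever a poll
# finds no new data, instead of spinning the loop at 100% CPU.
def run_batches(has_new_data, process_batch, min_polling_delay_s, max_rounds):
    processed = 0
    for _ in range(max_rounds):
        if has_new_data():
            process_batch()
            processed += 1
        else:
            time.sleep(min_polling_delay_s)  # back off; avoids a busy loop
    return processed
```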
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13718
Thanks. Merging to master and branch 2.0.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13807#discussion_r67933216
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/encoders/ExpressionEncoder.scala
---
@@ -110,16 +110,25 @@ object ExpressionEncoder
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13807#discussion_r67932528
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/encoders/ExpressionEncoder.scala
---
@@ -110,16 +110,28 @@ object ExpressionEncoder
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13807#discussion_r67932028
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/encoders/ExpressionEncoder.scala
---
@@ -110,16 +110,28 @@ object ExpressionEncoder
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13807#discussion_r67931868
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/encoders/ExpressionEncoder.scala
---
@@ -110,16 +110,28 @@ object ExpressionEncoder
Repository: spark
Updated Branches:
refs/heads/branch-2.0 f805b989b -> 0d7e1d11d
[SPARK-16037][SQL] Follow-up: add DataFrameWriter.insertInto() test cases for
by position resolution
## What changes were proposed in this pull request?
This PR migrates some test cases introduced in #12313 as
Repository: spark
Updated Branches:
refs/heads/master b76e35537 -> f4a3d45e3
[SPARK-16037][SQL] Follow-up: add DataFrameWriter.insertInto() test cases for
by position resolution
## What changes were proposed in this pull request?
This PR migrates some test cases introduced in #12313 as a
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13810
LGTM. Merging to master and branch 2.0.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13807#discussion_r67912070
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DatasetSuite.scala
---
@@ -830,6 +830,13 @@ class DatasetSuite extends QueryTest with
SharedSQLContext
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13772
Tests that you mentioned in `I've run test against current branch. When
case-insensitive resolution is used, exepctedColumns is not correct actually`
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13022#discussion_r67907323
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -531,18 +531,29 @@ private[hive] class
HiveMetastoreCatalog
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13772
@viirya Can you comment in the jira with your case?
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13592#discussion_r67774982
--- Diff: docs/sql-programming-guide.md ---
@@ -12,130 +12,129 @@ title: Spark SQL and DataFrames
Spark SQL is a Spark module for structured data
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13592#discussion_r67775004
--- Diff: docs/sql-programming-guide.md ---
@@ -12,130 +12,129 @@ title: Spark SQL and DataFrames
Spark SQL is a Spark module for structured data
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13592#discussion_r67774733
--- Diff: docs/sql-programming-guide.md ---
@@ -12,130 +12,129 @@ title: Spark SQL and DataFrames
Spark SQL is a Spark module for structured data
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13592
@felixcheung I merged this one since I think it is better to make changes
in parallel using this version as the foundation. Can you help with revising
the R-related doc? Thanks!
---
Repository: spark
Updated Branches:
refs/heads/branch-2.0 54aef1c14 -> 8159da20e
[SPARK-15863][SQL][DOC] Initial SQL programming guide update for Spark 2.0
## What changes were proposed in this pull request?
Initial SQL programming guide update for Spark 2.0. Contents like 1.6 to 2.0
Repository: spark
Updated Branches:
refs/heads/master d0eddb80e -> 6df8e3886
[SPARK-15863][SQL][DOC] Initial SQL programming guide update for Spark 2.0
## What changes were proposed in this pull request?
Initial SQL programming guide update for Spark 2.0. Contents like 1.6 to 2.0
migration
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13592
Thanks! Let's get it in first and then we can revise it.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13592#discussion_r67773057
--- Diff: docs/sql-programming-guide.md ---
@@ -12,130 +12,130 @@ title: Spark SQL and DataFrames
Spark SQL is a Spark module for structured data
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13592#discussion_r67763740
--- Diff: docs/sql-programming-guide.md ---
@@ -12,130 +12,129 @@ title: Spark SQL and DataFrames
Spark SQL is a Spark module for structured data
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13592#discussion_r67763685
--- Diff: docs/sql-programming-guide.md ---
@@ -12,130 +12,129 @@ title: Spark SQL and DataFrames
Spark SQL is a Spark module for structured data
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13772
@viirya Thank you for the PR. After I filed the JIRA, I did some
investigation. Actually, as I noted at
https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13769#discussion_r67637488
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSourceStrategy.scala
---
@@ -43,8 +43,128 @@ import
ull/13754/files and
https://github.com/apache/spark/pull/13749. I will comment inline to explain my
changes.
## How was this patch tested?
Existing tests.
Author: Yin Huai <yh...@databricks.com>
Closes #13766 from yhuai/caseSensitivity.
(cherry picked fr
754/files and
https://github.com/apache/spark/pull/13749. I will comment inline to explain my
changes.
## How was this patch tested?
Existing tests.
Author: Yin Huai <yh...@databricks.com>
Closes #13766 from yhuai/caseSensitivity.
Project: http://git-wip-us.apache.org/repos/asf/spark/re
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13766
Thanks. I am merging this to master and branch 2.0.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13769#discussion_r67618071
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/sources/DataSourceAnalysisSuite.scala
---
@@ -0,0 +1,190 @@
+/*
+ * Licensed to the Apache
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13769#discussion_r67617959
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/InsertIntoHiveTableSuite.scala
---
@@ -372,4 +380,93 @@ class InsertIntoHiveTableSuite extends
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13769
https://github.com/apache/spark/pull/13769/commits/5315b80128e5cabc31ec0a77df9aca9a42aa10c5
is the actual change. Other commits are from
https://github.com/apache/spark/pull/13766.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13769#discussion_r67611551
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/sources/DataSourceAnalysisSuite.scala
---
@@ -0,0 +1,190 @@
+/*
+ * Licensed to the Apache
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13769#discussion_r67611543
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSourceStrategy.scala
---
@@ -43,8 +43,128 @@ import
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13769#discussion_r67611533
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/InsertIntoHiveTableSuite.scala
---
@@ -372,4 +380,95 @@ class InsertIntoHiveTableSuite extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13769#discussion_r67611526
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala
---
@@ -196,7 +197,7 @@ case class InsertIntoHiveTable
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13746
Closing this. https://github.com/apache/spark/pull/13769 is the new version.
---
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/13769
[SPARK-16030] [SQL] Allow specifying static partitions when inserting to
data source tables
## What changes were proposed in this pull request?
This PR adds the static partition support
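Static-partition support of the sort this PR adds can be sketched as combining a partial partition spec from `INSERT ... PARTITION (p1=1, p2)` with per-row dynamic values. The function below is an invented illustration, not Spark's implementation:

```python
# Hedged sketch: columns fixed in the static partition spec take their
# literal value; the remaining (dynamic) partition columns are filled
# from each row of the data being inserted.
def partition_values(partition_columns, static_spec, row):
    values = []
    for col in partition_columns:
        if static_spec.get(col) is not None:
            values.append(static_spec[col])   # static: fixed literal
        else:
            values.append(row[col])           # dynamic: from the row
    return values
```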
Github user yhuai closed the pull request at:
https://github.com/apache/spark/pull/13746
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13766#discussion_r67610425
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/rules.scala
---
@@ -84,7 +84,13 @@ private[sql] object PreprocessTableInsertion
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13766#discussion_r67610340
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/rules.scala
---
@@ -84,7 +84,13 @@ private[sql] object PreprocessTableInsertion
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13766#discussion_r67610324
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveQuerySuite.scala
---
@@ -1033,6 +1033,41 @@ class HiveQuerySuite extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13766#discussion_r67610318
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala
---
@@ -1684,36 +1684,4 @@ class SQLQuerySuite extends QueryTest
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13766#discussion_r67610316
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/InsertIntoHiveTableSuite.scala
---
@@ -372,4 +372,24 @@ class InsertIntoHiveTableSuite extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13766#discussion_r67610309
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -435,7 +435,7 @@ case class DataSource
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13766#discussion_r67610312
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/rules.scala
---
@@ -84,7 +84,13 @@ private[sql] object PreprocessTableInsertion