[spark] branch master updated (0fd9f57 -> 225c2e2)

2020-11-29 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 0fd9f57  [SPARK-33448][SQL] Support CACHE/UNCACHE TABLE commands for v2 tables
 add 225c2e2  [SPARK-33498][SQL][FOLLOW-UP] Deduplicate the unittest by using checkCastWithParseError

No new revisions were added by this update.

Summary of changes:
 .../scala/org/apache/spark/sql/catalyst/expressions/CastSuite.scala | 6 +-
 1 file changed, 1 insertion(+), 5 deletions(-)
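
For context, the deduplication follows the usual suite-helper pattern: one shared assertion replaces the same few lines repeated per test case. A minimal, self-contained sketch of the pattern, with illustrative names only (the real `checkCastWithParseError` in `CastSuite` may have a different signature and error-checking logic):

```scala
// Illustrative sketch of the dedup pattern, not CastSuite's actual helper.
object DedupSketch {
  // Stand-in for the parsing logic under test.
  private def parseTimestamp(s: String): Option[java.sql.Timestamp] =
    try Some(java.sql.Timestamp.valueOf(s))
    catch { case _: IllegalArgumentException => None }

  // One shared assertion replaces the same boilerplate repeated per case.
  def checkParseError(input: String): Unit =
    assert(parseTimestamp(input).isEmpty, s"expected a parse error for '$input'")

  def main(args: Array[String]): Unit =
    Seq("123456", "2015-03-18X", "2015/03/18").foreach(checkParseError)
}
```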





[spark] branch master updated (2da7259 -> 0fd9f57)

2020-11-29 Thread wenchen
This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 2da7259  [SPARK-32976][SQL] Support column list in INSERT statement
 add 0fd9f57  [SPARK-33448][SQL] Support CACHE/UNCACHE TABLE commands for v2 tables

No new revisions were added by this update.

Summary of changes:
 .../spark/sql/catalyst/parser/AstBuilder.scala | 31 ---
 .../sql/catalyst/plans/logical/statements.scala| 16 
 .../spark/sql/catalyst/parser/DDLParserSuite.scala | 27 -
 .../catalyst/analysis/ResolveSessionCatalog.scala  | 19 +
 .../spark/sql/execution/SparkSqlParser.scala   | 34 
 .../apache/spark/sql/execution/command/cache.scala | 43 +
 .../org/apache/spark/sql/CachedTableSuite.scala| 11 ++
 .../spark/sql/connector/DataSourceV2SQLSuite.scala | 40 +++
 .../spark/sql/execution/SparkSqlParserSuite.scala  | 45 +-
 .../sql/execution/metric/SQLMetricsSuite.scala |  2 +-
 .../thriftserver/HiveThriftServer2Suites.scala |  4 +-
 .../apache/spark/sql/hive/CachedTableSuite.scala   | 14 +++
 .../org/apache/spark/sql/hive/test/TestHive.scala  |  2 +-
 13 files changed, 152 insertions(+), 136 deletions(-)
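
As a hedged usage sketch (not taken from the PR): after this change, `CACHE TABLE` and `UNCACHE TABLE` should resolve multi-part identifiers for v2 catalog tables. The catalog name `testcat` and the in-memory catalog class are assumptions modeled on Spark's test setup:

```scala
// Hedged sketch: CACHE/UNCACHE TABLE against a v2 catalog table after
// SPARK-33448. The catalog name and implementation class are assumptions
// (Spark's tests register an in-memory v2 catalog along these lines).
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local[*]")
  .config("spark.sql.catalog.testcat",
    "org.apache.spark.sql.connector.InMemoryTableCatalog")
  .getOrCreate()

spark.sql("CREATE TABLE testcat.ns.t (id BIGINT, data STRING) USING foo")
spark.sql("CACHE TABLE testcat.ns.t")    // previously unsupported for v2 tables
spark.sql("UNCACHE TABLE testcat.ns.t")
```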





[spark] branch master updated (4851453 -> 2da7259)

2020-11-29 Thread wenchen
This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 4851453  [MINOR] Spelling bin core docs external mllib repl
 add 2da7259  [SPARK-32976][SQL] Support column list in INSERT statement

No new revisions were added by this update.

Summary of changes:
 .../apache/spark/sql/catalyst/parser/SqlBase.g4|   4 +-
 .../spark/sql/catalyst/analysis/Analyzer.scala |  52 -
 .../sql/catalyst/analysis/CheckAnalysis.scala  |   2 +-
 .../apache/spark/sql/catalyst/dsl/package.scala|   2 +-
 .../spark/sql/catalyst/parser/AstBuilder.scala |  20 +-
 .../sql/catalyst/plans/logical/statements.scala|   2 +
 .../spark/sql/catalyst/parser/DDLParserSuite.scala |  66 ++
 .../sql/catalyst/parser/PlanParserSuite.scala  |   4 +-
 .../org/apache/spark/sql/DataFrameWriter.scala |   1 +
 .../execution/datasources/DataSourceStrategy.scala |  10 +-
 .../datasources/FallBackFileSourceV2.scala |   4 +-
 .../spark/sql/execution/datasources/rules.scala|   6 +-
 .../org/apache/spark/sql/SQLInsertTestSuite.scala  | 221 +
 .../execution/command/PlanResolutionSuite.scala|   2 +-
 .../org/apache/spark/sql/hive/HiveStrategies.scala |   9 +-
 ...erySuite.scala => HiveSQLInsertTestSuite.scala} |   8 +-
 16 files changed, 375 insertions(+), 38 deletions(-)
 create mode 100644 sql/core/src/test/scala/org/apache/spark/sql/SQLInsertTestSuite.scala
 copy sql/hive/src/test/scala/org/apache/spark/sql/hive/{orc/HiveOrcPartitionDiscoverySuite.scala => HiveSQLInsertTestSuite.scala} (77%)
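
The new syntax is easy to picture; a hedged sketch, assuming a spark-shell style session named `spark` and an illustrative table:

```scala
// Hedged sketch of the syntax added by SPARK-32976: an explicit column
// list in INSERT. Assumes a spark-shell session `spark`; names illustrative.
spark.sql("CREATE TABLE t1 (c1 INT, c2 STRING) USING parquet")

// Values can be supplied for a named ordering of columns rather than
// relying purely on position.
spark.sql("INSERT INTO t1 (c2, c1) VALUES ('a', 1)")
spark.sql("SELECT * FROM t1").show()
```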





[spark] branch master updated (feda729 -> 4851453)

2020-11-29 Thread yamamuro
This is an automated email from the ASF dual-hosted git repository.

yamamuro pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from feda729  [SPARK-33567][SQL] DSv2: Use callback instead of passing Spark session and v2 relation for refreshing cache
 add 4851453  [MINOR] Spelling bin core docs external mllib repl

No new revisions were added by this update.

Summary of changes:
 bin/docker-image-tool.sh   |   2 +-
 .../org/apache/spark/ui/static/spark-dag-viz.js|   2 +-
 .../resources/org/apache/spark/ui/static/utils.js  |   2 +-
 .../apache/spark/ExecutorAllocationManager.scala   |   4 +-
 .../org/apache/spark/api/java/JavaPairRDD.scala|   4 +-
 .../org/apache/spark/api/java/JavaRDDLike.scala|   2 +-
 .../org/apache/spark/api/python/PythonRDD.scala|   6 +-
 .../org/apache/spark/deploy/JsonProtocol.scala |   2 +-
 .../org/apache/spark/deploy/SparkSubmit.scala  |   2 +-
 .../spark/deploy/history/FsHistoryProvider.scala   |   2 +-
 .../apache/spark/deploy/history/HybridStore.scala  |   2 +-
 .../scala/org/apache/spark/executor/Executor.scala |   4 +-
 .../org/apache/spark/metrics/MetricsConfig.scala   |   2 +-
 .../spark/metrics/sink/PrometheusServlet.scala |   6 +-
 .../org/apache/spark/rdd/DoubleRDDFunctions.scala  |   2 +-
 .../org/apache/spark/rdd/OrderedRDDFunctions.scala |   4 +-
 core/src/main/scala/org/apache/spark/rdd/RDD.scala |   2 +-
 .../spark/resource/TaskResourceRequest.scala   |   2 +-
 .../org/apache/spark/rpc/netty/NettyRpcEnv.scala   |   4 +-
 .../scheduler/BarrierJobAllocationFailed.scala |   4 +-
 .../org/apache/spark/scheduler/DAGScheduler.scala  |   8 +-
 .../org/apache/spark/scheduler/HealthTracker.scala |   4 +-
 .../apache/spark/scheduler/TaskSetManager.scala|   2 +-
 .../apache/spark/security/CryptoStreamUtils.scala  |   2 +-
 .../org/apache/spark/storage/BlockManager.scala|   4 +-
 .../spark/storage/BlockManagerMasterEndpoint.scala |   2 +-
 .../org/apache/spark/ui/jobs/AllJobsPage.scala |   2 +-
 .../scala/org/apache/spark/ui/jobs/JobPage.scala   |   2 +-
 .../org/apache/spark/util/ClosureCleaner.scala |   2 +-
 .../main/scala/org/apache/spark/util/Utils.scala   |  22 ++--
 .../apache/spark/util/io/ChunkedByteBuffer.scala   |   2 +-
 .../shuffle/sort/UnsafeShuffleWriterSuite.java |  10 +-
 .../java/test/org/apache/spark/JavaAPISuite.java   |   2 +-
 .../scala/org/apache/spark/CheckpointSuite.scala   |  12 +--
 .../org/apache/spark/ContextCleanerSuite.scala |  10 +-
 .../spark/ExecutorAllocationManagerSuite.scala |   2 +-
 .../test/scala/org/apache/spark/FileSuite.scala|   2 +-
 .../org/apache/spark/benchmark/BenchmarkBase.scala |   2 +-
 .../deploy/history/FsHistoryProviderSuite.scala|   4 +-
 .../apache/spark/deploy/master/MasterSuite.scala   |   2 +-
 .../apache/spark/deploy/worker/WorkerSuite.scala   |   2 +-
 .../org/apache/spark/executor/ExecutorSuite.scala  |   2 +-
 .../io/FileCommitProtocolInstantiationSuite.scala  |   4 +-
 .../spark/metrics/InputOutputMetricsSuite.scala|   2 +-
 .../netty/NettyBlockTransferServiceSuite.scala |   2 +-
 .../apache/spark/rdd/PairRDDFunctionsSuite.scala   |  34 +++---
 .../test/scala/org/apache/spark/rdd/RDDSuite.scala |   2 +-
 .../apache/spark/resource/ResourceUtilsSuite.scala |   2 +-
 .../apache/spark/rpc/netty/NettyRpcEnvSuite.scala  |   2 +-
 .../apache/spark/scheduler/DAGSchedulerSuite.scala |   6 +-
 .../spark/scheduler/ReplayListenerSuite.scala  |   2 +-
 .../scheduler/SchedulerIntegrationSuite.scala  |   8 +-
 .../spark/scheduler/SparkListenerSuite.scala   |   6 +-
 .../spark/scheduler/TaskSetManagerSuite.scala  |   6 +-
 .../spark/status/AppStatusListenerSuite.scala  |   2 +-
 .../apache/spark/storage/BlockManagerSuite.scala   |   4 +-
 .../org/apache/spark/util/JsonProtocolSuite.scala  |   8 +-
 .../org/apache/spark/util/SizeEstimatorSuite.scala |   2 +-
 docs/_plugins/include_example.rb   |   4 +-
 docs/building-spark.md |   2 +-
 docs/configuration.md  |   2 +-
 docs/css/main.css  |   4 +-
 docs/graphx-programming-guide.md   |   4 +-
 docs/ml-migration-guide.md |   2 +-
 docs/mllib-clustering.md   |   2 +-
 docs/mllib-data-types.md   |   2 +-
 docs/monitoring.md |   6 +-
 docs/running-on-kubernetes.md  |   4 +-
 docs/running-on-mesos.md   |   2 +-
 docs/running-on-yarn.md|   2 +-
 docs/sparkr.md |   2 +-
 docs/sql-data-sources-jdbc.md  |   2 +-
 docs/sql-migration-guide.md|   6 +-
 docs/sql-ref-syntax-aux-conf-mgmt-set-timezone.md  |   2 +-
 docs/sql-ref-syntax-ddl-create-table-hive

[spark] branch master updated (a5e13ac -> feda729)

2020-11-29 Thread wenchen
This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from a5e13ac  [SPARK-33582][SQL] Hive Metastore support filter by not-equals
 add feda729  [SPARK-33567][SQL] DSv2: Use callback instead of passing Spark session and v2 relation for refreshing cache

No new revisions were added by this update.

Summary of changes:
 .../datasources/v2/DataSourceV2Strategy.scala  | 26 +++---
 .../execution/datasources/v2/DropTableExec.scala   | 11 -
 .../datasources/v2/RefreshTableExec.scala  | 11 -
 .../datasources/v2/V1FallbackWriters.scala | 15 +++--
 .../datasources/v2/WriteToDataSourceV2Exec.scala   | 21 -
 5 files changed, 43 insertions(+), 41 deletions(-)
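
The refactor is essentially dependency injection: instead of each physical node holding a SparkSession and a v2 relation just to refresh the cache, the planner hands it a callback. A minimal, hypothetical sketch of that shape (illustrative names, not Spark's actual classes):

```scala
// Hypothetical shape of the SPARK-33567 refactor; illustrative names only.

// Before: the node carries collaborators it needs only for cache refresh.
final class DropTableExecBefore(session: AnyRef, relation: AnyRef)

// After: the refresh logic is injected as a function.
final class DropTableExecAfter(name: String, invalidateCache: () => Unit) {
  def run(): Unit = {
    // ... perform the drop ...
    invalidateCache() // the node no longer knows how the cache is refreshed
  }
}

// The planner closes over whatever state the refresh actually needs:
val node = new DropTableExecAfter("ns.t",
  () => println("invalidating cached entries for ns.t"))
node.run()
```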





[spark] branch master updated (f93d439 -> a5e13ac)

2020-11-29 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from f93d439  [SPARK-33589][SQL] Close opened session if the initialization fails
 add a5e13ac  [SPARK-33582][SQL] Hive Metastore support filter by not-equals

No new revisions were added by this update.

Summary of changes:
 .../org/apache/spark/sql/hive/client/HiveShim.scala  |  8 
 .../apache/spark/sql/hive/client/FiltersSuite.scala  |  8 
 .../hive/client/HivePartitionFilteringSuite.scala| 20 
 3 files changed, 36 insertions(+)
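
In effect, a not-equals partition predicate can now be turned into a metastore-side filter rather than forcing all partitions to be fetched client-side. A hedged illustration, assuming a Hive-enabled session `spark` and a table partitioned by `dt` (both assumptions, not from the PR):

```scala
// Hedged illustration of SPARK-33582. Assumes a Hive-enabled session and a
// table partitioned by `dt`; spark.sql.hive.metastorePartitionPruning
// governs pushing such filters down to the metastore.
spark.sql("SELECT count(*) FROM logs WHERE dt <> '2020-11-29'").show()
spark.sql("SELECT count(*) FROM logs WHERE dt != '2020-11-29'").show()
```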





[spark] branch master updated (3d54774 -> f93d439)

2020-11-29 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 3d54774  [SPARK-33517][SQL][DOCS] Fix the correct menu items and page links in PySpark Usage Guide for Pandas with Apache Arrow
 add f93d439  [SPARK-33589][SQL] Close opened session if the initialization fails

No new revisions were added by this update.

Summary of changes:
 .../hive/thriftserver/SparkSQLSessionManager.scala | 50 ++
 1 file changed, 31 insertions(+), 19 deletions(-)
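
The shape of the fix is the classic close-on-failure pattern; a self-contained sketch under that assumption (not the actual `SparkSQLSessionManager` code):

```scala
// Hypothetical sketch of the SPARK-33589 pattern: if per-session
// initialization throws, close the half-opened session before rethrowing,
// so the handle does not leak. Not the actual thrift-server code.
def openSession[H](open: () => H, init: H => Unit, close: H => Unit): H = {
  val handle = open()
  try {
    init(handle) // e.g. starting the session's operation manager
    handle
  } catch {
    case e: Throwable =>
      try close(handle)
      catch { case suppressed: Throwable => e.addSuppressed(suppressed) }
      throw e
  }
}
```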





[spark] branch master updated: [SPARK-33517][SQL][DOCS] Fix the correct menu items and page links in PySpark Usage Guide for Pandas with Apache Arrow

2020-11-29 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 3d54774  [SPARK-33517][SQL][DOCS] Fix the correct menu items and page links in PySpark Usage Guide for Pandas with Apache Arrow
3d54774 is described below

commit 3d54774fb9cbf674580851aa2323991c7e462a1e
Author: liucht 
AuthorDate: Mon Nov 30 10:03:18 2020 +0900

[SPARK-33517][SQL][DOCS] Fix the correct menu items and page links in PySpark Usage Guide for Pandas with Apache Arrow

### What changes were proposed in this pull request?

Change "Apache Arrow in Spark" to "Apache Arrow in PySpark"
and the link to 
“/sql-pyspark-pandas-with-arrow.html#apache-arrow-in-pyspark”

### Why are the changes needed?
When I click on the menu item it doesn't point to the correct page, and from the parent menu I can infer that the correct menu item name and link should be "Apache Arrow in PySpark",
like this:

![image](https://user-images.githubusercontent.com/28332082/99954725-2b64e200-2dbe-11eb-9576-cf6a3d758980.png)

### Does this PR introduce _any_ user-facing change?
Yes, clicking on the menu item will take you to the correct guide page

### How was this patch tested?
Manually build the doc. This can be verified as below:

cd docs
SKIP_API=1 jekyll build
open _site/sql-pyspark-pandas-with-arrow.html

Closes #30466 from liucht-inspur/master.

Authored-by: liucht 
Signed-off-by: HyukjinKwon 
---
 docs/_data/menu-sql.yaml | 11 ---
 1 file changed, 11 deletions(-)

diff --git a/docs/_data/menu-sql.yaml b/docs/_data/menu-sql.yaml
index ec0b404..cda2a1a 100644
--- a/docs/_data/menu-sql.yaml
+++ b/docs/_data/menu-sql.yaml
@@ -64,17 +64,6 @@
       url: sql-distributed-sql-engine.html#running-the-spark-sql-cli
 - text: PySpark Usage Guide for Pandas with Apache Arrow
   url: sql-pyspark-pandas-with-arrow.html
-  subitems:
-    - text: Apache Arrow in Spark
-      url: sql-pyspark-pandas-with-arrow.html#apache-arrow-in-spark
-    - text: "Enabling for Conversion to/from Pandas"
-      url: sql-pyspark-pandas-with-arrow.html#enabling-for-conversion-tofrom-pandas
-    - text: "Pandas UDFs (a.k.a. Vectorized UDFs)"
-      url: sql-pyspark-pandas-with-arrow.html#pandas-udfs-aka-vectorized-udfs
-    - text: "Pandas Function APIs"
-      url: sql-pyspark-pandas-with-arrow.html#pandas-function-apis
-    - text: Usage Notes
-      url: sql-pyspark-pandas-with-arrow.html#usage-notes
 - text: Migration Guide
   url: sql-migration-old.html
 - text: SQL Reference





[spark] branch branch-2.4 updated: [SPARK-33585][SQL][DOCS] Fix the comment for `SQLContext.tables()` and mention the `database` column

2020-11-29 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-2.4 by this push:
 new 1a0283b  [SPARK-33585][SQL][DOCS] Fix the comment for `SQLContext.tables()` and mention the `database` column
1a0283b is described below

commit 1a0283bccbbc44cb8abf5aeea61983e1b2e4bf92
Author: Max Gekk 
AuthorDate: Sun Nov 29 12:18:07 2020 -0800

[SPARK-33585][SQL][DOCS] Fix the comment for `SQLContext.tables()` and mention the `database` column

### What changes were proposed in this pull request?
Change the comments for `SQLContext.tables()` to "The returned DataFrame has three columns, database, tableName and isTemporary".

### Why are the changes needed?
Currently, the comment mentions only 2 columns, but `tables()` actually returns 3:
```scala
scala> spark.range(10).createOrReplaceTempView("view1")
scala> val tables = spark.sqlContext.tables()
tables: org.apache.spark.sql.DataFrame = [database: string, tableName: string ... 1 more field]

scala> tables.printSchema
root
 |-- database: string (nullable = false)
 |-- tableName: string (nullable = false)
 |-- isTemporary: boolean (nullable = false)

scala> tables.show
+--------+---------+-----------+
|database|tableName|isTemporary|
+--------+---------+-----------+
| default|       t1|      false|
| default|       t2|      false|
| default|      ymd|      false|
|        |    view1|       true|
+--------+---------+-----------+
```

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
By running `./dev/scalastyle`

Closes #30526 from MaxGekk/sqlcontext-tables-doc.

Authored-by: Max Gekk 
Signed-off-by: Dongjoon Hyun 
(cherry picked from commit a088a801ed8c17171545c196a3f26ce415de0cd1)
Signed-off-by: Dongjoon Hyun 
---
 sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala b/sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala
index af60184..2459e15 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala
@@ -705,7 +705,7 @@ class SQLContext private[sql](val sparkSession: SparkSession)
 
   /**
    * Returns a `DataFrame` containing names of existing tables in the current database.
-   * The returned DataFrame has two columns, tableName and isTemporary (a Boolean
+   * The returned DataFrame has three columns, database, tableName and isTemporary (a Boolean
    * indicating if a table is a temporary one or not).
    *
    * @group ddl_ops
@@ -717,7 +717,7 @@ class SQLContext private[sql](val sparkSession: SparkSession)
 
   /**
    * Returns a `DataFrame` containing names of existing tables in the given database.
-   * The returned DataFrame has two columns, tableName and isTemporary (a Boolean
+   * The returned DataFrame has three columns, database, tableName and isTemporary (a Boolean
    * indicating if a table is a temporary one or not).
    *
    * @group ddl_ops





[spark] branch branch-3.0 updated: [SPARK-33585][SQL][DOCS] Fix the comment for `SQLContext.tables()` and mention the `database` column

2020-11-29 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new f67f80b  [SPARK-33585][SQL][DOCS] Fix the comment for `SQLContext.tables()` and mention the `database` column
f67f80b is described below

commit f67f80b6665176c7fd66300d389bdc6047d273c3
Author: Max Gekk 
AuthorDate: Sun Nov 29 12:18:07 2020 -0800

[SPARK-33585][SQL][DOCS] Fix the comment for `SQLContext.tables()` and mention the `database` column

### What changes were proposed in this pull request?
Change the comments for `SQLContext.tables()` to "The returned DataFrame has three columns, database, tableName and isTemporary".

### Why are the changes needed?
Currently, the comment mentions only 2 columns, but `tables()` actually returns 3:
```scala
scala> spark.range(10).createOrReplaceTempView("view1")
scala> val tables = spark.sqlContext.tables()
tables: org.apache.spark.sql.DataFrame = [database: string, tableName: string ... 1 more field]

scala> tables.printSchema
root
 |-- database: string (nullable = false)
 |-- tableName: string (nullable = false)
 |-- isTemporary: boolean (nullable = false)

scala> tables.show
+--------+---------+-----------+
|database|tableName|isTemporary|
+--------+---------+-----------+
| default|       t1|      false|
| default|       t2|      false|
| default|      ymd|      false|
|        |    view1|       true|
+--------+---------+-----------+
```

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
By running `./dev/scalastyle`

Closes #30526 from MaxGekk/sqlcontext-tables-doc.

Authored-by: Max Gekk 
Signed-off-by: Dongjoon Hyun 
(cherry picked from commit a088a801ed8c17171545c196a3f26ce415de0cd1)
Signed-off-by: Dongjoon Hyun 
---
 sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala b/sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala
index 7cf0b6b..dd23796 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala
@@ -661,7 +661,7 @@ class SQLContext private[sql](val sparkSession: SparkSession)
 
   /**
    * Returns a `DataFrame` containing names of existing tables in the current database.
-   * The returned DataFrame has two columns, tableName and isTemporary (a Boolean
+   * The returned DataFrame has three columns, database, tableName and isTemporary (a Boolean
    * indicating if a table is a temporary one or not).
    *
    * @group ddl_ops
@@ -673,7 +673,7 @@ class SQLContext private[sql](val sparkSession: SparkSession)
 
   /**
    * Returns a `DataFrame` containing names of existing tables in the given database.
-   * The returned DataFrame has two columns, tableName and isTemporary (a Boolean
+   * The returned DataFrame has three columns, database, tableName and isTemporary (a Boolean
    * indicating if a table is a temporary one or not).
    *
    * @group ddl_ops





[spark] branch master updated (0054fc9 -> a088a80)

2020-11-29 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 0054fc9  [SPARK-33588][SQL] Respect the `spark.sql.caseSensitive` config while resolving partition spec in v1 `SHOW TABLE EXTENDED`
 add a088a80  [SPARK-33585][SQL][DOCS] Fix the comment for `SQLContext.tables()` and mention the `database` column

No new revisions were added by this update.

Summary of changes:
 sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)





[spark] branch master updated (c8286ec -> 0054fc9)

2020-11-29 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from c8286ec  [SPARK-33587][CORE] Kill the executor on nested fatal errors
 add 0054fc9  [SPARK-33588][SQL] Respect the `spark.sql.caseSensitive` config while resolving partition spec in v1 `SHOW TABLE EXTENDED`

No new revisions were added by this update.

Summary of changes:
 .../spark/sql/execution/command/tables.scala   | 17 +--
 .../sql-tests/results/show-tables.sql.out  |  2 +-
 .../sql/execution/command/v1/ShowTablesSuite.scala | 25 ++
 3 files changed, 37 insertions(+), 7 deletions(-)
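
Illustratively, assuming a session `spark` and a table partitioned by a lower-case column `id` (both assumptions, not from the PR), the behavior the title describes looks like this:

```scala
// Hedged illustration of SPARK-33588; table and session are assumptions.
spark.sql("SET spark.sql.caseSensitive=false")
// Case-insensitive analysis: partition column `ID` resolves to `id`.
spark.sql("SHOW TABLE EXTENDED LIKE 'tbl' PARTITION (ID = 1)").show()

spark.sql("SET spark.sql.caseSensitive=true")
// Case-sensitive analysis: the same spec should now be rejected
// instead of silently matching `id`.
spark.sql("SHOW TABLE EXTENDED LIKE 'tbl' PARTITION (ID = 1)")
```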





[spark] branch master updated (b94ff1e -> c8286ec)

2020-11-29 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from b94ff1e  [SPARK-33590][DOCS][SQL] Add missing sub-bullets in Spark SQL Guide
 add c8286ec  [SPARK-33587][CORE] Kill the executor on nested fatal errors

No new revisions were added by this update.

Summary of changes:
 .../scala/org/apache/spark/executor/Executor.scala | 28 -
 .../org/apache/spark/internal/config/package.scala | 11 
 .../org/apache/spark/executor/ExecutorSuite.scala  | 73 +-
 3 files changed, 108 insertions(+), 4 deletions(-)
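
The idea, sketched hypothetically (not `Executor.scala`'s actual code): a fatal error such as `OutOfMemoryError` can arrive wrapped in non-fatal exceptions, so walk the cause chain to a bounded depth before deciding to kill the executor. The new entry the diffstat shows in `internal/config/package.scala` presumably bounds that depth:

```scala
// Hypothetical sketch of the SPARK-33587 idea; illustrative, self-contained.
import scala.annotation.tailrec
import scala.util.control.NonFatal

@tailrec
def hasFatalCause(t: Throwable, depthLeft: Int): Boolean =
  if (t == null || depthLeft <= 0) false
  else if (!NonFatal(t)) true // fatal at this level of the chain (OOM, etc.)
  else hasFatalCause(t.getCause, depthLeft - 1)

// An OOM wrapped twice is still detected with depth >= 3:
val nested = new RuntimeException(new RuntimeException(new OutOfMemoryError()))
assert(hasFatalCause(nested, depthLeft = 3))
```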





[spark] branch master updated (ba178f8 -> b94ff1e)

2020-11-29 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from ba178f8  [SPARK-33581][SQL][TEST] Refactor HivePartitionFilteringSuite
 add b94ff1e  [SPARK-33590][DOCS][SQL] Add missing sub-bullets in Spark SQL Guide

No new revisions were added by this update.

Summary of changes:
 docs/_data/menu-sql.yaml | 4 
 1 file changed, 4 insertions(+)

