[spark] branch master updated (cc06266 -> 3a299aa)

2020-09-30 Thread yamamuro
This is an automated email from the ASF dual-hosted git repository.

yamamuro pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from cc06266  [SPARK-33019][CORE] Use 
spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version=1 by default
 add 3a299aa  [SPARK-32741][SQL] Check if the same ExprId refers to the 
unique attribute in logical plans

No new revisions were added by this update.

Summary of changes:
 .../spark/sql/catalyst/analysis/Analyzer.scala | 11 +++-
 .../spark/sql/catalyst/optimizer/Optimizer.scala   | 15 +++--
 .../spark/sql/catalyst/optimizer/subquery.scala| 51 +---
 .../sql/catalyst/plans/logical/LogicalPlan.scala   | 70 ++
 .../optimizer/FoldablePropagationSuite.scala   |  4 +-
 .../plans/logical/LogicalPlanIntegritySuite.scala  | 51 
 .../sql/execution/adaptive/AQEOptimizer.scala  |  8 ++-
 .../apache/spark/sql/streaming/StreamSuite.scala   |  7 +--
 8 files changed, 181 insertions(+), 36 deletions(-)
 create mode 100644 
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlanIntegritySuite.scala


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
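
The default restored by SPARK-33019 above can also be pinned explicitly in application code; a minimal sketch (the property name is taken verbatim from the commit subject, the app name is illustrative):

```scala
import org.apache.spark.sql.SparkSession

// Pin the v1 file output committer algorithm explicitly, matching the new
// default from SPARK-33019. The "spark.hadoop." prefix forwards the setting
// into the underlying Hadoop Configuration.
val spark = SparkSession.builder()
  .appName("committer-v1-example") // illustrative name
  .config("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", "1")
  .getOrCreate()
```

The same setting can equally be passed via `--conf` on `spark-submit`; either way it reaches the Hadoop `FileOutputCommitter` used when writing output files.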
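
SPARK-32741 adds an integrity check that a given `ExprId` refers to exactly one attribute across a logical plan. The invariant itself can be sketched in self-contained Scala (the case classes below are simplified stand-ins for Catalyst's `ExprId`/`Attribute`, not the real API):

```scala
// Simplified stand-ins for Catalyst's ExprId and Attribute (not the real API).
case class ExprId(id: Long)
case class Attribute(name: String, exprId: ExprId)

// The invariant: every ExprId occurring among a plan's attributes must map to
// exactly one attribute. Repeated references to the same attribute are fine;
// two distinct attributes sharing an ExprId are not.
def hasUniqueExprIds(attrs: Seq[Attribute]): Boolean =
  attrs.groupBy(_.exprId).values.forall(_.distinct.size == 1)

// ExprId 1 reused for two different attributes -> integrity violation.
val ok  = hasUniqueExprIds(Seq(Attribute("a", ExprId(1)), Attribute("a", ExprId(1))))
val bad = hasUniqueExprIds(Seq(Attribute("a", ExprId(1)), Attribute("b", ExprId(1))))
```

In the actual patch this kind of check runs in test-only plan-integrity validation (see the new LogicalPlanIntegritySuite in the diffstat), not on the hot path.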




[GitHub] [spark-website] megelon opened a new pull request #291: :) bogotá-meetup

2020-09-30 Thread GitBox


megelon opened a new pull request #291:
URL: https://github.com/apache/spark-website/pull/291


   Hello,
   
   I am a co-organizer of the Apache Spark Bogotá Meetup in Colombia:
   https://www.meetup.com/es/Apache-Spark-Bogota/
   
   I would like to have our community listed on the following web page:
   
   https://spark.apache.org/community.html
   
   Looking forward to meeting you.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[GitHub] [spark-website] srowen commented on pull request #291: :) bogotá-meetup

2020-09-30 Thread GitBox


srowen commented on pull request #291:
URL: https://github.com/apache/spark-website/pull/291#issuecomment-701395093


   You need to regenerate the site HTML too - there are instructions in the 
README



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (3a299aa -> ece8d8e)

2020-09-30 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 3a299aa  [SPARK-32741][SQL] Check if the same ExprId refers to the 
unique attribute in logical plans
 add ece8d8e  [SPARK-33006][K8S][DOCS] Add dynamic PVC usage example into 
K8s doc

No new revisions were added by this update.

Summary of changes:
 docs/running-on-kubernetes.md | 22 +-
 1 file changed, 21 insertions(+), 1 deletion(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
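
The dynamic PVC usage documented by SPARK-33006 is driven entirely by configuration; a hedged sketch of such a setup in code (the key names follow docs/running-on-kubernetes.md; the volume name `spark-local-dir-1`, storage class, and size are illustrative):

```scala
import org.apache.spark.SparkConf

// Request an on-demand PersistentVolumeClaim per executor and mount it as
// scratch space. claimName=OnDemand asks Spark to create the PVC dynamically
// rather than reusing a pre-provisioned claim.
val prefix = "spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-1"
val conf = new SparkConf()
  .set(s"$prefix.options.claimName", "OnDemand")
  .set(s"$prefix.options.storageClass", "gp2")   // illustrative storage class
  .set(s"$prefix.options.sizeLimit", "100Gi")    // illustrative size
  .set(s"$prefix.mount.path", "/data")
  .set(s"$prefix.mount.readOnly", "false")
```

The same keys can be passed as `--conf` arguments to `spark-submit`; the `K8s` scheduler backend translates them into PVC and volume-mount specs on the executor pods.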




[spark] branch master updated (ece8d8e -> 3bdbb55)

2020-09-30 Thread yamamuro
This is an automated email from the ASF dual-hosted git repository.

yamamuro pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from ece8d8e  [SPARK-33006][K8S][DOCS] Add dynamic PVC usage example into 
K8s doc
 add 3bdbb55  [SPARK-31753][SQL][DOCS][FOLLOW-UP] Add missing keywords in 
the SQL docs

No new revisions were added by this update.

Summary of changes:
 docs/sql-ref-syntax-ddl-create-table-datasource.md |  7 -
 docs/sql-ref-syntax-ddl-create-table-hiveformat.md | 32 ++
 2 files changed, 38 insertions(+), 1 deletion(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
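
The keywords added to the docs by SPARK-31753 can be exercised together in one DDL statement; a minimal sketch assuming an active `SparkSession` named `spark` (table and column names are illustrative):

```scala
// CLUSTERED BY / SORTED BY / INTO ... BUCKETS, as documented in the
// sql-ref-syntax-ddl-create-table pages touched by this commit.
// Rows are hashed on `id` into 4 buckets, sorted ascending within each bucket.
spark.sql("""
  CREATE TABLE clustered_sorted_example (id INT, age STRING)
  CLUSTERED BY (id)
  SORTED BY (id ASC)
  INTO 4 BUCKETS
""")
```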




[spark] branch branch-3.0 updated: [SPARK-31753][SQL][DOCS][FOLLOW-UP] Add missing keywords in the SQL docs

2020-09-30 Thread yamamuro
This is an automated email from the ASF dual-hosted git repository.

yamamuro pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new db6ba04  [SPARK-31753][SQL][DOCS][FOLLOW-UP] Add missing keywords in 
the SQL docs
db6ba04 is described below

commit db6ba049c43e2aa1521ed39c9f2b802ad04d111f
Author: GuoPhilipse <46367746+guophili...@users.noreply.github.com>
AuthorDate: Thu Oct 1 08:15:53 2020 +0900

[SPARK-31753][SQL][DOCS][FOLLOW-UP] Add missing keywords in the SQL docs

### What changes were proposed in this pull request?
update sql-ref docs, the following key words will be added in this PR.

CLUSTERED BY
SORTED BY
INTO num_buckets BUCKETS

### Why are the changes needed?
let more users know the sql key words usage

### Does this PR introduce _any_ user-facing change?
No

![image](https://user-images.githubusercontent.com/46367746/94428281-0a6b8080-01c3-11eb-9ff3-899f8da602ca.png)

![image](https://user-images.githubusercontent.com/46367746/94428285-0d667100-01c3-11eb-8a54-90e7641d917b.png)

![image](https://user-images.githubusercontent.com/46367746/94428288-0f303480-01c3-11eb-9e1d-023538aa6e2d.png)

### How was this patch tested?
generate html test

Closes #29883 from GuoPhilipse/add-sql-missing-keywords.

Lead-authored-by: GuoPhilipse 
<46367746+guophili...@users.noreply.github.com>
Co-authored-by: GuoPhilipse 
Signed-off-by: Takeshi Yamamuro 
(cherry picked from commit 3bdbb5546d2517dda6f71613927cc1783c87f319)
Signed-off-by: Takeshi Yamamuro 
---
 docs/sql-ref-syntax-ddl-create-table-datasource.md |  7 -
 docs/sql-ref-syntax-ddl-create-table-hiveformat.md | 32 ++
 2 files changed, 38 insertions(+), 1 deletion(-)

diff --git a/docs/sql-ref-syntax-ddl-create-table-datasource.md 
b/docs/sql-ref-syntax-ddl-create-table-datasource.md
index d334447..ba0516a 100644
--- a/docs/sql-ref-syntax-ddl-create-table-datasource.md
+++ b/docs/sql-ref-syntax-ddl-create-table-datasource.md
@@ -67,7 +67,12 @@ as any order. For example, you can write COMMENT 
table_comment after TBLPROPERTI
 
 * **SORTED BY**
 
-Determines the order in which the data is stored in buckets. Default is 
Ascending order.
+Specifies an ordering of bucket columns. Optionally, one can use ASC for 
an ascending order or DESC for a descending order after any column names in the 
SORTED BY clause.
+If not specified, ASC is assumed by default.
+   
+* **INTO num_buckets BUCKETS**
+
+Specifies buckets numbers, which is used in `CLUSTERED BY` clause.
 
 * **LOCATION**
 
diff --git a/docs/sql-ref-syntax-ddl-create-table-hiveformat.md 
b/docs/sql-ref-syntax-ddl-create-table-hiveformat.md
index 7bf847d..3a8c8d5 100644
--- a/docs/sql-ref-syntax-ddl-create-table-hiveformat.md
+++ b/docs/sql-ref-syntax-ddl-create-table-hiveformat.md
@@ -31,6 +31,9 @@ CREATE [ EXTERNAL ] TABLE [ IF NOT EXISTS ] table_identifier
 [ COMMENT table_comment ]
 [ PARTITIONED BY ( col_name2[:] col_type2 [ COMMENT col_comment2 ], ... ) 
 | ( col_name1, col_name2, ... ) ]
+[ CLUSTERED BY ( col_name1, col_name2, ...) 
+[ SORTED BY ( col_name1 [ ASC | DESC ], col_name2 [ ASC | DESC ], ... 
) ] 
+INTO num_buckets BUCKETS ]
 [ ROW FORMAT row_format ]
 [ STORED AS file_format ]
 [ LOCATION path ]
@@ -65,6 +68,21 @@ as any order. For example, you can write COMMENT 
table_comment after TBLPROPERTI
 
 Partitions are created on the table, based on the columns specified.
 
+* **CLUSTERED BY**
+
+Partitions created on the table will be bucketed into fixed buckets based 
on the column specified for bucketing.
+
+**NOTE:** Bucketing is an optimization technique that uses buckets (and 
bucketing columns) to determine data partitioning and avoid data shuffle.
+
+* **SORTED BY**
+
+Specifies an ordering of bucket columns. Optionally, one can use ASC for 
an ascending order or DESC for a descending order after any column names in the 
SORTED BY clause.
+If not specified, ASC is assumed by default.
+
+* **INTO num_buckets BUCKETS**
+
+Specifies buckets numbers, which is used in `CLUSTERED BY` clause.
+
 * **row_format**
 
 Use the `SERDE` clause to specify a custom SerDe for one table. Otherwise, 
use the `DELIMITED` clause to use the native SerDe and specify the delimiter, 
escape character, null character and so on.
@@ -203,6 +221,20 @@ CREATE EXTERNAL TABLE family (id INT, name STRING)
 STORED AS INPUTFORMAT 
'com.ly.spark.example.serde.io.SerDeExampleInputFormat'
 OUTPUTFORMAT 'com.ly.spark.example.serde.io.SerDeExampleOutputFormat'
 LOCATION '/tmp/family/';
+
+--Use `CLUSTERED BY` clause to create bucket table without `SORTED BY`
+CREATE TABLE clustered_by_test1 (ID INT, AGE STRING)
+CLUSTERED BY (ID)
+INTO 4 BUC

[spark] branch master updated (ece8d8e -> 3bdbb55)

2020-09-30 Thread yamamuro
This is an automated email from the ASF dual-hosted git repository.

yamamuro pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from ece8d8e  [SPARK-33006][K8S][DOCS] Add dynamic PVC usage example into 
K8s doc
 add 3bdbb55  [SPARK-31753][SQL][DOCS][FOLLOW-UP] Add missing keywords in 
the SQL docs

No new revisions were added by this update.

Summary of changes:
 docs/sql-ref-syntax-ddl-create-table-datasource.md |  7 -
 docs/sql-ref-syntax-ddl-create-table-hiveformat.md | 32 ++
 2 files changed, 38 insertions(+), 1 deletion(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch branch-3.0 updated: [SPARK-31753][SQL][DOCS][FOLLOW-UP] Add missing keywords in the SQL docs

2020-09-30 Thread yamamuro
This is an automated email from the ASF dual-hosted git repository.

yamamuro pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new db6ba04  [SPARK-31753][SQL][DOCS][FOLLOW-UP] Add missing keywords in 
the SQL docs
db6ba04 is described below

commit db6ba049c43e2aa1521ed39c9f2b802ad04d111f
Author: GuoPhilipse <46367746+guophili...@users.noreply.github.com>
AuthorDate: Thu Oct 1 08:15:53 2020 +0900

[SPARK-31753][SQL][DOCS][FOLLOW-UP] Add missing keywords in the SQL docs

### What changes were proposed in this pull request?
update sql-ref docs, the following key words will be added in this PR.

CLUSTERED BY
SORTED BY
INTO num_buckets BUCKETS

### Why are the changes needed?
let more users know the sql key words usage

### Does this PR introduce _any_ user-facing change?
No

![image](https://user-images.githubusercontent.com/46367746/94428281-0a6b8080-01c3-11eb-9ff3-899f8da602ca.png)

![image](https://user-images.githubusercontent.com/46367746/94428285-0d667100-01c3-11eb-8a54-90e7641d917b.png)

![image](https://user-images.githubusercontent.com/46367746/94428288-0f303480-01c3-11eb-9e1d-023538aa6e2d.png)

### How was this patch tested?
generate html test

Closes #29883 from GuoPhilipse/add-sql-missing-keywords.

Lead-authored-by: GuoPhilipse 
<46367746+guophili...@users.noreply.github.com>
Co-authored-by: GuoPhilipse 
Signed-off-by: Takeshi Yamamuro 
(cherry picked from commit 3bdbb5546d2517dda6f71613927cc1783c87f319)
Signed-off-by: Takeshi Yamamuro 
---
 docs/sql-ref-syntax-ddl-create-table-datasource.md |  7 -
 docs/sql-ref-syntax-ddl-create-table-hiveformat.md | 32 ++
 2 files changed, 38 insertions(+), 1 deletion(-)

diff --git a/docs/sql-ref-syntax-ddl-create-table-datasource.md 
b/docs/sql-ref-syntax-ddl-create-table-datasource.md
index d334447..ba0516a 100644
--- a/docs/sql-ref-syntax-ddl-create-table-datasource.md
+++ b/docs/sql-ref-syntax-ddl-create-table-datasource.md
@@ -67,7 +67,12 @@ as any order. For example, you can write COMMENT 
table_comment after TBLPROPERTI
 
 * **SORTED BY**
 
-Determines the order in which the data is stored in buckets. Default is 
Ascending order.
+Specifies an ordering of bucket columns. Optionally, one can use ASC for 
an ascending order or DESC for a descending order after any column names in the 
SORTED BY clause.
+If not specified, ASC is assumed by default.
+   
+* **INTO num_buckets BUCKETS**
+
+Specifies buckets numbers, which is used in `CLUSTERED BY` clause.
 
 * **LOCATION**
 
diff --git a/docs/sql-ref-syntax-ddl-create-table-hiveformat.md 
b/docs/sql-ref-syntax-ddl-create-table-hiveformat.md
index 7bf847d..3a8c8d5 100644
--- a/docs/sql-ref-syntax-ddl-create-table-hiveformat.md
+++ b/docs/sql-ref-syntax-ddl-create-table-hiveformat.md
@@ -31,6 +31,9 @@ CREATE [ EXTERNAL ] TABLE [ IF NOT EXISTS ] table_identifier
 [ COMMENT table_comment ]
 [ PARTITIONED BY ( col_name2[:] col_type2 [ COMMENT col_comment2 ], ... ) 
 | ( col_name1, col_name2, ... ) ]
+[ CLUSTERED BY ( col_name1, col_name2, ...) 
+[ SORTED BY ( col_name1 [ ASC | DESC ], col_name2 [ ASC | DESC ], ... 
) ] 
+INTO num_buckets BUCKETS ]
 [ ROW FORMAT row_format ]
 [ STORED AS file_format ]
 [ LOCATION path ]
@@ -65,6 +68,21 @@ as any order. For example, you can write COMMENT 
table_comment after TBLPROPERTI
 
 Partitions are created on the table, based on the columns specified.
 
+* **CLUSTERED BY**
+
+Partitions created on the table will be bucketed into fixed buckets based 
on the column specified for bucketing.
+
+**NOTE:** Bucketing is an optimization technique that uses buckets (and 
bucketing columns) to determine data partitioning and avoid data shuffle.
+
+* **SORTED BY**
+
+Specifies an ordering of bucket columns. Optionally, one can use ASC for 
an ascending order or DESC for a descending order after any column names in the 
SORTED BY clause.
+If not specified, ASC is assumed by default.
+
+* **INTO num_buckets BUCKETS**
+
+Specifies buckets numbers, which is used in `CLUSTERED BY` clause.
+
 * **row_format**
 
 Use the `SERDE` clause to specify a custom SerDe for one table. Otherwise, 
use the `DELIMITED` clause to use the native SerDe and specify the delimiter, 
escape character, null character and so on.
@@ -203,6 +221,20 @@ CREATE EXTERNAL TABLE family (id INT, name STRING)
 STORED AS INPUTFORMAT 
'com.ly.spark.example.serde.io.SerDeExampleInputFormat'
 OUTPUTFORMAT 'com.ly.spark.example.serde.io.SerDeExampleOutputFormat'
 LOCATION '/tmp/family/';
+
+--Use `CLUSTERED BY` clause to create bucket table without `SORTED BY`
+CREATE TABLE clustered_by_test1 (ID INT, AGE STRING)
+CLUSTERED BY (ID)
+INTO 4 BUC

[spark] branch master updated (ece8d8e -> 3bdbb55)

2020-09-30 Thread yamamuro
This is an automated email from the ASF dual-hosted git repository.

yamamuro pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from ece8d8e  [SPARK-33006][K8S][DOCS] Add dynamic PVC usage example into 
K8s doc
 add 3bdbb55  [SPARK-31753][SQL][DOCS][FOLLOW-UP] Add missing keywords in 
the SQL docs

No new revisions were added by this update.

Summary of changes:
 docs/sql-ref-syntax-ddl-create-table-datasource.md |  7 -
 docs/sql-ref-syntax-ddl-create-table-hiveformat.md | 32 ++
 2 files changed, 38 insertions(+), 1 deletion(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch branch-3.0 updated: [SPARK-31753][SQL][DOCS][FOLLOW-UP] Add missing keywords in the SQL docs

2020-09-30 Thread yamamuro
This is an automated email from the ASF dual-hosted git repository.

yamamuro pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new db6ba04  [SPARK-31753][SQL][DOCS][FOLLOW-UP] Add missing keywords in 
the SQL docs
db6ba04 is described below

commit db6ba049c43e2aa1521ed39c9f2b802ad04d111f
Author: GuoPhilipse <46367746+guophili...@users.noreply.github.com>
AuthorDate: Thu Oct 1 08:15:53 2020 +0900

[SPARK-31753][SQL][DOCS][FOLLOW-UP] Add missing keywords in the SQL docs

### What changes were proposed in this pull request?
update sql-ref docs, the following key words will be added in this PR.

CLUSTERED BY
SORTED BY
INTO num_buckets BUCKETS

### Why are the changes needed?
let more users know the sql key words usage

### Does this PR introduce _any_ user-facing change?
No

![image](https://user-images.githubusercontent.com/46367746/94428281-0a6b8080-01c3-11eb-9ff3-899f8da602ca.png)

![image](https://user-images.githubusercontent.com/46367746/94428285-0d667100-01c3-11eb-8a54-90e7641d917b.png)

![image](https://user-images.githubusercontent.com/46367746/94428288-0f303480-01c3-11eb-9e1d-023538aa6e2d.png)

### How was this patch tested?
generate html test

Closes #29883 from GuoPhilipse/add-sql-missing-keywords.

Lead-authored-by: GuoPhilipse 
<46367746+guophili...@users.noreply.github.com>
Co-authored-by: GuoPhilipse 
Signed-off-by: Takeshi Yamamuro 
(cherry picked from commit 3bdbb5546d2517dda6f71613927cc1783c87f319)
Signed-off-by: Takeshi Yamamuro 
---
 docs/sql-ref-syntax-ddl-create-table-datasource.md |  7 -
 docs/sql-ref-syntax-ddl-create-table-hiveformat.md | 32 ++
 2 files changed, 38 insertions(+), 1 deletion(-)

diff --git a/docs/sql-ref-syntax-ddl-create-table-datasource.md 
b/docs/sql-ref-syntax-ddl-create-table-datasource.md
index d334447..ba0516a 100644
--- a/docs/sql-ref-syntax-ddl-create-table-datasource.md
+++ b/docs/sql-ref-syntax-ddl-create-table-datasource.md
@@ -67,7 +67,12 @@ as any order. For example, you can write COMMENT 
table_comment after TBLPROPERTI
 
 * **SORTED BY**
 
-Determines the order in which the data is stored in buckets. Default is 
Ascending order.
+Specifies an ordering of bucket columns. Optionally, one can use ASC for 
an ascending order or DESC for a descending order after any column names in the 
SORTED BY clause.
+If not specified, ASC is assumed by default.
+   
+* **INTO num_buckets BUCKETS**
+
+Specifies buckets numbers, which is used in `CLUSTERED BY` clause.
 
 * **LOCATION**
 
diff --git a/docs/sql-ref-syntax-ddl-create-table-hiveformat.md 
b/docs/sql-ref-syntax-ddl-create-table-hiveformat.md
index 7bf847d..3a8c8d5 100644
--- a/docs/sql-ref-syntax-ddl-create-table-hiveformat.md
+++ b/docs/sql-ref-syntax-ddl-create-table-hiveformat.md
@@ -31,6 +31,9 @@ CREATE [ EXTERNAL ] TABLE [ IF NOT EXISTS ] table_identifier
 [ COMMENT table_comment ]
 [ PARTITIONED BY ( col_name2[:] col_type2 [ COMMENT col_comment2 ], ... ) 
 | ( col_name1, col_name2, ... ) ]
+[ CLUSTERED BY ( col_name1, col_name2, ...) 
+[ SORTED BY ( col_name1 [ ASC | DESC ], col_name2 [ ASC | DESC ], ... 
) ] 
+INTO num_buckets BUCKETS ]
 [ ROW FORMAT row_format ]
 [ STORED AS file_format ]
 [ LOCATION path ]
@@ -65,6 +68,21 @@ as any order. For example, you can write COMMENT 
table_comment after TBLPROPERTI
 
 Partitions are created on the table, based on the columns specified.
 
+* **CLUSTERED BY**
+
+Partitions created on the table will be bucketed into fixed buckets based 
on the column specified for bucketing.
+
+**NOTE:** Bucketing is an optimization technique that uses buckets (and 
bucketing columns) to determine data partitioning and avoid data shuffle.
+
+* **SORTED BY**
+
+Specifies an ordering of bucket columns. Optionally, one can use ASC for 
an ascending order or DESC for a descending order after any column names in the 
SORTED BY clause.
+If not specified, ASC is assumed by default.
+
+* **INTO num_buckets BUCKETS**
+
+Specifies buckets numbers, which is used in `CLUSTERED BY` clause.
+
 * **row_format**
 
 Use the `SERDE` clause to specify a custom SerDe for one table. Otherwise, 
use the `DELIMITED` clause to use the native SerDe and specify the delimiter, 
escape character, null character and so on.
@@ -203,6 +221,20 @@ CREATE EXTERNAL TABLE family (id INT, name STRING)
 STORED AS INPUTFORMAT 
'com.ly.spark.example.serde.io.SerDeExampleInputFormat'
 OUTPUTFORMAT 'com.ly.spark.example.serde.io.SerDeExampleOutputFormat'
 LOCATION '/tmp/family/';
+
+--Use `CLUSTERED BY` clause to create bucket table without `SORTED BY`
+CREATE TABLE clustered_by_test1 (ID INT, AGE STRING)
+CLUSTERED BY (ID)
+INTO 4 BUC

[spark] branch master updated (ece8d8e -> 3bdbb55)

2020-09-30 Thread yamamuro
This is an automated email from the ASF dual-hosted git repository.

yamamuro pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from ece8d8e  [SPARK-33006][K8S][DOCS] Add dynamic PVC usage example into 
K8s doc
 add 3bdbb55  [SPARK-31753][SQL][DOCS][FOLLOW-UP] Add missing keywords in 
the SQL docs

No new revisions were added by this update.

Summary of changes:
 docs/sql-ref-syntax-ddl-create-table-datasource.md |  7 -
 docs/sql-ref-syntax-ddl-create-table-hiveformat.md | 32 ++
 2 files changed, 38 insertions(+), 1 deletion(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch branch-3.0 updated: [SPARK-31753][SQL][DOCS][FOLLOW-UP] Add missing keywords in the SQL docs

2020-09-30 Thread yamamuro
This is an automated email from the ASF dual-hosted git repository.

yamamuro pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new db6ba04  [SPARK-31753][SQL][DOCS][FOLLOW-UP] Add missing keywords in 
the SQL docs
db6ba04 is described below

commit db6ba049c43e2aa1521ed39c9f2b802ad04d111f
Author: GuoPhilipse <46367746+guophili...@users.noreply.github.com>
AuthorDate: Thu Oct 1 08:15:53 2020 +0900

[SPARK-31753][SQL][DOCS][FOLLOW-UP] Add missing keywords in the SQL docs

### What changes were proposed in this pull request?
update sql-ref docs, the following key words will be added in this PR.

CLUSTERED BY
SORTED BY
INTO num_buckets BUCKETS

### Why are the changes needed?
let more users know the sql key words usage

### Does this PR introduce _any_ user-facing change?
No

![image](https://user-images.githubusercontent.com/46367746/94428281-0a6b8080-01c3-11eb-9ff3-899f8da602ca.png)

![image](https://user-images.githubusercontent.com/46367746/94428285-0d667100-01c3-11eb-8a54-90e7641d917b.png)

![image](https://user-images.githubusercontent.com/46367746/94428288-0f303480-01c3-11eb-9e1d-023538aa6e2d.png)

### How was this patch tested?
generate html test

Closes #29883 from GuoPhilipse/add-sql-missing-keywords.

Lead-authored-by: GuoPhilipse 
<46367746+guophili...@users.noreply.github.com>
Co-authored-by: GuoPhilipse 
Signed-off-by: Takeshi Yamamuro 
(cherry picked from commit 3bdbb5546d2517dda6f71613927cc1783c87f319)
Signed-off-by: Takeshi Yamamuro 
---
 docs/sql-ref-syntax-ddl-create-table-datasource.md |  7 -
 docs/sql-ref-syntax-ddl-create-table-hiveformat.md | 32 ++
 2 files changed, 38 insertions(+), 1 deletion(-)

diff --git a/docs/sql-ref-syntax-ddl-create-table-datasource.md b/docs/sql-ref-syntax-ddl-create-table-datasource.md
index d334447..ba0516a 100644
--- a/docs/sql-ref-syntax-ddl-create-table-datasource.md
+++ b/docs/sql-ref-syntax-ddl-create-table-datasource.md
@@ -67,7 +67,12 @@ as any order. For example, you can write COMMENT table_comment after TBLPROPERTI
 
 * **SORTED BY**
 
-Determines the order in which the data is stored in buckets. Default is Ascending order.
+Specifies an ordering of bucket columns. Optionally, one can use ASC for an ascending order or DESC for a descending order after any column names in the SORTED BY clause.
+If not specified, ASC is assumed by default.
+   
+* **INTO num_buckets BUCKETS**
+
+Specifies buckets numbers, which is used in `CLUSTERED BY` clause.
 
 * **LOCATION**
 
diff --git a/docs/sql-ref-syntax-ddl-create-table-hiveformat.md b/docs/sql-ref-syntax-ddl-create-table-hiveformat.md
index 7bf847d..3a8c8d5 100644
--- a/docs/sql-ref-syntax-ddl-create-table-hiveformat.md
+++ b/docs/sql-ref-syntax-ddl-create-table-hiveformat.md
@@ -31,6 +31,9 @@ CREATE [ EXTERNAL ] TABLE [ IF NOT EXISTS ] table_identifier
 [ COMMENT table_comment ]
 [ PARTITIONED BY ( col_name2[:] col_type2 [ COMMENT col_comment2 ], ... ) 
 | ( col_name1, col_name2, ... ) ]
+[ CLUSTERED BY ( col_name1, col_name2, ...) 
+[ SORTED BY ( col_name1 [ ASC | DESC ], col_name2 [ ASC | DESC ], ... ) ] 
+INTO num_buckets BUCKETS ]
 [ ROW FORMAT row_format ]
 [ STORED AS file_format ]
 [ LOCATION path ]
@@ -65,6 +68,21 @@ as any order. For example, you can write COMMENT table_comment after TBLPROPERTI
 
 Partitions are created on the table, based on the columns specified.
 
+* **CLUSTERED BY**
+
+Partitions created on the table will be bucketed into fixed buckets based on the column specified for bucketing.
+
+**NOTE:** Bucketing is an optimization technique that uses buckets (and bucketing columns) to determine data partitioning and avoid data shuffle.
+
+* **SORTED BY**
+
+Specifies an ordering of bucket columns. Optionally, one can use ASC for an ascending order or DESC for a descending order after any column names in the SORTED BY clause.
+If not specified, ASC is assumed by default.
+
+* **INTO num_buckets BUCKETS**
+
+Specifies buckets numbers, which is used in `CLUSTERED BY` clause.
+
 * **row_format**
 
 Use the `SERDE` clause to specify a custom SerDe for one table. Otherwise, use the `DELIMITED` clause to use the native SerDe and specify the delimiter, escape character, null character and so on.
@@ -203,6 +221,20 @@ CREATE EXTERNAL TABLE family (id INT, name STRING)
 STORED AS INPUTFORMAT 'com.ly.spark.example.serde.io.SerDeExampleInputFormat'
 OUTPUTFORMAT 'com.ly.spark.example.serde.io.SerDeExampleOutputFormat'
 LOCATION '/tmp/family/';
+
+--Use `CLUSTERED BY` clause to create bucket table without `SORTED BY`
+CREATE TABLE clustered_by_test1 (ID INT, AGE STRING)
+CLUSTERED BY (ID)
+INTO 4 BUC
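The `CLUSTERED BY ... INTO n BUCKETS` clause documented above assigns each row to a bucket by hashing the bucket columns. A minimal Python sketch of that assignment, using the built-in `hash()` purely as a stand-in for Spark's Murmur3 hash (so the bucket ids below will not match Spark's), just to show the shape of the computation:

```python
def bucket_id(key, num_buckets):
    """Non-negative modulus of the key's hash, mirroring pmod(hash(col), n).
    hash() is an illustrative stand-in for Spark's Murmur3 hash."""
    return hash(key) % num_buckets

# Rows shaped like the clustered_by_test1 example (ID INT, AGE STRING),
# distributed into 4 buckets by the ID column.
rows = [(1, "20"), (2, "25"), (3, "30"), (7, "35")]
num_buckets = 4

buckets = {}
for id_, age in rows:
    buckets.setdefault(bucket_id(id_, num_buckets), []).append((id_, age))

for b in sorted(buckets):
    print(b, buckets[b])
```

Rows with equal bucket-column values always land in the same bucket, which is what lets Spark avoid a shuffle when joining two tables bucketed the same way.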


[spark] branch master updated (3bdbb55 -> d75222d)

2020-09-30 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 3bdbb55  [SPARK-31753][SQL][DOCS][FOLLOW-UP] Add missing keywords in the SQL docs
 add d75222d  [SPARK-33012][BUILD][K8S] Upgrade fabric8 to 4.10.3

No new revisions were added by this update.

Summary of changes:
 dev/deps/spark-deps-hadoop-2.7-hive-1.2| 28 ++
 dev/deps/spark-deps-hadoop-2.7-hive-2.3| 28 ++
 dev/deps/spark-deps-hadoop-3.2-hive-2.3| 28 ++
 resource-managers/kubernetes/core/pom.xml  |  2 +-
 .../kubernetes/integration-tests/pom.xml   |  2 +-
 5 files changed, 71 insertions(+), 17 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (d75222d -> 0b5a379)

2020-09-30 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from d75222d  [SPARK-33012][BUILD][K8S] Upgrade fabric8 to 4.10.3
 add 0b5a379  [SPARK-33023][CORE] Judge  path of Windows need  add condition `Utils.isWindows`

No new revisions were added by this update.

Summary of changes:
 core/src/main/scala/org/apache/spark/SparkContext.scala | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (0b5a379 -> 28ed3a5)

2020-09-30 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 0b5a379  [SPARK-33023][CORE] Judge  path of Windows need  add condition `Utils.isWindows`
 add 28ed3a5  [SPARK-32723][WEBUI] Upgrade to jQuery 3.5.1

No new revisions were added by this update.

Summary of changes:
 core/src/main/resources/org/apache/spark/ui/static/jquery-3.4.1.min.js | 2 --
 core/src/main/resources/org/apache/spark/ui/static/jquery-3.5.1.min.js | 2 ++
 core/src/main/scala/org/apache/spark/ui/UIUtils.scala  | 2 +-
 dev/.rat-excludes  | 2 +-
 docs/_layouts/global.html  | 2 +-
 docs/js/vendor/jquery-3.4.1.min.js | 2 --
 docs/js/vendor/jquery-3.5.1.min.js | 2 ++
 7 files changed, 7 insertions(+), 7 deletions(-)
 delete mode 100644 core/src/main/resources/org/apache/spark/ui/static/jquery-3.4.1.min.js
 create mode 100644 core/src/main/resources/org/apache/spark/ui/static/jquery-3.5.1.min.js
 delete mode 100644 docs/js/vendor/jquery-3.4.1.min.js
 create mode 100644 docs/js/vendor/jquery-3.5.1.min.js


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (28ed3a5 -> 5651284)

2020-09-30 Thread yamamuro
This is an automated email from the ASF dual-hosted git repository.

yamamuro pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 28ed3a5  [SPARK-32723][WEBUI] Upgrade to jQuery 3.5.1
 add 5651284  [SPARK-32992][SQL] Map Oracle's ROWID type to StringType in read via JDBC

No new revisions were added by this update.

Summary of changes:
 .../org/apache/spark/sql/jdbc/OracleIntegrationSuite.scala| 11 +++
 .../main/scala/org/apache/spark/sql/jdbc/OracleDialect.scala  |  6 ++
 2 files changed, 17 insertions(+)
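The dialect change above tells Spark's JDBC reader to treat Oracle's ROWID column type as a string column. A rough Python sketch of that kind of dialect-level type mapping — `get_catalyst_type` and the string return values are illustrative stand-ins; the real `OracleDialect` is Scala and returns catalyst `DataType` objects:

```python
from typing import Optional

# java.sql.Types.ROWID is the vendor-neutral JDBC code the driver reports.
ROWID = -8

def get_catalyst_type(sql_type: int, type_name: str) -> Optional[str]:
    """Return a Spark SQL type name for an Oracle-specific JDBC type,
    or None to fall back to the generic JDBC-to-catalyst mapping."""
    if sql_type == ROWID:
        # ROWID values are opaque row addresses; reading them as text
        # is the safe representation.
        return "StringType"
    return None

print(get_catalyst_type(ROWID, "ROWID"))    # StringType
print(get_catalyst_type(12, "VARCHAR2"))    # None -> generic mapping
```

Returning `None` for everything else mirrors how Spark dialects only override the types the generic mapping gets wrong.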


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org


