svn commit: r31234 - in /dev/spark/2.4.1-SNAPSHOT-2018_11_29_22_45-4661ac7-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-11-29 Thread pwendell
Author: pwendell Date: Fri Nov 30 07:00:15 2018 New Revision: 31234 Log: Apache Spark 2.4.1-SNAPSHOT-2018_11_29_22_45-4661ac7 docs [This commit notification would consist of 1476 parts, which exceeds the limit of 50, so it has been shortened to this summary.]

svn commit: r31228 - in /dev/spark/3.0.0-SNAPSHOT-2018_11_29_20_42-9cfc3ee-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-11-29 Thread pwendell
Author: pwendell Date: Fri Nov 30 04:55:05 2018 New Revision: 31228 Log: Apache Spark 3.0.0-SNAPSHOT-2018_11_29_20_42-9cfc3ee docs [This commit notification would consist of 1753 parts, which exceeds the limit of 50, so it has been shortened to this summary.]

spark git commit: [SPARK-26188][SQL] FileIndex: don't infer data types of partition columns if user specifies schema

2018-11-29 Thread wenchen
Repository: spark Updated Branches: refs/heads/branch-2.4 94206c722 -> 4661ac76a [SPARK-26188][SQL] FileIndex: don't infer data types of partition columns if user specifies schema ## What changes were proposed in this pull request? This PR is to fix a regression introduced in:

spark git commit: [SPARK-26188][SQL] FileIndex: don't infer data types of partition columns if user specifies schema

2018-11-29 Thread wenchen
Repository: spark Updated Branches: refs/heads/master 8edb64c1b -> 9cfc3ee62 [SPARK-26188][SQL] FileIndex: don't infer data types of partition columns if user specifies schema ## What changes were proposed in this pull request? This PR is to fix a regression introduced in:
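The gist of the fix can be sketched as a small resolver (the function and names below are hypothetical, for illustration only, not Spark's internals): when the user supplies a schema, a partition column takes its declared type instead of an inferred one, so a partition value like `0001` keeps its leading zeros as a string.

```python
def partition_column_type(col, user_schema, inferred_types):
    """Pick a partition column's data type: a user-specified schema
    wins over inference (hypothetical helper illustrating the fix)."""
    if user_schema and col in user_schema:
        return user_schema[col]               # user schema: skip inference
    return inferred_types.get(col, "string")  # otherwise fall back to inference
```

For example, with a user schema declaring `part` as `string`, the inferred `int` type is ignored and the leading zeros survive.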

spark git commit: [SPARK-26060][SQL] Track SparkConf entries and make SET command reject such entries.

2018-11-29 Thread wenchen
Repository: spark Updated Branches: refs/heads/master 66b204646 -> 8edb64c1b [SPARK-26060][SQL] Track SparkConf entries and make SET command reject such entries. ## What changes were proposed in this pull request? Currently the `SET` command works without any warnings even if the specified
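The intended behavior can be illustrated with a minimal sketch (the key names and error message below are illustrative, not Spark's actual tracked entries): static `SparkConf` entries are registered, and `SET` refuses to modify them at runtime.

```python
# Illustrative registry of static (startup-only) config entries.
STATIC_CONF_KEYS = {"spark.driver.memory", "spark.executor.instances"}

class RuntimeConf:
    """Toy runtime configuration that rejects SET on static entries."""
    def __init__(self):
        self._settings = {}

    def set(self, key, value):
        # Static entries can only be set at application startup.
        if key in STATIC_CONF_KEYS:
            raise ValueError(f"Cannot modify the value of a static config: {key}")
        self._settings[key] = value

    def get(self, key):
        return self._settings[key]
```

Runtime-only settings go through unchanged; a static entry raises instead of silently appearing to take effect.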

spark git commit: [SPARK-25446][R] Add schema_of_json() and schema_of_csv() to R

2018-11-29 Thread gurwls223
Repository: spark Updated Branches: refs/heads/master 0166c7373 -> 66b204646 [SPARK-25446][R] Add schema_of_json() and schema_of_csv() to R ## What changes were proposed in this pull request? This PR proposes to expose `schema_of_json` and `schema_of_csv` at R side. **`schema_of_json`**:
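What these functions do can be sketched with a toy schema inferrer (a simplified stand-in for illustration; Spark's real inference handles many more cases and produces its own DDL strings):

```python
import json

def schema_of_json(sample):
    """Toy illustration of the idea behind schema_of_json: derive a
    struct-style schema string from a single JSON sample."""
    def type_of(v):
        if isinstance(v, bool):
            return "boolean"          # check bool before int: bool is an int subclass
        if isinstance(v, int):
            return "bigint"
        if isinstance(v, float):
            return "double"
        if isinstance(v, dict):
            fields = ",".join(f"{k}:{type_of(x)}" for k, x in v.items())
            return f"struct<{fields}>"
        if isinstance(v, list):
            return f"array<{type_of(v[0]) if v else 'string'}>"
        return "string"
    return type_of(json.loads(sample))
```

Given `'{"a": 1, "b": "x"}'`, this sketch yields `struct<a:bigint,b:string>`, which is the general shape of output the Spark functions return.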

spark git commit: [SPARK-25501][SS] Add Kafka delegation token support.

2018-11-29 Thread vanzin
Repository: spark Updated Branches: refs/heads/master f97326bcd -> 0166c7373 [SPARK-25501][SS] Add Kafka delegation token support. ## What changes were proposed in this pull request? It adds Kafka delegation token support for Structured Streaming. Please see the relevant

svn commit: r31224 - in /dev/spark/3.0.0-SNAPSHOT-2018_11_29_16_33-f97326b-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-11-29 Thread pwendell
Author: pwendell Date: Fri Nov 30 00:46:03 2018 New Revision: 31224 Log: Apache Spark 3.0.0-SNAPSHOT-2018_11_29_16_33-f97326b docs [This commit notification would consist of 1753 parts, which exceeds the limit of 50, so it has been shortened to this summary.]

spark git commit: [SPARK-25977][SQL] Parsing decimals from CSV using locale

2018-11-29 Thread gurwls223
Repository: spark Updated Branches: refs/heads/master 59741887e -> f97326bcd [SPARK-25977][SQL] Parsing decimals from CSV using locale ## What changes were proposed in this pull request? In the PR, I propose using the locale option to parse decimals from CSV input. After the changes,
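The motivation: the same digits mean different numbers under different locales (`1.234,56` is one thousand two hundred thirty-four and change in `de-DE`). Spark delegates to Java's locale-aware number formatting; the sketch below hand-rolls the separator handling for two locales purely as an illustration of what the option does.

```python
from decimal import Decimal

# Grouping/decimal separators for two locales (illustrative subset).
LOCALE_SEPARATORS = {
    "en-US": {"group": ",", "decimal": "."},
    "de-DE": {"group": ".", "decimal": ","},
}

def parse_decimal(text, locale="en-US"):
    """Parse a locale-formatted decimal string into a Decimal by
    stripping the grouping separator and normalizing the decimal mark."""
    sep = LOCALE_SEPARATORS[locale]
    normalized = text.replace(sep["group"], "").replace(sep["decimal"], ".")
    return Decimal(normalized)
```

Both `"1,234.56"` under `en-US` and `"1.234,56"` under `de-DE` normalize to the same `Decimal("1234.56")`.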

spark git commit: [SPARK-25905][CORE] When getting a remote block, avoid forcing a conversion to a ChunkedByteBuffer

2018-11-29 Thread irashid
Repository: spark Updated Branches: refs/heads/master cb368f2c2 -> 59741887e [SPARK-25905][CORE] When getting a remote block, avoid forcing a conversion to a ChunkedByteBuffer ## What changes were proposed in this pull request? In `BlockManager`, `getRemoteValues` gets a `ChunkedByteBuffer`

svn commit: r31217 - in /dev/spark/3.0.0-SNAPSHOT-2018_11_29_12_20-cb368f2-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-11-29 Thread pwendell
Author: pwendell Date: Thu Nov 29 20:32:50 2018 New Revision: 31217 Log: Apache Spark 3.0.0-SNAPSHOT-2018_11_29_12_20-cb368f2 docs [This commit notification would consist of 1753 parts, which exceeds the limit of 50, so it has been shortened to this summary.]

spark git commit: [SPARK-26142] followup: Move sql shuffle read metrics relatives to SQLShuffleMetricsReporter

2018-11-29 Thread rxin
Repository: spark Updated Branches: refs/heads/master 9fdc7a840 -> cb368f2c2 [SPARK-26142] followup: Move sql shuffle read metrics relatives to SQLShuffleMetricsReporter ## What changes were proposed in this pull request? Follow up for https://github.com/apache/spark/pull/23128, move sql

spark git commit: [SPARK-26158][MLLIB] fix covariance accuracy problem for DenseVector

2018-11-29 Thread srowen
Repository: spark Updated Branches: refs/heads/master 1144df3b5 -> 9fdc7a840 [SPARK-26158][MLLIB] fix covariance accuracy problem for DenseVector ## What changes were proposed in this pull request? Enhance the accuracy of the covariance logic in RowMatrix's computeCovariance function ## How
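The accuracy issue here is the classic one for covariance: the single-pass formula `E[xy] - E[x]E[y]` suffers catastrophic cancellation when values are large relative to their spread. A Python sketch of the difference (an illustration of the numerical problem, not Spark's actual code):

```python
def covariance_naive(xs, ys):
    """E[xy] - E[x]E[y]: cancels catastrophically when means are large."""
    n = len(xs)
    return sum(x * y for x, y in zip(xs, ys)) / n - (sum(xs) / n) * (sum(ys) / n)

def covariance_two_pass(xs, ys):
    """Subtract the means first, then accumulate: numerically stable."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
```

For data like `[1e9, 1e9 + 1, 1e9 + 2]` the true covariance is `2/3`; the two-pass version recovers it, while the naive formula loses the answer entirely in rounding error.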

svn commit: r31211 - in /dev/spark/2.4.1-SNAPSHOT-2018_11_29_10_11-94206c7-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-11-29 Thread pwendell
Author: pwendell Date: Thu Nov 29 18:27:19 2018 New Revision: 31211 Log: Apache Spark 2.4.1-SNAPSHOT-2018_11_29_10_11-94206c7 docs [This commit notification would consist of 1476 parts, which exceeds the limit of 50, so it has been shortened to this summary.]

spark git commit: [SPARK-26015][K8S] Set a default UID for Spark on K8S Images

2018-11-29 Thread vanzin
Repository: spark Updated Branches: refs/heads/master 24e78b7f1 -> 1144df3b5 [SPARK-26015][K8S] Set a default UID for Spark on K8S Images Adds USER directives to the Dockerfiles, configurable via a build argument (`spark_uid`), for easy customisation. A `-u` flag is added to
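In Dockerfile terms the change looks roughly like the fragment below (a sketch, not the full Dockerfile; the default UID of 185 is an assumption based on the commonly cited value for the official images):

```dockerfile
# Build-time configurable UID for the Spark user (sketch of the change).
# Override at build time, e.g.: docker build --build-arg spark_uid=1000 ...
ARG spark_uid=185

# ... base image setup and Spark installation steps elided ...

# Run the container as the non-root UID by default.
USER ${spark_uid}
```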

spark git commit: [SPARK-26186][SPARK-26184][CORE] Last updated time is not getting updated for the Inprogress application

2018-11-29 Thread vanzin
Repository: spark Updated Branches: refs/heads/branch-2.4 7200915fa -> 94206c722 [SPARK-26186][SPARK-26184][CORE] Last updated time is not getting updated for the Inprogress application ## What changes were proposed in this pull request? When the

spark git commit: [SPARK-26186][SPARK-26184][CORE] Last updated time is not getting updated for the Inprogress application

2018-11-29 Thread vanzin
Repository: spark Updated Branches: refs/heads/master de4228152 -> 24e78b7f1 [SPARK-26186][SPARK-26184][CORE] Last updated time is not getting updated for the Inprogress application ## What changes were proposed in this pull request? When the

spark-website git commit: Add back IBM Spectrum Conductor to third party list

2018-11-29 Thread srowen
Repository: spark-website Updated Branches: refs/heads/asf-site b9325e833 -> 8e88577bd Add back IBM Spectrum Conductor to third party list IBM Spectrum Conductor (old name IBM Spectrum Conductor with Spark) was part of the list of third party projects but was removed as it had a dead link.

spark-website git commit: Removed `Runs everywhere` duplication

2018-11-29 Thread srowen
Repository: spark-website Updated Branches: refs/heads/asf-site 4762eb4c2 -> b9325e833 Removed `Runs everywhere` duplication Author: Anton Rybochkin Closes #154 from raipc/asf-site. Project: http://git-wip-us.apache.org/repos/asf/spark-website/repo Commit:

spark-website git commit: Doc for [SPARK-26177] Automated formatting for Scala code

2018-11-29 Thread srowen
Repository: spark-website Updated Branches: refs/heads/asf-site b3d3882db -> 4762eb4c2 Doc for [SPARK-26177] Automated formatting for Scala code Docs for https://github.com/apache/spark/pull/23148 Author: cody koeninger Closes #159 from koeninger/scalafmt. Project:

spark git commit: [MINOR][DOCS][WIP] Fix Typos

2018-11-29 Thread srowen
Repository: spark Updated Branches: refs/heads/master 31c4fab3f -> de4228152 [MINOR][DOCS][WIP] Fix Typos ## What changes were proposed in this pull request? Fix Typos. ## How was this patch tested? NA Closes #23145 from kjmrknsn/docUpdate. Authored-by: Keiji Yoshida Signed-off-by: Sean

spark git commit: [SPARK-26081][SQL] Prevent empty files for empty partitions in Text datasources

2018-11-29 Thread srowen
Repository: spark Updated Branches: refs/heads/master 9a09e91a3 -> 31c4fab3f [SPARK-26081][SQL] Prevent empty files for empty partitions in Text datasources ## What changes were proposed in this pull request? In the PR, I propose to postpone creation of

svn commit: r31207 - in /dev/spark/3.0.0-SNAPSHOT-2018_11_29_08_00-9a09e91-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-11-29 Thread pwendell
Author: pwendell Date: Thu Nov 29 16:13:02 2018 New Revision: 31207 Log: Apache Spark 3.0.0-SNAPSHOT-2018_11_29_08_00-9a09e91 docs [This commit notification would consist of 1753 parts, which exceeds the limit of 50, so it has been shortened to this summary.]

spark git commit: [SPARK-26177] Automated formatting for Scala code

2018-11-29 Thread srowen
Repository: spark Updated Branches: refs/heads/master e3ea93ab6 -> 9a09e91a3 [SPARK-26177] Automated formatting for Scala code ## What changes were proposed in this pull request? Add a maven plugin and wrapper script to use scalafmt to format files that differ from git master. Intention is

spark git commit: [MINOR][ML] add missing params to Instr

2018-11-29 Thread srowen
Repository: spark Updated Branches: refs/heads/master 06a87711b -> e3ea93ab6 [MINOR][ML] add missing params to Instr ## What changes were proposed in this pull request? add the following params to Instr: GBTC: validationTol; GBTR: validationTol, validationIndicatorCol; colnames in LiR, LinearSVC,

spark git commit: [SPARK-26024][FOLLOWUP][MINOR] Follow-up to remove extra blank lines in R function descriptions

2018-11-29 Thread srowen
Repository: spark Updated Branches: refs/heads/master b9b68a6dc -> 06a87711b [SPARK-26024][FOLLOWUP][MINOR] Follow-up to remove extra blank lines in R function descriptions ## What changes were proposed in this pull request? Follow-up to remove extra blank lines in R function descriptions

spark git commit: [SPARK-26211][SQL] Fix InSet for binary, and struct and array with null.

2018-11-29 Thread wenchen
Repository: spark Updated Branches: refs/heads/branch-2.3 96a5a127e -> e96ba8430 [SPARK-26211][SQL] Fix InSet for binary, and struct and array with null. Currently `InSet` doesn't work properly for binary type, or struct and array type with null value in the set. Because, as for binary type,

spark git commit: [SPARK-26211][SQL] Fix InSet for binary, and struct and array with null.

2018-11-29 Thread wenchen
Repository: spark Updated Branches: refs/heads/branch-2.4 99a9107c9 -> 7200915fa [SPARK-26211][SQL] Fix InSet for binary, and struct and array with null. ## What changes were proposed in this pull request? Currently `InSet` doesn't work properly for binary type, or struct and array type

spark git commit: [SPARK-26211][SQL] Fix InSet for binary, and struct and array with null.

2018-11-29 Thread wenchen
Repository: spark Updated Branches: refs/heads/master 7a83d7140 -> b9b68a6dc [SPARK-26211][SQL] Fix InSet for binary, and struct and array with null. ## What changes were proposed in this pull request? Currently `InSet` doesn't work properly for binary type, or struct and array type with
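The failure mode can be illustrated with a Python analogy (Spark's actual fix is in Scala; on the JVM the root cause is that byte arrays use reference equality/hashing): membership must be value-based for binary and nested values, and a null in the set makes a non-match unknown rather than false, per SQL semantics.

```python
def in_set(value, candidates):
    """Value-based membership for 'binary' (bytearray) and nested (list)
    values, with SQL-style null handling: no match against a set that
    contains null yields unknown (None). Illustrative only."""
    def normalize(v):
        if isinstance(v, bytearray):
            return bytes(v)                       # value-based comparison
        if isinstance(v, list):
            return tuple(normalize(x) for x in v)  # recurse into nested values
        return v

    if value is None:
        return None                               # NULL IN (...) is unknown
    norm = normalize(value)
    pool = [normalize(c) for c in candidates]
    if norm in pool:
        return True
    return None if None in pool else False        # null in set: unknown, else false
```

So `in_set(1, [2, None])` is unknown (`None`), not `False`, and binary or nested values match by content rather than by identity.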

spark git commit: [SPARK-26163][SQL] Parsing decimals from JSON using locale

2018-11-29 Thread wenchen
Repository: spark Updated Branches: refs/heads/master 8bfea86b1 -> 7a83d7140 [SPARK-26163][SQL] Parsing decimals from JSON using locale ## What changes were proposed in this pull request? In the PR, I propose using the locale option to parse (and infer) decimals from JSON input. After