svn commit: r27730 - in /dev/spark/2.4.0-SNAPSHOT-2018_06_25_20_01-e07aee2-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-06-25 Thread pwendell
Author: pwendell Date: Tue Jun 26 03:17:24 2018 New Revision: 27730 Log: Apache Spark 2.4.0-SNAPSHOT-2018_06_25_20_01-e07aee2 docs [This commit notification would consist of 1468 parts, which exceeds the limit of 50, so it was shortened to a summary.]

spark git commit: [SPARK-24636][SQL] Type coercion of arrays for array_join function

2018-06-25 Thread gurwls223
Repository: spark Updated Branches: refs/heads/master c7967c604 -> e07aee216 [SPARK-24636][SQL] Type coercion of arrays for array_join function ## What changes were proposed in this pull request? Presto's implementation accepts arbitrary arrays of primitive types as an input: ``` presto>
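The truncated Presto snippet above illustrates the motivation: `array_join` should accept arrays of any primitive type by coercing the elements to strings before joining, rather than accepting only string arrays. A minimal Java sketch of that cast-then-join behavior (a hypothetical helper, not Spark's analyzer code):

```java
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ArrayJoinSketch {
    // Hypothetical stand-in for array_join's coercion: convert each element
    // to its string form, then concatenate with the delimiter.
    static String arrayJoin(Object[] elements, String delimiter) {
        return Stream.of(elements)
                .map(String::valueOf)
                .collect(Collectors.joining(delimiter));
    }

    public static void main(String[] args) {
        // An integer array is accepted just like a string array.
        System.out.println(arrayJoin(new Object[] {1, 2, 3}, ", ")); // 1, 2, 3
    }
}
```

In Spark itself the coercion happens at analysis time, by inserting a cast from the input array type to `ArrayType(StringType)`.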

spark git commit: [SPARK-23776][DOC] Update instructions for running PySpark after building with SBT

2018-06-25 Thread gurwls223
Repository: spark Updated Branches: refs/heads/master d48803bf6 -> 4c059ebc6 [SPARK-23776][DOC] Update instructions for running PySpark after building with SBT ## What changes were proposed in this pull request? This update tells the reader how to build Spark with SBT such that pyspark-sql

spark git commit: [SPARK-24418][BUILD] Upgrade Scala to 2.11.12 and 2.12.6

2018-06-25 Thread jshao
Repository: spark Updated Branches: refs/heads/master 4c059ebc6 -> c7967c604 [SPARK-24418][BUILD] Upgrade Scala to 2.11.12 and 2.12.6 ## What changes were proposed in this pull request? Scala is upgraded to `2.11.12` and `2.12.6`. We used `loadFIles()` in `ILoop` as a hook to initialize the

svn commit: r27729 - in /dev/spark/2.3.2-SNAPSHOT-2018_06_25_18_01-db538b2-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-06-25 Thread pwendell
Author: pwendell Date: Tue Jun 26 01:16:22 2018 New Revision: 27729 Log: Apache Spark 2.3.2-SNAPSHOT-2018_06_25_18_01-db538b2 docs [This commit notification would consist of 1443 parts, which exceeds the limit of 50, so it was shortened to a summary.]

spark git commit: [SPARK-24324][PYTHON][FOLLOWUP] Grouped Map positional conf should have deprecation note

2018-06-25 Thread cutlerb
Repository: spark Updated Branches: refs/heads/master 6d16b9885 -> d48803bf6 [SPARK-24324][PYTHON][FOLLOWUP] Grouped Map positional conf should have deprecation note ## What changes were proposed in this pull request? Followup to the discussion of the added conf in SPARK-24324 which allows

spark git commit: [SPARK-24552][CORE][BRANCH-2.2] Use unique id instead of attempt number for writes.

2018-06-25 Thread vanzin
Repository: spark Updated Branches: refs/heads/branch-2.2 a6459 -> 72575d0bb [SPARK-24552][CORE][BRANCH-2.2] Use unique id instead of attempt number for writes. This passes a unique attempt id to the Hadoop APIs, because attempt number is reused when stages are retried. When attempt

spark git commit: [SPARK-24552][CORE][SQL][BRANCH-2.3] Use unique id instead of attempt number for writes.

2018-06-25 Thread vanzin
Repository: spark Updated Branches: refs/heads/branch-2.3 a1e964007 -> db538b25a [SPARK-24552][CORE][SQL][BRANCH-2.3] Use unique id instead of attempt number for writes. This passes a unique attempt id instead of attempt number to v2 data sources and hadoop APIs, because attempt number is

spark git commit: [SPARK-24552][CORE][SQL] Use task ID instead of attempt number for writes.

2018-06-25 Thread vanzin
Repository: spark Updated Branches: refs/heads/master baa01c8ca -> 6d16b9885 [SPARK-24552][CORE][SQL] Use task ID instead of attempt number for writes. This passes the unique task attempt id instead of attempt number to v2 data sources because attempt number is reused when stages are
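The problem these three SPARK-24552 patches address: when a stage is retried, attempt numbers restart at 0, so two different executions of the same partition can hand an output committer identical identifiers, while a globally unique task id never collides. A hedged Java illustration of the difference (the names here are hypothetical, not Spark's API):

```java
import java.util.concurrent.atomic.AtomicLong;

public class WriteIdSketch {
    // Globally increasing counter, standing in for Spark's unique task attempt id.
    static final AtomicLong NEXT_TASK_ID = new AtomicLong();

    // Identifier a committer might use to distinguish a task's output.
    record WriteId(int partitionId, long id) {}

    // Attempt-number scheme: a retried stage reruns partition 0 with
    // attempt number 0 again, producing a colliding identifier.
    static WriteId byAttemptNumber(int partitionId, int attemptNumber) {
        return new WriteId(partitionId, attemptNumber);
    }

    // Unique-id scheme: every invocation gets a fresh id, even across retries.
    static WriteId byTaskId(int partitionId) {
        return new WriteId(partitionId, NEXT_TASK_ID.getAndIncrement());
    }
}
```

With the first scheme, a committer cannot tell a zombie attempt from the original; with the second, each write is unambiguously attributable.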

svn commit: r27727 - in /dev/spark/2.4.0-SNAPSHOT-2018_06_25_16_01-baa01c8-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-06-25 Thread pwendell
Author: pwendell Date: Mon Jun 25 23:16:04 2018 New Revision: 27727 Log: Apache Spark 2.4.0-SNAPSHOT-2018_06_25_16_01-baa01c8 docs [This commit notification would consist of 1468 parts, which exceeds the limit of 50, so it was shortened to a summary.]

spark git commit: [INFRA] Close stale PR.

2018-06-25 Thread vanzin
Repository: spark Updated Branches: refs/heads/master 5264164a6 -> baa01c8ca [INFRA] Close stale PR. Closes #21614 Project: http://git-wip-us.apache.org/repos/asf/spark/repo Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/baa01c8c Tree:

spark git commit: [SPARK-24648][SQL] SqlMetrics should be threadsafe

2018-06-25 Thread hvanhovell
Repository: spark Updated Branches: refs/heads/master 594ac4f7b -> 5264164a6 [SPARK-24648][SQL] SqlMetrics should be threadsafe Use LongAdder to make SQLMetrics thread-safe. ## What changes were proposed in this pull request? Replace += with LongAdder.add() for concurrent counting ## How
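The fix replaces unsynchronized `+=` on a plain long field with `java.util.concurrent.atomic.LongAdder`, which tolerates concurrent updates from multiple tasks without losing increments. A minimal Java sketch of the pattern (a simplified metric, not Spark's actual `SQLMetric` class):

```java
import java.util.concurrent.atomic.LongAdder;
import java.util.stream.IntStream;

public class MetricSketch {
    // Thread-safe counter: add() from many threads never loses updates,
    // unlike `value += n` on a plain long field, which is a read-modify-write race.
    private final LongAdder value = new LongAdder();

    public void add(long n) { value.add(n); }

    public long value() { return value.sum(); }

    public static void main(String[] args) {
        MetricSketch metric = new MetricSketch();
        // 1000 concurrent increments; a plain `+=` could drop some of them.
        IntStream.range(0, 1000).parallel().forEach(i -> metric.add(1));
        System.out.println(metric.value()); // 1000
    }
}
```

`LongAdder` is preferable to `AtomicLong` here because metrics are write-heavy and only summed at the end, which is exactly the workload `LongAdder` is designed for.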

svn commit: r27722 - in /dev/spark/2.4.0-SNAPSHOT-2018_06_25_12_01-594ac4f-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-06-25 Thread pwendell
Author: pwendell Date: Mon Jun 25 19:16:19 2018 New Revision: 27722 Log: Apache Spark 2.4.0-SNAPSHOT-2018_06_25_12_01-594ac4f docs [This commit notification would consist of 1468 parts, which exceeds the limit of 50, so it was shortened to a summary.]

spark git commit: [SPARK-24633][SQL] Fix codegen when split is required for arrays_zip

2018-06-25 Thread wenchen
Repository: spark Updated Branches: refs/heads/master bac50aa37 -> 594ac4f7b [SPARK-24633][SQL] Fix codegen when split is required for arrays_zip ## What changes were proposed in this pull request? In function arrays_zip, when split is required by the high number of arguments, a codegen

svn commit: r27715 - in /dev/spark/2.4.0-SNAPSHOT-2018_06_25_08_01-bac50aa-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-06-25 Thread pwendell
Author: pwendell Date: Mon Jun 25 15:16:33 2018 New Revision: 27715 Log: Apache Spark 2.4.0-SNAPSHOT-2018_06_25_08_01-bac50aa docs [This commit notification would consist of 1468 parts, which exceeds the limit of 50, so it was shortened to a summary.]

spark git commit: [SPARK-24596][SQL] Non-cascading Cache Invalidation

2018-06-25 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 8ab8ef773 -> bac50aa37 [SPARK-24596][SQL] Non-cascading Cache Invalidation ## What changes were proposed in this pull request? 1. Add parameter 'cascade' in CacheManager.uncacheQuery(). Under 'cascade=false' mode, only invalidate the
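The `cascade` flag distinguishes two invalidation modes: `cascade=true` also drops every cached plan built on top of the invalidated one, while `cascade=false` drops only the given entry and leaves dependents cached. A toy Java sketch of that distinction (a hypothetical name-based cache, not Spark's `CacheManager`, which matches on query plans):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class CacheSketch {
    // cached entry name -> names of cached entries that depend on it
    private final Map<String, Set<String>> dependents = new HashMap<>();
    private final Set<String> cached = new HashSet<>();

    public void cache(String name, String... dependsOn) {
        cached.add(name);
        for (String parent : dependsOn) {
            dependents.computeIfAbsent(parent, k -> new HashSet<>()).add(name);
        }
    }

    // cascade=false: invalidate only this entry; cascade=true: also its dependents.
    public void uncache(String name, boolean cascade) {
        cached.remove(name);
        if (cascade) {
            for (String child : dependents.getOrDefault(name, Set.of())) {
                uncache(child, true);
            }
        }
    }

    public boolean isCached(String name) { return cached.contains(name); }
}
```

Non-cascading invalidation is useful when the dependent cached data is still valid on its own and recomputing it would be wasteful.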

svn commit: r27713 - in /dev/spark/2.4.0-SNAPSHOT-2018_06_25_05_54-8ab8ef7-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-06-25 Thread pwendell
Author: pwendell Date: Mon Jun 25 13:11:47 2018 New Revision: 27713 Log: Apache Spark 2.4.0-SNAPSHOT-2018_06_25_05_54-8ab8ef7 docs [This commit notification would consist of 1468 parts, which exceeds the limit of 50, so it was shortened to a summary.]

spark git commit: Fix minor typo in docs/cloud-integration.md

2018-06-25 Thread gurwls223
Repository: spark Updated Branches: refs/heads/master 6e0596e26 -> 8ab8ef773 Fix minor typo in docs/cloud-integration.md ## What changes were proposed in this pull request? Minor typo in docs/cloud-integration.md ## How was this patch tested? This is trivial enough that it should not

svn commit: r27710 - in /dev/spark/2.4.0-SNAPSHOT-2018_06_25_00_01-6e0596e-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-06-25 Thread pwendell
Author: pwendell Date: Mon Jun 25 07:17:03 2018 New Revision: 27710 Log: Apache Spark 2.4.0-SNAPSHOT-2018_06_25_00_01-6e0596e docs [This commit notification would consist of 1468 parts, which exceeds the limit of 50, so it was shortened to a summary.]

spark git commit: [SPARK-23931][SQL][FOLLOW-UP] Make `arrays_zip` in function.scala `@scala.annotation.varargs`.

2018-06-25 Thread lixiao
Repository: spark Updated Branches: refs/heads/master f596ebe4d -> 6e0596e26 [SPARK-23931][SQL][FOLLOW-UP] Make `arrays_zip` in function.scala `@scala.annotation.varargs`. ## What changes were proposed in this pull request? This is a follow-up pr of #21045 which added `arrays_zip`. The
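The point of `@scala.annotation.varargs` is Java interoperability: a Scala method declared with a `Column*` parameter compiles to one taking a `Seq[Column]`, and the annotation makes the compiler emit an additional Java-style varargs bridge so Java callers can pass a plain argument list. A plain Java illustration of the calling convention that bridge provides (sketch only, not Spark code):

```java
public class VarargsSketch {
    // A Java varargs method: the compiler packs the arguments into an array.
    // This is the bridge signature @scala.annotation.varargs generates for
    // a Scala method declared with a trailing Column* parameter.
    static String zipNames(String... names) {
        return String.join(",", names);
    }

    public static void main(String[] args) {
        // Callable from Java without building any Seq or array by hand.
        System.out.println(zipNames("a", "b", "c")); // a,b,c
    }
}
```

Without the annotation, a Java caller would have to construct a Scala `Seq` manually, which is why Spark's public SQL functions carry it.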

spark git commit: [SPARK-24327][SQL] Verify and normalize a partition column name based on the JDBC resolved schema

2018-06-25 Thread lixiao
Repository: spark Updated Branches: refs/heads/master a5849ad9a -> f596ebe4d [SPARK-24327][SQL] Verify and normalize a partition column name based on the JDBC resolved schema ## What changes were proposed in this pull request? This pr modified JDBC datasource code to verify and normalize a
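Normalization here means matching the user-supplied partition column against the schema resolved from the JDBC source and substituting the schema's canonical spelling, so that a casing mismatch fails clearly instead of producing empty partitions. A hedged Java sketch of the idea (a hypothetical helper assuming case-insensitive resolution, which is Spark's default):

```java
import java.util.List;

public class PartitionColumnSketch {
    // Hypothetical helper: find the canonical column name in the resolved
    // schema that matches the user-supplied name case-insensitively.
    static String normalize(String userColumn, List<String> resolvedSchema) {
        return resolvedSchema.stream()
                .filter(c -> c.equalsIgnoreCase(userColumn))
                .findFirst()
                .orElseThrow(() -> new IllegalArgumentException(
                        "Partition column not found in schema: " + userColumn));
    }
}
```

For example, a user-specified partition column `ID` would be normalized to `id` if that is the spelling in the resolved schema, and an unknown column name raises an error at planning time rather than at read time.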