spark git commit: [SPARK-23815][CORE] Spark writer dynamic partition overwrite mode may fail to write output on multi level partition

2018-04-12 Thread wenchen
Repository: spark Updated Branches: refs/heads/branch-2.3 2995b79d6 -> dfdf1bb9b [SPARK-23815][CORE] Spark writer dynamic partition overwrite mode may fail to write output on multi level partition ## What changes were proposed in this pull request? Spark introduced a new writer mode to

spark git commit: [SPARK-23815][CORE] Spark writer dynamic partition overwrite mode may fail to write output on multi level partition

2018-04-12 Thread wenchen
Repository: spark Updated Branches: refs/heads/master 1018be44d -> 4b0703679 [SPARK-23815][CORE] Spark writer dynamic partition overwrite mode may fail to write output on multi level partition ## What changes were proposed in this pull request? Spark introduced a new writer mode to overwrite
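For context, a minimal sketch of the feature this fix hardens: dynamic partition overwrite over a multi-level (two-column) partition layout. `spark.sql.sources.partitionOverwriteMode` is the relevant Spark 2.3+ setting; the output path and column names below are illustrative, not taken from the patch.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("spark-23815-sketch")
  .master("local[2]")
  .config("spark.sql.sources.partitionOverwriteMode", "dynamic")
  .getOrCreate()
import spark.implicits._

// Two partition columns give the multi-level year=.../month=... layout on disk.
val df = Seq((2018, 4, "a"), (2018, 5, "b")).toDF("year", "month", "value")

// In dynamic mode, only the (year, month) partitions present in df are
// overwritten; other partitions under the target path are left in place.
df.write
  .mode("overwrite")
  .partitionBy("year", "month")
  .parquet("/tmp/spark-23815-sketch")
```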

spark git commit: [SPARK-23971] Should not leak Spark sessions across test suites

2018-04-12 Thread lixiao
Repository: spark Updated Branches: refs/heads/master ab7b961a4 -> 1018be44d [SPARK-23971] Should not leak Spark sessions across test suites ## What changes were proposed in this pull request? Many suites currently leak Spark sessions (sometimes with stopped SparkContexts) via the
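A hedged sketch of the cleanup pattern this is about: stop the suite's session and clear the cached active/default sessions in `afterAll`, so nothing leaks into the next suite. The suite name and the ScalaTest `FunSuite`/`BeforeAndAfterAll` wiring are illustrative, not the exact code in the patch.

```scala
import org.apache.spark.sql.SparkSession
import org.scalatest.{BeforeAndAfterAll, FunSuite}

class NoLeakExampleSuite extends FunSuite with BeforeAndAfterAll {
  private var spark: SparkSession = _

  override def beforeAll(): Unit = {
    super.beforeAll()
    spark = SparkSession.builder().master("local[2]").appName("NoLeakExampleSuite").getOrCreate()
  }

  override def afterAll(): Unit = {
    try {
      if (spark != null) {
        spark.stop()
        spark = null
      }
      // Clear the thread-local and global references a suite can otherwise leak.
      SparkSession.clearActiveSession()
      SparkSession.clearDefaultSession()
    } finally {
      super.afterAll()
    }
  }

  test("session is usable inside the suite") {
    assert(spark.range(10).count() === 10)
  }
}
```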

svn commit: r26318 - in /dev/spark/2.3.1-SNAPSHOT-2018_04_12_22_02-2995b79-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-04-12 Thread pwendell
Author: pwendell Date: Fri Apr 13 05:17:16 2018 New Revision: 26318 Log: Apache Spark 2.3.1-SNAPSHOT-2018_04_12_22_02-2995b79 docs [This commit notification would consist of 1443 parts, which exceeds the limit of 50, so it was shortened to a summary.]

spark git commit: [SPARK-23748][SS] Fix SS continuous process doesn't support SubqueryAlias issue

2018-04-12 Thread tdas
Repository: spark Updated Branches: refs/heads/branch-2.3 908c681c6 -> 2995b79d6 [SPARK-23748][SS] Fix SS continuous process doesn't support SubqueryAlias issue ## What changes were proposed in this pull request? SS continuous processing currently doesn't support processing on a temp table or

spark git commit: [SPARK-23748][SS] Fix SS continuous process doesn't support SubqueryAlias issue

2018-04-12 Thread tdas
Repository: spark Updated Branches: refs/heads/master 682002b6d -> 14291b061 [SPARK-23748][SS] Fix SS continuous process doesn't support SubqueryAlias issue ## What changes were proposed in this pull request? SS continuous processing currently doesn't support processing on a temp table or `df.as("xxx")`,
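A hedged sketch of the query shape the fix enables: a continuous-trigger stream read through `df.as(...)` or a temp view, both of which put a `SubqueryAlias` node in the plan. The rate source, console sink, trigger interval, and names are illustrative choices, not taken from the patch.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger

val spark = SparkSession.builder().appName("spark-23748-sketch").getOrCreate()

val rates = spark.readStream
  .format("rate")
  .option("rowsPerSecond", "10")
  .load()

// Both an explicit alias and a temp view introduce a SubqueryAlias in the plan.
val aliased = rates.as("r").select("r.value")
rates.createOrReplaceTempView("rates_view")
val fromView = spark.sql("SELECT value FROM rates_view")

val query = fromView.writeStream
  .format("console")
  .trigger(Trigger.Continuous("1 second"))
  .start()
```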

spark git commit: [SPARK-23867][SCHEDULER] use droppedCount in logWarning

2018-04-12 Thread wenchen
Repository: spark Updated Branches: refs/heads/branch-2.3 571269519 -> 908c681c6 [SPARK-23867][SCHEDULER] use droppedCount in logWarning ## What changes were proposed in this pull request? Get the count of dropped events and include it in the log message. ## How was this patch tested? The fix is

spark git commit: [SPARK-23867][SCHEDULER] use droppedCount in logWarning

2018-04-12 Thread wenchen
Repository: spark Updated Branches: refs/heads/master 0f93b91a7 -> 682002b6d [SPARK-23867][SCHEDULER] use droppedCount in logWarning ## What changes were proposed in this pull request? Get the count of dropped events and include it in the log message. ## How was this patch tested? The fix is
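A generic sketch of the pattern the title describes: accumulate the dropped-event count and interpolate it into the warning message rather than logging a message without it. The class and field names are illustrative, not the scheduler's `AsyncEventQueue` internals, and `println` stands in for Spark's `logWarning`.

```scala
import java.util.concurrent.atomic.AtomicLong

// A bounded queue that counts drops and reports the accumulated count when warning.
class BoundedQueueSketch(capacity: Int) {
  private val droppedEventsCounter = new AtomicLong(0L)

  def onDropped(): Unit = {
    val droppedCount = droppedEventsCounter.incrementAndGet()
    // The gist of the change: put the actual count into the message.
    println(s"Dropped $droppedCount events from the queue (capacity = $capacity).")
  }
}
```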

spark git commit: [SPARK-23751][FOLLOW-UP] fix build for scala-2.12

2018-04-12 Thread jkbradley
Repository: spark Updated Branches: refs/heads/master 0b19122d4 -> 0f93b91a7 [SPARK-23751][FOLLOW-UP] fix build for scala-2.12 ## What changes were proposed in this pull request? fix build for scala-2.12 ## How was this patch tested? Manual. Author: WeichenXu

[1/2] spark-website git commit: Update text/wording to more "modern" Spark and more consistent.

2018-04-12 Thread rxin
Repository: spark-website Updated Branches: refs/heads/asf-site 91b561749 -> 658467248 http://git-wip-us.apache.org/repos/asf/spark-website/blob/65846724/site/news/strata-exercises-now-available-online.html

[2/2] spark-website git commit: Update text/wording to more "modern" Spark and more consistent.

2018-04-12 Thread rxin
Update text/wording to more "modern" Spark and more consistent. 1. Use DataFrame examples. 2. Reduce explicit comparison with MapReduce, since the topic does not really come up. 3. More focus on analytics rather than "cluster compute". 4. Update committer affiliation. 5. Make it more clear
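As an aside, a minimal DataFrame example of the kind the updated site text favors; `people.json` and its name/age columns come from the Spark examples directory and are used here purely for illustration.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("website-example").master("local[2]").getOrCreate()

val people = spark.read.json("examples/src/main/resources/people.json")
people.filter("age > 21").groupBy("age").count().show()
```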

svn commit: r26307 - in /dev/spark/2.4.0-SNAPSHOT-2018_04_12_08_02-0b19122-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-04-12 Thread pwendell
Author: pwendell Date: Thu Apr 12 15:24:31 2018 New Revision: 26307 Log: Apache Spark 2.4.0-SNAPSHOT-2018_04_12_08_02-0b19122 docs [This commit notification would consist of 1458 parts, which exceeds the limit of 50, so it was shortened to a summary.]

spark git commit: [SPARK-23762][SQL] UTF8StringBuffer uses MemoryBlock

2018-04-12 Thread wenchen
Repository: spark Updated Branches: refs/heads/master 6a2289ecf -> 0b19122d4 [SPARK-23762][SQL] UTF8StringBuffer uses MemoryBlock ## What changes were proposed in this pull request? This PR tries to use `MemoryBlock` in `UTF8StringBuffer`. In general, there are two advantages of using
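A conceptual sketch only, not the Spark-internal `UTF8StringBuffer`/`MemoryBlock` API: a growable UTF-8 buffer whose backing storage sits behind a single memory-block abstraction instead of a bare byte array plus offset bookkeeping.

```scala
// Stand-in for a memory-block abstraction; Spark's real MemoryBlock also supports
// off-heap storage, which this sketch does not model.
final class SimpleMemoryBlock(val bytes: Array[Byte]) {
  def size: Int = bytes.length
}

final class Utf8BufferSketch(initialCapacity: Int = 16) {
  private var block = new SimpleMemoryBlock(new Array[Byte](initialCapacity))
  private var length = 0

  private def ensureCapacity(extra: Int): Unit = {
    if (length + extra > block.size) {
      val grown = new Array[Byte](math.max(block.size * 2, length + extra))
      System.arraycopy(block.bytes, 0, grown, 0, length)
      block = new SimpleMemoryBlock(grown)
    }
  }

  def append(s: String): Unit = {
    val utf8 = s.getBytes("UTF-8")
    ensureCapacity(utf8.length)
    System.arraycopy(utf8, 0, block.bytes, length, utf8.length)
    length += utf8.length
  }

  def build(): String = new String(block.bytes, 0, length, "UTF-8")
}
```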

svn commit: r26305 - in /dev/spark/2.4.0-SNAPSHOT-2018_04_12_04_02-6a2289e-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-04-12 Thread pwendell
Author: pwendell Date: Thu Apr 12 11:23:41 2018 New Revision: 26305 Log: Apache Spark 2.4.0-SNAPSHOT-2018_04_12_04_02-6a2289e docs [This commit notification would consist of 1458 parts, which exceeds the limit of 50, so it was shortened to a summary.]

svn commit: r26303 - in /dev/spark/2.3.1-SNAPSHOT-2018_04_12_02_01-5712695-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-04-12 Thread pwendell
Author: pwendell Date: Thu Apr 12 09:15:55 2018 New Revision: 26303 Log: Apache Spark 2.3.1-SNAPSHOT-2018_04_12_02_01-5712695 docs [This commit notification would consist of 1443 parts, which exceeds the limit of 50, so it was shortened to a summary.]

spark git commit: [SPARK-23962][SQL][TEST] Fix race in currentExecutionIds().

2018-04-12 Thread wenchen
Repository: spark Updated Branches: refs/heads/master e904dfaf0 -> 6a2289ecf [SPARK-23962][SQL][TEST] Fix race in currentExecutionIds(). SQLMetricsTestUtils.currentExecutionIds() was racing with the listener bus, which led to some flaky tests. We should wait until the listener bus is empty.
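A hedged sketch of the idea behind the fix: drain the listener bus before collecting SQL execution IDs so in-flight events are not missed. `listenerBus` is Spark-internal (`private[spark]`) and `SQLAppStatusStore` is likewise internal, so this only compiles inside Spark's own source tree; the 10-second timeout is illustrative.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.execution.ui.SQLAppStatusStore

def currentExecutionIdsSketch(spark: SparkSession, statusStore: SQLAppStatusStore): Set[Long] = {
  // Block until every posted event has been delivered to the listeners ...
  spark.sparkContext.listenerBus.waitUntilEmpty(10000)
  // ... and only then read the execution IDs, so none are missed by the race.
  statusStore.executionsList().map(_.executionId).toSet
}
```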

spark git commit: [SPARK-23962][SQL][TEST] Fix race in currentExecutionIds().

2018-04-12 Thread wenchen
Repository: spark Updated Branches: refs/heads/branch-2.3 03a4dfd69 -> 571269519 [SPARK-23962][SQL][TEST] Fix race in currentExecutionIds(). SQLMetricsTestUtils.currentExecutionIds() was racing with the listener bus, which led to some flaky tests. We should wait until the listener bus is