svn commit: r29418 - in /dev/spark/2.4.1-SNAPSHOT-2018_09_15_22_02-60af706-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s
Author: pwendell
Date: Sun Sep 16 05:18:48 2018
New Revision: 29418

Log: Apache Spark 2.4.1-SNAPSHOT-2018_09_15_22_02-60af706 docs

[This commit notification would consist of 1475 parts, which exceeds the limit of 50, so it was shortened to this summary.]

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
svn commit: r29417 - in /dev/spark/2.3.3-SNAPSHOT-2018_09_15_22_02-7b5da37-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s
Author: pwendell
Date: Sun Sep 16 05:17:33 2018
New Revision: 29417

Log: Apache Spark 2.3.3-SNAPSHOT-2018_09_15_22_02-7b5da37 docs

[This commit notification would consist of 1443 parts, which exceeds the limit of 50, so it was shortened to this summary.]
spark git commit: [SPARK-24418][FOLLOWUP][DOC] Update docs to show Scala 2.11.12
Repository: spark
Updated Branches: refs/heads/master 02c2963f8 -> bfcf74260

[SPARK-24418][FOLLOWUP][DOC] Update docs to show Scala 2.11.12

## What changes were proposed in this pull request?

SPARK-24418 upgrades Scala to 2.11.12. This PR updates the Scala version in the docs.

- https://spark.apache.org/docs/latest/quick-start.html#self-contained-applications (screenshot)
![screen1](https://user-images.githubusercontent.com/9700541/45590509-9c5f0400-b8ee-11e8-9293-e48d297db894.png)
- https://spark.apache.org/docs/latest/rdd-programming-guide.html#working-with-key-value-pairs (Scala, Java) (These are hyperlink updates.)
- https://spark.apache.org/docs/latest/streaming-flume-integration.html#configuring-flume-1 (screenshot)
![screen2](https://user-images.githubusercontent.com/9700541/45590511-a123b800-b8ee-11e8-97a5-b7f2288229c2.png)

## How was this patch tested?

Manual:

```bash
$ cd docs
$ SKIP_API=1 jekyll build
```

Closes #22431 from dongjoon-hyun/SPARK-24418.

Authored-by: Dongjoon Hyun
Signed-off-by: DB Tsai

Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/bfcf7426
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/bfcf7426
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/bfcf7426
Branch: refs/heads/master
Commit: bfcf7426057a964b3cee90089aab6c003addc4fb
Parents: 02c2963
Author: Dongjoon Hyun
Authored: Sun Sep 16 04:14:19 2018 +
Committer: DB Tsai
Committed: Sun Sep 16 04:14:19 2018 +

--
 docs/_config.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--

http://git-wip-us.apache.org/repos/asf/spark/blob/bfcf7426/docs/_config.yml
--
diff --git a/docs/_config.yml b/docs/_config.yml
index 75e54c6..dfc1a73 100644
--- a/docs/_config.yml
+++ b/docs/_config.yml
@@ -17,7 +17,7 @@ include:
 SPARK_VERSION: 2.5.0-SNAPSHOT
 SPARK_VERSION_SHORT: 2.5.0
 SCALA_BINARY_VERSION: "2.11"
-SCALA_VERSION: "2.11.8"
+SCALA_VERSION: "2.11.12"
 MESOS_VERSION: 1.0.0
 SPARK_ISSUE_TRACKER_URL: https://issues.apache.org/jira/browse/SPARK
 SPARK_GITHUB_URL: https://github.com/apache/spark
spark git commit: [SPARK-24418][FOLLOWUP][DOC] Update docs to show Scala 2.11.12
Repository: spark
Updated Branches: refs/heads/branch-2.4 b839721f3 -> 60af706b4

[SPARK-24418][FOLLOWUP][DOC] Update docs to show Scala 2.11.12

## What changes were proposed in this pull request?

SPARK-24418 upgrades Scala to 2.11.12. This PR updates the Scala version in the docs.

- https://spark.apache.org/docs/latest/quick-start.html#self-contained-applications (screenshot)
![screen1](https://user-images.githubusercontent.com/9700541/45590509-9c5f0400-b8ee-11e8-9293-e48d297db894.png)
- https://spark.apache.org/docs/latest/rdd-programming-guide.html#working-with-key-value-pairs (Scala, Java) (These are hyperlink updates.)
- https://spark.apache.org/docs/latest/streaming-flume-integration.html#configuring-flume-1 (screenshot)
![screen2](https://user-images.githubusercontent.com/9700541/45590511-a123b800-b8ee-11e8-97a5-b7f2288229c2.png)

## How was this patch tested?

Manual:

```bash
$ cd docs
$ SKIP_API=1 jekyll build
```

Closes #22431 from dongjoon-hyun/SPARK-24418.

Authored-by: Dongjoon Hyun
Signed-off-by: DB Tsai
(cherry picked from commit bfcf7426057a964b3cee90089aab6c003addc4fb)
Signed-off-by: DB Tsai

Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/60af706b
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/60af706b
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/60af706b
Branch: refs/heads/branch-2.4
Commit: 60af706b4c49fa1be1b2b1223490c98868c801c3
Parents: b839721
Author: Dongjoon Hyun
Authored: Sun Sep 16 04:14:19 2018 +
Committer: DB Tsai
Committed: Sun Sep 16 04:14:34 2018 +

--
 docs/_config.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--

http://git-wip-us.apache.org/repos/asf/spark/blob/60af706b/docs/_config.yml
--
diff --git a/docs/_config.yml b/docs/_config.yml
index 20b6495..7247377 100644
--- a/docs/_config.yml
+++ b/docs/_config.yml
@@ -17,7 +17,7 @@ include:
 SPARK_VERSION: 2.4.1-SNAPSHOT
 SPARK_VERSION_SHORT: 2.4.1
 SCALA_BINARY_VERSION: "2.11"
-SCALA_VERSION: "2.11.8"
+SCALA_VERSION: "2.11.12"
 MESOS_VERSION: 1.0.0
 SPARK_ISSUE_TRACKER_URL: https://issues.apache.org/jira/browse/SPARK
 SPARK_GITHUB_URL: https://github.com/apache/spark
[1/2] spark git commit: Preparing Spark release v2.3.2-rc6
Repository: spark
Updated Branches: refs/heads/branch-2.3 0c1e3d109 -> 7b5da37c0

Preparing Spark release v2.3.2-rc6

Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/02b51072
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/02b51072
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/02b51072
Branch: refs/heads/branch-2.3
Commit: 02b510728c31b70e6035ad541bfcdc2b59dcd79a
Parents: 0c1e3d1
Author: Saisai Shao
Authored: Sun Sep 16 11:31:17 2018 +0800
Committer: Saisai Shao
Committed: Sun Sep 16 11:31:17 2018 +0800

--
 R/pkg/DESCRIPTION | 2 +-
 assembly/pom.xml | 2 +-
 common/kvstore/pom.xml | 2 +-
 common/network-common/pom.xml | 2 +-
 common/network-shuffle/pom.xml | 2 +-
 common/network-yarn/pom.xml | 2 +-
 common/sketch/pom.xml | 2 +-
 common/tags/pom.xml | 2 +-
 common/unsafe/pom.xml | 2 +-
 core/pom.xml | 2 +-
 docs/_config.yml | 4 ++--
 examples/pom.xml | 2 +-
 external/docker-integration-tests/pom.xml | 2 +-
 external/flume-assembly/pom.xml | 2 +-
 external/flume-sink/pom.xml | 2 +-
 external/flume/pom.xml | 2 +-
 external/kafka-0-10-assembly/pom.xml | 2 +-
 external/kafka-0-10-sql/pom.xml | 2 +-
 external/kafka-0-10/pom.xml | 2 +-
 external/kafka-0-8-assembly/pom.xml | 2 +-
 external/kafka-0-8/pom.xml | 2 +-
 external/kinesis-asl-assembly/pom.xml | 2 +-
 external/kinesis-asl/pom.xml | 2 +-
 external/spark-ganglia-lgpl/pom.xml | 2 +-
 graphx/pom.xml | 2 +-
 hadoop-cloud/pom.xml | 2 +-
 launcher/pom.xml | 2 +-
 mllib-local/pom.xml | 2 +-
 mllib/pom.xml | 2 +-
 pom.xml | 2 +-
 python/pyspark/version.py | 2 +-
 repl/pom.xml | 2 +-
 resource-managers/kubernetes/core/pom.xml | 2 +-
 resource-managers/mesos/pom.xml | 2 +-
 resource-managers/yarn/pom.xml | 2 +-
 sql/catalyst/pom.xml | 2 +-
 sql/core/pom.xml | 2 +-
 sql/hive-thriftserver/pom.xml | 2 +-
 sql/hive/pom.xml | 2 +-
 streaming/pom.xml | 2 +-
 tools/pom.xml | 2 +-
 41 files changed, 42 insertions(+), 42 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/spark/blob/02b51072/R/pkg/DESCRIPTION
--
diff --git a/R/pkg/DESCRIPTION b/R/pkg/DESCRIPTION
index 6ec4966..8df2635 100644
--- a/R/pkg/DESCRIPTION
+++ b/R/pkg/DESCRIPTION
@@ -1,6 +1,6 @@
 Package: SparkR
 Type: Package
-Version: 2.3.3
+Version: 2.3.2
 Title: R Frontend for Apache Spark
 Description: Provides an R Frontend for Apache Spark.
 Authors@R: c(person("Shivaram", "Venkataraman", role = c("aut", "cre"),

http://git-wip-us.apache.org/repos/asf/spark/blob/02b51072/assembly/pom.xml
--
diff --git a/assembly/pom.xml b/assembly/pom.xml
index f8b15cc..57485fc 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -21,7 +21,7 @@
 org.apache.spark
 spark-parent_2.11
-2.3.3-SNAPSHOT
+2.3.2
 ../pom.xml

http://git-wip-us.apache.org/repos/asf/spark/blob/02b51072/common/kvstore/pom.xml
--
diff --git a/common/kvstore/pom.xml b/common/kvstore/pom.xml
index e412a47..53e58c2 100644
--- a/common/kvstore/pom.xml
+++ b/common/kvstore/pom.xml
@@ -22,7 +22,7 @@
 org.apache.spark
 spark-parent_2.11
-2.3.3-SNAPSHOT
+2.3.2
 ../../pom.xml

http://git-wip-us.apache.org/repos/asf/spark/blob/02b51072/common/network-common/pom.xml
--
diff --git a/common/network-common/pom.xml b/common/network-common/pom.xml
index d8f9a3d..d05647c 100644
--- a/common/network-common/pom.xml
+++ b/common/network-common/pom.xml
@@ -22,7 +22,7 @@
 org.apache.spark
 spark-parent_2.11
-2.3.3-SNAPSHOT
+2.3.2
 ../../pom.xml

http://git-wip-us.apache.org/repos/asf/spark/blob/02b51072/common/network-shuffle/pom.xml
--
diff --git a/common/network-shuffle/pom.xml b/common/network-shuffle/pom.xml
index a1a4f87..8d46761 100644
--- a/common/network-shuffle/pom.xml
+++ b/common/network-shuffle/pom.xml
[2/2] spark git commit: Preparing development version 2.3.3-SNAPSHOT
Preparing development version 2.3.3-SNAPSHOT

Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/7b5da37c
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/7b5da37c
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/7b5da37c
Branch: refs/heads/branch-2.3
Commit: 7b5da37c0ad08e7b2f3d536de13be63758a2ed99
Parents: 02b5107
Author: Saisai Shao
Authored: Sun Sep 16 11:31:22 2018 +0800
Committer: Saisai Shao
Committed: Sun Sep 16 11:31:22 2018 +0800

--
 R/pkg/DESCRIPTION | 2 +-
 assembly/pom.xml | 2 +-
 common/kvstore/pom.xml | 2 +-
 common/network-common/pom.xml | 2 +-
 common/network-shuffle/pom.xml | 2 +-
 common/network-yarn/pom.xml | 2 +-
 common/sketch/pom.xml | 2 +-
 common/tags/pom.xml | 2 +-
 common/unsafe/pom.xml | 2 +-
 core/pom.xml | 2 +-
 docs/_config.yml | 4 ++--
 examples/pom.xml | 2 +-
 external/docker-integration-tests/pom.xml | 2 +-
 external/flume-assembly/pom.xml | 2 +-
 external/flume-sink/pom.xml | 2 +-
 external/flume/pom.xml | 2 +-
 external/kafka-0-10-assembly/pom.xml | 2 +-
 external/kafka-0-10-sql/pom.xml | 2 +-
 external/kafka-0-10/pom.xml | 2 +-
 external/kafka-0-8-assembly/pom.xml | 2 +-
 external/kafka-0-8/pom.xml | 2 +-
 external/kinesis-asl-assembly/pom.xml | 2 +-
 external/kinesis-asl/pom.xml | 2 +-
 external/spark-ganglia-lgpl/pom.xml | 2 +-
 graphx/pom.xml | 2 +-
 hadoop-cloud/pom.xml | 2 +-
 launcher/pom.xml | 2 +-
 mllib-local/pom.xml | 2 +-
 mllib/pom.xml | 2 +-
 pom.xml | 2 +-
 python/pyspark/version.py | 2 +-
 repl/pom.xml | 2 +-
 resource-managers/kubernetes/core/pom.xml | 2 +-
 resource-managers/mesos/pom.xml | 2 +-
 resource-managers/yarn/pom.xml | 2 +-
 sql/catalyst/pom.xml | 2 +-
 sql/core/pom.xml | 2 +-
 sql/hive-thriftserver/pom.xml | 2 +-
 sql/hive/pom.xml | 2 +-
 streaming/pom.xml | 2 +-
 tools/pom.xml | 2 +-
 41 files changed, 42 insertions(+), 42 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/spark/blob/7b5da37c/R/pkg/DESCRIPTION
--
diff --git a/R/pkg/DESCRIPTION b/R/pkg/DESCRIPTION
index 8df2635..6ec4966 100644
--- a/R/pkg/DESCRIPTION
+++ b/R/pkg/DESCRIPTION
@@ -1,6 +1,6 @@
 Package: SparkR
 Type: Package
-Version: 2.3.2
+Version: 2.3.3
 Title: R Frontend for Apache Spark
 Description: Provides an R Frontend for Apache Spark.
 Authors@R: c(person("Shivaram", "Venkataraman", role = c("aut", "cre"),

http://git-wip-us.apache.org/repos/asf/spark/blob/7b5da37c/assembly/pom.xml
--
diff --git a/assembly/pom.xml b/assembly/pom.xml
index 57485fc..f8b15cc 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -21,7 +21,7 @@
 org.apache.spark
 spark-parent_2.11
-2.3.2
+2.3.3-SNAPSHOT
 ../pom.xml

http://git-wip-us.apache.org/repos/asf/spark/blob/7b5da37c/common/kvstore/pom.xml
--
diff --git a/common/kvstore/pom.xml b/common/kvstore/pom.xml
index 53e58c2..e412a47 100644
--- a/common/kvstore/pom.xml
+++ b/common/kvstore/pom.xml
@@ -22,7 +22,7 @@
 org.apache.spark
 spark-parent_2.11
-2.3.2
+2.3.3-SNAPSHOT
 ../../pom.xml

http://git-wip-us.apache.org/repos/asf/spark/blob/7b5da37c/common/network-common/pom.xml
--
diff --git a/common/network-common/pom.xml b/common/network-common/pom.xml
index d05647c..d8f9a3d 100644
--- a/common/network-common/pom.xml
+++ b/common/network-common/pom.xml
@@ -22,7 +22,7 @@
 org.apache.spark
 spark-parent_2.11
-2.3.2
+2.3.3-SNAPSHOT
 ../../pom.xml

http://git-wip-us.apache.org/repos/asf/spark/blob/7b5da37c/common/network-shuffle/pom.xml
--
diff --git a/common/network-shuffle/pom.xml b/common/network-shuffle/pom.xml
index 8d46761..a1a4f87 100644
--- a/common/network-shuffle/pom.xml
+++ b/common/network-shuffle/pom.xml
@@ -22,7 +22,7 @@
 org.apache.spark
 spark-parent_2.11
-
[spark] Git Push Summary
Repository: spark
Updated Tags: refs/tags/v2.3.2-rc6 [created] 02b510728
svn commit: r29416 - in /dev/spark/2.5.0-SNAPSHOT-2018_09_15_20_02-fefaa3c-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s
Author: pwendell
Date: Sun Sep 16 03:16:49 2018
New Revision: 29416

Log: Apache Spark 2.5.0-SNAPSHOT-2018_09_15_20_02-fefaa3c docs

[This commit notification would consist of 1484 parts, which exceeds the limit of 50, so it was shortened to this summary.]
spark git commit: [SPARK-25439][TESTS][SQL] Fixes TPCHQuerySuite datatype of customer.c_nationkey to BIGINT according to spec
Repository: spark
Updated Branches: refs/heads/master fefaa3c30 -> 02c2963f8

[SPARK-25439][TESTS][SQL] Fixes TPCHQuerySuite datatype of customer.c_nationkey to BIGINT according to spec

## What changes were proposed in this pull request?

Fixes the TPCH DDL datatype of `customer.c_nationkey` in `TPCHQuerySuite.scala` from `STRING` to `BIGINT`, per the spec and consistent with `nation.nationkey`. The rest of the keys are OK. Note that this makes previous results **non-comparable** with new runs involving the customer table.

## How was this patch tested?

Manual tests

Author: npoggi

Closes #22430 from npoggi/SPARK-25439_Fix-TPCH-customer-c_nationkey.

Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/02c2963f
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/02c2963f
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/02c2963f
Branch: refs/heads/master
Commit: 02c2963f895b9d78d7f6d9972cacec4ef55fa278
Parents: fefaa3c
Author: npoggi
Authored: Sat Sep 15 20:06:08 2018 -0700
Committer: gatorsmile
Committed: Sat Sep 15 20:06:08 2018 -0700

--
 sql/core/src/test/scala/org/apache/spark/sql/TPCHQuerySuite.scala | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--

http://git-wip-us.apache.org/repos/asf/spark/blob/02c2963f/sql/core/src/test/scala/org/apache/spark/sql/TPCHQuerySuite.scala
--
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/TPCHQuerySuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/TPCHQuerySuite.scala
index e3e7005..b32d95d 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/TPCHQuerySuite.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/TPCHQuerySuite.scala
@@ -69,7 +69,7 @@ class TPCHQuerySuite extends BenchmarkQueryTest {
     sql(
       """
         |CREATE TABLE `customer` (`c_custkey` BIGINT, `c_name` STRING, `c_address` STRING,
-        |`c_nationkey` STRING, `c_phone` STRING, `c_acctbal` DECIMAL(10,0),
+        |`c_nationkey` BIGINT, `c_phone` STRING, `c_acctbal` DECIMAL(10,0),
         |`c_mktsegment` STRING, `c_comment` STRING)
         |USING parquet
       """.stripMargin)
spark git commit: [SPARK-25439][TESTS][SQL] Fixes TPCHQuerySuite datatype of customer.c_nationkey to BIGINT according to spec
Repository: spark
Updated Branches: refs/heads/branch-2.4 b40e5feec -> b839721f3

[SPARK-25439][TESTS][SQL] Fixes TPCHQuerySuite datatype of customer.c_nationkey to BIGINT according to spec

## What changes were proposed in this pull request?

Fixes the TPCH DDL datatype of `customer.c_nationkey` in `TPCHQuerySuite.scala` from `STRING` to `BIGINT`, per the spec and consistent with `nation.nationkey`. The rest of the keys are OK. Note that this makes previous results **non-comparable** with new runs involving the customer table.

## How was this patch tested?

Manual tests

Author: npoggi

Closes #22430 from npoggi/SPARK-25439_Fix-TPCH-customer-c_nationkey.

(cherry picked from commit 02c2963f895b9d78d7f6d9972cacec4ef55fa278)
Signed-off-by: gatorsmile

Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/b839721f
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/b839721f
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/b839721f
Branch: refs/heads/branch-2.4
Commit: b839721f3cea2b9d9af73ab4fd9dad225025ec86
Parents: b40e5fe
Author: npoggi
Authored: Sat Sep 15 20:06:08 2018 -0700
Committer: gatorsmile
Committed: Sat Sep 15 20:06:26 2018 -0700

--
 sql/core/src/test/scala/org/apache/spark/sql/TPCHQuerySuite.scala | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--

http://git-wip-us.apache.org/repos/asf/spark/blob/b839721f/sql/core/src/test/scala/org/apache/spark/sql/TPCHQuerySuite.scala
--
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/TPCHQuerySuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/TPCHQuerySuite.scala
index e3e7005..b32d95d 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/TPCHQuerySuite.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/TPCHQuerySuite.scala
@@ -69,7 +69,7 @@ class TPCHQuerySuite extends BenchmarkQueryTest {
     sql(
       """
         |CREATE TABLE `customer` (`c_custkey` BIGINT, `c_name` STRING, `c_address` STRING,
-        |`c_nationkey` STRING, `c_phone` STRING, `c_acctbal` DECIMAL(10,0),
+        |`c_nationkey` BIGINT, `c_phone` STRING, `c_acctbal` DECIMAL(10,0),
         |`c_mktsegment` STRING, `c_comment` STRING)
         |USING parquet
       """.stripMargin)
svn commit: r29413 - in /dev/spark/2.4.1-SNAPSHOT-2018_09_15_18_02-b40e5fe-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s
Author: pwendell
Date: Sun Sep 16 01:16:22 2018
New Revision: 29413

Log: Apache Spark 2.4.1-SNAPSHOT-2018_09_15_18_02-b40e5fe docs

[This commit notification would consist of 1475 parts, which exceeds the limit of 50, so it was shortened to this summary.]
svn commit: r29412 - in /dev/spark/2.4.0-2018_09_16_00_38-1220ab8-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _site/api/j
Author: wenchen
Date: Sun Sep 16 00:56:09 2018
New Revision: 29412

Log: Apache Spark 2.4.0-2018_09_16_00_38-1220ab8 docs

[This commit notification would consist of 1481 parts, which exceeds the limit of 50, so it was shortened to this summary.]
[1/2] spark git commit: [SPARK-25438][SQL][TEST] Fix FilterPushdownBenchmark to use the same memory assumption
Repository: spark
Updated Branches: refs/heads/branch-2.4 ae2ca0e5d -> b40e5feec

http://git-wip-us.apache.org/repos/asf/spark/blob/b40e5fee/sql/core/benchmarks/FilterPushdownBenchmark-results.txt
--
diff --git a/sql/core/benchmarks/FilterPushdownBenchmark-results.txt b/sql/core/benchmarks/FilterPushdownBenchmark-results.txt
index a75a15c..e680ddf 100644
--- a/sql/core/benchmarks/FilterPushdownBenchmark-results.txt
+++ b/sql/core/benchmarks/FilterPushdownBenchmark-results.txt
@@ -2,737 +2,669 @@ Pushdown for many distinct value case
-Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12 on Mac OS X 10.12.6
-Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz
-
+OpenJDK 64-Bit Server VM 1.8.0_181-b13 on Linux 3.10.0-862.3.2.el7.x86_64
+Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz
 Select 0 string row (value IS NULL):  Best/Avg Time(ms)  Rate(M/s)  Per Row(ns)  Relative
-Parquet Vectorized                          8970 / 9122        1.8        570.3      1.0X
-Parquet Vectorized (Pushdown)                 471 / 491       33.4         30.0     19.0X
-Native ORC Vectorized                       7661 / 7853        2.1        487.0      1.2X
-Native ORC Vectorized (Pushdown)            1134 / 1161       13.9         72.1      7.9X
-
-Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12 on Mac OS X 10.12.6
-Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz
+Parquet Vectorized                        11405 / 11485        1.4        725.1      1.0X
+Parquet Vectorized (Pushdown)                 675 / 690       23.3         42.9     16.9X
+Native ORC Vectorized                       7127 / 7170        2.2        453.1      1.6X
+Native ORC Vectorized (Pushdown)              519 / 541       30.3         33.0     22.0X
+OpenJDK 64-Bit Server VM 1.8.0_181-b13 on Linux 3.10.0-862.3.2.el7.x86_64
+Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz
 Select 0 string row ('7864320' < value < '7864320'):  Best/Avg Time(ms)  Rate(M/s)  Per Row(ns)  Relative
-Parquet Vectorized                          9246 / 9297        1.7        587.8      1.0X
-Parquet Vectorized (Pushdown)                 480 / 488       32.8         30.5     19.3X
-Native ORC Vectorized                       7838 / 7850        2.0        498.3      1.2X
-Native ORC Vectorized (Pushdown)            1054 / 1118       14.9         67.0      8.8X
-
-Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12 on Mac OS X 10.12.6
-Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz
+Parquet Vectorized                        11457 / 11473        1.4        728.4      1.0X
+Parquet Vectorized (Pushdown)                 656 / 686       24.0         41.7     17.5X
+Native ORC Vectorized                       7328 / 7342        2.1        465.9      1.6X
+Native ORC Vectorized (Pushdown)              539 / 565       29.2         34.2     21.3X
+OpenJDK 64-Bit Server VM 1.8.0_181-b13 on Linux 3.10.0-862.3.2.el7.x86_64
+Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz
 Select 1 string row (value = '7864320'):  Best/Avg Time(ms)  Rate(M/s)  Per Row(ns)  Relative
-Parquet Vectorized                          8989 / 9100        1.7        571.5      1.0X
-Parquet Vectorized (Pushdown)                 448 / 467       35.1         28.5     20.1X
-Native ORC Vectorized                       7680 / 7768        2.0        488.3      1.2X
-Native ORC Vectorized (Pushdown)            1067 / 1118       14.7         67.8      8.4X
-
-Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12 on Mac OS X 10.12.6
-Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz
+Parquet Vectorized                        11878 / 11888        1.3        755.2      1.0X
+Parquet Vectorized (Pushdown)                 630 / 654       25.0         40.1     18.9X
+Native ORC Vectorized                       7342 / 7362        2.1        466.8      1.6X
+Native ORC Vectorized (Pushdown)              519 / 537       30.3         33.0     22.9X
+OpenJDK 64-Bit Server VM 1.8.0_181-b13 on Linux 3.10.0-862.3.2.el7.x86_64
+Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz
 Select 1 string row (value <=> '7864320'):  Best/Avg Time(ms)  Rate(M/s)  Per Row(ns)  Relative
-Parquet Vectorized                          9115 / 9266        1.7        579.5      1.0X
-Parquet Vectorized (Pushdown)                 466 / 492       33.7         29.7
[2/2] spark git commit: [SPARK-25438][SQL][TEST] Fix FilterPushdownBenchmark to use the same memory assumption
[SPARK-25438][SQL][TEST] Fix FilterPushdownBenchmark to use the same memory assumption

## What changes were proposed in this pull request?

This PR aims to fix three things in `FilterPushdownBenchmark`.

**1. Use the same memory assumption.**
The following configurations are used in ORC and Parquet.
- Memory buffer for writing
  - parquet.block.size (default: 128MB)
  - orc.stripe.size (default: 64MB)
- Compression chunk size
  - parquet.page.size (default: 1MB)
  - orc.compress.size (default: 256KB)

SPARK-24692 used 1MB, the default value of `parquet.page.size`, for `parquet.block.size` and `orc.stripe.size`, but it did not match `orc.compress.size`. As a result, the current benchmark compares ORC using 256KB of memory for compression against Parquet using 1MB. To compare correctly, we need to be consistent.

**2. Dictionary encoding should not be enforced for all cases.**
SPARK-24206 enforced dictionary encoding for all test cases. This PR restores the default behavior in general and enforces dictionary encoding only in the case of `prepareStringDictTable`.

**3. Generate test results on AWS r3.xlarge.**
SPARK-24206 generated the results on AWS in order to reproduce and compare them easily. This PR also updates the results on the same machine, for the same reason. Specifically, AWS r3.xlarge with Instance Store is used.

## How was this patch tested?

Manual. Enable the test cases and run `FilterPushdownBenchmark` on AWS r3.xlarge. It takes about 4 hours 15 minutes.

Closes #22427 from dongjoon-hyun/SPARK-25438.

Authored-by: Dongjoon Hyun
Signed-off-by: Dongjoon Hyun
(cherry picked from commit fefaa3c30df2c56046370081cb51bfe68d26976b)
Signed-off-by: Dongjoon Hyun

Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/b40e5fee
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/b40e5fee
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/b40e5fee
Branch: refs/heads/branch-2.4
Commit: b40e5feec2660891590e21807133a508cbd004d3
Parents: ae2ca0e
Author: Dongjoon Hyun
Authored: Sat Sep 15 17:48:39 2018 -0700
Committer: Dongjoon Hyun
Committed: Sat Sep 15 17:48:53 2018 -0700

--
 .../FilterPushdownBenchmark-results.txt | 912 +--
 .../benchmark/FilterPushdownBenchmark.scala | 11 +-
 2 files changed, 428 insertions(+), 495 deletions(-)
--
[1/2] spark git commit: [SPARK-25438][SQL][TEST] Fix FilterPushdownBenchmark to use the same memory assumption
Repository: spark
Updated Branches: refs/heads/master e06da95cd -> fefaa3c30

http://git-wip-us.apache.org/repos/asf/spark/blob/fefaa3c3/sql/core/benchmarks/FilterPushdownBenchmark-results.txt
--
diff --git a/sql/core/benchmarks/FilterPushdownBenchmark-results.txt b/sql/core/benchmarks/FilterPushdownBenchmark-results.txt
index a75a15c..e680ddf 100644
--- a/sql/core/benchmarks/FilterPushdownBenchmark-results.txt
+++ b/sql/core/benchmarks/FilterPushdownBenchmark-results.txt
@@ -2,737 +2,669 @@ Pushdown for many distinct value case
-Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12 on Mac OS X 10.12.6
-Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz
-
+OpenJDK 64-Bit Server VM 1.8.0_181-b13 on Linux 3.10.0-862.3.2.el7.x86_64
+Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz
 Select 0 string row (value IS NULL):  Best/Avg Time(ms)  Rate(M/s)  Per Row(ns)  Relative
-Parquet Vectorized                          8970 / 9122        1.8        570.3      1.0X
-Parquet Vectorized (Pushdown)                 471 / 491       33.4         30.0     19.0X
-Native ORC Vectorized                       7661 / 7853        2.1        487.0      1.2X
-Native ORC Vectorized (Pushdown)            1134 / 1161       13.9         72.1      7.9X
-
-Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12 on Mac OS X 10.12.6
-Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz
+Parquet Vectorized                        11405 / 11485        1.4        725.1      1.0X
+Parquet Vectorized (Pushdown)                 675 / 690       23.3         42.9     16.9X
+Native ORC Vectorized                       7127 / 7170        2.2        453.1      1.6X
+Native ORC Vectorized (Pushdown)              519 / 541       30.3         33.0     22.0X
+OpenJDK 64-Bit Server VM 1.8.0_181-b13 on Linux 3.10.0-862.3.2.el7.x86_64
+Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz
 Select 0 string row ('7864320' < value < '7864320'):  Best/Avg Time(ms)  Rate(M/s)  Per Row(ns)  Relative
-Parquet Vectorized                          9246 / 9297        1.7        587.8      1.0X
-Parquet Vectorized (Pushdown)                 480 / 488       32.8         30.5     19.3X
-Native ORC Vectorized                       7838 / 7850        2.0        498.3      1.2X
-Native ORC Vectorized (Pushdown)            1054 / 1118       14.9         67.0      8.8X
-
-Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12 on Mac OS X 10.12.6
-Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz
+Parquet Vectorized                        11457 / 11473        1.4        728.4      1.0X
+Parquet Vectorized (Pushdown)                 656 / 686       24.0         41.7     17.5X
+Native ORC Vectorized                       7328 / 7342        2.1        465.9      1.6X
+Native ORC Vectorized (Pushdown)              539 / 565       29.2         34.2     21.3X
+OpenJDK 64-Bit Server VM 1.8.0_181-b13 on Linux 3.10.0-862.3.2.el7.x86_64
+Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz
 Select 1 string row (value = '7864320'):  Best/Avg Time(ms)  Rate(M/s)  Per Row(ns)  Relative
-Parquet Vectorized                          8989 / 9100        1.7        571.5      1.0X
-Parquet Vectorized (Pushdown)                 448 / 467       35.1         28.5     20.1X
-Native ORC Vectorized                       7680 / 7768        2.0        488.3      1.2X
-Native ORC Vectorized (Pushdown)            1067 / 1118       14.7         67.8      8.4X
-
-Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12 on Mac OS X 10.12.6
-Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz
+Parquet Vectorized                        11878 / 11888        1.3        755.2      1.0X
+Parquet Vectorized (Pushdown)                 630 / 654       25.0         40.1     18.9X
+Native ORC Vectorized                       7342 / 7362        2.1        466.8      1.6X
+Native ORC Vectorized (Pushdown)              519 / 537       30.3         33.0     22.9X
+OpenJDK 64-Bit Server VM 1.8.0_181-b13 on Linux 3.10.0-862.3.2.el7.x86_64
+Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz
 Select 1 string row (value <=> '7864320'):  Best/Avg Time(ms)  Rate(M/s)  Per Row(ns)  Relative
-Parquet Vectorized                          9115 / 9266        1.7        579.5      1.0X
-Parquet Vectorized (Pushdown)                 466 / 492       33.7         29.7     19.5X
[2/2] spark git commit: [SPARK-25438][SQL][TEST] Fix FilterPushdownBenchmark to use the same memory assumption
[SPARK-25438][SQL][TEST] Fix FilterPushdownBenchmark to use the same memory assumption

## What changes were proposed in this pull request?

This PR aims to fix three things in `FilterPushdownBenchmark`.

**1. Use the same memory assumption.**
The following configurations are used in ORC and Parquet.
- Memory buffer for writing
  - parquet.block.size (default: 128MB)
  - orc.stripe.size (default: 64MB)
- Compression chunk size
  - parquet.page.size (default: 1MB)
  - orc.compress.size (default: 256KB)

SPARK-24692 used 1MB, the default value of `parquet.page.size`, for `parquet.block.size` and `orc.stripe.size`, but it did not match `orc.compress.size`. As a result, the current benchmark compares ORC using 256KB of memory for compression against Parquet using 1MB. To compare correctly, we need to be consistent.

**2. Dictionary encoding should not be enforced for all cases.**
SPARK-24206 enforced dictionary encoding for all test cases. This PR restores the default behavior in general and enforces dictionary encoding only in the case of `prepareStringDictTable`.

**3. Generate test results on AWS r3.xlarge.**
SPARK-24206 generated the results on AWS in order to reproduce and compare them easily. This PR also updates the results on the same machine, for the same reason. Specifically, AWS r3.xlarge with Instance Store is used.

## How was this patch tested?

Manual. Enable the test cases and run `FilterPushdownBenchmark` on AWS r3.xlarge. It takes about 4 hours 15 minutes.

Closes #22427 from dongjoon-hyun/SPARK-25438.

Authored-by: Dongjoon Hyun
Signed-off-by: Dongjoon Hyun

Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/fefaa3c3
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/fefaa3c3
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/fefaa3c3
Branch: refs/heads/master
Commit: fefaa3c30df2c56046370081cb51bfe68d26976b
Parents: e06da95
Author: Dongjoon Hyun
Authored: Sat Sep 15 17:48:39 2018 -0700
Committer: Dongjoon Hyun
Committed: Sat Sep 15 17:48:39 2018 -0700

--
 .../FilterPushdownBenchmark-results.txt | 912 +--
 .../benchmark/FilterPushdownBenchmark.scala | 11 +-
 2 files changed, 428 insertions(+), 495 deletions(-)
--
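The "same memory assumption" described above amounts to pinning four writer options to a single value. A minimal sketch of that idea follows; the helper function is hypothetical (not the PR's actual code), and the 1MB figure is simply `parquet.page.size`'s default, as listed in the commit message:

```scala
// Sketch: build one option map so ORC and Parquet use the same write-buffer
// and compression-chunk sizes. The four keys are the ones named in the
// commit message; the helper itself is illustrative, not from the PR.
def consistentWriterOptions(bytes: Long = 1L * 1024 * 1024): Map[String, String] = {
  val v = bytes.toString
  Map(
    // Memory buffer for writing
    "parquet.block.size" -> v, // default: 128MB
    "orc.stripe.size"    -> v, // default: 64MB
    // Compression chunk size
    "parquet.page.size"  -> v, // default: 1MB
    "orc.compress.size"  -> v  // default: 256KB (the one SPARK-24692 missed)
  )
}

// Hypothetical usage when writing a benchmark table:
//   df.write.options(consistentWriterOptions()).parquet(path)
println(consistentWriterOptions()("orc.compress.size")) // prints "1048576"
```

With every buffer and chunk set to the same size, ORC and Parquet pushdown timings are measured under the same memory budget rather than a 256KB-vs-1MB mismatch.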
spark git commit: [SPARK-25425][SQL] Extra options should override session options in DataSource V2
Repository: spark Updated Branches: refs/heads/master bb2f069cf -> e06da95cd [SPARK-25425][SQL] Extra options should override session options in DataSource V2 ## What changes were proposed in this pull request? In the PR, I propose overriding session options by extra options in DataSource V2. Extra options are more specific and set via `.option()`, and should overwrite more generic session options. Entries from seconds map overwrites entries with the same key from the first map, for example: ```Scala scala> Map("option" -> false) ++ Map("option" -> true) res0: scala.collection.immutable.Map[String,Boolean] = Map(option -> true) ``` ## How was this patch tested? Added a test for checking which option is propagated to a data source in `load()`. Closes #22413 from MaxGekk/session-options. Lead-authored-by: Maxim Gekk Co-authored-by: Dongjoon Hyun Co-authored-by: Maxim Gekk Signed-off-by: Dongjoon Hyun Project: http://git-wip-us.apache.org/repos/asf/spark/repo Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/e06da95c Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/e06da95c Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/e06da95c Branch: refs/heads/master Commit: e06da95cd9423f55cdb154a2778b0bddf7be984c Parents: bb2f069 Author: Maxim Gekk Authored: Sat Sep 15 17:24:11 2018 -0700 Committer: Dongjoon Hyun Committed: Sat Sep 15 17:24:11 2018 -0700 -- .../org/apache/spark/sql/DataFrameReader.scala | 2 +- .../org/apache/spark/sql/DataFrameWriter.scala | 8 +++-- .../sql/sources/v2/DataSourceV2Suite.scala | 35 +++- .../sources/v2/SimpleWritableDataSource.scala | 6 +++- 4 files changed, 45 insertions(+), 6 deletions(-) -- http://git-wip-us.apache.org/repos/asf/spark/blob/e06da95c/sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala -- diff --git a/sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala b/sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala index e6c2cba..fe69f25 100644 --- 
a/sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala +++ b/sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala @@ -202,7 +202,7 @@ class DataFrameReader private[sql](sparkSession: SparkSession) extends Logging { DataSourceOptions.PATHS_KEY -> objectMapper.writeValueAsString(paths.toArray) } Dataset.ofRows(sparkSession, DataSourceV2Relation.create( - ds, extraOptions.toMap ++ sessionOptions + pathsOption, + ds, sessionOptions ++ extraOptions.toMap + pathsOption, userSpecifiedSchema = userSpecifiedSchema)) } else { loadV1Source(paths: _*) http://git-wip-us.apache.org/repos/asf/spark/blob/e06da95c/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala -- diff --git a/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala b/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala index dfb8c47..188fce7 100644 --- a/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala +++ b/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala @@ -241,10 +241,12 @@ final class DataFrameWriter[T] private[sql](ds: Dataset[T]) { val source = cls.newInstance().asInstanceOf[DataSourceV2] source match { case provider: BatchWriteSupportProvider => - val options = extraOptions ++ - DataSourceV2Utils.extractSessionConfigs(source, df.sparkSession.sessionState.conf) + val sessionOptions = DataSourceV2Utils.extractSessionConfigs( +source, +df.sparkSession.sessionState.conf) + val options = sessionOptions ++ extraOptions - val relation = DataSourceV2Relation.create(source, options.toMap) + val relation = DataSourceV2Relation.create(source, options) if (mode == SaveMode.Append) { runCommand(df.sparkSession, "save") { AppendData.byName(relation, df.logicalPlan) http://git-wip-us.apache.org/repos/asf/spark/blob/e06da95c/sql/core/src/test/scala/org/apache/spark/sql/sources/v2/DataSourceV2Suite.scala -- diff --git a/sql/core/src/test/scala/org/apache/spark/sql/sources/v2/DataSourceV2Suite.scala 
b/sql/core/src/test/scala/org/apache/spark/sql/sources/v2/DataSourceV2Suite.scala index f6c3e0c..7cc8abc 100644 --- a/sql/core/src/test/scala/org/apache/spark/sql/sources/v2/DataSourceV2Suite.scala +++ b/sql/core/src/test/scala/org/apache/spark/sql/sources/v2/DataSourceV2Suite.scala @@ -17,6 +17,8 @@ package org.apache.spark.sql.sources.v2 +import java.io.File + import
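The merge order the patch establishes (`sessionOptions ++ extraOptions.toMap`) can be illustrated with plain Scala `Map` semantics: `++` keeps entries from its right-hand operand on key collisions, so placing the per-query `.option()` entries last makes them win. A minimal sketch with hypothetical option names, not Spark's actual option keys:

```scala
// Sketch of the precedence this commit establishes: session-level options
// first, per-query .option() entries last, so the latter override on key
// collisions. Keys and values here are hypothetical, for illustration only.
object OptionPrecedence {
  val sessionOptions: Map[String, String] =
    Map("compression" -> "snappy", "path" -> "/tmp/session-default")
  val extraOptions: Map[String, String] =
    Map("compression" -> "gzip") // set via .option(), more specific

  // Map.++ keeps right-hand entries when keys collide.
  val merged: Map[String, String] = sessionOptions ++ extraOptions

  def main(args: Array[String]): Unit = {
    assert(merged("compression") == "gzip")          // extra option wins
    assert(merged("path") == "/tmp/session-default") // non-conflicting session option survives
    println(merged)
  }
}
```

Before the patch the operands were reversed (`extraOptions.toMap ++ sessionOptions`), which made the generic session options win, hence the bug.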
spark git commit: [SPARK-25436] Bump master branch version to 2.5.0-SNAPSHOT
Repository: spark Updated Branches: refs/heads/master 5ebef33c8 -> bb2f069cf [SPARK-25436] Bump master branch version to 2.5.0-SNAPSHOT ## What changes were proposed in this pull request? In the dev list, we can still discuss whether the next version is 2.5.0 or 3.0.0. Let us first bump the master branch version to `2.5.0-SNAPSHOT`. ## How was this patch tested? N/A Closes #22426 from gatorsmile/bumpVersionMaster. Authored-by: gatorsmile Signed-off-by: gatorsmile Project: http://git-wip-us.apache.org/repos/asf/spark/repo Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/bb2f069c Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/bb2f069c Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/bb2f069c Branch: refs/heads/master Commit: bb2f069cf2c1e5b05362c7bbe8e0994a3e36a626 Parents: 5ebef33 Author: gatorsmile Authored: Sat Sep 15 16:24:02 2018 -0700 Committer: gatorsmile Committed: Sat Sep 15 16:24:02 2018 -0700 -- R/pkg/DESCRIPTION | 2 +- assembly/pom.xml | 2 +- common/kvstore/pom.xml | 2 +- common/network-common/pom.xml | 2 +- common/network-shuffle/pom.xml | 2 +- common/network-yarn/pom.xml| 2 +- common/sketch/pom.xml | 2 +- common/tags/pom.xml| 2 +- common/unsafe/pom.xml | 2 +- core/pom.xml | 2 +- docs/_config.yml | 4 ++-- examples/pom.xml | 2 +- external/avro/pom.xml | 2 +- external/docker-integration-tests/pom.xml | 2 +- external/flume-assembly/pom.xml| 2 +- external/flume-sink/pom.xml| 2 +- external/flume/pom.xml | 2 +- external/kafka-0-10-assembly/pom.xml | 2 +- external/kafka-0-10-sql/pom.xml| 2 +- external/kafka-0-10/pom.xml| 2 +- external/kafka-0-8-assembly/pom.xml| 2 +- external/kafka-0-8/pom.xml | 2 +- external/kinesis-asl-assembly/pom.xml | 2 +- external/kinesis-asl/pom.xml | 2 +- external/spark-ganglia-lgpl/pom.xml| 2 +- graphx/pom.xml | 2 +- hadoop-cloud/pom.xml | 2 +- launcher/pom.xml | 2 +- mllib-local/pom.xml| 2 +- mllib/pom.xml | 2 +- pom.xml| 2 +- project/MimaExcludes.scala | 5 + python/pyspark/version.py | 2 +- repl/pom.xml | 2 
+- resource-managers/kubernetes/core/pom.xml | 2 +- resource-managers/kubernetes/integration-tests/pom.xml | 2 +- resource-managers/mesos/pom.xml| 2 +- resource-managers/yarn/pom.xml | 2 +- sql/catalyst/pom.xml | 2 +- sql/core/pom.xml | 2 +- sql/hive-thriftserver/pom.xml | 2 +- sql/hive/pom.xml | 2 +- streaming/pom.xml | 2 +- tools/pom.xml | 2 +- 44 files changed, 49 insertions(+), 44 deletions(-) -- http://git-wip-us.apache.org/repos/asf/spark/blob/bb2f069c/R/pkg/DESCRIPTION -- diff --git a/R/pkg/DESCRIPTION b/R/pkg/DESCRIPTION index f52d785..96090be 100644 --- a/R/pkg/DESCRIPTION +++ b/R/pkg/DESCRIPTION @@ -1,6 +1,6 @@ Package: SparkR Type: Package -Version: 2.4.0 +Version: 2.5.0 Title: R Frontend for Apache Spark Description: Provides an R Frontend for Apache Spark. Authors@R: c(person("Shivaram", "Venkataraman", role = c("aut", "cre"), http://git-wip-us.apache.org/repos/asf/spark/blob/bb2f069c/assembly/pom.xml -- diff --git a/assembly/pom.xml b/assembly/pom.xml index 9608c96..d431d3f 100644 --- a/assembly/pom.xml +++ b/assembly/pom.xml @@ -21,7 +21,7 @@ org.apache.spark spark-parent_2.11 -2.4.0-SNAPSHOT +2.5.0-SNAPSHOT ../pom.xml http://git-wip-us.apache.org/repos/asf/spark/blob/bb2f069c/common/kvstore/pom.xml
spark git commit: [SPARK-25426][SQL] Remove the duplicate fallback logic in UnsafeProjection
Repository: spark Updated Branches: refs/heads/master be454a7ce -> 5ebef33c8 [SPARK-25426][SQL] Remove the duplicate fallback logic in UnsafeProjection ## What changes were proposed in this pull request? This pr removed the duplicate fallback logic in `UnsafeProjection`. This pr comes from #22355. ## How was this patch tested? Added tests in `CodeGeneratorWithInterpretedFallbackSuite`. Closes #22417 from maropu/SPARK-25426. Authored-by: Takeshi Yamamuro Signed-off-by: gatorsmile Project: http://git-wip-us.apache.org/repos/asf/spark/repo Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/5ebef33c Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/5ebef33c Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/5ebef33c Branch: refs/heads/master Commit: 5ebef33c85a66cdc29db2eff2343600602bbe94e Parents: be454a7 Author: Takeshi Yamamuro Authored: Sat Sep 15 16:20:45 2018 -0700 Committer: gatorsmile Committed: Sat Sep 15 16:20:45 2018 -0700 -- .../sql/catalyst/expressions/Projection.scala | 25 ++-- .../sql/execution/basicPhysicalOperators.scala | 3 +-- 2 files changed, 3 insertions(+), 25 deletions(-) -- http://git-wip-us.apache.org/repos/asf/spark/blob/5ebef33c/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Projection.scala -- diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Projection.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Projection.scala index 226a4dd..5f24170 100644 --- a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Projection.scala +++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Projection.scala @@ -17,10 +17,9 @@ package org.apache.spark.sql.catalyst.expressions -import scala.util.control.NonFatal - import org.apache.spark.sql.catalyst.InternalRow import org.apache.spark.sql.catalyst.expressions.codegen.{GenerateSafeProjection, GenerateUnsafeProjection} +import org.apache.spark.sql.internal.SQLConf 
import org.apache.spark.sql.types.{DataType, StructType} /** @@ -117,7 +116,7 @@ object UnsafeProjection extends CodeGeneratorWithInterpretedFallback[Seq[Expression], UnsafeProjection] { override protected def createCodeGeneratedObject(in: Seq[Expression]): UnsafeProjection = { -GenerateUnsafeProjection.generate(in) +GenerateUnsafeProjection.generate(in, SQLConf.get.subexpressionEliminationEnabled) } override protected def createInterpretedObject(in: Seq[Expression]): UnsafeProjection = { @@ -168,26 +167,6 @@ object UnsafeProjection def create(exprs: Seq[Expression], inputSchema: Seq[Attribute]): UnsafeProjection = { create(toBoundExprs(exprs, inputSchema)) } - - /** - * Same as other create()'s but allowing enabling/disabling subexpression elimination. - * The param `subexpressionEliminationEnabled` doesn't guarantee to work. For example, - * when fallbacking to interpreted execution, it is not supported. - */ - def create( - exprs: Seq[Expression], - inputSchema: Seq[Attribute], - subexpressionEliminationEnabled: Boolean): UnsafeProjection = { -val unsafeExprs = toUnsafeExprs(toBoundExprs(exprs, inputSchema)) -try { - GenerateUnsafeProjection.generate(unsafeExprs, subexpressionEliminationEnabled) -} catch { - case NonFatal(_) => -// We should have already seen the error message in `CodeGenerator` -logWarning("Expr codegen error and falling back to interpreter mode") -InterpretedUnsafeProjection.createProjection(unsafeExprs) -} - } } /** http://git-wip-us.apache.org/repos/asf/spark/blob/5ebef33c/sql/core/src/main/scala/org/apache/spark/sql/execution/basicPhysicalOperators.scala -- diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/basicPhysicalOperators.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/basicPhysicalOperators.scala index 9434ceb..222a1b8 100644 --- a/sql/core/src/main/scala/org/apache/spark/sql/execution/basicPhysicalOperators.scala +++ 
b/sql/core/src/main/scala/org/apache/spark/sql/execution/basicPhysicalOperators.scala @@ -68,8 +68,7 @@ case class ProjectExec(projectList: Seq[NamedExpression], child: SparkPlan) protected override def doExecute(): RDD[InternalRow] = { child.execute().mapPartitionsWithIndexInternal { (index, iter) => - val project = UnsafeProjection.create(projectList, child.output, -subexpressionEliminationEnabled) + val project = UnsafeProjection.create(projectList, child.output) project.initialize(index) iter.map(project) }
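The duplicated try/catch removed above is the "codegen with interpreted fallback" pattern that the base class centralizes. A simplified sketch of that pattern (not Spark's actual `CodeGeneratorWithInterpretedFallback`, which also consults configuration to force one path or the other): attempt the code-generated object once and, on any non-fatal failure, build the interpreted equivalent.

```scala
import scala.util.control.NonFatal

// Simplified sketch of the fallback pattern the commit consolidates: subclasses
// supply a code-generated and an interpreted constructor, and the base class
// owns the single fallback decision instead of each call site duplicating it.
abstract class WithInterpretedFallback[IN, OUT] {
  protected def createCodeGeneratedObject(in: IN): OUT
  protected def createInterpretedObject(in: IN): OUT

  def createObject(in: IN): OUT = {
    try {
      createCodeGeneratedObject(in)
    } catch {
      case NonFatal(_) => createInterpretedObject(in) // codegen failed; fall back
    }
  }
}

// Hypothetical subclass that simulates a codegen failure to exercise the fallback.
object Demo extends WithInterpretedFallback[String, String] {
  override protected def createCodeGeneratedObject(in: String): String =
    throw new RuntimeException("simulated codegen failure")
  override protected def createInterpretedObject(in: String): String =
    s"interpreted:$in"

  def main(args: Array[String]): Unit = {
    assert(createObject("proj") == "interpreted:proj") // fallback path taken
    println("fallback ok")
  }
}
```

With the logic in one place, call sites such as `ProjectExec` can simply call `UnsafeProjection.create(...)`, as the second hunk of the diff shows.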
svn commit: r29408 - /dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/
Author: wenchen Date: Sat Sep 15 15:43:22 2018 New Revision: 29408 Log: Apache Spark 2.4.0-2018_09_15_15_07-1220ab8 Added: dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/ dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/SparkR_2.4.0.tar.gz (with props) dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/SparkR_2.4.0.tar.gz.asc dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/SparkR_2.4.0.tar.gz.sha512 dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/pyspark-2.4.0.tar.gz (with props) dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/pyspark-2.4.0.tar.gz.asc dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/pyspark-2.4.0.tar.gz.sha512 dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/spark-2.4.0-bin-hadoop2.6.tgz (with props) dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/spark-2.4.0-bin-hadoop2.6.tgz.asc dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/spark-2.4.0-bin-hadoop2.6.tgz.sha512 dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/spark-2.4.0-bin-hadoop2.7.tgz (with props) dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/spark-2.4.0-bin-hadoop2.7.tgz.asc dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/spark-2.4.0-bin-hadoop2.7.tgz.sha512 dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/spark-2.4.0-bin-without-hadoop.tgz (with props) dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/spark-2.4.0-bin-without-hadoop.tgz.asc dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/spark-2.4.0-bin-without-hadoop.tgz.sha512 dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/spark-2.4.0.tgz (with props) dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/spark-2.4.0.tgz.asc dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/spark-2.4.0.tgz.sha512 Added: dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/SparkR_2.4.0.tar.gz == Binary file - no diff available. 
Propchange: dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/SparkR_2.4.0.tar.gz -- svn:mime-type = application/octet-stream Added: dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/SparkR_2.4.0.tar.gz.asc == --- dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/SparkR_2.4.0.tar.gz.asc (added) +++ dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/SparkR_2.4.0.tar.gz.asc Sat Sep 15 15:43:22 2018 @@ -0,0 +1,17 @@ +-BEGIN PGP SIGNATURE- +Version: GnuPG v1 + +iQIcBAABAgAGBQJbnSS1AAoJEB6A3KhRgyfrwQkQAI5B/SshH6Dnu7rQO5lO3xSf ++urf4SWO1ZQwsVMe7WDDZTLqyUc/flwipguDTsF+WQ7KO+2H8jvPyKkkPpn/ETQw +oTKyzsbjdMGOB4qSRIw/ftzldih+cKyfSE/JN/EsyVufPrNDnW5mli+MKF0Dx4Sb +PLOWM8uD02jwmJzlT8AWBPfO39wl8d+37lcG5u4P5MYRwHSTtBt+XJngqcBI4hUv +75rZ6sIx8JW6PXQ55ofxHjrlHRaeUZOKwgodZ7Go/ihox9Unw/pUecVrAFpgwWHJ +d9pNoBEqh6m5Br0vN0XoYxuVeMspeWdeOh6qvhKhJsFX7jbDDuxGtLqxX5QtSTSQ +1WYRbPb/xdElkB/ijlDhM+IQtv3JVCWhzSYyFPCLWlVBHCIL0syrUkCUu1wyf11S +NEPY2kwjYkaypkZBBk/KOp8kYUwZkpW2CEFlDnLUKrB8LqguGbvA9bEK+cFkePk/ +aN8LjaNENX57Ii8D2dUHtd3U0IG6AYGMCLaeUSp4dLF5ewgP1IkhiKxIonImLwnF +Z1uYskyCxQBoV+H5mMQ5LgGbguZmmVdrm9CkfCIr8fwoD3QUoLWYL/JYfzClc6Lc +Ohsmm1pbICfUmNjR8oi+RxZ8rjH66oLqMcd0dCBl42LJs9b6RHTJrU09izy9F6Mw +T8Jo5/bBhtqgjKldtk60 +=Cd0w +-END PGP SIGNATURE- Added: dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/SparkR_2.4.0.tar.gz.sha512 == --- dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/SparkR_2.4.0.tar.gz.sha512 (added) +++ dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/SparkR_2.4.0.tar.gz.sha512 Sat Sep 15 15:43:22 2018 @@ -0,0 +1,3 @@ +SparkR_2.4.0.tar.gz: B9EB1936 D1E35BDC AB839372 2C430288 DDF65734 975DC661 + 5808706F C856BC50 21A46063 7BB13526 BE8BAEAB C50143B1 + 036806E5 34C85C20 60EED600 E3A47052 Added: dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/pyspark-2.4.0.tar.gz == Binary file - no diff available. 
Propchange: dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/pyspark-2.4.0.tar.gz -- svn:mime-type = application/octet-stream Added: dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/pyspark-2.4.0.tar.gz.asc == --- dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/pyspark-2.4.0.tar.gz.asc (added) +++ dev/spark/2.4.0-2018_09_15_15_07-1220ab8-bin/pyspark-2.4.0.tar.gz.asc Sat Sep 15 15:43:22 2018 @@ -0,0 +1,17 @@ +-BEGIN PGP SIGNATURE- +Version: GnuPG v1 + +iQIcBAABAgAGBQJbnSICAAoJEB6A3KhRgyfrYNAP/RXrnmIxN9B3HbsoYbpWa1C4 +QqV6CVSssjq2EYRZlSc26pUFOIqjZhHTeGg6RmJlmkNCs0r5U70NZWaTVCDTCbZU +jpYYW/AsAo15dfdR28qsXcCgjCDwwe3H/sJcnpj95KJaBNh4l/Oio4bh+Hp8hGQI +rzHwTNabb7i/hasLQGFo21r/RCR2Vi5eLJaaL702ciJx220zk71CvCiswdA5tkMZ +gpQM98BRuOKzTFWaM+ZkjSxILFrAsBRkBCsF7QNFA3aQjN5qRSAsTnS9sSFwMa6E
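Staged artifacts like the ones listed above ship with a detached `.asc` signature and a `.sha512` digest. A hedged sketch of the checksum step, using a stand-in file so the commands run anywhere; for a real candidate, substitute the downloaded tarball (e.g. `spark-2.4.0-bin-hadoop2.7.tgz`), import the signer's key from `dev/spark/KEYS` with `gpg --import KEYS`, and check the signature with `gpg --verify <tarball>.asc <tarball>`:

```shell
#!/bin/sh
set -e
# Stand-in artifact and recorded digest, so the comparison below is runnable as-is.
printf 'demo payload' > /tmp/demo-artifact.tgz
sha512sum /tmp/demo-artifact.tgz | awk '{print $1}' > /tmp/demo-artifact.tgz.sha512

# Recompute the digest and compare it with the recorded one. Note the .sha512
# files in this listing use GPG's block format (grouped, uppercase hex), so for
# real artifacts normalize the recorded digest rather than running `sha512sum -c`.
computed=$(sha512sum /tmp/demo-artifact.tgz | awk '{print $1}')
recorded=$(cat /tmp/demo-artifact.tgz.sha512)
[ "$computed" = "$recorded" ] && echo "checksum OK"
```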
svn commit: r29407 - /dev/spark/KEYS
Author: wenchen Date: Sat Sep 15 15:04:37 2018 New Revision: 29407 Log: Update KEYS Modified: dev/spark/KEYS Modified: dev/spark/KEYS == --- dev/spark/KEYS (original) +++ dev/spark/KEYS Sat Sep 15 15:04:37 2018 @@ -704,99 +704,114 @@ kyHyHY5kPG9HfDOSahPz =SDAz -END PGP PUBLIC KEY BLOCK- -pub 4096R/89E45E06 2018-09-14 - Key fingerprint = 2DDA 5187 089E A1F5 96B3 B00C A433 0D11 89E4 5E06 +pub 4096R/518327EB 2018-09-15 uid Wenchen Fan (CODE SIGNING KEY) -sub 4096R/B9EF05F4 2018-09-14 --BEGIN PGP PUBLIC KEY BLOCK- -Version: GnuPG v1 +sub 4096R/8C274D32 2018-09-15 +-BEGIN PGP PRIVATE KEY BLOCK- -mI0EWC+l4gEEAKKxwuQeYNR66TTUqyGhfpdDONTEy1zf3Hb+kRzpbPvB5WV+dxNQ -v/yHZL4YbVzjIoqgVHzJaZlaBvBIRi9A+jyi/t6P7ePRJYqE3QKfKkUcDkACcNVS -akhgjxMiz5brb8yzbDwiUo/ojP/yyjZpWAMk4ADY2XnbcGwFXRhwiDL5ABEBAAG0 -L1RvdGFsbHkgTGVnaXQgU2lnbmluZyBLZXkgPG1hbGxvcnlAZXhhbXBsZS5vcmc+ -iNAEEwEIADoWIQTPgFhmrr2CZlQjVU9DWe1i4ITauQUCWC+l4gIbAQYLCQgHAwIH -FQoJCAsDAgQWAgMBAh4BAheAAAoJEENZ7WLghNq5IBYEAJx5lw8JI0E9uzhlfM8f -jlbPM8t8kldABxc6eSO2XbPaxVyi6IZ8QZQGrms4ZqlF2vgFJyFRKgT85uhs/9wk -b4+DCe/hCJM4kmh0vaqRen4j1oq3PZes82g9Kg9TQ46ZL3tKCXbkyH70KztW4r4T -s9R0GzoLMKCnTQDzq8KgddVjmQENBEy9tcUBCACnWQfqdrcz7tQL/iCeWDYSYPwX -pPMUMLE721HfFH7d8ErunPKPIwq1v4CrNmMjcainofbu/BfuZESSK1hBAItOk/5V -TkzCJlzkrHY9g5v+XlBMPDQC9u4AE/myw3p52+0NXsnBz+a35mxJKMl+9v9ztvue -A6EmLr2xaLf/nx4XwXUMSi1Lp8i8XpAOz/Xg1fspPMRhuDAGYDnOh4uH1jADGoqY -aPMty0yVEmzx74qvdIOvfgj16A/9LYXk67td6/JQ5LFCZmFsbahAsqi9inNgBZmn -fXO4m4lhzeqNjJAgaw7Fz2zqUmvpEheKKClgTQMWWNI9Rx1L8IKnJkuKnpzHABEB -AAG0I01pY2hhZWwgUnV0dGVyIDxtYXJ1dHRlckBnbWFpbC5jb20+iQE+BBMBAgAo -AhsjBgsJCAcDAgYVCAIJCgsEFgIDAQIeAQIXgAUCViPWjQUJEswiQAAKCRBRcWYZ -4ITauf1BB/9/atCA6ROdoqnLu3xVstGbhDX03gJFf0/B0OPgrJ2S4YofPg4xAw7H -XtgygY/+vX/DSUNFTluS3H0oL4BSwSsvvItT6fta04gbElP9JMFvxpMvlighKpgy -3D9AGjI5wi8PSXJn91dsW1XmQj7Ooh6Om6TQbd9W+WHDPHcmNhHvMgluCvC1ZT/J -3RSbSlZIbNlQsyADO9THFrkNyB2cZe8HW6a2vyP7AyMGlmXfKdHDQTG1atDzd/0m -AISKgY4CUgT1UGJuxG32N2ePwcc/gWoRHQG5MD+xm6oenhhgOdU+f0TcrLH9n6H4 
-BgA4PTZR8/aaje78diJSUazf6cRaG0eDiQE+BBMBAgAoBQJMvbXFAhsjBQkJZgGA -BgsJCAcDAgYVCAIJCgsEFgIDAQIeAQIXgAAKCRBRcWYZ4ITauTy9B/4hmPQ7CSqw -5OS5t8U5y38BlqHflqFev3llX68sDtzYfxQuQVS3fxOBoGmFQ/LSfXQYhDG6BZa4 -nDuDZEgb81Mvj0DJDl4lmyMdBoIvXhvdEPDd/rrOG+1t2+S429W9NIObKaZCs9ab -v2fnIhrtyAWxc/iNR5rJmNXozvJVGAgAeNhBSrvZqFaPJ//BklbJhfVgNwt4GgtF -l1vaU7LMaMrOWA9Hyd8dWAGuIhbYXOOFj1WZ/OhUlYXnsIe8XzaJ1y6LyVkCLhaJ -+MVtGwTXrFXRhBLQlhCYBfO25i/PGUWSvRhI8n/r+RMNOuy1HlFbexRYrtPXOLbi -O8AlFuIsX9nRuQENBEy9tcUBCADYcCgQCCF1WUSn7c/VXNvgmXzvv3lVX9WkV4Qd -pcJXitXglXdTZwVxGv3AxDuaLEwxW7rbqKRPzWNjj4xTHxt2YtUjE+mLV58AFaQQ -U3aldYG8JPr2eohMNZqp2BG2odczw5eaO5l5ETjC1nHUjDUm8us3TV3AXOajAjgu -GvpG3DKnx/gmudrMBVSAEE64kefyBmSR683zkXhw+NgbTID9XW1OSqE+fLQf0ZzQ -EojMdfYIeV8Q5sMAmU3J9AdlpyDrZaYRmiphgw8PZTMahhz/o6Bz7p6VqA4Ncmr2 -25nntIsjUUz0iK6TsaOi9KrF23Rw+IDUJeYkdVbwGqavgJG1ABEBAAGJASUEGAEC -AA8CGwwFAlYj1w0FCRLMIsMACgkQUXFmGeCE2rktYAf+NUDbT4wS4s+6qZyx8eV6 -gmW+iWFlvIlsUFijR5WToF7PD/vfl3wVaadNkHBQ0p1cIKwDMkgFvLGsHzEIJWAI -BQ8X4e1FobklGxRDsq1bbJtsk/RjmZJJ4ntZvsl82VQSXeiw/pK5XgOHy186GMNA -ZmL6fjAvqrL0WGki33jMUtDpUC9GjQtAsYoR4taDpc7wKp45TLUMoV55hIUHE83a -z5xkXFTOYoSyWgHFCPbV9qA25TWMAUOKDOUiOdrLa3Nz6fw1d4nVL/bBVzHzrOWX -sF9hsz7kPMi2ExrXimyYNHgWPwcBJwooTst76VdH4s8ghLXtLRXV2WuKcDQZa9CJ -XJkCDQRbm/nXARAA1qqnpcaXQmE6bzpNJIaVAwG8XUCkGh3YIaZb/c+Mcf1BQQ8q -Uu0QYydppBDWNGpQIvMmwojuppyStFlcQ7/saaFZ3UnrVZp82sPaWi7ldYMy//mV -ROl4YjHbyc6aWbmIMXcIK8F1hVr4L+vYNtzKTaMBwpP64/vAWGHemgYuwEWaJwCX -AFeY4NPywP6uH4izuAwA5QC2p26QjOV4o+MAPLuU5hZw78EWsVlj7whOvqLvi+SG -TMn/9wqMPtBMm98tdXev/apuQRFepcmu1GWJTRIY5RTz7fJK7lPx96s3H7/0wQxb -RrTIo21h58t17k2uNUcYJQFtDMFT+SHViB+DljqiXIBMsl4JCbEInsubF4XDxshS -rST5wSKDyDAGu4BcFBbgwH5KZgc/efCAnP95FcAWSw0ZHGZsymMpdSV2cQtRLrxv -l3PUav6QjzrXV+OJ6Z12WnUTAVbIXQpCu5YrDMGF4yldppQmzvVQ//Cywv4zCf5N -5rfnKxqw9U1y1YJCW8Ii6UqYx/i9eaCk+spaAk+9b1YwnpncyJGFswWaULpFNdRq -5DHl5+HhYSg1aaxVGzO4il9h6Z7VZV2E9Jg+iMP46V5FKSR+QftC2R8fMaV2l1eJ -blnM6lnr96nCBKYNZgKOPYaWF2znpUf0b70vYvQNxVvpJ49blKGMMpBEzTcAEQEA 
-AbQzV2VuY2hlbiBGYW4gKENPREUgU0lHTklORyBLRVkpIDx3ZW5jaGVuQGFwYWNo -ZS5vcmc+iQI4BBMBAgAiBQJbm/nXAhsDBgsJCAcDAgYVCAIJCgsEFgIDAQIeAQIX -gAAKCRCkMw0RieReBoNvD/0V1ddp1a7hwDe3El3tyyppmVOdU36lUZFXDEb8z4Yr -aq31kW1aVRrF8dVnKEMENP0TYwS0SOMhkb0vWUCw+2EvAZZClWp7VYtcbwiH54oG -C56nQiAJtQIgYhyl2p+jdRdAXQ1WRkaiherR2BDgy1RvEdYbAA6yoZnEimJ8FbLQ -Vo/C/PL3Z7OZgYH2W2B20fj9oPYBVelhoXiFOEoHdoGlOmAnh0LWk7FY4ipdQE/x -1CD+BQ1MU9tjeQO2Yz5CIraG3o+Z0I0gmzuhwe7Ud/vRvoeg55u/3hPx5w3llHtH -9eZEb0j0ISeZxXwQLjux6de+wxqaf3Q8odK/4EdiXFCTP+1LCKrJiWZO3Xs8FRMI -BlXRbGuPEClSaSscg534l1q9IB9FcP9BeVH5PpGlxP3SFrg29mEajXQgHNQUSc75 -iFPBO5c84Yqdv1FagSkcRQgmu9QeUhcRrUyQiscZc00Ffa22JkOCy4N58I9JwUcr -69ASzwPROe3On1nkXkGLVCIO3eu6CMgUC9E29Bmh9eeh1/4+/QfN6mgD4dtLosAt -uzLA/csqu6LhRzp8u8TIfOW9+2/DeWzzY14RyjILLaaDjMurOmGKCwvo/QEiq4By -XrgyOEy0KJu1MjG2fKvAZ7yo/tvUeQQn1eTsk2M1qrv7KJf6xySwpcA5FkkIwS02
svn commit: r29403 - in /dev/spark/2.4.0-SNAPSHOT-2018_09_15_00_02-be454a7-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s
Author: pwendell Date: Sat Sep 15 07:17:04 2018 New Revision: 29403 Log: Apache Spark 2.4.0-SNAPSHOT-2018_09_15_00_02-be454a7 docs [This commit notification would consist of 1484 parts, which exceeds the limit of 50 ones, so it was shortened to the summary.]