spark git commit: [SPARK-25040][SQL] Empty string for non string types should be disallowed
Repository: spark
Updated Branches: refs/heads/master c391dc65e -> 03e82e368

[SPARK-25040][SQL] Empty string for non string types should be disallowed

## What changes were proposed in this pull request?

This takes over the original PR at #22019. The original proposal was to return null for float and double types; a later, more reasonable proposal is to disallow empty strings altogether. This patch adds logic to throw an exception when an empty string is found for a non-string type.

## How was this patch tested?

Added test.

Closes #22787 from viirya/SPARK-25040.

Authored-by: Liang-Chi Hsieh
Signed-off-by: hyukjinkwon

Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/03e82e36
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/03e82e36
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/03e82e36

Branch: refs/heads/master
Commit: 03e82e36896afb43cc42c8d065ebe41a19ec62a7
Parents: c391dc6
Author: Liang-Chi Hsieh
Authored: Tue Oct 23 13:43:53 2018 +0800
Committer: hyukjinkwon
Committed: Tue Oct 23 13:43:53 2018 +0800

 docs/sql-migration-guide-upgrade.md             |  2 ++
 .../spark/sql/catalyst/json/JacksonParser.scala | 19 +-
 .../execution/datasources/json/JsonSuite.scala  | 37 +++-
 3 files changed, 48 insertions(+), 10 deletions(-)

http://git-wip-us.apache.org/repos/asf/spark/blob/03e82e36/docs/sql-migration-guide-upgrade.md

diff --git a/docs/sql-migration-guide-upgrade.md b/docs/sql-migration-guide-upgrade.md
index 68a897c..b8b9ad8 100644
--- a/docs/sql-migration-guide-upgrade.md
+++ b/docs/sql-migration-guide-upgrade.md
@@ -11,6 +11,8 @@ displayTitle: Spark SQL Upgrading Guide

   - In PySpark, when creating a `SparkSession` with `SparkSession.builder.getOrCreate()`, if there is an existing `SparkContext`, the builder was trying to update the `SparkConf` of the existing `SparkContext` with configurations specified to the builder, but the `SparkContext` is shared by all `SparkSession`s, so we should not update them. Since 3.0, the builder comes to not update the configurations. This is the same behavior as Java/Scala API in 2.3 and above. If you want to update them, you need to update them prior to creating a `SparkSession`.

+  - In Spark version 2.4 and earlier, the parser of JSON data source treats empty strings as null for some data types such as `IntegerType`. For `FloatType` and `DoubleType`, it fails on empty strings and throws exceptions. Since Spark 3.0, we disallow empty strings and will throw exceptions for data types except for `StringType` and `BinaryType`.
+
 ## Upgrading From Spark SQL 2.3 to 2.4

   - In Spark version 2.3 and earlier, the second parameter to array_contains function is implicitly promoted to the element type of first array type parameter. This type promotion can be lossy and may cause `array_contains` function to return wrong result. This problem has been addressed in 2.4 by employing a safer type promotion mechanism. This can cause some change in behavior and are illustrated in the table below.
http://git-wip-us.apache.org/repos/asf/spark/blob/03e82e36/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JacksonParser.scala

diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JacksonParser.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JacksonParser.scala
index 984979a..918c9e7 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JacksonParser.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JacksonParser.scala
@@ -168,7 +168,7 @@ class JacksonParser(
       case VALUE_NUMBER_INT | VALUE_NUMBER_FLOAT =>
         parser.getFloatValue

-      case VALUE_STRING =>
+      case VALUE_STRING if parser.getTextLength >= 1 =>
         // Special case handling for NaN and Infinity.
         parser.getText match {
           case "NaN" => Float.NaN
@@ -184,7 +184,7 @@ class JacksonParser(
       case VALUE_NUMBER_INT | VALUE_NUMBER_FLOAT =>
         parser.getDoubleValue

-      case VALUE_STRING =>
+      case VALUE_STRING if parser.getTextLength >= 1 =>
         // Special case handling for NaN and Infinity.
         parser.getText match {
           case "NaN" => Double.NaN
@@ -211,7 +211,7 @@ class JacksonParser(
     case TimestampType =>
       (parser: JsonParser) => parseJsonToken[java.lang.Long](parser, dataType) {
-        case VALUE_STRING =>
+        case VALUE_STRING if parser.getTextLength >= 1 =>
           val stringValue = parser.getText
           // This one will lose microseconds parts.
           // See
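To make the new guard concrete, here is a minimal standalone sketch of the same pattern against a raw Jackson parser. The helper name `parseFloatField` is illustrative only, not part of the Spark codebase:

```scala
import com.fasterxml.jackson.core.{JsonFactory, JsonParser, JsonToken}

// An empty VALUE_STRING no longer matches the string-conversion case and
// instead falls through to the failure branch, mirroring what JacksonParser
// now does for every non-string type.
def parseFloatField(parser: JsonParser): Float = parser.getCurrentToken match {
  case JsonToken.VALUE_NUMBER_INT | JsonToken.VALUE_NUMBER_FLOAT =>
    parser.getFloatValue
  case JsonToken.VALUE_STRING if parser.getTextLength >= 1 =>
    // Special case handling for NaN and Infinity, as in the Spark parser.
    parser.getText match {
      case "NaN"       => Float.NaN
      case "Infinity"  => Float.PositiveInfinity
      case "-Infinity" => Float.NegativeInfinity
      case other       => other.toFloat
    }
  case _ =>
    throw new RuntimeException(s"Cannot parse '${parser.getText}' as FLOAT.")
}

val p = new JsonFactory().createParser("""{"a": ""}""")
p.nextToken(); p.nextToken(); p.nextToken() // START_OBJECT, FIELD_NAME, VALUE_STRING("")
// parseFloatField(p) now throws instead of silently yielding a value.
```

At the API level, the user-visible change looks roughly like the sketch below, assuming a local `SparkSession` named `spark`. With the default PERMISSIVE mode the failed record is treated as malformed, so FAILFAST is used here to surface the exception directly:

```scala
import org.apache.spark.sql.types._
import spark.implicits._

val schema = new StructType().add("a", IntegerType)
val data = Seq("""{"a": ""}""").toDS()

// Spark 2.4 and earlier read the empty string as null for IntegerType;
// since Spark 3.0 the parse fails for all types except StringType/BinaryType.
spark.read.schema(schema).option("mode", "FAILFAST").json(data).show()
```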
svn commit: r30227 - in /dev/spark/2.4.1-SNAPSHOT-2018_10_22_22_02-4099565-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s
Author: pwendell
Date: Tue Oct 23 05:17:02 2018
New Revision: 30227

Log: Apache Spark 2.4.1-SNAPSHOT-2018_10_22_22_02-4099565 docs

[This commit notification would consist of 1477 parts, which exceeds the limit of 50 ones, so it was shortened to the summary.]
spark git commit: [SPARK-24499][SQL][DOC][FOLLOW-UP] Fix spelling in doc
Repository: spark
Updated Branches: refs/heads/branch-2.4 b9b594ade -> 4099565cd

[SPARK-24499][SQL][DOC][FOLLOW-UP] Fix spelling in doc

## What changes were proposed in this pull request?

This PR replaces `turing` with `tuning` in files and in a file name. Currently, `Turing` is shown in the left-side menu. [This page](https://dist.apache.org/repos/dist/dev/spark/v2.4.0-rc4-docs/_site/sql-performance-turing.html) is one example.

![image](https://user-images.githubusercontent.com/1315079/47332714-20a96180-d6bb-11e8-9a5a-0a8dad292626.png)

## How was this patch tested?

`grep -rin turing docs` && `find docs -name "*turing*"`

Closes #22800 from kiszk/SPARK-24499-follow.

Authored-by: Kazuaki Ishizaki
Signed-off-by: Wenchen Fan
(cherry picked from commit c391dc65efb21357bdd80b28fba3851773759bc6)
Signed-off-by: Wenchen Fan

Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/4099565c
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/4099565c
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/4099565c

Branch: refs/heads/branch-2.4
Commit: 4099565c887640b60e9c57d9dc7989e0c3ed
Parents: b9b594a
Author: Kazuaki Ishizaki
Authored: Tue Oct 23 12:19:31 2018 +0800
Committer: Wenchen Fan
Committed: Tue Oct 23 12:20:00 2018 +0800

 docs/_data/menu-sql.yaml            |  10 +-
 docs/sql-migration-guide-upgrade.md |   2 +-
 docs/sql-performance-tuning.md      | 151 +++
 docs/sql-performance-turing.md      | 151 ---
 4 files changed, 157 insertions(+), 157 deletions(-)

http://git-wip-us.apache.org/repos/asf/spark/blob/4099565c/docs/_data/menu-sql.yaml

diff --git a/docs/_data/menu-sql.yaml b/docs/_data/menu-sql.yaml
index 6718763..cd065ea 100644
--- a/docs/_data/menu-sql.yaml
+++ b/docs/_data/menu-sql.yaml
@@ -36,15 +36,15 @@
     url: sql-data-sources-avro.html
   - text: Troubleshooting
     url: sql-data-sources-troubleshooting.html
-- text: Performance Turing
-  url: sql-performance-turing.html
+- text: Performance Tuning
+  url: sql-performance-tuning.html
   subitems:
   - text: Caching Data In Memory
-    url: sql-performance-turing.html#caching-data-in-memory
+    url: sql-performance-tuning.html#caching-data-in-memory
   - text: Other Configuration Options
-    url: sql-performance-turing.html#other-configuration-options
+    url: sql-performance-tuning.html#other-configuration-options
   - text: Broadcast Hint for SQL Queries
-    url: sql-performance-turing.html#broadcast-hint-for-sql-queries
+    url: sql-performance-tuning.html#broadcast-hint-for-sql-queries
 - text: Distributed SQL Engine
   url: sql-distributed-sql-engine.html
   subitems:

http://git-wip-us.apache.org/repos/asf/spark/blob/4099565c/docs/sql-migration-guide-upgrade.md

diff --git a/docs/sql-migration-guide-upgrade.md b/docs/sql-migration-guide-upgrade.md
index 062e07b..af561f2 100644
--- a/docs/sql-migration-guide-upgrade.md
+++ b/docs/sql-migration-guide-upgrade.md
@@ -270,7 +270,7 @@ displayTitle: Spark SQL Upgrading Guide

   - In PySpark, `na.fill()` or `fillna` also accepts boolean and replaces nulls with booleans. In prior Spark versions, PySpark just ignores it and returns the original Dataset/DataFrame.

-  - Since Spark 2.3, when either broadcast hash join or broadcast nested loop join is applicable, we prefer to broadcasting the table that is explicitly specified in a broadcast hint. For details, see the section [Broadcast Hint](sql-performance-turing.html#broadcast-hint-for-sql-queries) and [SPARK-22489](https://issues.apache.org/jira/browse/SPARK-22489).
+  - Since Spark 2.3, when either broadcast hash join or broadcast nested loop join is applicable, we prefer to broadcasting the table that is explicitly specified in a broadcast hint. For details, see the section [Broadcast Hint](sql-performance-tuning.html#broadcast-hint-for-sql-queries) and [SPARK-22489](https://issues.apache.org/jira/browse/SPARK-22489).

   - Since Spark 2.3, when all inputs are binary, `functions.concat()` returns an output as binary. Otherwise, it returns as a string. Until Spark 2.3, it always returns as a string despite of input types. To keep the old behavior, set `spark.sql.function.concatBinaryAsString` to `true`.

http://git-wip-us.apache.org/repos/asf/spark/blob/4099565c/docs/sql-performance-tuning.md

diff --git a/docs/sql-performance-tuning.md b/docs/sql-performance-tuning.md
new file mode 100644
index 000..7c7c4a8
--- /dev/null
+++ b/docs/sql-performance-tuning.md
@@ -0,0 +1,151 @@
+---
+layout:
spark git commit: [SPARK-24499][SQL][DOC][FOLLOW-UP] Fix spelling in doc
Repository: spark
Updated Branches: refs/heads/master 3b4556745 -> c391dc65e

[SPARK-24499][SQL][DOC][FOLLOW-UP] Fix spelling in doc

## What changes were proposed in this pull request?

This PR replaces `turing` with `tuning` in files and in a file name. Currently, `Turing` is shown in the left-side menu. [This page](https://dist.apache.org/repos/dist/dev/spark/v2.4.0-rc4-docs/_site/sql-performance-turing.html) is one example.

![image](https://user-images.githubusercontent.com/1315079/47332714-20a96180-d6bb-11e8-9a5a-0a8dad292626.png)

## How was this patch tested?

`grep -rin turing docs` && `find docs -name "*turing*"`

Closes #22800 from kiszk/SPARK-24499-follow.

Authored-by: Kazuaki Ishizaki
Signed-off-by: Wenchen Fan

Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/c391dc65
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/c391dc65
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/c391dc65

Branch: refs/heads/master
Commit: c391dc65efb21357bdd80b28fba3851773759bc6
Parents: 3b45567
Author: Kazuaki Ishizaki
Authored: Tue Oct 23 12:19:31 2018 +0800
Committer: Wenchen Fan
Committed: Tue Oct 23 12:19:31 2018 +0800

 docs/_data/menu-sql.yaml            |  10 +-
 docs/sql-migration-guide-upgrade.md |   2 +-
 docs/sql-performance-tuning.md      | 151 +++
 docs/sql-performance-turing.md      | 151 ---
 4 files changed, 157 insertions(+), 157 deletions(-)

http://git-wip-us.apache.org/repos/asf/spark/blob/c391dc65/docs/_data/menu-sql.yaml

diff --git a/docs/_data/menu-sql.yaml b/docs/_data/menu-sql.yaml
index 6718763..cd065ea 100644
--- a/docs/_data/menu-sql.yaml
+++ b/docs/_data/menu-sql.yaml
@@ -36,15 +36,15 @@
     url: sql-data-sources-avro.html
   - text: Troubleshooting
     url: sql-data-sources-troubleshooting.html
-- text: Performance Turing
-  url: sql-performance-turing.html
+- text: Performance Tuning
+  url: sql-performance-tuning.html
   subitems:
   - text: Caching Data In Memory
-    url: sql-performance-turing.html#caching-data-in-memory
+    url: sql-performance-tuning.html#caching-data-in-memory
   - text: Other Configuration Options
-    url: sql-performance-turing.html#other-configuration-options
+    url: sql-performance-tuning.html#other-configuration-options
   - text: Broadcast Hint for SQL Queries
-    url: sql-performance-turing.html#broadcast-hint-for-sql-queries
+    url: sql-performance-tuning.html#broadcast-hint-for-sql-queries
 - text: Distributed SQL Engine
   url: sql-distributed-sql-engine.html
   subitems:

http://git-wip-us.apache.org/repos/asf/spark/blob/c391dc65/docs/sql-migration-guide-upgrade.md

diff --git a/docs/sql-migration-guide-upgrade.md b/docs/sql-migration-guide-upgrade.md
index 7871a49..68a897c 100644
--- a/docs/sql-migration-guide-upgrade.md
+++ b/docs/sql-migration-guide-upgrade.md
@@ -274,7 +274,7 @@ displayTitle: Spark SQL Upgrading Guide

   - In PySpark, `na.fill()` or `fillna` also accepts boolean and replaces nulls with booleans. In prior Spark versions, PySpark just ignores it and returns the original Dataset/DataFrame.

-  - Since Spark 2.3, when either broadcast hash join or broadcast nested loop join is applicable, we prefer to broadcasting the table that is explicitly specified in a broadcast hint. For details, see the section [Broadcast Hint](sql-performance-turing.html#broadcast-hint-for-sql-queries) and [SPARK-22489](https://issues.apache.org/jira/browse/SPARK-22489).
+  - Since Spark 2.3, when either broadcast hash join or broadcast nested loop join is applicable, we prefer to broadcasting the table that is explicitly specified in a broadcast hint. For details, see the section [Broadcast Hint](sql-performance-tuning.html#broadcast-hint-for-sql-queries) and [SPARK-22489](https://issues.apache.org/jira/browse/SPARK-22489).

   - Since Spark 2.3, when all inputs are binary, `functions.concat()` returns an output as binary. Otherwise, it returns as a string. Until Spark 2.3, it always returns as a string despite of input types. To keep the old behavior, set `spark.sql.function.concatBinaryAsString` to `true`.

http://git-wip-us.apache.org/repos/asf/spark/blob/c391dc65/docs/sql-performance-tuning.md

diff --git a/docs/sql-performance-tuning.md b/docs/sql-performance-tuning.md
new file mode 100644
index 000..7c7c4a8
--- /dev/null
+++ b/docs/sql-performance-tuning.md
@@ -0,0 +1,151 @@
+---
+layout: global
+title: Performance Tuning
+displayTitle: Performance Tuning
+---
+
+* Table of contents
+{:toc}
+
+For
svn commit: r30225 - in /dev/spark/3.0.0-SNAPSHOT-2018_10_22_20_02-3b45567-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s
Author: pwendell
Date: Tue Oct 23 03:17:10 2018
New Revision: 30225

Log: Apache Spark 3.0.0-SNAPSHOT-2018_10_22_20_02-3b45567 docs

[This commit notification would consist of 1483 parts, which exceeds the limit of 50 ones, so it was shortened to the summary.]
svn commit: r30224 - in /dev/spark/2.3.3-SNAPSHOT-2018_10_22_18_02-8fbf3ee-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s
Author: pwendell
Date: Tue Oct 23 01:16:40 2018
New Revision: 30224

Log: Apache Spark 2.3.3-SNAPSHOT-2018_10_22_18_02-8fbf3ee docs

[This commit notification would consist of 1443 parts, which exceeds the limit of 50 ones, so it was shortened to the summary.]
spark git commit: [SPARK-25795][R][EXAMPLE] Fix CSV SparkR SQL Example
Repository: spark
Updated Branches: refs/heads/branch-2.4 f33d888a2 -> b9b594ade

[SPARK-25795][R][EXAMPLE] Fix CSV SparkR SQL Example

## What changes were proposed in this pull request?

This PR aims to fix the following SparkR example, which is broken in Spark 2.3.0 ~ 2.4.0:

```r
> df <- read.df("examples/src/main/resources/people.csv", "csv")
> namesAndAges <- select(df, "name", "age")
...
Caused by: org.apache.spark.sql.AnalysisException: cannot resolve '`name`' given input columns: [_c0];;
'Project ['name, 'age]
+- AnalysisBarrier
      +- Relation[_c0#97] csv
```

- https://dist.apache.org/repos/dist/dev/spark/v2.4.0-rc3-docs/_site/sql-programming-guide.html#manually-specifying-options
- http://spark.apache.org/docs/2.3.2/sql-programming-guide.html#manually-specifying-options
- http://spark.apache.org/docs/2.3.1/sql-programming-guide.html#manually-specifying-options
- http://spark.apache.org/docs/2.3.0/sql-programming-guide.html#manually-specifying-options

## How was this patch tested?

Manual test in SparkR. (Please note that `RSparkSQLExample.R` fails at the last JDBC example.)

```r
> df <- read.df("examples/src/main/resources/people.csv", "csv", sep=";", inferSchema=T, header=T)
> namesAndAges <- select(df, "name", "age")
```

Closes #22791 from dongjoon-hyun/SPARK-25795.

Authored-by: Dongjoon Hyun
Signed-off-by: Dongjoon Hyun
(cherry picked from commit 3b4556745e90a13f4ae7ebae4ab682617de25c38)
Signed-off-by: Dongjoon Hyun

Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/b9b594ad
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/b9b594ad
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/b9b594ad

Branch: refs/heads/branch-2.4
Commit: b9b594ade9106ad96adb413c7a27ec7b4f8a849a
Parents: f33d888
Author: Dongjoon Hyun
Authored: Mon Oct 22 16:34:33 2018 -0700
Committer: Dongjoon Hyun
Committed: Mon Oct 22 16:34:48 2018 -0700

 examples/src/main/r/RSparkSQLExample.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

http://git-wip-us.apache.org/repos/asf/spark/blob/b9b594ad/examples/src/main/r/RSparkSQLExample.R

diff --git a/examples/src/main/r/RSparkSQLExample.R b/examples/src/main/r/RSparkSQLExample.R
index a5ed723..effba94 100644
--- a/examples/src/main/r/RSparkSQLExample.R
+++ b/examples/src/main/r/RSparkSQLExample.R
@@ -114,7 +114,7 @@ write.df(namesAndAges, "namesAndAges.parquet", "parquet")

 # $example on:manual_load_options_csv$
-df <- read.df("examples/src/main/resources/people.csv", "csv")
+df <- read.df("examples/src/main/resources/people.csv", "csv", sep=";", inferSchema=T, header=T)
 namesAndAges <- select(df, "name", "age")
 # $example off:manual_load_options_csv$
spark git commit: [SPARK-25795][R][EXAMPLE] Fix CSV SparkR SQL Example
Repository: spark
Updated Branches: refs/heads/branch-2.3 d7a35877b -> 8fbf3ee91

[SPARK-25795][R][EXAMPLE] Fix CSV SparkR SQL Example

## What changes were proposed in this pull request?

This PR aims to fix the following SparkR example, which is broken in Spark 2.3.0 ~ 2.4.0:

```r
> df <- read.df("examples/src/main/resources/people.csv", "csv")
> namesAndAges <- select(df, "name", "age")
...
Caused by: org.apache.spark.sql.AnalysisException: cannot resolve '`name`' given input columns: [_c0];;
'Project ['name, 'age]
+- AnalysisBarrier
      +- Relation[_c0#97] csv
```

- https://dist.apache.org/repos/dist/dev/spark/v2.4.0-rc3-docs/_site/sql-programming-guide.html#manually-specifying-options
- http://spark.apache.org/docs/2.3.2/sql-programming-guide.html#manually-specifying-options
- http://spark.apache.org/docs/2.3.1/sql-programming-guide.html#manually-specifying-options
- http://spark.apache.org/docs/2.3.0/sql-programming-guide.html#manually-specifying-options

## How was this patch tested?

Manual test in SparkR. (Please note that `RSparkSQLExample.R` fails at the last JDBC example.)

```r
> df <- read.df("examples/src/main/resources/people.csv", "csv", sep=";", inferSchema=T, header=T)
> namesAndAges <- select(df, "name", "age")
```

Closes #22791 from dongjoon-hyun/SPARK-25795.

Authored-by: Dongjoon Hyun
Signed-off-by: Dongjoon Hyun
(cherry picked from commit 3b4556745e90a13f4ae7ebae4ab682617de25c38)
Signed-off-by: Dongjoon Hyun

Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/8fbf3ee9
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/8fbf3ee9
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/8fbf3ee9

Branch: refs/heads/branch-2.3
Commit: 8fbf3ee91703fc714f3f01237485479562915933
Parents: d7a3587
Author: Dongjoon Hyun
Authored: Mon Oct 22 16:34:33 2018 -0700
Committer: Dongjoon Hyun
Committed: Mon Oct 22 16:35:05 2018 -0700

 examples/src/main/r/RSparkSQLExample.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

http://git-wip-us.apache.org/repos/asf/spark/blob/8fbf3ee9/examples/src/main/r/RSparkSQLExample.R

diff --git a/examples/src/main/r/RSparkSQLExample.R b/examples/src/main/r/RSparkSQLExample.R
index a5ed723..effba94 100644
--- a/examples/src/main/r/RSparkSQLExample.R
+++ b/examples/src/main/r/RSparkSQLExample.R
@@ -114,7 +114,7 @@ write.df(namesAndAges, "namesAndAges.parquet", "parquet")

 # $example on:manual_load_options_csv$
-df <- read.df("examples/src/main/resources/people.csv", "csv")
+df <- read.df("examples/src/main/resources/people.csv", "csv", sep=";", inferSchema=T, header=T)
 namesAndAges <- select(df, "name", "age")
 # $example off:manual_load_options_csv$
spark git commit: [SPARK-25795][R][EXAMPLE] Fix CSV SparkR SQL Example
Repository: spark
Updated Branches: refs/heads/master ff9ede092 -> 3b4556745

[SPARK-25795][R][EXAMPLE] Fix CSV SparkR SQL Example

## What changes were proposed in this pull request?

This PR aims to fix the following SparkR example, which is broken in Spark 2.3.0 ~ 2.4.0:

```r
> df <- read.df("examples/src/main/resources/people.csv", "csv")
> namesAndAges <- select(df, "name", "age")
...
Caused by: org.apache.spark.sql.AnalysisException: cannot resolve '`name`' given input columns: [_c0];;
'Project ['name, 'age]
+- AnalysisBarrier
      +- Relation[_c0#97] csv
```

- https://dist.apache.org/repos/dist/dev/spark/v2.4.0-rc3-docs/_site/sql-programming-guide.html#manually-specifying-options
- http://spark.apache.org/docs/2.3.2/sql-programming-guide.html#manually-specifying-options
- http://spark.apache.org/docs/2.3.1/sql-programming-guide.html#manually-specifying-options
- http://spark.apache.org/docs/2.3.0/sql-programming-guide.html#manually-specifying-options

## How was this patch tested?

Manual test in SparkR. (Please note that `RSparkSQLExample.R` fails at the last JDBC example.)

```r
> df <- read.df("examples/src/main/resources/people.csv", "csv", sep=";", inferSchema=T, header=T)
> namesAndAges <- select(df, "name", "age")
```

Closes #22791 from dongjoon-hyun/SPARK-25795.

Authored-by: Dongjoon Hyun
Signed-off-by: Dongjoon Hyun

Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/3b455674
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/3b455674
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/3b455674

Branch: refs/heads/master
Commit: 3b4556745e90a13f4ae7ebae4ab682617de25c38
Parents: ff9ede0
Author: Dongjoon Hyun
Authored: Mon Oct 22 16:34:33 2018 -0700
Committer: Dongjoon Hyun
Committed: Mon Oct 22 16:34:33 2018 -0700

 examples/src/main/r/RSparkSQLExample.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

http://git-wip-us.apache.org/repos/asf/spark/blob/3b455674/examples/src/main/r/RSparkSQLExample.R

diff --git a/examples/src/main/r/RSparkSQLExample.R b/examples/src/main/r/RSparkSQLExample.R
index a5ed723..effba94 100644
--- a/examples/src/main/r/RSparkSQLExample.R
+++ b/examples/src/main/r/RSparkSQLExample.R
@@ -114,7 +114,7 @@ write.df(namesAndAges, "namesAndAges.parquet", "parquet")

 # $example on:manual_load_options_csv$
-df <- read.df("examples/src/main/resources/people.csv", "csv")
+df <- read.df("examples/src/main/resources/people.csv", "csv", sep=";", inferSchema=T, header=T)
 namesAndAges <- select(df, "name", "age")
 # $example off:manual_load_options_csv$
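For comparison, the equivalent options in the Scala `DataFrameReader`; a sketch only, not part of this patch, assuming a local `SparkSession` named `spark`:

```scala
// people.csv is semicolon-delimited with a header row; without these options
// the whole line lands in a single string column `_c0`, which is why
// select("name", "age") in the old example failed to resolve.
val df = spark.read
  .option("sep", ";")
  .option("inferSchema", "true")
  .option("header", "true")
  .csv("examples/src/main/resources/people.csv")

df.select("name", "age").show()
```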
svn commit: r30223 - in /dev/spark/3.0.0-SNAPSHOT-2018_10_22_16_07-ff9ede0-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s
Author: pwendell
Date: Mon Oct 22 23:21:36 2018
New Revision: 30223

Log: Apache Spark 3.0.0-SNAPSHOT-2018_10_22_16_07-ff9ede0 docs

[This commit notification would consist of 1483 parts, which exceeds the limit of 50 ones, so it was shortened to the summary.]
spark git commit: [SPARK-25627][TEST] Reduce test time for ContinuousStressSuite
Repository: spark
Updated Branches: refs/heads/master bd66c7302 -> ff9ede092

[SPARK-25627][TEST] Reduce test time for ContinuousStressSuite

## What changes were proposed in this pull request?

This reduces the test time for ContinuousStressSuite from 8 minutes 13 seconds to 43 seconds. The approach is to reduce the number of triggers and epochs to wait for, and to reduce the expected row counts accordingly.

## How was this patch tested?

Existing tests.

Closes #22662 from viirya/SPARK-25627.

Authored-by: Liang-Chi Hsieh
Signed-off-by: Sean Owen

Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/ff9ede09
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/ff9ede09
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/ff9ede09

Branch: refs/heads/master
Commit: ff9ede0929136b5af1b85f7917216e8ed7294838
Parents: bd66c73
Author: Liang-Chi Hsieh
Authored: Mon Oct 22 13:18:29 2018 -0500
Committer: Sean Owen
Committed: Mon Oct 22 13:18:29 2018 -0500

 .../streaming/continuous/ContinuousSuite.scala | 36 ++--
 1 file changed, 18 insertions(+), 18 deletions(-)

http://git-wip-us.apache.org/repos/asf/spark/blob/ff9ede09/sql/core/src/test/scala/org/apache/spark/sql/streaming/continuous/ContinuousSuite.scala

diff --git a/sql/core/src/test/scala/org/apache/spark/sql/streaming/continuous/ContinuousSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/streaming/continuous/ContinuousSuite.scala
index 3d21bc6..93eae29 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/streaming/continuous/ContinuousSuite.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/streaming/continuous/ContinuousSuite.scala
@@ -241,10 +241,10 @@ class ContinuousStressSuite extends ContinuousSuiteBase {
     testStream(df, useV2Sink = true)(
       StartStream(longContinuousTrigger),
       AwaitEpoch(0),
-      Execute(waitForRateSourceTriggers(_, 201)),
+      Execute(waitForRateSourceTriggers(_, 10)),
       IncrementEpoch(),
       StopStream,
-      CheckAnswerRowsContains(scala.Range(0, 25000).map(Row(_)))
+      CheckAnswerRowsContains(scala.Range(0, 2500).map(Row(_)))
     )
   }

@@ -259,10 +259,10 @@ class ContinuousStressSuite extends ContinuousSuiteBase {
     testStream(df, useV2Sink = true)(
       StartStream(Trigger.Continuous(2012)),
       AwaitEpoch(0),
-      Execute(waitForRateSourceTriggers(_, 201)),
+      Execute(waitForRateSourceTriggers(_, 10)),
       IncrementEpoch(),
       StopStream,
-      CheckAnswerRowsContains(scala.Range(0, 25000).map(Row(_))))
+      CheckAnswerRowsContains(scala.Range(0, 2500).map(Row(_))))
   }

   test("restarts") {
@@ -274,27 +274,27 @@ class ContinuousStressSuite extends ContinuousSuiteBase {
       .select('value)

     testStream(df, useV2Sink = true)(
-      StartStream(Trigger.Continuous(2012)),
-      AwaitEpoch(10),
+      StartStream(Trigger.Continuous(1012)),
+      AwaitEpoch(2),
       StopStream,
-      StartStream(Trigger.Continuous(2012)),
-      AwaitEpoch(20),
+      StartStream(Trigger.Continuous(1012)),
+      AwaitEpoch(4),
       StopStream,
-      StartStream(Trigger.Continuous(2012)),
-      AwaitEpoch(21),
+      StartStream(Trigger.Continuous(1012)),
+      AwaitEpoch(5),
       StopStream,
-      StartStream(Trigger.Continuous(2012)),
-      AwaitEpoch(22),
+      StartStream(Trigger.Continuous(1012)),
+      AwaitEpoch(6),
       StopStream,
-      StartStream(Trigger.Continuous(2012)),
-      AwaitEpoch(25),
+      StartStream(Trigger.Continuous(1012)),
+      AwaitEpoch(8),
       StopStream,
-      StartStream(Trigger.Continuous(2012)),
+      StartStream(Trigger.Continuous(1012)),
       StopStream,
-      StartStream(Trigger.Continuous(2012)),
-      AwaitEpoch(50),
+      StartStream(Trigger.Continuous(1012)),
+      AwaitEpoch(15),
       StopStream,
-      CheckAnswerRowsContains(scala.Range(0, 25000).map(Row(_))))
+      CheckAnswerRowsContains(scala.Range(0, 2500).map(Row(_))))
   }
 }
spark git commit: [SPARK-25771][PYSPARK] Fix improper synchronization in PythonWorkerFactory
Repository: spark
Updated Branches: refs/heads/master 81a305dd0 -> bd66c7302

[SPARK-25771][PYSPARK] Fix improper synchronization in PythonWorkerFactory

## What changes were proposed in this pull request?

Fix the following issues in PythonWorkerFactory:

1. MonitorThread.run uses a wrong lock.
2. `createSimpleWorker` misses `synchronized` when updating `simpleWorkers`.

Other changes simply improve the code style to make the thread-safety contract clear.

## How was this patch tested?

Jenkins

Closes #22770 from zsxwing/pwf.

Authored-by: Shixiong Zhu
Signed-off-by: Shixiong Zhu

Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/bd66c730
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/bd66c730
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/bd66c730

Branch: refs/heads/master
Commit: bd66c73025c0b947be230178a737fd53812b78dd
Parents: 81a305d
Author: Shixiong Zhu
Authored: Mon Oct 22 10:07:11 2018 -0700
Committer: Shixiong Zhu
Committed: Mon Oct 22 10:07:11 2018 -0700

 .../spark/api/python/PythonWorkerFactory.scala | 75 +++-
 1 file changed, 43 insertions(+), 32 deletions(-)

http://git-wip-us.apache.org/repos/asf/spark/blob/bd66c730/core/src/main/scala/org/apache/spark/api/python/PythonWorkerFactory.scala

diff --git a/core/src/main/scala/org/apache/spark/api/python/PythonWorkerFactory.scala b/core/src/main/scala/org/apache/spark/api/python/PythonWorkerFactory.scala
index 6afa37a..1f2f503 100644
--- a/core/src/main/scala/org/apache/spark/api/python/PythonWorkerFactory.scala
+++ b/core/src/main/scala/org/apache/spark/api/python/PythonWorkerFactory.scala
@@ -21,6 +21,7 @@ import java.io.{DataInputStream, DataOutputStream, EOFException, InputStream, OutputStream}
 import java.net.{InetAddress, ServerSocket, Socket, SocketException}
 import java.nio.charset.StandardCharsets
 import java.util.Arrays
+import javax.annotation.concurrent.GuardedBy

 import scala.collection.JavaConverters._
 import scala.collection.mutable
@@ -31,7 +32,7 @@ import org.apache.spark.security.SocketAuthHelper
 import org.apache.spark.util.{RedirectThread, Utils}

 private[spark] class PythonWorkerFactory(pythonExec: String, envVars: Map[String, String])
-  extends Logging {
+  extends Logging { self =>

   import PythonWorkerFactory._

@@ -39,7 +40,7 @@ private[spark] class PythonWorkerFactory(pythonExec: String, envVars: Map[String, String])
   // pyspark/daemon.py (by default) and tell it to fork new workers for our tasks. This daemon
   // currently only works on UNIX-based systems now because it uses signals for child management,
   // so we can also fall back to launching workers, pyspark/worker.py (by default) directly.
-  val useDaemon = {
+  private val useDaemon = {
     val useDaemonEnabled = SparkEnv.get.conf.getBoolean("spark.python.use.daemon", true)

     // This flag is ignored on Windows as it's unable to fork.
@@ -51,44 +52,52 @@ private[spark] class PythonWorkerFactory(pythonExec: String, envVars: Map[String, String])
   // as expert-only option, and shouldn't be used before knowing what it means exactly.

   // This configuration indicates the module to run the daemon to execute its Python workers.
-  val daemonModule = SparkEnv.get.conf.getOption("spark.python.daemon.module").map { value =>
-    logInfo(
-      s"Python daemon module in PySpark is set to [$value] in 'spark.python.daemon.module', " +
-      "using this to start the daemon up. Note that this configuration only has an effect when " +
-      "'spark.python.use.daemon' is enabled and the platform is not Windows.")
-    value
-  }.getOrElse("pyspark.daemon")
+  private val daemonModule =
+    SparkEnv.get.conf.getOption("spark.python.daemon.module").map { value =>
+      logInfo(
+        s"Python daemon module in PySpark is set to [$value] in 'spark.python.daemon.module', " +
+          "using this to start the daemon up. Note that this configuration only has an effect when " +
+          "'spark.python.use.daemon' is enabled and the platform is not Windows.")
+      value
+    }.getOrElse("pyspark.daemon")

   // This configuration indicates the module to run each Python worker.
-  val workerModule = SparkEnv.get.conf.getOption("spark.python.worker.module").map { value =>
-    logInfo(
-      s"Python worker module in PySpark is set to [$value] in 'spark.python.worker.module', " +
-      "using this to start the worker up. Note that this configuration only has an effect when " +
-      "'spark.python.use.daemon' is disabled or the platform is Windows.")
-    value
-  }.getOrElse("pyspark.worker")
+  private val workerModule =
+    SparkEnv.get.conf.getOption("spark.python.worker.module").map { value =>
+      logInfo(
+        s"Python
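For illustration, here is a minimal sketch of the locking discipline this patch makes explicit. The names (`WorkerRegistry`, `register`, `stopWorker`) are hypothetical, not Spark's API; the point is that every access to guarded state, including the monitor thread's cleanup path that previously used the wrong lock, goes through one shared monitor declared with `@GuardedBy`:

```scala
import javax.annotation.concurrent.GuardedBy

// Hypothetical registry standing in for PythonWorkerFactory's shared maps.
class WorkerRegistry { self =>

  // Documents that the map may only be touched while holding self's monitor.
  @GuardedBy("self")
  private val simpleWorkers = scala.collection.mutable.HashMap[Int, Long]()

  // The missing-`synchronized` bug was an update like this happening outside
  // the lock; all mutation now goes through the shared monitor.
  def register(port: Int, pid: Long): Unit = self.synchronized {
    simpleWorkers(port) = pid
  }

  // A monitor/cleanup thread must synchronize on the same object, not on
  // itself or some other lock, or the @GuardedBy contract is broken.
  def stopWorker(port: Int): Unit = self.synchronized {
    simpleWorkers.remove(port).foreach { pid =>
      println(s"would terminate worker process $pid here") // placeholder cleanup
    }
  }
}
```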
svn commit: r30221 - in /dev/spark/v2.4.0-rc4-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _site/api/java/org/apache/spark
Author: wenchen
Date: Mon Oct 22 16:16:37 2018
New Revision: 30221

Log: Apache Spark v2.4.0-rc4 docs

[This commit notification would consist of 1479 parts, which exceeds the limit of 50 ones, so it was shortened to the summary.]
svn commit: r30220 - /dev/spark/v2.4.0-rc4-bin/
Author: wenchen
Date: Mon Oct 22 15:46:56 2018
New Revision: 30220

Log: Apache Spark v2.4.0-rc4

Added:
    dev/spark/v2.4.0-rc4-bin/
    dev/spark/v2.4.0-rc4-bin/SparkR_2.4.0.tar.gz (with props)
    dev/spark/v2.4.0-rc4-bin/SparkR_2.4.0.tar.gz.asc
    dev/spark/v2.4.0-rc4-bin/SparkR_2.4.0.tar.gz.sha512
    dev/spark/v2.4.0-rc4-bin/pyspark-2.4.0.tar.gz (with props)
    dev/spark/v2.4.0-rc4-bin/pyspark-2.4.0.tar.gz.asc
    dev/spark/v2.4.0-rc4-bin/pyspark-2.4.0.tar.gz.sha512
    dev/spark/v2.4.0-rc4-bin/spark-2.4.0-bin-hadoop2.6.tgz (with props)
    dev/spark/v2.4.0-rc4-bin/spark-2.4.0-bin-hadoop2.6.tgz.asc
    dev/spark/v2.4.0-rc4-bin/spark-2.4.0-bin-hadoop2.6.tgz.sha512
    dev/spark/v2.4.0-rc4-bin/spark-2.4.0-bin-hadoop2.7.tgz (with props)
    dev/spark/v2.4.0-rc4-bin/spark-2.4.0-bin-hadoop2.7.tgz.asc
    dev/spark/v2.4.0-rc4-bin/spark-2.4.0-bin-hadoop2.7.tgz.sha512
    dev/spark/v2.4.0-rc4-bin/spark-2.4.0-bin-without-hadoop-scala-2.12.tgz (with props)
    dev/spark/v2.4.0-rc4-bin/spark-2.4.0-bin-without-hadoop-scala-2.12.tgz.asc
    dev/spark/v2.4.0-rc4-bin/spark-2.4.0-bin-without-hadoop-scala-2.12.tgz.sha512
    dev/spark/v2.4.0-rc4-bin/spark-2.4.0-bin-without-hadoop.tgz (with props)
    dev/spark/v2.4.0-rc4-bin/spark-2.4.0-bin-without-hadoop.tgz.asc
    dev/spark/v2.4.0-rc4-bin/spark-2.4.0-bin-without-hadoop.tgz.sha512
    dev/spark/v2.4.0-rc4-bin/spark-2.4.0.tgz (with props)
    dev/spark/v2.4.0-rc4-bin/spark-2.4.0.tgz.asc
    dev/spark/v2.4.0-rc4-bin/spark-2.4.0.tgz.sha512

[Inline PGP signature (.asc) and SHA-512 checksum (.sha512) contents shortened here; the binary archives themselves have no diff available.]
svn commit: r30219 - in /dev/spark/2.4.1-SNAPSHOT-2018_10_22_08_15-f33d888-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s
Author: pwendell
Date: Mon Oct 22 15:29:39 2018
New Revision: 30219

Log: Apache Spark 2.4.1-SNAPSHOT-2018_10_22_08_15-f33d888 docs

[This commit notification would consist of 1477 parts, which exceeds the limit of 50 ones, so it was shortened to the summary.]
[1/2] spark git commit: Preparing Spark release v2.4.0-rc4
Repository: spark
Updated Branches: refs/heads/branch-2.4 c21d7e1bb -> f33d888a2

Preparing Spark release v2.4.0-rc4

Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/e69e2bfa
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/e69e2bfa
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/e69e2bfa

Branch: refs/heads/branch-2.4
Commit: e69e2bfa486d8d3b9d203b96ca9c0f37c2b6cabe
Parents: c21d7e1
Author: Wenchen Fan
Authored: Mon Oct 22 14:50:51 2018 +
Committer: Wenchen Fan
Committed: Mon Oct 22 14:50:51 2018 +

 R/pkg/DESCRIPTION                                      | 2 +-
 assembly/pom.xml                                       | 2 +-
 common/kvstore/pom.xml                                 | 2 +-
 common/network-common/pom.xml                          | 2 +-
 common/network-shuffle/pom.xml                         | 2 +-
 common/network-yarn/pom.xml                            | 2 +-
 common/sketch/pom.xml                                  | 2 +-
 common/tags/pom.xml                                    | 2 +-
 common/unsafe/pom.xml                                  | 2 +-
 core/pom.xml                                           | 2 +-
 docs/_config.yml                                       | 4 ++--
 examples/pom.xml                                       | 2 +-
 external/avro/pom.xml                                  | 2 +-
 external/docker-integration-tests/pom.xml              | 2 +-
 external/flume-assembly/pom.xml                        | 2 +-
 external/flume-sink/pom.xml                            | 2 +-
 external/flume/pom.xml                                 | 2 +-
 external/kafka-0-10-assembly/pom.xml                   | 2 +-
 external/kafka-0-10-sql/pom.xml                        | 2 +-
 external/kafka-0-10/pom.xml                            | 2 +-
 external/kafka-0-8-assembly/pom.xml                    | 2 +-
 external/kafka-0-8/pom.xml                             | 2 +-
 external/kinesis-asl-assembly/pom.xml                  | 2 +-
 external/kinesis-asl/pom.xml                           | 2 +-
 external/spark-ganglia-lgpl/pom.xml                    | 2 +-
 graphx/pom.xml                                         | 2 +-
 hadoop-cloud/pom.xml                                   | 2 +-
 launcher/pom.xml                                       | 2 +-
 mllib-local/pom.xml                                    | 2 +-
 mllib/pom.xml                                          | 2 +-
 pom.xml                                                | 2 +-
 python/pyspark/version.py                              | 2 +-
 repl/pom.xml                                           | 2 +-
 resource-managers/kubernetes/core/pom.xml              | 2 +-
 resource-managers/kubernetes/integration-tests/pom.xml | 2 +-
 resource-managers/mesos/pom.xml                        | 2 +-
 resource-managers/yarn/pom.xml                         | 2 +-
 sql/catalyst/pom.xml                                   | 2 +-
 sql/core/pom.xml                                       | 2 +-
 sql/hive-thriftserver/pom.xml                          | 2 +-
 sql/hive/pom.xml                                       | 2 +-
 streaming/pom.xml                                      | 2 +-
 tools/pom.xml                                          | 2 +-
 43 files changed, 44 insertions(+), 44 deletions(-)

http://git-wip-us.apache.org/repos/asf/spark/blob/e69e2bfa/R/pkg/DESCRIPTION

diff --git a/R/pkg/DESCRIPTION b/R/pkg/DESCRIPTION
index 714b6f1..f52d785 100644
--- a/R/pkg/DESCRIPTION
+++ b/R/pkg/DESCRIPTION
@@ -1,6 +1,6 @@
 Package: SparkR
 Type: Package
-Version: 2.4.1
+Version: 2.4.0
 Title: R Frontend for Apache Spark
 Description: Provides an R Frontend for Apache Spark.
 Authors@R: c(person("Shivaram", "Venkataraman", role = c("aut", "cre"),

http://git-wip-us.apache.org/repos/asf/spark/blob/e69e2bfa/assembly/pom.xml

diff --git a/assembly/pom.xml b/assembly/pom.xml
index ee0de73..63ab510 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -21,7 +21,7 @@
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.4.1-SNAPSHOT</version>
+    <version>2.4.0</version>
     <relativePath>../pom.xml</relativePath>

http://git-wip-us.apache.org/repos/asf/spark/blob/e69e2bfa/common/kvstore/pom.xml

diff --git a/common/kvstore/pom.xml b/common/kvstore/pom.xml
index b89e0fe..b10e118 100644
--- a/common/kvstore/pom.xml
+++ b/common/kvstore/pom.xml
@@ -22,7 +22,7 @@
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.4.1-SNAPSHOT</version>
+    <version>2.4.0</version>
     <relativePath>../../pom.xml</relativePath>

http://git-wip-us.apache.org/repos/asf/spark/blob/e69e2bfa/common/network-common/pom.xml
[spark] Git Push Summary
Repository: spark
Updated Tags: refs/tags/v2.4.0-rc4 [created] e69e2bfa4