address comments
Project: http://git-wip-us.apache.org/repos/asf/spark-website/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark-website/commit/02a3641f
Tree: http://git-wip-us.apache.org/repos/asf/spark-website/tree/02a3641f
Diff: http://git-wip-us.apache.org/repos/asf/spark-website/diff/02a3641f

Branch: refs/heads/asf-site
Commit: 02a3641f5ffe4a3e898766be95c7ae69da4b88e9
Parents: 62847a7
Author: Wenchen Fan <wenc...@databricks.com>
Authored: Mon Nov 5 16:10:10 2018 +0800
Committer: Wenchen Fan <wenc...@databricks.com>
Committed: Mon Nov 5 16:10:10 2018 +0800

----------------------------------------------------------------------
 releases/_posts/2018-11-05-spark-release-2-4-0.md | 6 +++---
 site/releases/spark-release-2-4-0.html            | 6 +++---
 2 files changed, 6 insertions(+), 6 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark-website/blob/02a3641f/releases/_posts/2018-11-05-spark-release-2-4-0.md
----------------------------------------------------------------------
diff --git a/releases/_posts/2018-11-05-spark-release-2-4-0.md b/releases/_posts/2018-11-05-spark-release-2-4-0.md
index 61f7109..03f8f12 100644
--- a/releases/_posts/2018-11-05-spark-release-2-4-0.md
+++ b/releases/_posts/2018-11-05-spark-release-2-4-0.md
@@ -25,7 +25,7 @@ To download Apache Spark 2.4.0, visit the <a href="{{site.baseurl}}/downloads.ht
 - **Major features**
   - **Barrier Execution Mode**: [[SPARK-24374](https://issues.apache.org/jira/browse/SPARK-24374)] Support Barrier Execution Mode in the scheduler, to better integrate with deep learning frameworks.
   - **Scala 2.12 Support**: [[SPARK-14220](https://issues.apache.org/jira/browse/SPARK-14220)] Add experimental Scala 2.12 support. Now you can build Spark with Scala 2.12 and write Spark applications in Scala 2.12.
-  - **Higher-order functions**: [[SPARK-23899](https://issues.apache.org/jira/browse/SPARK-23899)] Add a lof of new built-in functions, including high-order functions, to deal with complex data types easier.
+  - **Higher-order functions**: [[SPARK-23899](https://issues.apache.org/jira/browse/SPARK-23899)] Add a lot of new built-in functions, including higher-order functions, to deal with complex data types easier.
   - **Built-in Avro data source**: [[SPARK-24768](https://issues.apache.org/jira/browse/SPARK-24768)] Inline Spark-Avro package with logical type support, better performance and usability.
 
 - **API**
@@ -66,8 +66,8 @@ To download Apache Spark 2.4.0, visit the <a href="{{site.baseurl}}/downloads.ht
 
 - **PySpark**
   - [[SPARK-24215](https://issues.apache.org/jira/browse/SPARK-24215)] Implement eager evaluation for DataFrame APIs
-  - [[SPARK-22274](https://issues.apache.org/jira/browse/SPARK-22274)] User-defined aggregation functions with pandas UDF
-  - [[SPARK-22239](https://issues.apache.org/jira/browse/SPARK-22239)] User-defined window functions with pandas UDF
+  - [[SPARK-22274](https://issues.apache.org/jira/browse/SPARK-22274)] User-defined aggregation functions with Pandas UDF
+  - [[SPARK-22239](https://issues.apache.org/jira/browse/SPARK-22239)] User-defined window functions with Pandas UDF
   - [[SPARK-24396](https://issues.apache.org/jira/browse/SPARK-24396)] Add Structured Streaming ForeachWriter for Python
   - [[SPARK-23874](https://issues.apache.org/jira/browse/SPARK-23874)] Upgrade Apache Arrow to 0.10.0
   - [[SPARK-25004](https://issues.apache.org/jira/browse/SPARK-25004)] Add spark.executor.pyspark.memory limit


http://git-wip-us.apache.org/repos/asf/spark-website/blob/02a3641f/site/releases/spark-release-2-4-0.html
----------------------------------------------------------------------
diff --git a/site/releases/spark-release-2-4-0.html b/site/releases/spark-release-2-4-0.html
index 8afbe42..29d826a 100644
--- a/site/releases/spark-release-2-4-0.html
+++ b/site/releases/spark-release-2-4-0.html
@@ -226,7 +226,7 @@
     <ul>
       <li><strong>Barrier Execution Mode</strong>: [<a href="https://issues.apache.org/jira/browse/SPARK-24374">SPARK-24374</a>] Support Barrier Execution Mode in the scheduler, to better integrate with deep learning frameworks.</li>
       <li><strong>Scala 2.12 Support</strong>: [<a href="https://issues.apache.org/jira/browse/SPARK-14220">SPARK-14220</a>] Add experimental Scala 2.12 support. Now you can build Spark with Scala 2.12 and write Spark applications in Scala 2.12.</li>
-      <li><strong>Higher-order functions</strong>: [<a href="https://issues.apache.org/jira/browse/SPARK-23899">SPARK-23899</a>] Add a lof of new built-in functions, including high-order functions, to deal with complex data types easier.</li>
+      <li><strong>Higher-order functions</strong>: [<a href="https://issues.apache.org/jira/browse/SPARK-23899">SPARK-23899</a>] Add a lot of new built-in functions, including higher-order functions, to deal with complex data types easier.</li>
      <li><strong>Built-in Avro data source</strong>: [<a href="https://issues.apache.org/jira/browse/SPARK-24768">SPARK-24768</a>] Inline Spark-Avro package with logical type support, better performance and usability.</li>
    </ul>
  </li>
@@ -277,8 +277,8 @@
  <li><strong>PySpark</strong>
    <ul>
      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-24215">SPARK-24215</a>] Implement eager evaluation for DataFrame APIs</li>
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22274">SPARK-22274</a>] User-defined aggregation functions with pandas UDF</li>
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22239">SPARK-22239</a>] User-defined window functions with pandas UDF</li>
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22274">SPARK-22274</a>] User-defined aggregation functions with Pandas UDF</li>
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22239">SPARK-22239</a>] User-defined window functions with Pandas UDF</li>
      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-24396">SPARK-24396</a>] Add Structured Streaming ForeachWriter for Python</li>
      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-23874">SPARK-23874</a>] Upgrade Apache Arrow to 0.10.0</li>
      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-25004">SPARK-25004</a>] Add spark.executor.pyspark.memory limit</li>
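For context on the PySpark hunk above: SPARK-22274 adds user-defined aggregation functions backed by pandas. As a rough illustration only (this is not code from the release or the website), the plain-pandas equivalent of what such a grouped-aggregation UDF computes per group might look like this; the data and column names here are invented for the example:

```python
import pandas as pd

# Illustrative sketch: in PySpark 2.4 a function like this would be wrapped
# with a grouped-aggregation pandas UDF decorator and applied through
# df.groupby(...).agg(...). Here we run the same logic on plain pandas to
# show the per-group Series-in, scalar-out contract.
def mean_udf(v: pd.Series) -> float:
    # Receives all values of one group as a pandas Series and
    # returns a single scalar for that group.
    return float(v.mean())

df = pd.DataFrame({"id": [1, 1, 2], "v": [1.0, 2.0, 3.0]})
result = df.groupby("id")["v"].agg(mean_udf)
print(result.to_dict())  # {1: 1.5, 2: 3.0}
```

The same Series-in, scalar-out shape is what distinguishes these aggregation UDFs from the elementwise scalar pandas UDFs that shipped in Spark 2.3.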