Repository: spark-website
Updated Branches:
  refs/heads/asf-site 1c7fd01e9 -> 2fac17731
Patch references from docs/programming-guide.html to docs/rdd-programming-guide.html


Project: http://git-wip-us.apache.org/repos/asf/spark-website/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark-website/commit/2fac1773
Tree: http://git-wip-us.apache.org/repos/asf/spark-website/tree/2fac1773
Diff: http://git-wip-us.apache.org/repos/asf/spark-website/diff/2fac1773

Branch: refs/heads/asf-site
Commit: 2fac17731bdaafc3ce47be5d0adad682487f983c
Parents: 1c7fd01
Author: Sean Owen <so...@cloudera.com>
Authored: Wed Jul 12 12:20:26 2017 +0100
Committer: Sean Owen <so...@cloudera.com>
Committed: Wed Jul 12 12:20:26 2017 +0100

----------------------------------------------------------------------
 examples.md                                       |  2 +-
 releases/_posts/2017-07-11-spark-release-2-2-0.md |  2 +-
 site/examples.html                                |  2 +-
 site/releases/spark-release-2-2-0.html            |  2 +-
 site/sitemap.xml                                  | 14 +++++++-------
 sitemap.xml                                       |  2 +-
 6 files changed, 12 insertions(+), 12 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark-website/blob/2fac1773/examples.md
----------------------------------------------------------------------
diff --git a/examples.md b/examples.md
index fe9cc79..1bc45d0 100644
--- a/examples.md
+++ b/examples.md
@@ -11,7 +11,7 @@ navigation:
 These examples give a quick overview of the Spark API. Spark is built on the
 concept of <em>distributed datasets</em>, which contain arbitrary Java or Python objects.
 You create a dataset from external data, then apply parallel operations
-to it. The building block of the Spark API is its [RDD API](https://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds).
+to it. The building block of the Spark API is its [RDD API](https://spark.apache.org/docs/latest/rdd-programming-guide.html#resilient-distributed-datasets-rdds).
 In the RDD API, there are two types of operations: <em>transformations</em>,
 which define a new dataset based on previous ones,
 and <em>actions</em>, which kick off a job to execute on a cluster.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/2fac1773/releases/_posts/2017-07-11-spark-release-2-2-0.md
----------------------------------------------------------------------
diff --git a/releases/_posts/2017-07-11-spark-release-2-2-0.md b/releases/_posts/2017-07-11-spark-release-2-2-0.md
index 8027d8a..37d3638 100644
--- a/releases/_posts/2017-07-11-spark-release-2-2-0.md
+++ b/releases/_posts/2017-07-11-spark-release-2-2-0.md
@@ -59,7 +59,7 @@ To download Apache Spark 2.2.0, visit the <a href="{{site.baseurl}}/downloads.ht
 - SPARK-19464: Remove support for Hadoop 2.5 and earlier
 - SPARK-19493: Remove Java 7 support
 
-*Programming guides: <a href="{{site.baseurl}}/docs/2.2.0/programming-guide.html">Spark Programming Guide</a> and <a href="{{site.baseurl}}/docs/2.2.0/sql-programming-guide.html">Spark SQL, DataFrames and Datasets Guide</a>.*
+*Programming guides: <a href="{{site.baseurl}}/docs/2.2.0/rdd-programming-guide.html">Spark RDD Programming Guide</a> and <a href="{{site.baseurl}}/docs/2.2.0/sql-programming-guide.html">Spark SQL, DataFrames and Datasets Guide</a>.*
 
 ### Structured Streaming

http://git-wip-us.apache.org/repos/asf/spark-website/blob/2fac1773/site/examples.html
----------------------------------------------------------------------
diff --git a/site/examples.html b/site/examples.html
index 439a62b..a4cfeda 100644
--- a/site/examples.html
+++ b/site/examples.html
@@ -199,7 +199,7 @@
 <p>These examples give a quick overview of the Spark API. Spark is built on the
 concept of <em>distributed datasets</em>, which contain arbitrary Java or Python objects.
 You create a dataset from external data, then apply parallel operations
-to it. The building block of the Spark API is its <a href="https://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds">RDD API</a>.
+to it. The building block of the Spark API is its <a href="https://spark.apache.org/docs/latest/rdd-programming-guide.html#resilient-distributed-datasets-rdds">RDD API</a>.
 In the RDD API, there are two types of operations: <em>transformations</em>,
 which define a new dataset based on previous ones,
 and <em>actions</em>, which kick off a job to execute on a cluster.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/2fac1773/site/releases/spark-release-2-2-0.html
----------------------------------------------------------------------
diff --git a/site/releases/spark-release-2-2-0.html b/site/releases/spark-release-2-2-0.html
index badc714..0460c7d 100644
--- a/site/releases/spark-release-2-2-0.html
+++ b/site/releases/spark-release-2-2-0.html
@@ -264,7 +264,7 @@
   </li>
 </ul>
 
-<p><em>Programming guides: <a href="/docs/2.2.0/programming-guide.html">Spark Programming Guide</a> and <a href="/docs/2.2.0/sql-programming-guide.html">Spark SQL, DataFrames and Datasets Guide</a>.</em></p>
+<p><em>Programming guides: <a href="/docs/2.2.0/rdd-programming-guide.html">Spark RDD Programming Guide</a> and <a href="/docs/2.2.0/sql-programming-guide.html">Spark SQL, DataFrames and Datasets Guide</a>.</em></p>
 
 <h3 id="structured-streaming">Structured Streaming</h3>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/2fac1773/site/sitemap.xml
----------------------------------------------------------------------
diff --git a/site/sitemap.xml b/site/sitemap.xml
index 591e871..0ce546f 100644
--- a/site/sitemap.xml
+++ b/site/sitemap.xml
@@ -22,7 +22,7 @@
   <priority>1.0</priority>
 </url>
 <url>
-  <loc>https://spark.apache.org/docs/latest/programming-guide.html</loc>
+  <loc>https://spark.apache.org/docs/latest/rdd-programming-guide.html</loc>
   <changefreq>daily</changefreq>
   <priority>1.0</priority>
 </url>
@@ -652,27 +652,27 @@
   <changefreq>weekly</changefreq>
 </url>
 <url>
-  <loc>https://spark.apache.org/sql/</loc>
+  <loc>https://spark.apache.org/graphx/</loc>
   <changefreq>weekly</changefreq>
 </url>
 <url>
-  <loc>https://spark.apache.org/streaming/</loc>
+  <loc>https://spark.apache.org/mllib/</loc>
   <changefreq>weekly</changefreq>
 </url>
 <url>
-  <loc>https://spark.apache.org/screencasts/</loc>
+  <loc>https://spark.apache.org/news/</loc>
   <changefreq>weekly</changefreq>
 </url>
 <url>
-  <loc>https://spark.apache.org/mllib/</loc>
+  <loc>https://spark.apache.org/screencasts/</loc>
   <changefreq>weekly</changefreq>
 </url>
 <url>
-  <loc>https://spark.apache.org/news/</loc>
+  <loc>https://spark.apache.org/sql/</loc>
   <changefreq>weekly</changefreq>
 </url>
 <url>
-  <loc>https://spark.apache.org/graphx/</loc>
+  <loc>https://spark.apache.org/streaming/</loc>
   <changefreq>weekly</changefreq>
 </url>
 <url>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/2fac1773/sitemap.xml
----------------------------------------------------------------------
diff --git a/sitemap.xml b/sitemap.xml
index c55a1d3..f3c4cb9 100644
--- a/sitemap.xml
+++ b/sitemap.xml
@@ -25,7 +25,7 @@ sitemap: false
   <priority>1.0</priority>
 </url>
 <url>
-  <loc>https://spark.apache.org/docs/latest/programming-guide.html</loc>
+  <loc>https://spark.apache.org/docs/latest/rdd-programming-guide.html</loc>
   <changefreq>daily</changefreq>
   <priority>1.0</priority>
 </url>

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
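The examples.md text patched above distinguishes lazy <em>transformations</em> from eager <em>actions</em>. As a rough sketch of that evaluation model in plain Python (standard library only; this is deliberately not the PySpark API), generator expressions defer work the way chained RDD transformations do, and only a reducing call forces execution, like an action:

```python
# Plain-Python sketch of the lazy-transformation / eager-action split that
# the RDD API passage describes. NOT Spark code: each generator expression
# builds a deferred pipeline (analogous to rdd.map / rdd.filter), and only
# sum() at the end actually runs it (analogous to rdd.reduce or rdd.count).

data = range(1, 5)                           # stand-in for an input dataset

squared = (x * x for x in data)              # "transformation": nothing computed yet
evens = (x for x in squared if x % 2 == 0)   # chained transformation, still lazy

total = sum(evens)                           # "action": the pipeline executes here
print(total)                                 # prints 20 (4 + 16)
```

The analogy is loose (a real RDD is partitioned across a cluster and can be recomputed from its lineage), but the control flow is the same: building the pipeline is cheap and deferred, and work happens only when a result is demanded.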