This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-2.4 by this push:
     new 6e4cd88  [SPARK-27274][DOCS] Fix references to scala 2.11 in 2.4.1+ docs; Note 2.11 support is deprecated in 2.4.1+
6e4cd88 is described below

commit 6e4cd887a02eb07e5af0efb9a62f80ccfd0f3b2c
Author: Sean Owen <sean.o...@databricks.com>
AuthorDate: Mon Mar 25 19:06:17 2019 -0500

    [SPARK-27274][DOCS] Fix references to scala 2.11 in 2.4.1+ docs; Note 2.11 support is deprecated in 2.4.1+
    
    ## What changes were proposed in this pull request?
    
    Fix references to scala 2.11 in 2.4.x docs; should default to 2.12. Note 2.11 support is deprecated in 2.4.x. Note that this change isn't needed in master as it's already on 2.12 in docs by default.
    
    ## How was this patch tested?
    
    Docs build.
    
    Closes #24210 from srowen/Scala212docs24.
    
    Authored-by: Sean Owen <sean.o...@databricks.com>
    Signed-off-by: Sean Owen <sean.o...@databricks.com>
---
 docs/_config.yml              | 4 ++--
 docs/index.md                 | 3 ++-
 docs/rdd-programming-guide.md | 4 ++--
 3 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/docs/_config.yml b/docs/_config.yml
index 13b5d8e..acc188c 100644
--- a/docs/_config.yml
+++ b/docs/_config.yml
@@ -16,8 +16,8 @@ include:
 # of Spark, Scala, and Mesos.
 SPARK_VERSION: 2.4.2-SNAPSHOT
 SPARK_VERSION_SHORT: 2.4.2
-SCALA_BINARY_VERSION: "2.11"
-SCALA_VERSION: "2.11.12"
+SCALA_BINARY_VERSION: "2.12"
+SCALA_VERSION: "2.12.8"
 MESOS_VERSION: 1.0.0
 SPARK_ISSUE_TRACKER_URL: https://issues.apache.org/jira/browse/SPARK
 SPARK_GITHUB_URL: https://github.com/apache/spark
diff --git a/docs/index.md b/docs/index.md
index 0300528..9bba61b 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -36,7 +36,8 @@ uses Scala {{site.SCALA_BINARY_VERSION}}. You will need to use a compatible Scal
 ({{site.SCALA_BINARY_VERSION}}.x).
 
 Note that support for Java 7, Python 2.6 and old Hadoop versions before 2.6.5 were removed as of Spark 2.2.0.
-Support for Scala 2.10 was removed as of 2.3.0.
+Support for Scala 2.10 was removed as of 2.3.0. Support for Scala 2.11 is deprecated as of Spark 2.4.1
+and will be removed in Spark 3.0.
 
 # Running the Examples and Shell
 
diff --git a/docs/rdd-programming-guide.md b/docs/rdd-programming-guide.md
index 9a07d6c..8655056 100644
--- a/docs/rdd-programming-guide.md
+++ b/docs/rdd-programming-guide.md
@@ -839,7 +839,7 @@ The most common ones are distributed "shuffle" operations, such as grouping or a
 by a key.
 
 In Scala, these operations are automatically available on RDDs containing
-[Tuple2](http://www.scala-lang.org/api/{{site.SCALA_VERSION}}/index.html#scala.Tuple2) objects
+[Tuple2](https://www.scala-lang.org/api/{{site.SCALA_VERSION}}/scala/Tuple2.html) objects
 (the built-in tuples in the language, created by simply writing `(a, b)`). The key-value pair operations are available in the
 [PairRDDFunctions](api/scala/index.html#org.apache.spark.rdd.PairRDDFunctions) class,
 which automatically wraps around an RDD of tuples.
@@ -871,7 +871,7 @@ The most common ones are distributed "shuffle" operations, such as grouping or a
 by a key.
 
 In Java, key-value pairs are represented using the
-[scala.Tuple2](http://www.scala-lang.org/api/{{site.SCALA_VERSION}}/index.html#scala.Tuple2) class
+[scala.Tuple2](https://www.scala-lang.org/api/{{site.SCALA_VERSION}}/scala/Tuple2.html) class
 from the Scala standard library. You can simply call `new Tuple2(a, b)` to create a tuple, and access
 its fields later with `tuple._1()` and `tuple._2()`.
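As a plain-Scala aside on the `Tuple2` links this patch updates: the guide's claim that `(a, b)` literals are the built-in `scala.Tuple2` pairs, with fields read via `_1` and `_2`, can be checked without Spark. A minimal sketch (the object name is mine, not from the docs):

```scala
// Sketch only: shows that (a, b) is sugar for scala.Tuple2,
// which is why the pair-RDD docs link to Tuple2's scaladoc.
object Tuple2Sketch {
  def main(args: Array[String]): Unit = {
    val pair: (String, Int) = ("spark", 1) // same value as Tuple2("spark", 1)
    assert(pair == Tuple2("spark", 1))
    // _1 and _2 access the fields, matching the Java section's tuple._1()/._2()
    println(pair._1) // spark
    println(pair._2) // 1
  }
}
```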
 

