GitHub user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22615#discussion_r223729888
  
    --- Diff: docs/index.md ---
    @@ -30,9 +30,6 @@ Spark runs on Java 8+, Python 2.7+/3.4+ and R 3.1+. For the Scala API, Spark {{s
     uses Scala {{site.SCALA_BINARY_VERSION}}. You will need to use a compatible Scala version
     ({{site.SCALA_BINARY_VERSION}}.x).
     
    -Note that support for Java 7, Python 2.6 and old Hadoop versions before 2.6.5 were removed as of Spark 2.2.0.
    -Support for Scala 2.10 was removed as of 2.3.0.
    --- End diff --
    
    Now that we are on to 3.0, I figured we didn't need to keep documenting how versions 2.2 and 2.3 worked. I also felt that the particular Hadoop version was only an issue in the distant past, when we were trying to support the odd world of mutually incompatible 2.x releases before 2.2. Now, it's no more of a high-level issue than anything else. Indeed, we might even just build against Hadoop 3.x in the end and de-emphasize dependence on a particular version of Hadoop. But for now I just removed this note.
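
    For reference, the {{site.SCALA_BINARY_VERSION}} placeholders in the hunk above are Jekyll variables substituted when the docs are built. A minimal sketch of the kind of entry that defines one in docs/_config.yml (the value here is illustrative, not the exact file contents):

        # docs/_config.yml -- Jekyll site variables referenced from docs/index.md
        SCALA_BINARY_VERSION: "2.12"   # rendered wherever {{site.SCALA_BINARY_VERSION}} appears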

