This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
     new 41779ea2612 [MINOR][DOCS] Remove generated statement about Scala version in docs homepage as Spark supports multiple versions
41779ea2612 is described below

commit 41779ea26122de6a2f0e70a0398f82841a3f909b
Author: Sean Owen <sro...@gmail.com>
AuthorDate: Tue Aug 2 21:18:45 2022 -0500

    [MINOR][DOCS] Remove generated statement about Scala version in docs homepage as Spark supports multiple versions
    
    ### What changes were proposed in this pull request?
    
    Remove this statement from the docs homepage:
    "For the Scala API, Spark 3.3.0 uses Scala 2.12. You will need to use a 
compatible Scala version (2.12.x)."
    
    ### Why are the changes needed?
    
    It's misleading, as Spark supports 2.12 and 2.13.
    
    ### Does this PR introduce _any_ user-facing change?
    
    No
    
    ### How was this patch tested?
    
    N/A
    
    Closes #37381 from srowen/RemoveScalaStatement.
    
    Authored-by: Sean Owen <sro...@gmail.com>
    Signed-off-by: Sean Owen <sro...@gmail.com>
    (cherry picked from commit 73ef5432547e3e8e9b0cce0913200a94402aeb4c)
    Signed-off-by: Sean Owen <sro...@gmail.com>
---
 docs/index.md | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/docs/index.md b/docs/index.md
index c6caf31d560..0c3c0273757 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -41,9 +41,8 @@ Spark runs on both Windows and UNIX-like systems (e.g. Linux, Mac OS), and it sh
 
 Spark runs on Java 8/11/17, Scala 2.12/2.13, Python 3.7+ and R 3.5+.
 Java 8 prior to version 8u201 support is deprecated as of Spark 3.2.0.
-For the Scala API, Spark {{site.SPARK_VERSION}}
-uses Scala {{site.SCALA_BINARY_VERSION}}. You will need to use a compatible Scala version
-({{site.SCALA_BINARY_VERSION}}.x).
+When using the Scala API, it is necessary for applications to use the same version of Scala that Spark was compiled for.
+For example, when using Scala 2.13, use Spark compiled for 2.13, and compile code/applications for Scala 2.13 as well.
 
 For Python 3.9, Arrow optimization and pandas UDFs might not work due to the supported Python versions in Apache Arrow. Please refer to the latest [Python Compatibility](https://arrow.apache.org/docs/python/install.html#python-compatibility) page.
 For Java 11, `-Dio.netty.tryReflectionSetAccessible=true` is required additionally for Apache Arrow library. This prevents `java.lang.UnsupportedOperationException: sun.misc.Unsafe or java.nio.DirectByteBuffer.(long, int) not available` when Apache Arrow uses Netty internally.
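
For readers of this change, a minimal build.sbt sketch (not part of the commit or of the Spark docs) illustrating the guidance added above; the Scala and Spark version numbers are only examples:

```scala
// Sketch only: keep the application's Scala version aligned with the Scala
// build of Spark it depends on (here, both are 2.13).
ThisBuild / scalaVersion := "2.13.8"

libraryDependencies ++= Seq(
  // "%%" appends the Scala binary version, so this resolves spark-core_2.13;
  // mixing a Spark build for Scala 2.12 with 2.13 application code will not link.
  "org.apache.spark" %% "spark-core" % "3.3.0" % "provided",
  "org.apache.spark" %% "spark-sql"  % "3.3.0" % "provided"
)
```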

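Likewise, a sketch of how the `-Dio.netty.tryReflectionSetAccessible=true` flag mentioned in the unchanged context above is commonly supplied; `spark.executor.extraJavaOptions` and the `--driver-java-options` flag are standard Spark settings, but the surrounding application code is illustrative only:

```scala
import org.apache.spark.sql.SparkSession

object ArrowOnJava11Example {
  def main(args: Array[String]): Unit = {
    // Illustrative only: this applies the flag to executor JVMs launched by the app.
    // The driver JVM usually needs the flag before it starts, e.g. via
    //   spark-submit --driver-java-options "-Dio.netty.tryReflectionSetAccessible=true"
    val spark = SparkSession.builder()
      .appName("arrow-on-java11-example")
      .config("spark.executor.extraJavaOptions", "-Dio.netty.tryReflectionSetAccessible=true")
      .getOrCreate()

    spark.stop()
  }
}
```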

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
