This is an automated email from the ASF dual-hosted git repository.
jiayu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/sedona.git
The following commit(s) were added to refs/heads/master by this push:
new df40faa0e Revert "[DOCS] Update 1.6.0 release notes with Java 11 tutorial (#1403)" (#1411)
df40faa0e is described below
commit df40faa0e1516899375d20f4093cd460943ba8c5
Author: Jia Yu <[email protected]>
AuthorDate: Mon May 13 15:02:10 2024 -0700
Revert "[DOCS] Update 1.6.0 release notes with Java 11 tutorial (#1403)" (#1411)
* Revert "[DOCS] Update 1.6.0 release notes with Java 11 tutorial (#1403)"
This reverts commit d8a896e86140ec1b9075e43accaf9bf2840e0bc8.
* revert
---
docs/setup/databricks.md    | 33 ---------------------------------
docs/setup/emr.md           | 37 -------------------------------------
docs/setup/fabric.md        |  4 ----
docs/setup/release-notes.md |  2 +-
4 files changed, 1 insertion(+), 75 deletions(-)
diff --git a/docs/setup/databricks.md b/docs/setup/databricks.md
index 1c43ab643..011c0392e 100644
--- a/docs/setup/databricks.md
+++ b/docs/setup/databricks.md
@@ -1,36 +1,3 @@
-
-## JDK 11+ requirement
-
-Sedona 1.6.0+ requires JDK 11+ to run. Databricks Runtime by default uses JDK 8. You can set up JDK 17 by following the instructions in the [Databricks documentation](https://docs.databricks.com/en/dev-tools/sdk-java.html#create-a-cluster-that-uses-jdk-17).
-
-### on Databricks Runtime versions 13.1 and above
-
-When you create a cluster, specify that the cluster uses JDK 17 for both the driver and executor by adding the following environment variable to `Advanced Options > Spark > Environment Variables`:
-
-```
-JNAME=zulu17-ca-amd64
-```
-
-If you are using ARM-based clusters (for example, AWS Graviton instances), use the following environment variable instead.
-
-```
-JNAME=zulu17-ca-arm64
-```
-
-### on Databricks Runtime versions 11.2 - 13.0
-
-When you create a cluster, you can specify that the cluster uses JDK 11 (for both the driver and executor). To do this, add the following environment variable to `Advanced Options > Spark > Environment Variables`:
-
-```
-JNAME=zulu11-ca-amd64
-```
-
-If you are using ARM-based clusters (for example, AWS Graviton instances), use the following environment variable instead.
-
-```
-JNAME=zulu11-ca-arm64
-```
-
## Community edition (free-tier)
You just need to install the Sedona jars and Sedona Python on Databricks using Databricks default web UI. Then everything will work.
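The removed Databricks section above maps a JDK major version and a CPU architecture to a `JNAME` value. As a minimal sketch of that mapping (the `jname` helper is hypothetical, not part of Sedona or Databricks; the value format is taken from the removed docs):

```python
# Hypothetical helper (not part of Sedona or Databricks): build the JNAME
# environment variable value described in the removed Databricks docs from
# the desired Zulu JDK major version and the cluster CPU architecture.
def jname(jdk_major: int, arch: str) -> str:
    # Graviton (ARM) instances need the arm64 build; everything else amd64.
    suffix = "arm64" if arch in ("arm64", "aarch64") else "amd64"
    return f"zulu{jdk_major}-ca-{suffix}"

print(jname(17, "aarch64"))  # zulu17-ca-arm64 (DBR 13.1+ on Graviton)
print(jname(11, "x86_64"))   # zulu11-ca-amd64 (DBR 11.2 - 13.0 on x86)
```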
diff --git a/docs/setup/emr.md b/docs/setup/emr.md
index 9f73b62ba..6d687f35e 100644
--- a/docs/setup/emr.md
+++ b/docs/setup/emr.md
@@ -5,43 +5,6 @@ This tutorial is tested on EMR on EC2 with EMR Studio (notebooks). EMR on EC2 us
!!!note
    If you are using Spark 3.4+ and Scala 2.12, please use `sedona-spark-shaded-3.4_2.12`. Please pay attention to the Spark version postfix and Scala version postfix.
-## JDK 11+ requirement
-
-Sedona 1.6.0+ requires JDK 11+ to run. For Amazon EMR 7.x, the default JVM is Java 17. For Amazon EMR 5.x and 6.x, the default JVM is Java 8 but you can configure the cluster to use Java 11 or Java 17. For more information, see [EMR JVM versions](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/configuring-java8.html#configuring-java8-override-spark).
-
-When you use Spark with Amazon EMR releases 6.12 and higher, if you write a driver for submission in cluster mode, the driver uses Java 8, but you can set the environment so that the executors use Java 11 or 17. To override the JVM for Spark, AWS EMR recommends that you set both the Hadoop and Spark classifications.
-
-However, it is unclear whether the following will work on EMR versions below 6.12.
-
-```
-{
-  "Classification": "hadoop-env",
-  "Configurations": [
-    {
-      "Classification": "export",
-      "Configurations": [],
-      "Properties": {
-        "JAVA_HOME": "/usr/lib/jvm/java-1.11.0"
-      }
-    }
-  ],
-  "Properties": {}
-},
-{
-  "Classification": "spark-env",
-  "Configurations": [
-    {
-      "Classification": "export",
-      "Configurations": [],
-      "Properties": {
-        "JAVA_HOME": "/usr/lib/jvm/java-1.11.0"
-      }
-    }
-  ],
-  "Properties": {}
-}
-```
-
## Prepare initialization script
In your S3 bucket, add a script that has the following content:
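The removed EMR section above embeds a two-part JSON classification list. As a minimal sketch (the `java_homes` helper is hypothetical, not part of Sedona or EMR; the classification names and `JAVA_HOME` path are copied from the diff), one could sanity-check that both classifications export the same JVM before creating a cluster:

```python
# Hypothetical sanity check (not part of Sedona or EMR): confirm that the
# hadoop-env and spark-env classifications from the removed docs export
# the same JAVA_HOME, as the EMR guidance above requires.
emr_config = [
    {
        "Classification": "hadoop-env",
        "Configurations": [
            {
                "Classification": "export",
                "Configurations": [],
                "Properties": {"JAVA_HOME": "/usr/lib/jvm/java-1.11.0"},
            }
        ],
        "Properties": {},
    },
    {
        "Classification": "spark-env",
        "Configurations": [
            {
                "Classification": "export",
                "Configurations": [],
                "Properties": {"JAVA_HOME": "/usr/lib/jvm/java-1.11.0"},
            }
        ],
        "Properties": {},
    },
]

def java_homes(config):
    """Collect every JAVA_HOME exported by the classification list."""
    homes = []
    for entry in config:
        for sub in entry.get("Configurations", []):
            home = sub.get("Properties", {}).get("JAVA_HOME")
            if home:
                homes.append(home)
    return homes

print(java_homes(emr_config))  # both classifications should agree
```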
diff --git a/docs/setup/fabric.md b/docs/setup/fabric.md
index ff3a10e36..aa5ca6ee6 100644
--- a/docs/setup/fabric.md
+++ b/docs/setup/fabric.md
@@ -1,9 +1,5 @@
This tutorial will guide you through the process of installing Sedona on Microsoft Fabric Synapse Data Engineering's Spark environment.
-## JDK 11+ requirement
-
-Sedona 1.6.0+ requires JDK 11+ to run. Microsoft Fabric Synapse Data Engineering 1.2+ uses JDK 11 by default so we recommend using Microsoft Fabric Synapse Data Engineering 1.2+. For more information, see [Apache Spark Runtimes in Fabric](https://learn.microsoft.com/en-us/fabric/data-engineering/runtime).
-
## Step 1: Open Microsoft Fabric Synapse Data Engineering
Go to the [Microsoft Fabric portal](https://app.fabric.microsoft.com/) and choose the `Data Engineering` option.
diff --git a/docs/setup/release-notes.md b/docs/setup/release-notes.md
index c44dcc6b3..16df19b0b 100644
--- a/docs/setup/release-notes.md
+++ b/docs/setup/release-notes.md
@@ -4,7 +4,7 @@
    If you use Sedona < 1.6.0, please use GeoPandas <= `0.11.1` since GeoPandas > 0.11.1 will automatically install Shapely 2.0. If you use Shapely, please use <= `1.8.5`.
!!! warning
-    Sedona 1.6.0+ requires Java 11+ to compile and run. If you are using Java 8, please use Sedona < 1.6.0. To learn how to set up Java 11+ on different platforms, please refer to the Java 11+ requirement in the corresponding platform setup guide.
+    Sedona 1.6.0+ requires Java 11+ to compile and run. If you are using Java 8, please use Sedona <= 1.5.2.
## Sedona 1.6.0
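The compatibility rule stated in the release-notes warning above (Java 11+ for Sedona 1.6.0+, Java 8 only up to Sedona 1.5.2) can be sketched as a tiny guard; `compatible_sedona` is a hypothetical helper, not a Sedona API:

```python
# Hypothetical guard (not a Sedona API): map the running Java major
# version to the Sedona line supported per the release notes above.
def compatible_sedona(java_major: int) -> str:
    return "1.6.0+" if java_major >= 11 else "<= 1.5.2"

print(compatible_sedona(8))   # Java 8 users stay on Sedona <= 1.5.2
print(compatible_sedona(17))  # Java 11+ can use Sedona 1.6.0+
```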