This is an automated email from the ASF dual-hosted git repository.

jiayu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/sedona.git


The following commit(s) were added to refs/heads/master by this push:
     new 8324e2ca9 [DOCS] Update markdown links (#1334)
8324e2ca9 is described below

commit 8324e2ca90a79b39868fec08f22f448667aa2b19
Author: Merijn <[email protected]>
AuthorDate: Fri Apr 12 05:02:34 2024 +0200

    [DOCS] Update markdown links (#1334)
    
    * Update links in code documentation across various files and some minor grammar fixes.
    
    * [DOCS] Fix line breaks
---
 docs/api/flink/Function.md                         | 14 +++--
 docs/api/snowflake/vector-data/Overview.md         |  8 +--
 docs/api/sql/DataFrameAPI.md                       |  2 +-
 docs/api/sql/Function.md                           | 12 ++---
 docs/api/sql/Optimizer.md                          | 10 ++--
 docs/api/sql/Overview.md                           | 10 ++--
 docs/api/sql/Raster-aggregate-function.md          |  2 +-
 docs/api/sql/Raster-map-algebra.md                 |  2 +-
 docs/api/sql/Raster-operators.md                   | 14 ++---
 docs/api/sql/Raster-visualizer.md                  |  2 +-
 docs/api/sql/Raster-writer.md                      |  2 +-
 docs/community/contact.md                          |  2 +-
 docs/community/contributor.md                      |  2 +-
 docs/community/develop.md                          | 24 ++++-----
 docs/community/publish.md                          |  4 +-
 docs/setup/flink/install-scala.md                  |  4 +-
 docs/setup/install-python.md                       |  6 +--
 docs/setup/install-scala.md                        |  6 +--
 docs/setup/maven-coordinates.md                    |  2 +-
 docs/setup/overview.md                             |  2 +-
 docs/setup/release-notes.md                        | 23 +++++----
 docs/setup/snowflake/install.md                    | 18 +++----
 docs/setup/zeppelin.md                             |  2 +-
 .../Advanced-Tutorial-Tune-your-Application.md     |  2 +-
 docs/tutorial/flink/sql.md                         | 26 +++++-----
 docs/tutorial/geopandas-shapely.md                 |  6 +--
 docs/tutorial/jupyter-notebook.md                  |  6 +--
 docs/tutorial/raster.md                            | 60 +++++++++++-----------
 docs/tutorial/rdd.md                               |  6 +--
 docs/tutorial/snowflake/sql.md                     | 32 ++++++------
 docs/tutorial/sql.md                               | 36 ++++++-------
 docs/tutorial/viz-gallery.md                       |  6 +--
 docs/tutorial/viz.md                               | 10 ++--
 docs/tutorial/zeppelin.md                          |  4 +-
 mkdocs.yml                                         |  2 +-
 35 files changed, 184 insertions(+), 185 deletions(-)

diff --git a/docs/api/flink/Function.md b/docs/api/flink/Function.md
index ad62c4551..5117670b9 100644
--- a/docs/api/flink/Function.md
+++ b/docs/api/flink/Function.md
@@ -616,9 +616,8 @@ SELECT ST_Buffer(ST_GeomFromWKT('POINT(0 0)'), 10, false, 'quad_segs=2')
 ```
 
 Output:
-
-<img alt="Point buffer with 8 quadrant segments" src="../../../image/point-buffer-quad-8.png" width="100" height=""/>
-<img alt="Point buffer with 2 quadrant segments" src="../../../image/point-buffer-quad-2.png" width="100" height=""/>
+![Point buffer with 8 quadrant segments](../../image/point-buffer-quad-8.png){: width="100px"}
+![Point buffer with 2 quadrant segments](../../image/point-buffer-quad-2.png){: width="100px"}
 
 8 Segments &ensp; 2 Segments
 
@@ -629,9 +628,8 @@ SELECT ST_Buffer(ST_GeomFromWKT('LINESTRING(0 0, 50 70, 100 100)'), 10, false, '
 ```
 
 Output:
-
-<img alt="Original Linestring" src="../../../image/linestring-og.png" width="150"/>
-<img alt="Original Linestring with buffer on the left side" src="../../../image/linestring-left-side.png" width="150"/>
+![Original Linestring](../../image/linestring-og.png "Original Linestring"){: width="150px"}
+![Original Linestring  with buffer on the left side](../../image/linestring-left-side.png "Original Linestring with buffer on the left side"){: width="150px"}
 
 Original Linestring &emsp; Left side buffed Linestring
 
@@ -2637,7 +2635,7 @@ Format: `ST_Snap(input: Geometry, reference: Geometry, tolerance: double)`
 
 Input geometry:
 
-<img width="250" src="../../../image/st_snap/st-snap-base-example.png" title="ST_Snap Base example"/>
+![ST_Snap base example](../../image/st_snap/st-snap-base-example.png "ST_Snap base example"){: width="250"}
 
 SQL Example:
 
@@ -2651,7 +2649,7 @@ SELECT ST_Snap(
 
 Output:
 
-<img width="250" src="../../../image/st_snap/st-snap-applied.png" title="ST_Snap applied example"/>
+![ST_Snap applied example](../../image/st_snap/st-snap-applied.png "ST_Snap applied example"){: width="250"}
 
 ```
 POLYGON ((236877.58 -6.61, 236878.29 -8.35, 236879.98 -8.33, 236879.72 -7.63, 236880.69 -6.81, 236877.58 -6.61), (236878.45 -7.01, 236878.43 -7.52, 236879.29 -7.5, 236878.63 -7.22, 236878.76 -6.89, 236878.45 -7.01))
diff --git a/docs/api/snowflake/vector-data/Overview.md b/docs/api/snowflake/vector-data/Overview.md
index d52a2bf37..5a76b2653 100644
--- a/docs/api/snowflake/vector-data/Overview.md
+++ b/docs/api/snowflake/vector-data/Overview.md
@@ -5,13 +5,13 @@ SedonaSQL supports SQL/MM Part3 Spatial SQL Standard. It includes four kinds of
 
 * Constructor: Construct a Geometry given an input string or coordinates
   * Example: ST_GeomFromWKT (string). Create a Geometry from a WKT String.
-  * Documentation: [Here](../Constructor)
+  * Documentation: [Here](Constructor.md)
 * Function: Execute a function on the given column or columns
  * Example: ST_Distance (A, B). Given two Geometry A and B, return the Euclidean distance of A and B.
-  * Documentation: [Here](../Function)
+  * Documentation: [Here](Function.md)
 * Aggregate function: Return the aggregated value on the given column
  * Example: ST_Envelope_Aggr (Geometry column). Given a Geometry column, calculate the entire envelope boundary of this column.
-  * Documentation: [Here](../AggregateFunction)
+  * Documentation: [Here](AggregateFunction.md)
 * Predicate: Execute a logic judgement on the given columns and return true or false
  * Example: ST_Contains (A, B). Check if A fully contains B. Return "True" if yes, else return "False".
-  * Documentation: [Here](../Predicate)
+  * Documentation: [Here](Predicate.md)
diff --git a/docs/api/sql/DataFrameAPI.md b/docs/api/sql/DataFrameAPI.md
index a249c9a42..25409856c 100644
--- a/docs/api/sql/DataFrameAPI.md
+++ b/docs/api/sql/DataFrameAPI.md
@@ -2,7 +2,7 @@ Sedona SQL functions can be used in a DataFrame style API similar to Spark funct
 
 The following objects contain the exposed functions: `org.apache.spark.sql.sedona_sql.expressions.st_functions`, `org.apache.spark.sql.sedona_sql.expressions.st_constructors`, `org.apache.spark.sql.sedona_sql.expressions.st_predicates`, and `org.apache.spark.sql.sedona_sql.expressions.st_aggregates`.
 
-Every functions can take all `Column` arguments. Additionally, overloaded forms can commonly take a mix of `String` and other Scala types (such as `Double`) as arguments.
+Every function can take all `Column` arguments. Additionally, overloaded forms can commonly take a mix of `String` and other Scala types (such as `Double`) as arguments.
 
 In general the following rules apply (although check the documentation of specific functions for any exceptions):
 
diff --git a/docs/api/sql/Function.md b/docs/api/sql/Function.md
index 85ab6d7a4..2c208faaa 100644
--- a/docs/api/sql/Function.md
+++ b/docs/api/sql/Function.md
@@ -611,8 +611,8 @@ SELECT ST_Buffer(ST_GeomFromWKT('POINT(0 0)'), 10, false, 'quad_segs=2')
 
 Output:
 
-<img alt="Point buffer with 8 quadrant segments" src="../../../image/point-buffer-quad-8.png" width="100" height=""/>
-<img alt="Point buffer with 2 quadrant segments" src="../../../image/point-buffer-quad-2.png" width="100" height=""/>
+![Point buffer with 8 quadrant segments](../../image/point-buffer-quad-8.png "Point buffer with 8 quadrant segments"){: width="100px"}
+![Point buffer with 2 quadrant segments](../../image/point-buffer-quad-2.png "Point buffer with 2 quadrant segments"){: width="100px"}
 
 8 Segments &ensp; 2 Segments
 
@@ -624,8 +624,8 @@ SELECT ST_Buffer(ST_GeomFromWKT('LINESTRING(0 0, 50 70, 100 100)'), 10, false, '
 
 Output:
 
-<img alt="Original Linestring" src="../../../image/linestring-og.png" width="150"/>
-<img alt="Original Linestring with buffer on the left side" src="../../../image/linestring-left-side.png" width="150"/>
+![Original Linestring](../../image/linestring-og.png "Original Linestring"){: width="150px"}
+![Original Linestring with buffer on the left side](../../image/linestring-left-side.png "Original Linestring with buffer on the left side"){: width="150px"}
 
 Original Linestring &emsp; Left side buffed Linestring
 
@@ -2623,7 +2623,7 @@ Format: `ST_Snap(input: Geometry, reference: Geometry, tolerance: double)`
 
 Input geometry:
 
-<img width="250" src="../../../image/st_snap/st-snap-base-example.png" title="ST_Snap Base example"/>
+![ST_Snap base example](../../image/st_snap/st-snap-base-example.png "ST_Snap base example"){: width="250px"}
 
 SQL Example:
 
@@ -2637,7 +2637,7 @@ SELECT ST_Snap(
 
 Output:
 
-<img width="250" src="../../../image/st_snap/st-snap-applied.png" title="ST_Snap applied example"/>
+![ST_Snap applied example](../../image/st_snap/st-snap-applied.png "ST_Snap applied example"){: width="250px"}
 
 ```
 POLYGON ((236877.58 -6.61, 236878.29 -8.35, 236879.98 -8.33, 236879.72 -7.63, 236880.69 -6.81, 236877.58 -6.61), (236878.45 -7.01, 236878.43 -7.52, 236879.29 -7.5, 236878.63 -7.22, 236878.76 -6.89, 236878.45 -7.01))
diff --git a/docs/api/sql/Optimizer.md b/docs/api/sql/Optimizer.md
index 39492ff53..3a96718dc 100644
--- a/docs/api/sql/Optimizer.md
+++ b/docs/api/sql/Optimizer.md
@@ -185,7 +185,7 @@ Note: If the distance is an expression, it is only evaluated on the first argume
 
 ## Automatic broadcast index join
 
-When one table involved a spatial join query is smaller than a threshold, Sedona will automatically choose broadcast index join instead of Sedona optimized join. The current threshold is controlled by [sedona.join.autoBroadcastJoinThreshold](../Parameter) and set to the same as `spark.sql.autoBroadcastJoinThreshold`.
+When one table involved a spatial join query is smaller than a threshold, Sedona will automatically choose broadcast index join instead of Sedona optimized join. The current threshold is controlled by [sedona.join.autoBroadcastJoinThreshold](Parameter.md) and set to the same as `spark.sql.autoBroadcastJoinThreshold`.
 
 ## Raster join
 
@@ -219,7 +219,7 @@ Please use the following steps:
 
 ### 1. Generate S2 ids for both tables
 
-Use [ST_S2CellIds](../Function/#st_s2cellids) to generate cell IDs. Each geometry may produce one or more IDs.
+Use [ST_S2CellIds](Function.md#st_s2cellids) to generate cell IDs. Each geometry may produce one or more IDs.
 
 ```sql
 SELECT id, geom, name, explode(ST_S2CellIDs(geom, 15)) as cellId
@@ -244,7 +244,7 @@ FROM lcs JOIN rcs ON lcs.cellId = rcs.cellId
 
 Due to the nature of S2 Cellid, the equi-join results might have a few false-positives depending on the S2 level you choose. A smaller level indicates bigger cells, less exploded rows, but more false positives.
 
-To ensure the correctness, you can use one of the [Spatial Predicates](../Predicate/) to filter out them. Use this query instead of the query in Step 2.
+To ensure the correctness, you can use one of the [Spatial Predicates](Predicate.md) to filter out them. Use this query instead of the query in Step 2.
 
 ```sql
 SELECT lcs.id as lcs_id, lcs.geom as lcs_geom, lcs.name as lcs_name, rcs.id as rcs_id, rcs.geom as rcs_geom, rcs.name as rcs_name
@@ -325,7 +325,7 @@ Sedona supports spatial predicate push-down for GeoParquet files. When spatial f
 to determine if all data in the file will be discarded by the spatial predicate. This optimization could reduce the number of files scanned
 when the queried GeoParquet dataset was partitioned by spatial proximity.
 
-To maximize the performance of Sedona GeoParquet filter pushdown, we suggest that you sort the data by their geohash values (see [ST_GeoHash](../../api/sql/Function/#st_geohash)) and then save as a GeoParquet file. An example is as follows:
+To maximize the performance of Sedona GeoParquet filter pushdown, we suggest that you sort the data by their geohash values (see [ST_GeoHash](../../api/sql/Function.md#st_geohash)) and then save as a GeoParquet file. An example is as follows:
 
 ```
 SELECT col1, col2, geom, ST_GeoHash(geom, 5) as geohash
@@ -336,7 +336,7 @@ ORDER BY geohash
 The following figure is the visualization of a GeoParquet dataset. `bbox`es of all GeoParquet files were plotted as blue rectangles and the query window was plotted as a red rectangle. Sedona will only scan 1 of the 6 files to
 answer queries such as `SELECT * FROM geoparquet_dataset WHERE ST_Intersects(geom, <query window>)`, thus only part of the data covered by the light green rectangle needs to be scanned.
 
-![](../../image/geoparquet-pred-pushdown.png)
+![Visualization of a GeoParquet dataset](../../image/geoparquet-pred-pushdown.png "Visualization of a GeoParquet dataset")
 
 We can compare the metrics of querying the GeoParquet dataset with or without the spatial predicate and observe that querying with spatial predicate results in fewer number of rows scanned.
 
diff --git a/docs/api/sql/Overview.md b/docs/api/sql/Overview.md
index a4c577cde..52bf11345 100644
--- a/docs/api/sql/Overview.md
+++ b/docs/api/sql/Overview.md
@@ -16,20 +16,20 @@ myDataFrame.withColumn("geometry", expr("ST_*")).selectExpr("ST_*")
 
 * Constructor: Construct a Geometry given an input string or coordinates
        * Example: ST_GeomFromWKT (string). Create a Geometry from a WKT String.
-       * Documentation: [Here](../Constructor)
+       * Documentation: [Here](Constructor.md)
 * Function: Execute a function on the given column or columns
       * Example: ST_Distance (A, B). Given two Geometry A and B, return the Euclidean distance of A and B.
-       * Documentation: [Here](../Function)
+       * Documentation: [Here](Function.md)
 * Aggregate function: Return the aggregated value on the given column
       * Example: ST_Envelope_Aggr (Geometry column). Given a Geometry column, calculate the entire envelope boundary of this column.
-       * Documentation: [Here](../AggregateFunction)
+       * Documentation: [Here](AggregateFunction.md)
 * Predicate: Execute a logic judgement on the given columns and return true or false
       * Example: ST_Contains (A, B). Check if A fully contains B. Return "True" if yes, else return "False".
-       * Documentation: [Here](../Predicate)
+       * Documentation: [Here](Predicate.md)
 
 Sedona also provides an Adapter to convert SpatialRDD <-> DataFrame. Please read [Adapter Scaladoc](../../scaladoc/spark/org/apache/sedona/sql/utils/index.html)
 
-SedonaSQL supports SparkSQL query optimizer, documentation is [Here](../Optimizer)
+SedonaSQL supports SparkSQL query optimizer, documentation is [Here](Optimizer.md)
 
 ## Quick start
 
diff --git a/docs/api/sql/Raster-aggregate-function.md b/docs/api/sql/Raster-aggregate-function.md
index 2535ff075..dd72aaa13 100644
--- a/docs/api/sql/Raster-aggregate-function.md
+++ b/docs/api/sql/Raster-aggregate-function.md
@@ -3,7 +3,7 @@
 Introduction: Returns a raster containing bands by specified indexes from all rasters in the provided column. Extracts the first bands from each raster and combines them into the output raster based on the input index values.
 
 !!!Note
-    RS_Union_Aggr can take multiple banded rasters as input but it would only extract the first band to the resulting raster. RS_Union_Aggr expects the following input, if not satisfied then will throw an IllegalArgumentException:
+    RS_Union_Aggr can take multiple banded rasters as input, but it would only extract the first band to the resulting raster. RS_Union_Aggr expects the following input, if not satisfied then will throw an IllegalArgumentException:
 
     - Indexes to be in an arithmetic sequence without any gaps.
     - Indexes to be unique and not repeated.
diff --git a/docs/api/sql/Raster-map-algebra.md b/docs/api/sql/Raster-map-algebra.md
index 940fc4abb..01f1ee219 100644
--- a/docs/api/sql/Raster-map-algebra.md
+++ b/docs/api/sql/Raster-map-algebra.md
@@ -120,4 +120,4 @@ FROM raster_table) t
 ### Further Reading
 
 * [Jiffle language summary](https://github.com/geosolutions-it/jai-ext/wiki/Jiffle---language-summary)
-* [Raster operators](../Raster-operators/)
+* [Raster operators](Raster-operators.md)
diff --git a/docs/api/sql/Raster-operators.md b/docs/api/sql/Raster-operators.md
index 1a1808118..af60c7cec 100644
--- a/docs/api/sql/Raster-operators.md
+++ b/docs/api/sql/Raster-operators.md
@@ -769,7 +769,7 @@ POINT (2 1)
 ```
 
 !!!Note
-    If the given geometry point is not in the same CRS as the given raster, the given geometry will be transformed to the given raster's CRS. You can use [ST_Transform](../Function/#st_transform) to transform the geometry beforehand.
+    If the given geometry point is not in the same CRS as the given raster, the given geometry will be transformed to the given raster's CRS. You can use [ST_Transform](Function.md#st_transform) to transform the geometry beforehand.
 
 ### RS_WorldToRasterCoordX
 
@@ -1440,7 +1440,7 @@ Since: `v1.5.1`
 
 Original Raster:
 
-<img alt="Original raster" src="../../../image/original-raster-clip.png" width="400"/>
+![Original raster](../../image/original-raster-clip.png "Original raster"){: width="400px"}
 
 SQL Example
 
@@ -1454,7 +1454,7 @@ SELECT RS_Clip(
 
 Output:
 
-<img alt="Cropped raster" src="../../../image/cropped-raster.png" width="400"/>
+![Cropped raster](../../image/cropped-raster.png "Cropped raster"){: width="400px"}
 
 SQL Example
 
@@ -1468,7 +1468,7 @@ SELECT RS_Clip(
 
 Output:
 
-<img alt="Clipped raster" src="../../../image/clipped-raster.png" width="400"/>
+![Clipped raster](../../image/clipped-raster.png "Clipped raster"){: width="400px"}
 
 ### RS_Interpolate
 
@@ -1528,8 +1528,8 @@ SELECT RS_Interpolate(raster, 1, 2.0, 'Variable', 12, 1000)
 
 Output (Shown as heatmap):
 
-<img alt="Original raster" src="../../../image/heatmap_Interpolate.png" width="400"/>
-<img alt="Interpolated raster" src="../../../image/heatmap_Interpolate2.png" width="400"/>
+![Original raster](../../image/heatmap_Interpolate.png "Original raster"){: width="400px"}
+![Interpolated raster](../../image/heatmap_Interpolate2.png "Interpolated raster"){: width="400px"}
 
 ### RS_MetaData
 
@@ -2443,7 +2443,7 @@ Spark SQL Example for two raster input `RS_MapAlgebra`:
 RS_MapAlgebra(rast0, rast1, 'D', 'out = rast0[0] * 0.5 + rast1[0] * 0.5;', null)
 ```
 
-For more details and examples about `RS_MapAlgebra`, please refer to the [Map Algebra documentation](../Raster-map-algebra/).
+For more details and examples about `RS_MapAlgebra`, please refer to the [Map Algebra documentation](Raster-map-algebra.md).
 To learn how to write map algebra script, please refer to [Jiffle language summary](https://github.com/geosolutions-it/jai-ext/wiki/Jiffle---language-summary).
 
 ## Map Algebra Operators
diff --git a/docs/api/sql/Raster-visualizer.md b/docs/api/sql/Raster-visualizer.md
index c1598cd45..7b756bd9a 100644
--- a/docs/api/sql/Raster-visualizer.md
+++ b/docs/api/sql/Raster-visualizer.md
@@ -2,7 +2,7 @@ Sedona offers some APIs to aid in easy visualization of a raster object.
 
 ## Image-based visualization
 
-Sedona offers APIs to visualize a raster in an image form. This API only works for rasters with byte data, and bands <= 4 (Grayscale - RGBA). You can check the data type of an existing raster by using [RS_BandPixelType](../Raster-operators/#rs_bandpixeltype) or create your own raster by passing 'B' while using [RS_MakeEmptyRaster](../Raster-loader/#rs_makeemptyraster).
+Sedona offers APIs to visualize a raster in an image form. This API only works for rasters with byte data, and bands <= 4 (Grayscale - RGBA). You can check the data type of an existing raster by using [RS_BandPixelType](Raster-operators.md#rs_bandpixeltype) or create your own raster by passing 'B' while using [RS_MakeEmptyRaster](Raster-loader.md#rs_makeemptyraster).
 
 ### RS_AsBase64
 
diff --git a/docs/api/sql/Raster-writer.md b/docs/api/sql/Raster-writer.md
index ab324b69b..06e61f57b 100644
--- a/docs/api/sql/Raster-writer.md
+++ b/docs/api/sql/Raster-writer.md
@@ -99,7 +99,7 @@ root
 
 #### RS_AsPNG
 
-Introduction: Returns a PNG byte array, that can be written to raster files as PNGs using the [sedona function](#write-a-binary-dataframe-to-raster-files). This function can only accept pixel data type of unsigned integer. PNG can accept 1 or 3 bands of data from the raster, refer to [RS_Band](../Raster-operators/#rs_band) for more details.
+Introduction: Returns a PNG byte array, that can be written to raster files as PNGs using the [sedona function](#write-a-binary-dataframe-to-raster-files). This function can only accept pixel data type of unsigned integer. PNG can accept 1 or 3 bands of data from the raster, refer to [RS_Band](Raster-operators.md#rs_band) for more details.
 
 !!!Note
        Raster having `UNSIGNED_8BITS` pixel data type will have range of `0 - 255`, whereas rasters having `UNSIGNED_16BITS` pixel data type will have range of `0 - 65535`. If provided pixel value is greater than either `255` for `UNSIGNED_8BITS` or `65535` for `UNSIGNED_16BITS`, then the extra bit will be truncated.
diff --git a/docs/community/contact.md b/docs/community/contact.md
index 85113df46..e52335729 100644
--- a/docs/community/contact.md
+++ b/docs/community/contact.md
@@ -45,4 +45,4 @@ Before submitting an issue, please:
 
 Enhancement requests for new features are also welcome. The more concrete and rationale the request is, the greater the chance it will be incorporated into future releases.
 
-Enter an issue in the [Sedona JIRA](https://issues.apache.org/jira/projects/SEDONA) or send an email to [[email protected]](https://lists.apache.org/[email protected])
+Enter an issue in the [Sedona JIRA](https://issues.apache.org/jira/projects/SEDONA) or email to [[email protected]](https://lists.apache.org/[email protected])
diff --git a/docs/community/contributor.md b/docs/community/contributor.md
index e2b2033a1..19ad20af4 100644
--- a/docs/community/contributor.md
+++ b/docs/community/contributor.md
@@ -32,7 +32,7 @@ Current Sedona PMC members are as follows:
 
 ## Become a committer
 
-To get started contributing to Sedona, learn [how to contribute](../rule) – anyone can submit patches, documentation and examples to the project.
+To get started contributing to Sedona, learn [how to contribute](rule.md) – anyone can submit patches, documentation and examples to the project.
 
 The PMC regularly adds new committers from the active contributors, based on their contributions to Sedona. The qualifications for new committers include:
 
diff --git a/docs/community/develop.md b/docs/community/develop.md
index 64cfd15fb..5bd78c9bd 100644
--- a/docs/community/develop.md
+++ b/docs/community/develop.md
@@ -10,17 +10,17 @@ We recommend Intellij IDEA with Scala plugin installed. Please make sure that th
 
 #### Choose `Open`
 
-<img src="../../image/ide-java-1.png"/>
+![](../image/ide-java-1.png)
 
 #### Go to the Sedona root folder (not a submodule folder) and choose `open`
 
-<img src="../../image/ide-java-2.png" style="width:500px;"/>
+![](../image/ide-java-2.png){: width="500px"}
 
 #### The IDE might show errors
 
 The IDE usually has trouble understanding the complex project structure in Sedona.
 
-<img src="../../image/ide-java-4.png"/>
+![](../image/ide-java-4.png)
 
 #### Fix errors by changing pom.xml
 
@@ -39,11 +39,11 @@ You need to comment out the following lines in `pom.xml` at the root folder, as
 
 Make sure you reload the pom.xml or reload the maven project. The IDE will ask you to remove some modules. Please select `yes`.
 
-<img src="../../image/ide-java-5.png"/>
+![](../image/ide-java-5.png)
 
 #### The final project structure should be like this:
 
-<img src="../../image/ide-java-3.png" style="width:400px;"/>
+![](../image/ide-java-3.png){: width="400px"}
 
 ### Run unit tests
 
@@ -54,35 +54,35 @@ In a terminal, go to the Sedona root folder. Run `mvn clean install`. All tests
     `mvn clean install` will compile Sedona with Spark 3.0 and Scala 2.12. If you have a different version of Spark in $SPARK_HOME, make sure to specify that using -Dspark command line arg.
     For example, to compile sedona with Spark 3.4 and Scala 2.12, use: `mvn clean install -Dspark=3.4 -Dscala=2.12`
 
-More details can be found on [Compile Sedona](../../setup/compile/)
+More details can be found on [Compile Sedona](../setup/compile.md)
 
 #### Run a single unit test
 
 In the IDE, right-click a test case and run this test case.
 
-<img src="../../image/ide-java-6.png" style="width:400px;"/>
+![](../image/ide-java-6.png){: width="400px"}
 
 The IDE might tell you that the PATH does not exist as follows:
 
-<img src="../../image/ide-java-7.png" style="width:600px;"/>
+![](../image/ide-java-7.png){: width="600px"}
 
 Go to `Edit Configuration`
 
-<img src="../../image/ide-java-8.png"/>
+![](../image/ide-java-8.png)
 
 Append the submodule folder to `Working Directory`. For example, `sedona/sql`.
 
-<img src="../../image/ide-java-9.png"/>
+![](../image/ide-java-9.png)
 
 Re-run the test case. Do NOT right click the test case to re-run. Instead, click the button as shown in the figure below.
 
-<img src="../../image/ide-java-10.png"/>
+![](../image/ide-java-10.png)
 
 ## Python developers
 
 #### Run all python tests
 
-To run all Python test cases, follow steps mentioned [here](../../setup/compile/#run-python-test).
+To run all Python test cases, follow steps mentioned [here](../setup/compile.md#run-python-test).
 
 #### Run all python tests in a single test file
 
diff --git a/docs/community/publish.md b/docs/community/publish.md
index 69d95cb10..e4a2f980d 100644
--- a/docs/community/publish.md
+++ b/docs/community/publish.md
@@ -400,8 +400,8 @@ Then submit to CRAN using this [web form](https://xmpalantir.wu.ac.at/cransubmit
 ### Prepare the environment and doc folder
 
 1. Check out the {{ sedona_create_release.current_version }} Git tag on your local repo.
-2. Read [Compile documentation website](../../setup/compile) to set up your environment. But don't deploy anything yet.
-3. Add the download link to [Download page](../../download).
+2. Read [Compile documentation website](../setup/compile.md) to set up your environment. But don't deploy anything yet.
+3. Add the download link to [Download page](../download.md).
 4. Add the news to `docs/index.md`.
 
 ### Generate Javadoc and Scaladoc
diff --git a/docs/setup/flink/install-scala.md b/docs/setup/flink/install-scala.md
index 554e8e7cf..820db46c8 100644
--- a/docs/setup/flink/install-scala.md
+++ b/docs/setup/flink/install-scala.md
@@ -4,8 +4,8 @@ Then you can create a self-contained Scala / Java project. A self-contained proj
 
 To use Sedona in your self-contained Flink project, you just need to add Sedona as a dependency in your pom.xml or build.sbt.
 
-1. To add Sedona as dependencies, please read [Sedona Maven Central coordinates](../../maven-coordinates)
-2. Read [Sedona Flink guide](../../../tutorial/flink/sql) and use Sedona Template project to start: [Sedona Template Project](../../../tutorial/demo/)
+1. To add Sedona as dependencies, please read [Sedona Maven Central coordinates](../maven-coordinates.md)
+2. Read [Sedona Flink guide](../../tutorial/flink/sql.md) and use Sedona Template project to start: [Sedona Template Project](../../tutorial/demo.md)
 3. Compile your project using Maven. Make sure you obtain the fat jar which packages all dependencies.
 4. Submit your compiled fat jar to Flink cluster. Make sure you are in the root folder of Flink distribution. Then run the following command:
 
diff --git a/docs/setup/install-python.md b/docs/setup/install-python.md
index 70d195135..94f8b9c2d 100644
--- a/docs/setup/install-python.md
+++ b/docs/setup/install-python.md
@@ -10,7 +10,7 @@ You need to install necessary packages if your system does not have them install
 
 ### Install sedona
 
-* Installing from PyPI repositories. You can find the latest Sedona Python on [PyPI](https://pypi.org/project/apache-sedona/). [There is an known issue in Sedona v1.0.1 and earlier versions](../release-notes/#known-issue).
+* Installing from PyPI repositories. You can find the latest Sedona Python on [PyPI](https://pypi.org/project/apache-sedona/). [There is an known issue in Sedona v1.0.1 and earlier versions](release-notes.md#known-issue).
 
 ```bash
 pip install apache-sedona
@@ -41,7 +41,7 @@ Sedona Python needs one additional jar file called `sedona-spark-shaded` or `sed
 You can get it using one of the following methods:
 
 1. If you run Sedona in Databricks, AWS EMR, or other cloud platform's notebook, use the `shaded jar`: Download [sedona-spark-shaded jar](https://repo.maven.apache.org/maven2/org/apache/sedona/) and [geotools-wrapper jar](https://repo.maven.apache.org/maven2/org/datasyslab/geotools-wrapper/) from Maven Central, and put them in SPARK_HOME/jars/ folder.
-2. If you run Sedona in an IDE or a local Jupyter notebook, use the `unshaded jar`. Call the [Maven Central coordinate](../maven-coordinates) in your python program. For example,
+2. If you run Sedona in an IDE or a local Jupyter notebook, use the `unshaded jar`. Call the [Maven Central coordinate](maven-coordinates.md) in your python program. For example,
 ==Sedona >= 1.4.1==
 
 ```python
@@ -91,4 +91,4 @@ export SPARK_HOME=~/Downloads/spark-3.0.1-bin-hadoop2.7
 export PYTHONPATH=$SPARK_HOME/python
 ```
 
-You can then play with [Sedona Python Jupyter notebook](../../tutorial/jupyter-notebook/).
+You can then play with [Sedona Python Jupyter notebook](../tutorial/jupyter-notebook.md).
diff --git a/docs/setup/install-scala.md b/docs/setup/install-scala.md
index 4289d3d31..83bca4b10 100644
--- a/docs/setup/install-scala.md
+++ b/docs/setup/install-scala.md
@@ -35,7 +35,7 @@ Please refer to [Sedona Maven Central coordinates](maven-coordinates.md) to sele
 
 2. Download Sedona jars:
        * Download the pre-compiled jars from [Sedona Releases](../download.md)
-       * Download / Git clone Sedona source code and compile the code by yourself (see [Compile Sedona](../compile))
+       * Download / Git clone Sedona source code and compile the code by yourself (see [Compile Sedona](compile.md))
 3. Run Spark shell with `--jars` option.
 
 ```
@@ -56,14 +56,14 @@ If you are using Spark 3.0 to 3.3, please use jars with filenames containing `3.
 
 ## Spark SQL shell
 
-Please see [Use Sedona in a pure SQL environment](../../tutorial/sql-pure-sql/)
+Please see [Use Sedona in a pure SQL environment](../tutorial/sql-pure-sql.md)
 
 ## Self-contained Spark projects
 
 A self-contained project allows you to create multiple Scala / Java files and write complex logics in one place. To use Sedona in your self-contained Spark project, you just need to add Sedona as a dependency in your pom.xml or build.sbt.
 
 1. To add Sedona as dependencies, please read [Sedona Maven Central coordinates](maven-coordinates.md)
-2. Use Sedona Template project to start: [Sedona Template Project](../../tutorial/demo/)
+2. Use Sedona Template project to start: [Sedona Template Project](../tutorial/demo.md)
 3. Compile your project using SBT. Make sure you obtain the fat jar which packages all dependencies.
 4. Submit your compiled fat jar to Spark cluster. Make sure you are in the root folder of Spark distribution. Then run the following command:
 
diff --git a/docs/setup/maven-coordinates.md b/docs/setup/maven-coordinates.md
index 1d27f5d56..bbffa9383 100644
--- a/docs/setup/maven-coordinates.md
+++ b/docs/setup/maven-coordinates.md
@@ -209,7 +209,7 @@ Apache Sedona provides different packages for each 
supported version of Spark.
 
 If you are using the Scala 2.13 builds of Spark, please use the corresponding 
packages for Scala 2.13, which are suffixed by `_2.13`.
 
-The optional GeoTools library is required if you want to use CRS 
transformation, ShapefileReader or GeoTiff reader. This wrapper library is a 
re-distribution of GeoTools official jars. The only purpose of this library is 
to bring GeoTools jars from OSGEO repository to Maven Central. This library is 
under GNU Lesser General Public License (LGPL) license so we cannot package it 
in Sedona official release.
+The optional GeoTools library is required if you want to use CRS transformation, ShapefileReader or GeoTiff reader. This wrapper library is a re-distribution of GeoTools official jars. The only purpose of this library is to bring GeoTools jars from the OSGEO repository to Maven Central. This library is under the GNU Lesser General Public License (LGPL), so we cannot package it in the Sedona official release.
 
 !!! abstract "Sedona with Apache Spark and Scala 2.12"
 
diff --git a/docs/setup/overview.md b/docs/setup/overview.md
index 02cc30b2a..1d297f443 100644
--- a/docs/setup/overview.md
+++ b/docs/setup/overview.md
@@ -32,5 +32,5 @@
 - [x] Apache Zeppelin dashboard integration
 - [X] Integrate with a variety of Python tools including Jupyter notebook, 
GeoPandas, Shapely
 - [X] Integrate with a variety of visualization tools including KeplerGL, 
DeckGL
-- [x] High resolution and scalable map generation: [Visualize Spatial 
DataFrame/RDD](../../tutorial/viz)
+- [x] High resolution and scalable map generation: [Visualize Spatial 
DataFrame/RDD](../tutorial/viz.md)
 - [x] Support Scala, Java, Python, R
diff --git a/docs/setup/release-notes.md b/docs/setup/release-notes.md
index ae587d85d..35eb0f1c0 100644
--- a/docs/setup/release-notes.md
+++ b/docs/setup/release-notes.md
@@ -140,6 +140,7 @@ Sedona 1.5.1 is compiled against Spark 3.3 / Spark 3.4 / 
Spark 3.5, Flink 1.12,
 
 ### Test
 
+<ul>
 <li>[<a 
href='https://issues.apache.org/jira/browse/SEDONA-410'>SEDONA-410</a>] -       
  pre-commit: check that scripts with shebangs are executable
 </li>
 <li>[<a 
href='https://issues.apache.org/jira/browse/SEDONA-412'>SEDONA-412</a>] -       
  pre-commit: add hook `end-of-file-fixer`
@@ -192,12 +193,12 @@ Sedona 1.5.0 is compiled against Spark 3.3 / Spark 3.4 / 
Flink 1.12, Java 8.
 **New features**
 
 * Add 18 more ST functions for vector data processing in Sedona Spark and 
Sedona Flink
-* Add 36 more RS functions in Sedona Spark to support [comprehensive raster 
data ETL and analytics](../../tutorial/raster/)
+* Add 36 more RS functions in Sedona Spark to support [comprehensive raster 
data ETL and analytics](../tutorial/raster.md)
        * You can now directly join vector and raster datasets together
       * Flexible map algebra equations: `SELECT RS_MapAlgebra(rast, 'D', 'out = (rast[3] - rast[0]) / (rast[3] + rast[0]);') as ndvi FROM raster_table`
-* Add native support of [Uber H3 
functions](../../api/sql/Function/#st_h3celldistance) in Sedona Spark and 
Sedona Flink.
-* Add SedonaKepler and SedonaPyDeck for [interactive map 
visualization](../../tutorial/sql/#visualize-query-results) on Sedona Spark.
+* Add native support of [Uber H3 
functions](../api/sql/Function.md#st_h3celldistance) in Sedona Spark and Sedona 
Flink.
+* Add SedonaKepler and SedonaPyDeck for [interactive map 
visualization](../tutorial/sql.md#visualize-query-results) on Sedona Spark.
 
 ### Bug
 
@@ -429,7 +430,7 @@ Sedona 1.4.1 is compiled against Spark 3.3 / Spark 3.4 / 
Flink 1.12, Java 8.
 
 ### Highlights
 
-* [X] **Sedona Spark** More raster functions and bridge RasterUDT and Map 
Algebra operators. See [Raster based 
operators](../../api/sql/Raster-operators/#raster-based-operators) and [Raster 
to Map Algebra 
operators](../../api/sql/Raster-operators/#raster-to-map-algebra-operators).
+* [X] **Sedona Spark** More raster functions and a bridge between RasterUDT and Map Algebra operators. See [Raster based operators](../api/sql/Raster-operators.md#raster-based-operators) and [Raster to Map Algebra operators](../api/sql/Raster-operators.md#raster-to-map-algebra-operators).
 * [X] **Sedona Spark & Flink** Added geodesic / geography functions:
     * ST_DistanceSphere
     * ST_DistanceSpheroid
@@ -550,15 +551,15 @@ Sedona 1.4.0 is compiled against, Spark 3.3 / Flink 1.12, 
Java 8.
 ### Highlights
 
 * [X] **Sedona Spark & Flink** Serialize and deserialize geometries 3 - 7X 
faster
-* [X] **Sedona Spark & Flink** Google S2 based spatial join for fast 
approximate point-in-polygon join. See [Join query in 
Spark](../../api/sql/Optimizer/#google-s2-based-approximate-equi-join) and 
[Join query in Flink](../../tutorial/flink/sql/#join-query)
-* [X] **Sedona Spark** Pushdown spatial predicate on GeoParquet to reduce 
memory consumption by 10X: see 
[explanation](../../api/sql/Optimizer/#geoparquet)
+* [X] **Sedona Spark & Flink** Google S2 based spatial join for fast 
approximate point-in-polygon join. See [Join query in 
Spark](../api/sql/Optimizer.md#google-s2-based-approximate-equi-join) and [Join 
query in Flink](../tutorial/flink/sql.md#join-query)
+* [X] **Sedona Spark** Pushdown spatial predicate on GeoParquet to reduce memory consumption by 10X: see [explanation](../api/sql/Optimizer.md#push-spatial-predicates-to-geoparquet)
 * [X] **Sedona Spark** Automatically use broadcast index spatial join for 
small datasets
 * [X] **Sedona Spark** New RasterUDT added to Sedona GeoTiff reader.
 * [X] **Sedona Spark** A number of bug fixes and improvement to the Sedona R 
module.
 
 ### API change
 
-* **Sedona Spark & Flink** Packaging strategy changed. See [Maven 
Coordinate](../maven-coordinates). Please change your Sedona dependencies if 
needed. We recommend `sedona-spark-shaded-3.0_2.12-1.4.0` and 
`sedona-flink-shaded_2.12-1.4.0`
+* **Sedona Spark & Flink** Packaging strategy changed. See [Maven 
Coordinate](maven-coordinates.md). Please change your Sedona dependencies if 
needed. We recommend `sedona-spark-shaded-3.0_2.12-1.4.0` and 
`sedona-flink-shaded_2.12-1.4.0`
 * **Sedona Spark & Flink** GeoTools-wrapper version upgraded. Please use 
`geotools-wrapper-1.4.0-28.2`.
 
 ### Behavior change
@@ -701,7 +702,7 @@ This version is a major release on Sedona 1.3.0 line and 
consists of 50 PRs. It
 * [X] Native GeoParquet read and write (../../tutorial/sql/#load-geoparquet).
     * `df = spark.read.format("geoparquet").option("fieldGeometry", 
"myGeometryColumn").load("PATH/TO/MYFILE.parquet")`
     * `df.write.format("geoparquet").save("PATH/TO/MYFILE.parquet")`
-* [X] DataFrame style API (../../tutorial/sql/#dataframe-style-api)
+* [X] DataFrame style API (../tutorial/sql.md#dataframe-style-api)
     * `df.select(ST_Point(min_value, max_value).as("point"))`
 * [X] Allow WKT format CRS in ST_Transform
     * `ST_Transform(geom, "srcWktString", "tgtWktString")`
@@ -1040,12 +1041,12 @@ Key dependency upgrade:
 
 Key dependency packaging strategy change:
 
-* JTS, GeoTools, jts2geojson are no longer packaged in Sedona jars. End users 
need to add them manually. See [here](../maven-coordinates).
+* JTS, GeoTools, jts2geojson are no longer packaged in Sedona jars. End users 
need to add them manually. See [here](maven-coordinates.md).
 
 Key compilation target change:
 
 * [SEDONA-3](https://issues.apache.org/jira/browse/SEDONA-3): Paths and class 
names have been changed to Apache Sedona
-* [SEDONA-7](https://issues.apache.org/jira/browse/SEDONA-7): build the source 
code for Spark 2.4, 3.0, Scala 2.11, 2.12, Python 3.7, 3.8, 3.9. See 
[here](../compile).
+* [SEDONA-7](https://issues.apache.org/jira/browse/SEDONA-7): build the source 
code for Spark 2.4, 3.0, Scala 2.11, 2.12, Python 3.7, 3.8, 3.9. See 
[here](compile.md).
 
 ### Sedona-core
 
@@ -1093,7 +1094,7 @@ API change: Drop the function which can generate SVG 
vector images because the r
 
 API/Behavior change:
 
-* Python-to-Sedona adapter is moved to a separate module. To use Sedona 
Python, see [here](../overview/#prepare-python-adapter-jar)
+* Python-to-Sedona adapter is moved to a separate module. To use Sedona 
Python, see [here](install-python.md)
 
 New function:
 
diff --git a/docs/setup/snowflake/install.md b/docs/setup/snowflake/install.md
index 0e83b61db..0a7e82010 100644
--- a/docs/setup/snowflake/install.md
+++ b/docs/setup/snowflake/install.md
@@ -20,11 +20,11 @@ A stage is a Snowflake object that maps to a location in a 
cloud storage provide
 
 In this case, we will create a stage named `ApacheSedona` in the `public` 
schema of the database created in the previous step. The stage will be used to 
load Sedona's JAR files into the database. We will choose a `Snowflake managed` 
stage.
 
-<img src="./../../../image/snowflake/snowflake-1.png">
+![](../../image/snowflake/snowflake-1.png)
 
 After creating the stage, you should be able to see the stage in the database.
 
-<img src="./../../../image/snowflake/snowflake-2.png">
+![](../../image/snowflake/snowflake-2.png)
 
 You can refer to [Snowflake Documentation](https://docs.snowflake.com/en/sql-reference/sql/create-stage.html) to learn how to create a stage.
 
@@ -39,7 +39,7 @@ Then you can upload the 2 JAR files to the stage created in 
the previous step.
 
 After uploading the 2 JAR files, you should be able to see the 2 JAR files in 
the stage.
 
-<img src="./../../../image/snowflake/snowflake-3.png">
+![](../../image/snowflake/snowflake-3.png)
 
 You can refer to [Snowflake Documentation](https://docs.snowflake.com/en/sql-reference/sql/put.html) to learn how to upload files to a stage.
 
@@ -49,17 +49,17 @@ A schema is a Snowflake object that maps to a database. You 
can use a schema to
 
 In this case, we will create a schema named `SEDONA` in the database created 
in the previous step. The schema will be used to create Sedona's functions.
 
-<img src="./../../../image/snowflake/snowflake-4.png">
+![](../../image/snowflake/snowflake-4.png)
 
 You can find your schema in the database as follows:
 
-<img src="./../../../image/snowflake/snowflake-5.png">
+![](../../image/snowflake/snowflake-5.png)
 
 You can refer to [Snowflake Documentation](https://docs.snowflake.com/en/sql-reference/sql/create-schema.html) to learn how to create a schema.
 
 ## Step 4: Get the SQL script for creating Sedona's functions
 
-You will need to download 
[sedona-snowflake.sql](./../../../image/snowflake/sedona-snowflake.sql) to 
create Sedona's functions in the schema created in the previous step.
+You will need to download 
[sedona-snowflake.sql](../../image/snowflake/sedona-snowflake.sql) to create 
Sedona's functions in the schema created in the previous step.
 
 You can also get this SQL script by running the following command:
 
@@ -75,11 +75,11 @@ We will create a worksheet in the database created in the 
previous step, and run
 
 In this case, we will choose the option `Create Worksheet from SQL File`.
 
-<img src="./../../../image/snowflake/snowflake-6.png">
+![](../../image/snowflake/snowflake-6.png)
 
 In the worksheet, choose `SEDONA_TEST` as the database, and `PUBLIC` as the schema. The SQL script should be in the worksheet. Then right-click the worksheet and choose `Run All`. Snowflake will take 3 minutes to create Sedona's functions.
 
-<img src="./../../../image/snowflake/snowflake-7.png">
+![](../../image/snowflake/snowflake-7.png)
 
 ## Step 6: Verify the installation
 
@@ -97,4 +97,4 @@ SRID=4326;POINT (1 2)
 
 The worksheet should look like this:
 
-<img src="./../../../image/snowflake/snowflake-8.png">
+![](../../image/snowflake/snowflake-8.png)
diff --git a/docs/setup/zeppelin.md b/docs/setup/zeppelin.md
index ea96f563a..97d866e54 100644
--- a/docs/setup/zeppelin.md
+++ b/docs/setup/zeppelin.md
@@ -1,7 +1,7 @@
 # Install Sedona-Zeppelin
 
 !!!warning
-       **Known issue**: due to an issue in Leaflet JS, Sedona can only plot 
each geometry (point, line string and polygon) as a point on Zeppelin map. To 
enjoy the scalable and full-fleged visualization, please use SedonaViz to plot 
scatter plots and heat maps on Zeppelin map.
+       **Known issue**: due to an issue in Leaflet JS, Sedona can only plot each geometry (point, line string and polygon) as a point on the Zeppelin map. To enjoy scalable and full-fledged visualization, please use SedonaViz to plot scatter plots and heat maps on the Zeppelin map.
 
 ## Compatibility
 
diff --git a/docs/tutorial/Advanced-Tutorial-Tune-your-Application.md 
b/docs/tutorial/Advanced-Tutorial-Tune-your-Application.md
index 4517e5240..d9b89d13e 100644
--- a/docs/tutorial/Advanced-Tutorial-Tune-your-Application.md
+++ b/docs/tutorial/Advanced-Tutorial-Tune-your-Application.md
@@ -8,7 +8,7 @@ The versions of Sedona have three levels: X.X.X (i.e., 0.8.1)
 
 The first level means that this version contains big structure redesign which 
may bring big changes in APIs and performance.
 
-The second level (i.e., 0.8) indicates that this version contains significant 
performance enhancement, big new features and API changes. An old Sedona user 
who wants to pick this version needs to be careful about the API changes. 
Before you move to this version, please read [Sedona version release 
notes](../../setup/release-notes/) and make sure you are ready to accept the 
API changes.
+The second level (e.g., 0.8) indicates that this version contains significant performance enhancements, big new features and API changes. An old Sedona user who wants to pick this version needs to be careful about the API changes. Before you move to this version, please read the [Sedona version release notes](../setup/release-notes.md) and make sure you are ready to accept the API changes.
 
 The third level (e.g., 0.8.1) indicates that this version only contains bug fixes, some small new features and slight performance enhancements. This version will not contain any API changes. Moving to this version is safe. We highly suggest that all Sedona users staying at the same level move to the latest version in this level.
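The three-level rule described above can be sketched as a tiny helper; the function name and version-string format here are hypothetical, purely for illustration:

```python
# Hypothetical helper illustrating the three-level version scheme above:
# bumps in the first two levels may change APIs, while a bump in the
# third (bug-fix) level is always safe to take.
def upgrade_is_safe(current: str, target: str) -> bool:
    """Return True when moving from `current` to `target` keeps the APIs."""
    cur = [int(part) for part in current.split(".")]
    tgt = [int(part) for part in target.split(".")]
    return cur[:2] == tgt[:2]  # only the bug-fix level may differ

print(upgrade_is_safe("0.8.0", "0.8.1"))  # True
print(upgrade_is_safe("0.8.1", "0.9.0"))  # False
```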
 
diff --git a/docs/tutorial/flink/sql.md b/docs/tutorial/flink/sql.md
index ca98e9eb7..d6e3d1408 100644
--- a/docs/tutorial/flink/sql.md
+++ b/docs/tutorial/flink/sql.md
@@ -6,14 +6,14 @@ SedonaSQL supports SQL/MM Part3 Spatial SQL Standard. It 
includes four kinds of
 Table myTable = tableEnv.sqlQuery("YOUR_SQL")
 ```
 
-Detailed SedonaSQL APIs are available here: [SedonaSQL 
API](../../../api/flink/Overview)
+Detailed SedonaSQL APIs are available here: [SedonaSQL 
API](../../api/flink/Overview.md)
 
 ## Set up dependencies
 
-1. Read [Sedona Maven Central coordinates](../../../setup/maven-coordinates)
+1. Read [Sedona Maven Central coordinates](../../setup/maven-coordinates.md)
 2. Add Sedona dependencies in build.sbt or pom.xml.
 3. Add [Flink 
dependencies](https://nightlies.apache.org/flink/flink-docs-master/docs/dev/configuration/overview/)
 in build.sbt or pom.xml.
-4. Please see [SQL example project](../../demo/)
+4. Please see [SQL example project](../demo.md)
 
 ## Initiate Stream Environment
 
@@ -119,7 +119,7 @@ The output will be like this:
 ```
 
 !!!note
-       SedonaSQL provides lots of functions to create a Geometry column, 
please read [SedonaSQL constructor API](../../../api/flink/Constructor).
+       SedonaSQL provides lots of functions to create a Geometry column. Please read [SedonaSQL constructor API](../../api/flink/Constructor.md).
 
 ## Transform the Coordinate Reference System
 
@@ -139,7 +139,7 @@ The second EPSG code EPSG:3857 in `ST_Transform` is the 
target CRS of the geomet
 This `ST_Transform` transforms the CRS of these geometries from EPSG:4326 to EPSG:3857. The detailed CRS information can be found on [EPSG.io](https://epsg.io/).
 
 !!!note
-       Read [SedonaSQL ST_Transform 
API](../../../api/flink/Function/#st_transform) to learn different spatial 
query predicates.
+       Read [SedonaSQL ST_Transform API](../../api/flink/Function.md#st_transform) to learn more about coordinate transformation.
 
 For example, a Table that has coordinates in the US will become like this.
 
@@ -200,7 +200,7 @@ geomTable.execute().print()
 ```
 
 !!!note
-       Read [SedonaSQL Predicate API](../../../api/flink/Predicate) to learn 
different spatial query predicates.
+       Read [SedonaSQL Predicate API](../../api/flink/Predicate.md) to learn 
different spatial query predicates.
 
 ## KNN query
 
@@ -221,13 +221,13 @@ geomTable.execute().print()
 
 ## Join query
 
-This equi-join leverages Flink's internal equi-join algorithm. You can opt to 
skip the Sedona refinement step  by sacrificing query accuracy. A running 
example is in [SQL example project](../../demo/).
+This equi-join leverages Flink's internal equi-join algorithm. You can opt to skip the Sedona refinement step by sacrificing query accuracy. A running example is in [SQL example project](../demo.md).
 
 Please use the following steps:
 
 ### 1. Generate S2 ids for both tables
 
-Use [ST_S2CellIds](../../../api/flink/Function/#st_s2cellids) to generate cell 
IDs. Each geometry may produce one or more IDs.
+Use [ST_S2CellIds](../../api/flink/Function.md#st_s2cellids) to generate cell 
IDs. Each geometry may produce one or more IDs.
 
 ```sql
 SELECT id, geom, name, ST_S2CellIDs(geom, 15) as idarray
@@ -241,7 +241,7 @@ FROM rights
 
 ### 2. Explode id array
 
-The produced S2 ids are arrays of integers. We need to explode these Ids to 
multiple rows so later we can join two tables by ids.
+The produced S2 ids are arrays of integers. We need to explode these ids into multiple rows, so later we can join the two tables by id.
 
 ```
 SELECT id, geom, name, cellId
@@ -266,7 +266,7 @@ FROM lcs JOIN rcs ON lcs.cellId = rcs.cellId
 
 Due to the nature of S2 cell ids, the equi-join results might have a few false positives depending on the S2 level you choose. A smaller level indicates bigger cells and fewer exploded rows, but more false positives.
 
-To ensure the correctness, you can use one of the [Spatial 
Predicates](../../../api/Predicate/) to filter out them. Use this query as the 
query in Step 3.
+To ensure correctness, you can use one of the [Spatial Predicates](../../api/sql/Predicate.md) to filter them out. Use this query as the query in Step 3.
 
 ```sql
 SELECT lcs.id as lcs_id, lcs.geom as lcs_geom, lcs.name as lcs_name, rcs.id as 
rcs_id, rcs.geom as rcs_geom, rcs.name as rcs_name
@@ -279,7 +279,7 @@ As you see, compared to the query in Step 2, we added one 
more filter, which is
 !!!tip
        You can skip this step if you don't need 100% accuracy and want faster 
query speed.
 
-### 5. Optional: De-duplcate
+### 5. Optional: De-duplicate
 
 Due to the explode function used when we generate S2 Cell Ids, the resulting DataFrame may have several duplicate `<lcs_geom, rcs_geom>` matches. You can remove them by performing a GroupBy query.
 
@@ -300,7 +300,7 @@ GROUP BY (lcs_geom, rcs_geom)
 ```
 
 !!!note
-       If you are doing point-in-polygon join, this is not a problem and you 
can safely discard this issue. This issue only happens when you do 
polygon-polygon, polygon-linestring, linestring-linestring join.
+       If you are doing a point-in-polygon join, this is not a problem, and you can safely ignore this issue. It only happens when you do polygon-polygon, polygon-linestring, or linestring-linestring joins.
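The steps above (generate covering cell ids, explode them into rows, equi-join on cell id, then de-duplicate) can be sketched outside Flink with a toy square grid; `cover_cells` and `grid_equi_join` below are hypothetical stand-ins for `ST_S2CellIDs` and the SQL join, not Sedona APIs:

```python
# Conceptual sketch of the explode-and-equi-join pattern, using a plain
# square grid in place of S2 cells. Geometries are simplified to
# (minx, miny, maxx, maxy) bounding boxes.
def cover_cells(bbox, size=1.0):
    """Return the set of grid cell ids overlapping a bounding box."""
    minx, miny, maxx, maxy = bbox
    cells = set()
    cx = int(minx // size)
    while cx <= int(maxx // size):
        cy = int(miny // size)
        while cy <= int(maxy // size):
            cells.add((cx, cy))
            cy += 1
        cx += 1
    return cells

def grid_equi_join(lefts, rights):
    """Join two {id: bbox} tables on shared cell ids, then de-duplicate."""
    # Steps 1-2: generate and "explode" cell ids into (cell, id) rows.
    lrows = [(c, i) for i, b in lefts.items() for c in cover_cells(b)]
    rrows = [(c, i) for i, b in rights.items() for c in cover_cells(b)]
    # Step 3: equi-join on cell id; the set acts like the Step 5 GROUP BY,
    # collapsing pairs matched through several shared cells.
    rindex = {}
    for c, i in rrows:
        rindex.setdefault(c, []).append(i)
    return {(li, ri) for c, li in lrows for ri in rindex.get(c, [])}

pairs = grid_equi_join({"a": (0.2, 0.2, 0.8, 0.8)},
                       {"x": (0.5, 0.5, 1.5, 1.5), "y": (5, 5, 6, 6)})
print(sorted(pairs))  # [('a', 'x')]
```

A shared cell only means the two geometries may intersect; as in Step 4 above, an exact spatial predicate would still be applied afterwards to drop false positives.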
 
 ### S2 for distance join
 
@@ -358,7 +358,7 @@ The output will be
 
 ### Store non-spatial attributes in Geometries
 
-You can concatenate other non-spatial attributes and store them in Geometry's 
`userData` field so you can recover them later on. `userData` field can be any 
object type.
+You can concatenate other non-spatial attributes and store them in Geometry's `userData` field, so you can recover them later on. The `userData` field can be any object type.
 
 ```java
 import org.locationtech.jts.geom.Geometry;
diff --git a/docs/tutorial/geopandas-shapely.md 
b/docs/tutorial/geopandas-shapely.md
index eec4cbe58..b40ecfe5e 100644
--- a/docs/tutorial/geopandas-shapely.md
+++ b/docs/tutorial/geopandas-shapely.md
@@ -110,7 +110,7 @@ gdf.plot(
 
 To create a Spark DataFrame based on the mentioned Geometry types, please use <b>GeometryType</b> from the <b>sedona.sql.types</b> module. Conversion works for a list or tuple of shapely objects.
 
-Schema for target table with integer id and geometry type can be defined as 
follow:
+Schema for target table with integer id and geometry type can be defined as 
follows:
 
 ```python
 
@@ -127,7 +127,7 @@ schema = StructType(
 
 ```
 
-Also Spark DataFrame with geometry type can be converted to list of shapely 
objects with <b> collect </b> method.
+Also, a Spark DataFrame with geometry type can be converted to a list of shapely objects with the <b>collect</b> method.
 
 ### Point example
 
@@ -339,7 +339,7 @@ gdf.show(1, False)
 
 ```
 
-### GeomeryCollection example
+### GeometryCollection example
 
 ```python3
 
diff --git a/docs/tutorial/jupyter-notebook.md 
b/docs/tutorial/jupyter-notebook.md
index 24c4970de..b243b441d 100644
--- a/docs/tutorial/jupyter-notebook.md
+++ b/docs/tutorial/jupyter-notebook.md
@@ -7,8 +7,8 @@ Sedona Python provides a number of [Jupyter Notebook 
examples](https://github.co
 Please use the following steps to run Jupyter notebook with Pipenv on your 
machine
 
 1. Clone Sedona GitHub repo or download the source code
-2. Install Sedona Python from PyPI or GitHub source: Read [Install Sedona 
Python](../../setup/install-python/#install-sedona) to learn.
-3. Prepare spark-shaded jar: Read [Install Sedona 
Python](../../setup/install-python/#prepare-spark-shaded-jar) to learn.
+2. Install Sedona Python from PyPI or GitHub source: Read [Install Sedona 
Python](../setup/install-python.md#install-sedona) to learn.
+3. Prepare spark-shaded jar: Read [Install Sedona 
Python](../setup/install-python.md#prepare-sedona-spark-jar) to learn.
 4. Set up the pipenv Python version. Please use your desired Python version.
 
 ```bash
@@ -36,6 +36,6 @@ pipenv shell
 python -m ipykernel install --user --name=apache-sedona
 ```
 
-8. Setup environment variables `SPARK_HOME` and `PYTHONPATH` if you didn't do 
it before. Read [Install Sedona 
Python](../../setup/install-python/#setup-environment-variables) to learn.
+8. Setup environment variables `SPARK_HOME` and `PYTHONPATH` if you didn't do it before. Read [Install Sedona Python](../setup/install-python.md#setup-environment-variables) to learn.
 9. Launch jupyter notebook: `jupyter notebook`
 10. Select Sedona notebook. In your notebook, Kernel -> Change Kernel. Your 
kernel should now be an option.
diff --git a/docs/tutorial/raster.md b/docs/tutorial/raster.md
index cfc64bc9c..6827bef90 100644
--- a/docs/tutorial/raster.md
+++ b/docs/tutorial/raster.md
@@ -1,5 +1,5 @@
 !!!note
-    Sedona uses 1-based indexing for all raster functions except [map algebra 
function](../../api/sql/Raster-map-algebra), which uses 0-based indexing.
+    Sedona uses 1-based indexing for all raster functions except [map algebra 
function](../api/sql/Raster-map-algebra.md), which uses 0-based indexing.
 
 !!!note
     Since v`1.5.0`, Sedona assumes geographic coordinates to be in 
longitude/latitude order. If your data is lat/lon order, please use 
`ST_FlipCoordinates` to swap X and Y.
@@ -29,7 +29,7 @@ This page outlines the steps to manage raster data using 
SedonaSQL.
        myDataFrame.createOrReplaceTempView("rasterDf")
        ```
 
-Detailed SedonaSQL APIs are available here: [SedonaSQL 
API](../../api/sql/Overview). You can find example raster data in [Sedona 
GitHub 
repo](https://github.com/apache/sedona/blob/0eae42576c2588fe278f75cef3b17fee600eac90/spark/common/src/test/resources/raster/raster_with_no_data/test5.tiff).
+Detailed SedonaSQL APIs are available here: [SedonaSQL 
API](../api/sql/Overview.md). You can find example raster data in [Sedona 
GitHub 
repo](https://github.com/apache/sedona/blob/0eae42576c2588fe278f75cef3b17fee600eac90/spark/common/src/test/resources/raster/raster_with_no_data/test5.tiff).
 
 ## Set up dependencies
 
@@ -37,12 +37,12 @@ Detailed SedonaSQL APIs are available here: [SedonaSQL 
API](../../api/sql/Overvi
 
        1. Read [Sedona Maven Central 
coordinates](../setup/maven-coordinates.md) and add Sedona dependencies in 
build.sbt or pom.xml.
        2. Add [Apache Spark 
core](https://mvnrepository.com/artifact/org.apache.spark/spark-core), [Apache 
SparkSQL](https://mvnrepository.com/artifact/org.apache.spark/spark-sql) in 
build.sbt or pom.xml.
-       3. Please see [SQL example project](../demo/)
+       3. Please see [SQL example project](demo.md)
 
 === "Python"
 
-       1. Please read [Quick start](../../setup/install-python) to install 
Sedona Python.
-       2. This tutorial is based on [Sedona SQL Jupyter Notebook 
example](../jupyter-notebook). You can interact with Sedona Python Jupyter 
Notebook immediately on Binder. Click 
[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/apache/sedona/HEAD?filepath=binder)
 to interact with Sedona Python Jupyter notebook immediately on Binder.
+       1. Please read [Quick start](../setup/install-python.md) to install 
Sedona Python.
+       2. This tutorial is based on the [Sedona SQL Jupyter Notebook example](jupyter-notebook.md). Click [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/apache/sedona/HEAD?filepath=binder) to interact with the Sedona Python Jupyter notebook immediately on Binder.
 
 ## Create Sedona config
 
@@ -318,7 +318,7 @@ Sedona has a function to get the metadata for the raster, 
and also a function to
 
 ### Metadata
 
-This function will return an array of metadata, it will have all the necessary 
information about the raster, Please refer to 
[RS_MetaData](../../api/sql/Raster-operators/#rs_metadata).
+This function will return an array of metadata with all the necessary information about the raster. Please refer to [RS_MetaData](../api/sql/Raster-operators.md#rs_metadata).
 
 ```sql
 SELECT RS_MetaData(rast) FROM rasterDf
@@ -334,7 +334,7 @@ The first two elements of the array represent the 
real-world geographic coordina
 
 ### World File
 
-There are two kinds of georeferences, GDAL and ESRI seen in [world 
files](https://en.wikipedia.org/wiki/World_file). For more information please 
refer to [RS_GeoReference](../../api/sql/Raster-operators/#rs_georeference).
+There are two kinds of georeferences, GDAL and ESRI, as seen in [world files](https://en.wikipedia.org/wiki/World_file). For more information, please refer to [RS_GeoReference](../api/sql/Raster-operators.md#rs_georeference).
 
 ```sql
 SELECT RS_GeoReference(rast, "ESRI") FROM rasterDf
@@ -351,14 +351,14 @@ The Output will be as follows:
 4021226.584486
 ```
 
-World files are used to georeference and geo-locate images by establishing an 
image-to-world coordinate transformation that assigns real-world geographic 
coordinates to the pixels of the image.
+World files are used to georeference and geolocate images by establishing an 
image-to-world coordinate transformation that assigns real-world geographic 
coordinates to the pixels of the image.
 
 ## Raster Manipulation
 
 Since `v1.5.0` there have been many additions for manipulating raster data; we will show you a few example queries.
 
 !!!note
-    Read [SedonaSQL Raster operators](../../api/sql/Raster-operators) to learn 
how you can use Sedona for raster manipulation.
+    Read [SedonaSQL Raster operators](../api/sql/Raster-operators.md) to learn 
how you can use Sedona for raster manipulation.
 
 ### Coordinate translation
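The world-file georeference discussed above is a six-parameter affine mapping from pixel indices to world coordinates; a minimal plain-Python sketch (parameter names follow the ESRI world-file convention, and the sample numbers are made up, not taken from the example raster):

```python
# Sketch of the ESRI world-file affine transform: the six parameters are,
# in file order, A (x pixel size), D and B (rotation terms), E (y pixel
# size, usually negative), and C, F (world coordinates of the center of
# the upper-left pixel).
def pixel_to_world(col, row, A, D, B, E, C, F):
    """Map a 0-based (col, row) pixel index to world coordinates."""
    x = C + A * col + B * row
    y = F + D * col + E * row
    return x, y

# North-up raster: 10 m pixels, no rotation, origin at (500000, 4000000).
print(pixel_to_world(2, 3, 10.0, 0.0, 0.0, -10.0, 500000.0, 4000000.0))
# (500020.0, 3999970.0)
```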
 
@@ -366,7 +366,7 @@ Sedona allows you to translate coordinates as per your 
needs. It can translate p
 
 #### PixelAsPoint
 
-Use [RS_PixelAsPoint](../../api/sql/Raster-operators#rs_pixelaspoint) to 
translate pixel coordinates to world location.
+Use [RS_PixelAsPoint](../api/sql/Raster-operators.md#rs_pixelaspoint) to 
translate pixel coordinates to world location.
 
 ```sql
 SELECT RS_PixelAsPoint(rast, 450, 400) FROM rasterDf
@@ -380,7 +380,7 @@ POINT (-13063342 3992403.75)
 
 #### World to Raster Coordinate
 
-Use 
[RS_WorldToRasterCoord](../../api/sql/Raster-operators#rs_worldtorastercoord) 
to translate world location to pixel coordinates. To just get X coordinate use 
[RS_WorldToRasterCoordX](../../api/sql/Raster-operators#rs_worldtorastercoordx) 
and for just Y coordinate use 
[RS_WorldToRasterCoordY](../../api/sql/Raster-operators#rs_worldtorastercoordy).
+Use [RS_WorldToRasterCoord](../api/sql/Raster-operators.md#rs_worldtorastercoord) to translate a world location to pixel coordinates. To get just the X coordinate, use [RS_WorldToRasterCoordX](../api/sql/Raster-operators.md#rs_worldtorastercoordx), and for just the Y coordinate, use [RS_WorldToRasterCoordY](../api/sql/Raster-operators.md#rs_worldtorastercoordy).
 
 ```sql
 SELECT RS_WorldToRasterCoord(rast, -1.3063342E7, 3992403.75)
@@ -394,7 +394,7 @@ POINT (450 400)
 
 ### Pixel Manipulation
 
-Use [RS_Values](../../api/sql/Raster-operators#rs_values) to fetch values for 
a specified array of Point Geometries. The coordinates in the point geometry 
are indicative of real-world location.
+Use [RS_Values](../api/sql/Raster-operators.md#rs_values) to fetch values for 
a specified array of Point Geometries. The coordinates in the point geometry 
are indicative of real-world location.
 
 ```sql
 SELECT RS_Values(rast, Array(ST_Point(-13063342, 3992403.75), 
ST_Point(-13074192, 3996020)))
@@ -406,7 +406,7 @@ Output:
 [132.0, 148.0]
 ```
 
-To change values over a grid or area defined by geometry, we will use 
[RS_SetValues](../../api/sql/Raster-operators#rs_setvalues).
+To change values over a grid or area defined by geometry, we will use 
[RS_SetValues](../api/sql/Raster-operators.md#rs_setvalues).
 
 ```sql
 SELECT RS_SetValues(
@@ -419,7 +419,7 @@ Follow the links to get more information on how to use the 
functions appropriate
 
 ### Band Manipulation
 
-Sedona provides APIs to select specific bands from a raster image and create a 
new raster. For example, to select 2 bands from a raster, you can use the 
[RS_Band](../../api/sql/Raster-operators#rs_band) API to retrieve the desired 
multi-band raster.
+Sedona provides APIs to select specific bands from a raster image and create a 
new raster. For example, to select 2 bands from a raster, you can use the 
[RS_Band](../api/sql/Raster-operators.md#rs_band) API to retrieve the desired 
multi-band raster.
 
 Let's use a [multi-band 
raster](https://github.com/apache/sedona/blob/2a0b36989aa895c0781f9a10c907dd726506d0b7/spark/common/src/test/resources/raster_geotiff_color/FAA_UTM18N_NAD83.tif)
 for this example. The process of loading and converting it to raster type is 
the same.
 
@@ -427,7 +427,7 @@ Let's use a [multi-band 
raster](https://github.com/apache/sedona/blob/2a0b36989a
 SELECT RS_Band(colorRaster, Array(1, 2))
 ```
 
-Let's say you have many single-banded rasters and want to add a band to the 
raster to perform [map algebra operations](#execute-map-algebra-operations). 
You can do so using [RS_AddBand](../../api/sql/Raster-operators#rs_addband) 
Sedona function.
+Let's say you have many single-banded rasters and want to add a band to the raster to perform [map algebra operations](#execute-map-algebra-operations). You can do so using the [RS_AddBand](../api/sql/Raster-operators.md#rs_addband) Sedona function.
 
 ```sql
 SELECT RS_AddBand(raster1, raster2, 1, 2)
@@ -437,7 +437,7 @@ This will result in `raster1` having `raster2`'s specified 
band.
 
 ### Resample raster data
 
-Sedona allows you to resample raster data using different interpolation 
methods like the nearest neighbor, bilinear, and bicubic to change the cell 
size or align raster grids, using 
[RS_Resample](../../api/sql/Raster-operators/#rs_resample).
+Sedona allows you to resample raster data using different interpolation methods, such as nearest neighbor, bilinear, and bicubic, to change the cell size or align raster grids, using [RS_Resample](../api/sql/Raster-operators.md#rs_resample).
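As a rough sketch of what one of these methods computes per output cell, here is a plain-Python bilinear blend over a single cell (illustrative only, not Sedona's resampler):

```python
def bilinear(q00, q10, q01, q11, fx, fy):
    """Blend four corner cell values at a sample point (fx, fy) in [0, 1]^2."""
    top = q00 * (1 - fx) + q10 * fx      # interpolate along x at the top row
    bottom = q01 * (1 - fx) + q11 * fx   # interpolate along x at the bottom row
    return top * (1 - fy) + bottom * fy  # interpolate the two results along y

# Corners reproduce exactly; the cell center is the mean of all four corners.
print(bilinear(0, 10, 20, 30, 0.5, 0.5))  # 15.0
```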
 
 ```sql
 SELECT RS_Resample(rast, 50, -50, -13063342, 3992403.75, true, "bicubic")
@@ -461,13 +461,13 @@ where NIR is the near-infrared band and Red is the red 
band.
 SELECT RS_MapAlgebra(raster, 'D', 'out = (rast[3] - rast[0]) / (rast[3] + 
rast[0]);') as ndvi FROM raster_table
 ```
 
-For more information please refer to [Map Algebra 
API](../../api/sql/Raster-map-algebra).
+For more information please refer to [Map Algebra 
API](../api/sql/Raster-map-algebra.md).
 
 ## Interoperability between raster and vector data
 
 ### Geometry As Raster
 
-Sedona allows you to rasterize a geometry by using 
[RS_AsRaster](../../api/sql/Raster-writer/#rs_asraster).
+Sedona allows you to rasterize a geometry by using 
[RS_AsRaster](../api/sql/Raster-writer.md#rs_asraster).
 
 ```sql
 SELECT RS_AsRaster(
@@ -479,14 +479,14 @@ SELECT RS_AsRaster(
 
 The image created is as below for the vector:
 
-![Rasterized vector](../../image/rasterized-image.png)
+![Rasterized vector](../image/rasterized-image.png)
 
 !!!note
     The vector coordinates are buffered up to showcase the output; a real use case may or may not match the example.
 
 ### Spatial range query
 
-Sedona provides raster predicates to do a range query using a geometry window, 
for example, let's use 
[RS_Intersects](../../api/sql/Raster-operators#rs_intersects).
+Sedona provides raster predicates to do a range query using a geometry window. For example, let's use [RS_Intersects](../api/sql/Raster-operators.md#rs_intersects).
 
 ```sql
 SELECT rast FROM rasterDf WHERE RS_Intersects(rast, ST_GeomFromWKT('POLYGON((0 0, 0 10, 10 10, 10 0, 0 0))'))
@@ -503,7 +503,7 @@ SELECT r.rast, g.geom FROM rasterDf r, geomDf g WHERE 
RS_Interest(r.rast, g.geom
 !!!note
     These range and join queries will filter rasters using the provided 
geometric boundary and the spatial boundary of the raster.
 
-    Sedona offers more raster predicates to do spatial range queries and 
spatial join queries. Please refer to [raster predicates 
docs](../../api/sql/Raster-operators/#raster-predicates).
+    Sedona offers more raster predicates to do spatial range queries and 
spatial join queries. Please refer to [raster predicates 
docs](../api/sql/Raster-operators.md#raster-predicates).
 
 ## Visualize raster images
 
@@ -511,7 +511,7 @@ Sedona provides APIs to visualize raster data in an image 
form.
 
 ### Base64 String
 
-The [RS_AsBase64](../../api/sql/Raster-visualizer#rs_asbase64) encodes the 
raster data as a Base64 string and can be visualized using [online 
decoder](https://base64-viewer.onrender.com/).
+The [RS_AsBase64](../api/sql/Raster-visualizer.md#rs_asbase64) function encodes the raster data as a Base64 string, which can be visualized using an [online decoder](https://base64-viewer.onrender.com/).
 
 ```sql
 SELECT RS_AsBase64(rast) FROM rasterDf
@@ -519,7 +519,7 @@ SELECT RS_AsBase64(rast) FROM rasterDf
 
 ### HTML Image
 
-The [RS_AsImage](../../api/sql/Raster-visualizer#rs_asimage) returns an HTML 
image tag, that can be visualized using an HTML viewer or in Jupyter Notebook. 
For more information please click on the link.
+The [RS_AsImage](../api/sql/Raster-visualizer.md#rs_asimage) function returns an HTML image tag that can be visualized using an HTML viewer or in a Jupyter Notebook. For more information, please follow the link.
 
 ```sql
 SELECT RS_AsImage(rast, 500) FROM rasterDf
@@ -527,7 +527,7 @@ SELECT RS_AsImage(rast, 500) FROM rasterDf
 
 The output looks like this:
 
-![Output](../../image/DisplayImage.png)
+![Output](../image/DisplayImage.png)
 
 ### 2-D Matrix
 
@@ -545,7 +545,7 @@ Output will be as follows:
 | 3   4   5   6|
 ```
 
-Please refer to [Raster visualizer docs](../../api/sql/Raster-visualizer) to 
learn how to make the most of the visualizing APIs.
+Please refer to [Raster visualizer docs](../api/sql/Raster-visualizer.md) to 
learn how to make the most of the visualizing APIs.
 
 ## Save to permanent storage
 
@@ -559,7 +559,7 @@ Sedona has a few writer functions that create the binary 
DataFrame necessary for
 
 ### As Arc Grid
 
-Use [RS_AsArcGrid](../../api/sql/Raster-writer#rs_asarcgrid) to get the binary 
Dataframe of the raster in Arc Grid format.
+Use [RS_AsArcGrid](../api/sql/Raster-writer.md#rs_asarcgrid) to get the binary DataFrame of the raster in Arc Grid format.
 
 ```sql
 SELECT RS_AsArcGrid(raster)
@@ -567,7 +567,7 @@ SELECT RS_AsArcGrid(raster)
 
 ### As GeoTiff
 
-Use [RS_AsGeoTiff](../../api/sql/Raster-writer#rs_asgeotiff) to get the binary 
Dataframe of the raster in GeoTiff format.
+Use [RS_AsGeoTiff](../api/sql/Raster-writer.md#rs_asgeotiff) to get the binary DataFrame of the raster in GeoTiff format.
 
 ```sql
 SELECT RS_AsGeoTiff(raster)
@@ -575,13 +575,13 @@ SELECT RS_AsGeoTiff(raster)
 
 ### As PNG
 
-Use [RS_AsPNG](../../api/sql/Raster-writer#rs_aspng) to get the binary 
Dataframe of the raster in PNG format.
+Use [RS_AsPNG](../api/sql/Raster-writer.md#rs_aspng) to get the binary DataFrame of the raster in PNG format.
 
 ```sql
 SELECT RS_AsPNG(raster)
 ```
 
-Please refer to [Raster writer docs](../../api/sql/Raster-writer) for more 
details.
+Please refer to [Raster writer docs](../api/sql/Raster-writer.md) for more 
details.
 
 ## Collecting raster Dataframes and working with them locally in Python
 
@@ -671,4 +671,4 @@ df_raster.withColumn("mask", 
expr("mask_udf(rast)")).withColumn("mask_rast", exp
 
 ## Performance optimization
 
-When working with large raster datasets, refer to the [documentation on 
storing raster geometries in Parquet format](../storing-blobs-in-parquet) for 
recommendations to optimize performance.
+When working with large raster datasets, refer to the [documentation on 
storing raster geometries in Parquet format](storing-blobs-in-parquet.md) for 
recommendations to optimize performance.
diff --git a/docs/tutorial/rdd.md b/docs/tutorial/rdd.md
index f0539f2fc..bf404aff9 100644
--- a/docs/tutorial/rdd.md
+++ b/docs/tutorial/rdd.md
@@ -3,15 +3,15 @@ The page outlines the steps to create Spatial RDDs and run 
spatial queries using
 
 ## Set up dependencies
 
-Please refer to [Set up dependencies](../sql/#set-up-dependencies) to set up 
dependencies.
+Please refer to [Set up dependencies](sql.md#set-up-dependencies) to set up 
dependencies.
 
 ## Create Sedona config
 
-Please refer to [Create Sedona config](../sql/#create-sedona-config) to create 
a Sedona config.
+Please refer to [Create Sedona config](sql.md#create-sedona-config) to create 
a Sedona config.
 
 ## Initiate SedonaContext
 
-Please refer to [Initiate SedonaContext](../sql/#initiate-sedonacontext) to 
initiate a SedonaContext.
+Please refer to [Initiate SedonaContext](sql.md#initiate-sedonacontext) to 
initiate a SedonaContext.
 
 ## Create a SpatialRDD
 
diff --git a/docs/tutorial/snowflake/sql.md b/docs/tutorial/snowflake/sql.md
index b0652bb61..005da89bc 100644
--- a/docs/tutorial/snowflake/sql.md
+++ b/docs/tutorial/snowflake/sql.md
@@ -54,7 +54,7 @@ FROM city_tbl_geom
 ```
 
 !!!note
-       SedonaSQL provides lots of functions to create a Geometry column, 
please read [SedonaSQL API](../../../api/snowflake/vector-data/Constructor/).
+       SedonaSQL provides lots of functions to create a Geometry column; please read the [SedonaSQL API](../../api/snowflake/vector-data/Constructor.md).
 
 ## Check the lon/lat order
 
@@ -109,7 +109,7 @@ FROM city_tbl_geom
 ```
 
 !!!note
-       SedonaSQL provides lots of functions to save the Geometry column, 
please read [SedonaSQL API](../../../api/snowflake/vector-data/Function/).
+       SedonaSQL provides lots of functions to save the Geometry column; please read the [SedonaSQL API](../../api/snowflake/vector-data/Function.md).
 
 ## Transform the Coordinate Reference System
 
@@ -181,7 +181,7 @@ WHERE 
Sedona.ST_Contains(Sedona.ST_PolygonFromEnvelope(1.0,100.0,1000.0,1100.0),
 ```
 
 !!!note
-       Read [SedonaSQL API](../../../api/snowflake/vector-data/Constructor/) 
to learn how to create a Geometry type query window.
+       Read [SedonaSQL API](../../api/snowflake/vector-data/Constructor.md) to 
learn how to create a Geometry type query window.
 
 ## KNN query
 
@@ -251,7 +251,7 @@ WHERE ST_FrechetDistance(pointDf.pointshape, 
polygonDf.polygonshape) < 2
 ```
 
 !!!warning
-       If you use planar euclidean distance functions like `ST_Distance`, 
`ST_HausdorffDistance` or `ST_FrechetDistance` as the predicate, Sedona doesn't 
control the distance's unit (degree or meter). It is same with the geometry. If 
your coordinates are in the longitude and latitude system, the unit of 
`distance` should be degree instead of meter or mile. To change the geometry's 
unit, please either transform the coordinate reference system to a meter-based 
system. See [ST_Transform](../../.. [...]
+       If you use planar Euclidean distance functions like `ST_Distance`, `ST_HausdorffDistance` or `ST_FrechetDistance` as the predicate, Sedona doesn't control the distance's unit (degree or meter); it is the same as the geometry's. If your coordinates are in the longitude and latitude system, the unit of `distance` should be degree instead of meter or mile. To change the geometry's unit, please either transform the coordinate reference system to a meter-based system. See [ST_Transform](../../ap [...]
 
 ```sql
 SELECT *
@@ -267,7 +267,7 @@ Please use the following steps:
 
 ### 1. Generate S2 ids for both tables
 
-Use [ST_S2CellIds](../../../api/snowflake/vector-data/Function/#st_s2cellids) 
to generate cell IDs. Each geometry may produce one or more IDs.
+Use [ST_S2CellIds](../../api/snowflake/vector-data/Function.md#ST_S2CellIDs) 
to generate cell IDs. Each geometry may produce one or more IDs.
 
 ```sql
 SELECT * FROM lefts, TABLE(FLATTEN(ST_S2CellIDs(lefts.geom, 15))) s1
@@ -290,7 +290,7 @@ FROM lcs JOIN rcs ON lcs.cellId = rcs.cellId
 
 Due to the nature of S2 cell IDs, the equi-join results might contain a few false positives depending on the S2 level you choose. A smaller level means bigger cells and fewer exploded rows, but more false positives.
 
-To ensure the correctness, you can use one of the [Spatial 
Predicates](../../../api/snowflake/vector-data/Predicate/) to filter out them. 
Use this query instead of the query in Step 2.
+To ensure correctness, you can use one of the [Spatial Predicates](../../api/snowflake/vector-data/Predicate.md) to filter them out. Use this query instead of the query in Step 2.
 
 ```sql
 SELECT lcs.id as lcs_id, lcs.geom as lcs_geom, lcs.name as lcs_name, rcs.id as 
rcs_id, rcs.geom as rcs_geom, rcs.name as rcs_name
@@ -324,7 +324,7 @@ GROUP BY (lcs_geom, rcs_geom)
 ```
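The filter-and-refine pattern above can be sketched with a toy square grid standing in for S2 (hypothetical helper names, plain Python — not Sedona's API):

```python
def cover_cells(box, size=10):
    """Grid-cell IDs (a toy stand-in for S2 cell IDs) covering (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = box
    return {(cx, cy)
            for cx in range(xmin // size, xmax // size + 1)
            for cy in range(ymin // size, ymax // size + 1)}

def intersects(a, b):
    """Exact axis-aligned box intersection -- the refinement predicate."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

lefts = {"A": (0, 0, 4, 4), "B": (20, 20, 24, 24)}
rights = {"X": (6, 6, 9, 9), "Y": (22, 22, 30, 30)}

# Steps 1-2: explode each geometry to cell IDs, then equi-join on shared IDs.
candidates = {(l, r)
              for l, lb in lefts.items()
              for r, rb in rights.items()
              if cover_cells(lb) & cover_cells(rb)}

# Step 3: refine with the exact predicate; (A, X) shared a coarse cell but does
# not actually intersect -- the false positive the predicate filters out.
exact = {pair for pair in candidates if intersects(lefts[pair[0]], rights[pair[1]])}
print(sorted(candidates), sorted(exact))
```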
 
 !!!note
-       If you are doing point-in-polygon join, this is not a problem and you 
can safely discard this issue. This issue only happens when you do 
polygon-polygon, polygon-linestring, linestring-linestring join.
+       If you are doing point-in-polygon join, this is not a problem, and you 
can safely discard this issue. This issue only happens when you do 
polygon-polygon, polygon-linestring, linestring-linestring join.
 
 ### S2 for distance join
 
@@ -343,16 +343,16 @@ FROM lefts
 
 Sedona implements over 200 geospatial vector and raster functions, far more than what Snowflake's native functions offer. For example:
 
-* [ST_3DDistance](../../../api/snowflake/vector-data/Function/#st_3ddistance)
-* [ST_Force2D](../../../api/snowflake/vector-data/Function/#st_force_2d)
-* [ST_GeometryN](../../../api/snowflake/vector-data/Function/#st_geometryn)
-* [ST_MakeValid](../../../api/snowflake/vector-data/Function/#st_makevalid)
-* [ST_Multi](../../../api/snowflake/vector-data/Function/#st_multi)
-* 
[ST_NumGeometries](../../../api/snowflake/vector-data/Function/#st_numgeometries)
-* 
[ST_ReducePrecision](../../../api/snowflake/vector-data/Function/#st_precisionreduce)
-* 
[ST_SubdivdeExplode](../../../api/snowflake/vector-data/Function/#st_subdivideexplode)
+* [ST_3DDistance](../../api/snowflake/vector-data/Function.md#st_3ddistance)
+* [ST_Force2D](../../api/snowflake/vector-data/Function.md#st_force_2d)
+* [ST_GeometryN](../../api/snowflake/vector-data/Function.md#st_geometryn)
+* [ST_MakeValid](../../api/snowflake/vector-data/Function.md#st_makevalid)
+* [ST_Multi](../../api/snowflake/vector-data/Function.md#st_multi)
+* 
[ST_NumGeometries](../../api/snowflake/vector-data/Function.md#st_numgeometries)
+* 
[ST_ReducePrecision](../../api/snowflake/vector-data/Function.md#st_reduceprecision)
+* [ST_SubdivideExplode](../../api/snowflake/vector-data/Function.md#st_subdivideexplode)
 
-You can click the links above to learn more about these functions. More 
functions can be found in [SedonaSQL 
API](../../../api/snowflake/vector-data/Function/).
+You can click the links above to learn more about these functions. More 
functions can be found in [SedonaSQL 
API](../../api/snowflake/vector-data/Function.md).
 
 ## Interoperate with Snowflake native functions
 
diff --git a/docs/tutorial/sql.md b/docs/tutorial/sql.md
index c3cb99d77..21baa790d 100644
--- a/docs/tutorial/sql.md
+++ b/docs/tutorial/sql.md
@@ -34,12 +34,12 @@ Detailed SedonaSQL APIs are available here: [SedonaSQL 
API](../api/sql/Overview.
 
        1. Read [Sedona Maven Central 
coordinates](../setup/maven-coordinates.md) and add Sedona dependencies in 
build.sbt or pom.xml.
        2. Add [Apache Spark 
core](https://mvnrepository.com/artifact/org.apache.spark/spark-core_2.11), 
[Apache 
SparkSQL](https://mvnrepository.com/artifact/org.apache.spark/spark-sql) in 
build.sbt or pom.xml.
-       3. Please see [SQL example project](../demo/)
+       3. Please see [SQL example project](demo.md)
 
 === "Python"
 
-       1. Please read [Quick start](../../setup/install-python) to install 
Sedona Python.
-       2. This tutorial is based on [Sedona SQL Jupyter Notebook 
example](../jupyter-notebook). You can interact with Sedona Python Jupyter 
notebook immediately on Binder. Click 
[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/apache/sedona/HEAD?filepath=binder)
 to interact with Sedona Python Jupyter notebook immediately on Binder.
+       1. Please read [Quick start](../setup/install-python.md) to install 
Sedona Python.
+       2. This tutorial is based on the [Sedona SQL Jupyter Notebook example](jupyter-notebook.md). Click [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/apache/sedona/HEAD?filepath=binder) to interact with the Sedona Python Jupyter notebook immediately on Binder.
 
 ## Create Sedona config
 
@@ -333,7 +333,7 @@ This prevents Spark from interpreting the property and 
allows us to use the ST_G
 
 ## Load Shapefile and GeoJSON using SpatialRDD
 
-Shapefile and GeoJSON can be loaded by SpatialRDD and converted to DataFrame 
using Adapter. Please read [Load 
SpatialRDD](../rdd/#create-a-generic-spatialrdd) and [DataFrame <-> 
RDD](#convert-between-dataframe-and-spatialrdd).
+Shapefile and GeoJSON can be loaded by SpatialRDD and converted to DataFrame 
using Adapter. Please read [Load 
SpatialRDD](rdd.md#create-a-generic-spatialrdd) and [DataFrame <-> 
RDD](#convert-between-dataframe-and-spatialrdd).
 
 ## Load GeoParquet
 
@@ -588,7 +588,7 @@ SedonaPyDeck exposes APIs to create interactive map 
visualizations using [pydeck
 
 The following tutorial showcases the various maps that can be created using SedonaPyDeck; the datasets used to create these maps are publicly available.
 
-Each API exposed by SedonaPyDeck offers customization via optional arguments, 
details on all possible arguments can be found in the [API docs of 
SedonaPyDeck](../../api/sql/Visualization_SedonaPyDeck).
+Each API exposed by SedonaPyDeck offers customization via optional arguments; details on all possible arguments can be found in the [API docs of SedonaPyDeck](../api/sql/Visualization_SedonaPyDeck.md).
 
 #### Creating a Choropleth map using SedonaPyDeck
 
@@ -603,7 +603,7 @@ SedonaPyDeck.create_choropleth_map(df=groupedresult, 
plot_col='AirportCount')
 !!!Note
        `plot_col` is a required argument informing SedonaPyDeck of the column 
name used to render the choropleth effect.
 
-<img src="../../image/choropleth.gif" width="1000">
+![](../image/choropleth.gif){: width="1000px"}
 
 The dataset used is available 
[here](https://github.com/apache/sedona/tree/4c5fa8333b2c61850d5664b878df9493c7915066/binder/data/ne_50m_airports)
 and
 can also be found in the example notebook available 
[here](https://github.com/apache/sedona/blob/4c5fa8333b2c61850d5664b878df9493c7915066/binder/ApacheSedonaSQL_SpatialJoin_AirportsPerCountry.ipynb)
@@ -618,7 +618,7 @@ Example (referenced from overture notebook available via 
binder):
 SedonaPyDeck.create_geometry_map(df_building, elevation_col='height')
 ```
 
-<img src="../../image/buildings.gif" width="1000">
+![](../image/buildings.gif){: width="1000px"}
 
 !!!Tip
        `elevation_col` is an optional argument which can be used to render a 
3D map. Pass the column with 'elevation' values for the geometries here.
@@ -633,7 +633,7 @@ Example:
 SedonaPyDeck.create_scatterplot_map(df=crimes_df)
 ```
 
-<img src="../../image/points.gif" width="1000">
+![](../image/points.gif){: width="1000px"}
 
 The dataset used here is the Chicago crimes dataset, available 
[here](https://github.com/apache/sedona/blob/sedona-1.5.0/spark/common/src/test/resources/Chicago_Crimes.csv)
 
@@ -647,7 +647,7 @@ Example:
 SedonaPyDeck.create_heatmap(df=crimes_df)
 ```
 
-<img src="../../image/heatmap.gif" width="1000">
+![](../image/heatmap.gif){: width="1000px"}
 
 The dataset used here is the Chicago crimes dataset, available 
[here](https://github.com/apache/sedona/blob/sedona-1.5.0/spark/common/src/test/resources/Chicago_Crimes.csv)
 
@@ -671,12 +671,12 @@ Example (referenced from an example notebook via the 
binder):
 SedonaKepler.create_map(df=groupedresult, name="AirportCount")
 ```
 
-<img src="../../image/sedona_customization.gif" width="1000">
+![](../image/sedona_customization.gif){: width="1000px"}
 
 The dataset used is available 
[here](https://github.com/apache/sedona/tree/4c5fa8333b2c61850d5664b878df9493c7915066/binder/data/ne_50m_airports)
 and
 can also be found in the example notebook available 
[here](https://github.com/apache/sedona/blob/4c5fa8333b2c61850d5664b878df9493c7915066/binder/ApacheSedonaSQL_SpatialJoin_AirportsPerCountry.ipynb)
 
-Details on all the APIs available by SedonaKepler are listed in the 
[SedonaKepler API docs](../../api/sql/Visualization_SedonaKepler)
+Details on all the APIs offered by SedonaKepler are listed in the [SedonaKepler API docs](../api/sql/Visualization_SedonaKepler.md).
 
 ## Create a User-Defined Function (UDF)
 
@@ -1007,7 +1007,7 @@ Due to the same reason, Sedona geoparquet reader and 
writer do NOT check the axi
 
 ## Sort then Save GeoParquet
 
-To maximize the performance of Sedona GeoParquet filter pushdown, we suggest 
that you sort the data by their geohash values (see 
[ST_GeoHash](../../api/sql/Function/#st_geohash)) and then save as a GeoParquet 
file. An example is as follows:
+To maximize the performance of Sedona GeoParquet filter pushdown, we suggest 
that you sort the data by their geohash values (see 
[ST_GeoHash](../api/sql/Function.md#st_geohash)) and then save as a GeoParquet 
file. An example is as follows:
 
 ```
 SELECT col1, col2, geom, ST_GeoHash(geom, 5) as geohash
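Sorting by geohash clusters nearby rows because geohash is a hierarchical prefix code: truncating a hash yields the enclosing coarser cell, so lexicographic order keeps spatial neighbors adjacent. A minimal stdlib encoder sketch (illustrative; in Sedona you would simply call `ST_GeoHash`):

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash(lat, lon, precision=11):
    """Standard geohash: interleave lon/lat range bisections, 5 bits per base32 char."""
    lat_rng, lon_rng = [-90.0, 90.0], [-180.0, 180.0]
    use_lon, ch, nbits, out = True, 0, 0, []
    while len(out) < precision:
        rng, val = (lon_rng, lon) if use_lon else (lat_rng, lat)
        mid = (rng[0] + rng[1]) / 2
        if val >= mid:
            ch = (ch << 1) | 1  # keep the upper half of the range
            rng[0] = mid
        else:
            ch <<= 1            # keep the lower half of the range
            rng[1] = mid
        use_lon = not use_lon
        nbits += 1
        if nbits == 5:
            out.append(BASE32[ch])
            ch, nbits = 0, 0
    return "".join(out)

# Truncating the hash gives the enclosing coarser cell, which is why
# sorting by geohash groups nearby geometries in the GeoParquet file.
print(geohash(57.64911, 10.40744))  # u4pruydqqvj
```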
@@ -1044,7 +1044,7 @@ my_postgis_db# alter table my_table alter column geom 
type geometry;
 
 ### DataFrame to SpatialRDD
 
-Use SedonaSQL DataFrame-RDD Adapter to convert a DataFrame to an SpatialRDD. 
Please read [Adapter 
Scaladoc](../../api/scaladoc/spark/org/apache/sedona/sql/utils/index.html)
+Use the SedonaSQL DataFrame-RDD Adapter to convert a DataFrame to a SpatialRDD. Please read the [Adapter Scaladoc](../api/scaladoc/spark/org/apache/sedona/sql/utils/index.html)
 
 === "Scala"
 
@@ -1073,7 +1073,7 @@ Use SedonaSQL DataFrame-RDD Adapter to convert a 
DataFrame to an SpatialRDD. Ple
 
 ### SpatialRDD to DataFrame
 
-Use SedonaSQL DataFrame-RDD Adapter to convert a DataFrame to an SpatialRDD. 
Please read [Adapter 
Scaladoc](../../api/javadoc/sql/org/apache/sedona/sql/utils/index.html)
+Use the SedonaSQL DataFrame-RDD Adapter to convert a SpatialRDD to a DataFrame. Please read the [Adapter Scaladoc](../api/javadoc/sql/org/apache/sedona/sql/utils/index.html)
 
 === "Scala"
 
@@ -1095,11 +1095,11 @@ Use SedonaSQL DataFrame-RDD Adapter to convert a 
DataFrame to an SpatialRDD. Ple
        spatialDf = Adapter.toDf(spatialRDD, sedona)
        ```
 
-All other attributes such as price and age will be also brought to the 
DataFrame as long as you specify ==carryOtherAttributes== (see [Read other 
attributes in an SpatialRDD](../rdd#read-other-attributes-in-an-spatialrdd)).
+All other attributes, such as price and age, will also be brought to the DataFrame as long as you specify ==carryOtherAttributes== (see [Read other attributes in an SpatialRDD](rdd.md#read-other-attributes-in-an-spatialrdd)).
 
 You may also manually specify a schema for the resulting DataFrame in case you 
require different column names or data
 types. Note that string schemas and not all data types are 
supported&mdash;please check the
-[Adapter 
Scaladoc](../../api/javadoc/sql/org/apache/sedona/sql/utils/index.html) to 
confirm what is supported for your use
+[Adapter Scaladoc](../api/javadoc/sql/org/apache/sedona/sql/utils/index.html) 
to confirm what is supported for your use
 case. At least one column for the user data must be provided.
 
 === "Scala"
@@ -1164,11 +1164,11 @@ or you can use the attribute names directly from the 
input RDD
        joinResultDf = Adapter.toDf(result_pair_rdd, leftRdd.fieldNames, 
rightRdd.fieldNames, spark)
        ```
 
-All other attributes such as price and age will be also brought to the 
DataFrame as long as you specify ==carryOtherAttributes== (see [Read other 
attributes in an SpatialRDD](../rdd#read-other-attributes-in-an-spatialrdd)).
+All other attributes, such as price and age, will also be brought to the DataFrame as long as you specify ==carryOtherAttributes== (see [Read other attributes in an SpatialRDD](rdd.md#read-other-attributes-in-an-spatialrdd)).
 
 You may also manually specify a schema for the resulting DataFrame in case you 
require different column names or data
 types. Note that string schemas and not all data types are 
supported&mdash;please check the
-[Adapter 
Scaladoc](../../api/javadoc/sql/org/apache/sedona/sql/utils/index.html) to 
confirm what is supported for your use
+[Adapter Scaladoc](../api/javadoc/sql/org/apache/sedona/sql/utils/index.html) 
to confirm what is supported for your use
 case. Columns for the left and right user data must be provided.
 
 === "Scala"
diff --git a/docs/tutorial/viz-gallery.md b/docs/tutorial/viz-gallery.md
index 12ef0ce5d..3185c2521 100644
--- a/docs/tutorial/viz-gallery.md
+++ b/docs/tutorial/viz-gallery.md
@@ -1,4 +1,4 @@
-<img style="float: left;" src="../../image/usrail.png" width="250">
-<img src="../../image/ustweet.png" width="250">
+![](../image/usrail.png){: width="250"}
+![](../image/ustweet.png){: width="250"}
 
-<img src="../../image/heatmapnycsmall.png" width="500">
+![](../image/heatmapnycsmall.png){: width="500px"}
diff --git a/docs/tutorial/viz.md b/docs/tutorial/viz.md
index 0bf607992..a709af002 100644
--- a/docs/tutorial/viz.md
+++ b/docs/tutorial/viz.md
@@ -5,7 +5,7 @@ SedonaViz provides native support for general cartographic 
design by extending S
 SedonaViz offers Map Visualization SQL. This gives users a more flexible way 
to design beautiful map visualization effects including scatter plots and heat 
maps. SedonaViz RDD API is also available.
 
 !!!note
-       All SedonaViz SQL/DataFrame APIs are explained in [SedonaViz 
API](../../api/viz/sql). Please see [Viz example 
project](https://github.com/apache/sedona/tree/master/examples/spark-viz)
+       All SedonaViz SQL/DataFrame APIs are explained in [SedonaViz 
API](../api/viz/sql.md). Please see [Viz example 
project](https://github.com/apache/sedona/tree/master/examples/spark-viz)
 
 ## Why scalable map visualization?
 
@@ -94,7 +94,7 @@ SELECT ST_Point(cast(pointtable._c0 as 
Decimal(24,20)),cast(pointtable._c1 as De
 FROM pointtable
 ```
 
-As you know, Sedona provides many different methods to load various spatial 
data formats. Please read [Write a Spatial DataFrame application](../sql).
+As you know, Sedona provides many different methods to load various spatial 
data formats. Please read [Write a Spatial DataFrame application](sql.md).
 
 ## Generate a single image
 
@@ -113,7 +113,7 @@ SELECT ST_Envelope_Aggr(shape) as bound FROM pointtable
 
 Then use ST_Pixelize to convert them to pixels.
 
-This example is for Sedona before v1.0.1. ST_Pixelize extends Generator so it 
can directly flatten the array without the **explode** function.
+This example is for Sedona before v1.0.1. ST_Pixelize extends Generator, so it 
can directly flatten the array without the **explode** function.
 
 ```sql
 CREATE OR REPLACE TEMP VIEW pixels AS
@@ -132,7 +132,7 @@ LATERAL VIEW explode(ST_Pixelize(ST_Transform(shape, 
'epsg:4326','epsg:3857'), 2
 This will give you a 256*256 resolution image after you run ST_Render at the 
end of this tutorial.
 
 !!!warning
-       We highly suggest that you should use ST_Transform to transform 
coordinates to a visualization-specific coordinate system such as epsg:3857. 
Otherwise you map may look distorted.
+       We highly suggest that you use ST_Transform to transform coordinates to a visualization-specific coordinate system such as epsg:3857; otherwise, your map may look distorted.
 
 ### Aggregate pixels
 
@@ -157,7 +157,7 @@ SELECT pixel, ST_Colorize(weight, (SELECT max(weight) FROM 
pixelaggregates)) as
 FROM pixelaggregates
 ```
 
-Please read [ST_Colorize](../../api/viz/sql/#st_colorize) for a detailed API 
description.
+Please read [ST_Colorize](../api/viz/sql.md#st_colorize) for a detailed API 
description.
 
 ### Render the image
 
diff --git a/docs/tutorial/zeppelin.md b/docs/tutorial/zeppelin.md
index f4e467c30..17e5f1abb 100644
--- a/docs/tutorial/zeppelin.md
+++ b/docs/tutorial/zeppelin.md
@@ -1,4 +1,4 @@
-Sedona provides a Helium visualization plugin tailored for [Apache 
Zeppelin](https://zeppelin.apache.org/). This finally bridges the gap between 
Sedona and Zeppelin.  Please read [Install 
Sedona-Zeppelin](../../setup/zeppelin/) to learn how to install this plugin in 
Zeppelin.
+Sedona provides a Helium visualization plugin tailored for [Apache 
Zeppelin](https://zeppelin.apache.org/). This finally bridges the gap between 
Sedona and Zeppelin.  Please read [Install 
Sedona-Zeppelin](../setup/zeppelin.md) to learn how to install this plugin in 
Zeppelin.
 
 Sedona-Zeppelin offers two approaches to visualize spatial data in Zeppelin. The first approach uses Zeppelin to plot all spatial objects on the map. The second one leverages SedonaViz to generate map images and overlay them on maps.
 
@@ -32,7 +32,7 @@ Select the geometry column to visualize:
 
 ## Large-scale with SedonaViz
 
-SedonaViz is a distributed visualization system that allows you to visualize 
big spatial data at scale. Please read [How to use SedonaViz](../viz).
+SedonaViz is a distributed visualization system that allows you to visualize 
big spatial data at scale. Please read [How to use SedonaViz](viz.md).
 
 You can use Sedona-Zeppelin to ask Zeppelin to overlay SedonaViz images on a map background. This way, you can easily visualize 1 billion spatial objects or more (depending on your cluster size).
 
diff --git a/mkdocs.yml b/mkdocs.yml
index 5748b1aaa..c0eac4247 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -153,7 +153,7 @@ extra:
     - icon: fontawesome/brands/twitter
       link: 'https://twitter.com/ApacheSedona'
     - icon: fontawesome/brands/discord
-      link: './community/discord-invite-form.html'
+      link: 'https://share.hsforms.com/1Ndql_ZigTdmLlVQc_d1o4gqga4q'
   sedona:
     current_version: 1.5.1
     current_geotools: 1.5.1-28.2

