This is an automated email from the ASF dual-hosted git repository.

jiayu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/sedona.git


The following commit(s) were added to refs/heads/master by this push:
     new 37766cfe [DOCS] Fix spelling (#765)
37766cfe is described below

commit 37766cfe325f8b98c91b789511f49f9e04b541aa
Author: John Bampton <[email protected]>
AuthorDate: Tue Feb 14 07:39:01 2023 +1000

    [DOCS] Fix spelling (#765)
---
 .../src/test/java/org/apache/sedona/core/spatialRDD/GeometryOpTest.java | 2 +-
 docs/api/flink/Function.md                                              | 2 +-
 docs/api/sql/Function.md                                                | 2 +-
 docs/setup/install-scala.md                                             | 2 +-
 docs/setup/release-notes.md                                             | 2 +-
 docs/tutorial/python-vector-osm.md                                      | 2 +-
 docs/tutorial/viz.md                                                    | 2 +-
 7 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/core/src/test/java/org/apache/sedona/core/spatialRDD/GeometryOpTest.java b/core/src/test/java/org/apache/sedona/core/spatialRDD/GeometryOpTest.java
index 69636b81..75d82cb4 100644
--- a/core/src/test/java/org/apache/sedona/core/spatialRDD/GeometryOpTest.java
+++ b/core/src/test/java/org/apache/sedona/core/spatialRDD/GeometryOpTest.java
@@ -48,7 +48,7 @@ public class GeometryOpTest extends SpatialRDDTestBase
     }
 
     @Test
-    public void testFlipPolygonCoordiantes()
+    public void testFlipPolygonCoordinates()
     {
         PolygonRDD spatialRDD = new PolygonRDD(sc, InputLocation, splitter, true, numPartitions, StorageLevel.MEMORY_ONLY());
         Polygon oldGeom = spatialRDD.rawSpatialRDD.take(1).get(0);
diff --git a/docs/api/flink/Function.md b/docs/api/flink/Function.md
index 239800ee..a976ddd7 100644
--- a/docs/api/flink/Function.md
+++ b/docs/api/flink/Function.md
@@ -550,7 +550,7 @@ SELECT ST_NDims(ST_GeomFromEWKT('POINT(1 1 2)'))
 
 Output: `3`
 
-Spark SQL example with x,y co-ordinate:
+Spark SQL example with x,y coordinate:
 
 ```sql
 SELECT ST_NDims(ST_GeomFromText('POINT(1 1)'))
diff --git a/docs/api/sql/Function.md b/docs/api/sql/Function.md
index 09297fda..2cc145ec 100644
--- a/docs/api/sql/Function.md
+++ b/docs/api/sql/Function.md
@@ -911,7 +911,7 @@ SELECT ST_NDims(ST_GeomFromEWKT('POINT(1 1 2)'))
 
 Output: `3`
 
-Spark SQL example with x,y co-ordinate:
+Spark SQL example with x,y coordinate:
 
 ```sql
 SELECT ST_NDims(ST_GeomFromText('POINT(1 1)'))
diff --git a/docs/setup/install-scala.md b/docs/setup/install-scala.md
index f50b561d..a8f07c73 100644
--- a/docs/setup/install-scala.md
+++ b/docs/setup/install-scala.md
@@ -13,7 +13,7 @@ There are two ways to use a Scala or Java library with Apache Spark. You can use
 
 2. Run Spark shell with `--packages` option. This command will automatically download Sedona jars from Maven Central.
 ```
-./bin/spark-shell --packages MavenCoordiantes
+./bin/spark-shell --packages MavenCoordinates
 ```
 
 * Local mode: test Sedona without setting up a cluster
diff --git a/docs/setup/release-notes.md b/docs/setup/release-notes.md
index c20c4558..588f89fd 100644
--- a/docs/setup/release-notes.md
+++ b/docs/setup/release-notes.md
@@ -320,7 +320,7 @@ This version is a maintenance release on Sedona 1.0.0 line. It includes bug fixe
 
 ### Known issue
 
-In Sedona v1.0.1 and earlier versions, the Spark dependency in setup.py was configured to be ==< v3.1.0== [by mistake](https://github.com/apache/sedona/blob/8235924ac80939cbf2ce562b0209b71833ed9429/python/setup.py#L39). When you install Sedona Python (apache-sedona v1.0.1) from Pypi, pip might uninstall PySpark 3.1.1 and install PySpark 3.0.2 on your machine.
+In Sedona v1.0.1 and earlier versions, the Spark dependency in setup.py was configured to be ==< v3.1.0== [by mistake](https://github.com/apache/sedona/blob/8235924ac80939cbf2ce562b0209b71833ed9429/python/setup.py#L39). When you install Sedona Python (apache-sedona v1.0.1) from PyPI, pip might uninstall PySpark 3.1.1 and install PySpark 3.0.2 on your machine.
 
 Three ways to fix this:
 
diff --git a/docs/tutorial/python-vector-osm.md b/docs/tutorial/python-vector-osm.md
index e06d3b24..23d10d70 100644
--- a/docs/tutorial/python-vector-osm.md
+++ b/docs/tutorial/python-vector-osm.md
@@ -81,7 +81,7 @@ path = "hdfs://776faf4d6a1e:8020/"+file_name
 df = spark.read.json(path, multiLine = "true")
 ```
 
-### Consulting and organizing data for analisis
+### Consulting and organizing data for analysis
 
 ```
 from pyspark.sql.functions import explode, arrays_zip
diff --git a/docs/tutorial/viz.md b/docs/tutorial/viz.md
index 9011626e..3f6ac9e6 100644
--- a/docs/tutorial/viz.md
+++ b/docs/tutorial/viz.md
@@ -108,7 +108,7 @@ LATERAL VIEW explode(ST_Pixelize(ST_Transform(shape, 'epsg:4326','epsg:3857'), 2
 This will give you a 256*256 resolution image after you run ST_Render at the end of this tutorial.
 
 !!!warning
-       We highly suggest that you should use ST_Transform to transform coordiantes to a visualization-specific coordinate system such as epsg:3857. Otherwise you map may look distorted.
+       We highly suggest that you should use ST_Transform to transform coordinates to a visualization-specific coordinate system such as epsg:3857. Otherwise you map may look distorted.
        
 ### Aggregate pixels
 
