This is an automated email from the ASF dual-hosted git repository.
jiayu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/sedona.git
The following commit(s) were added to refs/heads/master by this push:
new ad9c09698 [DOCS] Standardize Markdown code blocks: word case and whitespace (#1060)
ad9c09698 is described below
commit ad9c09698d7e7cde9cc72cb511540cb603756a09
Author: John Bampton <[email protected]>
AuthorDate: Thu Oct 26 11:21:24 2023 +1000
[DOCS] Standardize Markdown code blocks: word case and whitespace (#1060)
---
R/README.md                            |  8 ++++----
R/vignettes/articles/apache-sedona.Rmd |  2 +-
README.md                              |  6 +++---
docs/api/flink/Function.md             | 10 +++++-----
docs/api/sql/Function.md               | 10 +++++-----
5 files changed, 18 insertions(+), 18 deletions(-)
diff --git a/R/README.md b/R/README.md
index c37b33985..b3202e6ae 100644
--- a/R/README.md
+++ b/R/README.md
@@ -11,7 +11,7 @@ enabling higher-level access through a `{dplyr}` backend and familiar R function
## Installation
To use Apache Sedona from R, you just need to install the apache.sedona package; Spark dependencies are managed directly by the package.
-``` r
+```r
# Install released version from CRAN
install.packages("apache.sedona")
```
@@ -21,7 +21,7 @@ To use the development version, you will need both the latest version of the pac
To get the latest R package from GitHub:
-``` r
+```r
# Install development version from GitHub
devtools::install_github("apache/sedona/R")
```
@@ -40,7 +40,7 @@ The path to the sedona-spark-shaded jars needs to be put in the `SEDONA_JAR_FILE
The first time you load Sedona, Spark will download all the dependent jars, which can take a few minutes and cause the connection to timeout. You can either retry (some jars will already be downloaded and cached) or increase the `"sparklyr.connect.timeout"` parameter in the sparklyr config.
-``` r
+```r
library(sparklyr)
library(apache.sedona)
@@ -51,7 +51,7 @@ sc <- spark_connect(master = "local")
polygon_sdf <- spark_read_geojson(sc, location = "/tmp/polygon.json")
```
-``` r
+```r
mean_area_sdf <- polygon_sdf %>%
dplyr::summarize(mean_area = mean(ST_Area(geometry)))
print(mean_area_sdf)
diff --git a/R/vignettes/articles/apache-sedona.Rmd b/R/vignettes/articles/apache-sedona.Rmd
index 0d28210b5..b08e2dd30 100644
--- a/R/vignettes/articles/apache-sedona.Rmd
+++ b/R/vignettes/articles/apache-sedona.Rmd
@@ -362,7 +362,7 @@ to Sedona visualization routines. For example, the following is
essentially the R equivalent of [this example in Scala](https://github.com/apache/sedona/blob/f6b1c5e24bdb67d2c8d701a9b2af1fb5658fdc4d/viz/src/main/scala/org/apache/sedona/viz/showcase/ScalaExample.scala#L142-L160).
-``` {r}
+```{r}
resolution_x <- 1000
resolution_y <- 600
boundary <- c(-126.790180, -64.630926, 24.863836, 50.000)
diff --git a/README.md b/README.md
index a842b63ca..1ecf592ed 100644
--- a/README.md
+++ b/README.md
@@ -62,11 +62,11 @@ Apache Sedona is a widely used framework for working with spatial data, and it h
This example loads NYC taxi trip records and taxi zone information stored as .CSV files on AWS S3 into Sedona spatial dataframes. It then performs spatial SQL query on the taxi trip datasets to filter out all records except those within the Manhattan area of New York. The example also shows a spatial join operation that matches taxi trip records to zones based on whether the taxi trip lies within the geographical extents of the zone. Finally, the last code snippet integrates the output o [...]
#### Load NYC taxi trips and taxi zones data from CSV Files Stored on AWS S3
-``` python
+```python
taxidf = sedona.read.format('csv').option("header","true").option("delimiter", ",").load("s3a://your-directory/data/nyc-taxi-data.csv")
taxidf = taxidf.selectExpr('ST_Point(CAST(Start_Lon AS Decimal(24,20)), CAST(Start_Lat AS Decimal(24,20))) AS pickup', 'Trip_Pickup_DateTime', 'Payment_Type', 'Fare_Amt')
```
-``` python
+```python
zoneDf = sedona.read.format('csv').option("delimiter", ",").load("s3a://your-directory/data/TIGER2018_ZCTA5.csv")
zoneDf = zoneDf.selectExpr('ST_GeomFromWKT(_c0) as zone', '_c1 as zipcode')
```
@@ -105,7 +105,7 @@ We provide a Docker image for Apache Sedona with Python JupyterLab and a single-
* To install the Python package:
- ```
+ ```
pip install apache-sedona
```
* To compile the source code, please refer to [Sedona website](https://sedona.apache.org/latest-snapshot/setup/compile/)
diff --git a/docs/api/flink/Function.md b/docs/api/flink/Function.md
index e88e4da41..81d82542a 100644
--- a/docs/api/flink/Function.md
+++ b/docs/api/flink/Function.md
@@ -1246,7 +1246,7 @@ Format: `ST_H3CellDistance(cell1: Long, cell2: Long)`
Since: `v1.5.0`
Example:
-```SQL
+```sql
select ST_H3CellDistance(ST_H3CellIDs(ST_GeomFromWKT('POINT(1 2)'), 8, true)[1], ST_H3CellIDs(ST_GeomFromWKT('POINT(1.23 1.59)'), 8, true)[1])
```
@@ -1291,7 +1291,7 @@ Format: `ST_H3CellIDs(geom: geometry, level: Int, fullCover: true)`
Since: `v1.5.0`
Example:
-```SQL
+```sql
SELECT ST_H3CellIDs(ST_GeomFromText('LINESTRING(1 3 4, 5 6 7)'), 6, true)
```
@@ -1318,7 +1318,7 @@ Format: `ST_H3KRing(cell: Long, k: Int, exactRing: Boolean)`
Since: `v1.5.0`
Example:
-```SQL
+```sql
select ST_H3KRing(ST_H3CellIDs(ST_GeomFromWKT('POINT(1 2)'), 8, true)[1], 1, false), ST_H3KRing(ST_H3CellIDs(ST_GeomFromWKT('POINT(1 2)'), 8, true)[1], 1, true)
```
@@ -1342,7 +1342,7 @@ Format: `ST_H3ToGeom(cells: Array[Long])`
Since: `v1.5.0`
Example:
-```SQL
+```sql
SELECT ST_H3ToGeom(ST_H3CellIDs(ST_GeomFromWKT('POINT(1 2)'), 8, true))
```
@@ -2189,7 +2189,7 @@ Since: `v1.4.0`
Example:
-```SQL
+```sql
SELECT ST_S2CellIDs(ST_GeomFromText('LINESTRING(1 3 4, 5 6 7)'), 6)
```
diff --git a/docs/api/sql/Function.md b/docs/api/sql/Function.md
index 29147372b..d41eeaf6a 100644
--- a/docs/api/sql/Function.md
+++ b/docs/api/sql/Function.md
@@ -1257,7 +1257,7 @@ Format: `ST_H3CellDistance(cell1: Long, cell2: Long)`
Since: `v1.5.0`
Spark SQL example:
-```SQL
+```sql
select ST_H3CellDistance(ST_H3CellIDs(ST_GeomFromWKT('POINT(1 2)'), 8, true)[0], ST_H3CellIDs(ST_GeomFromWKT('POINT(1.23 1.59)'), 8, true)[0])
```
@@ -1302,7 +1302,7 @@ Format: `ST_H3CellIDs(geom: geometry, level: Int, fullCover: Boolean)`
Since: `v1.5.0`
Spark SQL example:
-```SQL
+```sql
SELECT ST_H3CellIDs(ST_GeomFromText('LINESTRING(1 3 4, 5 6 7)'), 6, true)
```
@@ -1329,7 +1329,7 @@ Format: `ST_H3KRing(cell: Long, k: Int, exactRing: Boolean)`
Since: `v1.5.0`
Spark SQL example:
-```SQL
+```sql
SELECT ST_H3KRing(ST_H3CellIDs(ST_GeomFromWKT('POINT(1 2)'), 8, true)[0], 1, true) cells union select ST_H3KRing(ST_H3CellIDs(ST_GeomFromWKT('POINT(1 2)'), 8, true)[0], 1, false) cells
```
@@ -1354,7 +1354,7 @@ Format: `ST_H3ToGeom(cells: Array[Long])`
Since: `v1.5.0`
Spark SQL example:
-```SQL
+```sql
SELECT ST_H3ToGeom(ST_H3CellIDs(ST_GeomFromWKT('POINT(1 2)'), 8, true))
```
@@ -2199,7 +2199,7 @@ Since: `v1.4.0`
Spark SQL Example:
-```SQL
+```sql
SELECT ST_S2CellIDs(ST_GeomFromText('LINESTRING(1 3 4, 5 6 7)'), 6)
```
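The normalization the patch above applies by hand (dropping the space after the opening backticks, as in "``` r" to "```r", and lowercasing the info string, as in "```SQL" to "```sql") can be sketched as a small script. This is a hypothetical helper written for illustration, not a tool used by this commit, and it deliberately ignores fence-state tracking that a production linter would need:

```python
import re

# Match a fence opener: optional indent, three backticks, then an info
# string (a language tag like "SQL" or an attribute block like "{r}").
# A bare closing fence "```" has no info string and is left untouched.
FENCE_RE = re.compile(r"^(\s*```)\s*([A-Za-z{][^\n]*)$")

def normalize_fences(markdown: str) -> str:
    """Rewrite fence openers: no space after the backticks, lowercase tag."""
    out = []
    for line in markdown.splitlines():
        m = FENCE_RE.match(line)
        if m:
            line = m.group(1) + m.group(2).lower()
        out.append(line)
    return "\n".join(out)

# "``` r" becomes "```r"; the body and the closing fence are unchanged.
print(normalize_fences("``` r\nx <- 1\n```"))
```

A real implementation would also need to skip fence-like lines inside indented or nested code blocks, which this sketch does not attempt.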