This is an automated email from the ASF dual-hosted git repository.

malka pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-sedona.git
commit 41d66f6cc95b57ec4d512163fe2d50264d863e72
Author: Jia Yu <[email protected]>
AuthorDate: Sun Feb 14 15:55:58 2021 -0800

    Update the doc with binder information
---
 README.md                             |  3 +++
 docs/download/compile.md              |  2 +-
 docs/download/features.md             |  2 ++
 docs/download/overview.md             |  4 +++-
 docs/tutorial/geospark-core-python.md |  2 +-
 docs/tutorial/geospark-sql-python.md  |  2 +-
 docs/tutorial/jupyter-notebook.md     | 12 +++++++-----
 7 files changed, 18 insertions(+), 9 deletions(-)

diff --git a/README.md b/README.md
index 72ef9fc..2521fd5 100644
--- a/README.md
+++ b/README.md
@@ -2,6 +2,9 @@
 [](https://github.com/apache/incubator-sedona/actions?query=workflow%3A%22Scala+and+Java+build%22) [](https://github.com/apache/incubator-sedona/actions?query=workflow%3A%22Python+build%22)
+
+Click [](https://mybinder.org/v2/gh/apache/incubator-sedona/HEAD?filepath=binder) and play the interactive Sedona Python Jupyter Notebook immediately!
+
 Apache Sedona™ (incubating) is a cluster computing system for processing large-scale spatial data. Sedona extends Apache Spark / SparkSQL with a set of out-of-the-box Spatial Resilient Distributed Datasets (SRDDs)/ SpatialSQL that efficiently load, process, and analyze large-scale spatial data across machines.
 ### Sedona contains several modules:

diff --git a/docs/download/compile.md b/docs/download/compile.md
index 0d97e73..b7cd233 100644
--- a/docs/download/compile.md
+++ b/docs/download/compile.md
@@ -1,6 +1,6 @@
 # Compile Sedona source code

-[](https://github.com/apache/incubator-sedona/actions?query=workflow%3A%22Scala+and+Java+build%22) [](https://github.com/apache/incubator-sedona/actions?query=workflow%3A%22Python+build%22)
+[](https://github.com/apache/incubator-sedona/actions?query=workflow%3A%22Scala+and+Java+build%22) [](https://github.com/apache/incubator-sedona/actions?query=workflow%3A%22Python+build%22)
diff --git a/docs/download/features.md b/docs/download/features.md
+Click [](https://mybinder.org/v2/gh/apache/incubator-sedona/HEAD?filepath=binder) and play the interactive Sedona Python Jupyter Notebook immediately!
+
 ## Companies are using Sedona

 [<img src="https://www.dataiku.com/static/img/partners/LOGO-Blue-DME-PNG-3.png" width="200">](https://www.bluedme.com/) [<img src="https://images.ukfast.co.uk/comms/news/businesscloud/photos/14-08-2018/gyana.jpg" width="150">](https://www.gyana.co.uk/) [<img src="https://mobike.com/global/public/invitation__footer__logo.png" width="150">](https://mobike.com) and more!

diff --git a/docs/download/overview.md b/docs/download/overview.md
index a295627..21a6440 100644
--- a/docs/download/overview.md
+++ b/docs/download/overview.md
@@ -39,6 +39,8 @@ There are two ways to use a Scala or Java library with Apache Spark. You can use

 ## Install Sedona Python

+Click [](https://mybinder.org/v2/gh/apache/incubator-sedona/HEAD?filepath=binder) and play the interactive Sedona Python Jupyter Notebook immediately!
+
 Apache Sedona extends pyspark functions which depends on libraries:

 * pyspark
@@ -105,4 +107,4 @@ export SPARK_HOME=~/Downloads/spark-3.0.1-bin-hadoop2.7
 export PYTHONPATH=$SPARK_HOME/python
 ```
-You can then play with [Sedona Python Jupyter notebook](/tutorial/jupyter-notebook/)
\ No newline at end of file
+You can then play with [Sedona Python Jupyter notebook](/tutorial/jupyter-notebook/).
\ No newline at end of file

diff --git a/docs/tutorial/geospark-core-python.md b/docs/tutorial/geospark-core-python.md
index 2f2869b..a4d7f1d 100644
--- a/docs/tutorial/geospark-core-python.md
+++ b/docs/tutorial/geospark-core-python.md
@@ -33,7 +33,7 @@ GeoData has one method to get user data.
 <li> getUserData() -> str </li>

 !!!note
-    This tutorial is based on [Sedona Core Jupyter Notebook example](../jupyter-notebook)
+    This tutorial is based on [Sedona Core Jupyter Notebook example](../jupyter-notebook). You can interact with Sedona Python Jupyter notebook immediately on Binder. Click [](https://mybinder.org/v2/gh/apache/incubator-sedona/HEAD?filepath=binder) and wait for a few minutes. Then select a notebook and enjoy!

 ## Installation

diff --git a/docs/tutorial/geospark-sql-python.md b/docs/tutorial/geospark-sql-python.md
index 37d0f75..b5f88e0 100644
--- a/docs/tutorial/geospark-sql-python.md
+++ b/docs/tutorial/geospark-sql-python.md
@@ -14,7 +14,7 @@ spark.sql("YOUR_SQL")
 ```

 !!!note
-    This tutorial is based on [Sedona SQL Jupyter Notebook example](../jupyter-notebook)
+    This tutorial is based on [Sedona SQL Jupyter Notebook example](../jupyter-notebook). You can interact with Sedona Python Jupyter notebook immediately on Binder. Click [](https://mybinder.org/v2/gh/apache/incubator-sedona/HEAD?filepath=binder) and wait for a few minutes. Then select a notebook and enjoy!
 ## Installation

diff --git a/docs/tutorial/jupyter-notebook.md b/docs/tutorial/jupyter-notebook.md
index c69b739..12a12fa 100644
--- a/docs/tutorial/jupyter-notebook.md
+++ b/docs/tutorial/jupyter-notebook.md
@@ -1,21 +1,23 @@
 # Python Jupyter Notebook Examples

-Sedona Python provides two Jupyter Notebook examples: [Sedona core](https://github.com/apache/incubator-sedona/blob/master/python/ApacheSedonaCore.ipynb) and [Sedona SQL](https://github.com/apache/incubator-sedona/blob/master/python/ApacheSedonaSQL.ipynb)
+Click [](https://mybinder.org/v2/gh/apache/incubator-sedona/HEAD?filepath=binder) and play the interactive Sedona Python Jupyter Notebook immediately!
+Sedona Python provides two Jupyter Notebook examples: [Sedona core](https://github.com/apache/incubator-sedona/blob/master/binder/ApacheSedonaCore.ipynb) and [Sedona SQL](https://github.com/apache/incubator-sedona/blob/master/binder/ApacheSedonaSQL.ipynb)

-Please use the following steps to run Jupyter notebook with Pipenv
+
+Please use the following steps to run Jupyter notebook with Pipenv on your machine

 1. Clone Sedona GitHub repo or download the source code
 2. Install Sedona Python from PyPi or GitHub source: Read [Install Sedona Python](/download/overview/#install-sedona) to learn.
 3. Prepare python-adapter jar: Read [Install Sedona Python](/download/overview/#prepare-python-adapter-jar) to learn.
 4. Setup pipenv python version. For Spark 3.0, Sedona supports 3.7 - 3.9
 ```bash
-cd python
+cd binder
 pipenv --python 3.8
 ```
 5. Install dependencies
 ```bash
-cd python
+cd binder
 pipenv install
 ```
 6. Install jupyter notebook kernel for pipenv
@@ -25,7 +27,7 @@ pipenv shell
 ```
 7. In the pipenv shell, do
 ```bash
-python -m ipykernel install --user --name=my-virtualenv-name
+python -m ipykernel install --user --name=apache-sedona
 ```
 8. Setup environment variables `SPARK_HOME` and `PYTHONPATH` if you didn't do it before. Read [Install Sedona Python](/download/overview/#setup-environment-variables) to learn.
 9. Launch jupyter notebook: `jupyter notebook`
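Taken together, the tutorial steps touched by this patch amount to the following shell session. This is a sketch only: it assumes the repository has already been cloned, `pipenv` is installed, and Spark 3.0.1 is unpacked under `~/Downloads` as in the overview doc; all individual commands are taken verbatim from the patched pages.

```shell
# Consolidated workflow from docs/tutorial/jupyter-notebook.md after this
# patch: the notebook environment now lives in binder/, not python/.
cd incubator-sedona/binder
pipenv --python 3.8        # Spark 3.0 supports Python 3.7 - 3.9
pipenv install             # install notebook dependencies from the Pipfile
pipenv shell               # enter the virtualenv
python -m ipykernel install --user --name=apache-sedona
export SPARK_HOME=~/Downloads/spark-3.0.1-bin-hadoop2.7
export PYTHONPATH=$SPARK_HOME/python
jupyter notebook           # then open ApacheSedonaCore.ipynb or ApacheSedonaSQL.ipynb
```

The same `binder/` directory is what the mybinder.org badge launches (`?filepath=binder`), so the hosted and local environments stay in sync.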
