This is an automated email from the ASF dual-hosted git repository.

zjffdu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/zeppelin.git


The following commit(s) were added to refs/heads/master by this push:
     new bf2d702  [minor] Minor doc update
bf2d702 is described below

commit bf2d70290bacf651e49daab5460a9d7b7551cea9
Author: Jeff Zhang <zjf...@apache.org>
AuthorDate: Tue Feb 11 16:07:01 2020 +0800

    [minor] Minor doc update
---
 docs/quickstart/python_with_zeppelin.md | 1 +
 docs/quickstart/spark_with_zeppelin.md  | 3 +--
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/quickstart/python_with_zeppelin.md b/docs/quickstart/python_with_zeppelin.md
index b10d83b..80237f8 100644
--- a/docs/quickstart/python_with_zeppelin.md
+++ b/docs/quickstart/python_with_zeppelin.md
@@ -31,6 +31,7 @@ The following guides explain how to use Apache Zeppelin that enables you to writ
 - can query using [PandasSQL](../interpreter/python.html#sql-over-pandas-dataframes)
 - also, provides [PySpark](../interpreter/spark.html)
 - with [matplotlib integration](../interpreter/python.html#matplotlib-integration)
+- support [ipython](../interpreter/python.html#ipython-interpreter-pythonipython-recommended)
 
 - can create results including **UI widgets** using [Dynamic Form](../interpreter/python.html#using-zeppelin-dynamic-forms)
 
 <br/>
diff --git a/docs/quickstart/spark_with_zeppelin.md b/docs/quickstart/spark_with_zeppelin.md
index 9c423d5..6b35beb 100644
--- a/docs/quickstart/spark_with_zeppelin.md
+++ b/docs/quickstart/spark_with_zeppelin.md
@@ -29,8 +29,7 @@ For a brief overview of Apache Spark fundamentals with Apache Zeppelin, see the
 
 - **built-in** Apache Spark integration.
 - with [SparkSQL](http://spark.apache.org/sql/), [PySpark](https://spark.apache.org/docs/latest/api/python/pyspark.html), [SparkR](https://spark.apache.org/docs/latest/sparkr.html)
-- inject [SparkContext](https://spark.apache.org/docs/latest/api/java/org/apache/spark/SparkContext.html) and [SQLContext](https://spark.apache.org/docs/latest/sql-programming-guide.html) automatically
-- dependencies loading (jars) at runtime using [dependency loader](../interpreter/spark.html#dependencyloading)
+- inject [SparkContext](https://spark.apache.org/docs/latest/api/java/org/apache/spark/SparkContext.html), [SQLContext](https://spark.apache.org/docs/latest/sql-programming-guide.html) and [SparkSession](https://spark.apache.org/docs/latest/sql-programming-guide.html) automatically
 - canceling job and displaying its progress 
 - supporting [Spark Cluster Mode](../setup/deployment/spark_cluster_mode.html#apache-zeppelin-on-spark-cluster-mode) for external spark clusters
 - supports [different context per user / note](../usage/interpreter/interpreter_binding_mode.html)
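
For context on the second hunk above: Zeppelin's Spark interpreter injects the SparkContext, SQLContext and SparkSession into each Spark paragraph, so a note can use them directly. A minimal sketch of what that looks like (the sample data and paragraph contents are illustrative, not part of this commit; the snippet only runs inside a Zeppelin note):

```python
%spark.pyspark
# `sc` (SparkContext), `sqlContext` and `spark` (SparkSession) are
# injected by Zeppelin's Spark interpreter; no manual construction needed.
df = spark.createDataFrame([(1, "spark"), (2, "zeppelin")], ["id", "name"])
df.createOrReplaceTempView("demo")
```

A follow-up `%spark.sql` paragraph could then query the `demo` view created above.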
