Repository: incubator-zeppelin
Updated Branches:
  refs/heads/master dcda63eb2 -> 0d157aebd


http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/0d157aeb/docs/interpreter/spark.md
----------------------------------------------------------------------
diff --git a/docs/interpreter/spark.md b/docs/interpreter/spark.md
index 87eaac1..6141619 100644
--- a/docs/interpreter/spark.md
+++ b/docs/interpreter/spark.md
@@ -8,7 +8,6 @@ group: manual
 
 
 ## Spark Interpreter for Apache Zeppelin
-
 [Apache Spark](http://spark.apache.org) is supported in Zeppelin with the Spark Interpreter group, which consists of 4 interpreters.
 
@@ -40,14 +39,10 @@ Spark Interpreter group, which consisted of 4 interpreters.
   </tr>
 </table>
 
-<br />
 ## Configuration
-
 Without any configuration, the Spark interpreter works out of the box in local mode. But if you want to connect to your Spark cluster, you'll need to follow the two simple steps below.
 
-
 ### 1. Export SPARK_HOME
-
 In **conf/zeppelin-env.sh**, export the `SPARK_HOME` environment variable with your Spark installation path.
 
 for example
@@ -64,7 +59,6 @@ export SPARK_SUBMIT_OPTIONS="--packages 
com.databricks:spark-csv_2.10:1.2.0"
 ```
 
 ### 2. Set master in Interpreter menu
-
 After starting Zeppelin, go to the **Interpreter** menu and edit the **master** property in your Spark interpreter setting. The value may vary depending on your Spark cluster deployment type.
 
 for example,
@@ -74,25 +68,21 @@ for example,
  * **yarn-client** in Yarn client mode
  * **mesos://host:5050** in Mesos cluster
 
-
-
 That's it. In this way, Zeppelin will work with any version of Spark and any deployment type without rebuilding Zeppelin. ( The Zeppelin 0.5.5-incubating release works with Spark versions up to 1.5.2. )
 
 > Note that without exporting `SPARK_HOME`, Zeppelin runs in local mode with the
 > included version of Spark. The included version may vary depending on the
 > build profile.
 
-<br />
 ## SparkContext, SQLContext, ZeppelinContext
 SparkContext, SQLContext and ZeppelinContext are automatically created and exposed as the variables 'sc', 'sqlContext' and 'z', respectively, in both the Scala and Python environments.
 
 > Note that the Scala and Python environments share the same SparkContext,
 > SQLContext and ZeppelinContext instances.
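 
 For example, a Scala paragraph can use these variables directly; the sketch below only illustrates the idea ( the sample data is arbitrary ):
 
 ```
 %spark
 // 'sc' and 'sqlContext' are created by Zeppelin automatically.
 val rdd = sc.parallelize(1 to 5)
 println(rdd.count())
 val df = sqlContext.createDataFrame(Seq((1, "one"), (2, "two"))).toDF("id", "name")
 df.show()
 ```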
 
-<br />
 <a name="dependencyloading"> </a>
+
 ## Dependency Management
 There are two ways to load external libraries into the Spark interpreter. The first is using Zeppelin's `%dep` interpreter and the second is loading Spark properties.
 
 ### 1. Dynamic Dependency Loading via %dep interpreter
-
 When your code requires an external library, instead of downloading/copying it and restarting Zeppelin, you can easily do the following jobs using the `%dep` interpreter ( a short example follows the list below ).
 
  * Load libraries recursively from Maven repository
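 
 As a quick illustration, a `%dep` paragraph might look like the sketch below ( it reuses the spark-csv artifact shown elsewhere on this page; any Maven coordinate can be loaded the same way ):
 
 ```
 %dep
 // Clear previously loaded artifacts, then fetch the library and its transitive dependencies.
 z.reset()
 z.load("com.databricks:spark-csv_2.10:1.2.0")
 ```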
@@ -182,15 +172,13 @@ Here are few examples:
                spark.jars.packages             com.databricks:spark-csv_2.10:1.2.0
                spark.files                             /path/mylib1.py,/path/mylib2.egg,/path/mylib3.zip
 
-<br />
 ## ZeppelinContext
-
 Zeppelin automatically injects ZeppelinContext as the variable 'z' in your Scala/Python environment. ZeppelinContext provides some additional functions and utilities.
 
 ### Object Exchange
-
 ZeppelinContext extends a map and is shared between the Scala and Python environments.
 So you can put an object in from Scala and read it from Python, and vice versa.
+
 <div class="codetabs">
   <div data-lang="scala" markdown="1">
 
@@ -212,6 +200,7 @@ myObject = z.get("objName")
   
   </div>
 </div>
+
 ### Form Creation
 
 ZeppelinContext provides functions for creating forms. 
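 
 For example, a paragraph creating forms through ZeppelinContext might look like the sketch below ( the form names and option values are only illustrations ):
 
 ```
 %spark
 // Text input form with a default value.
 val name = z.input("name", "sun")
 // Select form; each option is a ( value, displayed label ) pair.
 val day = z.select("day", Seq(("1", "mon"), ("2", "tue"), ("3", "wed")))
 ```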

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/0d157aeb/docs/manual/interpreters.md
----------------------------------------------------------------------
diff --git a/docs/manual/interpreters.md b/docs/manual/interpreters.md
index fd4c2bc..c886718 100644
--- a/docs/manual/interpreters.md
+++ b/docs/manual/interpreters.md
@@ -19,31 +19,24 @@ limitations under the License.
 -->
 {% include JB/setup %}
 
-
 ## Interpreters in Zeppelin
 In this section, we will explain the role of interpreters, interpreter groups and interpreter settings in Zeppelin.
 The concept of a Zeppelin interpreter allows any language/data-processing-backend to be plugged into Zeppelin.
 Currently, Zeppelin supports many interpreters such as Scala ( with Apache Spark ), Python ( with Apache Spark ), SparkSQL, Hive, Markdown, Shell and so on.
 
-<br/>
 ## What is Zeppelin interpreter?
-
 A Zeppelin interpreter is a plug-in which enables Zeppelin users to use a specific language/data-processing-backend. For example, to use Scala code in Zeppelin, you need the `%spark` interpreter.
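 
 For example, a notebook paragraph that starts with the `%spark` prefix is run by the Spark interpreter; a minimal sketch:
 
 ```
 %spark
 // This Scala code is executed by the Spark interpreter.
 println(sc.version)
 ```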
 
 When you click the ```+Create``` button in the interpreter page, the 
interpreter drop-down list box will show all the available interpreters on your 
server.
 
 <img src="/assets/themes/zeppelin/img/screenshots/interpreter_create.png">
 
-<br/>
 ## What is Zeppelin Interpreter Setting?
-
 A Zeppelin interpreter setting is the configuration of a given interpreter on the Zeppelin server. For example, certain properties are required for the Hive JDBC interpreter to connect to the Hive server.
 
 <img src="/assets/themes/zeppelin/img/screenshots/interpreter_setting.png">
 
-<br/>
 ## What is Zeppelin Interpreter Group?
-
 Every interpreter belongs to an **Interpreter Group**. An interpreter group is the unit in which interpreters are started and stopped.
 By default, every interpreter belongs to a single group, but a group might contain more interpreters. For example, the Spark interpreter group includes Spark support, pySpark,
 SparkSQL and the dependency loader.
@@ -53,9 +46,7 @@ Technically, Zeppelin interpreters from the same group are 
running in the same J
 Each interpreter belongs to a single group and they are registered together. All of their properties are listed in the interpreter setting, as shown in the image below.
 <img 
src="/assets/themes/zeppelin/img/screenshots/interpreter_setting_spark.png">
 
-<br/>
 ## Programming Languages for Interpreter
-
 If the interpreter uses a specific programming language ( like Scala, Python, SQL ), it is generally recommended to add syntax highlighting support for that language to the notebook paragraph editor.  
   
 To check out the list of languages supported, see the `mode-*.js` files under 
`zeppelin-web/bower_components/ace-builds/src-noconflict` or from 
[github.com/ajaxorg/ace-builds](https://github.com/ajaxorg/ace-builds/tree/master/src-noconflict).
  
@@ -65,5 +56,3 @@ If you want to add a new set of syntax highlighting,
 1. Add the `mode-*.js` file to `zeppelin-web/bower.json` ( when built, 
`zeppelin-web/src/index.html` will be changed automatically. ).  
 2. Add to the list of `editorMode` in 
`zeppelin-web/src/app/notebook/paragraph/paragraph.controller.js` - it follows 
the pattern 'ace/mode/x' where x is the name.  
 3. Add to the code that checks for `%` prefix and calls 
`session.setMode(editorMode.x)` in `setParagraphMode` located in 
`zeppelin-web/src/app/notebook/paragraph/paragraph.controller.js`.  
-  
-
