Repository: incubator-zeppelin
Updated Branches:
  refs/heads/gh-pages 6f666bcd6 -> 2292dfa4b


http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/2292dfa4/docs/manual/interpreters.md
----------------------------------------------------------------------
diff --git a/docs/manual/interpreters.md b/docs/manual/interpreters.md
deleted file mode 100644
index ab9fdf4..0000000
--- a/docs/manual/interpreters.md
+++ /dev/null
@@ -1,38 +0,0 @@
----
-layout: page
-title: "Interpreters"
-description: ""
-group: manual
----
-{% include JB/setup %}
-
-
-## Interpreters in zeppelin
-
-This section explains the role of interpreters, interpreter groups and interpreter settings in Zeppelin.
-The Zeppelin interpreter concept allows any language/data-processing-backend to be plugged into Zeppelin.
-Currently, Zeppelin supports many interpreters such as Scala (with Apache Spark), Python (with Apache Spark), SparkSQL, Hive, Markdown and Shell.
-
-### What is zeppelin interpreter?
-
-A Zeppelin interpreter is a plug-in which enables Zeppelin users to use a specific language/data-processing-backend. For example, to use Scala code in Zeppelin, you need the ```spark``` interpreter.
-
-When you click the ```+Create``` button on the interpreter page, the interpreter drop-down list will show all the interpreters available on your server.
-
-<img src="../../assets/themes/zeppelin/img/screenshots/interpreter_create.png">
-
-### What is zeppelin interpreter setting?
-
-An interpreter setting is the configuration of a given interpreter on the Zeppelin server: for example, the properties required by the Hive JDBC interpreter to connect to the Hive server.
-
-<img src="../../assets/themes/zeppelin/img/screenshots/interpreter_setting.png">
-
-### What is zeppelin interpreter group?
-
-Every interpreter belongs to an InterpreterGroup; an InterpreterGroup is the unit in which interpreters are started and stopped.
-By default, every interpreter belongs to its own group, but a group may contain more interpreters. For example, the spark interpreter group includes Spark support, PySpark, SparkSQL and the dependency loader.
-
-Technically, Zeppelin interpreters in the same group run in the same JVM.
-
-Interpreters belonging to the same group are registered together, and all of their properties are listed in the interpreter setting.
-
-<img src="../../assets/themes/zeppelin/img/screenshots/interpreter_setting_spark.png">

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/2292dfa4/docs/manual/notebookashomepage.md
----------------------------------------------------------------------
diff --git a/docs/manual/notebookashomepage.md 
b/docs/manual/notebookashomepage.md
deleted file mode 100644
index f1c0fae..0000000
--- a/docs/manual/notebookashomepage.md
+++ /dev/null
@@ -1,96 +0,0 @@
----
-layout: page
-title: "Notebook as Homepage"
-description: ""
-group: manual
----
-{% include JB/setup %}
-
-## Customize your zeppelin homepage
- Zeppelin allows you to use one of the notebooks you create as your zeppelin homepage.
- With that you can brand your zeppelin installation,
- adjust the instructions to your users' needs and even translate them into other languages.
-
- <br />
-### How to set a notebook as your zeppelin homepage
-
-The process for creating your homepage is simple:
- 
- 1. Create a notebook using zeppelin
- 2. Set the notebook id in the config file
- 3. Restart zeppelin
- 
- <br />
-#### Create a notebook using zeppelin
-  Create a new notebook using zeppelin.
-  You can use the ```%md``` interpreter for markdown content or any other interpreter you like.
-  
-  You can also use the display system to generate [text](../displaysystem/display.html),
-  [html](../displaysystem/display.html#html), [table](../displaysystem/table.html) or
-  [angular](../displaysystem/angular.html).
-
-  Run the notebook (shift+Enter) and see the output. Optionally, change the notebook view to report to hide the code sections.
-     
-   <br />
-#### Set the notebook id in the config file
-  To set the notebook id in the config file, copy it from the last segment of the notebook url.
-
-  For example:
-  
-  <img src="../../assets/themes/zeppelin/img/screenshots/homepage_notebook_id.png" />
-
-  Set the notebook id in the ```ZEPPELIN_NOTEBOOK_HOMESCREEN``` environment variable
-  or in the ```zeppelin.notebook.homescreen``` property.
-
-  You can also set the ```ZEPPELIN_NOTEBOOK_HOMESCREEN_HIDE``` environment variable
-  or the ```zeppelin.notebook.homescreen.hide``` property to hide the new notebook from the notebook list.
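
  As a sketch, the corresponding entries in ```conf/zeppelin-env.sh``` could look like this (the notebook id ```2A94M5J1Z``` is a hypothetical placeholder; use the id copied from your own notebook url):

  ```shell
  # Hypothetical notebook id -- replace with the id from your own notebook url
  export ZEPPELIN_NOTEBOOK_HOMESCREEN="2A94M5J1Z"
  # Optional: hide the homepage notebook from the notebook list
  export ZEPPELIN_NOTEBOOK_HOMESCREEN_HIDE="true"
  ```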
-
-  <br />
-#### Restart zeppelin
-  Restart your zeppelin server
-  
-  ```
-  ./bin/zeppelin-daemon.sh stop
-  ./bin/zeppelin-daemon.sh start
-  ```
-  #### That's it! Open your browser, navigate to zeppelin and see your customized homepage...
-    
-  
-<br />
-### Show notebooks list in your custom homepage
-If you want to display the list of notebooks on your custom zeppelin homepage,
-all you need to do is use our %angular support.
-
-  <br />
-  Add the following code to a paragraph in your homepage and run it... voila! You have your notebooks list.
-  
-  ```scala
-  println(
-  """%angular 
-    <div class="col-md-4" ng-controller="HomeCtrl as home">
-      <h4>Notebooks</h4>
-      <div>
-        <h5><a href="" data-toggle="modal" data-target="#noteNameModal" 
style="text-decoration: none;">
-          <i style="font-size: 15px;" class="icon-notebook"></i> Create new 
note</a></h5>
-          <ul style="list-style-type: none;">
-            <li ng-repeat="note in home.notes.list track by $index"><i 
style="font-size: 10px;" class="icon-doc"></i>
-              <a style="text-decoration: none;" 
href="#/notebook/{{note.id}}">{{note.name || 'Note ' + note.id}}</a>
-            </li>
-          </ul>
-      </div>
-    </div>
-  """)
-  ```
-  
-  After running the notebook you will see output similar to this one:
-  <img src="../../assets/themes/zeppelin/img/screenshots/homepage_notebook_list.png" />
-  
-  The main trick here lies in linking the ```<div>``` to the controller:
-  
-  ```html
-  <div class="col-md-4" ng-controller="HomeCtrl as home">
-  ```
-  
-  Once we have ```home``` as our controller variable in our ```<div></div>```,
-  we can use ```home.notes.list``` to access the notebook list.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/2292dfa4/docs/pleasecontribute.md
----------------------------------------------------------------------
diff --git a/docs/pleasecontribute.md b/docs/pleasecontribute.md
deleted file mode 100644
index 4724a66..0000000
--- a/docs/pleasecontribute.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-layout: page
-title: "Please contribute"
-description: ""
-group: development
----
-{% include JB/setup %}
-
-
-### Waiting for your help
-The content does not exist yet.
-
-Contributions are always welcome.
-
-If you're interested, please check [How to contribute 
(website)](./development/howtocontributewebsite.html).

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/2292dfa4/docs/releases/zeppelin-release-0.5.0-incubating.md
----------------------------------------------------------------------
diff --git a/docs/releases/zeppelin-release-0.5.0-incubating.md 
b/docs/releases/zeppelin-release-0.5.0-incubating.md
deleted file mode 100644
index 7f6b347..0000000
--- a/docs/releases/zeppelin-release-0.5.0-incubating.md
+++ /dev/null
@@ -1,64 +0,0 @@
----
-layout: page
-title: "Zeppelin Release 0.5.0-incubating"
-description: ""
-group: release
----
-{% include JB/setup %}
-
-### Zeppelin Release 0.5.0-incubating
-
-Zeppelin 0.5.0-incubating is the first release under Apache incubation, with 
contributions from 42 developers and more than 600 commits.
-
-To download Zeppelin 0.5.0-incubating visit the 
[download](../../download.html) page.
-
-You can visit the [issue tracker](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12316221&version=12329850) for the full list of resolved issues.
-
-### Contributors
-
-The following developers contributed to this release:
-
-* Akshat Aranya - New features and Improvements in UI.
-* Alexander Bezzubov - Improvements and Bug fixes in Core, UI, Build system. New features and Improvements in Spark interpreter. Documentation in roadmap.
-* Anthony Corbacho - Improvements in Website. Bug fixes in Build system. Improvements and Bug fixes in UI. Documentation in roadmap.
-* Brennon York - Improvements and Bug fixes in Build system.
-* CORNEAU Damien - New feature, Improvements and Bug fixes in UI and Build 
system.
-* Corey Huang - Improvements in Build system. New feature in Core.
-* Digeratus - Improvements in Tutorials.
-* Dimitrios Liapis - Improvements in Documentation.
-* DuyHai DOAN - New feature in Build system.
-* Emmanuelle Raffenne - Bug fixes in UI.
-* Eran Medan - Improvements in Documentation.
-* Eugene Morozov - Bug fixes in Core.
-* Felix Cheung - Improvements in Spark interpreter. Improvements in 
Documentation. New features, Improvements and Bug fixes in UI.
-* Hung Lin - Improvements in Core.
-* Hyungu Roh - Bug fixes in UI.
-* Ilya Ganelin - Improvements in Tutorials.
-* JaeHwa Jung - New features in Tajo interpreter.
-* Jakob Homan - Improvements in Website.
-* James Carman - Improvements in Build system.
-* Jongyoul Lee - Improvements in Core, Build system and Spark interpreter. Bug 
fixes in Spark Interpreter. New features in Build system and Spark interpreter. 
Improvements in Documentation.
-* Juarez Bochi - Bug fixes in Build system.
-* Julien Buret - Bug fixes in Spark interpreter.
-* Jérémy Subtil - Bug fixes in Build system.
-* Kevin (SangWoo) Kim - New features in Core, Tutorials. Improvements in 
Documentation. New features, Improvements and Bug fixes in UI.
-* Kyoung-chan Lee - Improvements in Documentation.
-* Lee moon soo - Improvements in Tutorials. New features, Improvements and Bug fixes in Core, UI, Build system and Spark interpreter. New features in Flink interpreter. Improvements in Documentation.
-* Mina Lee - Improvements and Bug fixes in UI. New features in UI. 
Improvements in Core, Website.
-* Rajat Gupta - Bug fixes in Spark interpreter.
-* Ram Venkatesh - Improvements in Core, Build system, Spark interpreter and 
Markdown interpreter. New features and Bug fixes in Hive interpreter.
-* Sebastian YEPES - Improvements in Core.
-* Seckin Savasci - Improvements in Build system.
-* Timothy Shelton - Bug fixes in UI.
-* Vincent Botta - New features in UI.
-* Young boom - Improvements in UI.
-* bobbych - Improvements in Spark interpreter.
-* debugger87 - Bug fixes in Core.
-* dobachi - Improvements in UI.
-* epahomov - Improvements in Core and Spark interpreter.
-* kevindai0126 - Improvements in Core.
-* rahul agarwal - Bug fixes in Core.
-* whisperstream - Improvements in Spark interpreter.
-* yundai - Improvements in Core.
-
-Thanks to everyone who made this release possible!

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/2292dfa4/docs/rest-api/rest-interpreter.md
----------------------------------------------------------------------
diff --git a/docs/rest-api/rest-interpreter.md 
b/docs/rest-api/rest-interpreter.md
deleted file mode 100644
index 8bd56a0..0000000
--- a/docs/rest-api/rest-interpreter.md
+++ /dev/null
@@ -1,195 +0,0 @@
----
-layout: page
-title: "Interpreter REST API"
-description: ""
-group: rest-api
----
-{% include JB/setup %}
-
-## Zeppelin REST API
- Zeppelin provides several REST APIs for interaction with, and remote activation of, zeppelin functionality.
-
- All REST APIs are available under the following endpoint: ```http://[zeppelin-server]:[zeppelin-port]/api```
-
- Note that the zeppelin REST APIs receive and return JSON objects, so it is recommended that you install a JSON viewer such as
- [JSONView](https://chrome.google.com/webstore/detail/jsonview/chklaanhfefbnpoihckbnefhakgolnmc).
-
- If you work with zeppelin and find a need for an additional REST API, please [file an issue or send us mail](../../community.html).
-
- <br />
-### Interpreter REST API list
-  
-  The roles of registered interpreters, settings and interpreter groups are described [here](../manual/interpreters.html).
-  
-  <table class="table-configuration">
-    <col width="200">
-    <tr>
-      <th>List registered interpreters</th>
-      <th></th>
-    </tr>
-    <tr>
-      <td>Description</td>
-      <td>This ```GET``` method returns all the registered interpreters available on the server.</td>
-    </tr>
-    <tr>
-      <td>URL</td>
-      <td>```http://[zeppelin-server]:[zeppelin-port]/api/interpreter```</td>
-    </tr>
-    <tr>
-      <td>Success code</td>
-      <td>200</td>
-    </tr>
-    <tr>
-      <td> Fail code</td>
-      <td> 500 </td>
-    </tr>
-    <tr>
-      <td> sample JSON response
-      </td>
-      <td> [Interpreter list sample](rest-json/rest-json-interpreter-list.json)
-      </td>
-    </tr>
-  </table>
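
  As a usage sketch, this endpoint can be called with ```curl``` (```localhost:8080``` is a placeholder for your own server address and port):

  ```shell
  # Placeholder host/port -- substitute your own Zeppelin server
  curl -X GET http://localhost:8080/api/interpreter
  ```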
-  
-<br/>
-   
-  <table class="table-configuration">
-    <col width="200">
-    <tr>
-      <th>List interpreters settings</th>
-      <th></th>
-    </tr>
-    <tr>
-      <td>Description</td>
-      <td>This ```GET``` method returns all the interpreter settings registered on the server.</td>
-    </tr>
-    <tr>
-      <td>URL</td>
-      <td>```http://[zeppelin-server]:[zeppelin-port]/api/interpreter/setting```</td>
-    </tr>
-    <tr>
-      <td>Success code</td>
-      <td>200</td>
-    </tr>
-    <tr>
-      <td> Fail code</td>
-      <td> 500 </td>
-    </tr>
-    <tr>
-      <td> sample JSON response
-      </td>
-      <td> [Setting list sample](rest-json/rest-json-interpreter-setting.json)
-      </td>
-    </tr>
-  </table>
-
-<br/>
-   
-  <table class="table-configuration">
-    <col width="200">
-    <tr>
-      <th>Create an interpreter setting</th>
-      <th></th>
-    </tr>
-    <tr>
-      <td>Description</td>
-      <td>This ```POST``` method adds a new interpreter setting, based on a registered interpreter, to the server.</td>
-    </tr>
-    <tr>
-      <td>URL</td>
-      <td>```http://[zeppelin-server]:[zeppelin-port]/api/interpreter/setting```</td>
-    </tr>
-    <tr>
-      <td>Success code</td>
-      <td>201</td>
-    </tr>
-    <tr>
-      <td> Fail code</td>
-      <td> 500 </td>
-    </tr>
-    <tr>
-      <td> sample JSON input
-      </td>
-      <td> [Create JSON sample](rest-json/rest-json-interpreter-create.json)
-      </td>
-    </tr>
-    <tr>
-      <td> sample JSON response
-      </td>
-      <td> [Create response 
sample](rest-json/rest-json-interpreter-create-response.json)
-      </td>
-    </tr>
-  </table>
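
  A ```curl``` sketch of creating a setting (```localhost:8080``` is a placeholder for your own server; the payload mirrors the create JSON sample linked above):

  ```shell
  # Placeholder host/port -- substitute your own Zeppelin server
  curl -X POST http://localhost:8080/api/interpreter/setting \
    -H "Content-Type: application/json" \
    -d '{"name":"md2","group":"md","properties":{"propname":"propvalue"},"interpreterGroup":[{"class":"org.apache.zeppelin.markdown.Markdown","name":"md"}]}'
  ```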
-  
-  
-<br/>
-   
-  <table class="table-configuration">
-    <col width="200">
-    <tr>
-      <th>Update an interpreter setting</th>
-      <th></th>
-    </tr>
-    <tr>
-      <td>Description</td>
-      <td>This ```PUT``` method updates an interpreter setting with new 
properties.</td>
-    </tr>
-    <tr>
-      <td>URL</td>
-      <td>```http://[zeppelin-server]:[zeppelin-port]/api/interpreter/setting/[interpreter ID]```</td>
-    </tr>
-    <tr>
-      <td>Success code</td>
-      <td>200</td>
-    </tr>
-    <tr>
-      <td> Fail code</td>
-      <td> 500 </td>
-    </tr>
-    <tr>
-      <td> sample JSON input
-      </td>
-      <td> [Update JSON sample](rest-json/rest-json-interpreter-update.json)
-      </td>
-    </tr>
-    <tr>
-      <td> sample JSON response
-      </td>
-      <td> [Update response 
sample](rest-json/rest-json-interpreter-update-response.json)
-      </td>
-    </tr>
-  </table>
-
-  
-<br/>
-   
-  <table class="table-configuration">
-    <col width="200">
-    <tr>
-      <th>Delete an interpreter setting</th>
-      <th></th>
-    </tr>
-    <tr>
-      <td>Description</td>
-      <td>This ```DELETE``` method deletes a given interpreter setting.</td>
-    </tr>
-    <tr>
-      <td>URL</td>
-      <td>```http://[zeppelin-server]:[zeppelin-port]/api/interpreter/setting/[interpreter ID]```</td>
-    </tr>
-    <tr>
-      <td>Success code</td>
-      <td>200</td>
-    </tr>
-    <tr>
-      <td> Fail code</td>
-      <td> 500 </td>
-    </tr>
-    <tr>
-      <td> sample JSON response
-      </td>
-      <td> [Delete response 
sample](rest-json/rest-json-interpreter-delete-response.json)
-      </td>
-    </tr>
-  </table>
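
  A ```curl``` sketch of deleting a setting (```localhost:8080``` is a placeholder for your own server; ```2AYW25ANY``` is the setting id taken from the create response sample above):

  ```shell
  # Placeholder host/port and interpreter setting id
  curl -X DELETE http://localhost:8080/api/interpreter/setting/2AYW25ANY
  ```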

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/2292dfa4/docs/rest-api/rest-json/rest-json-interpreter-create-response.json
----------------------------------------------------------------------
diff --git a/docs/rest-api/rest-json/rest-json-interpreter-create-response.json 
b/docs/rest-api/rest-json/rest-json-interpreter-create-response.json
deleted file mode 100644
index dd2bda4..0000000
--- a/docs/rest-api/rest-json/rest-json-interpreter-create-response.json
+++ /dev/null
@@ -1 +0,0 @@
-{"status":"CREATED","message":"","body":{"id":"2AYW25ANY","name":"md2","group":"md","properties":{"propname":"propvalue"},"interpreterGroup":[{"class":"org.apache.zeppelin.markdown.Markdown","name":"md"}]}}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/2292dfa4/docs/rest-api/rest-json/rest-json-interpreter-create.json
----------------------------------------------------------------------
diff --git a/docs/rest-api/rest-json/rest-json-interpreter-create.json 
b/docs/rest-api/rest-json/rest-json-interpreter-create.json
deleted file mode 100644
index 778b7b4..0000000
--- a/docs/rest-api/rest-json/rest-json-interpreter-create.json
+++ /dev/null
@@ -1 +0,0 @@
-{"name":"md2","group":"md","properties":{"propname":"propvalue"},"interpreterGroup":[{"class":"org.apache.zeppelin.markdown.Markdown","name":"md"}]}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/2292dfa4/docs/rest-api/rest-json/rest-json-interpreter-delete-response.json
----------------------------------------------------------------------
diff --git a/docs/rest-api/rest-json/rest-json-interpreter-delete-response.json 
b/docs/rest-api/rest-json/rest-json-interpreter-delete-response.json
deleted file mode 100644
index 48aa9be..0000000
--- a/docs/rest-api/rest-json/rest-json-interpreter-delete-response.json
+++ /dev/null
@@ -1 +0,0 @@
-{"status":"OK"}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/2292dfa4/docs/rest-api/rest-json/rest-json-interpreter-list.json
----------------------------------------------------------------------
diff --git a/docs/rest-api/rest-json/rest-json-interpreter-list.json 
b/docs/rest-api/rest-json/rest-json-interpreter-list.json
deleted file mode 100644
index 2489c53..0000000
--- a/docs/rest-api/rest-json/rest-json-interpreter-list.json
+++ /dev/null
@@ -1 +0,0 @@
-{"status":"OK","message":"","body":{"md.md":{"name":"md","group":"md","className":"org.apache.zeppelin.markdown.Markdown","properties":{},"path":"/home/Downloads/incubator-zeppelin-master/interpreter/md"},"flink.flink":{"name":"flink","group":"flink","className":"org.apache.zeppelin.flink.FlinkInterpreter","properties":{"port":{"defaultValue":"6123","description":"port
 of running JobManager"},"host":{"defaultValue":"local","description":"host 
name of running JobManager. \u0027local\u0027 runs flink in local 
mode"}},"path":"/home/Downloads/incubator-zeppelin-master/interpreter/flink"},"ignite.ignitesql":{"name":"ignitesql","group":"ignite","className":"org.apache.zeppelin.ignite.IgniteSqlInterpreter","properties":{"ignite.jdbc.url":{"defaultValue":"jdbc:ignite://localhost:11211/","description":"Ignite
 JDBC connection 
URL."}},"path":"/home/Downloads/incubator-zeppelin-master/interpreter/ignite"},"tajo.tql":{"name":"tql","group":"tajo","className":"org.apache.zeppelin.tajo.TajoInterpre
 
ter","properties":{"tajo.jdbc.uri":{"defaultValue":"jdbc:tajo://localhost:26002/default","description":"The
 URL for 
TajoServer."}},"path":"/home/Downloads/incubator-zeppelin-master/interpreter/tajo"},"sh.sh":{"name":"sh","group":"sh","className":"org.apache.zeppelin.shell.ShellInterpreter","properties":{},"path":"/home/Downloads/incubator-zeppelin-master/interpreter/sh"},"hive.hql":{"name":"hql","group":"hive","className":"org.apache.zeppelin.hive.HiveInterpreter","properties":{"hive.hiveserver2.password":{"defaultValue":"","description":"The
 password for the hive 
user"},"hive.hiveserver2.user":{"defaultValue":"hive","description":"The hive 
user"},"hive.hiveserver2.url":{"defaultValue":"jdbc:hive2://localhost:10000","description":"The
 URL for 
HiveServer2."}},"path":"/home/Downloads/incubator-zeppelin-master/interpreter/hive"},"ignite.ignite":{"name":"ignite","group":"ignite","className":"org.apache.zeppelin.ignite.IgniteInterpreter","properties":{"ignite.config.url":{"defaultValue":
 "","description":"Configuration URL. Overrides all other 
settings."},"ignite.peerClassLoadingEnabled":{"defaultValue":"true","description":"Peer
 class loading enabled. true or 
false"},"ignite.clientMode":{"defaultValue":"true","description":"Client mode. 
true or 
false"},"ignite.addresses":{"defaultValue":"127.0.0.1:47500..47509","description":"Coma
 separated list of addresses (e.g. 127.0.0.1:47500 or 
127.0.0.1:47500..47509)"}},"path":"/home/Downloads/incubator-zeppelin-master/interpreter/ignite"},"psql.sql":{"name":"sql","group":"psql","className":"org.apache.zeppelin.postgresql.PostgreSqlInterpreter","properties":{"postgresql.password":{"defaultValue":"","description":"The
 PostgreSQL user 
password"},"postgresql.max.result":{"defaultValue":"1000","description":"Max 
number of SQL result to 
display."},"postgresql.user":{"defaultValue":"gpadmin","description":"The 
PostgreSQL user 
name"},"postgresql.url":{"defaultValue":"jdbc:postgresql://localhost:5432/","description":"The
 URL for Post
 
greSQL."},"postgresql.driver.name":{"defaultValue":"org.postgresql.Driver","description":"JDBC
 Driver 
Name"}},"path":"/home/Downloads/incubator-zeppelin-master/interpreter/psql"},"geode.oql":{"name":"oql","group":"geode","className":"org.apache.zeppelin.geode.GeodeOqlInterpreter","properties":{"geode.max.result":{"defaultValue":"1000","description":"Max
 number of OQL result to 
display."},"geode.locator.host":{"defaultValue":"localhost","description":"The 
Geode Locator 
Host."},"geode.locator.port":{"defaultValue":"10334","description":"The Geode 
Locator 
Port"}},"path":"/home/Downloads/incubator-zeppelin-master/interpreter/geode"},"cassandra.cassandra":{"name":"cassandra","group":"cassandra","className":"org.apache.zeppelin.cassandra.CassandraInterpreter","properties":{"cassandra.pooling.new.connection.threshold.remote":{"defaultValue":"100","description":"Cassandra
 new connection threshold remove. Protocol V2 and below default \u003d 
100Protocol V3 and above default \u003d 200"},"cas
 
sandra.query.default.fetchSize":{"defaultValue":"5000","description":"Cassandra 
query default fetch size. Default \u003d 
5000"},"cassandra.socket.tcp.no_delay":{"defaultValue":"true","description":"Cassandra
 socket TCP no delay. Default \u003d 
true"},"cassandra.hosts":{"defaultValue":"localhost","description":"Comma 
separated Cassandra hosts (DNS name or IP address). Default \u003d localhost. 
Ex: 
\u0027192.168.0.12,node2,node3\u0027"},"cassandra.credentials.username":{"defaultValue":"none","description":"Cassandra
 credentials username. Default \u003d 
\u0027none\u0027"},"cassandra.pooling.new.connection.threshold.local":{"defaultValue":"100","description":"Cassandra
 new connection threshold local. Protocol V2 and below default \u003d 
100Protocol V3 and above default \u003d 
800"},"cassandra.socket.read.timeout.millisecs":{"defaultValue":"12000","description":"Cassandra
 socket read timeout in millisecs. Default \u003d 
12000"},"cassandra.load.balancing.policy":{"defaultValue":"DEFAULT",
 "description":"Cassandra Load Balancing Policy. Default \u003d new 
TokenAwarePolicy(new 
DCAwareRoundRobinPolicy())"},"cassandra.pooling.max.request.per.connection.local":{"defaultValue":"1024","description":"Cassandra
 max request per connection local. Protocol V2 and below default \u003d 
128Protocol V3 and above default \u003d 
1024"},"cassandra.cluster":{"defaultValue":"Test 
Cluster","description":"Cassandra cluster name. Default \u003d \u0027Test 
Cluster\u0027"},"cassandra.pooling.heartbeat.interval.seconds":{"defaultValue":"30","description":"Cassandra
 pool heartbeat interval in secs. Default \u003d 
30"},"cassandra.query.default.serial.consistency":{"defaultValue":"SERIAL","description":"Cassandra
 query default serial consistency level. Default \u003d 
SERIAL"},"cassandra.retry.policy":{"defaultValue":"DEFAULT","description":"Cassandra
 Retry Policy. Default \u003d 
DefaultRetryPolicy.INSTANCE"},"cassandra.native.port":{"defaultValue":"9042","description":"Cassandra
 native port. Defa
 ult \u003d 
9042"},"cassandra.interpreter.parallelism":{"defaultValue":"10","description":"Cassandra
 interpreter parallelism.Default \u003d 
10"},"cassandra.pooling.pool.timeout.millisecs":{"defaultValue":"5000","description":"Cassandra
 pool time out in millisecs. Default \u003d 
5000"},"cassandra.pooling.max.request.per.connection.remote":{"defaultValue":"256","description":"Cassandra
 max request per connection remote. Protocol V2 and below default \u003d 
128Protocol V3 and above default \u003d 
256"},"cassandra.compression.protocol":{"defaultValue":"NONE","description":"Cassandra
 compression protocol. Available values: NONE, SNAPPY, LZ4. Default \u003d 
NONE"},"cassandra.socket.connection.timeout.millisecs":{"defaultValue":"5000","description":"Cassandra
 socket default connection timeout in millisecs. Default \u003d 
5000"},"cassandra.query.default.consistency":{"defaultValue":"ONE","description":"Cassandra
 query default consistency level. Default \u003d 
ONE"},"cassandra.keyspace":{"def
 aultValue":"system","description":"Cassandra keyspace name. Default \u003d 
\u0027system\u0027"},"cassandra.reconnection.policy":{"defaultValue":"DEFAULT","description":"Cassandra
 Reconnection Policy. Default \u003d new ExponentialReconnectionPolicy(1000, 10 
* 60 * 
1000)"},"cassandra.pooling.max.connection.per.host.local":{"defaultValue":"8","description":"Cassandra
 max connection per host local. Protocol V2 and below default \u003d 8Protocol 
V3 and above default \u003d 
1"},"cassandra.credentials.password":{"defaultValue":"none","description":"Cassandra
 credentials password. Default \u003d 
\u0027none\u0027"},"cassandra.protocol.version":{"defaultValue":"3","description":"Cassandra
 protocol version. Default \u003d 
3"},"cassandra.max.schema.agreement.wait.second":{"defaultValue":"10","description":"Cassandra
 max schema agreement wait in second.Default \u003d 
ProtocolOptions.DEFAULT_MAX_SCHEMA_AGREEMENT_WAIT_SECONDS"},"cassandra.pooling.core.connection.per.host.remote":{"defaultValue":"
 1","description":"Cassandra core connection per host remove. Protocol V2 and 
below default \u003d 1Protocol V3 and above default \u003d 
1"},"cassandra.pooling.core.connection.per.host.local":{"defaultValue":"2","description":"Cassandra
 core connection per host local. Protocol V2 and below default \u003d 2Protocol 
V3 and above default \u003d 
1"},"cassandra.pooling.max.connection.per.host.remote":{"defaultValue":"2","description":"Cassandra
 max connection per host remote. Protocol V2 and below default \u003d 2Protocol 
V3 and above default \u003d 
1"},"cassandra.pooling.idle.timeout.seconds":{"defaultValue":"120","description":"Cassandra
 idle time out in seconds. Default \u003d 
120"},"cassandra.speculative.execution.policy":{"defaultValue":"DEFAULT","description":"Cassandra
 Speculative Execution Policy. Default \u003d 
NoSpeculativeExecutionPolicy.INSTANCE"}},"path":"/home/Downloads/incubator-zeppelin-master/interpreter/cassandra"},"lens.lens":{"name":"lens","group":"lens","className":"o
 
rg.apache.zeppelin.lens.LensInterpreter","properties":{"lens.server.base.url":{"defaultValue":"http://\u003chostname\u003e:\u003cport\u003e/lensapi","description":"The
 URL for Lens 
Server"},"zeppelin.lens.maxThreads":{"defaultValue":"10","description":"If 
concurrency is true then how many 
threads?"},"zeppelin.lens.maxResults":{"defaultValue":"1000","description":"max 
number of rows to 
display"},"lens.client.dbname":{"defaultValue":"default","description":"The 
database schema 
name"},"lens.query.enable.persistent.resultset":{"defaultValue":"false","description":"Apache
 Lens to persist result in 
HDFS?"},"zeppelin.lens.run.concurrent":{"defaultValue":"true","description":"Run
 concurrent Lens 
Sessions"},"lens.session.cluster.user":{"defaultValue":"default","description":"Hadoop
 cluster 
username"}},"path":"/home/Downloads/incubator-zeppelin-master/interpreter/lens"},"spark.spark":{"name":"spark","group":"spark","className":"org.apache.zeppelin.spark.SparkInterpreter","properties":{"spark.
 executor.memory":{"defaultValue":"512m","description":"Executor memory per 
worker instance. ex) 512m, 32g"},"args":{"defaultValue":"","description":"spark 
commandline args"},"spark.yarn.jar":{"defaultValue":"","description":"The 
location of the Spark jar file. If you use yarn as a cluster, we should set 
this 
value"},"zeppelin.spark.useHiveContext":{"defaultValue":"true","description":"Use
 HiveContext instead of SQLContext if it is 
true."},"spark.app.name":{"defaultValue":"Zeppelin","description":"The name of 
spark application."},"spark.cores.max":{"defaultValue":"","description":"Total 
number of cores to use. Empty value uses all available 
core."},"zeppelin.spark.maxResult":{"defaultValue":"1000","description":"Max 
number of SparkSQL result to 
display."},"master":{"defaultValue":"local[*]","description":"Spark master uri. 
ex) 
spark://masterhost:7077"}},"path":"/home/Downloads/incubator-zeppelin-master/interpreter/spark"},"angular.angular":{"name":"angular","group":"angular","classNa
 
me":"org.apache.zeppelin.angular.AngularInterpreter","properties":{},"path":"/home/Downloads/incubator-zeppelin-master/interpreter/angular"},"phoenix.sql":{"name":"sql","group":"phoenix","className":"org.apache.zeppelin.phoenix.PhoenixInterpreter","properties":{"phoenix.jdbc.url":{"defaultValue":"jdbc:phoenix:localhost:2181:/hbase-unsecure","description":"Phoenix
 JDBC connection string"},"phoenix.user":{"defaultValue":"","description":"The 
Phoenix 
user"},"phoenix.driver.name":{"defaultValue":"org.apache.phoenix.jdbc.PhoenixDriver","description":"Phoenix
 Driver classname."},"phoenix.password":{"defaultValue":"","description":"The 
password for the Phoenix 
user"},"phoenix.max.result":{"defaultValue":"1000","description":"Max number of 
SQL results to 
display."}},"path":"/home/Downloads/incubator-zeppelin-master/interpreter/phoenix"},"spark.pyspark":{"name":"pyspark","group":"spark","className":"org.apache.zeppelin.spark.PySparkInterpreter","properties":{"spark.home":{"defaultValue":"","
 description":"Spark home path. Should be provided for 
pyspark"},"zeppelin.pyspark.python":{"defaultValue":"python","description":"Python
 command to run pyspark 
with"}},"path":"/home/Downloads/incubator-zeppelin-master/interpreter/spark"},"spark.sql":{"name":"sql","group":"spark","className":"org.apache.zeppelin.spark.SparkSqlInterpreter","properties":{"zeppelin.spark.concurrentSQL":{"defaultValue":"false","description":"Execute
 multiple SQL concurrently if set 
true."},"zeppelin.spark.maxResult":{"defaultValue":"1000","description":"Max 
number of SparkSQL result to 
display."}},"path":"/home/Downloads/incubator-zeppelin-master/interpreter/spark"},"spark.dep":{"name":"dep","group":"spark","className":"org.apache.zeppelin.spark.DepInterpreter","properties":{"zeppelin.dep.localrepo":{"defaultValue":"local-repo","description":"local
 repository for dependency 
loader"}},"path":"/home/Downloads/incubator-zeppelin-master/interpreter/spark"}}}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/2292dfa4/docs/rest-api/rest-json/rest-json-interpreter-setting.json
----------------------------------------------------------------------
diff --git a/docs/rest-api/rest-json/rest-json-interpreter-setting.json 
b/docs/rest-api/rest-json/rest-json-interpreter-setting.json
deleted file mode 100644
index 04b9486..0000000
--- a/docs/rest-api/rest-json/rest-json-interpreter-setting.json
+++ /dev/null
@@ -1 +0,0 @@
-{"status":"OK","message":"","body":[{"id":"2AY6GV7Q3","name":"spark","group":"spark","properties":{"spark.cores.max":"","spark.yarn.jar":"","master":"local[*]","zeppelin.spark.maxResult":"1000","zeppelin.dep.localrepo":"local-repo","spark.app.name":"Zeppelin","spark.executor.memory":"512m","zeppelin.spark.useHiveContext":"true","args":"","spark.home":"","zeppelin.spark.concurrentSQL":"false","zeppelin.pyspark.python":"python"},"interpreterGroup":[{"class":"org.apache.zeppelin.spark.SparkInterpreter","name":"spark"},{"class":"org.apache.zeppelin.spark.PySparkInterpreter","name":"pyspark"},{"class":"org.apache.zeppelin.spark.SparkSqlInterpreter","name":"sql"},{"class":"org.apache.zeppelin.spark.DepInterpreter","name":"dep"}]},{"id":"2AYUGP2D5","name":"md","group":"md","properties":{"":""},"interpreterGroup":[{"class":"org.apache.zeppelin.markdown.Markdown","name":"md"}]},{"id":"2AWBZQVB8","name":"angular","group":"angular","properties":{},"interpreterGroup":[{"class":"org.apache.zeppe
 
lin.angular.AngularInterpreter","name":"angular"}]},{"id":"2AWSES8Z8","name":"sh","group":"sh","properties":{},"interpreterGroup":[{"class":"org.apache.zeppelin.shell.ShellInterpreter","name":"sh"}]},{"id":"2AWTCSXEX","name":"hive","group":"hive","properties":{"hive.hiveserver2.url":"jdbc:hive2://localhost:10000","hive.hiveserver2.password":"","hive.hiveserver2.user":"hive"},"interpreterGroup":[{"class":"org.apache.zeppelin.hive.HiveInterpreter","name":"hql"}]}]}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/2292dfa4/docs/rest-api/rest-json/rest-json-interpreter-update-response.json
----------------------------------------------------------------------
diff --git a/docs/rest-api/rest-json/rest-json-interpreter-update-response.json 
b/docs/rest-api/rest-json/rest-json-interpreter-update-response.json
deleted file mode 100644
index abaeff1..0000000
--- a/docs/rest-api/rest-json/rest-json-interpreter-update-response.json
+++ /dev/null
@@ -1 +0,0 @@
-{"status":"OK","message":"","body":{"id":"2AYW25ANY","name":"md2","group":"md","properties":{"propname":"Otherpropvalue"},"interpreterGroup":[{"class":"org.apache.zeppelin.markdown.Markdown","name":"md"}]}}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/2292dfa4/docs/rest-api/rest-json/rest-json-interpreter-update.json
----------------------------------------------------------------------
diff --git a/docs/rest-api/rest-json/rest-json-interpreter-update.json 
b/docs/rest-api/rest-json/rest-json-interpreter-update.json
deleted file mode 100644
index 4588a92..0000000
--- a/docs/rest-api/rest-json/rest-json-interpreter-update.json
+++ /dev/null
@@ -1 +0,0 @@
-{"name":"md2","group":"md","properties":{"propname":"Otherpropvalue"},"interpreterGroup":[{"class":"org.apache.zeppelin.markdown.Markdown","name":"md"}]}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/2292dfa4/docs/tutorial/tutorial.md
----------------------------------------------------------------------
diff --git a/docs/tutorial/tutorial.md b/docs/tutorial/tutorial.md
deleted file mode 100644
index 5f8f936..0000000
--- a/docs/tutorial/tutorial.md
+++ /dev/null
@@ -1,185 +0,0 @@
----
-layout: page
-title: "Tutorial"
-description: ""
-group: tutorial
----
-
-### Zeppelin Tutorial
-
-We will assume you have Zeppelin installed already. If that's not the case, 
see [Install](../install/install.html).
-
-Zeppelin's current main backend processing engine is [Apache 
Spark](https://spark.apache.org). If you're new to the system, you might want 
to start by getting an idea of how it processes data to get the most out of 
Zeppelin.
-
-<br />
-### Tutorial with Local File
-
-#### Data Refine
-
-Before you start the Zeppelin tutorial, you will need to download [bank.zip](http://archive.ics.uci.edu/ml/machine-learning-databases/00222/bank.zip).
-
-First, to transform the data from CSV format into an RDD of `Bank` objects, run the following script. This will also remove the header using the `filter` function.
-
-```scala
-val bankText = sc.textFile("yourPath/bank/bank-full.csv")
-
-case class Bank(age:Integer, job:String, marital : String, education : String, 
balance : Integer)
-
-val bank = bankText.map(s=>s.split(";")).filter(s=>s(0)!="\"age\"").map(
-    s=>Bank(s(0).toInt, 
-            s(1).replaceAll("\"", ""),
-            s(2).replaceAll("\"", ""),
-            s(3).replaceAll("\"", ""),
-            s(5).replaceAll("\"", "").toInt
-        )
-)
-
-// Below line works only in spark 1.3.0.
-// For spark 1.1.x and spark 1.2.x,
-// use bank.registerTempTable("bank") instead.
-bank.toDF().registerTempTable("bank")
-```
-
-<br />
-#### Data Retrieval
-
-Suppose we want to see the age distribution from `bank`. To do this, run:
-
-```sql
-%sql select age, count(1) from bank where age < 30 group by age order by age
-```
-
-You can create an input box for the age condition by replacing `30` with `${maxAge=30}`.
-
-```sql
-%sql select age, count(1) from bank where age < ${maxAge=30} group by age 
order by age
-```
-
-Now we want to see the age distribution for a certain marital status, with a combo box to select the status. Run:
-
-```sql
-%sql select age, count(1) from bank where 
marital="${marital=single,single|divorced|married}" group by age order by age
-```
-
-<br />
-### Tutorial with Streaming Data 
-
-#### Data Refine
-
-Since this tutorial is based on Twitter's sample tweet stream, you must configure authentication with a Twitter account. To do this, take a look at [Twitter Credential Setup](https://databricks-training.s3.amazonaws.com/realtime-processing-with-spark-streaming.html#twitter-credential-setup). After you get your API keys, fill in the credential-related values (`apiKey`, `apiSecret`, `accessToken`, `accessTokenSecret`) in the following script with your keys.
-
-This will create an RDD of `Tweet` objects and register the stream data as a table:
-
-```scala
-import org.apache.spark.streaming._
-import org.apache.spark.streaming.twitter._
-import org.apache.spark.storage.StorageLevel
-import scala.io.Source
-import scala.collection.mutable.HashMap
-import java.io.File
-import org.apache.log4j.Logger
-import org.apache.log4j.Level
-import sys.process.stringSeqToProcess
-
-/** Configures the Oauth Credentials for accessing Twitter */
-def configureTwitterCredentials(apiKey: String, apiSecret: String, 
accessToken: String, accessTokenSecret: String) {
-  val configs = new HashMap[String, String] ++= Seq(
-    "apiKey" -> apiKey, "apiSecret" -> apiSecret, "accessToken" -> 
accessToken, "accessTokenSecret" -> accessTokenSecret)
-  println("Configuring Twitter OAuth")
-  configs.foreach{ case(key, value) =>
-    if (value.trim.isEmpty) {
-      throw new Exception("Error setting authentication - value for " + key + 
" not set")
-    }
-    val fullKey = "twitter4j.oauth." + key.replace("api", "consumer")
-    System.setProperty(fullKey, value.trim)
-    println("\tProperty " + fullKey + " set as [" + value.trim + "]")
-  }
-  println()
-}
-
-// Configure Twitter credentials
-val apiKey = "xxxxxxxxxxxxxxxxxxxxxxxxx"
-val apiSecret = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
-val accessToken = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
-val accessTokenSecret = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
-configureTwitterCredentials(apiKey, apiSecret, accessToken, accessTokenSecret)
-
-import org.apache.spark.streaming.twitter._
-val ssc = new StreamingContext(sc, Seconds(2))
-val tweets = TwitterUtils.createStream(ssc, None)
-val twt = tweets.window(Seconds(60))
-
-case class Tweet(createdAt:Long, text:String)
-twt.map(status=>
-  Tweet(status.getCreatedAt().getTime()/1000, status.getText())
-).foreachRDD(rdd=>
-  // Below line works only in spark 1.3.0.
-  // For spark 1.1.x and spark 1.2.x,
-  // use rdd.registerTempTable("tweets") instead.
-  rdd.toDF().registerTempTable("tweets")
-)
-
-twt.print
-
-ssc.start()
-```
-
-<br />
-#### Data Retrieval
-
-For each of the following scripts, you will see a different result every time you click the run button, since they are based on real-time data.
-
-Let's begin by extracting at most 10 tweets that contain the word "girl".
-
-```sql
-%sql select * from tweets where text like '%girl%' limit 10
-```
-
-This time, suppose we want to see how many tweets were created per second during the last 60 seconds. To do this, run:
-
-```sql
-%sql select createdAt, count(1) from tweets group by createdAt order by 
createdAt
-```
-
-
-You can create a user-defined function and use it in Spark SQL. Let's try it by writing a function named `sentiment`. This function returns one of three attitudes (positive, negative, neutral) toward its parameter.
-
-```scala
-def sentiment(s:String) : String = {
-    val positive = Array("like", "love", "good", "great", "happy", "cool", 
"the", "one", "that")
-    val negative = Array("hate", "bad", "stupid", "is")
-    
-    var st = 0;
-
-    val words = s.split(" ")    
-    positive.foreach(p =>
-        words.foreach(w =>
-            if(p==w) st = st+1
-        )
-    )
-    
-    negative.foreach(p=>
-        words.foreach(w=>
-            if(p==w) st = st-1
-        )
-    )
-    if(st>0)
-        "positive"
-    else if(st<0)
-        "negative"
-    else
-        "neutral"
-}
-
-// Below line works only in spark 1.3.0.
-// For spark 1.1.x and spark 1.2.x,
-// use sqlc.registerFunction("sentiment", sentiment _) instead.
-sqlc.udf.register("sentiment", sentiment _)
-
-```
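As a cross-check of the heuristic above, here is the same word-counting logic sketched in Python. The word lists mirror the Scala arrays; this sketch is illustrative only and is not part of the Zeppelin notebook flow.

```python
# Simple word-counting sentiment heuristic, mirroring the Scala version above.
POSITIVE = {"like", "love", "good", "great", "happy", "cool", "the", "one", "that"}
NEGATIVE = {"hate", "bad", "stupid", "is"}

def sentiment(s):
    # Score each whitespace-separated word: +1 if positive, -1 if negative.
    score = sum((w in POSITIVE) - (w in NEGATIVE) for w in s.split(" "))
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```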
-
-To check how people feel about girls using the `sentiment` function we defined above, run:
-
-```sql
-%sql select sentiment(text), count(1) from tweets where text like '%girl%' 
group by sentiment(text)
-```

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/2292dfa4/download.md
----------------------------------------------------------------------
diff --git a/download.md b/download.md
deleted file mode 100644
index b206100..0000000
--- a/download.md
+++ /dev/null
@@ -1,46 +0,0 @@
----
-layout: page
-title: "Download"
-description: ""
-group: nav-right
----
-{% include JB/setup %}
-
-### Download Zeppelin
-
-The latest release of Apache Zeppelin (incubating) is 0.5.0-incubating. 
Released on July 23, 2015 ([release 
notes](./docs/releases/zeppelin-release-0.5.0-incubating.html)) ([git 
tag](https://git-wip-us.apache.org/repos/asf?p=incubator-zeppelin.git;a=tag;h=refs/tags/v0.5.0))
-
-[Download](http://www.apache.org/dyn/closer.cgi/incubator/zeppelin/0.5.0-incubating)
-
-
-### Build from source, installation
-
-Check [install](./docs/install/install.html).
-
-
-<!-- 
--------------
-### Old release
-
-##### Zeppelin-0.3.3 (2014.03.29)
-
-Download <a onclick="ga('send', 'event', 'download', 'zeppelin', '0.3.3');" 
href="https://s3-ap-northeast-1.amazonaws.com/zeppel.in/zeppelin-0.3.3.tar.gz";>zeppelin-0.3.3.tar.gz</a>
 ([release 
note](https://zeppelin-project.atlassian.net/secure/ReleaseNote.jspa?projectId=10001&version=10301))
-
-
-##### Zeppelin-0.3.2 (2014.03.14)
-
-Download <a onclick="ga('send', 'event', 'download', 'zeppelin', '0.3.2');" 
href="https://s3-ap-northeast-1.amazonaws.com/zeppel.in/zeppelin-0.3.2.tar.gz";>zeppelin-0.3.2.tar.gz</a>
 ([release 
note](https://zeppelin-project.atlassian.net/secure/ReleaseNote.jspa?projectId=10001&version=10300))
-
-##### Zeppelin-0.3.1 (2014.03.06)
-
-Download <a onclick="ga('send', 'event', 'download', 'zeppelin', '0.3.1');" 
href="https://s3-ap-northeast-1.amazonaws.com/zeppel.in/zeppelin-0.3.1.tar.gz";>zeppelin-0.3.1.tar.gz</a>
 ([release 
note](https://zeppelin-project.atlassian.net/secure/ReleaseNote.jspa?projectId=10001&version=10201))
-
-##### Zeppelin-0.3.0 (2014.02.07)
-
-Download <a onclick="ga('send', 'event', 'download', 'zeppelin', '0.3.0');" 
href="https://s3-ap-northeast-1.amazonaws.com/zeppel.in/zeppelin-0.3.0.tar.gz";>zeppelin-0.3.0.tar.gz</a>,
 ([release 
note](https://zeppelin-project.atlassian.net/secure/ReleaseNote.jspa?projectId=10001&version=10200))
-
-##### Zeppelin-0.2.0 (2014.01.22)
-
-Download Download <a onclick="ga('send', 'event', 'download', 'zeppelin', 
'0.2.0');" 
href="https://s3-ap-northeast-1.amazonaws.com/zeppel.in/zeppelin-0.2.0.tar.gz";>zeppelin-0.2.0.tar.gz</a>,
 ([release 
note](https://zeppelin-project.atlassian.net/secure/ReleaseNote.jspa?projectId=10001&version=10001))
-
--->
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/2292dfa4/index.md
----------------------------------------------------------------------
diff --git a/index.md b/index.md
deleted file mode 100644
index a5245e6..0000000
--- a/index.md
+++ /dev/null
@@ -1,115 +0,0 @@
----
-layout: page
-title: Zeppelin
-tagline: Less Development, More analysis!
----
-{% include JB/setup %}
-
-<div class="row">
- <div class="col-md-5">
-<h2>Multi-purpose Notebook</h2>
-
-<p style="font-size:16px; color:#555555;font-style:italic;margin-bottom: 
15px;">
-  The Notebook is the place for all your needs
-</p>
-<ul style="list-style-type: none;padding-left:10px;" >
-  <li style="font-size:20px; margin: 5px;"><span class="glyphicon 
glyphicon-import"></span> Data Ingestion</li>
-  <li style="font-size:20px; margin: 5px;"><span class="glyphicon 
glyphicon-eye-open"></span> Data Discovery</li>
-  <li style="font-size:20px; margin: 5px;"><span class="glyphicon 
glyphicon-wrench"></span> Data Analytics</li>
-  <li style="font-size:20px; margin: 5px;"><span class="glyphicon 
glyphicon-dashboard"></span> Data Visualization & Collaboration</li>
-</ul>
-
- </div>
- <div class="col-md-7"><img class="img-responsive" style="border: 1px solid 
#ecf0f1;" height="auto" src="assets/themes/zeppelin/img/notebook.png" /></div>
-</div>
-
-
-<br />
-### Multiple language backend
-
-The Zeppelin interpreter concept allows any language or data-processing backend to be plugged into Zeppelin.
-Currently Zeppelin supports many interpreters, such as Scala (with Apache Spark), Python (with Apache Spark), SparkSQL, Hive, Markdown, and Shell.
-
-<img class="img-responsive" 
src="assets/themes/zeppelin/img/screenshots/multiple_language_backend.png" />
-
-Adding a new language backend is really simple. Learn [how to write a zeppelin interpreter](./docs/development/writingzeppelininterpreter.html).
-
-
-<br />
-### Apache Spark integration
-
-Zeppelin provides built-in Apache Spark integration. You don't need to build a 
separate module, plugin or library for it.
-
-<img src="assets/themes/zeppelin/img/spark_logo.jpg" width="80px" />
-
-Zeppelin's Spark integration provides
-
-- Automatic SparkContext and SQLContext injection
-- Runtime jar dependency loading from the local filesystem or a Maven repository. Learn more about the [dependency loader](./docs/interpreter/spark.html#dependencyloading).
-- Canceling jobs and displaying their progress
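
As a sketch of the dependency loader in use (the artifact coordinates below are only an example), a `%dep` paragraph loads a jar before the Spark interpreter starts:

```scala
%dep
// Load an artifact (and its transitive dependencies) from a Maven repository
// before the first Spark paragraph runs.
z.load("org.apache.commons:commons-csv:1.1")
```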
-
-<br />
-### Data visualization
-
-Some basic charts are already included in Zeppelin. Visualizations are not limited to SparkSQL queries; any output from any language backend can be recognized and visualized.
-
-<div class="row">
-  <div class="col-md-6">
-    <img class="img-responsive" src="./assets/themes/zeppelin/img/graph1.png" 
/>
-  </div>
-  <div class="col-md-6">
-    <img class="img-responsive" src="./assets/themes/zeppelin/img/graph2.png" 
/>
-  </div>
-</div>
-
-#### Pivot chart
-
-With simple drag and drop, Zeppelin aggregates the values and displays them in a pivot chart. You can easily create charts with multiple aggregated values, including sum, count, average, min, and max.
-
-<div class="row">
-  <div class="col-md-8">
-    <img class="img-responsive" 
src="./assets/themes/zeppelin/img/screenshots/pivot.png" />
-  </div>
-</div>
-Learn more about Zeppelin's [Display system](./docs/display.html).
-
-
-<br />
-### Dynamic forms
-
-Zeppelin can dynamically create input forms in your notebook.
-
-<img class="img-responsive" 
src="./assets/themes/zeppelin/img/screenshots/form_input.png" />
-
-Learn more about [Dynamic Forms](./docs/dynamicform.html).
-
-
-<br />
-### Collaboration
-
-A notebook URL can be shared among collaborators. Zeppelin then broadcasts any changes in real time, just like collaboration in Google Docs.
-
-<img src="./assets/themes/zeppelin/img/screenshots/collaboration.png" />
-
-<br />
-### Publish
-
-<p>Zeppelin provides a URL that displays the result only; that page does not include Zeppelin's menus and buttons.
-This way, you can easily embed it as an iframe inside your website.</p>
-
-<div class="row">
-  <img class="img-responsive center-block" 
src="./assets/themes/zeppelin/img/screenshots/publish.png" />
-</div>
-
-<br />
-### 100% Open Source
-
-Apache Zeppelin (incubating) is Apache 2.0 licensed software. Please check out the [source repository](https://github.com/apache/incubator-zeppelin) and [how to contribute](./docs/development/howtocontribute.html).
-
-Zeppelin has a very active development community.
-Join the [Mailing list](./community.html) and report issues on our [Issue 
tracker](https://issues.apache.org/jira/browse/ZEPPELIN).
-
-<br />
-### Undergoing Incubation
-Apache Zeppelin is an effort undergoing 
[incubation](https://incubator.apache.org/index.html) at The Apache Software 
Foundation (ASF), sponsored by the Incubator. Incubation is required of all 
newly accepted projects until a further review indicates that the 
infrastructure, communications, and decision making process have stabilized in 
a manner consistent with other successful ASF projects. While incubation status 
is not necessarily a reflection of the completeness or stability of the code, 
it does indicate that the project has yet to be fully endorsed by the ASF.
- 

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/2292dfa4/robot.txt
----------------------------------------------------------------------
diff --git a/robot.txt b/robot.txt
deleted file mode 100644
index e69de29..0000000

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/2292dfa4/rss.xml
----------------------------------------------------------------------
diff --git a/rss.xml b/rss.xml
deleted file mode 100644
index 106b649..0000000
--- a/rss.xml
+++ /dev/null
@@ -1,28 +0,0 @@
----
-layout: nil
-title : RSS Feed
----
-
-<?xml version="1.0" encoding="UTF-8" ?>
-<rss version="2.0">
-<channel>
-        <title>{{ site.title }}</title>
-        <description>{{ site.title }} - {{ site.author.name }}</description>
-        <link>{{ site.production_url }}{{ site.rss_path }}</link>
-        <link>{{ site.production_url }}</link>
-        <lastBuildDate>{{ site.time | date_to_xmlschema }}</lastBuildDate>
-        <pubDate>{{ site.time | date_to_xmlschema }}</pubDate>
-        <ttl>1800</ttl>
-
-{% for post in site.posts %}
-        <item>
-                <title>{{ post.title }}</title>
-                <description>{{ post.content | xml_escape }}</description>
-                <link>{{ site.production_url }}{{ post.url }}</link>
-                <guid>{{ site.production_url }}{{ post.id }}</guid>
-                <pubDate>{{ post.date | date_to_xmlschema }}</pubDate>
-        </item>
-{% endfor %}
-
-</channel>
-</rss>

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/2292dfa4/screenshots.md
----------------------------------------------------------------------
diff --git a/screenshots.md b/screenshots.md
deleted file mode 100644
index 10e6b57..0000000
--- a/screenshots.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-layout: page
-title: "Screenshots"
-description: ""
----
-{% include JB/setup %}
-
-<div class="row">
-     <div class="col-md-3">
-          <a href="assets/themes/zeppelin/img/screenshots/sparksql.png"><img 
class="thumbnail" src="assets/themes/zeppelin/img/screenshots/sparksql.png" 
/></a>
-          <center>SparkSQL with inline visualization</center>
-     </div>
-     <div class="col-md-3">
-          <a href="assets/themes/zeppelin/img/screenshots/spark.png"><img 
class="thumbnail" src="assets/themes/zeppelin/img/screenshots/spark.png" /></a>
-          <center>Scala code runs with Spark</center>
-     </div>
-     <div class="col-md-3">
-          <a href="assets/themes/zeppelin/img/screenshots/markdown.png"><img 
class="thumbnail" src="assets/themes/zeppelin/img/screenshots/markdown.png" 
/></a>
-          <center>Markdown supported</center>
-     </div>
-</div>
-<br />
-<div class="row">
-     <div class="col-md-3">
-          <a href="assets/themes/zeppelin/img/screenshots/notebook.png"><img 
class="thumbnail" src="assets/themes/zeppelin/img/screenshots/notebook.png" 
/></a>
-          <center>Notebook</center>
-     </div>
-     <div class="col-md-3">
-     </div>
-     <div class="col-md-3">
-     </div>
-</div>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/2292dfa4/sitemap.txt
----------------------------------------------------------------------
diff --git a/sitemap.txt b/sitemap.txt
deleted file mode 100644
index 25c568f..0000000
--- a/sitemap.txt
+++ /dev/null
@@ -1,8 +0,0 @@
----
-# Remember to set production_url in your _config.yml file!
-title : Sitemap
----
-{% for page in site.pages %}
-{{site.production_url}}{{ page.url }}{% endfor %}
-{% for post in site.posts %}
-{{site.production_url}}{{ post.url }}{% endfor %}
\ No newline at end of file
