Modified: zeppelin/site/docs/0.8.2/search_data.json
URL: 
http://svn.apache.org/viewvc/zeppelin/site/docs/0.8.2/search_data.json?rev=1867697&r1=1867696&r2=1867697&view=diff
==============================================================================
--- zeppelin/site/docs/0.8.2/search_data.json (original)
+++ zeppelin/site/docs/0.8.2/search_data.json Sun Sep 29 07:49:17 2019
@@ -1,925 +1,947 @@
 {
   
 
-    "interpreter-alluxio": {
-      "title": "Alluxio Interpreter for Apache Zeppelin",
-      "content"  : "Alluxio Interpreter for Apache ZeppelinOverviewAlluxio is 
a memory-centric distributed storage system enabling reliable data sharing at 
memory-speed across cluster frameworks.Configuration      Name    Class    
Description        alluxio.master.hostname    localhost    Alluxio master 
hostname        alluxio.master.port    19998    Alluxio master port  Enabling 
Alluxio InterpreterIn a notebook, to enable the Alluxio interpreter, click on 
the Gear icon and select Alluxio.Using the Alluxio InterpreterIn a paragraph, 
use %alluxio to select the Alluxio interpreter and then input all 
commands.%alluxiohelpTip : Use ( Ctrl + . ) for autocompletion.Interpreter 
CommandsThe Alluxio interpreter accepts the following commands.            
Operation      Syntax      Description              cat      cat 
"path"      Print the content of the file to the console.     
         chgrp      chgrp "group" "path"    
  Change the group of the directory or file.              chmod      chmod 
"permission" "path"      Change the 
permission of the directory or file.              chown      chown 
"owner" "path"      Change the owner of the 
directory or file.              copyFromLocal      copyFromLocal 
"source path" "remote path"      Copy the 
specified file specified by "source path" to the path 
specified by "remote path".      This command will fail if 
"remote path" already exists.              copyToLocal      
copyToLocal "remote path" "local path"      
Copy the specified file from the path specified by "remote 
path" to a local destination.              count      count 
"path"      Display the number of folders and files matching 
the specified prefix in "path".     
          du      du "path"      Display the size of a file 
or a directory specified by the input path.              fileInfo      fileInfo 
"path"      Print the information of the blocks of a 
specified file.              free      free "path"      Free 
a file or all files under a directory from Alluxio. If the file/directory is 
also      in under storage, it will still be available there.              
getCapacityBytes      getCapacityBytes      Get the capacity of the AlluxioFS.  
            getUsedBytes      getUsedBytes      Get number of bytes used in the 
AlluxioFS.              load      load "path"      Load the 
data of a file or a directory from under storage into Alluxio.              
loadMetadata      loadMetadata "path"      Load the metadata 
of a file or a directory from under storage into Alluxio.              location 
     location "path"      Display a list of hosts that have the file data.              ls      ls "path"   
   List all the files and directories directly under the given path with 
information such as      size.              mkdir      mkdir 
"path1" ... "pathn"      Create 
directory(ies) under the given paths, along with any necessary parent 
directories.      Multiple paths separated by spaces or tabs. This command will 
fail if any of the given paths      already exist.              mount      
mount "path" "uri"      Mount the 
underlying file system path "uri" into the Alluxio namespace 
as "path". The "path"      is assumed not 
to exist and is created by the operation. No data or metadata is loaded from 
under      storage into Alluxio. After a path is mounted, operations on objects 
under the mounted path are      mirrored to the mounted under storage.              mv      mv "source" "destination"      Move a file or directory 
specified by "source" to a new location 
"destination". This command      will fail if 
"destination" already exists.              persist      
persist "path"      Persist a file or directory currently 
stored only in Alluxio to the underlying file system.              pin      pin 
"path"      Pin the given file to avoid evicting it from 
memory. If the given path is a directory, it      recursively pins all the 
files contained and any new files created within this directory.              
report      report "path"      Report to the master that a 
file is lost.              rm      rm "path"      Remove a 
file. This command will fail if the given path is a directory rather than a 
file.              setTtl      setTtl "time"      Set the TTL 
(time to live) in milliseconds to a file.              tail      tail "path"      Print the 
last 1KB of the specified file to the console.              touch      touch 
"path"      Create a 0-byte file at the specified location.   
           unmount      unmount "path"      Unmount the 
underlying file system path mounted in the Alluxio namespace as 
"path". Alluxio      objects under "path" 
are removed from Alluxio, but they still exist in the previously mounted      
under storage.              unpin      unpin "path"      
Unpin the given file to allow Alluxio to evict this file again. If the given 
path is a      directory, it recursively unpins all files contained and any new 
files created within this      directory.              unsetTtl      unsetTtl   
   Remove the TTL (time to live) setting from a file.      How to test 
it's workingBe sure to have configured the Alluxio interpreter correctly, then open a new paragraph and type one of the above commands.Below is a simple example showing how to interact with the Alluxio interpreter.The following steps are performed:using the sh interpreter, a new text file is created on the local machineusing the Alluxio interpreter:the content of the afs (Alluxio File System) root is listedthe previously created file is copied to afsthe content of the afs root is listed again to check the existence of the newly copied filethe content of the copied file is shown (using the tail command)the file previously copied to afs is copied to the local machineusing the sh interpreter, the existence of the new file copied from Alluxio is checked and its content is shown  ",
-      "url": " /interpreter/alluxio",
+    "/interpreter/livy.html": {
+      "title": "Livy Interpreter for Apache Zeppelin",
+      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Livy Interpreter for Apache 
ZeppelinOverviewLivy is an open source REST interface for interacting with 
Spark from anywhere. It supports executing snippets of code or programs in a 
Spark context that runs locally or in YARN.Interactive Scala, Python and R 
shellsBatch submissions in Scala, Java, PythonMulti users can share the same 
server (impersonation support)Can be used for submitting jobs from anywhere 
with RESTDoes not require any code change to your programsRequirementsAdditional requirements for the 
Livy interpreter are:Spark 1.3 or above.Livy server.ConfigurationWe added some common configurations for Spark, and you can set any configuration you want.You can find all Spark configurations here.Instead of starting a property with spark., replace the prefix with livy.spark..Example: spark.driver.memory becomes livy.spark.driver.memory      Property    Default    Description        
zeppelin.livy.url    http://localhost:8998    URL where livy server is running  
      zeppelin.livy.spark.sql.maxResult    1000    Max number of Spark SQL 
result to display.        zeppelin.livy.spark.sql.field.truncate    true    
Whether to truncate field values longer than 20 characters or not        
zeppelin.livy.session.create_timeout    120    Timeout in seconds for session 
creation        zeppelin.livy.displayAppInfo    true    Whether to display app 
info        zeppelin.livy.pull_status.interval.millis    1000    The interval for checking paragraph execution status        livy.spark.driver.cores   
     Driver cores. ex) 1, 2.          livy.spark.driver.memory        Driver 
memory. ex) 512m, 32g.          livy.spark.executor.instances        Executor 
instances. ex) 1, 4.          livy.spark.executor.cores        Num cores per 
executor. ex) 1, 4.        livy.spark.executor.memory        Executor memory 
per worker instance. ex) 512m, 32g.        livy.spark.dynamicAllocation.enabled 
       Use dynamic resource allocation. ex) True, False.        
livy.spark.dynamicAllocation.cachedExecutorIdleTimeout        Remove an 
executor which has cached data blocks.        
livy.spark.dynamicAllocation.minExecutors        Lower bound for the number of 
executors.        livy.spark.dynamicAllocation.initialExecutors        Initial 
number of executors to run.        livy.spark.dynamicAllocation.maxExecutors    
    Upper bound for the number of executors.            
livy.spark.jars.packages            Adding extra libraries to livy interpreter          zeppelin.livy.ssl.trustStore        client 
trustStore file. Used when livy ssl is enabled        
zeppelin.livy.ssl.trustStorePassword        password for trustStore file. Used 
when livy ssl is enabled        zeppelin.livy.http.headers    key_1: value_1; 
key_2: value_2    custom http headers when calling livy rest api. Each http 
header is separated by `;`, and each header is one key value pair where key 
value is separated by `:`  We removed livy.spark.master in zeppelin-0.7 because we suggest using livy 0.3 with zeppelin-0.7, and livy 0.3 does not allow specifying livy.spark.master; it enforces yarn-cluster mode.Adding External librariesYou can load dynamic libraries into the livy interpreter by setting the livy.spark.jars.packages property to a comma-separated list of maven coordinates of jars to include on the driver and executor classpaths. The format for the 
coordinates should be groupId:artifactId:version.Example      Property    
Example    Description
           livy.spark.jars.packages      io.spray:spray-json_2.10:1.3.1      
Adding extra libraries to livy interpreter      How to useBasically, you can 
usespark%livy.sparksc.versionpyspark%livy.pysparkprint 
"1"sparkR%livy.sparkrhello <- function( name ) {    
sprintf( "Hello, %s", name 
);}hello("livy")ImpersonationWhen the Zeppelin server is running with authentication enabled, this interpreter utilizes Livy’s user impersonation feature, i.e. it sends an extra parameter for creating and running a session ("proxyUser": "${loggedInUser}").This is particularly useful when multiple users are sharing a Notebook server.Apply Zeppelin Dynamic FormsYou can leverage Zeppelin Dynamic Form. Form templates are only available for the livy sql interpreter.%livy.sqlselect * from products where ${product_id=1}Creating dynamic forms programmatically is not feasible in the livy interpreter, because ZeppelinContext is not available in the livy interpreter.Shared SparkContextStarting from livy 0.5 
which is supported by Zeppelin 0.8.0, SparkContext is shared between scala, 
python, r and sql.That means you can query the table via %livy.sql when this 
table is registered in %livy.spark, %livy.pyspark, %livy.sparkr (see the sketch after this entry).FAQLivy 
debugging: If you see any of these in error consoleConnect to livyhost:8998 
[livyhost/127.0.0.1, livyhost/0:0:0:0:0:0:0:1] failed: Connection refusedLooks 
like the livy server is not up yet or the config is wrongException: Session not 
found, Livy server would have restarted, or lost session.The session would have 
timed out, you may need to restart the interpreter.Blacklisted configuration 
values in session config: spark.masterEdit conf/spark-blacklist.conf file in 
livy server and comment out #spark.master line.If you choose to work on livy in 
apps/spark/java directory in https://github.com/cloudera/hue,copy 
spark-user-configurable-options.template to spark-user-configurable-options.conf file in livy server and comment out #spark.master.",
+      "url": " /interpreter/livy.html",
       "group": "interpreter",
-      "excerpt": "Alluxio is a memory-centric distributed storage system 
enabling reliable data sharing at memory-speed across cluster frameworks."
+      "excerpt": "Livy is an open source REST interface for interacting with 
Spark from anywhere. It supports executing snippets of code or programs in a 
Spark context that runs locally or in YARN."
     }
     ,
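A minimal sketch of the shared-SparkContext behaviour described in the Livy entry above: a table registered from a %livy.spark (Scala) paragraph can then be queried from %livy.sql. It assumes Livy 0.5+ behind Zeppelin 0.8, Spark 2.x (so Livy exposes a SparkSession as spark), and a reachable server at the configured zeppelin.livy.url; the view name sample_values and the form variable maxValue are illustrative only.

%livy.spark
// Scala runs inside the Livy-managed Spark session; `spark` is the SparkSession
// exposed by Livy 0.5+ when Spark 2.x is used (an assumption for this sketch).
val df = spark.range(0, 100).toDF("value")
// Registering a temp view makes the data visible to %livy.sql via the shared context.
df.createOrReplaceTempView("sample_values")
df.count()

%livy.sql
-- Dynamic forms are available in the livy sql interpreter; ${maxValue=10} renders a textbox.
select value from sample_values where value < ${maxValue=10}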
     
   
 
-    "usage-display-system-angular-backend": {
-      "title": "Backend Angular API in Apache Zeppelin",
-      "content"  : "Backend Angular API in Apache ZeppelinOverviewAngular 
display system treats output as a view template for AngularJS.It compiles 
templates and displays them inside of Apache Zeppelin. Zeppelin provides a 
gateway between your interpreter and your compiled AngularJS view 
templates.Therefore, you can not only update scope variables from your 
interpreter but also watch them in the interpreter, which is JVM process.Basic 
UsagePrint AngularJS viewTo use angular display system, you should start with 
%angular.Since name is not defined, Hello will display Hello.Please Note: 
Display system is backend independent.Bind / Unbind VariablesThrough 
ZeppelinContext, you can bind / unbind variables to AngularJS view. Currently, 
it only works in Spark Interpreter ( scala ).// bind my 
'object' as angular scope variable 'name' in 
current notebook.z.angularBind(String name, Object object)// bind my 
'object' as angular scope variable 'name' in all notebooks related to current 
interpreter.z.angularBindGlobal(String name, Object object)// unbind angular 
scope variable 'name' in current 
notebook.z.angularUnbind(String name)// unbind angular scope variable 
'name' in all notebooks related to current 
interpreter.z.angularUnbindGlobal(String name)Using the above example, 
let's bind world variable to name. Then you can see AngularJs view is 
immediately updated.Watch / Unwatch VariablesThrough ZeppelinContext, you can 
watch / unwatch variables in AngularJs view. Currently, it only works in Spark 
Interpreter ( scala ).// register for angular scope variable 
'name' (notebook)z.angularWatch(String name, (before, after) 
=> { ... })// unregister watcher for angular variable 
'name' (notebook)z.angularUnwatch(String name)// register for 
angular scope variable 'name' 
(global)z.angularWatchGlobal(String name, (before, after) => { ... })// unregister watcher for angular variable 
'name' (global)z.angularUnwatchGlobal(String name)Let's 
make a button. When it is clicked, the value of run will be increased 1 by 
1.z.angularBind("run", 0) will initialize run to zero. And 
then, it will be also applied to run in z.angularWatch().When the button is 
clicked, you'll see both run and numWatched are incremented by 
1.Let's make it Simpler and more IntuitiveIn this section, we will 
introduce a simpler and more intuitive way of using Angular Display System in 
Zeppelin.Here are some usages.Import// In notebook scopeimport 
org.apache.zeppelin.display.angular.notebookscope._import AngularElem._// In 
paragraph scopeimport 
org.apache.zeppelin.display.angular.paragraphscope._import AngularElem._Display 
Element// automatically convert to string and print with %angular display 
system directive in front.<div></div>.displayEvent 
Handler// 
 on click<div></div>.onClick(() => {   my 
callback routine}).display// on 
change<div></div>.onChange(() => {  my 
callback routine}).display// arbitrary 
event<div></div>.onEvent("ng-click",
 () => {  my callback routine}).displayBind Model// bind 
model<div></div>.model("myModel").display//
 bind model with initial 
value<div></div>.model("myModel", 
initialValue).displayInteract with Model// read 
modelAngularModel("myModel")()// update 
modelAngularModel("myModel", 
"newValue")Example: Basic UsageUsing the above basic usages, 
you can apply them like below examples.Display Elements<div 
style="color:blue">  <h4>Hello Angular 
Display System</h4></div>.displayOnClick Event<div class="btn btn-success">  
Click me</div>.onClick{() =>  // callback for button 
click}.displayBind Model  
<div>{{{{myModel}}}}</div>.model("myModel",
 "Initial Value").displayInteract With Model// read the 
valueAngularModel("myModel")()// update the 
valueAngularModel("myModel", "New 
value")Example: String ConverterUsing below example, you can convert 
the lowercase string to uppercase.// clear previously created angular 
object.AngularElem.disassociateval button = <div class="btn 
btn-success btn-sm">Convert</div>.onClick{() 
=>  val inputString = AngularModel("input")().toString 
 AngularModel("title", 
inputString.toUpperCase)}<div>  { <h4> 
{{{{title}}}}</h4>.model("title", "Please type text to convert uppercase") 
}   Your text { <input 
type="text"></input>.model("input",
 "") }  {button}</div>.display",
-      "url": " /usage/display_system/angular_backend",
-      "group": "usage/display_system",
-      "excerpt": "Apache Zeppelin provides a gateway between your interpreter 
and your compiled AngularJS view templates. You can not only update scope 
variables from your interpreter but also watch them in the interpreter, which 
is JVM process."
+    "/interpreter/pig.html": {
+      "title": "Pig Interpreter for Apache Zeppelin",
+      "content"  : "Pig Interpreter for Apache ZeppelinOverviewApache Pig is a 
platform for analyzing large data sets that consists of a high-level language 
for expressing data analysis programs, coupled with infrastructure for 
evaluating these programs. The salient property of Pig programs is that their 
structure is amenable to substantial parallelization, which in turn enables 
them to handle very large data sets.Supported interpreter type%pig.script 
(default Pig interpreter, so you can use %pig)%pig.script is like the Pig grunt shell. Anything you can run in the Pig grunt shell can be run in the %pig.script interpreter; it is used for running Pig scripts where you don’t need to visualize the data, and it is suitable for data munging. %pig.query%pig.query is a little different from %pig.script. It is used for exploratory data analysis via Pig Latin, where you can leverage Zeppelin’s visualization ability. There are 2 minor differences in the last statement between %pig.script and %pig.query:No pig alias in the last statement in %pig.query (read the examples below).The last statement must be a single line in %pig.queryHow to useHow to set up Pig execution modes.Local ModeSet zeppelin.pig.execType as 
local.MapReduce ModeSet zeppelin.pig.execType as mapreduce. HADOOP_CONF_DIR 
needs to be specified in ZEPPELIN_HOME/conf/zeppelin-env.sh.Tez Local ModeOnly 
Tez 0.7 is supported. Set zeppelin.pig.execType as tez_local.Tez ModeOnly Tez 
0.7 is supported. Set zeppelin.pig.execType as tez. HADOOP_CONF_DIR and 
TEZ_CONF_DIR need to be specified in ZEPPELIN_HOME/conf/zeppelin-env.sh.Spark 
Local ModeOnly Spark 1.6.x is supported, by default it is Spark 1.6.3. Set 
zeppelin.pig.execType as spark_local.Spark ModeOnly Spark 1.6.x is supported, 
by default it is Spark 1.6.3. Set zeppelin.pig.execType as spark. For now, only 
yarn-client mode is supported. To enable it, you need to set property 
SPARK_MASTER to yarn-client and set SPARK_JAR to the spark assembly jar.How 
 to choose custom Spark VersionBy default, Pig Interpreter would use Spark 
1.6.3 built with scala 2.10, if you want to use another spark version or scala 
version, you need to rebuild Zeppelin by specifying the custom Spark version 
via -Dpig.spark.version= and scala version via -Dpig.scala.version= in the 
maven build command.How to configure interpreterAt the Interpreters menu, you 
have to create a new Pig interpreter. Pig interpreter has below properties by 
default.And you can set any Pig properties here which will be passed to Pig 
engine (like tez.queue.name & mapred.job.queue.name).Besides, we use the paragraph title as the job name if it exists, otherwise the last line of the Pig script, so you can use that to find the app running in the YARN RM UI.            Property      
  Default        Description                zeppelin.pig.execType        
mapreduce        Execution mode for pig runtime. local | mapreduce | tez_local 
| tez | spark_local | spark                 zeppelin.pig.includeJobStats        false        whether to display jobStats info in %pig.script            
    zeppelin.pig.maxResult        1000        max row number displayed in 
%pig.query                tez.queue.name        default        queue name for 
tez engine                mapred.job.queue.name        default        queue 
name for mapreduce engine                SPARK_MASTER        local        local 
| yarn-client                SPARK_JAR                The spark assembly jar, 
both jar in local or hdfs is supported. Put it on hdfs could have        
performance benefit      Examplepig%pigbankText = load 
'bank.csv' using PigStorage(';');bank = foreach 
bankText generate $0 as age, $1 as job, $2 as marital, $3 as education, $5 as 
balance; bank = filter bank by age != 
'"age"';bank = foreach bank generate 
(int)age, REPLACE(job,'"','') as job, 
REPLACE(marital, '"', '') as marital, (int)(REPLACE(balance, '"', 
'')) as balance;store bank into 
'clean_bank.csv' using PigStorage(';'); -- this 
statement is optional, it just shows you that most of the time %pig.script is used for data munging before querying the data. pig.queryGet the number of each age 
where age is less than 30%pig.querybank_data = filter bank by age < 30;b 
= group bank_data by age;foreach b generate group, COUNT($1);The same as above, 
but use a dynamic text form so that the user can specify the variable maxAge in a textbox (see the screenshot below). Dynamic form is a very cool feature of Zeppelin; you can refer to this link for details.%pig.querybank_data = filter 
bank by age < ${maxAge=40};b = group bank_data by age;foreach b generate 
group, COUNT($1) as count;Get the number of each age for specific marital type, 
also use dynamic form here. User can choose the marital type in the dropdown 
list (see screenshot
  below).%pig.querybank_data = filter bank by 
marital=='${marital=single,single|divorced|married}';b = group 
bank_data by age;foreach b generate group, COUNT($1) as count;The above 
examples are in the Pig tutorial note in Zeppelin, you can check that for 
details. Here's the screenshot.Data is shared between %pig and 
%pig.query, so that you can do some common work in %pig and run different kinds of queries based on the data of %pig. Besides, we recommend you to specify the alias explicitly so that the visualization can display the column name correctly. In examples 2 and 3 of %pig.query above, we name COUNT($1) as count. If you 
don't do this, then we will name it using position. E.g. in the above 
first example of %pig.query, we will use col_1 in chart to represent 
COUNT($1).",
+      "url": " /interpreter/pig.html",
+      "group": "manual",
+      "excerpt": "Apache Pig is a platform for analyzing large data sets that 
consists of a high-level language for expressing data analysis programs, 
coupled with infrastructure for evaluating these programs."
     }
     ,
     
   
 
-    "usage-display-system-angular-frontend": {
-      "title": "Frontend Angular API in Apache Zeppelin",
-      "content"  : "Frontend Angular API in Apache ZeppelinBasic UsageIn 
addition to the backend Angular API to handle Angular objects binding, Apache 
Zeppelin also exposes a simple AngularJS z object on the front-end side to 
expose the same capabilities.This z object is accessible in the Angular 
isolated scope for each paragraph.Bind / Unbind VariablesThrough the z, you can 
bind / unbind variables to AngularJS view.Bind a value to an angular object and 
a mandatory target paragraph:%angular<form 
class="form-inline">  <div 
class="form-group">    <label 
for="superheroId">Super Hero: </label>   
 <input type="text" 
class="form-control" id="superheroId" 
placeholder="Superhero name ..." 
ng-model="superhero"></input>  
</div>  <button type="submit" class="btn btn-primary" 
ng-click="z.angularBind('superhero',superhero,'20160222-232336_1472609686')">
 Bind</button></form>Unbind/remove a value from 
angular object and a mandatory target paragraph:%angular<form 
class="form-inline">  <button 
type="submit" class="btn btn-primary" 
ng-click="z.angularUnbind('superhero','20160222-232336_1472609686')">
 UnBind</button></form>The signature for the 
z.angularBind() / z.angularUnbind() functions are:// 
Bindz.angularBind(angularObjectName, angularObjectValue, paragraphId);// 
Unbindz.angularUnbind(angularObjectName, angularObjectValue, paragraphId);All 
the parameters are mandatory.Run ParagraphYou can also trigger paragraph 
execution by calling z.runParagraph() function passing the appropriate paragraphId: %angular<form 
class="form-inline">  <div 
class="form-group">    <label 
for="paragraphId">Paragraph Id: </label> 
   <input type="text" 
class="form-control" id="paragraphId" 
placeholder="Paragraph Id ..." 
ng-model="paragraph"></input>  
</div>  <button type="submit" 
class="btn btn-primary" 
ng-click="z.runParagraph(paragraph)"> Run 
Paragraph</button></form>Overriding dynamic form 
with Angular ObjectThe front-end Angular Interaction API has been designed to 
offer richer form capabilities and variable binding. With the existing Dynamic 
Form system you can already create input text, select and checkbox forms but 
the choice is rather limited and the look & feel cannot be changed.The idea 
is to create a custom form using plain HTML/AngularJS code and bind actions on 
this form to push/remove Angular variables to targeted paragraphs using this 
new API. Consequently if you use the Dynamic Form syntax in a paragraph and 
there is a bound Angular object having the same name as the ${formName}, the 
Angular object will have higher priority and the Dynamic Form will not be 
displayed. Example: Feature matrix comparisonHow does the front-end AngularJS 
API compares to the backend Angular API? Below is a comparison matrix for both 
APIs:                        Actions            Front-end API            
Back-end API                                Initiate binding            
z.angularbind(var, initialValue, paragraphId)            z.angularBind(var, 
initialValue)                            Update value            same to 
ordinary angularjs scope variable, or z.angularbind(var, newValue, paragraphId) 
    
        z.angularBind(var, newValue)                            Watching value  
          same to ordinary angularjs scope variable            
z.angularWatch(var, (oldVal, newVal) => ...)                            
Destroy binding            z.angularUnbind(var, paragraphId)            
z.angularUnbind(var)                            Executing Paragraph            
z.runParagraph(paragraphId)            z.run(paragraphId)                       
     Executing Paragraph (Specific paragraphs in other notes) (                 
       z.run(noteid, paragraphId)                            Executing note     
                   z.runNote(noteId)                     Both APIs are pretty 
similar, except for value watching where it is done naturally by AngularJS 
internals on the front-end and by user custom watcher functions in the 
back-end.There is also a slight difference in term of scope. Front-end API 
limits the Angular object binding to a paragraph scope whereas back-end API 
allows you to 
 bind an Angular object at the global or note scope. This restriction has been 
designed purposely to avoid Angular object leaks and scope pollution.",
-      "url": " /usage/display_system/angular_frontend",
-      "group": "usage/display_system",
-      "excerpt": "In addition to the back-end API to handle Angular objects 
binding, Apache Zeppelin exposes a simple AngularJS z object on the front-end 
side to expose the same capabilities."
+    "/interpreter/markdown.html": {
+      "title": "Markdown Interpreter for Apache Zeppelin",
+      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Markdown Interpreter for 
Apache ZeppelinOverviewMarkdown is a plain text formatting syntax designed so 
that it can be converted to HTML.Apache Zeppelin uses pegdown and markdown4j as 
markdown parsers.In Zeppelin notebook, you can use %md in the beginning of a 
paragraph to invoke the Markdown interpreter and generate static html from 
Markdown plain text.In Zeppelin, Markdown interpreter is enabled by default and 
uses the pegdown parser.ExampleThe following example demonstrates the basic usage of Markdown in a 
Zeppelin notebook.Mathematical expressionMarkdown interpreter leverages %html 
display system internally. That means you can mix mathematical expressions with 
markdown syntax. For more information, please see Mathematical Expression 
section.Configuration      Name    Default Value    Description        
markdown.parser.type    pegdown    Markdown Parser Type.  Available values: 
pegdown, markdown4j.  Pegdown ParserThe pegdown parser provides github flavored markdown, and it also provides the YUML and Websequence plugins. Markdown4j ParserSince the pegdown parser is more accurate and provides much more markdown syntax, the markdown4j option might be removed later, but this parser is kept for backward compatibility.",
+      "url": " /interpreter/markdown.html",
+      "group": "interpreter",
+      "excerpt": "Markdown is a plain text formatting syntax designed so that 
it can be converted to HTML. Apache Zeppelin uses pegdown and markdown4j as its markdown parsers."
     }
     ,
     
   
-  
 
-    "setup-security-authentication-nginx": {
-      "title": "HTTP Basic Auth using NGINX",
-      "content"  : "Authentication for NGINXBuild in authentication mechanism 
is the recommended way for authentication. If you want to authenticate using NGINX and HTTP basic auth, please read this document.HTTP Basic Authentication 
using NGINXQuote from Wikipedia: NGINX is a web server. It can act as a reverse 
proxy server for HTTP, HTTPS, SMTP, POP3, and IMAP protocols, as well as a load 
balancer and an HTTP cache.So you can use NGINX server as proxy server to serve 
HTTP Basic Authentication as a separate process along with Zeppelin server.Here 
are instructions on how to set up NGINX as a front-end authentication server and connect Zeppelin behind it.This instruction is based on Ubuntu 14.04 LTS but may work with other OSes with a few configuration 
changes.Install NGINX server on your server instanceYou can install NGINX 
server with same box where zeppelin installed or separate box where it is 
dedicated to serve as proxy server.$ apt-get install nginxNOTE : On pre 1.3.13 
ver
 sion of NGINX, Proxy for Websocket may not fully works. Please use latest 
version of NGINX. See: NGINX documentation.Setup init script in NGINXIn most 
cases, NGINX configuration located under /etc/nginx/sites-available. Create 
your own configuration or add your existing configuration at 
/etc/nginx/sites-available.$ cd /etc/nginx/sites-available$ touch 
my-zeppelin-auth-settingNow add this script into my-zeppelin-auth-setting file. 
You can comment out optional lines If you want serve Zeppelin under regular 
HTTP 80 Port.upstream zeppelin {    server 
[YOUR-ZEPPELIN-SERVER-IP]:[YOUR-ZEPPELIN-SERVER-PORT];   # For security, It is 
highly recommended to make this address/port as non-public accessible}# 
Zeppelin Websiteserver {    listen [YOUR-ZEPPELIN-WEB-SERVER-PORT];    listen 
443 ssl;                                      # optional, to serve HTTPS 
connection    server_name [YOUR-ZEPPELIN-SERVER-HOST];             # for 
example: zeppelin.mycompany.com    ssl_certificate [PATH-TO-YOUR-CERT-FILE];            # optional, to serve HTTPS connection    
ssl_certificate_key [PATH-TO-YOUR-CERT-KEY-FILE];    # optional, to serve HTTPS 
connection    if ($ssl_protocol = "") {        rewrite ^ 
https://$host$request_uri? permanent;  # optional, to force use of HTTPS    }   
 location / {    # For regular websever support        proxy_pass 
http://zeppelin;        proxy_set_header X-Real-IP $remote_addr;        
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;        
proxy_set_header Host $http_host;        proxy_set_header X-NginX-Proxy true;   
     proxy_redirect off;        auth_basic "Restricted";      
  auth_basic_user_file /etc/nginx/.htpasswd;    }    location /ws {  # For 
websocket support        proxy_pass http://zeppelin/ws;        
proxy_http_version 1.1;        proxy_set_header Upgrade websocket;        
proxy_set_header Connection upgrade;        proxy_read_timeout 86400;    }}Then 
make a symbolic link to this file from /etc/nginx/sites-enabled/ to enable configuration above when NGINX reloads.$ ln -s 
/etc/nginx/sites-enabled/my-zeppelin-auth-setting 
/etc/nginx/sites-available/my-zeppelin-auth-settingSetup user credential into 
.htpasswd file and restart serverNow you need to setup .htpasswd file to serve 
list of authenticated user credentials for NGINX server.$ cd /etc/nginx$ 
htpasswd -c htpasswd [YOUR-ID]NEW passwd: [YOUR-PASSWORD]RE-type new passwd: 
[YOUR-PASSWORD-AGAIN]Or you can use your own apache .htpasswd files in other 
location for setting up property: auth_basic_user_fileRestart NGINX server.$ 
service nginx restartThen check HTTP Basic Authentication works in browser. If 
you can see regular basic auth popup and then able to login with credential you 
entered into .htpasswd you are good to go.More security considerationUsing 
HTTPS connection with Basic Authentication is highly recommended since basic 
auth without encryption may expose your important credential information over 
the network.Using Shiro Security feature built into Zeppelin is recommended if you prefer an all-in-one solution for authentication, but NGINX may provide an ad-hoc solution for reusing authentication served by your system's NGINX server or in case you need to separate authentication from the zeppelin server.It is 
recommended to isolate direct connection to Zeppelin server from public 
internet or external services to secure your zeppelin instance from unexpected 
attack or problems caused by public zone.Another optionAnother option is to 
have an authentication server that can verify user credentials in an LDAP 
server.If an incoming request to the Zeppelin server does not have a cookie 
with user information encrypted with the authentication server public key, the 
useris redirected to the authentication server. Once the user is verified, the 
authentication server redirects the browser to a specific URL in the Zeppelin 
server which sets the authentication cookie in the browser.The end result is 
that all requests to the Zeppelin web server have the authentication cookie which 
contains user and groups information.",
-      "url": " /setup/security/authentication_nginx",
-      "group": "setup/security",
-      "excerpt": "There are multiple ways to enable authentication in Apache 
Zeppelin. This page describes HTTP basic auth using NGINX."
+    "/interpreter/mahout.html": {
+      "title": "Mahout Interpreter for Apache Zeppelin",
+      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Apache Mahout Interpreter 
for Apache ZeppelinInstallationApache Mahout is a collection of packages that 
enable machine learning and matrix algebra on underlying engines such as Apache 
Flink or Apache Spark.  A convenience script for creating and configuring two 
Mahout enabled interpreters exists.  The %sparkMahout and %flinkMahout 
interpreters do not exist by default but can be easily created using this 
script.  Easy InstallationTo
  quickly and easily get up and running using Apache Mahout, run the following 
command from the top-level directory of the Zeppelin install:python 
scripts/mahout/add_mahout.pyThis will create the %sparkMahout and %flinkMahout 
interpreters, and restart Zeppelin.Advanced InstallationThe add_mahout.py 
script contains several command line arguments for advanced users.      
Argument    Description    Example        --zeppelin_home    This is the path 
to the Zeppelin installation.  This flag is not needed if the script is run 
from the top-level installation directory or from the zeppelin/scripts/mahout 
directory.    /path/to/zeppelin        --mahout_home    If the user has already 
installed Mahout, this flag can set the path to MAHOUT_HOME.  If this is set, 
downloading Mahout will be skipped.    /path/to/mahout_home        
--restart_later    Restarting is necessary for updates to take effect. By 
default the script will restart Zeppelin for you. Restart will be skipped if 
this flag is set. 
    NA        --force_download    This flag will force the script to 
re-download the binary even if it already exists.  This is useful for 
previously failed downloads.    NA          --overwrite_existing      This flag 
will force the script to overwrite existing %sparkMahout and %flinkMahout 
interpreters. Useful when you want to just start over.      NA    NOTE 1: 
Apache Mahout at this time only supports Spark 1.5 and Spark 1.6 and Scala 
2.10.  If the user is using another version of Spark (e.g. 2.0), the 
%sparkMahout will likely not work.  The %flinkMahout interpreter will still 
work and the user is encouraged to develop with that engine as the code can be 
ported via copy and paste, as is evidenced by the tutorial notebook.NOTE 2: If 
using Apache Flink in cluster mode, the following libraries will also need to 
be copied to ${FLINK_HOME}/lib- mahout-math-0.12.2.jar- mahout-math-scala2.10-0.12.2.jar- mahout-flink2.10-0.12.2.jar- mahout-hdfs-0.12.2.jar- com.google.guava:guava:14.0.1OverviewThe Apache Mahout™ project’s goal is to build an environment 
for quickly creating scalable performant machine learning applications.Apache 
Mahout software provides three major features:A simple and extensible 
programming environment and framework for building scalable algorithmsA wide 
variety of premade algorithms for Scala + Apache Spark, H2O, Apache 
FlinkSamsara, a vector math experimentation environment with R-like syntax 
which works at scaleIn other words:Apache Mahout provides a unified API for 
quickly creating machine learning algorithms on a variety of engines.How to 
useWhen starting a session with Apache Mahout, depending on which engine you 
are using (Spark or Flink), a few imports must be made and a Distributed 
Context must be declared.  Copy and paste the following code and run once to 
get started.Flink%flinkMahoutimport org.apache.flink.api.scala._import 
org.apache.mahout.math.drm._import 
org.apache.mahout.math.drm.RLikeDrmOps._import org.apache.mahout.flinkbindings._import org.apache.mahout.math._import scalabindings._import 
RLikeOps._implicit val ctx = new 
FlinkDistributedContext(benv)Spark%sparkMahoutimport 
org.apache.mahout.math._import org.apache.mahout.math.scalabindings._import 
org.apache.mahout.math.drm._import 
org.apache.mahout.math.scalabindings.RLikeOps._import 
org.apache.mahout.math.drm.RLikeDrmOps._import 
org.apache.mahout.sparkbindings._implicit val sdc: 
org.apache.mahout.sparkbindings.SparkDistributedContext = sc2sdc(sc)Same Code, 
Different EnginesAfter importing and setting up the distributed context, the 
Mahout R-Like DSL is consistent across engines.  The following code will run in 
both %flinkMahout and %sparkMahoutval drmData = drmParallelize(dense(  (2, 2, 
10.5, 10, 29.509541),  // Apple Cinnamon Cheerios  (1, 2, 12,   12, 18.042851), 
 // Cap'n'Crunch  (1, 1, 12,   13, 22.736446),  // Cocoa Puffs  
(2, 1, 11,   13, 32.207582),  // Froot Loops  (1, 2, 12,   11, 21.871292),  // 
Honey Graham Ohs  (
 2, 1, 16,   8,  36.187559),  // Wheaties Honey Gold  (6, 2, 17,   1,  
50.764999),  // Cheerios  (3, 2, 13,   7,  40.400208),  // Clusters  (3, 3, 13, 
  4,  45.811716)), numPartitions = 2)drmData.collect(::, 0 until 4)val drmX = 
drmData(::, 0 until 4)val y = drmData.collect(::, 4)val drmXtX = drmX.t %*% 
drmXval drmXty = drmX.t %*% yval XtX = drmXtX.collectval Xty = 
drmXty.collect(::, 0)val beta = solve(XtX, Xty)Leveraging Resource Pools and R 
for VisualizationResource Pools are a powerful Zeppelin feature that lets us 
share information between interpreters. A fun trick is to take the output of 
our work in Mahout and analyze it in other languages.Setting up a Resource Pool 
in FlinkIn Spark based interpreters resource pools are accessed via the 
ZeppelinContext API.  Putting and getting things from the resource pool can be done simply:val myVal = 1z.put("foo", myVal)val myFetchedVal = z.get("foo")To add this functionality to a Flink based interpreter we declare the following%flinkMahoutimport 
org.apache.zeppelin.interpreter.InterpreterContextval z = 
InterpreterContext.get().getResourcePool()Now we can access the resource pool 
in a consistent manner from the %flinkMahout interpreter.Passing a variable 
from Mahout to R and PlottingIn this simple example, we use Mahout (on Flink or 
Spark, the code is the same) to create a random matrix and then take the Sin of 
each element. We then randomly sample the matrix and create a tab separated 
string. Finally we pass that string to R where it is read as a .tsv file, and a 
DataFrame is created and plotted using native R plotting libraries.val mxRnd = 
Matrices.symmetricUniformView(5000, 2, 1234)val drmRand = 
drmParallelize(mxRnd)val drmSin = drmRand.mapBlock() {case (keys, block) 
=>    val blockB = block.like()  for (i <- 0 until block.nrow) {  
  blockB(i, 0) = block(i, 0)    blockB(i, 1) = Math.sin((block(i, 0) * 8))  }  
keys -> blockB}z.put("sinDrm", org.apache.mahout.math.drm.drmSampleToTSV(drmSin, 0.85))And then in an R 
paragraph...%spark.r {"imageWidth": 
"400px"}library("ggplot2")sinStr = 
z.get("flinkSinDrm")data <- read.table(text= sinStr, 
sep="t", header=FALSE)plot(data,  
col="red")",
+      "url": " /interpreter/mahout.html",
+      "group": "interpreter",
+      "excerpt": "Apache Mahout provides a unified API (the R-Like Scala DSL) 
for quickly creating machine learning algorithms on a variety of engines."
     }
     ,
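A compact sketch of the Samsara (R-like Scala DSL) usage the Mahout entry above describes, restricted to operations that entry already shows (dense, drmParallelize, .t, %*%, collect). It assumes the %sparkMahout or %flinkMahout interpreter created by add_mahout.py and that the engine-specific imports and distributed context from that entry have already been run in the same note; the matrix values are illustrative only.

%sparkMahout
// Distribute a small in-core matrix as a DRM (distributed row matrix).
val drmA = drmParallelize(dense(
  (1.0, 2.0, 3.0),
  (4.0, 5.0, 6.0)), numPartitions = 2)

// The R-like DSL is engine-agnostic: the same lines run under %flinkMahout.
val drmAtA = drmA.t %*% drmA

// Collect back to an in-core matrix; only sensible for small results.
val inCoreAtA = drmAtA.collect
println(inCoreAtA)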
     
   
 
-    "usage-display-system-basic": {
-      "title": "Basic Display System in Apache Zeppelin",
-      "content"  : "Basic Display System in Apache ZeppelinTextBy default, 
Apache Zeppelin prints interpreter response as a plain text using text display 
system.You can explicitly say you're using text display system.HtmlWith 
%html directive, Zeppelin treats your output as HTMLMathematical 
expressionsHTML display system automatically formats mathematical expression 
using MathJax. You can use( INLINE EXPRESSION ) and $$ EXPRESSION $$ to format. 
For exampleTableIf you have data that row separated by n (newline) and column 
separated by t (tab) with first row as header row, for exampleYou can simply 
use %table display system to leverage Zeppelin's built in 
visualization.If table contents start with %html, it is interpreted as an 
HTML.Note : Display system is backend independent.NetworkWith the %network 
directive, Zeppelin treats your output as a graph. Zeppelin can leverage the 
Property Graph Model.What is the Labelled Property Graph Model?A Property Graph 
is a graph that has these elements:a set of verticeseach vertex has a unique identifier.each 
vertex has a set of outgoing edges.each vertex has a set of incoming edges.each 
vertex has a collection of properties defined by a map from key to valuea set 
of edgeseach edge has a unique identifier.each edge has an outgoing tail 
vertex.each edge has an incoming head vertex.each edge has a label that denotes 
the type of relationship between its two vertices.each edge has a collection of 
properties defined by a map from key to value.A Labelled Property Graph is a 
Property Graph where the nodes can be tagged with labels representing their 
different roles in the graph modelWhat are the APIs?The new NETWORK 
visualization is based on json with the following 
params:"nodes" (mandatory): list of nodes of the graph every 
node can have the following params:"id" (mandatory): the id 
of the node (must be unique);"label": the main Label of the 
node;"labels
 ": the list of the labels of the node;"data": the 
data attached to the node;"edges": list of the edges of the 
graph;"id" (mandatory): the id of the edge (must be 
unique);"source" (mandatory): the id of source node of the 
edge;"target" (mandatory): the id of target node of the 
edge;"label": the main type of the 
edge;"data": the data attached to the 
edge;"labels": a map (K, V) where K is the node label and V 
is the color of the node;"directed": (true/false, default 
false) wich tells if is directed graph or not;"types": a 
distinct list of the edge types of the graphIf you click on a node or edge on 
the bottom of the paragraph you find a list of entity propertiesThis kind of 
graph can be easily flattened in order to support other visualization formats provided by Zeppelin.How to use it?An example of a simple graph%sparkprint(s"""%network {    
"nodes": [        {"id": 1},        
{"id": 2},        {"id": 3}    ],    
"edges": [        {"source": 1, 
"target": 2, "id" : 1},        
{"source": 2, "target": 3, 
"id" : 2},        {"source": 1, 
"target": 2, "id" : 3},        
{"source": 1, "target": 2, 
"id" : 4},        {"source": 2, 
"target": 1, "id" : 5},        
{"source": 2, "target": 1, 
"id" : 6}    ]}""")that will look 
like:A little more complex 
graph:%sparkprint(s"""%network {    "nodes": [{"id": 1, "label": 
"User", "data": 
{"fullName":"Andrea 
Santurbano"}},{"id": 2, "label": 
"User", "data": 
{"fullName":"Lee Moon 
Soo"}},{"id": 3, "label": "Project", "data": {"name":"Zeppelin"}}],    "edges": [{"source": 2, "target": 1, "id" : 1, "label": "HELPS"},{"source": 2, "target": 3, "id" : 2, "label": "CREATE"},{"source": 1, "target": 3, "id" : 3, "label": "CONTRIBUTE_TO", "data": {"oldPR": 
"https://github.com/apache/zeppelin/pull/1582"}}],    
"labels": {"User": 
"#8BC34A", "Project": 
"#3071A9"},    "directed": true,    
"types": ["HELPS", 
"CREATE", 
"CONTRIBUTE_TO"]}""")that will 
look like:",
-      "url": " /usage/display_system/basic",
-      "group": "usage/display_system",
-      "excerpt": "There are 3 basic display systems in Apache Zeppelin. By 
default, Zeppelin prints interpreter response as a plain text using text 
display system. With %html directive, Zeppelin treats your output as HTML. You 
can also simply use %table display system..."
+    "/interpreter/spark.html": {
+      "title": "Apache Spark Interpreter for Apache Zeppelin",
+      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Spark Interpreter for Apache 
ZeppelinOverviewApache Spark is a fast and general-purpose cluster computing 
system.It provides high-level APIs in Java, Scala, Python and R, and an 
optimized engine that supports general execution graphs.Apache Spark is 
supported in Zeppelin with Spark interpreter group which consists of the five interpreters below.      Name    Class    Description        %spark    SparkInterpreter    Creates a SparkContext and provides a Scala environment        %spark.pyspark    
PySparkInterpreter    Provides a Python environment        %spark.r    
SparkRInterpreter    Provides an R environment with SparkR support        
%spark.sql    SparkSQLInterpreter    Provides a SQL environment        
%spark.dep    DepInterpreter    Dependency loader  ConfigurationThe Spark 
interpreter can be configured with properties provided by Zeppelin.You can also 
set other Spark properties which are not listed in the table. For a list of 
additional properties, refer to Spark Available Properties.      Property    
Default    Description        args        Spark commandline args      master    
local[*]    Spark master uri.  ex) spark://masterhost:7077      spark.app.name  
  Zeppelin    The name of spark application.        spark.cores.max        
Total number of cores to use.  Empty value uses all available cores.        
spark.executor.memory     1g    Executor memory per worker instance.  ex) 512m, 
32g        zeppelin.dep.additionalRemoteRepository    spark-packages,  
http://dl.bintray.com/spark-packages/maven,  false;    A list of 
id,remote-repository-URL,is-snapshot;  for each remote repository.        
zeppelin.dep.localrepo    local-repo    Local repository for dependency loader  
      PYSPARK_PYTHON    python    Python binary executable to use for PySpark in both driver and workers (default is python).            Property spark.pyspark.python takes precedence if it is set        PYSPARK_DRIVER_PYTHON    python    Python binary executable to use for PySpark in driver only (default is PYSPARK_PYTHON).            Property spark.pyspark.driver.python takes precedence if it is set        zeppelin.spark.concurrentSQL    false    Execute 
multiple SQL concurrently if set true.        zeppelin.spark.maxResult    1000  
  Max number of Spark SQL result to display.        
zeppelin.spark.printREPLOutput    true    Print REPL output        
zeppelin.spark.useHiveContext    true    Use HiveContext instead of SQLContext if it is true.        zeppelin.spark.importImplicit    true    Import 
implicits, UDF collection, and sql if set true.        
zeppelin.spark.enableSupportedVersionCheck    true    Do not change - developer 
only setting, not for production use        zeppelin.spark.sql.interpolation    
false    Enable ZeppelinContext variable interpolation into paragraph text      
zeppelin.spark.uiWebUrl        Overrides Spark UI default URL. Value should be 
a full URL (ex: http://{hostName}/{uniquePath})  Without any configuration, the Spark interpreter works out of the box in local mode. But if you want to connect to your Spark cluster, you'll need to follow the two simple steps below.1. 
Export SPARK_HOMEIn conf/zeppelin-env.sh, export SPARK_HOME environment 
variable with your Spark installation path.For example,export 
SPARK_HOME=/usr/lib/sparkYou can optionally set more environment variables# set 
hadoop conf direxport HADOOP_CONF_DIR=/usr/lib/hadoop# set options to pass 
spark-submit commandexport SPARK_SUBMIT_OPTIONS="--packages 
com.databricks:spark-csv_2.10:1.2.0"# extra classpath. e.g. set 
classpath for hive-site.xmlexport 
ZEPPELIN_INTP_CLASSPATH_OVERRIDES=/etc/hive/confFor Windows, ensure you have 
winutils.exe in %HADOOP_HOME%\bin. Please see Problems running Hadoop on Windows for the details.2. Set master in Interpreter menuAfter starting Zeppelin, go to the Interpreter menu and edit the master property in your Spark interpreter setting. 
The value may vary depending on your Spark cluster deployment type.For 
example,local[*] in local modespark://master:7077 in standalone 
clusteryarn-client in Yarn client modeyarn-cluster in Yarn cluster 
modemesos://host:5050 in Mesos clusterThat's it. Zeppelin will work 
with any version of Spark and any deployment type without rebuilding Zeppelin 
in this way.For further information about Spark & Zeppelin version compatibility, please refer to the "Available Interpreters" section in the Zeppelin download page.Note that without exporting SPARK_HOME, it's running in local mode 
with included version of Spark. The included version may vary depending on the 
build profile.3. Yarn modeZeppelin supports both yarn client and yarn cluster 
mode (yarn cluster mode is supported from 0.8.0). For yarn mode, you must 
specify SPARK_HOME & HADOOP_CONF_DIR.You can either specify them in 
zeppelin-env.sh, or in interpreter setting page. Specifying them in 
zeppelin-env.sh means you can use only one version of spark & hadoop. 
Specifying them in the interpreter setting page means you can use multiple versions 
of spark & hadoop in one zeppelin instance.4. New Version of 
SparkInterpreterThere's one new version of SparkInterpreter with better 
spark support and code completion starting from Zeppelin 0.8.0. We enable it by 
default, but user can still use the old version of SparkInterpreter by setting 
zeppelin.spark.useNew as false in its interpreter setting.SparkContext, SQLContext, SparkSession, ZeppelinContextSparkContext, SQLContext and ZeppelinContext are automatically created and exposed as variable names sc, sqlContext and z, respectively, in Scala, Python and R environments.Starting from 0.6.1, 
SparkSession is available as variable spark when you are using Spark 2.x.Note 
that Scala/Python/R environment shares the same SparkContext, SQLContext and 
ZeppelinContext instance. How to pass property to SparkConfThere're 2 
kinds of properties that would be passed to SparkConfStandard spark property 
(prefix with spark.). e.g. spark.executor.memory will be passed to 
SparkConfNon-standard spark property (prefix with zeppelin.spark.).  e.g. 
zeppelin.spark.property_1, property_1 will be passed to SparkConfDependency 
ManagementThere are two ways to load external libraries in Spark interpreter. 
First is using interpreter setting menu and second is loading Spark 
properties.1. Setting Dependencies via Interpreter SettingPlease see Dependency 
Management for the 
 details.2. Loading Spark PropertiesOnce SPARK_HOME is set in 
conf/zeppelin-env.sh, Zeppelin uses spark-submit as spark interpreter runner. 
spark-submit supports two ways to load configurations.The first is command line 
options such as --master and Zeppelin can pass these options to spark-submit by 
exporting SPARK_SUBMIT_OPTIONS in conf/zeppelin-env.sh. Second is reading 
configuration options from SPARK_HOME/conf/spark-defaults.conf. Spark 
properties that user can set to distribute libraries are:      
spark-defaults.conf    SPARK_SUBMIT_OPTIONS    Description        spark.jars    
--jars    Comma-separated list of local jars to include on the driver and 
executor classpaths.        spark.jars.packages    --packages    
Comma-separated list of maven coordinates of jars to include on the driver and 
executor classpaths. Will search the local maven repo, then maven central and 
any additional remote repositories given by --repositories. The format for the 
coordinates should be groupId:artifactId:version.        spark.files    --files    Comma-separated list of files to be placed in the working directory of each executor.  Here are a few 
examples:SPARK_SUBMIT_OPTIONS in conf/zeppelin-env.shexport 
SPARK_SUBMIT_OPTIONS="--packages com.databricks:spark-csv_2.10:1.2.0 
--jars /path/mylib1.jar,/path/mylib2.jar --files 
/path/mylib1.py,/path/mylib2.zip,/path/mylib3.egg"SPARK_HOME/conf/spark-defaults.confspark.jars
        /path/mylib1.jar,/path/mylib2.jarspark.jars.packages   
com.databricks:spark-csv_2.10:1.2.0spark.files       
/path/mylib1.py,/path/mylib2.egg,/path/mylib3.zip3. Dynamic Dependency Loading 
via %spark.dep interpreterNote: %spark.dep interpreter loads libraries to 
%spark and %spark.pyspark but not to  %spark.sql interpreter. So we recommend 
you to use the first option instead.When your code requires an external library, instead of doing download/copy/restart Zeppelin, you can easily do the following jobs using the %spark.dep interpreter.Load libraries recursively from maven repositoryLoad libraries from local filesystemAdd additional 
maven repositoryAutomatically add libraries to SparkCluster (You can turn 
off)Dep interpreter leverages Scala environment. So you can write any Scala 
code here.Note that %spark.dep interpreter should be used before %spark, 
%spark.pyspark, %spark.sql.Here's usages.%spark.depz.reset() // clean 
up previously added artifact and repository// add maven 
repositoryz.addRepo("RepoName").url("RepoURL")//
 add maven snapshot 
repositoryz.addRepo("RepoName").url("RepoURL").snapshot()//
 add credentials for private maven 
repositoryz.addRepo("RepoName").url("RepoURL").username("username").password("password")//
 add artifact from filesystemz.load("/path/to.jar")// add 
artifact from maven repository, with no 
dependencyz.load("groupId:artifactId:version").excludeAll()// add artifact 
recursivelyz.load("groupId:artifactId:version")// add 
artifact recursively except comma separated GroupID:ArtifactId 
listz.load("groupId:artifactId:version").exclude("groupId:artifactId,groupId:artifactId,
 ...")// exclude with 
patternz.load("groupId:artifactId:version").exclude(*)z.load("groupId:artifactId:version").exclude("groupId:artifactId:*")z.load("groupId:artifactId:version").exclude("groupId:*")//
 local() skips adding artifact to spark clusters (skipping 
sc.addJar())z.load("groupId:artifactId:version").local()ZeppelinContextZeppelin 
automatically injects ZeppelinContext as the variable z in your Scala/Python 
environment. ZeppelinContext provides some additional functions and 
utilities.See Zeppelin-Context for more details.Matplotlib Integration 
(pyspark)Both the python and pyspark interpreters have built-in support for 
inline visualization using matplotlib, a popular plotting library for python. 
More details can be found in the python interpreter documentation, since 
matplotlib support is identical. More advanced interactive plotting can be done 
with pyspark through utilizing Zeppelin's built-in Angular Display 
System, as shown below:Interpreter 
setting optionYou can choose one of the shared, scoped and isolated options 
when you configure the Spark interpreter.The Spark interpreter creates a 
separate Scala compiler per notebook but shares a single SparkContext in scoped 
mode (experimental).It creates a separate SparkContext per notebook in isolated 
mode.IPython supportBy default, zeppelin would use IPython in pyspark when 
IPython is available, otherwise it would fall back to the original PySpark 
implementation.If you don't want to use IPython, then you can set 
zeppelin.pyspark.useIPython to false in the interpreter setting. For the 
IPython features, you can refer to the Python Interpreter doc.Setting up 
Zeppelin with KerberosLogical setup 
with Zeppelin, Kerberos Key Distribution Center (KDC), and Spark on 
YARN:Configuration SetupOn the server where Zeppelin is installed, install the 
Kerberos client modules and configuration, krb5.conf.This is to make the server 
communicate with the KDC.Set SPARK_HOME in [ZEPPELIN_HOME]/conf/zeppelin-env.sh 
to use spark-submit (additionally, you might have to set export 
HADOOP_CONF_DIR=/etc/hadoop/conf)Add the two properties below to the Spark 
configuration 
([SPARK_HOME]/conf/spark-defaults.conf):spark.yarn.principalspark.yarn.keytabNOTE:
 If you do not have permission to access the above spark-defaults.conf file, 
you can optionally add the above lines to the Spark Interpreter setting through 
the Interpreter tab in the Zeppelin UI.That's it. Play with 
Zeppelin!",
+      "url": " /interpreter/spark.html",
+      "group": "interpreter",
+      "excerpt": "Apache Spark is a fast and general-purpose cluster computing 
system. It provides high-level APIs in Java, Scala, Python and R, and an 
optimized engine that supports general execution engine."
     }
     ,
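
A minimal sketch of the injected variables described in the Spark entry above, 
written as a %spark.pyspark paragraph. It assumes Spark 2.x (so that the spark 
variable exists), and the sample rows are invented for the example:

%spark.pyspark
# sc, sqlContext, spark and z are injected by Zeppelin; they need no imports here.
# Build a tiny DataFrame from invented sample rows and render it as a table.
df = spark.createDataFrame([(1, "spark"), (2, "hadoop")], ["id", "name"])
print(sc.version)   # the SparkContext shared by the Scala/Python/R environments
z.show(df)          # ZeppelinContext renders the DataFrame with the table display

Since Scala, Python and R share the same ZeppelinContext instance, a value put 
here with z.put("key", value) can be read back with z.get("key") from a %spark 
(Scala) paragraph in the same note.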
     
   
 
-    "interpreter-beam": {
-      "title": "Beam interpreter in Apache Zeppelin",
-      "content"  : "Beam interpreter for Apache ZeppelinOverviewApache Beam is 
an open source unified platform for data processing pipelines. A pipeline can 
be build using one of the Beam SDKs.The execution of the pipeline is done by 
different Runners. Currently, Beam supports Apache Flink Runner, Apache Spark 
Runner, and Google Dataflow Runner.How to useBasically, you can write normal 
Beam java code where you can determine the Runner. You should write the main 
method inside a class becuase the interpreter invoke this main to execute the 
pipeline. Unlike Zeppelin normal pattern, each paragraph is considered as a 
separate job, there isn't any relation to any other paragraph.The 
following is a demonstration of a word count example with data represented in 
array of stringsBut it can read data from files by replacing 
Create.of(SENTENCES).withCoder(StringUtf8Coder.of()) with 
TextIO.Read.from("path/to/filename.txt")%beam// most used 
importsimport org.apache.beam.
 sdk.coders.StringUtf8Coder;import org.apache.beam.sdk.transforms.Create;import 
java.io.Serializable;import java.util.Arrays;import java.util.List;import 
java.util.ArrayList;import org.apache.beam.runners.direct.*;import 
org.apache.beam.sdk.runners.*;import org.apache.beam.sdk.options.*;import 
org.apache.beam.runners.flink.*;import org.apache.beam.sdk.Pipeline;import 
org.apache.beam.sdk.io.TextIO;import 
org.apache.beam.sdk.options.PipelineOptionsFactory;import 
org.apache.beam.sdk.transforms.Count;import 
org.apache.beam.sdk.transforms.DoFn;import 
org.apache.beam.sdk.transforms.MapElements;import 
org.apache.beam.sdk.transforms.ParDo;import 
org.apache.beam.sdk.transforms.SimpleFunction;import 
org.apache.beam.sdk.values.KV;import 
org.apache.beam.sdk.options.PipelineOptions;public class MinimalWordCount {  
static List<String> s = new ArrayList<>();  static 
final String[] SENTENCES_ARRAY = new String[] {    "Hadoop is the 
Elephant King!",    &a
 mp;quot;A yellow and elegant thing.",    "He never 
forgets",    "Useful data, or lets",    "An 
extraneous element cling!",    "A wonderful king is 
Hadoop.",    "The elephant plays well with Sqoop.",  
  "But what helps him to thrive",    "Are Impala, 
and Hive,",    "And HDFS in the group.",    
"Hadoop is an elegant fellow.",    "An elephant 
gentle and mellow.",    "He never gets mad,",    
"Or does anything bad,",    "Because, at his core, 
he is yellow",    };    static final List<String> 
SENTENCES = Arrays.asList(SENTENCES_ARRAY);  public static void main(String[] 
args) {    PipelineOptions options = 
PipelineOptionsFactory.create().as(PipelineOptions.class);    
options.setRunner(FlinkRunner.class);    Pipeline p = Pipeline.create(o
 ptions);    p.apply(Create.of(SENTENCES).withCoder(StringUtf8Coder.of()))      
   .apply("ExtractWords", ParDo.of(new DoFn<String, 
String>() {           @ProcessElement           public void 
processElement(ProcessContext c) {             for (String word : 
c.element().split("[^a-zA-Z']+")) {               if 
(!word.isEmpty()) {                 c.output(word);               }             
}           }         }))        .apply(Count.<String> 
perElement())        .apply("FormatResults", ParDo.of(new 
DoFn<KV<String, Long>, String>() {          
@ProcessElement          public void 
processElement(DoFn<KV<String, Long>, 
String>.ProcessContext arg0)            throws Exception {            
s.add("n" + arg0.element().getKey() + "t" + 
arg0.element().getValue());            }        }));    p.run();    System.out.
 println("%table wordtcount");    for (int i = 0; i < 
s.size(); i++) {      System.out.print(s.get(i));    }  }}",
-      "url": " /interpreter/beam",
+    "/interpreter/python.html": {
+      "title": "Python 2 & 3 Interpreter for Apache Zeppelin",
+      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Python 2 & 3 
Interpreter for Apache ZeppelinConfiguration      Property    Default    
Description        zeppelin.python    python    Path of the already installed 
Python binary (could be python2 or python3).    If python is not in your $PATH 
you can set the absolute path (example: /usr/bin/python)            
zeppelin.python.maxResult    1000    Max number of dataframe rows to display.  
Enabling Python InterpreterIn a
  notebook, to enable the Python interpreter, click on the Gear icon and select 
PythonUsing the Python InterpreterIn a paragraph, use %python to select the 
Python interpreter and then input all commands.The interpreter can only work if 
you already have python installed (the interpreter doesn't bring its 
own python binaries).To access the help, type help()Python 
environmentsDefaultBy default, PythonInterpreter will use the python command 
defined in the zeppelin.python property to run the python process.The 
interpreter can use all modules already installed (with pip, 
easy_install...)CondaConda is a package management system and environment 
management system for python.The %python.conda interpreter lets you change 
between environments.Usageget the Conda information: %python.conda infolist the 
Conda environments: %python.conda env listcreate a conda environment: 
%python.conda create --name [ENV NAME]activate an environment (the python 
interpreter will be restarted): %python.conda activate [ENV 
NAME]deactivate%python.conda deactivateget the installed package list inside 
the current environment%python.conda listinstall a package%python.conda install 
[PACKAGE NAME]uninstall a package%python.conda uninstall [PACKAGE 
NAME]DockerThe %python.docker interpreter allows PythonInterpreter to create a 
python process in a specified docker container.Usageactivate an 
environment%python.docker activate [Repository]%python.docker activate 
[Repository:Tag]%python.docker activate [Image Id]deactivate%python.docker 
deactivateHere is an example# activate latest tensorflow image as a python 
environment%python.docker activate gcr.io/tensorflow/tensorflow:latestUsing 
Zeppelin Dynamic FormsYou can leverage Zeppelin Dynamic Form inside your Python 
code.Zeppelin Dynamic Form can only be used if the py4j Python library is 
installed on your system. If not, you can install it with pip install 
py4j.Example : %python### Input formprint 
(z.input("f1","defaultValue"))### Select 
formprint 
(z.select("f1",[("o1","1"),("o2","2")],"2"))###
 Checkbox 
formprint("".join(z.checkbox("f3", 
[("o1","1"), 
("o2","2")],["1"])))Matplotlib
 integrationThe python interpreter can display matplotlib figures inline 
automatically using the pyplot module:%pythonimport matplotlib.pyplot as 
pltplt.plot([1, 2, 3])This is the recommended method for using matplotlib from 
within a Zeppelin notebook. The output of this command will by default be 
converted to HTML by implicitly making use of the %html magic. Additional 
configuration can be achieved using the built-in z.configure_mpl() method. For 
example, z.configure_mpl(width=400, height=300, 
fmt='svg')plt.plot([1, 2, 3])will produce a 400x300 image in SVG 
format, whereas the defaults are 600x400 and PNG respectively. In the future, 
another option called angular can be used to make 
it possible to update a plot produced from one paragraph directly from another 
(the output will be %angular instead of %html). However, this feature is 
already available in the pyspark interpreter. More details can be found in the 
included "Zeppelin Tutorial: Python - matplotlib basic" 
tutorial notebook. If Zeppelin cannot find the matplotlib backend files (which 
should usually be found in $ZEPPELIN_HOME/interpreter/lib/python) in your 
PYTHONPATH, then the backend will automatically be set to agg, and the 
(otherwise deprecated) instructions below can be used for more limited inline 
plotting.If you are unable to load the inline backend, use 
z.show(plt):%pythonimport matplotlib.pyplot as pltplt.figure()(.. 
..)z.show(plt)plt.close()The z.show() function can take optional parameters to 
adapt graph dimensions (width and height) as well as output format (png or 
optionally svg).%pythonz.show(plt, width='50px')z.show(plt, 
height='150px', 
fmt='svg')Pandas integrationApache Zeppelin Table Display 
System provides built-in data visualization capabilities. Python interpreter 
leverages it to visualize Pandas DataFrames through a similar z.show() API, the 
same as with the Matplotlib integration.Example:import pandas as pdrates = 
pd.read_csv("bank.csv", 
sep=";")z.show(rates)SQL over Pandas DataFramesThere is a 
convenience %python.sql interpreter that matches the Apache Spark experience in 
Zeppelin and enables usage of the SQL language to query Pandas DataFrames, with 
visualization of results through the built-in Table Display 
System.PrerequisitesPandas pip install pandasPandaSQL pip install -U pandasqlIn 
case the default bound interpreter is Python (first in the interpreter list, 
under the Gear Icon), you can just use it as %sql, i.e.first paragraphimport 
pandas as pdrates = pd.read_csv("bank.csv", 
sep=";")next paragraph%sqlSELECT * FROM rates WHERE age < 
40Otherwise it can be referred to as %python.sqlIPython SupportIPython is more 
powerful than the default python interpreter and provides extra functionality. 
You can use IPython with Python2 or Python3 depending on which python you set 
in zeppelin.python.Prerequisites- Jupyter `pip install jupyter`- grpcio `pip 
install grpcio`- protobuf `pip install protobuf`If you have already installed 
anaconda, then you just need to install grpcio, as Jupyter is already included 
in anaconda. For grpcio version >= 1.12.0 you'll also need to 
install protobuf separately.In addition to all the basic functions of the 
python interpreter, you can use all the advanced IPython features as you would 
in Jupyter Notebook.e.g. Use IPython magic%python.ipython#python 
helprange?#timeit%timeit range(100)Use matplotlib %python.ipython%matplotlib 
inlineimport matplotlib.pyplot as pltprint("hello 
world")data=[1,2,3,4]plt.figure()plt.
 plot(data)We also make ZeppelinContext available in IPython Interpreter. You 
can use ZeppelinContext to create dynamic forms and display pandas 
DataFrame.e.g.Create dynamic formz.input(name='my_name', 
defaultValue='hello')Show pandas dataframeimport pandas as pddf 
= pd.DataFrame({'id':[1,2,3], 
'name':['a','b','c']})z.show(df)By
 default, we would use IPython in %python.python if IPython is available. 
Otherwise it would fall back to the original Python implementation.If you 
don't want to use IPython, then you can set zeppelin.python.useIPython 
to false in the interpreter setting.Technical descriptionFor in-depth technical 
details on the current implementation please refer to python/README.md.Some 
features not yet implemented in the Python InterpreterInterrupting a paragraph 
execution (the cancel() method) is currently only supported on Linux and MacOS. 
If the interpreter runs on another operating system (for instance MS Windows), 
interrupting a paragraph will close the whole interpreter. A JIRA ticket 
(ZEPPELIN-893) is open to implement this feature in a future release of the 
interpreter.The progress bar in the web UI (the getProgress() method) is 
currently not implemented.Code completion is currently not implemented.",
+      "url": " /interpreter/python.html",
       "group": "interpreter",
-      "excerpt": "Apache Beam is an open source, unified programming model 
that you can use to create a data processing pipeline."
+      "excerpt": "Python is a programming language that lets you work quickly 
and integrate systems more effectively."
     }
     ,
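
A minimal sketch of the matplotlib and pandas integration described in the 
Python entry above, as a single %python paragraph. It assumes matplotlib and 
pandas are installed and that the paragraph runs inside Zeppelin, where the 
ZeppelinContext variable z is available; the sample data is invented:

%python
import matplotlib.pyplot as plt
import pandas as pd

# Optional sizing/format tweak before plotting (defaults are 600x400 PNG).
z.configure_mpl(width=400, height=300, fmt='svg')
plt.plot([1, 2, 3])   # rendered inline through the %html display system

# A small invented DataFrame, rendered with Zeppelin's table display.
df = pd.DataFrame({'age': [25, 43, 61], 'balance': [1100, 250, 7800]})
z.show(df)

A following %python.sql paragraph could then query the DataFrame by its 
variable name, e.g. SELECT * FROM df WHERE age < 40, provided pandasql is 
installed as described above.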
     
   
 
-    "interpreter-bigquery": {
-      "title": "BigQuery Interpreter for Apache Zeppelin",
-      "content"  : "BigQuery Interpreter for Apache ZeppelinOverviewBigQuery 
is a highly scalable no-ops data warehouse in the Google Cloud Platform. 
Querying massive datasets can be time consuming and expensive without the right 
hardware and infrastructure. Google BigQuery solves this problem by enabling 
super-fast SQL queries against append-only tables using the processing power of 
Google's infrastructure. Simply move your data into BigQuery and let us 
handle the hard work. You can control access to both the project and your data 
based on your business needs, such as giving others the ability to view or 
query your data.  Configuration      Name    Default Value    Description       
 zeppelin.bigquery.project_id          Google Project Id        
zeppelin.bigquery.wait_time    5000    Query Timeout in Milliseconds        
zeppelin.bigquery.max_no_of_rows    100000    Max result set size        
zeppelin.bigquery.sql_dialect        BigQuery SQL dialect (standardSQL or 
legacySQL
 ). If empty, [query 
prefix](https://cloud.google.com/bigquery/docs/reference/standard-sql/enabling-standard-sql#sql-prefix)
 like '#standardSQL' can be used.  BigQuery APIZeppelin is built 
against BigQuery API version v2-rev265-1.21.0 - API JavadocsEnabling the 
BigQuery InterpreterIn a notebook, to enable the BigQuery interpreter, click 
the Gear icon and select bigquery.Provide Application Default CredentialsWithin 
Google Cloud Platform (e.g. Google App Engine, Google Compute Engine),built-in 
credentials are used by default.Outside of GCP, follow the Google API 
authentication instructions for Zeppelin Google Cloud StorageUsing the BigQuery 
InterpreterIn a paragraph, use %bigquery.sql to select the BigQuery interpreter 
and then input SQL statements against your datasets stored in BigQuery.You can 
use BigQuery SQL Reference to build your own SQL.For Example, SQL to query for 
top 10 departure delays across airports using the flights public 
dataset%bigquery.sqlSELECT departure_ai
 rport,count(case when departure_delay>0 then 1 else 0 end) as 
no_of_delays FROM [bigquery-samples:airline_ontime_data.flights] group by 
departure_airport order by 2 desc limit 10Another Example, SQL to query for 
most commonly used java packages from the github data hosted in BigQuery 
%bigquery.sqlSELECT  package,  COUNT(*) countFROM (  SELECT    
REGEXP_EXTRACT(line, r' ([a-z0-9._]*).') package,    id  FROM ( 
   SELECT      SPLIT(content, 'n') line,      id    FROM      
[bigquery-public-data:github_repos.sample_contents]    WHERE      content 
CONTAINS 'import'      AND sample_path LIKE 
'%.java'    HAVING      LEFT(line, 6)='import' 
)  GROUP BY    package,    id )GROUP BY  1ORDER BY  count DESCLIMIT  
40Technical descriptionFor in-depth technical details on current implementation 
please refer to bigquery/README.md.",
-      "url": " /interpreter/bigquery",
+    "/interpreter/hive.html": {
+      "title": "Hive Interpreter for Apache Zeppelin",
+      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Hive Interpreter for Apache 
ZeppelinImportant NoticeThe Hive Interpreter will be deprecated and merged into 
the JDBC Interpreter. You can use the Hive Interpreter through the JDBC 
Interpreter with the same functionality. See the example settings and 
dependencies below.Properties      Property    Value        hive.driver    
org.apache.hive.jdbc.HiveDriver        hive.url    jdbc:hive2://localhost:10000 
       hive.user    hiveUser        hive.password    hivePassword  Dependencies 
     Artifact    Exclude        
org.apache.hive:hive-jdbc:0.14.0            
org.apache.hadoop:hadoop-common:2.6.0      Configuration      Property    
Default    Description        default.driver    org.apache.hive.jdbc.HiveDriver 
   Class path of JDBC driver        default.url    jdbc:hive2://localhost:10000 
   Url for connection        default.user        ( Optional ) Username of the 
connection        default.password        ( Optional ) Password of the 
connection        default.xxx        ( Optional ) Other properties used by the 
driver        ${prefix}.driver        Driver class path of %hive(${prefix})     
    ${prefix}.url        Url of %hive(${prefix})         ${prefix}.user        
( Optional ) Username of the connection of %hive(${prefix})         
${prefix}.password        ( Optional ) Password of the connection of 
%hive(${prefix})         ${prefix}.xxx        ( Optional ) Other properties 
used by the driver of %hive(${prefix})   This interpreter provides multiple 
configurations with ${prefix}. Users can set multiple sets of connection 
properties with this prefix. It can be used like 
%hive(${prefix}).OverviewThe Apache Hive &trade; data warehouse software 
facilitates querying and managing large datasets residing in distributed 
storage. Hive provides a mechanism to project structure onto this data and 
query the data using a SQL-like language called HiveQL. At the same time this 
language also allows traditional map/reduce programmers to plug in their custom 
mappers and reducers when it is inconvenient or inefficient to express this 
logic in HiveQL.How to useBasically, you can use%hiveselect * from 
my_table;or%hive(etl)-- 'etl' is a ${prefix}select * from 
my_table;You can also run multiple queries, up to 10 by default. Changing this 
setting is not implemented yet.Apply Zeppelin Dynamic FormsYou can leverage 
Zeppelin Dynamic Form inside your queries. You can use both the text input and 
select form parameterization features.%hiveSELECT ${group_by}, count(*) as 
countFROM 
retail_demo.order_lineitems_pxfGROUP BY 
${group_by=product_id,product_id|product_name|customer_id|store_id}ORDER BY 
count ${order=DESC,DESC|ASC}LIMIT ${limit=10};",
+      "url": " /interpreter/hive.html",
       "group": "interpreter",
-      "excerpt": "BigQuery is a highly scalable no-ops data warehouse in the 
Google Cloud Platform."
+      "excerpt": "Apache Hive data warehouse software facilitates querying and 
managing large datasets residing in distributed storage. Hive provides a 
mechanism to project structure onto this data and query the data using a 
SQL-like language called HiveQL. At the same time this..."
     }
     ,
     
   
 
-    "interpreter-cassandra": {
-      "title": "Cassandra CQL Interpreter for Apache Zeppelin",

[... 1319 lines stripped ...]
