Repository: incubator-zeppelin
Updated Branches:
  refs/heads/master 24aeec402 -> 18c8c9ea5


improved readability

### info on versioning: ###
1. declare that this tutorial is valid for Spark 1.5 in the "document description"
2. remove the no-longer-relevant comment about DataFrames being introduced in Spark 1.3

### put some basic comments into code ###

Author: Tomas <[email protected]>

Closes #339 from xhudik/patch-2 and squashes the following commits:

2f11ff7 [Tomas] Update tutorial.md
1dd7c52 [Tomas] improved readability


Project: http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/commit/18c8c9ea
Tree: http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/tree/18c8c9ea
Diff: http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/diff/18c8c9ea

Branch: refs/heads/master
Commit: 18c8c9ea512a0d87699a73e2ca26192d03748661
Parents: 24aeec4
Author: Tomas <[email protected]>
Authored: Fri Oct 9 10:17:44 2015 +0200
Committer: Lee moon soo <[email protected]>
Committed: Tue Oct 13 09:51:51 2015 +0200

----------------------------------------------------------------------
 docs/docs/tutorial/tutorial.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/18c8c9ea/docs/docs/tutorial/tutorial.md
----------------------------------------------------------------------
diff --git a/docs/docs/tutorial/tutorial.md b/docs/docs/tutorial/tutorial.md
index 5f8f936..f5e1e61 100644
--- a/docs/docs/tutorial/tutorial.md
+++ b/docs/docs/tutorial/tutorial.md
@@ -1,7 +1,7 @@
 ---
 layout: page
 title: "Tutorial"
-description: ""
+description: "Tutorial is valid for Spark 1.3 and higher"
 group: tutorial
 ---
 
@@ -21,10 +21,12 @@ Before you start Zeppelin tutorial, you will need to download [bank.zip](http://
 First, to transform data from csv format into RDD of `Bank` objects, run following script. This will also remove header using `filter` function.
 
 ```scala
+
 val bankText = sc.textFile("yourPath/bank/bank-full.csv")
 
 case class Bank(age:Integer, job:String, marital : String, education : String, balance : Integer)
 
+// split each line, filter out header (starts with "age"), and map it into Bank case class
 val bank = bankText.map(s=>s.split(";")).filter(s=>s(0)!="\"age\"").map(
     s=>Bank(s(0).toInt, 
             s(1).replaceAll("\"", ""),
@@ -34,9 +36,7 @@ val bank = bankText.map(s=>s.split(";")).filter(s=>s(0)!="\"age\"").map(
         )
 )
 
-// Below line works only in spark 1.3.0.
-// For spark 1.1.x and spark 1.2.x,
-// use bank.registerTempTable("bank") instead.
+// convert to DataFrame and create a temporary table
 bank.toDF().registerTempTable("bank")
 ```
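 For context on what the registered table enables, a minimal editorial sketch (not part of the commit): once `registerTempTable("bank")` runs, the table can be queried through the `SQLContext` that Spark 1.3+ shells and Zeppelin predefine as `sqlContext`. The query below and the `under30` name are illustrative assumptions, not taken from the diff.

 ```scala
 // Hypothetical follow-up query against the "bank" temp table
 // registered above; sqlContext.sql returns a DataFrame in Spark 1.3+.
 val under30 = sqlContext.sql(
   "SELECT age, count(1) AS total FROM bank WHERE age < 30 GROUP BY age ORDER BY age")
 under30.show()
 ```

 In Zeppelin itself the same query would typically be run in a `%sql` paragraph rather than through `sqlContext` directly.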
 
