Repository: incubator-zeppelin
Updated Branches:
  refs/heads/master 3e1e884bb -> a911b0ec4


[ZEPPELIN-5] Tutorial notebook fails if fs.defaultFS is set to non-local FS in Hadoop core-site.xml

I updated the tutorial to ignore the default file-system setting when looking up bank-full.csv by prepending "file://" to the path. This is safe because $zeppelinHome is set on the previous line from the output of `pwd`, which always resolves to the local FS.
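
For reference, the changed tutorial paragraph decodes to roughly the following Scala (reconstructed from the escaped JSON in the diff below; it assumes Zeppelin's Spark interpreter, where sc and the SQLContext implicits are already available). Only the sc.textFile() path changes, gaining the "file://" prefix:

import sys.process._
// sc is an existing SparkContext.
val sqlContext = new org.apache.spark.sql.SQLContext(sc)

// zeppelinHome comes from `pwd`, so it is always a local path.
val zeppelinHome = ("pwd" !!).replace("\n", "")
// "file://" forces the local file system even when fs.defaultFS points elsewhere (e.g. HDFS).
val bankText = sc.textFile(s"file://$zeppelinHome/data/bank-full.csv")

case class Bank(age: Integer, job: String, marital: String, education: String, balance: Integer)

val bank = bankText.map(s => s.split(";")).filter(s => s(0) != "\"age\"").map(
    s => Bank(s(0).toInt,
            s(1).replaceAll("\"", ""),
            s(2).replaceAll("\"", ""),
            s(3).replaceAll("\"", ""),
            s(5).replaceAll("\"", "").toInt
        )
).toDF()
bank.registerTempTable("bank")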

Author: Ilya Ganelin <[email protected]>

Closes #70 from ilganeli/ZEPPELIN-5 and squashes the following commits:

01fb781 [Ilya Ganelin] Changed tutorial string to ignore default setting for local FS


Project: http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/commit/a911b0ec
Tree: http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/tree/a911b0ec
Diff: http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/diff/a911b0ec

Branch: refs/heads/master
Commit: a911b0ec43ce4866cc50bf9e4bcd66242af583c7
Parents: 3e1e884
Author: Ilya Ganelin <[email protected]>
Authored: Mon May 11 16:50:38 2015 -0700
Committer: Lee moon soo <[email protected]>
Committed: Thu May 21 13:39:02 2015 +0900

----------------------------------------------------------------------
 notebook/2A94M5J1Z/note.json | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/a911b0ec/notebook/2A94M5J1Z/note.json
----------------------------------------------------------------------
diff --git a/notebook/2A94M5J1Z/note.json b/notebook/2A94M5J1Z/note.json
index d3b73b5..86485dd 100644
--- a/notebook/2A94M5J1Z/note.json
+++ b/notebook/2A94M5J1Z/note.json
@@ -67,7 +67,7 @@
     },
     {
       "title": "Load data into table",
-      "text": "import sys.process._\n// sc is an existing SparkContext.\nval 
sqlContext \u003d new org.apache.spark.sql.SQLContext(sc)\n\n\nval zeppelinHome 
\u003d (\"pwd\" !!).replace(\"\\n\", \"\")\nval bankText \u003d 
sc.textFile(s\"$zeppelinHome/data/bank-full.csv\")\n\ncase class Bank(age: 
Integer, job: String, marital: String, education: String, balance: 
Integer)\n\nval bank \u003d bankText.map(s \u003d\u003e 
s.split(\";\")).filter(s \u003d\u003e s(0) !\u003d \"\\\"age\\\"\").map(\n    s 
\u003d\u003e Bank(s(0).toInt, \n            s(1).replaceAll(\"\\\"\", \"\"),\n  
          s(2).replaceAll(\"\\\"\", \"\"),\n            
s(3).replaceAll(\"\\\"\", \"\"),\n            s(5).replaceAll(\"\\\"\", 
\"\").toInt\n        )\n).toDF()\nbank.registerTempTable(\"bank\")\n\n",
+      "text": "import sys.process._\n// sc is an existing SparkContext.\nval 
sqlContext \u003d new org.apache.spark.sql.SQLContext(sc)\n\n\nval zeppelinHome 
\u003d (\"pwd\" !!).replace(\"\\n\", \"\")\nval bankText \u003d 
sc.textFile(s\"file://$zeppelinHome/data/bank-full.csv\")\n\ncase class 
Bank(age: Integer, job: String, marital: String, education: String, balance: 
Integer)\n\nval bank \u003d bankText.map(s \u003d\u003e 
s.split(\";\")).filter(s \u003d\u003e s(0) !\u003d \"\\\"age\\\"\").map(\n    s 
\u003d\u003e Bank(s(0).toInt, \n            s(1).replaceAll(\"\\\"\", \"\"),\n  
          s(2).replaceAll(\"\\\"\", \"\"),\n            
s(3).replaceAll(\"\\\"\", \"\"),\n            s(5).replaceAll(\"\\\"\", 
\"\").toInt\n        )\n).toDF()\nbank.registerTempTable(\"bank\")\n\n",
       "config": {
         "colWidth": 12.0,
         "graph": {
