Repository: zeppelin
Updated Branches:
  refs/heads/master 2a9663600 -> ffa4ee31a


ZEPPELIN-3508. Enable new spark interpreter in 0.8.0

### What is this PR for?
I'd like to enable it in 0.8 so that we can get more feedback from users.
Users can still use the old implementation by setting `zeppelin.spark.useNew`
to `false`.
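
For reference, opting back into the old behaviour is a single property change
in the Spark interpreter settings, i.e. the same property whose default this
commit flips:

```
zeppelin.spark.useNew = false
```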

### What type of PR is it?
[Improvement]

### Todos
* [ ] - Task

### What is the Jira issue?
* https://issues.apache.org/jira/browse/ZEPPELIN-3508

### How should this be tested?
* CI pass

### Screenshots (if appropriate)

### Questions:
* Do the license files need updating? No
* Are there breaking changes for older versions? No
* Does this need documentation? No

Author: Jeff Zhang <zjf...@apache.org>

Closes #2989 from zjffdu/ZEPPELIN-3508 and squashes the following commits:

25c900b9f [Jeff Zhang] ZEPPELIN-3508. Enable new spark interpreter in 0.8.0


Project: http://git-wip-us.apache.org/repos/asf/zeppelin/repo
Commit: http://git-wip-us.apache.org/repos/asf/zeppelin/commit/ffa4ee31
Tree: http://git-wip-us.apache.org/repos/asf/zeppelin/tree/ffa4ee31
Diff: http://git-wip-us.apache.org/repos/asf/zeppelin/diff/ffa4ee31

Branch: refs/heads/master
Commit: ffa4ee31a2d45a0f8e4256cd3c285357c42410e1
Parents: 2a96636
Author: Jeff Zhang <zjf...@apache.org>
Authored: Tue May 29 12:55:02 2018 +0800
Committer: Jeff Zhang <zjf...@apache.org>
Committed: Tue May 29 13:27:50 2018 +0800

----------------------------------------------------------------------
 docs/interpreter/spark.md                                     | 3 +--
 spark/interpreter/src/main/resources/interpreter-setting.json | 2 +-
 2 files changed, 2 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/zeppelin/blob/ffa4ee31/docs/interpreter/spark.md
----------------------------------------------------------------------
diff --git a/docs/interpreter/spark.md b/docs/interpreter/spark.md
index 85b2981..51b7c9e 100644
--- a/docs/interpreter/spark.md
+++ b/docs/interpreter/spark.md
@@ -205,8 +205,7 @@ You can either specify them in `zeppelin-env.sh`, or in interpreter setting page
 in interpreter setting page means you can use multiple versions of `spark` & `hadoop` in one zeppelin instance.
 
 ### 4. New Version of SparkInterpreter
-There's one new version of SparkInterpreter starting with better spark support and code completion from Zeppelin 0.8.0, by default we still use the old version of SparkInterpreter.
-If you want to use the new one, you can configure `zeppelin.spark.useNew` as `true` in its interpreter setting.
+Starting from Zeppelin 0.8.0, there is a new version of SparkInterpreter with better Spark support and code completion. It is enabled by default, but users can still fall back to the old version of SparkInterpreter by setting `zeppelin.spark.useNew` to `false` in its interpreter setting.
 
 ## SparkContext, SQLContext, SparkSession, ZeppelinContext
 SparkContext, SQLContext and ZeppelinContext are automatically created and exposed as variable names `sc`, `sqlContext` and `z`, respectively, in Scala, Python and R environments.

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/ffa4ee31/spark/interpreter/src/main/resources/interpreter-setting.json
----------------------------------------------------------------------
diff --git a/spark/interpreter/src/main/resources/interpreter-setting.json b/spark/interpreter/src/main/resources/interpreter-setting.json
index 30ae737..c4bd442 100644
--- a/spark/interpreter/src/main/resources/interpreter-setting.json
+++ b/spark/interpreter/src/main/resources/interpreter-setting.json
@@ -78,7 +78,7 @@
       "zeppelin.spark.useNew": {
         "envName": null,
         "propertyName": "zeppelin.spark.useNew",
-        "defaultValue": "false",
+        "defaultValue": "true",
         "description": "Whether use new spark interpreter implementation",
         "type": "checkbox"
       }
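
As an aside, here is a minimal sketch of what the flipped `defaultValue` amounts
to at runtime, assuming a delegating interpreter that reads
`zeppelin.spark.useNew` and picks the new or old implementation; the class and
method names below are hypothetical, for illustration only:

```java
import java.util.Properties;

// Sketch only: shows how the new defaultValue feeds a boolean toggle.
// SparkImplementationSelector / chooseImplementation are hypothetical names,
// not Zeppelin's actual classes.
public class SparkImplementationSelector {

  static String chooseImplementation(Properties properties) {
    // After this commit, an unset property behaves like "true".
    boolean useNew = Boolean.parseBoolean(
        properties.getProperty("zeppelin.spark.useNew", "true"));
    return useNew ? "new SparkInterpreter implementation"
                  : "old SparkInterpreter implementation";
  }

  public static void main(String[] args) {
    Properties props = new Properties();
    System.out.println(chooseImplementation(props));      // new implementation

    props.setProperty("zeppelin.spark.useNew", "false");
    System.out.println(chooseImplementation(props));      // old implementation
  }
}
```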
