zjffdu commented on a change in pull request #3375: ZEPPELIN-4176. Remove old spark interpreter
URL: https://github.com/apache/zeppelin/pull/3375#discussion_r291898550
##########
File path: spark/interpreter/src/main/java/org/apache/zeppelin/spark/SparkInterpreter.java
##########
@@ -50,46 +71,103 @@ public SparkInterpreter(Properties properties) {
     if (Boolean.parseBoolean(properties.getProperty("zeppelin.spark.scala.color", "true"))) {
       System.setProperty("scala.color", "true");
     }
-    if (Boolean.parseBoolean(properties.getProperty("zeppelin.spark.useNew", "false"))) {
-      delegation = new NewSparkInterpreter(properties);
-    } else {
-      delegation = new OldSparkInterpreter(properties);
-    }
-    delegation.setParentSparkInterpreter(this);
+    this.enableSupportedVersionCheck = java.lang.Boolean.parseBoolean(
+        properties.getProperty("zeppelin.spark.enableSupportedVersionCheck", "true"));
+    innerInterpreterClassMap.put("2.10", "org.apache.zeppelin.spark.SparkScala210Interpreter");
+    innerInterpreterClassMap.put("2.11", "org.apache.zeppelin.spark.SparkScala211Interpreter");
   }
 
   @Override
   public void open() throws InterpreterException {
-    delegation.setInterpreterGroup(getInterpreterGroup());
-    delegation.setUserName(getUserName());
-    delegation.setClassloaderUrls(getClassloaderUrls());
-
-    delegation.open();
+    try {
+      String scalaVersion = extractScalaVersion();
+      LOGGER.info("Using Scala Version: " + scalaVersion);
+      SparkConf conf = new SparkConf();
+      for (Map.Entry<Object, Object> entry : getProperties().entrySet()) {
+        if (!StringUtils.isBlank(entry.getValue().toString())) {
+          conf.set(entry.getKey().toString(), entry.getValue().toString());
+        }
+        // zeppelin.spark.useHiveContext & zeppelin.spark.concurrentSQL are legacy zeppelin
+        // properties, convert them to spark properties here.
+        if (entry.getKey().toString().equals("zeppelin.spark.useHiveContext")) {
+          conf.set("spark.useHiveContext", entry.getValue().toString());
+        }
+        if (entry.getKey().toString().equals("zeppelin.spark.concurrentSQL")
Review comment:
This would not affect the execution order of Spark Scala code, because the Spark Scala interpreter uses a `FIFOScheduler`. Only `SparkSqlInterpreter` is affected, since `SparkSqlInterpreter` uses a `ParallelScheduler`:
https://github.com/apache/zeppelin/blob/master/spark/interpreter/src/main/java/org/apache/zeppelin/spark/SparkSqlInterpreter.java#L128
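For context, here is a minimal sketch of what such a scheduler switch can look like, assuming Zeppelin's `SchedulerFactory` API; the `createOrGetParallelScheduler`/`createOrGetFIFOScheduler` calls and the `zeppelin.spark.concurrentSQL.max` property name are assumptions for illustration, not copied from the linked file:

```java
import org.apache.zeppelin.scheduler.Scheduler;
import org.apache.zeppelin.scheduler.SchedulerFactory;

// Illustrative sketch only (not the actual SparkSqlInterpreter code):
// choose the scheduler based on the zeppelin.spark.concurrentSQL property.
@Override
public Scheduler getScheduler() {
  boolean concurrentSQL =
      Boolean.parseBoolean(getProperty("zeppelin.spark.concurrentSQL", "false"));
  if (concurrentSQL) {
    // Parallel scheduler: %sql paragraphs run concurrently, up to the
    // configured maximum (property name assumed here).
    int maxConcurrency =
        Integer.parseInt(getProperty("zeppelin.spark.concurrentSQL.max", "10"));
    return SchedulerFactory.singleton().createOrGetParallelScheduler(
        SparkSqlInterpreter.class.getName() + this.hashCode(), maxConcurrency);
  }
  // FIFO scheduler: %sql paragraphs run one at a time in submission order,
  // matching the behavior of the Spark Scala interpreter.
  return SchedulerFactory.singleton().createOrGetFIFOScheduler(
      SparkSqlInterpreter.class.getName() + this.hashCode());
}
```

With the FIFO scheduler a paragraph only starts after the previous one finishes, while with the parallel scheduler independent `%sql` paragraphs can run at the same time, which is why only `SparkSqlInterpreter` is sensitive to `zeppelin.spark.concurrentSQL`.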