Guys - I'm forwarding this because he raises an issue I've raised several times in different contexts: what should we do about spark-under-Zeppelin? I had thought it would be fully deprecated by now. Part of the issue he raises is that the Zeppelin configuration system is currently not completely defined: you can set conflicting values in different places, and different parts of Zeppelin will make different choices about which one to honor. As an example, try the spark and pyspark interpreters when spark.home is set differently from SPARK_HOME.
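To make the conflict concrete, here is a rough sketch (the paths and versions are made up, and the exact file locations may vary by install):

    # conf/zeppelin-env.sh - environment the Zeppelin daemon starts with
    export SPARK_HOME=/opt/spark-1.6.0

    # meanwhile the Spark interpreter setting, persisted in conf/interpreter.json
    # (editable from the Interpreter page in the UI), can carry a different value:
    #   "spark.home": "/opt/spark-1.5.2"

Depending on which interpreter you go through, one or the other location wins.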
Pulling spark-under-Zeppelin out would let us remove a giant chunk of configuration, parts of the build that have been cumbersome to maintain, etc.

Begin forwarded message:

> From: Weipu Zhao <zhaoweipu....@gmail.com>
> Date: March 4, 2016 at 2:14:12 PM EST
> To: us...@zeppelin.incubator.apache.org
> Subject: Zeppelin Spark Interpreter ignore settings in spark-default.conf
> Reply-To: us...@zeppelin.incubator.apache.org
>
> Hi all,
>
> I have been struggling with this and feel there should be a way to fix it.
> With the new Zeppelin 0.5.6 we are able to point SPARK_HOME at an external
> Spark, which is good because we keep a lot of Spark configuration in
> SPARK_HOME/conf/spark-default.conf. However, one setting,
> spark.executor.memory, is set to a default of 512m in
> zeppelin/conf/interpreter.json, and whenever we restart Zeppelin that value
> is always picked up instead of the one in spark-default.conf. The only way
> we can get around this is to put -Dspark.executor.memory in
> ZEPPELIN_JAVA_OPTS.
>
> In addition, if we go to the Interpreter settings in a running Zeppelin and
> restart the Spark interpreter, it also restarts the Spark context with the
> default settings from interpreter.json rather than spark-default.conf.
>
> What is the proper way to configure Zeppelin with an external Spark so that
> the Spark interpreter reads the settings in spark-default.conf?
>
> Regards,
> Weipu
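For anyone hitting the same thing in the meantime, the workaround Weipu describes would look roughly like this in conf/zeppelin-env.sh (the memory value is just an example):

    # conf/zeppelin-env.sh
    export SPARK_HOME=/opt/spark
    # force the executor memory past the 512m default cached in interpreter.json
    export ZEPPELIN_JAVA_OPTS="-Dspark.executor.memory=4g"

It only covers that one property, though; anything else you want to force has to be added the same way.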