I'm trying to deploy Zeppelin 0.10 on K8s, using the following manual build:

mvn clean package -DskipTests -Pspark-scala-2.12 -Pinclude-hadoop \
  -Pspark-3.0 -Phadoop2 -Pbuild-distr \
  -pl zeppelin-interpreter,zeppelin-zengine,spark/interpreter,spark/spark-dependencies,zeppelin-web,zeppelin-server,zeppelin-distribution,jdbc,zeppelin-plugins/notebookrepo/filesystem,zeppelin-plugins/launcher/k8s-standard \
  -am


Spark itself is configured to use Mesos as the resource manager.
When the Spark interpreter starts, K8sRemoteInterpreterProcess appears to
look for a separate pod for the Spark interpreter:

Pod pod = client.pods().inNamespace(namespace).withName(podName).get();

Is there any option not to run the Spark interpreter as a separate pod, and
instead create the Spark context within the Zeppelin process itself? I'm
trying to understand whether I could make Zeppelin
use K8sStandardInterpreterLauncher instead (I assume it's an alternative to
the remote interpreter?).

Thanks,
Lior
