Hi,

I'm trying to build and install Zeppelin 0.6.0 (version 0.5.6 shows the same symptoms) on a new CDH cluster running Hadoop 2.6.0-cdh5.7.0 and Spark 1.6.0, but I get this error when I set SPARK_HOME in zeppelin-env.sh to point at the /opt/cloudera/parcels/CDH/lib/spark directory:

java.lang.NoSuchMethodException: org.apache.spark.repl.SparkILoop$SparkILoopInterpreter.classServerUri()

This seems to imply that no interpreters are available for Spark. Is there a way around this? I've tried deleting the build directory and pulling a fresh copy, but I end up in the same place.
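
For reference, the relevant part of my conf/zeppelin-env.sh looks roughly like this (SPARK_HOME is the setting in question; the HADOOP_CONF_DIR line is just what I'd expect on a CDH host):

export SPARK_HOME=/opt/cloudera/parcels/CDH/lib/spark
export HADOOP_CONF_DIR=/etc/hadoop/conf   # assumption: the standard CDH client config path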

It built successfully on Ubuntu 14.04 LTS with Maven 3.3.3 using this command:

sudo mvn clean package -Dspark.version=1.6.0 -Pspark-1.6 -Dhadoop.version=2.6.0-cdh5.6.0 -Phadoop-2.6 -Ppyspark -Pvendor-repo -DskipTests
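
For completeness, after the build I start Zeppelin with the stock daemon script:

bin/zeppelin-daemon.sh start   # web UI at http://localhost:8080 unless changed in zeppelin-site.xml
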
However, if I leave the configuration at its defaults and try to run the "Zeppelin Tutorial" notebook, it returns this error:

akka.ConfigurationException: Akka JAR version [2.2.3] does not match the provided config version [2.3.11]

That makes sense insofar as CDH builds its Spark against Akka 2.2.3, but I'm not sure why the built-in Spark is picking up the 2.2.3 JAR at all. Shouldn't I be able to run Zeppelin without any dependencies on CDH, or did the -Pvendor-repo profile mess up this build?
http://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_rn_spark_ic.html
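
In case it helps anyone reproduce this, something like the following should show which Akka JARs each side ships (the Zeppelin path is a placeholder for wherever the build tree lives):

find /opt/cloudera/parcels/CDH -name 'akka-actor*.jar' 2>/dev/null
find /path/to/zeppelin -name 'akka*.jar' 2>/dev/null   # placeholder: the Zeppelin checkout/install dir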

Any guidance is welcome!

thx,
z
--
Scott Zelenka
Jabber Engineering - US
Phone: (+1) 919-392-1394
Email: szele...@cisco.com
