Re: Cloudera Spark 2.2
Might need to recompile Zeppelin with Scala 2.11? Also, Spark 2.2 now requires JDK 8, I believe.

-- Ruslan Dautkhanov

On Tue, Aug 1, 2017 at 6:26 PM, Benjamin Kim wrote:
> Here is more.
>
> org.apache.zeppelin.interpreter.InterpreterException: WARNING:
> User-defined SPARK_HOME (/opt/cloudera/parcels/SPARK2-2.2.0.cloudera1-1.cdh5.12.0.p0.142354/lib/spark2) overrides detected (/opt/cloudera/parcels/SPARK2/lib/spark2).
> WARNING: Running spark-class from user-defined location.
> Exception in thread "main" java.lang.NoSuchMethodError: scala.Predef$.$conforms()Lscala/Predef$$less$colon$less;
>   at org.apache.spark.util.Utils$.getDefaultPropertiesFile(Utils.scala:2103)
>   at org.apache.spark.deploy.SparkSubmitArguments$$anonfun$mergeDefaultSparkProperties$1.apply(SparkSubmitArguments.scala:124)
>   at org.apache.spark.deploy.SparkSubmitArguments$$anonfun$mergeDefaultSparkProperties$1.apply(SparkSubmitArguments.scala:124)
>   at scala.Option.getOrElse(Option.scala:120)
>   at org.apache.spark.deploy.SparkSubmitArguments.mergeDefaultSparkProperties(SparkSubmitArguments.scala:124)
>   at org.apache.spark.deploy.SparkSubmitArguments.<init>(SparkSubmitArguments.scala:110)
>   at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:112)
>   at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>
> Cheers,
> Ben
>
> On Tue, Aug 1, 2017 at 5:24 PM, Jeff Zhang wrote:
>> Then it is due to some classpath issue. I am not familiar with CDH;
>> please check whether CDH's Spark includes the Hadoop jars with it.
>>
>> Benjamin Kim wrote on Wed, Aug 2, 2017 at 8:22 AM:
>>> Here is the error that was sent to me.
>>>
>>> org.apache.zeppelin.interpreter.InterpreterException: Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/fs/FSDataInputStream
>>> Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.FSDataInputStream
>>>
>>> Cheers,
>>> Ben
>>>
>>> On Tue, Aug 1, 2017 at 5:20 PM, Jeff Zhang wrote:
>>>> By default, 0.7.1 doesn't support Spark 2.2, but you can set
>>>> zeppelin.spark.enableSupportedVersionCheck in the interpreter setting
>>>> to disable the supported-version check.
>>>>
>>>> Jeff Zhang wrote on Wed, Aug 2, 2017 at 8:18 AM:
>>>>> What's the error you see in the log?
>>>>>
>>>>> Benjamin Kim wrote on Wed, Aug 2, 2017 at 8:18 AM:
>>>>>> Has anyone configured Zeppelin 0.7.1 for Cloudera's release of Spark 2.2?
>>>>>> I can't get it to work. I downloaded the binary and set SPARK_HOME to
>>>>>> /opt/cloudera/parcels/SPARK2/lib/spark2. I must be missing something.
>>>>>>
>>>>>> Cheers,
>>>>>> Ben
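For anyone following Ruslan's suggestion, rebuilding Zeppelin against Scala 2.11 would look roughly like this. This is only a sketch: the Maven profile names (-Pspark-2.1, -Phadoop-2.6, -Pyarn) are illustrative and vary by Zeppelin version, so confirm them against the build instructions in your source tree.

```shell
# Sketch: build Zeppelin 0.7.1 from source against Scala 2.11 on JDK 8.
# Profile names below are assumptions; check the POM for your checkout.
git clone https://github.com/apache/zeppelin.git
cd zeppelin
git checkout v0.7.1
./dev/change_scala_version.sh 2.11   # switch the build to Scala 2.11
mvn clean package -DskipTests -Pscala-2.11 -Pspark-2.1 -Phadoop-2.6 -Pyarn
```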
Re: Cloudera Spark 2.2
Here is more.

org.apache.zeppelin.interpreter.InterpreterException: WARNING:
User-defined SPARK_HOME (/opt/cloudera/parcels/SPARK2-2.2.0.cloudera1-1.cdh5.12.0.p0.142354/lib/spark2) overrides detected (/opt/cloudera/parcels/SPARK2/lib/spark2).
WARNING: Running spark-class from user-defined location.
Exception in thread "main" java.lang.NoSuchMethodError: scala.Predef$.$conforms()Lscala/Predef$$less$colon$less;
  at org.apache.spark.util.Utils$.getDefaultPropertiesFile(Utils.scala:2103)
  at org.apache.spark.deploy.SparkSubmitArguments$$anonfun$mergeDefaultSparkProperties$1.apply(SparkSubmitArguments.scala:124)
  at org.apache.spark.deploy.SparkSubmitArguments$$anonfun$mergeDefaultSparkProperties$1.apply(SparkSubmitArguments.scala:124)
  at scala.Option.getOrElse(Option.scala:120)
  at org.apache.spark.deploy.SparkSubmitArguments.mergeDefaultSparkProperties(SparkSubmitArguments.scala:124)
  at org.apache.spark.deploy.SparkSubmitArguments.<init>(SparkSubmitArguments.scala:110)
  at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:112)
  at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Cheers,
Ben

On Tue, Aug 1, 2017 at 5:24 PM, Jeff Zhang wrote:
> Then it is due to some classpath issue. I am not familiar with CDH;
> please check whether CDH's Spark includes the Hadoop jars with it.
>
> Benjamin Kim wrote on Wed, Aug 2, 2017 at 8:22 AM:
>> Here is the error that was sent to me.
>>
>> org.apache.zeppelin.interpreter.InterpreterException: Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/fs/FSDataInputStream
>> Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.FSDataInputStream
>>
>> Cheers,
>> Ben
>>
>> On Tue, Aug 1, 2017 at 5:20 PM, Jeff Zhang wrote:
>>> By default, 0.7.1 doesn't support Spark 2.2, but you can set
>>> zeppelin.spark.enableSupportedVersionCheck in the interpreter setting
>>> to disable the supported-version check.
>>>
>>> Jeff Zhang wrote on Wed, Aug 2, 2017 at 8:18 AM:
>>>> What's the error you see in the log?
>>>>
>>>> Benjamin Kim wrote on Wed, Aug 2, 2017 at 8:18 AM:
>>>>> Has anyone configured Zeppelin 0.7.1 for Cloudera's release of Spark 2.2?
>>>>> I can't get it to work. I downloaded the binary and set SPARK_HOME to
>>>>> /opt/cloudera/parcels/SPARK2/lib/spark2. I must be missing something.
>>>>>
>>>>> Cheers,
>>>>> Ben
Re: Cloudera Spark 2.2
Here is the error that was sent to me.

org.apache.zeppelin.interpreter.InterpreterException: Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/fs/FSDataInputStream
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.FSDataInputStream

Cheers,
Ben

On Tue, Aug 1, 2017 at 5:20 PM, Jeff Zhang wrote:
> By default, 0.7.1 doesn't support Spark 2.2, but you can set
> zeppelin.spark.enableSupportedVersionCheck in the interpreter setting
> to disable the supported-version check.
>
> Jeff Zhang wrote on Wed, Aug 2, 2017 at 8:18 AM:
>> What's the error you see in the log?
>>
>> Benjamin Kim wrote on Wed, Aug 2, 2017 at 8:18 AM:
>>> Has anyone configured Zeppelin 0.7.1 for Cloudera's release of Spark 2.2?
>>> I can't get it to work. I downloaded the binary and set SPARK_HOME to
>>> /opt/cloudera/parcels/SPARK2/lib/spark2. I must be missing something.
>>>
>>> Cheers,
>>> Ben
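For reference, the version-check property Jeff mentions is set on the spark interpreter (Interpreter menu, edit the spark group, add the property). A minimal fragment of what to add:

```properties
# Spark interpreter property in Zeppelin's interpreter settings page.
# Disables the check that rejects Spark versions 0.7.1 doesn't know about.
zeppelin.spark.enableSupportedVersionCheck=false
```

Note this only skips the version check; it does not fix classpath mismatches like the NoClassDefFoundError above.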
Re: Cloudera Spark 2.2
What's the error you see in the log?

Benjamin Kim wrote on Wed, Aug 2, 2017 at 8:18 AM:
> Has anyone configured Zeppelin 0.7.1 for Cloudera's release of Spark 2.2?
> I can't get it to work. I downloaded the binary and set SPARK_HOME to
> /opt/cloudera/parcels/SPARK2/lib/spark2. I must be missing something.
>
> Cheers,
> Ben
Cloudera Spark 2.2
Has anyone configured Zeppelin 0.7.1 for Cloudera's release of Spark 2.2? I can't get it to work. I downloaded the binary and set SPARK_HOME to /opt/cloudera/parcels/SPARK2/lib/spark2. I must be missing something.

Cheers,
Ben
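For comparison, a typical conf/zeppelin-env.sh for pointing Zeppelin at a CDH Spark 2 parcel might look like the sketch below. The HADOOP_CONF_DIR and JAVA_HOME paths are assumptions based on common CDH defaults; adjust them for your cluster.

```shell
# conf/zeppelin-env.sh (sketch; paths are CDH-default assumptions)
export SPARK_HOME=/opt/cloudera/parcels/SPARK2/lib/spark2
export HADOOP_CONF_DIR=/etc/hadoop/conf    # lets Spark find the Hadoop config
export JAVA_HOME=/usr/java/jdk1.8.0        # Spark 2.2 requires JDK 8
```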
Zeppelin help: custom interpreter to create data, then using this data with existing interpreters
Hello,

I have created a custom interpreter that collects data from a service with a custom query language, and I would like to be able to use this data with existing interpreters in Zeppelin, like the Spark interpreters.

Basically, the scenario I'm imagining is: the custom interpreter runs, formats the data into a DataFrame/RDD, injects the collected data into the context, and then subsequent paragraphs use interpreters from the Spark group to process this data further. This is similar to what happens in the "Zeppelin Tutorial/Basic Features (Spark)" notebook, where Scala code creates some data, uses "registerTempTable" to put the data into the Spark context, and then this data can be used in SQL scripts in later paragraphs.

How can I accomplish this? Is there a simple solution involving calling something like "registerTempTable" in the custom interpreter and then running the other interpreters normally below, as the tutorial does?

Thank you for any guidance.
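One way this is commonly wired up is through Zeppelin's resource pool rather than registerTempTable directly (registerTempTable only works inside the Spark interpreter's own session, which a custom interpreter in another group doesn't share). The sketch below is untested against 0.7.1 and makes assumptions: runCustomQuery is a hypothetical stand-in for your service call, and it assumes z.get in a Spark paragraph reads from the same resource pool that InterpreterContext exposes.

```scala
// Inside the custom interpreter's interpret() method (sketch).
// Assumes zeppelin-interpreter is on the classpath.
import org.apache.zeppelin.interpreter.{InterpreterContext, InterpreterResult}

def interpret(query: String, context: InterpreterContext): InterpreterResult = {
  // Hypothetical service call returning plain Scala data (keep it serializable).
  val rows: Seq[(String, Long)] = runCustomQuery(query)
  // Publish the result into the resource pool so other interpreters can read it.
  context.getResourcePool.put("customData", rows)
  new InterpreterResult(InterpreterResult.Code.SUCCESS,
    s"Stored ${rows.size} rows in the resource pool as 'customData'")
}
```

A later %spark paragraph could then pull the data out, turn it into a DataFrame, and register it so %sql paragraphs can query it:

```
%spark
val rows = z.get("customData").asInstanceOf[Seq[(String, Long)]]
val df = rows.toDF("key", "value")
df.registerTempTable("custom_data")   // now usable from %sql paragraphs
```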