Hi,

I added the "-Pyarn" flag, modified the master property, and updated the
zeppelin-env.sh file, but I still get the same error when running %sql:
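For reference, the changes to zeppelin-env.sh were along these lines (a sketch only; the Hadoop config path and master value are placeholders that depend on the cluster):

```shell
# zeppelin-env.sh (sketch; path and master URL below are assumed, not from the thread)
export HADOOP_CONF_DIR=/etc/hadoop/conf   # point Zeppelin at the cluster's Hadoop config
export MASTER=yarn-client                 # run the Spark interpreter against YARN
```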

java.lang.ClassCastException: org.apache.hadoop.mapred.JobConf cannot be
cast to org.apache.spark.rdd.RDD at
org.apache.spark.SparkContext$$anonfun$27.apply(SparkContext.scala:1045)
at
org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:170)
at
org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:170)
at scala.Option.map(Option.scala:145)

In the UI I get this error: java.lang.reflect.InvocationTargetException.

I think the example does not support the Spark version in my environment,
which is 1.2.0.

One step back in the tutorial I get an error when running .toDF(). I guess
this is also because of the Spark version?

Any suggestion? Maybe there is an older example I can use?

Thanks,
Ronen



On Wed, May 6, 2015 at 4:37 PM, MrAsanjar . <afsan...@gmail.com> wrote:

> Stas,
> As far as I know, you also have to update the zeppelin-env.sh file.
> This is my cluster configuration (I used Juju to build the cluster of
> LXC nodes on my laptop):
> hadoop cluster nodes
>  1 namenodes node
>  1 resourcemanager node
>  3+ compute nodes
> Spark 1.3 node
>  1 Spark+hadoop-plugin+zeppelin
>
> If you are using Ubuntu or testing on AWS, I could share my Zeppelin Juju
> bundles. It builds the above cluster in less than 15 minutes.
>
> On Wed, May 6, 2015 at 8:18 AM, Stas Zubarev <szuba...@gmail.com> wrote:
>
>> I built Zeppelin with the "-Pyarn" flag - does that mean Zeppelin will be
>> started in cluster mode on HDP 2.2, or do I need additional configuration
>> for it?
>>
>> On Wed, May 6, 2015 at 9:14 AM, MrAsanjar . <afsan...@gmail.com> wrote:
>>
>>> I assume your Hadoop cluster is configured for YARN. Make sure you
>>> built with the "-Pyarn" flag as well. Also verify you have "export
>>> HADOOP_CONF_DIR=" in your zeppelin-env.sh file.
>>>
>>> On Wed, May 6, 2015 at 5:45 AM, Ronen Gross <ronengr...@gmail.com>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> I get an exception when I try to run the Zeppelin Tutorial example.
>>>> Zeppelin in Spark local mode worked OK, but when I use Spark in
>>>> cluster mode I get an error when I run %sql.
>>>>
>>>> The Error is:
>>>> java.lang.ClassCastException: org.apache.hadoop.mapred.JobConf cannot
>>>> be cast to org.apache.spark.rdd.RDD at
>>>> org.apache.spark.SparkContext$$anonfun$27.apply(SparkContext.scala:1045)
>>>> at
>>>> org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:170)
>>>> at
>>>> org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:170)
>>>> at scala.Option.map(Option.scala:145)
>>>> ...
>>>> ...
>>>> ...
>>>>
>>>>
>>>> I used this command to build a distribution of Zeppelin:
>>>> mvn clean package -Pspark-1.2 -Phadoop-2.4 -DskipTests
>>>> -Dhadoop.version=2.5.0-cdh5.3.1 -Dspark.version=1.2.1 -P build-distr
>>>>
>>>>
>>>> Is this error caused by using Spark 1.2.0?
>>>>
>>>>
>>>> Thanks,
>>>> Ronen
>>>>
>>>
>>>
>>
>