That's how it's supposed to work, right? This is exactly why you don't
bundle Hadoop into your application's assembly .jar: you pick up things
like Hadoop from the cluster at runtime. At least, that was the gist of
what Matei described last month. This is not a CDH-specific issue.
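A quick way to see which side of this you're on is to probe the classpath at runtime. This is just a sketch (the class name `ClasspathCheck` is mine, not from the thread): it asks the JVM whether `org.apache.hadoop.io.Writable` is loadable. On a bare JVM it won't be; on a cluster node where Hadoop is provided at runtime, it should be.

```java
// Sketch: detect at runtime whether a class is on the classpath.
// This mirrors the failure mode in this thread: if the cluster is expected
// to supply Hadoop, a missing org.apache.hadoop.io.Writable surfaces as a
// ClassNotFoundException instead of being bundled in your assembly jar.
public class ClasspathCheck {
    static boolean isPresent(String className) {
        try {
            // initialize=false: we only care whether the class resolves,
            // not about running its static initializers.
            Class.forName(className, false, ClasspathCheck.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // java.lang.String is always present; whether the Hadoop class is
        // depends on your deployment (cluster-provided vs. bundled).
        System.out.println("java.lang.String present: "
                + isPresent("java.lang.String"));
        System.out.println("org.apache.hadoop.io.Writable present: "
                + isPresent("org.apache.hadoop.io.Writable"));
    }
}
```

Running this inside the deployed app (or job server) tells you immediately whether the classpath you think you have is the one you actually have.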

On Wed, Jul 23, 2014 at 8:28 AM, Debasish Das <debasish.da...@gmail.com> wrote:
> I found the issue...
>
> If you build the assembly jar from the Spark git repo,
> org.apache.hadoop.io.Writable.class is packaged with it.
>
> But the assembly jar that ships with CDH in
> /opt/cloudera/parcels/CDH/lib/spark/assembly/lib/spark-assembly_2.10-0.9.0-cdh5.0.2-hadoop2.3.0-cdh5.0.2.jar
> does not include org.apache.hadoop.io.Writable.class.
>
> That's weird...
>
> If I can run the Spark app with bare-bones Java, I'm sure it will run
> with Ooyala's job server as well.
>
>
>
> On Wed, Jul 23, 2014 at 12:15 AM, buntu <buntu...@gmail.com> wrote:
>>
>> If you need to run Spark apps through Hue, see if Ooyala's job server
>> helps:
>>
>>
>>
>> http://gethue.com/get-started-with-spark-deploy-spark-server-and-compute-pi-from-your-web-browser/
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-spark-user-list.1001560.n3.nabble.com/Spark-deployed-by-Cloudera-Manager-tp10472p10474.html
>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
>
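The difference Debasish found can also be checked programmatically instead of inspecting the jar by hand. Below is a self-contained sketch (`JarContentsCheck` is a name I made up, and it builds a tiny throwaway jar so it runs anywhere); to reproduce the actual comparison, point `JarFile` at your real spark-assembly jar and look for the `org/apache/hadoop/io/Writable.class` entry.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;

// Sketch: test whether a given .class entry is packaged inside a jar --
// the same check made by hand on the CDH assembly jar in this thread.
public class JarContentsCheck {
    static boolean containsEntry(File jar, String entry) throws IOException {
        try (JarFile jf = new JarFile(jar)) {
            // Jar entries use '/'-separated paths, e.g.
            // "org/apache/hadoop/io/Writable.class".
            return jf.getEntry(entry) != null;
        }
    }

    public static void main(String[] args) throws IOException {
        // Build a throwaway jar with one fake entry so the example is
        // self-contained; a real run would open an existing assembly jar.
        File jar = File.createTempFile("demo", ".jar");
        try (JarOutputStream out = new JarOutputStream(new FileOutputStream(jar))) {
            out.putNextEntry(new JarEntry("org/apache/hadoop/io/Writable.class"));
            out.closeEntry();
        }
        System.out.println(containsEntry(jar, "org/apache/hadoop/io/Writable.class")); // true
        System.out.println(containsEntry(jar, "org/apache/hadoop/io/Text.class"));     // false
        jar.delete();
    }
}
```

(`jar tf <assembly>.jar | grep Writable` does the same job from the shell; the point is that the two assembly jars genuinely differ in contents.)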
