Hi,

thank you for your interest in the project!

It seems the best way to get Zeppelin up and running in your case
would be to build it manually with the relevant Spark/Hadoop options,
as described here: http://zeppelin.incubator.apache.org/docs/install/install.html
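For example, the build command would look something like this (just a
sketch -- I'm assuming the -Pspark-1.3 profile and the CDH version
string here; please double-check the exact profile and
-Dhadoop.version flags against the install page above, since a
matching -Phadoop-x.x profile may also be required for your setup):

    # build Zeppelin against the Spark/Hadoop versions on your cluster
    mvn clean package -Pspark-1.3 -Dhadoop.version=2.0.0-cdh4.5.0 -DskipTests

The important part is that hadoop.version matches the exact Hadoop
version running on your cluster, otherwise remote HDFS reads can fail
with errors like the EOFException you saw.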

Please let me know if that helps.

--
BR,
Alex

On Tue, Jul 21, 2015 at 11:35 AM, 江之源 <jiangzhiy...@liulishuo.com> wrote:
> hi
> I installed Zeppelin a while ago, but it always failed on my server
> cluster. Then I happened to find z-management, installed it, and it
> worked on my server. But when I want to read an HDFS file like:
>
> sc.textFile("hdfs://llscluster/tmp/jzyresult/part-04093").count()
>
>
> it throws this error on my cluster:
>
> Job aborted due to stage failure: Task 15 in stage 6.0 failed 4 times,
> most recent failure: Lost task 15.3 in stage 6.0 (TID 386, lls7):
> java.io.EOFException
>
> When I switch it to local mode, it can read the HDFS file successfully.
> My cluster runs Spark 1.3.0 on Hadoop 2.0.0-CDH4.5.0, but the install
> options only offer Spark 1.3.0 with Hadoop 2.0.0-CDH4.7.0. Could this
> mismatch be the cause of the failure to read the HDFS file?
> Looking forward to your reply!
> THANK YOU!
> JZY



--
Kind regards,
Alexander.