Thanks Akhil and Sean for the responses.

I will try shutting down Spark, then storage, and then the instances.
Initially, when HDFS was in safe mode, I waited for more than an hour and
the problem still persisted. I will try this new method.
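
For reference, here is roughly the sequence I plan to run. This is just a
sketch, assuming a standalone Spark cluster and HDFS managed with the stock
sbin scripts, plus a configured AWS CLI; the instance ID is a placeholder:

  # 1. Stop Spark first, so nothing is still writing to HDFS
  $SPARK_HOME/sbin/stop-all.sh

  # 2. Then stop HDFS cleanly
  $HADOOP_HOME/sbin/stop-dfs.sh

  # 3. Finally stop the EC2 instances
  aws ec2 stop-instances --instance-ids i-xxxxxxxx

  # When restarting, reverse the order: start the instances, then
  # $HADOOP_HOME/sbin/start-dfs.sh, then $SPARK_HOME/sbin/start-all.sh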

Thanks!



On Sat, Jan 17, 2015 at 2:03 AM, Sean Owen <so...@cloudera.com> wrote:

> You would not want to turn off storage underneath Spark. Shut down
> Spark first, then storage, then shut down the instances. Reverse the
> order when restarting.
>
> HDFS will be in safe mode for a short time after being started before
> it becomes writeable. I would first check that it's not just that.
> Otherwise, find out why the cluster went into safe mode from the logs,
> fix it, and then leave safe mode.
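>
> For example, something like this (just a sketch, assuming the hdfs
> client is on the PATH):
>
>   # check whether the NameNode is currently in safe mode
>   hdfs dfsadmin -safemode get
>
>   # once the underlying cause is fixed, leave safe mode manually
>   hdfs dfsadmin -safemode leave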
>
> On Sat, Jan 17, 2015 at 9:03 AM, Akhil Das <ak...@sigmoidanalytics.com>
> wrote:
> > The safest way would be to first shut down HDFS and then shut down Spark
> > (calling stop-all.sh would do), and then shut down the machines.
> >
> > You can execute the following command to disable safe mode:
> >
> >> hdfs dfsadmin -safemode leave
> >
> >
> >
> > Thanks
> > Best Regards
> >
> > On Sat, Jan 17, 2015 at 8:31 AM, Su She <suhsheka...@gmail.com> wrote:
> >>
> >> Hello Everyone,
> >>
> >> I am encountering trouble running Spark applications after I shut down
> >> and restart my EC2 instances. Everything else seems to work except
> >> Spark. When I try running a simple Spark application, like
> >> sc.parallelize(), I get the message that the HDFS NameNode is in safe
> >> mode.
> >>
> >> Has anyone else had this issue? Is there a proper protocol I should be
> >> following to turn off my Spark nodes?
> >>
> >> Thank you!
> >>
> >>
> >
>
