Hello Sean and Akhil,

I shut down the services in Cloudera Manager, in the appropriate order, and
then stopped Cloudera Manager's own services as well. I then shut down my
instances. When I turned the instances back on, I was still getting the same
error.

1) I tried hadoop fs -safemode leave, but it said -safemode is an unknown
command (it does recognize hadoop fs itself).
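
From what I can tell, safe mode is controlled through dfsadmin rather than
hadoop fs, so I am guessing the command I should have tried is:

    hdfs dfsadmin -safemode leave

(or hdfs dfsadmin -safemode get just to check the current state). I haven't
confirmed that on this cluster yet, so please correct me if that's not the
right command for CDH 5.3.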

2) I also noticed that I can't ping my instances from my personal laptop, and
I can't ping google.com from my instances. However, I can still run my Kafka
ZooKeeper/server/console producer/consumer. I know this is the Spark thread,
but I thought that might be relevant.
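
(One note on that: I believe EC2 security groups block inbound ICMP by
default, so ping from my laptop failing may not mean much by itself. To check
outbound connectivity and DNS from an instance I was going to try something
like:

    curl -sI https://www.google.com | head -n 1
    nslookup google.com

but I haven't gone through the security group / route table settings yet.)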

Thank you for any suggestions!

Best,

Su



On Thu, Jan 22, 2015 at 2:41 AM, Sean Owen <so...@cloudera.com> wrote:

> If you are using CDH, you would be shutting down services with
> Cloudera Manager. I believe you can do it manually using Linux
> 'services' if you do the steps correctly across your whole cluster.
> I'm not sure if the stock stop-all.sh script is supposed to work.
> Certainly, if you are using CM, by far the easiest is to start/stop
> all of these things in CM.
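>
> (By Linux 'services' I mean, on a package-based install, init scripts along
> the lines of
>
>     sudo service hadoop-hdfs-namenode stop
>     sudo service hadoop-yarn-resourcemanager stop
>
> though the exact service names depend on which roles are on each host, and
> a parcel deployment managed by CM does not use those init scripts.)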
>
> On Wed, Jan 21, 2015 at 6:08 PM, Su She <suhsheka...@gmail.com> wrote:
> > Hello Sean & Akhil,
> >
> > I tried running the stop-all.sh script on my master and I got this
> > message:
> >
> > localhost: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
> > chown: changing ownership of
> > `/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/logs':
> > Operation not permitted
> > no org.apache.spark.deploy.master.Master to stop
> >
> > I am running Spark (on YARN) via Cloudera Manager. I tried stopping it
> > from Cloudera Manager first, but it looked like it was only stopping the
> > history server, so I started Spark again and tried ./stop-all.sh and got
> > the above message.
> >
> > Also, what is the command for shutting down storage, or can I simply stop
> > HDFS in Cloudera Manager?
> >
> > Thank you for the help!
> >
> >
> >
> > On Sat, Jan 17, 2015 at 12:58 PM, Su She <suhsheka...@gmail.com> wrote:
> >>
> >> Thanks Akhil and Sean for the responses.
> >>
> >> I will try shutting down Spark, then storage, and then the instances.
> >> Initially, when HDFS was in safe mode, I waited for >1 hour and the
> >> problem still persisted. I will try this new method.
> >>
> >> Thanks!
> >>
> >>
> >>
> >> On Sat, Jan 17, 2015 at 2:03 AM, Sean Owen <so...@cloudera.com> wrote:
> >>>
> >>> You would not want to turn off storage underneath Spark. Shut down
> >>> Spark first, then storage, then shut down the instances. Reverse the
> >>> order when restarting.
> >>>
> >>> HDFS will be in safe mode for a short time after being started before
> >>> it becomes writeable. I would first check that it's not just that.
> >>> Otherwise, find out why the cluster went into safe mode from the logs,
> >>> fix it, and then leave safe mode.
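> >>>
> >>> (To see whether it is still in safe mode and why, something like
> >>>
> >>>     hdfs dfsadmin -safemode get
> >>>
> >>> plus a look at the NameNode log should tell you; hdfs dfsadmin -safemode
> >>> leave forces it out once the underlying problem is fixed.)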
> >>>
> >>> On Sat, Jan 17, 2015 at 9:03 AM, Akhil Das <ak...@sigmoidanalytics.com>
> >>> wrote:
> >>> > The safest way would be to first shut down HDFS and then shut down
> >>> > Spark (calling stop-all.sh would do) and then shut down the machines.
> >>> >
> >>> > You can execute the following command to disable safe mode:
> >>> >
> >>> >> hadoop fs -safemode leave
> >>> >
> >>> >
> >>> >
> >>> > Thanks
> >>> > Best Regards
> >>> >
> >>> > On Sat, Jan 17, 2015 at 8:31 AM, Su She <suhsheka...@gmail.com> wrote:
> >>> >>
> >>> >> Hello Everyone,
> >>> >>
> >>> >> I am encountering trouble running Spark applications when I shut down
> >>> >> my EC2 instances. Everything else seems to work except Spark. When I
> >>> >> try running a simple Spark application, like sc.parallelize(), I get
> >>> >> the message that the HDFS NameNode is in safe mode.
> >>> >>
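> >>> >> (As a concrete example of the kind of thing that fails: I assume even
> >>> >> a tiny write such as
> >>> >>
> >>> >>     sc.parallelize(1 to 10).saveAsTextFile("hdfs:///tmp/safemode-test")
> >>> >>
> >>> >> would hit the same error, since HDFS rejects writes with a
> >>> >> SafeModeException while the NameNode is in safe mode. That path is
> >>> >> just a made-up one for illustration.)
> >>> >>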
> >>> >> Has anyone else had this issue? Is there a proper protocol I should
> >>> >> be following to turn off my Spark nodes?
> >>> >>
> >>> >> Thank you!
> >>> >>
> >>> >>
> >>> >
> >>
> >>
> >
>
