If you are using CDH, you would normally shut down services with
Cloudera Manager. I believe you can also do it manually with the Linux
'service' scripts if you perform the steps in the right order across
your whole cluster. I'm not sure the stock stop-all.sh script is
supposed to work on a CM-managed install. Certainly, if you are using
CM, by far the easiest approach is to start/stop all of these things in CM.
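
For reference, on a package-based (non-parcel) CDH install, the per-role
init scripts look roughly like the sketch below. The exact service names
are an assumption and vary by CDH version and by which roles a host runs:

  # stop Spark roles first, then YARN, then HDFS (reverse the order to start)
  sudo service spark-history-server stop
  sudo service hadoop-yarn-nodemanager stop
  sudo service hadoop-yarn-resourcemanager stop
  sudo service hadoop-hdfs-datanode stop
  sudo service hadoop-hdfs-namenode stop

On a parcel-based, CM-managed cluster these init scripts are not
installed, so use CM itself.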

On Wed, Jan 21, 2015 at 6:08 PM, Su She <suhsheka...@gmail.com> wrote:
> Hello Sean & Akhil,
>
> I tried running the stop-all.sh script on my master and I got this message:
>
> localhost: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
> chown: changing ownership of
> `/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/logs': Operation
> not permitted
> no org.apache.spark.deploy.master.Master to stop
>
> I am running Spark (on YARN) via Cloudera Manager. I tried stopping it from
> Cloudera Manager first, but it looked like it was only stopping the history
> server, so I started Spark again and tried ./stop-all.sh and got the above
> message.
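>
> (Aside: both errors above suggest the script is being run as a user that
> can neither SSH to localhost nor write to the parcel's logs directory. A
> minimal sketch of running it as the service account instead; the 'spark'
> user is an assumption:
>
>   # see who owns the Spark logs directory under the parcel
>   ls -ld /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/logs
>   # run the stop script as that user
>   sudo -u spark /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/sbin/stop-all.sh
>
> On a CM-managed cluster the daemons are supervised by the CM agent, so
> stopping them from CM remains the supported route.)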
>
> Also, what is the command for shutting down storage, or can I simply stop
> HDFS in Cloudera Manager?
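>
> (On a stock Hadoop install, the bundled scripts would be something like
> the sketch below; $HADOOP_HOME is an assumption, and none of this applies
> to a parcel-based CDH cluster, where CM is the right tool:
>
>   # stop the HDFS daemons (NameNode, DataNodes, SecondaryNameNode)
>   $HADOOP_HOME/sbin/stop-dfs.sh
>   # and bring them back later with
>   $HADOOP_HOME/sbin/start-dfs.sh
> )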
>
> Thank you for the help!
>
>
>
> On Sat, Jan 17, 2015 at 12:58 PM, Su She <suhsheka...@gmail.com> wrote:
>>
>> Thanks Akhil and Sean for the responses.
>>
>> I will try shutting down Spark, then storage, and then the instances.
>> Initially, when HDFS was in safe mode, I waited for more than an hour and
>> the problem still persisted. I will try this new method.
>>
>> Thanks!
>>
>>
>>
>> On Sat, Jan 17, 2015 at 2:03 AM, Sean Owen <so...@cloudera.com> wrote:
>>>
>>> You would not want to turn off storage underneath Spark. Shut down
>>> Spark first, then storage, then shut down the instances. Reverse the
>>> order when restarting.
>>>
>>> HDFS will be in safe mode for a short time after being started before
>>> it becomes writeable. I would first check that it's not just that.
>>> Otherwise, find out why the cluster went into safe mode from the logs,
>>> fix it, and then leave safe mode.
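>>>
>>> (A quick way to check both things; the NameNode log path is an
>>> assumption and varies by install:
>>>
>>>   # report whether the NameNode is currently in safe mode
>>>   hdfs dfsadmin -safemode get
>>>   # look for the reason it entered safe mode
>>>   grep -i "safe mode" /var/log/hadoop-hdfs/hadoop-hdfs-namenode-*.log
>>> )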
>>>
>>> On Sat, Jan 17, 2015 at 9:03 AM, Akhil Das <ak...@sigmoidanalytics.com>
>>> wrote:
>>> > The safest way would be to first shut down HDFS and then shut down
>>> > Spark (calling stop-all.sh would do), and then shut down the machines.
>>> >
>>> > You can execute the following command to disable safe mode:
>>> >
>>> >> hdfs dfsadmin -safemode leave
>>> >
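>>> > (Or, to block until the NameNode exits safe mode on its own rather
>>> > than forcing it off:
>>> >
>>> >> hdfs dfsadmin -safemode wait
>>> >
>>> > Forcing safe mode off without fixing the underlying cause can hide
>>> > missing-block problems.)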
>>> >
>>> >
>>> > Thanks
>>> > Best Regards
>>> >
>>> > On Sat, Jan 17, 2015 at 8:31 AM, Su She <suhsheka...@gmail.com> wrote:
>>> >>
>>> >> Hello Everyone,
>>> >>
>>> >> I am encountering trouble running Spark applications after I shut down
>>> >> and restart my EC2 instances. Everything else seems to work except
>>> >> Spark. When I try running a simple Spark application, like
>>> >> sc.parallelize(), I get the message that the HDFS NameNode is in safe
>>> >> mode.
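>>> >>
>>> >> (A minimal spark-shell session that reproduces this, since it is the
>>> >> write to HDFS that trips safe mode; the output path is an assumption,
>>> >> and the save fails with SafeModeException while the NameNode is in
>>> >> safe mode:
>>> >>
>>> >>   # from a gateway host
>>> >>   spark-shell
>>> >>   scala> sc.parallelize(1 to 100).saveAsTextFile("hdfs:///tmp/safemode-test")
>>> >> )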
>>> >>
>>> >> Has anyone else had this issue? Is there a proper protocol I should be
>>> >> following to turn off my Spark nodes?
>>> >>
>>> >> Thank you!
>>> >>
>>> >>
>>> >
>>
>>
>
