Thanks Akhil!
1) I had to do sudo -u hdfs hdfs dfsadmin -safemode leave
a) I had created a user called hdfs with superuser privileges in Hue, hence
the double hdfs.
2) Lastly, I know this is getting a bit off topic, but this is my /etc/hosts
file:
127.0.0.1 localhost.localdomain loca
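For reference, a typical layout for that file on a single EC2 node would look
roughly like the sketch below; the second entry is a placeholder, and the real
values depend on the instance's internal IP and fully qualified hostname:

# inspect the hosts file
cat /etc/hosts
# expected shape (placeholder IP and hostnames):
# 127.0.0.1     localhost.localdomain localhost
# 172.31.0.10   ip-172-31-0-10.ec2.internal ip-172-31-0-10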
Command would be:
hadoop dfsadmin -safemode leave
If you are not able to ping your instances, it can be because you are
blocking all ICMP requests. I'm not quite sure why you are not able to
ping google.com from your instances. Make sure the internal IP (ifconfig)
is proper in the f
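A rough way to check those things from the instance itself, assuming eth0 is
the primary interface (the name may differ):

# internal IP and the hostname the node reports
ifconfig eth0 | grep 'inet '
hostname -f
# the hostname should resolve to that same internal IP
ping -c 3 $(hostname -f)
# outbound check; on EC2 the security group must also allow ICMP
ping -c 3 google.com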
Hello Sean and Akhil,
I shut down the services in Cloudera Manager in the appropriate order and
then stopped all CM services. I then shut down my instances and turned them
back on, but I am getting the same error.
1) I tried hadoop fs -safemode leave and it said -
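Side note: safemode is a dfsadmin subcommand rather than an fs subcommand,
which is likely why that invocation was rejected. The working form, run as
the HDFS superuser (here the hdfs user mentioned above), would be:

sudo -u hdfs hdfs dfsadmin -safemode leave
# or, via the older wrapper:
sudo -u hdfs hadoop dfsadmin -safemode leave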
If you are using CDH, you would be shutting down services with
Cloudera Manager. I believe you can do it manually using Linux
'services' if you do the steps correctly across your whole cluster.
I'm not sure if the stock stop-all.sh script is supposed to work.
Certainly, if you are using CM, by far
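A hedged sketch of what the manual route can look like on a CM-managed
cluster, where the Cloudera Manager server and agents run as ordinary Linux
services (stop the Spark and HDFS roles from the CM UI first; whether per-role
init scripts also exist depends on how CDH was installed):

sudo service cloudera-scm-agent stop    # on every cluster node
sudo service cloudera-scm-server stop   # on the Cloudera Manager host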
Hello Sean & Akhil,
I tried running the stop-all.sh script on my master and I got this message:
localhost: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
chown: changing ownership of
`/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/logs':
Operation not permitted
no org.apa
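The 'Permission denied (publickey...)' line usually means the user running
stop-all.sh cannot SSH to localhost without a password, and the chown failure
suggests the script is not running as the user that owns the Spark
directories. A sketch of one way to fix the SSH part, assuming it is
appropriate to run the script as the current user:

# set up passwordless SSH to localhost for the current user
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
ssh localhost true    # should now succeed without a password prompt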
Thanks Akhil and Sean for the responses.
I will try shutting down spark, then storage and then the instances.
Initially, when HDFS was in safe mode, I waited for >1 hour and the problem
still persisted. I will try this new method.
Thanks!
On Sat, Jan 17, 2015 at 2:03 AM, Sean Owen wrote:
You would not want to turn off storage underneath Spark. Shut down
Spark first, then storage, then shut down the instances. Reverse the
order when restarting.
HDFS will be in safe mode for a short time after being started before
it becomes writeable. I would first check that it's not just that.
Ot
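A quick way to check that, assuming the hdfs superuser used elsewhere in this
thread:

# report whether the NameNode is currently in safe mode
sudo -u hdfs hdfs dfsadmin -safemode get
# block until the NameNode leaves safe mode on its own
sudo -u hdfs hdfs dfsadmin -safemode wait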
The safest way would be to first shut down HDFS and then shut down Spark
(calling stop-all.sh would do) and then shut down the machines.
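A sketch of the Spark half of that, assuming the standard Spark layout where
stop-all.sh sits under sbin inside the parcel's Spark directory (the parcel
path is the one that appears in the chown error elsewhere in this thread; run
it as a user with permission on those directories):

/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/sbin/stop-all.sh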
You can execute the following command to disable safe mode:
*hadoop fs -safemode leave*
Thanks
Best Regards
On Sat, Jan 17, 2015 at 8:31 AM, Su She wrote:
Hello Everyone,
I am encountering trouble running Spark applications when I shut down my
EC2 instances. Everything else seems to work except Spark. When I try
running a simple Spark application, like sc.parallelize(), I get the message
that the HDFS NameNode is in safe mode.
Has anyone else had this issue?
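One way to see whether the NameNode is simply still waiting for DataNodes to
report their blocks after the restart (a sketch, assuming the hdfs superuser
mentioned above):

# live DataNodes and missing / under-replicated block counts
sudo -u hdfs hdfs dfsadmin -report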