Hey all,
OK, thanks for your advice on setting up a Hadoop test environment to get
started learning how to use Hadoop! I'm very excited to be able to take
this plunge!
Although rather than using BigTop or Cloudera, I just decided to go for a
straight Apache Hadoop install. I set up 3
Can you check the slave logs to find out what is happening there? E.g.,
/home/hadoop/logs/hadoop-hadoop-datanode-hadoop3.log and
/home/hadoop/logs/yarn-hadoop-nodemanager-hadoop-hadoop3.log.
+Vinod
On Sun, Nov 23, 2014 at 10:24 AM, Tim Dunphy bluethu...@gmail.com wrote:
Hi
There is not enough information to identify the exact problem.
Check the last 50-60 lines of the datanode logs first:
tail -fn 60 /home/hadoop/logs/hadoop-hadoop-datanode-hadoop2.log
tail -fn 60 /home/hadoop/logs/hadoop-hadoop-datanode-hadoop3.log
You may find a lot of useful information there.
Hey guys,
A few things:
- Make sure you are using the internal IPs in AWS.
- As you are using Hadoop 2.x, DNS resolution is mandatory. Make sure
forward and reverse lookups work for the nodes; otherwise the namenode
will not let the datanodes join.
- Check the logs on the namenode to see whether the datanodes
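As a quick sanity check for that second point, here is a minimal sketch of a forward-and-reverse lookup test (it assumes `getent` and `awk` are available, which is typical on Linux; `hadoop2`/`hadoop3` are the hostnames from this thread):

```shell
#!/bin/sh
# Check that a hostname resolves (forward) and that the resulting
# address resolves back to a name (reverse). Run for every node.
check_dns() {
    name="$1"
    # First address returned by the forward lookup.
    addr=$(getent hosts "$name" | awk '{print $1; exit}')
    if [ -z "$addr" ]; then
        echo "FAIL: forward lookup for $name"
        return 1
    fi
    # Name returned by looking the address back up.
    back=$(getent hosts "$addr" | awk '{print $2; exit}')
    echo "$name -> $addr -> ${back:-<no reverse>}"
}

# Demonstrated against localhost; on the cluster you would run e.g.
#   check_dns hadoop2
#   check_dns hadoop3
check_dns localhost
```

If the reverse step prints `<no reverse>` for a node, fix /etc/hosts (or your DNS zone) before expecting the datanodes to register.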
I just installed vanilla Hadoop 2.4 on AWS. These are the steps I followed.
1. Changed the hostnames
2. Made sure the machines could find one another by pinging; by default
they could not find each other.
3. Made sure ssh worked
4. Configured all site xmls and slaves files
5. Started
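For step 4, a minimal sketch of what the site configuration comes down to (hostnames `hadoop1`-`hadoop3` and port 9000 are assumptions here, not taken from the thread; `hadoop1` stands in for the master):

```xml
<!-- etc/hadoop/core-site.xml, identical on all nodes -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop1:9000</value>
  </property>
</configuration>
```

The etc/hadoop/slaves file on the master then simply lists the worker hostnames (hadoop2 and hadoop3 in this sketch), one per line; hdfs-site.xml and yarn-site.xml get the same treatment for their respective daemons.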
On Mon, Nov
I had this problem on a small cluster running Debian. In my case the
issue was caused by a reference to 0.0.0.0 as localhost in /etc/hosts; I
removed it, leaving just 127.0.0.1 as the localhost IP, and the problem
was solved. If it's not the same issue, it could be something similar about
address
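For reference, this is the shape of the /etc/hosts fix being described (the exact aliases on your system may differ):

```
# /etc/hosts
# 0.0.0.0      localhost    <- the offending line; remove it
127.0.0.1      localhost
```

With the 0.0.0.0 alias gone, daemons that look up "localhost" bind to the loopback address instead of the wildcard address.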
Hi Hamza Zafar,
I would like to let you know first that ApplicationMasterProtocol#
allocate() is not only for requesting containers: it also doubles as a
heartbeat to let the ResourceManager know that the ApplicationMaster is alive.
So basically your ApplicationMaster should keep
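The pattern being described, calling allocate() on a fixed schedule even when there is nothing to request, can be sketched like this. Note this is not the YARN API: heartbeat() below is a stand-in for the allocate() call, and the 100 ms interval is an illustrative choice, not a YARN default.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the periodic-heartbeat loop an ApplicationMaster runs.
// In a real AM the scheduled task would call
// ApplicationMasterProtocol#allocate() (possibly with an empty
// request) so the ResourceManager keeps considering the AM alive.
public class HeartbeatSketch {
    static final AtomicInteger beats = new AtomicInteger();

    // Stand-in for allocate(): here it only counts invocations.
    static void heartbeat() {
        beats.incrementAndGet();
    }

    public static void main(String[] args) throws Exception {
        ScheduledExecutorService ses =
                Executors.newSingleThreadScheduledExecutor();
        // Fire the heartbeat at a fixed interval, independent of
        // whether new containers are actually needed.
        ses.scheduleAtFixedRate(HeartbeatSketch::heartbeat,
                0, 100, TimeUnit.MILLISECONDS);
        Thread.sleep(550);
        ses.shutdown();
        System.out.println("heartbeats sent: " + beats.get());
    }
}
```

In practice the helper libraries (e.g. AMRMClientAsync) run this loop for you, which is usually preferable to hand-rolling it.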