Hi Sisu Xi,

On the master node, can you run
hdfs dfsadmin -report
and check that all the slave nodes are listed? Alternatively, open the NameNode web UI on the master and confirm that all the datanodes appear as slave nodes.
Also check the ResourceManager UI and confirm the slave nodes are listed there.
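For reference, the checks above could be run from the master roughly as follows (a sketch assuming Hadoop 2.x binaries on the PATH and the default web UI ports; these commands need a running cluster):

```shell
# Report HDFS status; every slave should appear in the
# "Datanodes available" section of the output
hdfs dfsadmin -report

# List the NodeManagers registered with the ResourceManager;
# every slave should show up in state RUNNING
yarn node -list

# Web UIs (Hadoop 2.x default ports, assumed unchanged):
#   NameNode:        http://master:50070
#   ResourceManager: http://master:8088
```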

Thanks,
Sam

From: Sisu Xi <xis...@gmail.com>
Reply-To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Date: Sunday, July 13, 2014 at 11:28 AM
To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Subject: hadoop multinode, only master node doing the work

Hi, all:

I am new to Hadoop. I followed the tutorial at
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/

and installed Hadoop 2.2.0 on two 4-core Ubuntu 12.04 machines.

I can start the pi example, but only the master node does any work (I checked top on each machine).
The two nodes seem to be configured correctly, because I can also start the program from the slave node, and still only the master node does the actual work.
I have tried different numbers of mappers for the pi program, and the result is the same.
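For context, the pi job was launched roughly like this (a sketch; the examples-jar path is an assumption based on a stock Hadoop 2.2.0 layout under $HADOOP_HOME, and the mapper/sample counts are illustrative):

```shell
# Run the pi estimator with 16 mappers and 1000 samples per map.
# Jar path assumes a default Hadoop 2.2.0 installation.
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 16 1000
```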

Is there anything else I can check?

At the end are my configuration files (identical on each host).

Thanks very much!

Sisu

---------yarn-site.xml-------

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>


<property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>


<property>
  <name>yarn.resourcemanager.address</name>
  <value>master:8032</value>
</property>

---------------hdfs-site.xml--------------------

<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>


<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/xisisu/mydata/hdfs/namenode</value>
</property>


<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/xisisu/mydata/hdfs/datanode</value>
</property>


-------------core-site.xml-------------

<property>
  <name>fs.default.name</name>
  <value>hdfs://master:9000</value>
</property>

------------------mapred-site.xml-----------------

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

<property>
  <name>mapred.job.tracker</name>
  <value>master:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>




--

Sisu Xi, PhD Candidate

http://www.cse.wustl.edu/~xis/
Department of Computer Science and Engineering
Campus Box 1045
Washington University in St. Louis
One Brookings Drive
St. Louis, MO 63130
