Joey,
I just tried it and it worked great. I configured the entire cluster (added
a couple more DataNodes) and I was able to run a simple map/reduce job.
Thanks for your help!
Pony
On Tue, May 31, 2011 at 6:26 PM, gordoslocos <gordoslo...@gmail.com> wrote:
:D i'll give that a try 1st thing in the morning! Thanks a lot joey!!
Hi Guys,
I recently configured my cluster to have 2 VMs. I configured 1
machine (slave3) to be the namenode and another to be the
jobtracker (slave2). They both work as datanode/tasktracker as well.
Both configs have the following contents in their masters and slaves file:
slave2
slave3
This seems to be your problem, really...
<name>mapred.job.tracker</name>
<value>slave2:9001</value>
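For readers following along: the property Konstantin quotes normally lives in mapred-site.xml (or hadoop-site.xml on older releases) on every node. A minimal sketch, assuming the JobTracker is meant to run on slave2:

```xml
<!-- conf/mapred-site.xml — the same on every node in the cluster -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>slave2:9001</value>
  </property>
</configuration>
```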
On Tue, May 31, 2011 at 6:07 PM, Juan P. wrote:
Hi Guys,
I recently configured my cluster to have 2 VMs. I configured 1
machine (slave3) to be the namenode and another to be the
jobtracker
The problem is that start-all.sh isn't all that intelligent. The way
that start-all.sh works is by running start-dfs.sh and
start-mapred.sh. The start-mapred.sh script always starts a job
tracker on the local host and a task tracker on all of the hosts
listed in slaves (it uses SSH to do the
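The upshot of Joey's explanation, as a hedged sketch (hostnames from the thread; a stock Hadoop bin/ layout is assumed):

```shell
# On slave3 (the NameNode): start HDFS only —
# start-dfs.sh launches the NameNode locally and DataNodes
# on every host listed in conf/slaves.
bin/start-dfs.sh

# On slave2 (the intended JobTracker): start MapReduce —
# start-mapred.sh always launches the JobTracker on the host
# it is run from, and TaskTrackers on every host in conf/slaves.
bin/start-mapred.sh
```

In short, skip start-all.sh and run the two scripts on the hosts where each master daemon should live.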
Eeeeh why? Isn't that the config for the jobtracker? Slave2 has been defined
in my /etc/hosts files.
Should those lines not be in both nodes?
Thanks for helping!
Pony
On 31/05/2011, at 18:12, Konstantin Boudnik <c...@apache.org> wrote:
This seems to be your problem, really...
<name>mapred.job.tracker</name>
<value>slave2:9001</value>
:D i'll give that a try 1st thing in the morning! Thanks a lot joey!!
Sent from my iPhone
On 31/05/2011, at 18:18, Joey Echeverria <j...@cloudera.com> wrote:
The problem is that start-all.sh isn't all that intelligent. The way
that start-all.sh works is by running start-dfs.sh and
On Tue, May 31, 2011 at 6:21 PM, gordoslocos wrote:
Eeeeh why? Isn't that the config for the jobtracker? Slave2 has been
defined in my /etc/hosts files.
Should those lines not be in both nodes?
Indeed, but you are running the MR start script on slave3, meaning that the JT will be
started on slave3.