Hello Mates:

Thanks to everyone for their help so far. I have learnt a lot and have now got 
single-node and pseudo-distributed mode working. I now have a Hadoop cluster, 
but when I ran jps on the master node and a slave node, not all processes were 
started:

master:
22160 NameNode
22716 Jps
22458 JobTracker

slave:
32195 Jps

I also checked the logs and I see files for all the datanodes, the jobtracker, 
the namenode, the secondarynamenode, and the tasktrackers, except that one 
slave node's tasktracker log is missing. The namenode formatted correctly. I 
set the values below, so I'm not sure if I need more. My cluster is 11 nodes 
(1 master, 10 slaves). I do not have permission to access root, only my own 
directory, so Hadoop is installed in there. I can ssh to the slaves properly.
        * fs.default.name, dfs.name.dir, dfs.data.dir, mapred.job.tracker, 
mapred.system.dir
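In Hadoop 0.20.x these properties are normally split across the conf/*.xml 
files roughly as below; this is only a sketch, and the host name "master", the 
ports 9000/9001, and the directory values are placeholders, not values taken 
from my setup:

```xml
<!-- conf/core-site.xml: host name and port are placeholders -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>

<!-- conf/hdfs-site.xml: local paths are placeholders -->
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/my-user/hadoop-0.20.2_cluster/dfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/my-user/hadoop-0.20.2_cluster/dfs/data</value>
  </property>
</configuration>

<!-- conf/mapred-site.xml: mapred.system.dir is a path in HDFS -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>/hadoop/mapred/system</value>
  </property>
</configuration>
```

Whatever the actual values are, they have to be identical on every node, since 
the conf directory is read locally on each machine.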


It also gave errors:
        * it cannot find the hadoop-daemon.sh file, even though I can see it

/home/my-user/hadoop-0.20.2_cluster/bin/hadoop-daemon.sh: line 40: cd: 
/home/my-user/hadoop-0.20.2_cluster/bin: No such file or directory

        * it has the wrong path for hadoop-config.sh, so which parameter 
sets this path?

/home/my-user/hadoop-0.20.2_cluster/bin/hadoop-daemon.sh: line 42: 
/home/my-user/hadoop-0.20.2_cluster/hadoop-config.sh: No such file or directory
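A minimal sketch of the kind of check that narrows this down, on the assumption 
that the usual cause of these "No such file or directory" errors is the Hadoop 
tree missing (or sitting at a different absolute path) on the node printing 
them; check_install is a hypothetical helper and the slave host name in the 
comment is a placeholder:

```shell
#!/bin/sh
# Sketch: hadoop-daemon.sh cd's into its own bin directory and sources
# hadoop-config.sh from there, so if that directory does not exist on a
# node, both errors above appear. check_install is a hypothetical helper
# that reports whether a directory exists.
check_install() {
  if [ -d "$1" ]; then
    echo "ok: $1"
  else
    echo "missing: $1"
  fi
}

# Demo with a temporary directory standing in for the install dir:
demo=$(mktemp -d)
check_install "$demo"        # the directory itself exists
check_install "$demo/bin"    # bin/ was never created, so it is reported missing
rmdir "$demo"

# On the cluster, the same test can be run over ssh (slave1 is a placeholder):
#   ssh slave1 '[ -d /home/my-user/hadoop-0.20.2_cluster/bin ] || echo missing'
```

If the check fails on a slave, copying the whole hadoop-0.20.2_cluster tree to 
the same absolute path on that node should make both errors go away.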

        * it cannot create the log directory on the same slave node that is 
missing its tasktracker log; which parameter sets the log directory?
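For what it's worth, in Hadoop 0.20.x the log location is normally controlled 
by the HADOOP_LOG_DIR environment variable in conf/hadoop-env.sh; the path 
below is a placeholder based on my install directory, not a value I have 
verified:

```shell
# conf/hadoop-env.sh -- the path is a placeholder; whatever value is used
# must exist and be writable by the Hadoop user on every node.
export HADOOP_LOG_DIR=/home/my-user/hadoop-0.20.2_cluster/logs
```

Since I cannot write outside my home directory, pointing this somewhere under 
my own directory seems like the safe choice.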

The same slave node that is giving problems also prints:
 Usage: hadoop-daemon.sh [--config <conf-dir>] [--hosts hostlistfile] 
(start|stop) <hadoop-command> <args...>


Thanks for your help.

Cheers,
Tamara
