I've been staring at some shell scripts, trying to work out why logs end up in a different place when the Hadoop daemons are started from the init.d scripts than when they are run from the command line.
That is, why don't the settings in /etc/hadoop/conf get picked up? The answer is, of course, that the default values override the customisations, because the defaults file gets read a second time:

    . /etc/default/hadoop
    # ...
    . /usr/lib/hadoop/bin/hadoop-config.sh
    # FIXME: this needs to be removed once hadoop-config.sh stop clobbering HADOOP_HOME
    . /etc/default/hadoop

We've got two different config systems fighting here. I propose:

1. The core Hadoop conf scripts are set up to honour a value that is already defined rather than override it (remember, per-installation confs can still change the defaults) -- see the first sketch below.
2. The init.d daemons read the values in only once -- see the second sketch below.
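By "honour a predefined value" I mean the usual shell idiom of only assigning when the variable isn't already set. A minimal sketch, assuming the familiar HADOOP_LOG_DIR and HADOOP_PID_DIR settings (the real conf scripts set far more than these):

    # hadoop-env.sh style: keep whatever the caller already exported,
    # fall back to the packaged default only if the variable is unset or empty
    export HADOOP_LOG_DIR="${HADOOP_LOG_DIR:-/var/log/hadoop}"
    export HADOOP_PID_DIR="${HADOOP_PID_DIR:-/var/run/hadoop}"

With that pattern, it no longer matters how many times the file is sourced; the first value set wins.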
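And a sketch of reading the defaults in only once, using a guard variable; HADOOP_DEFAULTS_SOURCED is my own invented name here, not something the current packaging defines:

    # init.d style: source /etc/default/hadoop on the first pass only
    if [ -z "${HADOOP_DEFAULTS_SOURCED:-}" ]; then
      [ -r /etc/default/hadoop ] && . /etc/default/hadoop
      HADOOP_DEFAULTS_SOURCED=1
      export HADOOP_DEFAULTS_SOURCED
    fi

Either change on its own would stop the clobbering; doing both makes the scripts robust whichever path starts the daemons.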
