I would recommend installing the Hadoop RPMs and avoiding the start-all scripts altogether. The RPMs ship with init scripts, so you can start and stop the daemons with /sbin/service (or with a configuration management tool, which I assume you'll be using as your cluster grows).
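For example, something like this on each node (a sketch only; the exact service names depend on the package version, so check what the RPMs drop into /etc/init.d -- "hadoop-datanode" and "hadoop-tasktracker" here are assumptions):

  sudo /sbin/service hadoop-datanode start     # start the DataNode
  sudo /sbin/service hadoop-tasktracker start  # start the TaskTracker
  sudo /sbin/service hadoop-datanode stop      # stop it the same way

Since these are ordinary init scripts, the daemons can also be set to come up on boot with chkconfig.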
Here's more info on the RPMs:

<http://www.cloudera.com/hadoop>

The start-all scripts are an easy way to get a small cluster started and stopped, but they become more annoying as your cluster grows (you have to distribute authorized_keys files, iteratively start each daemon, etc.). If you want to stick with the tarball, you can run bin/hadoop-daemon.sh on each node instead, though the only thing this buys you is avoiding having to ship your public key around for the "hadoop" user:

  bin/hadoop-daemon.sh start datanode
  bin/hadoop-daemon.sh start tasktracker
  bin/hadoop-daemon.sh start etc

(If you'd rather keep the start-all scripts and just get rid of the password prompts, there's a sketch of the usual ssh key setup below the quoted message.)

Hope this helps.

Alex

On Tue, Apr 21, 2009 at 8:56 PM, Yabo-Arber Xu <arber.resea...@gmail.com> wrote:

> Hi there,
>
> I setup a small cluster for testing. When I start my cluster on my master
> node, I have to type the password for starting each datanode and
> tasktracker. That's pretty annoying and may be hard to handle when the
> cluster grows. Any graceful way to handle this?
>
> Best,
> Arber
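As promised above, here's the passwordless-ssh sketch for the start-all route. It assumes the daemons run as a "hadoop" user and uses placeholder hostnames (node1, node2):

  # on the master, as the hadoop user, generate a key with no passphrase
  ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa

  # copy the public key to each slave; ssh-copy-id appends it to
  # ~/.ssh/authorized_keys on the remote side (node1, node2 are placeholders)
  ssh-copy-id hadoop@node1
  ssh-copy-id hadoop@node2

  # after that, the start-all script should log in without prompting
  bin/start-all.sh

This is exactly the key distribution step that gets tedious as the cluster grows, which is why I'd lean toward the RPMs.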