machine if you want to start all the daemons. This will start and
add the new datanodes to the up-and-running cluster.
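As a sketch of the per-node alternative (assuming a 0.18-era layout, and that conf/hadoop-site.xml on the new node already points at the namenode and jobtracker; the hostname below is hypothetical):

```shell
# On the new node: start the HDFS and MapReduce worker daemons directly.
# The datanode registers itself with the namenode named in the config,
# and the tasktracker registers with the jobtracker the same way.
$HADOOP_HOME/bin/hadoop-daemon.sh start datanode
$HADOOP_HOME/bin/hadoop-daemon.sh start tasktracker

# Also add the new host to conf/slaves on the master, so the
# start-all.sh/stop-all.sh scripts manage it from now on.
echo newnode.example.com >> $HADOOP_HOME/conf/slaves
```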
Hope this info helps.
Ski Gh3 wrote:
Hi,
I am wondering how to add more datanodes to an up-and-running Hadoop
instance. I couldn't find instructions on this from
is using the hadoop jar
command line for running the program. However, I don't understand why this
won't work, since I am using the JobClient interface for interacting with
Hadoop...
I would really appreciate it if anybody could share some experience on this!
Thank you!
On Mon, Oct 6, 2008 at 10:48 AM, Ski
Hi all,
I have a weird problem regarding running the wordcount example from Eclipse.
I was able to run the wordcount example from the command line like:
$ ...MyHadoop/bin/hadoop jar ../MyHadoop/hadoop-xx-examples.jar wordcount
myinputdir myoutputdir
However, if I try to run the wordcount
Hi all,
I have a possibly naive question on providing input to a MapReduce program:
how can I specify the input with respect to the HDFS path?
Right now I can specify an input file from my local directory, say, the hadoop
trunk.
I can also specify an absolute path for a DFS file using where it is
job, but again you want to point your second job's input at the
directory that the first job wrote its output to.
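To make the relative-vs-absolute distinction concrete, here is a small self-contained sketch that mimics, in plain Java rather than the real org.apache.hadoop.fs.Path/FileSystem classes, how HDFS resolves a path string: relative paths land under the user's home directory /user/<username>, while absolute paths and full hdfs:// URIs are taken as-is. The names below are illustrative only.

```java
public class HdfsPathSketch {
    // Simplified mimic of HDFS path resolution; the real logic lives in
    // org.apache.hadoop.fs.Path / FileSystem. Illustrative only.
    static String resolve(String path, String user) {
        if (path.startsWith("hdfs://") || path.startsWith("/")) {
            return path;                        // absolute path or full URI
        }
        return "/user/" + user + "/" + path;    // relative -> HDFS home dir
    }

    public static void main(String[] args) {
        System.out.println(resolve("myinputdir", "alice"));
        System.out.println(resolve("/data/wordcount/in", "alice"));
        System.out.println(resolve("hdfs://nn:9000/data/in", "alice"));
    }
}
```

So for chained jobs, the simplest approach is to pass the same string (relative or absolute, as long as it is the same form) as the first job's output path and the second job's input path.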
Hope this helps.
Alex
On Fri, Oct 3, 2008 at 11:15 AM, Ski Gh3 [EMAIL PROTECTED] wrote:
Hi all,
I have a possibly naive question on providing input to a MapReduce program:
how can
Hi all,
I'm trying to set up a small cluster with 3 machines. I'd like to have one
machine serve as the namenode and the jobtracker, while all 3 serve as
datanodes and tasktrackers.
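For reference, one way to express that topology in the standard config files (hostnames here are hypothetical; in Hadoop of this era, conf/masters lists the secondary-namenode host, the namenode itself is simply the machine you run start-dfs.sh on, and conf/slaves lists the datanode/tasktracker hosts):

```
# conf/masters on the master machine (secondary namenode host):
master.example.com

# conf/slaves on the master machine (datanodes/tasktrackers, all three):
master.example.com
node1.example.com
node2.example.com
```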
After following the setup instructions, I got an exception running
$HADOOP_HOME/bin/start-dfs.sh: