This is the error message in the TaskTracker log (does anyone have any ideas?):
2009-05-31 09:49:16,165 ERROR org.apache.hadoop.mapred.TaskTracker: Caught
exception: java.io.IOException: Call to localhost/127.0.0.1:9001 failed on
local exception: An existing connection was forcibly closed by the remote
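This error means the TaskTracker could not keep its RPC connection to the JobTracker at localhost:9001. Before digging through the logs, it may help to confirm that anything is actually listening on that port; a minimal sketch (the host and port are taken from the error message above, adjust them to your mapred.job.tracker setting):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 9001 comes from the error message above; if this prints False, the
# JobTracker is not up (or is bound to a different interface).
print(port_open("127.0.0.1", 9001))
```

If the port is closed, check whether the JobTracker process is running at all (e.g. with `jps`) and look at its own log for the reason it died.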
You should read the logs to find out what happened.
On Sun, May 31, 2009 at 9:48 AM, zhang jianfeng wrote:
> I also find that the tasktracker log keeps growing; the TaskTracker seems
> to work, but the log will exhaust my disk space.
>
>
>
> On Sun, May 31, 2009 at 9:45 AM, zhang jianfeng wrote:
>
>> Hi a
I also find that the tasktracker log keeps growing; the TaskTracker seems
to work, but the log will exhaust my disk space.
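A TaskTracker that cannot reach the JobTracker will retry and log the failure in a loop, which is one way the log grows without bound. Besides fixing the underlying connection problem, the log itself can be capped by switching conf/log4j.properties from the daily appender to a size-limited RollingFileAppender; a sketch (the 10MB / 10-file limits are example values, not defaults):

```properties
# Sketch: cap daemon log growth with a size-based appender.
# Replace the DRFA section of conf/log4j.properties with something like:
hadoop.root.logger=INFO,RFA
log4j.rootLogger=${hadoop.root.logger}
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.RFA.MaxFileSize=10MB
log4j.appender.RFA.MaxBackupIndex=10
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```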
On Sun, May 31, 2009 at 9:45 AM, zhang jianfeng wrote:
> Hi all,
>
> I follow the tutorial of Hadoop and run it in local pseudo-distributed
> mode. But every time I run
> bin/had
Hi all,
I follow the tutorial of Hadoop and run it in local pseudo-distributed
mode. But every time I run
bin/hadoop jar hadoop-0.19.0-examples.jar grep input output 'dfs[a-z.]+',
the job always stays pending, and I don't know the reason.
PS: my platform is Windows XP; I run it under Cygwin.
Tha
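A job that stays pending in pseudo-distributed mode is usually a setup problem: either the daemons are not all running, or the JobTracker/NameNode addresses are not configured, so nothing ever schedules the tasks. For Hadoop 0.19 the tutorial's pseudo-distributed settings go in conf/hadoop-site.xml; a sketch with the conventional ports (adjust if yours differ):

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```

On Cygwin, also make sure `ssh localhost` works without a password (sshd running), since bin/start-all.sh launches the daemons over ssh.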
Hi all,
I am just getting started with Hadoop 0.20 and am trying to run a job in
pseudo-distributed mode.
I configured Hadoop according to the tutorial, but it does not seem to
work as expected.
My map/reduce tasks run sequentially, and the output is
stored on the local filesystem instead of
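Sequential tasks plus output on the local filesystem are the symptoms of the LocalJobRunner, i.e. mapred.job.tracker is still at its default of "local" and fs.default.name at the local file system. In 0.20 the pseudo-distributed settings are split across two files; a sketch with the conventional ports (adjust as needed):

```xml
<!-- conf/core-site.xml -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```

```xml
<!-- conf/mapred-site.xml -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
```

After editing these, format the NameNode and restart the daemons, then resubmit the job.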
Hi all,
I am running hadoop-0.19.1 and ran into a strange problem
these days. A few days ago, Hadoop ran smoothly,
and three nodes were running the TaskTracker and DataNode
daemons. However, one of the nodes cannot start the DataNode
after I moved them to another place.
I have checked the network a