Hi, all,
I am using hadoop-0.18.0-core.jar and nutch-2008-08-18_04-01-55.jar,
and running Hadoop with one namenode and 4 slaves.
Attached is my hadoop-site.xml; I didn't change hadoop-default.xml.
When the data in the segments is large, this kind of error occurs:
java.io.IOException: Could not o
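For reference, hadoop-default.xml is only the baseline: any property set in hadoop-site.xml overrides it key by key, so it can be worth printing the values the job actually sees. Below is a minimal sketch assuming the Hadoop 0.18 Configuration API; the keys are illustrative examples, not settings taken from the attached file.

import org.apache.hadoop.conf.Configuration;

// A plain "new Configuration()" loads hadoop-default.xml first and then
// applies hadoop-site.xml on top, so printing a key shows the effective
// value the job will actually run with.
public class ShowConf {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Illustrative keys only -- substitute whatever your hadoop-site.xml sets.
    String[] keys = {"fs.default.name", "dfs.replication", "mapred.task.timeout"};
    for (String key : keys) {
      System.out.println(key + " = " + conf.get(key));
    }
  }
}

Compile it against hadoop-0.18.0-core.jar and run it with the conf/ directory on the classpath so both XML files are picked up.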
Hi, all,
I get the problem below every time I run a reduce job.
10.254.106.48:50010:DataXceiver: java.io.IOException: Block
blk_-7951711472001460544 is valid, and cannot be written to.
I am running map-reduce on *4* slave nodes, using *hadoop-0.16.4-core.jar*.
I was wondering whether this error is rela
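Hard to say without the rest of the question, but for what it's worth: this DataXceiver message is thrown when a datanode is asked to write a block it has already finalized, i.e. the same block is being written twice. One guess at a cause worth ruling out is a duplicate (speculative) task attempt; the sketch below turns speculative execution off as an experiment, assuming the 0.16 JobConf API. It is a diagnostic step, not a confirmed fix.

import org.apache.hadoop.mapred.JobConf;

// Experiment only: rule out duplicate (speculative) task attempts as the
// source of the second write. Equivalent to setting
// mapred.speculative.execution to false in hadoop-site.xml.
public class DisableSpeculation {
  public static void apply(JobConf conf) {
    conf.setSpeculativeExecution(false);
  }
}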
Hi, all,
I always get this kind of error when running a map job.
Task task_200807130149_0067_m_00_0 failed to report status for 604 seconds.
Killing!
I am using hadoop-0.16.4-core.jar, with one namenode and one datanode.
What does this error message suggest? Does it mean the functions in my mapper are too slow?
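That message usually means the task ran for longer than mapred.task.timeout (600 seconds by default, hence the 604 in the log) without telling the framework it was alive. A slow mapper is fine as long as it reports progress. Below is a minimal sketch assuming the 0.16 mapred API; SlowMapper and expensiveWork are illustrative names, not taken from the job above.

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class SlowMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, LongWritable> {

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, LongWritable> output,
                  Reporter reporter) throws IOException {
    for (String token : value.toString().split("\\s+")) {
      expensiveWork(token);   // stand-in for the slow per-record step
      reporter.progress();    // resets the task's liveness timer
    }
    output.collect(value, key);
  }

  private void expensiveWork(String token) {
    // placeholder for whatever takes a long time per record
  }
}

Alternatively, raising mapred.task.timeout (in milliseconds) in the job configuration buys more headroom, but calling reporter.progress() from long-running loops is the usual fix.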