Could not obtain block: blk_-2634319951074439134_1129 file=/user/root/crawl_debug/segments/20080825053518/content/part-00002/data

2008-08-27 Thread wangxu
Hi, all. I am using hadoop-0.18.0-core.jar and nutch-2008-08-18_04-01-55.jar, and running Hadoop with one namenode and 4 slaves. Attached is my hadoop-site.xml; I did not change hadoop-default.xml. When the data in the segments is large, this kind of error occurs: java.io.IOException: Could not obtain block: blk_-2634319951074439134_1129 file=/user/root/crawl_debug/segments/20080825053518/content/part-00002/data
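For what it's worth, on 0.18-era clusters "Could not obtain block" under heavy segment reads was commonly traced to DataNode connection limits and write timeouts. Below is a minimal hadoop-site.xml sketch of two properties often suggested on the lists for this symptom; the values are illustrative assumptions, not settings from the attached file, and the names should be checked against the hadoop-default.xml of the exact release:

    <!-- Illustrative hadoop-site.xml fragment; values are assumptions. -->
    <property>
      <name>dfs.datanode.max.xcievers</name>  <!-- Hadoop's historical spelling -->
      <value>1024</value>  <!-- raise the per-DataNode DataXceiver thread limit -->
    </property>
    <property>
      <name>dfs.datanode.socket.write.timeout</name>
      <value>0</value>  <!-- 0 disables the DataNode socket write timeout -->
    </property>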

hadoop reduce problem: Block blk_-2061672148590840392 is valid, and cannot be written to

2008-08-10 Thread wangxu
Hi, all. I get the problem below every time I run a reduce: 10.254.106.48:50010:DataXceiver: java.io.IOException: Block blk_-7951711472001460544 is valid, and cannot be written to. I am running map-reduce on *4* slave nodes, using *hadoop-0.16.4-core.jar*. I was wondering whether this error is related to ...
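As a hedged first diagnostic (not advice from the original thread), block health can be checked from the namenode with fsck; the path below is a placeholder for the job's output directory:

    hadoop fsck /user/root -files -blocks -locations

The DataNode log on 10.254.106.48 is the other place to look, since the DataXceiver message originates there.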

help,error "...failed to report status for xxx seconds..."

2008-07-31 Thread wangxu
Hi, all. I always get this kind of error when running a map job: Task task_200807130149_0067_m_00_0 failed to report status for 604 seconds. Killing! I am using hadoop-0.16.4-core.jar, one namenode, one datanode. What does this error message suggest? Does it mean the functions in my mapper are too slow?
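The 604 seconds lines up with the default task timeout of that era (mapred.task.timeout, 600000 ms by default): a task that neither emits a record nor reports progress for that long is killed. The usual fix is to heartbeat from inside long per-record work. A minimal sketch against the old mapred API, with a hypothetical class name and a placeholder for the slow step:

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    // Hypothetical mapper: calls reporter.progress() during slow work so
    // the TaskTracker does not kill it for failing to report status.
    public class HeartbeatMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, IntWritable> {

      public void map(LongWritable key, Text value,
                      OutputCollector<Text, IntWritable> output,
                      Reporter reporter) throws IOException {
        for (int i = 0; i < 10; i++) {
          doExpensiveStep(value);           // placeholder for the slow work
          reporter.progress();              // heartbeat: resets the timeout clock
          reporter.setStatus("step " + i);  // optional, visible in the web UI
        }
        output.collect(new Text(value.toString()), new IntWritable(1));
      }

      private void doExpensiveStep(Text value) {
        // stand-in for fetching/parsing that can exceed the timeout
      }
    }

If the work is legitimately slow, raising mapred.task.timeout in hadoop-site.xml is the configuration-side alternative.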