I'm looking for some help. I'm a Nutch user; everything was working fine, but
now I get the following error when indexing.
I have a single-node pseudo-distributed setup.
Some people on the Nutch list suggested to me that HDFS could be full, so I
removed many things, and HDFS is now far from full.
This file and directory were perfectly OK the day before.
I did a "hadoop fsck"... the report says healthy.
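(For reference, this is roughly what I ran; a namespace-wide fsck can report healthy while a specific file is still missing a block, so checking the failing file directly may be more telling. This is a sketch, assuming the path from the stack trace below and a running HDFS.)

```shell
# Whole-namespace check (what I did; reported healthy):
hadoop fsck /

# Check just the failing file, listing its blocks and the
# datanode locations for each block, to see if the block
# from the error is actually reported anywhere:
hadoop fsck /user/nutch/crawl/indexed-segments/20100111233601/part-00000/_103.frq \
    -files -blocks -locations
```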

What can I do ?

Is it safe to do a Linux fsck, just in case?

Caused by: java.io.IOException: Could not obtain block:
blk_8851198258748412820_9031
file=/user/nutch/crawl/indexed-segments/20100111233601/part-00000/_103.frq


-- 
-MilleBii-
