>> No, it's a RAID-5 local disk
>>
>> /dev/md0              1.5T  193G  1.2T  14% /array
>
> Ok, then I don't know what's causing it. There are two options,  
> really - either the file is really corrupted, or there is a subtle  
> bug in Hadoop somewhere. Since this is a local FS, you can try  
> removing the .crc file and see if it helps (Hadoop should rebuild  
> this file when it's needed).
>
> -- 

OK -- I will check the RAID overnight and run the crawl again on a
different drive. I can't just re-run the segment merge, because the
re-crawl script deletes all the segment directories whether or not
they were successfully merged.

Relatedly, is there a way for a script to know if bin/nutch  
(somecommand) completed successfully? I would like to avoid this  
issue in the future :)

-Brian




_______________________________________________
Nutch-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/nutch-general
