60111 103432 reduce > reduce
060111 103432 Optimizing index.
060111 103433 closing > reduce
060111 103434 closing > reduce
060111 103435 closing > reduce
java.lang.NullPointerException: value cannot be null
        at org.apache.lucene.document.Field.<init>(Field.java:469)
        at org.apache.lucene.document.Field.<init>(Field.java:412)
        at org.apache.lucene.document.Field.UnIndexed(Field.java:195)
        at org.apache.nutch.indexer.Indexer.reduce(Indexer.java:198)
        at org.apache.nutch.mapred.ReduceTask.run(ReduceTask.java:260)
        at org.apache.nutch.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:90)
Exception in thread "main" java.io.IOException: Job failed!
        at org.apache.nutch.mapred.JobClient.runJob(JobClient.java:308)
        at org.apache.nutch.indexer.Indexer.index(Indexer.java:259)
        at org.apache.nutch.crawl.Crawl.main(Crawl.java:121)
[EMAIL PROTECTED]:/data/nutch/trunk$


I pulled today's build and got the above error. It's not a resource issue: I'm not
running out of disk space or anything like that. This is a single instance
running on local file systems.

Is there any way to recover the crawl and finish the reduce job from
where it failed?
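For what it's worth, the trace shows a null field value reaching Lucene's Field constructor from Indexer.reduce, so one workaround while waiting for a fix may be to skip fields whose value is null instead of constructing the Field. The sketch below is hypothetical (the `Field` class here is a minimal stand-in for `org.apache.lucene.document.Field`, and `safeField` is an illustrative helper, not part of Nutch):

```java
// Hypothetical sketch of guarding against a null field value before it
// reaches Lucene's Field constructor, which throws NPE on null values.
public class NullGuardSketch {

    // Minimal stand-in for org.apache.lucene.document.Field: rejects null values.
    static class Field {
        final String name;
        final String value;

        Field(String name, String value) {
            if (value == null) {
                throw new NullPointerException("value cannot be null");
            }
            this.name = name;
            this.value = value;
        }
    }

    // Illustrative helper: return null (caller skips the field) rather than
    // letting the null value crash the whole reduce task.
    static Field safeField(String name, String value) {
        if (value == null) {
            return null;
        }
        return new Field(name, value);
    }

    public static void main(String[] args) {
        Field present = safeField("title", "hello");
        Field missing = safeField("title", null);
        System.out.println(present != null); // field built normally
        System.out.println(missing == null); // null value skipped, no NPE
    }
}
```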
