This is solved by adding the following lines to conf/hadoop-site.xml:
<property>
  <name>mapred.speculative.execution</name>
  <value>false</value>
  <description>If true, then multiple instances of some map and reduce tasks
  may be executed in parallel.</description>
</property>
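If you manage several Hadoop config files, the same change can be scripted. The sketch below (file path, helper name, and the starting file contents are my assumptions, not from the original thread) uses Python's standard xml.etree module to append a property element to a hadoop-site.xml:

```python
# Sketch: append a <property> entry to a Hadoop-style XML config file.
# The path "hadoop-site.xml" and helper add_property are hypothetical examples.
import xml.etree.ElementTree as ET

def add_property(conf_path, name, value, description=""):
    tree = ET.parse(conf_path)
    root = tree.getroot()  # expected to be the <configuration> element
    prop = ET.SubElement(root, "property")
    ET.SubElement(prop, "name").text = name
    ET.SubElement(prop, "value").text = value
    if description:
        ET.SubElement(prop, "description").text = description
    tree.write(conf_path)

# Example: start from a minimal config, then disable speculative execution.
with open("hadoop-site.xml", "w") as f:
    f.write("<configuration></configuration>")
add_property("hadoop-site.xml", "mapred.speculative.execution", "false")
```

Editing conf/hadoop-site.xml by hand, as above, works just as well; the script only helps when the same property must be pushed to many nodes.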
P.S. I should have checked Jira first...
> Hi
> While running a crawl with the latest trunk Nutch version I get this error
> from Hadoop:
> Exception in thread "main" java.io.IOException: Job failed!
> at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:399)
> at org.apache.nutch.indexer.Indexer.index(Indexer.java:304)
> at org.apache.nutch.crawl.Crawl.main(Crawl.java:130)
>
>
> Is this a bug, or a misconfiguration of my instance?
>
> Running on a single box, java-1.5.0.09
>
> Thanks.