What version of Hadoop are you using, and is your use of the local (non-cluster) job runner mode intentional?
On Wed, Jan 23, 2013 at 9:23 PM, 吴靖 <qhwj2...@126.com> wrote:
> hi, everyone!
> I want to use Nutch to crawl web pages, but a problem shows up in the
> log below. I think it may be a permissions problem, but I am not sure.
> Any help will be appreciated, thank you.
>
> 2013-01-23 07:37:21,809 ERROR mapred.FileOutputCommitter - Mkdirs failed
> to create
> file:/home/wj/apps/apache-nutch-1.6/bin/crawl/crawldb/190684692/_temporary
> 2013-01-23 07:37:24,836 WARN mapred.LocalJobRunner - job_local_0002
> java.io.IOException: The temporary job-output directory
> file:/home/wj/apps/apache-nutch-1.6/bin/crawl/crawldb/190684692/_temporary
> doesn't exist!
> at org.apache.hadoop.mapred.FileOutputCommitter.getWorkPath(FileOutputCommitter.java:250)
> at org.apache.hadoop.mapred.FileOutputFormat.getTaskOutputPath(FileOutputFormat.java:244)
> at org.apache.hadoop.mapred.MapFileOutputFormat.getRecordWriter(MapFileOutputFormat.java:46)
> at org.apache.hadoop.mapred.ReduceTask$OldTrackingRecordWriter.<init>(ReduceTask.java:448)
> at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:490)
> at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:420)
> at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:260)

--
Harsh J
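Since the poster suspects a permissions problem, one quick way to test that hypothesis is to try creating the failing `_temporary` directory as the same user who runs the crawl. This is a hedged sketch, not something from the thread: the `OUT` variable and the scratch default path are assumptions; on the affected machine it would be pointed at the crawldb path from the log instead.

```shell
# Sketch: check whether the current user can create the _temporary
# directory that the Hadoop log reports as failing. Point OUT at the
# real path from the log (adjust to your install); the default below
# is only a scratch location for illustration.
OUT="${OUT:-/tmp/nutch-mkdir-check/190684692}"

if mkdir -p "$OUT/_temporary"; then
    # mkdir succeeded: permissions on this path are fine
    echo "writable: $OUT"
else
    # mkdir failed: inspect ownership/permissions of the parent
    echo "cannot create: $OUT"
    ls -ld "$(dirname "$OUT")"
fi
```

If the `mkdir -p` fails here too, the fix is usually to run the crawl as the directory's owner or to `chown`/`chmod` the output tree; if it succeeds, the cause is likely elsewhere (e.g. the job cleaning up `_temporary` between tasks in local mode, which is why the Hadoop version and the local-runner question above matter).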