Can you check /etc/hosts on all the nodes to see that the master and slave entries are
correct? If you raise the log level to DEBUG, you will see where this is failing.
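
For example (the hostnames and IPs below are only placeholders for whatever your
cluster actually uses), every node should carry the same consistent entries, roughly:

    # /etc/hosts -- identical on the namenode and all three datanodes (example values only)
    127.0.0.1      localhost
    192.168.1.10   namenode
    192.168.1.11   datanode1
    192.168.1.12   datanode2
    192.168.1.13   datanode3

A common mistake is having the machine's real hostname mapped to 127.0.0.1, which can
make the daemons bind to the loopback interface so the other nodes cannot reach them.
To get DEBUG output, one way on 0.20 is to export HADOOP_ROOT_LOGGER before
resubmitting the job, and then also check the task logs on the datanodes:

    export HADOOP_ROOT_LOGGER=DEBUG,console
    hadoop jar hadoop-0.20.2-examples.jar wordcount /user/root/text.log /user/output1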

Thanks and Regards,
Sonal
Hadoop ETL and Data Integration <https://github.com/sonalgoyal/hiho>
Nube Technologies <http://www.nubetech.co>

<http://in.linkedin.com/in/sonalgoyal>

On Mon, Mar 14, 2011 at 9:41 AM, Yorgo Sun <yorgo...@gmail.com> wrote:

> Hi all
>
> I have a Hadoop cluster with a namenode and 3 datanodes, installed by the
> normal process. Everything seems fine, but it can't run the wordcount
> MapReduce job. The output logs follow:
>
>
> [hadoop@namenode hadoop-0.20.2]$ hadoop jar hadoop-0.20.2-examples.jar wordcount /user/root/text.log /user/output1
> 11/03/14 12:07:13 INFO input.FileInputFormat: Total input paths to process : 1
> 11/03/14 12:07:13 INFO mapred.JobClient: Running job: job_201103141205_0001
> 11/03/14 12:07:14 INFO mapred.JobClient:  map 0% reduce 0%
> 11/03/14 12:07:22 INFO mapred.JobClient:  map 100% reduce 0%
> 11/03/14 12:07:27 INFO mapred.JobClient: Task Id : attempt_201103141205_0001_r_000000_0, Status : FAILED
> Error: java.lang.NullPointerException
> at java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:796)
> at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.getMapCompletionEvents(ReduceTask.java:2683)
> at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.run(ReduceTask.java:2605)
>
> 11/03/14 12:07:33 INFO mapred.JobClient: Task Id : attempt_201103141205_0001_r_000000_1, Status : FAILED
> Error: java.lang.NullPointerException
> at java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:796)
> at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.getMapCompletionEvents(ReduceTask.java:2683)
> at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.run(ReduceTask.java:2605)
>
> 11/03/14 12:07:40 INFO mapred.JobClient: Task Id : attempt_201103141205_0001_r_000000_2, Status : FAILED
> Error: java.lang.NullPointerException
> at java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:796)
> at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.getMapCompletionEvents(ReduceTask.java:2683)
> at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.run(ReduceTask.java:2605)
>
> 11/03/14 12:07:49 INFO mapred.JobClient: Job complete: job_201103141205_0001
> 11/03/14 12:07:49 INFO mapred.JobClient: Counters: 12
> 11/03/14 12:07:49 INFO mapred.JobClient:   Job Counters
> 11/03/14 12:07:49 INFO mapred.JobClient:     Launched reduce tasks=4
> 11/03/14 12:07:49 INFO mapred.JobClient:     Launched map tasks=1
> 11/03/14 12:07:49 INFO mapred.JobClient:     Data-local map tasks=1
> 11/03/14 12:07:49 INFO mapred.JobClient:     Failed reduce tasks=1
> 11/03/14 12:07:49 INFO mapred.JobClient:   FileSystemCounters
> 11/03/14 12:07:49 INFO mapred.JobClient:     HDFS_BYTES_READ=1366
> 11/03/14 12:07:49 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=1868
> 11/03/14 12:07:49 INFO mapred.JobClient:   Map-Reduce Framework
> 11/03/14 12:07:49 INFO mapred.JobClient:     Combine output records=131
> 11/03/14 12:07:49 INFO mapred.JobClient:     Map input records=31
> 11/03/14 12:07:49 INFO mapred.JobClient:     Spilled Records=131
> 11/03/14 12:07:49 INFO mapred.JobClient:     Map output bytes=2055
> 11/03/14 12:07:49 INFO mapred.JobClient:     Combine input records=179
> 11/03/14 12:07:49 INFO mapred.JobClient:     Map output records=179
>
> Has anyone else run into this problem? Please help me. Thanks a lot.
>
> --
> 孙绍轩 Yorgo Sun
>
>
