I checked the earlier error and resolved it after looking at the logs, but I
still have a problem. Many of the suggested solutions point to the number of
entries in /etc/hosts, but I have not been able to confirm that yet, so I am
asking the mailing list.
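For reference, below is the kind of /etc/hosts layout those suggestions
describe for a two-node cluster: each node's real LAN IP mapped to its
hostname on both machines, and no 127.0.1.1 line for the machine's own
hostname. The IPs and the names master/slave are only placeholders for my
setup, not confirmed values:

192.168.1.10    master      # JobTracker/NameNode machine (example IP)
192.168.1.11    slave       # TaskTracker/DataNode machine (example IP)
127.0.0.1       localhost
# any "127.0.1.1 <own-hostname>" line would be removed on both nodes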

arpit@arpit:~/hadoop-1.0.3$ bin/hadoop jar hadoop-examples-1.0.3.jar wordcount /Input /O1

WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
13/04/10 17:56:20 INFO input.FileInputFormat: Total input paths to process : 3
13/04/10 17:56:20 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/04/10 17:56:20 WARN snappy.LoadSnappy: Snappy native library not loaded
13/04/10 17:56:21 INFO mapred.JobClient: Running job: job_201304101755_0001
13/04/10 17:56:22 INFO mapred.JobClient:  map 0% reduce 0%
13/04/10 17:56:36 INFO mapred.JobClient:  map 66% reduce 0%
13/04/10 17:56:39 INFO mapred.JobClient:  map 100% reduce 0%
13/04/10 17:56:45 INFO mapred.JobClient:  map 100% reduce 22%
13/04/10 17:57:23 INFO mapred.JobClient: Task Id : attempt_201304101755_0001_m_000002_0, Status : FAILED
Too many fetch-failures
attempt_201304101755_0001_m_000002_0: WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
13/04/10 17:57:27 INFO mapred.JobClient:  map 66% reduce 22%
13/04/10 17:57:30 INFO mapred.JobClient:  map 100% reduce 22%
13/04/10 17:57:39 INFO mapred.JobClient:  map 100% reduce 100%
13/04/10 17:57:44 INFO mapred.JobClient: Job complete: job_201304101755_0001
13/04/10 17:57:44 INFO mapred.JobClient: Counters: 29
13/04/10 17:57:44 INFO mapred.JobClient:   Job Counters
13/04/10 17:57:44 INFO mapred.JobClient:     Launched reduce tasks=1
13/04/10 17:57:44 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=29110
13/04/10 17:57:44 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
13/04/10 17:57:44 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
13/04/10 17:57:44 INFO mapred.JobClient:     Launched map tasks=4
13/04/10 17:57:44 INFO mapred.JobClient:     Data-local map tasks=2
13/04/10 17:57:44 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=62283
13/04/10 17:57:44 INFO mapred.JobClient:   File Output Format Counters
13/04/10 17:57:44 INFO mapred.JobClient:     Bytes Written=54
13/04/10 17:57:44 INFO mapred.JobClient:   FileSystemCounters
13/04/10 17:57:44 INFO mapred.JobClient:     FILE_BYTES_READ=154
13/04/10 17:57:44 INFO mapred.JobClient:     HDFS_BYTES_READ=371
13/04/10 17:57:44 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=86469
13/04/10 17:57:44 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=54
13/04/10 17:57:44 INFO mapred.JobClient:   File Input Format Counters
13/04/10 17:57:44 INFO mapred.JobClient:     Bytes Read=86
13/04/10 17:57:44 INFO mapred.JobClient:   Map-Reduce Framework
13/04/10 17:57:44 INFO mapred.JobClient:     Map output materialized bytes=166
13/04/10 17:57:44 INFO mapred.JobClient:     Map input records=2
13/04/10 17:57:44 INFO mapred.JobClient:     Reduce shuffle bytes=166
13/04/10 17:57:44 INFO mapred.JobClient:     Spilled Records=20
13/04/10 17:57:44 INFO mapred.JobClient:     Map output bytes=128
13/04/10 17:57:44 INFO mapred.JobClient:     CPU time spent (ms)=5450
13/04/10 17:57:44 INFO mapred.JobClient:     Total committed heap usage (bytes)=495321088
13/04/10 17:57:44 INFO mapred.JobClient:     Combine input records=10
13/04/10 17:57:44 INFO mapred.JobClient:     SPLIT_RAW_BYTES=285
13/04/10 17:57:44 INFO mapred.JobClient:     Reduce input records=10
13/04/10 17:57:44 INFO mapred.JobClient:     Reduce input groups=5
13/04/10 17:57:44 INFO mapred.JobClient:     Combine output records=10
13/04/10 17:57:44 INFO mapred.JobClient:     Physical memory (bytes) snapshot=735719424
13/04/10 17:57:44 INFO mapred.JobClient:     Reduce output records=5
13/04/10 17:57:44 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=2068717568
13/04/10 17:57:44 INFO mapred.JobClient:     Map output records=10
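
In case it helps to narrow this down, here is a rough sketch of the
resolution checks I understand should be run on both nodes, following the
earlier advice about hosts the reducer cannot resolve (master and slave are
placeholder hostnames for my machines):

hostname -f                 # the name this node reports for itself
getent hosts master slave   # how each hostname actually resolves on this node
ping -c 1 slave             # confirm the resolved address is reachable

If the two nodes resolve a hostname differently, or a name resolves to
127.0.1.1, I assume that would match the fetch failures above.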


On Tue, Apr 9, 2013 at 6:22 PM, Harsh J <ha...@cloudera.com> wrote:

> Hi,
>
> This is most likely caused by an improper network environment wherein
> the reducer is not able to resolve all available tasktrackers to read
> the map outputs. Check the logs of the task attempt
> attempt_201304091351_0001_r_000000_0 from the web UI for more specific
> information on which host it wasn't able to resolve.
>
> On Tue, Apr 9, 2013 at 2:48 PM, Rajashree Bagal
> <rajashreeba...@gmail.com> wrote:
> > We are getting the following error/warning while running the wordcount
> > program on a Hadoop two-node cluster with one master and one slave...
> >
> >
> > arpit@arpit:~/hadoop-1.0.3$ bin/hadoop jar hadoop-examples-1.0.3.jar wordcount /Input /Output
> > WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
> > 13/04/09 13:51:56 INFO input.FileInputFormat: Total input paths to process : 3
> > 13/04/09 13:51:56 INFO util.NativeCodeLoader: Loaded the native-hadoop library
> > 13/04/09 13:51:56 WARN snappy.LoadSnappy: Snappy native library not loaded
> > 13/04/09 13:51:57 INFO mapred.JobClient: Running job: job_201304091351_0001
> > 13/04/09 13:51:58 INFO mapred.JobClient:  map 0% reduce 0%
> > 13/04/09 13:52:13 INFO mapred.JobClient:  map 66% reduce 0%
> > 13/04/09 13:52:16 INFO mapred.JobClient:  map 100% reduce 0%
> > 13/04/09 13:52:22 INFO mapred.JobClient:  map 100% reduce 22%
> > 13/04/09 13:59:47 INFO mapred.JobClient:  map 100% reduce 0%
> > 13/04/09 13:59:52 INFO mapred.JobClient: Task Id : attempt_201304091351_0001_r_000000_0, Status : FAILED
> > Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
> > 13/04/09 13:59:52 WARN mapred.JobClient: Error reading task outputhadoop
> > 13/04/09 13:59:52 WARN mapred.JobClient: Error reading task outputhadoop
> > 13/04/09 14:00:05 INFO mapred.JobClient:  map 100% reduce 11%
> >
> > What could be the possible solution? Is it a fault in the setup or
> > something else? Please help.
>
>
>
> --
> Harsh J
>
