Hi, I tried doing the same, to no avail.
I guess if the firewall were the problem then, as mentioned in the post,
data distribution using HDFS would not have been a problem.
(I checked that the map tasks executed successfully on both master and slave.)
For some strange reason, the reduce just won't execute.
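For anyone trying to rule the firewall in or out, a quick connectivity probe between the nodes may help. This is only a rough sketch: the hostnames "master"/"slave" and the port list are assumptions based on the setup quoted below plus the usual defaults (50060 is the TaskTracker HTTP port that reduce-side fetches go through, which is where "fetch-failures" point).

```python
import socket

# Ports a small Hadoop 0.15/0.16 cluster needs open between nodes
# (assumed from the hadoop-site.xml below plus the defaults):
#   54310 = fs.default.name (NameNode), 54311 = mapred.job.tracker,
#   50010 = DataNode data port, 50060 = TaskTracker HTTP port that
#   reducers fetch map output from.
PORTS = (50010, 50060, 54310, 54311)

def port_open(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused, timed out, and DNS failures
        return False

if __name__ == "__main__":
    for host in ("master", "slave"):  # hostnames assumed from this setup
        for port in PORTS:
            state = "open" if port_open(host, port) else "blocked/unreachable"
            print(host, port, state)
```

Running this from each node against the other should show every port as open; anything "blocked/unreachable" would point back at iptables (or /etc/hosts) rather than Hadoop itself.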

On 0.16.3 I get the following output on the console. (Note that for some
reason all the reduce tasks get shifted to one reducer, and the maps are
re-executed.) Any ideas?
Is it bug HADOOP-1374?

08/05/09 12:33:19 INFO mapred.FileInputFormat: Total input paths to
process : 7
08/05/09 12:33:19 INFO mapred.JobClient: Running job: job_200805091211_0003
08/05/09 12:33:20 INFO mapred.JobClient:  map 0% reduce 0%
08/05/09 12:33:26 INFO mapred.JobClient:  map 14% reduce 0%
08/05/09 12:33:27 INFO mapred.JobClient:  map 28% reduce 0%
08/05/09 12:33:28 INFO mapred.JobClient:  map 42% reduce 0%
08/05/09 12:33:29 INFO mapred.JobClient:  map 71% reduce 0%
08/05/09 12:33:32 INFO mapred.JobClient:  map 100% reduce 0%
08/05/09 12:33:42 INFO mapred.JobClient:  map 100% reduce 19%
08/05/09 12:34:42 INFO mapred.JobClient: Task Id :
task_200805091211_0003_m_000003_0, Status : FAILED
Too many fetch-failures
08/05/09 12:34:42 WARN mapred.JobClient: Error reading task
outputmtech-desktop
08/05/09 12:34:42 WARN mapred.JobClient: Error reading task
outputmtech-desktop
08/05/09 12:34:43 INFO mapred.JobClient:  map 85% reduce 19%
08/05/09 12:34:45 INFO mapred.JobClient:  map 100% reduce 19%
08/05/09 12:34:53 INFO mapred.JobClient: Task Id :
task_200805091211_0003_m_000004_0, Status : FAILED
Too many fetch-failures
08/05/09 12:34:53 WARN mapred.JobClient: Error reading task
outputmtech-desktop
08/05/09 12:34:53 WARN mapred.JobClient: Error reading task
outputmtech-desktop
08/05/09 12:34:55 INFO mapred.JobClient:  map 85% reduce 19%
08/05/09 12:34:56 INFO mapred.JobClient:  map 100% reduce 19%
08/05/09 12:35:00 INFO mapred.JobClient:  map 100% reduce 23%
08/05/09 12:35:00 INFO mapred.JobClient: Task Id :
task_200805091211_0003_m_000000_0, Status : FAILED
Too many fetch-failures
08/05/09 12:35:00 WARN mapred.JobClient: Error reading task
outputmtech-desktop
08/05/09 12:35:00 WARN mapred.JobClient: Error reading task
outputmtech-desktop
08/05/09 12:35:01 INFO mapred.JobClient:  map 85% reduce 23%
08/05/09 12:35:05 INFO mapred.JobClient:  map 100% reduce 28%
08/05/09 12:35:13 INFO mapred.JobClient:  map 100% reduce 100%
08/05/09 12:35:14 INFO mapred.JobClient: Job complete: job_200805091211_0003
08/05/09 12:35:14 INFO mapred.JobClient: Counters: 12
08/05/09 12:35:14 INFO mapred.JobClient:   Job Counters
08/05/09 12:35:14 INFO mapred.JobClient:     Launched map tasks=10
08/05/09 12:35:14 INFO mapred.JobClient:     Launched reduce tasks=1
08/05/09 12:35:14 INFO mapred.JobClient:     Data-local map tasks=7
08/05/09 12:35:14 INFO mapred.JobClient:   Map-Reduce Framework
08/05/09 12:35:14 INFO mapred.JobClient:     Map input records=136582
08/05/09 12:35:14 INFO mapred.JobClient:     Map output records=1173106
08/05/09 12:35:14 INFO mapred.JobClient:     Map input bytes=6929688
08/05/09 12:35:14 INFO mapred.JobClient:     Map output bytes=11403672
08/05/09 12:35:14 INFO mapred.JobClient:     Combine input records=1173106
08/05/09 12:35:14 INFO mapred.JobClient:     Combine output records=195209
08/05/09 12:35:14 INFO mapred.JobClient:     Reduce input groups=131275
08/05/09 12:35:14 INFO mapred.JobClient:     Reduce input records=195209
08/05/09 12:35:14 INFO mapred.JobClient:     Reduce output records=131275




-- 
Amit Singh
First Year PostGraduate Student.
Department Of Computer Science And Engineering.
Indian Institute Of Technology, Mumbai.


-- 
"A man's reach should exceed his grasp, or what are the heavens for"
--Vinton G Cerf

> It may be due to the firewall.
> Try after stopping iptables.
> If it works, add firewall rules to allow communication between the master
> and slaves (better: allow all nodes in the subnet).
>
> -----Original Message-----
> From: Amit Kumar Singh [mailto:[EMAIL PROTECTED]
> Sent: Thursday, May 08, 2008 4:50 PM
> To: core-user@hadoop.apache.org
> Subject: [Reduce task stalls] Problem Detailed Report
>
> Some of the details that might reveal something more about the problem
> I posted:
> http://mail-archives.apache.org/mod_mbox/hadoop-core-user/200805.mbox/[EMAIL 
> PROTECTED]
>
> Hadoop Versions Used
> ****************************************
> 0.15.3
> 0.16.3
>
>
> My environment
> **************
> Ubuntu 7.10 JDK 6.0.
>
>
> Setup
> **************
> 2-machine cluster (one master and 2 slaves; the master is itself a slave)
>
>
> Application
> **************
> Sample wordcount example provided with hadoop distributions
>
>
> Problem
> **************
> Tried on both versions. In 0.16.3 the reduce task ends after failures
> (mapred.JobClient: Task Id : XTZ , Status : FAILED
> Too many fetch-failures), but in 0.15.3 the entire thing just stalls.
>
> (Dataset size is <10 MB )
>
> (All the logs and outputs are for version 0.15.3)
>
>
> Console output for 0.15.3
> -----------------------
> 08/05/09 11:12:22 INFO mapred.FileInputFormat: Total input paths to
> process : 7
> 08/05/09 11:12:22 INFO mapred.JobClient: Running job:
> job_200805091110_0001
> 08/05/09 11:12:23 INFO mapred.JobClient:  map 0% reduce 0%
> 08/05/09 11:12:31 INFO mapred.JobClient:  map 14% reduce 0%
> 08/05/09 11:12:32 INFO mapred.JobClient:  map 42% reduce 0%
> 08/05/09 11:12:33 INFO mapred.JobClient:  map 57% reduce 0%
> 08/05/09 11:12:34 INFO mapred.JobClient:  map 71% reduce 0%
> 08/05/09 11:12:35 INFO mapred.JobClient:  map 100% reduce 0%
> 08/05/09 11:12:43 INFO mapred.JobClient:  map 100% reduce 9%
> 08/05/09 11:12:53 INFO mapred.JobClient:  map 100% reduce 14%
>
> AND THE ENTIRE THING HANGS ....
>
>
> Steps followed for Setup
> -------------------------
> 1) Modified conf/hadoop-env.sh (Java home)
> 2) Modified conf/masters: added the hostname of the master server (in my
> case master) --- ONLY ON MASTER
> 3) Modified conf/slaves: added the hostnames of the slaves (in my case
> master and slave) --- ONLY ON MASTER
> 4) Enabled password-free ssh from master to slave, master to master,
> slave to slave, and slave to master
> 5) Modified hadoop-site.xml (Both Master and Slave)
> <property>
>    <name>fs.default.name</name>
>    <value>hdfs://master:54310</value>
> </property>
>  <property>
>    <name>mapred.job.tracker</name>
>    <value>master:54311</value>
>  </property>
> <property>
>    <name>dfs.replication</name>
>    <value>2</value>
> </property>
>
> <property>
> <name>mapred.reduce.copy.backoff</name>  (ADDED THIS PROPERTY AS ONE OF
> THE POSTS SUGGESTED IT AS A SOLUTION)
> <value>5</value>
> </property>
>
> 6) hadoop namenode -format --- ONLY ON MASTER
> 7) start-dfs.sh --- ONLY ON MASTER
> 8) start-mapred.sh --- ONLY ON MASTER
> 9) ./hadoop dfs -copyFromLocal ../../data/ d1 (d1 is a folder containing
> some text files) --- ONLY ON MASTER
> 10) ./hadoop jar hadoop-0.15.3-examples.jar wordcount d1 d1_op (the
> map-reduce task) --- ONLY ON MASTER
>
>
> MASTER LOGS
>
> *****************************************************************
>                 hadoop-hadoop-tasktracker-master.log
> *****************************************************************
> 2008-05-09 11:10:15,582 INFO org.mortbay.util.Credential: Checking
> Resource aliases
> 2008-05-09 11:10:15,637 INFO org.mortbay.http.HttpServer: Version
> Jetty/5.1.4
> 2008-05-09 11:10:15,638 INFO org.mortbay.util.Container: Started
> HttpContext[/static,/static]
> 2008-05-09 11:10:15,638 INFO org.mortbay.util.Container: Started
> HttpContext[/logs,/logs]
> 2008-05-09 11:10:16,000 INFO org.mortbay.util.Container: Started
> [EMAIL PROTECTED]
> 2008-05-09 11:10:16,033 INFO org.mortbay.util.Container: Started
> WebApplicationContext[/,/]
> 2008-05-09 11:10:16,037 INFO org.mortbay.http.SocketListener: Started
> SocketListener on 0.0.0.0:50060
> 2008-05-09 11:10:16,037 INFO org.mortbay.util.Container: Started
> [EMAIL PROTECTED]
> 2008-05-09 11:10:16,045 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=TaskTracker, sessionId=null
> 2008-05-09 11:10:16,059 INFO org.apache.hadoop.ipc.Server: IPC Server
> listener on 59074: starting
> 2008-05-09 11:10:16,059 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 0 on 59074: starting
> 2008-05-09 11:10:16,059 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 1 on 59074: starting
> 2008-05-09 11:10:16,059 INFO org.apache.hadoop.mapred.TaskTracker:
> TaskTracker up at: /127.0.0.1:59074
> 2008-05-09 11:10:16,059 INFO org.apache.hadoop.mapred.TaskTracker:
> Starting tracker tracker_cse-desktop:/127.0.0.1:59074
> 2008-05-09 11:10:16,100 INFO org.apache.hadoop.mapred.TaskTracker:
> Starting thread: Map-events fetcher for all reduce tasks on
> tracker_cse-desktop:/127.0.0.1:59074
> 2008-05-09 11:12:26,313 INFO org.apache.hadoop.mapred.TaskTracker:
> LaunchTaskAction: task_200805091110_0001_m_000000_0
> 2008-05-09 11:12:26,764 INFO org.apache.hadoop.mapred.TaskTracker:
> LaunchTaskAction: task_200805091110_0001_m_000002_0
> 2008-05-09 11:12:31,971 INFO org.apache.hadoop.mapred.TaskTracker:
> task_200805091110_0001_m_000002_0 1.0%
> hdfs://master:54310/user/hadoop/suvidya/8ldvc10.txt:0+1427769
> 2008-05-09 11:12:31,975 INFO org.apache.hadoop.mapred.TaskTracker: Task
> task_200805091110_0001_m_000002_0 is done.
> 2008-05-09 11:12:32,032 INFO org.apache.hadoop.mapred.TaskTracker:
> LaunchTaskAction: task_200805091110_0001_m_000005_0
> 2008-05-09 11:12:32,857 INFO org.apache.hadoop.mapred.TaskTracker:
> task_200805091110_0001_m_000000_0 1.0%
> hdfs://master:54310/user/hadoop/suvidya/19699.txt:0+1945731
> 2008-05-09 11:12:32,857 INFO org.apache.hadoop.mapred.TaskTracker: Task
> task_200805091110_0001_m_000000_0 is done.
> 2008-05-09 11:12:32,938 INFO org.apache.hadoop.mapred.TaskTracker:
> LaunchTaskAction: task_200805091110_0001_r_000000_0
> 2008-05-09 11:12:34,401 INFO org.apache.hadoop.mapred.TaskTracker:
> task_200805091110_0001_m_000005_0 1.0%
> hdfs://master:54310/user/hadoop/suvidya/dvldc10.txt:0+385962
> 2008-05-09 11:12:34,403 INFO org.apache.hadoop.mapred.TaskTracker: Task
> task_200805091110_0001_m_000005_0 is done.
> 2008-05-09 11:12:40,065 INFO org.apache.hadoop.mapred.TaskTracker:
> task_200805091110_0001_r_000000_0 0.095238104% reduce > copy (2 of 7 at
> 0.80 MB/s) >
> 2008-05-09 11:12:46,067 INFO org.apache.hadoop.mapred.TaskTracker:
> task_200805091110_0001_r_000000_0 0.14285716% reduce > copy (3 of 7 at
> 0.24 MB/s) >
> 2008-05-09 11:12:52,069 INFO org.apache.hadoop.mapred.TaskTracker:
> task_200805091110_0001_r_000000_0 0.14285716% reduce > copy (3 of 7 at
> 0.24 MB/s) >
> 2008-05-09 11:12:58,071 INFO org.apache.hadoop.mapred.TaskTracker:
> task_200805091110_0001_r_000000_0 0.14285716% reduce > copy (3 of 7 at
> 0.24 MB/s) >
> 2008-05-09 11:13:01,079 INFO org.apache.hadoop.mapred.TaskTracker:
> task_200805091110_0001_r_000000_0 0.14285716% reduce > copy (3 of 7 at
> 0.24 MB/s) >
> 2008-05-09 11:13:07,081 INFO org.apache.hadoop.mapred.TaskTracker:
> task_200805091110_0001_r_000000_0 0.14285716% reduce > copy (3 of 7 at
> 0.24 MB/s) >
> 2008-05-09 11:13:13,083 INFO org.apache.hadoop.mapred.TaskTracker:
> task_200805091110_0001_r_000000_0 0.14285716% reduce > copy (3 of 7 at
> 0.24 MB/s) >
> 2008-05-09 11:13:16,084 INFO org.apache.hadoop.mapred.TaskTracker:
> task_200805091110_0001_r_000000_0 0.14285716% reduce > copy (3 of 7 at
> 0.24 MB/s) >
> 2008-05-09 11:13:22,086 INFO org.apache.hadoop.mapred.TaskTracker:
> task_200805091110_0001_r_000000_0 0.14285716% reduce > copy (3 of 7 at
> 0.24 MB/s) >
> 2008-05-09 11:13:28,088 INFO org.apache.hadoop.mapred.TaskTracker:
> task_200805091110_0001_r_000000_0 0.14285716% reduce > copy (3 of 7 at
> 0.24 MB/s) >
> 2008-05-09 11:13:31,089 INFO org.apache.hadoop.mapred.TaskTracker:
> task_200805091110_0001_r_000000_0 0.14285716% reduce > copy (3 of 7 at
> 0.24 MB/s) >
> 2008-05-09 11:13:37,091 INFO org.apache.hadoop.mapred.TaskTracker:
> task_200805091110_0001_r_000000_0 0.14285716% reduce > copy (3 of 7 at
> 0.24 MB/s) >
> 2008-05-09 11:13:43,094 INFO org.apache.hadoop.mapred.TaskTracker:
> task_200805091110_0001_r_000000_0 0.14285716% reduce > copy (3 of 7 at
> 0.24 MB/s) >
> 2008-05-09 11:13:49,096 INFO org.apache.hadoop.mapred.TaskTracker:
> task_200805091110_0001_r_000000_0 0.14285716% reduce > copy (3 of 7 at
> 0.24 MB/s) >
> 2008-05-09 11:13:52,098 INFO org.apache.hadoop.mapred.TaskTracker:
> task_200805091110_0001_r_000000_0 0.14285716% reduce > copy (3 of 7 at
> 0.24 MB/s) >
> 2008-05-09 11:13:58,100 INFO org.apache.hadoop.mapred.TaskTracker:
> task_200805091110_0001_r_000000_0 0.14285716% reduce > copy (3 of 7 at
> 0.24 MB/s) >
> 2008-05-09 11:14:04,102 INFO org.apache.hadoop.mapred.TaskTracker:
> task_200805091110_0001_r_000000_0 0.14285716% reduce > copy (3 of 7 at
> 0.24 MB/s) >
> 2008-05-09 11:14:07,104 INFO org.apache.hadoop.mapred.TaskTracker:
> task_200805091110_0001_r_000000_0 0.14285716% reduce > copy (3 of 7 at
> 0.24 MB/s) >
> 2008-05-09 11:14:13,106 INFO org.apache.hadoop.mapred.TaskTracker:
> task_200805091110_0001_r_000000_0 0.14285716% reduce > copy (3 of 7 at
> 0.24 MB/s) >
> 2008-05-09 11:14:19,108 INFO org.apache.hadoop.mapred.TaskTracker:
> task_200805091110_0001_r_000000_0 0.14285716% reduce > copy (3 of 7 at
> 0.24 MB/s) >
> .
> .
> .
> .
> THE SAME MESSAGE GOES ON AND ON AND ON...
>
>
> *****************************************************************
>                 hadoop-hadoop-jobtracker-master.log
> *****************************************************************
>
> 2008-05-09 11:10:14,341 INFO org.apache.hadoop.mapred.JobTracker:
> STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting JobTracker
> STARTUP_MSG:   host = cse-desktop/127.0.1.1
> STARTUP_MSG:   args = []
> ************************************************************/
> 2008-05-09 11:10:14,434 INFO org.apache.hadoop.ipc.Server: IPC Server
> listener on 54311: starting
> 2008-05-09 11:10:14,434 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 0 on 54311: starting
> 2008-05-09 11:10:14,434 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 1 on 54311: starting
> 2008-05-09 11:10:14,434 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 2 on 54311: starting
> 2008-05-09 11:10:14,435 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 3 on 54311: starting
> 2008-05-09 11:10:14,435 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 4 on 54311: starting
> 2008-05-09 11:10:14,435 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 5 on 54311: starting
> 2008-05-09 11:10:14,435 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 6 on 54311: starting
> 2008-05-09 11:10:14,435 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 7 on 54311: starting
> 2008-05-09 11:10:14,435 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 8 on 54311: starting
> 2008-05-09 11:10:14,451 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 9 on 54311: starting
> 2008-05-09 11:10:14,489 INFO org.mortbay.util.Credential: Checking
> Resource aliases
> 2008-05-09 11:10:14,522 INFO org.mortbay.http.HttpServer: Version
> Jetty/5.1.4
> 2008-05-09 11:10:14,523 INFO org.mortbay.util.Container: Started
> HttpContext[/static,/static]
> 2008-05-09 11:10:14,523 INFO org.mortbay.util.Container: Started
> HttpContext[/logs,/logs]
> 2008-05-09 11:10:14,887 INFO org.mortbay.util.Container: Started
> [EMAIL PROTECTED]
> 2008-05-09 11:10:14,931 INFO org.mortbay.util.Container: Started
> WebApplicationContext[/,/]
> 2008-05-09 11:10:14,933 INFO org.mortbay.http.SocketListener: Started
> SocketListener on 0.0.0.0:50030
> 2008-05-09 11:10:14,934 INFO org.mortbay.util.Container: Started
> [EMAIL PROTECTED]
> 2008-05-09 11:10:14,936 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=JobTracker, sessionId=null
> 2008-05-09 11:10:14,943 INFO org.apache.hadoop.mapred.JobTracker:
> JobTracker up at: 54311
> 2008-05-09 11:10:14,944 INFO org.apache.hadoop.mapred.JobTracker:
> JobTracker webserver: 50030
> 2008-05-09 11:10:15,053 INFO org.apache.hadoop.mapred.JobTracker: Starting
> RUNNING
> 2008-05-09 11:12:26,256 INFO org.apache.hadoop.mapred.JobInProgress:
> Choosing cached task tip_200805091110_0001_m_000000
> 2008-05-09 11:12:26,274 INFO org.apache.hadoop.mapred.JobTracker: Adding
> task 'task_200805091110_0001_m_000000_0' to tip
> tip_200805091110_0001_m_000000, for tracker
> 'tracker_cse-desktop:/127.0.0.1:59074'
> 2008-05-09 11:12:26,471 INFO org.apache.hadoop.mapred.JobInProgress:
> Choosing cached task tip_200805091110_0001_m_000001
> 2008-05-09 11:12:26,472 INFO org.apache.hadoop.mapred.JobTracker: Adding
> task 'task_200805091110_0001_m_000001_0' to tip
> tip_200805091110_0001_m_000001, for tracker
> 'tracker_mtech-desktop:/127.0.0.1:36527'
> 2008-05-09 11:12:26,744 INFO org.apache.hadoop.mapred.JobInProgress:
> Choosing cached task tip_200805091110_0001_m_000002
> 2008-05-09 11:12:26,744 INFO org.apache.hadoop.mapred.JobTracker: Adding
> task 'task_200805091110_0001_m_000002_0' to tip
> tip_200805091110_0001_m_000002, for tracker
> 'tracker_cse-desktop:/127.0.0.1:59074'
> 2008-05-09 11:12:27,101 INFO org.apache.hadoop.mapred.JobInProgress:
> Choosing cached task tip_200805091110_0001_m_000003
> 2008-05-09 11:12:27,101 INFO org.apache.hadoop.mapred.JobTracker: Adding
> task 'task_200805091110_0001_m_000003_0' to tip
> tip_200805091110_0001_m_000003, for tracker
> 'tracker_mtech-desktop:/127.0.0.1:36527'
> 2008-05-09 11:12:31,253 INFO org.apache.hadoop.mapred.JobInProgress:
> Choosing cached task tip_200805091110_0001_m_000004
> 2008-05-09 11:12:31,253 INFO org.apache.hadoop.mapred.JobTracker: Adding
> task 'task_200805091110_0001_m_000004_0' to tip
> tip_200805091110_0001_m_000004, for tracker
> 'tracker_mtech-desktop:/127.0.0.1:36527'
> 2008-05-09 11:12:31,352 INFO org.apache.hadoop.mapred.JobInProgress: Task
> 'task_200805091110_0001_m_000003_0' has completed
> tip_200805091110_0001_m_000003 successfully.
> 2008-05-09 11:12:32,030 INFO org.apache.hadoop.mapred.JobInProgress:
> Choosing cached task tip_200805091110_0001_m_000005
> 2008-05-09 11:12:32,030 INFO org.apache.hadoop.mapred.JobTracker: Adding
> task 'task_200805091110_0001_m_000005_0' to tip
> tip_200805091110_0001_m_000005, for tracker
> 'tracker_cse-desktop:/127.0.0.1:59074'
> 2008-05-09 11:12:32,031 INFO org.apache.hadoop.mapred.JobInProgress: Task
> 'task_200805091110_0001_m_000002_0' has completed
> tip_200805091110_0001_m_000002 successfully.
> 2008-05-09 11:12:32,098 INFO org.apache.hadoop.mapred.JobInProgress:
> Choosing cached task tip_200805091110_0001_m_000006
> 2008-05-09 11:12:32,099 INFO org.apache.hadoop.mapred.JobTracker: Adding
> task 'task_200805091110_0001_m_000006_0' to tip
> tip_200805091110_0001_m_000006, for tracker
> 'tracker_mtech-desktop:/127.0.0.1:36527'
> 2008-05-09 11:12:32,099 INFO org.apache.hadoop.mapred.JobInProgress: Task
> 'task_200805091110_0001_m_000001_0' has completed
> tip_200805091110_0001_m_000001 successfully.
> 2008-05-09 11:12:32,930 INFO org.apache.hadoop.mapred.JobInProgress:
> Choosing normal task tip_200805091110_0001_r_000000
> 2008-05-09 11:12:32,933 INFO org.apache.hadoop.mapred.JobTracker: Adding
> task 'task_200805091110_0001_r_000000_0' to tip
> tip_200805091110_0001_r_000000, for tracker
> 'tracker_cse-desktop:/127.0.0.1:59074'
> 2008-05-09 11:12:32,935 INFO org.apache.hadoop.mapred.JobInProgress: Task
> 'task_200805091110_0001_m_000000_0' has completed
> tip_200805091110_0001_m_000000 successfully.
> 2008-05-09 11:12:34,530 INFO org.apache.hadoop.mapred.JobInProgress: Task
> 'task_200805091110_0001_m_000005_0' has completed
> tip_200805091110_0001_m_000005 successfully.
> 2008-05-09 11:12:35,452 INFO org.apache.hadoop.mapred.JobInProgress: Task
> 'task_200805091110_0001_m_000004_0' has completed
> tip_200805091110_0001_m_000004 successfully.
> 2008-05-09 11:12:35,453 INFO org.apache.hadoop.mapred.JobInProgress: Task
> 'task_200805091110_0001_m_000006_0' has completed
> tip_200805091110_0001_m_000006 successfully.
> 2008-05-09 11:20:14,597 INFO org.apache.hadoop.mapred.JobInProgress:
> Failed fetch notification #1 for task task_200805091110_0001_m_000001_0
> 2008-05-09 11:23:35,452 INFO org.apache.hadoop.conf.Configuration: found
> resource webapps/static/jobconf.xsl at
> file:/home/hadoop/HADOOP/hadoop-0.15.3/webapps/static/jobconf.xsl
> 2008-05-09 11:32:13,314 INFO org.apache.hadoop.mapred.JobInProgress:
> Failed fetch notification #1 for task task_200805091110_0001_m_000006_0
> 2008-05-09 11:33:23,329 INFO org.apache.hadoop.mapred.JobInProgress:
> Failed fetch notification #1 for task task_200805091110_0001_m_000004_0
> 2008-05-09 11:38:53,398 INFO org.apache.hadoop.mapred.JobInProgress:
> Failed fetch notification #1 for task task_200805091110_0001_m_000003_0
> 2008-05-09 11:49:33,527 INFO org.apache.hadoop.mapred.JobInProgress:
> Failed fetch notification #2 for task task_200805091110_0001_m_000001_0
> 2008-05-09 11:51:43,554 INFO org.apache.hadoop.mapred.JobInProgress:
> Failed fetch notification #2 for task task_200805091110_0001_m_000006_0
> 2008-05-09 11:53:53,579 INFO org.apache.hadoop.mapred.JobInProgress:
> Failed fetch notification #2 for task task_200805091110_0001_m_000004_0
>
>
>
> *****************************************************************
>                 hadoop-hadoop-datanode-master.log
> *****************************************************************
> 2008-05-09 11:10:09,361 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=DataNode, sessionId=null
> 2008-05-09 11:10:10,972 INFO org.apache.hadoop.dfs.Storage: Storage
> directory /home/hadoop/tmp1/hadoop-cse/dfs/data is not formatted.
> 2008-05-09 11:10:10,972 INFO org.apache.hadoop.dfs.Storage: Formatting ...
> 2008-05-09 11:10:11,022 INFO org.apache.hadoop.dfs.DataNode: Opened server
> at 50010
> 2008-05-09 11:10:11,091 INFO org.mortbay.util.Credential: Checking
> Resource aliases
> 2008-05-09 11:10:11,127 INFO org.mortbay.http.HttpServer: Version
> Jetty/5.1.4
> 2008-05-09 11:10:11,128 INFO org.mortbay.util.Container: Started
> HttpContext[/static,/static]
> 2008-05-09 11:10:11,128 INFO org.mortbay.util.Container: Started
> HttpContext[/logs,/logs]
> 2008-05-09 11:10:11,491 INFO org.mortbay.util.Container: Started
> [EMAIL PROTECTED]
> 2008-05-09 11:10:11,531 INFO org.mortbay.util.Container: Started
> WebApplicationContext[/,/]
> 2008-05-09 11:10:11,549 INFO org.mortbay.http.SocketListener: Started
> SocketListener on 0.0.0.0:50075
> 2008-05-09 11:10:11,550 INFO org.mortbay.util.Container: Started
> [EMAIL PROTECTED]
> 2008-05-09 11:10:15,653 INFO org.apache.hadoop.dfs.DataNode: New storage
> id DS-91763126-127.0.1.1-50010-1210311615634 is assigned to data-node
> 10.105.41.101:50010
> 2008-05-09 11:10:15,653 INFO org.apache.hadoop.dfs.DataNode: In
> DataNode.run, data
> FSDataset{dirpath='/home/hadoop/tmp1/hadoop-cse/dfs/data/current'}
> 2008-05-09 11:10:15,653 INFO org.apache.hadoop.dfs.DataNode: using
> BLOCKREPORT_INTERVAL of 3284786msec
> 2008-05-09 11:10:45,396 INFO org.apache.hadoop.dfs.DataNode: Received
> block blk_-3015722797502311964 from /10.105.41.101
> 2008-05-09 11:10:45,685 INFO org.apache.hadoop.dfs.DataNode: Received
> block blk_-1049683426562175211 from /10.105.41.101
> 2008-05-09 11:10:45,826 INFO org.apache.hadoop.dfs.DataNode: Received
> block blk_8092696251576742269 from /10.105.41.101
> 2008-05-09 11:10:45,906 INFO org.apache.hadoop.dfs.DataNode: Received
> block blk_-8254569623864016135 from /10.105.41.101
> 2008-05-09 11:10:46,038 INFO org.apache.hadoop.dfs.DataNode: Received
> block blk_358867731888329836 from /10.105.41.101
> 2008-05-09 11:10:46,342 INFO org.apache.hadoop.dfs.DataNode: Received
> block blk_3681060079121545753 from /10.105.41.101
> 2008-05-09 11:10:46,457 INFO org.apache.hadoop.dfs.DataNode: Received
> block blk_-3622716618848320575 from /10.105.41.101
> 2008-05-09 11:12:22,110 INFO org.apache.hadoop.dfs.DataNode: Received
> block blk_4971396424299898755 from /10.105.41.101
> 2008-05-09 11:12:22,208 INFO org.apache.hadoop.dfs.DataNode: Received
> block blk_4546830771263681687 from /10.105.41.101
> 2008-05-09 11:12:22,364 INFO org.apache.hadoop.dfs.DataNode: Received
> block blk_2087945424235891541 from /10.105.41.101
> 2008-05-09 11:12:22,430 INFO org.apache.hadoop.dfs.DataNode: Served block
> blk_2087945424235891541 to /10.105.41.101
> 2008-05-09 11:12:22,508 INFO org.apache.hadoop.dfs.DataNode: Served block
> blk_4971396424299898755 to /10.105.41.101
> 2008-05-09 11:12:22,606 INFO org.apache.hadoop.dfs.DataNode: Served block
> blk_4546830771263681687 to /10.105.41.101
> 2008-05-09 11:12:26,467 INFO org.apache.hadoop.dfs.DataNode: Served block
> blk_2087945424235891541 to /10.105.41.101
> 2008-05-09 11:12:26,540 INFO org.apache.hadoop.dfs.DataNode: Served block
> blk_4971396424299898755 to /10.105.41.101
> 2008-05-09 11:12:28,817 INFO org.apache.hadoop.dfs.DataNode: Served block
> blk_-1049683426562175211 to /10.105.41.101
> 2008-05-09 11:12:29,143 INFO org.apache.hadoop.dfs.DataNode: Served block
> blk_3681060079121545753 to /10.105.41.101
> 2008-05-09 11:12:33,293 INFO org.apache.hadoop.dfs.DataNode: Served block
> blk_-3622716618848320575 to /10.105.41.101
>
>
>
> *****************************************************************
>                 hadoop-hadoop-namenode-master.log
> *****************************************************************
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = cse-desktop/127.0.1.1
> STARTUP_MSG:   args = []
> ************************************************************/
> 2008-05-09 11:10:09,340 INFO org.apache.hadoop.dfs.NameNode: Namenode up
> at: master/10.105.41.101:54310
> 2008-05-09 11:10:09,351 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=NameNode, sessionId=null
> 2008-05-09 11:10:09,844 INFO org.apache.hadoop.dfs.StateChange: STATE*
> Network topology has 0 racks and 0 datanodes
> 2008-05-09 11:10:09,844 INFO org.apache.hadoop.dfs.StateChange: STATE*
> UnderReplicatedBlocks has 0 blocks
> 2008-05-09 11:10:10,096 INFO org.mortbay.util.Credential: Checking
> Resource aliases
> 2008-05-09 11:10:10,152 INFO org.mortbay.http.HttpServer: Version
> Jetty/5.1.4
> 2008-05-09 11:10:10,158 INFO org.mortbay.util.Container: Started
> HttpContext[/static,/static]
> 2008-05-09 11:10:10,158 INFO org.mortbay.util.Container: Started
> HttpContext[/logs,/logs]
> 2008-05-09 11:10:10,670 INFO org.mortbay.util.Container: Started
> [EMAIL PROTECTED]
> 2008-05-09 11:10:10,733 INFO org.mortbay.util.Container: Started
> WebApplicationContext[/,/]
> 2008-05-09 11:10:10,735 INFO org.mortbay.http.SocketListener: Started
> SocketListener on 0.0.0.0:50070
> 2008-05-09 11:10:10,735 INFO org.mortbay.util.Container: Started
> [EMAIL PROTECTED]
> 2008-05-09 11:10:10,735 INFO org.apache.hadoop.fs.FSNamesystem: Web-server
> up at: 50070
> 2008-05-09 11:10:10,736 INFO org.apache.hadoop.ipc.Server: IPC Server
> listener on 54310: starting
> 2008-05-09 11:10:10,743 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 1 on 54310: starting
> 2008-05-09 11:10:10,743 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 2 on 54310: starting
> 2008-05-09 11:10:10,743 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 3 on 54310: starting
> 2008-05-09 11:10:10,743 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 4 on 54310: starting
> 2008-05-09 11:10:10,747 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 5 on 54310: starting
> 2008-05-09 11:10:10,747 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 6 on 54310: starting
> 2008-05-09 11:10:10,747 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 7 on 54310: starting
> 2008-05-09 11:10:10,747 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 8 on 54310: starting
> 2008-05-09 11:10:10,748 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 9 on 54310: starting
> 2008-05-09 11:10:10,748 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 0 on 54310: starting
> 2008-05-09 11:10:13,185 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.registerDatanode: node registration from 10.105.41.98:50010
> storage DS-2127148821-127.0.1.1-50010-1210275412696
> 2008-05-09 11:10:13,189 INFO org.apache.hadoop.net.NetworkTopology: Adding
> a new node: /default-rack/10.105.41.98:50010
> 2008-05-09 11:10:15,032 WARN org.apache.hadoop.dfs.StateChange: DIR*
> FSDirectory.unprotectedDelete: failed to remove
> /home/hadoop/tmp1/hadoop-cse/mapred/system because it does not exist
> 2008-05-09 11:10:15,635 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.registerDatanode: node registration from 10.105.41.101:50010
> storage DS-91763126-127.0.1.1-50010-1210311615634
> 2008-05-09 11:10:15,635 INFO org.apache.hadoop.net.NetworkTopology: Adding
> a new node: /default-rack/10.105.41.101:50010
> 2008-05-09 11:10:45,054 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.allocateBlock: /user/hadoop/suvidya/ulyss12.txt.
> blk_-3015722797502311964
> 2008-05-09 11:10:45,363 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.addStoredBlock: blockMap updated: 10.105.41.101:50010 is added
> to blk_-3015722797502311964
> 2008-05-09 11:10:45,395 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.addStoredBlock: blockMap updated: 10.105.41.98:50010 is added
> to blk_-3015722797502311964
> 2008-05-09 11:10:45,556 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.allocateBlock: /user/hadoop/suvidya/8ldvc10.txt.
> blk_-1049683426562175211
> 2008-05-09 11:10:45,640 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.addStoredBlock: blockMap updated: 10.105.41.101:50010 is added
> to blk_-1049683426562175211
> 2008-05-09 11:10:45,685 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.addStoredBlock: blockMap updated: 10.105.41.98:50010 is added
> to blk_-1049683426562175211
> 2008-05-09 11:10:45,770 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.allocateBlock: /user/hadoop/suvidya/advsh12.txt.
> blk_8092696251576742269
> 2008-05-09 11:10:45,811 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.addStoredBlock: blockMap updated: 10.105.41.101:50010 is added
> to blk_8092696251576742269
> 2008-05-09 11:10:45,827 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.addStoredBlock: blockMap updated: 10.105.41.98:50010 is added
> to blk_8092696251576742269
> 2008-05-09 11:10:45,871 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.allocateBlock: /user/hadoop/suvidya/132.txt.
> blk_-8254569623864016135
> 2008-05-09 11:10:45,895 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.addStoredBlock: blockMap updated: 10.105.41.101:50010 is added
> to blk_-8254569623864016135
> 2008-05-09 11:10:45,906 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.addStoredBlock: blockMap updated: 10.105.41.98:50010 is added
> to blk_-8254569623864016135
> 2008-05-09 11:10:45,970 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.allocateBlock: /user/hadoop/suvidya/20417.txt.
> blk_358867731888329836
> 2008-05-09 11:10:46,022 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.addStoredBlock: blockMap updated: 10.105.41.101:50010 is added
> to blk_358867731888329836
> 2008-05-09 11:10:46,038 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.addStoredBlock: blockMap updated: 10.105.41.98:50010 is added
> to blk_358867731888329836
> 2008-05-09 11:10:46,128 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.allocateBlock: /user/hadoop/suvidya/19699.txt.
> blk_3681060079121545753
> 2008-05-09 11:10:46,311 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.addStoredBlock: blockMap updated: 10.105.41.101:50010 is added
> to blk_3681060079121545753
> 2008-05-09 11:10:46,343 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.addStoredBlock: blockMap updated: 10.105.41.98:50010 is added
> to blk_3681060079121545753
> 2008-05-09 11:10:46,401 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.allocateBlock: /user/hadoop/suvidya/dvldc10.txt.
> blk_-3622716618848320575
> 2008-05-09 11:10:46,445 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.addStoredBlock: blockMap updated: 10.105.41.101:50010 is added
> to blk_-3622716618848320575
> 2008-05-09 11:10:46,457 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.addStoredBlock: blockMap updated: 10.105.41.98:50010 is added
> to blk_-3622716618848320575
> 2008-05-09 11:12:22,041 WARN org.apache.hadoop.dfs.StateChange: DIR*
> FSDirectory.unprotectedDelete: failed to remove
> /home/hadoop/tmp1/hadoop-cse/mapred/system/job_200805091110_0001 because
> it does not exist
> 2008-05-09 11:12:22,041 INFO org.apache.hadoop.fs.FSNamesystem: Number of
> transactions: 30 Total time for transactions(ms): 1 Number of syncs: 17
> SyncTimes(ms): 109
> 2008-05-09 11:12:22,086 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.allocateBlock:
> /home/hadoop/tmp1/hadoop-cse/mapred/system/job_200805091110_0001/job.jar.
> blk_4971396424299898755
> 2008-05-09 11:12:22,115 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.addStoredBlock: blockMap updated: 10.105.41.101:50010 is added
> to blk_4971396424299898755
> 2008-05-09 11:12:22,116 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.addStoredBlock: blockMap updated: 10.105.41.98:50010 is added
> to blk_4971396424299898755
> 2008-05-09 11:12:22,122 INFO org.apache.hadoop.fs.FSNamesystem: Increasing
> replication for file
> /home/hadoop/tmp1/hadoop-cse/mapred/system/job_200805091110_0001/job.jar.
> New replication is 10
> 2008-05-09 11:12:22,202 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.allocateBlock:
> /home/hadoop/tmp1/hadoop-cse/mapred/system/job_200805091110_0001/job.split.
> blk_4546830771263681687
> 2008-05-09 11:12:22,207 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.addStoredBlock: blockMap updated: 10.105.41.101:50010 is added
> to blk_4546830771263681687
> 2008-05-09 11:12:22,208 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.addStoredBlock: blockMap updated: 10.105.41.98:50010 is added
> to blk_4546830771263681687
> 2008-05-09 11:12:22,358 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.allocateBlock:
> /home/hadoop/tmp1/hadoop-cse/mapred/system/job_200805091110_0001/job.xml.
> blk_2087945424235891541
> 2008-05-09 11:12:22,363 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.addStoredBlock: blockMap updated: 10.105.41.101:50010 is added
> to blk_2087945424235891541
> 2008-05-09 11:12:22,364 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.addStoredBlock: blockMap updated: 10.105.41.98:50010 is added
> to blk_2087945424235891541
> 2008-05-09 11:15:11,668 INFO org.apache.hadoop.fs.FSNamesystem: Roll Edit
> Log from 10.105.41.101
> 2008-05-09 11:15:11,668 INFO org.apache.hadoop.fs.FSNamesystem: Number of
> transactions: 41 Total time for transactions(ms): 1 Number of syncs: 24
> SyncTimes(ms): 130
> 2008-05-09 11:20:12,032 INFO org.apache.hadoop.fs.FSNamesystem: Roll Edit
> Log from 10.105.41.101
> 2008-05-09 11:25:12,328 INFO org.apache.hadoop.fs.FSNamesystem: Roll Edit
> Log from 10.105.41.101
> 2008-05-09 11:30:12,334 INFO org.apache.hadoop.fs.FSNamesystem: Roll Edit
> Log from 10.105.41.101
> 2008-05-09 11:35:12,341 INFO org.apache.hadoop.fs.FSNamesystem: Roll Edit
> Log from 10.105.41.101
> 2008-05-09 11:40:12,348 INFO org.apache.hadoop.fs.FSNamesystem: Roll Edit
> Log from 10.105.41.101
> 2008-05-09 11:45:12,354 INFO org.apache.hadoop.fs.FSNamesystem: Roll Edit
> Log from 10.105.41.101
> 2008-05-09 11:50:12,362 INFO org.apache.hadoop.fs.FSNamesystem: Roll Edit
> Log from 10.105.41.101
> 2008-05-09 11:55:12,369 INFO org.apache.hadoop.fs.FSNamesystem: Roll Edit
> Log from 10.105.41.101
>
> SLAVE LOGS
>
> *****************************************************************
>                 hadoop-hadoop-tasktracker-slave.log
> *****************************************************************
> /************************************************************
> STARTUP_MSG: Starting TaskTracker
> STARTUP_MSG:   host = mtech-desktop/127.0.1.1
> STARTUP_MSG:   args = []
> ************************************************************/
> 2008-05-09 01:06:55,228 INFO org.mortbay.util.Credential: Checking
> Resource aliases
> 2008-05-09 01:06:55,291 INFO org.mortbay.http.HttpServer: Version
> Jetty/5.1.4
> 2008-05-09 01:06:55,292 INFO org.mortbay.util.Container: Started
> HttpContext[/static,/static]
> 2008-05-09 01:06:55,292 INFO org.mortbay.util.Container: Started
> HttpContext[/logs,/logs]
> 2008-05-09 01:06:55,648 INFO org.mortbay.util.Container: Started
> [EMAIL PROTECTED]
> 2008-05-09 01:06:55,691 INFO org.mortbay.util.Container: Started
> WebApplicationContext[/,/]
> 2008-05-09 01:06:55,694 INFO org.mortbay.http.SocketListener: Started
> SocketListener on 0.0.0.0:50060
> 2008-05-09 01:06:55,694 INFO org.mortbay.util.Container: Started
> [EMAIL PROTECTED]
> 2008-05-09 01:06:55,704 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=TaskTracker, sessionId=null
> 2008-05-09 01:06:55,758 INFO org.apache.hadoop.ipc.Server: IPC Server
> listener on 36527: starting
> 2008-05-09 01:06:55,758 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 0 on 36527: starting
> 2008-05-09 01:06:55,759 INFO org.apache.hadoop.mapred.TaskTracker:
> TaskTracker up at: /127.0.0.1:36527
> 2008-05-09 01:06:55,759 INFO org.apache.hadoop.mapred.TaskTracker:
> Starting tracker tracker_mtech-desktop:/127.0.0.1:36527
> 2008-05-09 01:06:55,761 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 1 on 36527: starting
> 2008-05-09 01:06:55,806 INFO org.apache.hadoop.mapred.TaskTracker:
> Starting thread: Map-events fetcher for all reduce tasks on
> tracker_mtech-desktop:/127.0.0.1:36527
> 2008-05-09 01:09:06,030 INFO org.apache.hadoop.mapred.TaskTracker:
> LaunchTaskAction: task_200805091110_0001_m_000001_0
> 2008-05-09 01:09:06,629 INFO org.apache.hadoop.mapred.TaskTracker:
> LaunchTaskAction: task_200805091110_0001_m_000003_0
> 2008-05-09 01:09:10,711 INFO org.apache.hadoop.mapred.TaskTracker:
> task_200805091110_0001_m_000003_0 1.0%
> hdfs://master:54310/user/hadoop/suvidya/20417.txt:0+674762
> 2008-05-09 01:09:10,712 INFO org.apache.hadoop.mapred.TaskTracker: Task
> task_200805091110_0001_m_000003_0 is done.
> 2008-05-09 01:09:10,793 INFO org.apache.hadoop.mapred.TaskTracker:
> LaunchTaskAction: task_200805091110_0001_m_000004_0
> 2008-05-09 01:09:11,595 INFO org.apache.hadoop.mapred.TaskTracker:
> task_200805091110_0001_m_000001_0 1.0%
> hdfs://master:54310/user/hadoop/suvidya/ulyss12.txt:0+1561677
> 2008-05-09 01:09:11,596 INFO org.apache.hadoop.mapred.TaskTracker: Task
> task_200805091110_0001_m_000001_0 is done.
> 2008-05-09 01:09:11,616 INFO org.apache.hadoop.mapred.TaskTracker:
> LaunchTaskAction: task_200805091110_0001_m_000006_0
> 2008-05-09 01:09:14,839 INFO org.apache.hadoop.mapred.TaskTracker:
> task_200805091110_0001_m_000004_0 1.0%
> hdfs://master:54310/user/hadoop/suvidya/advsh12.txt:0+590093
> 2008-05-09 01:09:14,841 INFO org.apache.hadoop.mapred.TaskTracker: Task
> task_200805091110_0001_m_000004_0 is done.
> 2008-05-09 01:09:14,875 INFO org.apache.hadoop.mapred.TaskTracker:
> task_200805091110_0001_m_000004_0 1.0%
> hdfs://master:54310/user/hadoop/suvidya/advsh12.txt:0+590093
> 2008-05-09 01:09:14,876 INFO org.apache.hadoop.mapred.TaskTracker:
> task_200805091110_0001_m_000004_0 Ignoring status-update since task is
> 'done'
> 2008-05-09 01:09:14,943 INFO org.apache.hadoop.mapred.TaskTracker:
> task_200805091110_0001_m_000006_0 1.0%
> hdfs://master:54310/user/hadoop/suvidya/132.txt:0+343694
> 2008-05-09 01:09:14,944 INFO org.apache.hadoop.mapred.TaskTracker: Task
> task_200805091110_0001_m_000006_0 is done.
>
> *****************************************************************
>                 hadoop-hadoop-datanode-slave.log
> *****************************************************************
> /************************************************************
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = mtech-desktop/127.0.1.1
> STARTUP_MSG:   args = []
> ************************************************************/
> 2008-05-09 01:06:50,795 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=DataNode, sessionId=null
> 2008-05-09 01:06:51,284 INFO org.apache.hadoop.dfs.Storage: Storage
> directory /home/hadoop/tmp1/hadoop-cse/dfs/data is not formatted.
> 2008-05-09 01:06:51,284 INFO org.apache.hadoop.dfs.Storage: Formatting ...
> 2008-05-09 01:06:51,373 INFO org.apache.hadoop.dfs.DataNode: Opened server
> at 50010
> 2008-05-09 01:06:51,709 INFO org.mortbay.util.Credential: Checking
> Resource aliases
> 2008-05-09 01:06:51,815 INFO org.mortbay.http.HttpServer: Version
> Jetty/5.1.4
> 2008-05-09 01:06:51,854 INFO org.mortbay.util.Container: Started
> HttpContext[/static,/static]
> 2008-05-09 01:06:51,854 INFO org.mortbay.util.Container: Started
> HttpContext[/logs,/logs]
> 2008-05-09 01:06:52,531 INFO org.mortbay.util.Container: Started
> [EMAIL PROTECTED]
> 2008-05-09 01:06:52,644 INFO org.mortbay.util.Container: Started
> WebApplicationContext[/,/]
> 2008-05-09 01:06:52,647 INFO org.mortbay.http.SocketListener: Started
> SocketListener on 0.0.0.0:50075
> 2008-05-09 01:06:52,647 INFO org.mortbay.util.Container: Started
> [EMAIL PROTECTED]
> 2008-05-09 01:06:52,708 INFO org.apache.hadoop.dfs.DataNode: New storage
> id DS-2127148821-127.0.1.1-50010-1210275412696 is assigned to data-node
> 10.105.41.98:50010
> 2008-05-09 01:06:52,709 INFO org.apache.hadoop.dfs.DataNode: In
> DataNode.run, data
> FSDataset{dirpath='/home/hadoop/tmp1/hadoop-cse/dfs/data/current'}
> 2008-05-09 01:06:52,709 INFO org.apache.hadoop.dfs.DataNode: using
> BLOCKREPORT_INTERVAL of 3371740msec
> 2008-05-09 01:07:24,909 INFO org.apache.hadoop.dfs.DataNode: Received
> block blk_-3015722797502311964 from /10.105.41.101
> 2008-05-09 01:07:25,200 INFO org.apache.hadoop.dfs.DataNode: Received
> block blk_-1049683426562175211 from /10.105.41.101
> 2008-05-09 01:07:25,342 INFO org.apache.hadoop.dfs.DataNode: Received
> block blk_8092696251576742269 from /10.105.41.101
> 2008-05-09 01:07:25,421 INFO org.apache.hadoop.dfs.DataNode: Received
> block blk_-8254569623864016135 from /10.105.41.101
> 2008-05-09 01:07:25,553 INFO org.apache.hadoop.dfs.DataNode: Received
> block blk_358867731888329836 from /10.105.41.101
> 2008-05-09 01:07:25,858 INFO org.apache.hadoop.dfs.DataNode: Received
> block blk_3681060079121545753 from /10.105.41.101
> 2008-05-09 01:07:25,972 INFO org.apache.hadoop.dfs.DataNode: Received
> block blk_-3622716618848320575 from /10.105.41.101
> 2008-05-09 01:09:01,624 INFO org.apache.hadoop.dfs.DataNode: Received
> block blk_4971396424299898755 from /10.105.41.101
> 2008-05-09 01:09:01,723 INFO org.apache.hadoop.dfs.DataNode: Received
> block blk_4546830771263681687 from /10.105.41.101
> 2008-05-09 01:09:01,879 INFO org.apache.hadoop.dfs.DataNode: Received
> block blk_2087945424235891541 from /10.105.41.101
> 2008-05-09 01:09:06,150 INFO org.apache.hadoop.dfs.DataNode: Served block
> blk_2087945424235891541 to /10.105.41.98
> 2008-05-09 01:09:06,248 INFO org.apache.hadoop.dfs.DataNode: Served block
> blk_4971396424299898755 to /10.105.41.98
> 2008-05-09 01:09:08,190 INFO org.apache.hadoop.dfs.DataNode: Served block
> blk_358867731888329836 to /10.105.41.98
> 2008-05-09 01:09:08,714 INFO org.apache.hadoop.dfs.DataNode: Served block
> blk_-3015722797502311964 to /10.105.41.98
> 2008-05-09 01:09:12,573 INFO org.apache.hadoop.dfs.DataNode: Served block
> blk_8092696251576742269 to /10.105.41.98
> 2008-05-09 01:09:12,986 INFO org.apache.hadoop.dfs.DataNode: Served block
> blk_-8254569623864016135 to /10.105.41.98
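Not a definitive diagnosis, but one thing the quoted slave logs point at: STARTUP_MSG reports `host = mtech-desktop/127.0.1.1` and the TaskTracker comes up at `/127.0.0.1:36527`, i.e. the slave resolves its own hostname to a loopback address, so reducers on other nodes cannot fetch its map output ("Too many fetch-failures"). A minimal sketch of the usual /etc/hosts fix, using temp files for illustration (edit the real /etc/hosts as root; the hostname and 10.105.41.98 are taken from your logs):

```shell
# Hypothetical repro of the /etc/hosts issue suggested by STARTUP_MSG above:
# Debian/Ubuntu installers map the machine's hostname to 127.0.1.1 by
# default, which makes the TaskTracker advertise a loopback address.
cat > /tmp/hosts.bad <<'EOF'
127.0.0.1   localhost
127.0.1.1   mtech-desktop
EOF

# Point the hostname at the real LAN address instead (10.105.41.98 is the
# slave's IP as reported in the datanode log):
sed 's/^127\.0\.1\.1/10.105.41.98/' /tmp/hosts.bad > /tmp/hosts.fixed
grep mtech-desktop /tmp/hosts.fixed
```

After changing the real /etc/hosts on each node, restarting the cluster should make the tracker register with its LAN address instead of 127.0.0.1.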
>
