I can't see anything unusual. These are the logs for attempt_201005032003_0006_m_000000_2:
-----------------------------------------------------------------------------------------------------------------------
2010-05-03 23:05:48,590 INFO org.apache.hadoop.mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_201005032003_0006_m_000000_2 task's state:UNASSIGNED
2010-05-03 23:05:48,590 INFO org.apache.hadoop.mapred.TaskTracker: Trying to launch : attempt_201005032003_0006_m_000000_2
2010-05-03 23:05:48,590 INFO org.apache.hadoop.mapred.TaskTracker: In TaskLauncher, current free slots : 7 and trying to launch attempt_201005032003_0006_m_000000_2
2010-05-03 23:05:49,473 INFO org.apache.hadoop.mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_201005032003_0006_m_1086521438
2010-05-03 23:05:49,473 INFO org.apache.hadoop.mapred.JvmManager: JVM Runner jvm_201005032003_0006_m_1086521438 spawned.
2010-05-03 23:05:50,248 INFO org.apache.hadoop.mapred.TaskTracker: JVM with ID: jvm_201005032003_0006_m_1086521438 given task: attempt_201005032003_0006_m_000000_2
2010-05-03 23:05:56,784 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201005032003_0006_m_000000_2 0.0% hdfs://bunwell.cs.ucl.ac.uk:54310/user/tjambor/temp/userVectors/part-00000:0+6105463
2010-05-03 23:25:59,246 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201005032003_0006_m_000000_2: Task attempt_201005032003_0006_m_000000_2 failed to report status for 1202 seconds. Killing!
2010-05-03 23:25:59,261 INFO org.apache.hadoop.mapred.TaskTracker: Process Thread Dump: lost task
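For reference, the "failed to report status for 1202 seconds. Killing!" line is the TaskTracker's task-timeout check (mapred.task.timeout in 0.20): a task that neither emits output nor signals progress for that long gets killed. In general, a mapper that grinds for a long time between output records would heartbeat through the old-API Reporter, roughly like the sketch below (this is just to illustrate the mechanism; the class and helper names are made up, and the mapper that is actually timing out here belongs to Mahout's RecommenderJob, not to my code):

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Sketch only: a long-running map() in the old org.apache.hadoop.mapred API.
// The relevant part is reporter.progress(), which tells the TaskTracker the
// task is still alive even when no output has been collected yet.
public class LongRunningMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, LongWritable> {

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, LongWritable> output,
                  Reporter reporter) throws IOException {
    // Expensive per-record work that can run for many minutes...
    for (int i = 0; i < 1000; i++) {
      doExpensiveStep(value, i);   // hypothetical placeholder for the real work
      reporter.progress();         // heartbeat: resets the task timeout clock
    }
    output.collect(new Text(value.toString()), new LongWritable(1L));
  }

  private void doExpensiveStep(Text value, int step) {
    // placeholder for the real computation
  }
}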
-----------------------------------------------------------------------------------------
2010-05-03 23:05:48,966 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /128.16.1.46:50010, dest: /128.16.1.46:38302, bytes: 10619777, op: HDFS_READ, cliID: DFSClient_-1485123568, srvID: DS-826409173-128.16.1.46-50010-1272538619979, blockid: blk_7183689076291458667_1382
2010-05-03 23:06:46,099 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /128.16.1.46:50010, dest: /128.16.1.45:39357, bytes: 156864, op: HDFS_READ, cliID: DFSClient_attempt_201005032003_0006_m_000000_1, srvID: DS-826409173-128.16.1.46-50010-1272538619979, blockid: blk_6035162353377502411_1381
2010-05-03 23:11:34,183 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_1615080304667462031_1375
2010-05-03 23:16:52,824 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 8 blocks got processed in 4 msecs
2010-05-03 23:25:59,385 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /128.16.1.46:50010, dest: /128.16.1.46:38307, bytes: 185760, op: HDFS_READ, cliID: DFSClient_attempt_201005032003_0006_m_000000_2, srvID: DS-826409173-128.16.1.46-50010-1272538619979, blockid: blk_6035162353377502411_1381
2010-05-03 23:26:59,811 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /128.16.1.46:50010, dest: /128.16.2.130:35911, bytes: 156864, op: HDFS_READ, cliID: DFSClient_attempt_201005032003_0006_m_000000_3, srvID: DS-826409173-128.16.1.46-50010-1272538619979, blockid: blk_6035162353377502411_1381
2010-05-03 23:27:08,722 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_5895961339651429890_1386 src: /128.16.1.45:40451 dest: /128.16.1.46:50010
2010-05-03 23:27:08,727 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /128.16.1.45:40451, dest: /128.16.1.46:50010, bytes: 7585, op: HDFS_WRITE, cliID: DFSClient_-1971088822, srvID: DS-826409173-128.16.1.46-50010-1272538619979, blockid: blk_5895961339651429890_1386
2010-05-03 23:27:08,727 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_5895961339651429890_1386 terminating
2010-05-03 23:27:17,245 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleting block blk_7183689076291458667_1382 file /usr/local/hadoop-datastore/hadoop-tjambor/dfs/data/current/blk_7183689076291458667
2010-05-03 23:31:38,622 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_1342409871477974276_1375
2010-05-03 23:33:05,378 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_6035162353377502411_1381
---------------------------------------------------------------------------------------
10/05/03 22:45:09 INFO mapred.JobClient: Running job: job_201005032003_0006
10/05/03 22:45:10 INFO mapred.JobClient: map 0% reduce 0%
10/05/03 22:46:01 INFO mapred.JobClient: map 1% reduce 0%
10/05/03 22:46:04 INFO mapred.JobClient: map 2% reduce 0%
10/05/03 22:46:34 INFO mapred.JobClient: map 9% reduce 0%
10/05/03 22:46:37 INFO mapred.JobClient: map 28% reduce 0%
10/05/03 22:46:40 INFO mapred.JobClient: map 50% reduce 0%
10/05/03 22:49:14 INFO mapred.JobClient: map 50% reduce 16%
10/05/03 23:05:48 INFO mapred.JobClient: Task Id : attempt_201005032003_0006_m_000000_0, Status : FAILED
Task attempt_201005032003_0006_m_000000_0 failed to report status for 1200 seconds. Killing!
10/05/03 23:06:53 INFO mapred.JobClient: Task Id : attempt_201005032003_0006_m_000000_1, Status : FAILED
Task attempt_201005032003_0006_m_000000_1 failed to report status for 1201 seconds. Killing!
10/05/03 23:26:06 INFO mapred.JobClient: Task Id : attempt_201005032003_0006_m_000000_2, Status : FAILED
Task attempt_201005032003_0006_m_000000_2 failed to report status for 1202 seconds. Killing!
10/05/03 23:27:09 INFO mapred.JobClient: Job complete: job_201005032003_0006
10/05/03 23:27:09 INFO mapred.JobClient: Counters: 15
10/05/03 23:27:09 INFO mapred.JobClient: Job Counters
10/05/03 23:27:09 INFO mapred.JobClient: Launched reduce tasks=1
10/05/03 23:27:09 INFO mapred.JobClient: Rack-local map tasks=2
10/05/03 23:27:09 INFO mapred.JobClient: Launched map tasks=6
10/05/03 23:27:09 INFO mapred.JobClient: Data-local map tasks=4
10/05/03 23:27:09 INFO mapred.JobClient: Failed map tasks=1
10/05/03 23:27:09 INFO mapred.JobClient: FileSystemCounters
10/05/03 23:27:09 INFO mapred.JobClient: FILE_BYTES_READ=9415218
10/05/03 23:27:09 INFO mapred.JobClient: HDFS_BYTES_READ=6105577
10/05/03 23:27:09 INFO mapred.JobClient: FILE_BYTES_WRITTEN=18151812
10/05/03 23:27:09 INFO mapred.JobClient: Map-Reduce Framework
10/05/03 23:27:09 INFO mapred.JobClient: Combine output records=3338383
10/05/03 23:27:09 INFO mapred.JobClient: Map input records=2971
10/05/03 23:27:09 INFO mapred.JobClient: Spilled Records=6676766
10/05/03 23:27:09 INFO mapred.JobClient: Map output bytes=58177620
10/05/03 23:27:09 INFO mapred.JobClient: Map input bytes=6104511
10/05/03 23:27:09 INFO mapred.JobClient: Combine input records=4848135
10/05/03 23:27:09 INFO mapred.JobClient: Map output records=4848135
Exception in thread "main" java.io.IOException: Job failed!
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1252)
        at org.apache.mahout.cf.taste.hadoop.item.RecommenderJob.run(RecommenderJob.java:132)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.mahout.cf.taste.hadoop.item.RecommenderJob.main(RecommenderJob.java:185)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
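If the map really does need more than the 20 minutes it is currently allowed, the other knob is the timeout itself. A minimal sketch, assuming a driver using the same old JobConf/JobClient API that the stack trace shows (the class name and the omitted job setup are placeholders; the value is illustrative, in milliseconds, and 0 disables the check entirely):

import java.io.IOException;

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

// Sketch only: raise the per-task timeout before submitting the job.
public class MyDriver {
  public static void main(String[] args) throws IOException {
    JobConf conf = new JobConf(MyDriver.class);
    conf.setLong("mapred.task.timeout", 3600000L);  // 60 minutes instead of 20
    // ... configure mapper, reducer, input/output formats and paths as usual ...
    JobClient.runJob(conf);
  }
}

Since the stack trace shows RecommenderJob being run through ToolRunner, the same property should also be settable from the command line as a generic -Dmapred.task.timeout=... option, without touching any code.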