This is a bug in 0.21. MAPREDUCE-1905 (https://issues.apache.org/jira/browse/MAPREDUCE-1905) is open for this.
On 9/21/10 4:29 PM, "Marc Sturlese" <marc.sturl...@gmail.com> wrote:

I am using hadoop 0.21. I have a reducer task which takes more time to finish than mapreduce.task.timeout, so it is being killed:

    Task attempt_201009211103_0001_r_000000_0 failed to report status for 602 seconds. Killing!

I have implemented a thread which is supposed to send progress and update the status with an incremented counter, but it does not seem to be working; the attempt is killed anyway. I have tried an even simpler example: not using a thread, but creating an infinite loop in the reducer which updates the status and sends progress on each iteration... but the attempt still keeps being killed:

    @Override
    public void reduce(Text keyName, Iterable<Text> paths, Context context)
            throws IOException, InterruptedException {
        while (true) {
            context.getCounter(COUNTER_ADS.total_ads).increment(1L);
            context.setStatus("" + context.getCounter(COUNTER_ADS.total_ads));
            context.progress();
        }
        context.write(new Text("done!"), NullWritable.get());
    }

I have even tried to use TaskInputOutputContext instead of the plain Context:

    @Override
    public void reduce(Text keyName, Iterable<Text> paths, Context context)
            throws IOException, InterruptedException {
        TaskInputOutputContext tac = (TaskInputOutputContext) context;
        while (true) {
            tac.getCounter(COUNTER_ADS.total_ads).increment(1L);
            tac.setStatus("" + context.getCounter(COUNTER_ADS.total_ads));
            tac.progress();
        }
        context.write(new Text("done!"), NullWritable.get());
    }

Can anyone tell me what else I could try, or what I am doing wrong? I am really stuck on this problem and have no idea what else to do... Thanks in advance

--
View this message in context: http://lucene.472066.n3.nabble.com/can-not-report-progress-from-reducer-context-with-hadoop-0-21-tp1534700p1534700.html
Sent from the Hadoop lucene-users mailing list archive at Nabble.com.
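For reference, the background "heartbeat thread" pattern the poster describes can be sketched generically in plain Java, with no Hadoop dependencies: a daemon thread periodically invokes a progress callback. The class name ProgressHeartbeat, the 50 ms period, and the AtomicInteger counter standing in for context.progress() are all illustrative assumptions, not Hadoop API; in a real task the Runnable would call the reducer's context.progress() (which, per MAPREDUCE-1905, did not take effect in 0.21 regardless of how it was invoked).

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of a heartbeat thread: a daemon scheduler fires a
// user-supplied callback at a fixed period while the main thread does
// long-running work. In a Hadoop task the callback would wrap
// context.progress(); here a counter stands in so the sketch is runnable.
public class ProgressHeartbeat {
    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "progress-heartbeat");
            t.setDaemon(true); // never block JVM exit
            return t;
        });

    public void start(Runnable reportProgress, long periodMillis) {
        scheduler.scheduleAtFixedRate(reportProgress, 0, periodMillis,
                                      TimeUnit.MILLISECONDS);
    }

    public void stop() {
        scheduler.shutdownNow();
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicInteger pings = new AtomicInteger(); // stand-in for context.progress()
        ProgressHeartbeat hb = new ProgressHeartbeat();
        hb.start(pings::incrementAndGet, 50);
        Thread.sleep(300); // simulate one slow reduce() call
        hb.stop();
        System.out.println("heartbeats sent: " + pings.get());
    }
}
```

The daemon flag matters: if the worker thread dies, a non-daemon heartbeat would keep the JVM (and a task that looks "alive") running forever.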