Re: [jira] Commented: (HADOOP-181) task trackers should not restart for having a late heartbeat

2006-08-10 Thread Eric Baldeschwieler
(RESEND, MEANT TO ATTACH THE COMMENT BELOW TO THIS POSTING) Why don't we include documenting this as part of the "map-reduce walk-through" sprint item? - On reintegrating lost task trackers... It does seem to me like we should do this, but we need to make sure we reason through how

Re: [jira] Commented: (HADOOP-181) task trackers should not restart for having a late heartbeat

2006-08-10 Thread Eric Baldeschwieler
Why don't we include documenting this as part of the "map-reduce walk-through" sprint item? - Oh, a whole thread can be had on this, I'm sure! Why does one turn off speculative execution? Presumably because a MAP has unmanaged side-effects? But... the framework will still rerun jobs
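
For context, the usual way to turn speculative execution off for such a job is per-job configuration. The sketch below assumes the "mapred.speculative.execution" property from the hadoop-default.xml of this era; the class name and job name are made up for illustration.

    import org.apache.hadoop.mapred.JobConf;

    public class NoSpeculationJob {
      public static void main(String[] args) {
        // Hypothetical job setup: class and job name are illustrative only.
        JobConf conf = new JobConf(NoSpeculationJob.class);
        conf.setJobName("side-effecting-maps");
        // Disable speculative execution so the framework does not launch a
        // second, parallel attempt of a map whose side effects are unmanaged.
        // Note: a failed attempt can still be re-run, as Eric points out.
        conf.set("mapred.speculative.execution", "false");
        // ... set mapper class, input/output paths, then JobClient.runJob(conf)
      }
    }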

[jira] Commented: (HADOOP-181) task trackers should not restart for having a late heartbeat

2006-08-10 Thread eric baldeschwieler (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-181?page=comments#action_12427373 ] eric baldeschwieler commented on HADOOP-181: On reintegrating lost task trackers... It does seem to me like we should do this, but we need to make sure

[jira] Commented: (HADOOP-181) task trackers should not restart for having a late heartbeat

2006-08-10 Thread eric baldeschwieler (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-181?page=comments#action_12427371 ] eric baldeschwieler commented on HADOOP-181: Oh, a whole thread can be had on this, I'm sure! Why does one turn off speculative execution? Presumably be

[jira] Commented: (HADOOP-263) task status should include timestamps for when a job transitions

2006-08-10 Thread Owen O'Malley (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-263?page=comments#action_12427353 ] Owen O'Malley commented on HADOOP-263: A couple of points: 1. I'd express the start times as absolute times and the others as relative times in hours, min
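
To make the suggested display concrete, here is a small illustrative sketch (not the actual TaskStatus or JSP code) that prints a start time as an absolute timestamp and a later transition as an hours/minutes/seconds offset:

    import java.text.SimpleDateFormat;
    import java.util.Date;

    public class TransitionTimes {
      // Absolute wall-clock time for the start of a phase.
      static String absolute(long millis) {
        return new SimpleDateFormat("d-MMM-yyyy HH:mm:ss").format(new Date(millis));
      }
      // Offset from the start time, shown as hours/minutes/seconds.
      static String relative(long startMillis, long eventMillis) {
        long secs = (eventMillis - startMillis) / 1000;
        return secs / 3600 + "h " + (secs % 3600) / 60 + "m " + secs % 60 + "s";
      }
      public static void main(String[] args) {
        long start = System.currentTimeMillis();
        long finish = start + 5 * 60 * 1000;   // pretend the task ran five minutes
        System.out.println("started:  " + absolute(start));
        System.out.println("finished: " + relative(start, finish) + " after start");
      }
    }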

[jira] Created: (HADOOP-446) Startup sanity data directory check in main loop.

2006-08-10 Thread Benjamin Reed (JIRA)
Startup sanity data directory check in main loop. Key: HADOOP-446 URL: http://issues.apache.org/jira/browse/HADOOP-446 Project: Hadoop Issue Type: Bug Components: dfs Affects Ve
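
The idea is to repeat the startup-time data-directory check inside the daemon's main loop. A rough, purely illustrative sketch (not the DataNode code) of such a check:

    import java.io.File;

    public class DataDirCheck {
      // The same existence/writability test done at startup, repeated so a
      // directory that disappears later is noticed too.
      static void checkDataDir(File dataDir) {
        if (!dataDir.isDirectory() || !dataDir.canWrite()) {
          throw new IllegalStateException(
              "data directory " + dataDir + " is missing or not writable");
        }
      }

      public static void main(String[] args) throws InterruptedException {
        File dataDir = new File(args.length > 0 ? args[0] : "/tmp/dfs/data");
        while (true) {               // stand-in for the daemon's main loop
          checkDataDir(dataDir);     // fail fast instead of serving from a bad dir
          Thread.sleep(60 * 1000);   // re-check roughly once a minute
        }
      }
    }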

[jira] Created: (HADOOP-445) Parallel data/socket writing for DFSOutputStream

2006-08-10 Thread Benjamin Reed (JIRA)
Parallel data/socket writing for DFSOutputStream Key: HADOOP-445 URL: http://issues.apache.org/jira/browse/HADOOP-445 Project: Hadoop Issue Type: Improvement Affects Versions: 0.5.0

[jira] Commented: (HADOOP-442) slaves file should include an 'exclude' section, to prevent "bad" datanodes and tasktrackers from disrupting a cluster

2006-08-10 Thread Bryan Pendleton (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-442?page=comments#action_12427334 ] Bryan Pendleton commented on HADOOP-442: I too have had problems like this over time. Another way to deal with this might be to hand out a new random ID to
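
For reference, an 'exclude' section could work roughly along these lines. The sketch below is illustrative only; the exclude-file format and the class are made up, not an existing Hadoop API:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.HashSet;
    import java.util.Set;

    public class ExcludeList {
      private final Set<String> excluded = new HashSet<String>();

      // Load one host name per line; blank lines and #-comments are ignored.
      public ExcludeList(String path) throws IOException {
        BufferedReader in = new BufferedReader(new FileReader(path));
        try {
          String line;
          while ((line = in.readLine()) != null) {
            line = line.trim();
            if (line.length() > 0 && !line.startsWith("#")) {
              excluded.add(line);
            }
          }
        } finally {
          in.close();
        }
      }

      // Would be consulted before accepting a datanode/tasktracker registration.
      public boolean isExcluded(String hostname) {
        return excluded.contains(hostname);
      }
    }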

[jira] Commented: (HADOOP-181) task trackers should not restart for having a late heartbeat

2006-08-10 Thread Devaraj Das (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-181?page=comments#action_12427327 ] Devaraj Das commented on HADOOP-181: Doug, does it make sense to do what is done in this patch only when speculative execution is on? > task trackers should n

[jira] Created: (HADOOP-444) In streaming with a NONE reducer, you get duplicate files if a mapper fails, is restarted, and succeeds next time.

2006-08-10 Thread Dick King (JIRA)
In streaming with a NONE reducer, you get duplicate files if a mapper fails, is restarted, and succeeds next time. Key: HADOOP-444 URL: http://issue

[jira] Updated: (HADOOP-436) Concluding that the Map task failed may not be always right in getMapOutput.jsp

2006-08-10 Thread Devaraj Das (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-436?page=all ] Devaraj Das updated HADOOP-436: Attachment: getmapout.patch Attached is the patch. > Concluding that the Map task failed may not be always right in > getMapOutput.jsp

[jira] Assigned: (HADOOP-436) Concluding that the Map task failed may not be always right in getMapOutput.jsp

2006-08-10 Thread Devaraj Das (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-436?page=all ] Devaraj Das reassigned HADOOP-436: Assignee: Devaraj Das > Concluding that the Map task failed may not be always right in > getMapOutput.jsp

[jira] Updated: (HADOOP-263) task status should include timestamps for when a job transitions

2006-08-10 Thread Sanjay Dahiya (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-263?page=all ] Sanjay Dahiya updated HADOOP-263: Status: Patch Available (was: Open) > task status should include timestamps for when a job transitions

[jira] Updated: (HADOOP-263) task status should include timestamps for when a job transitions

2006-08-10 Thread Sanjay Dahiya (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-263?page=all ] Sanjay Dahiya updated HADOOP-263: Attachment: patch.txt Updated the display; now showing start/finish times in separate columns. > task status should include timestamps for when a job transi

[jira] Created: (HADOOP-443) optionally ignore a bad entry in namenode state when starting up

2006-08-10 Thread Yoram Arnon (JIRA)
optionally ignore a bad entry in namenode state when starting up Key: HADOOP-443 URL: http://issues.apache.org/jira/browse/HADOOP-443 Project: Hadoop Issue Type: Improvement
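
The proposed behaviour, sketched very roughly (this is not the actual FSImage/FSEditLog code, and the ignoreBadEntries switch is hypothetical): with the option off, a bad entry still aborts startup; with it on, the entry is logged and skipped.

    import java.io.DataInputStream;
    import java.io.EOFException;
    import java.io.IOException;

    public class TolerantStateLoader {
      boolean ignoreBadEntries;   // hypothetical switch, e.g. set from a config key

      void load(DataInputStream in) throws IOException {
        while (true) {
          try {
            readOneEntry(in);               // apply one entry of namenode state
          } catch (EOFException eof) {
            break;                          // clean end of the file
          } catch (IOException bad) {
            if (!ignoreBadEntries) {
              throw bad;                    // today's behaviour: refuse to start
            }
            // With the option on, log and move past the bad entry. A real
            // implementation would also need a way to resynchronise on the
            // next record boundary.
            System.err.println("ignoring bad entry: " + bad.getMessage());
          }
        }
      }

      void readOneEntry(DataInputStream in) throws IOException {
        in.readInt();   // placeholder for the real per-entry parsing
      }
    }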

[jira] Created: (HADOOP-442) slaves file should include an 'exclude' section, to prevent "bad" datanodes and tasktrackers from disrupting a cluster

2006-08-10 Thread Yoram Arnon (JIRA)
slaves file should include an 'exclude' section, to prevent "bad" datanodes and tasktrackers from disrupting a cluster Key: HADOOP-442 URL: ht

[jira] Commented: (HADOOP-181) task trackers should not restart for having a late heartbeat

2006-08-10 Thread Yoram Arnon (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-181?page=comments#action_12427126 ] Yoram Arnon commented on HADOOP-181: Eric, we don't get many lost task trackers in our sort benchmark any more, so I don't expect a significant improvement. I