[jira] Updated: (HADOOP-552) getMapOutput doesn't reliably detect errors and throw to the caller

2006-09-29 Thread Owen O'Malley (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-552?page=all ] Owen O'Malley updated HADOOP-552: Status: Patch Available (was: Open)

[jira] Updated: (HADOOP-552) getMapOutput doesn't reliably detect errors and throw to the caller

2006-09-29 Thread Owen O'Malley (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-552?page=all ] Owen O'Malley updated HADOOP-552: Attachment: size-check.patch. Rewrite of MapOutputLocation.getFile to handle errors better, including checking the content-length and deleting the partial ...
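
A minimal sketch of that idea, for illustration only (the class and method names below are invented, and this is not the attached size-check.patch): fetch the map output over HTTP, compare the bytes actually copied against the Content-Length header, and delete the partial local file if anything goes wrong.

    // Illustrative sketch: verify the byte count against Content-Length and
    // clean up the partial file on failure. Hypothetical names throughout.
    import java.io.*;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class MapOutputFetchSketch {
        public static File fetch(URL mapOutputUrl, File localFile) throws IOException {
            HttpURLConnection conn = (HttpURLConnection) mapOutputUrl.openConnection();
            long expected = conn.getContentLengthLong();   // -1 if the header is missing
            long copied = 0;
            try (InputStream in = conn.getInputStream();
                 OutputStream out = new FileOutputStream(localFile)) {
                byte[] buf = new byte[64 * 1024];
                int n;
                while ((n = in.read(buf)) > 0) {
                    out.write(buf, 0, n);
                    copied += n;
                }
                if (expected >= 0 && copied != expected) {
                    throw new IOException("Incomplete map output: got " + copied
                        + " bytes, expected " + expected);
                }
                return localFile;
            } catch (IOException e) {
                // Delete the partial file so a retry starts from a clean slate.
                localFile.delete();
                throw e;
            }
        }
    }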

[jira] Commented: (HADOOP-563) DFS client should try to re-new lease if it gets a lease expiration exception when it adds a block to a file

2006-09-29 Thread Sameer Paranjpye (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-563?page=comments#action_12438813 ] Sameer Paranjpye commented on HADOOP-563: +1 for the losable (maybe we should call them stale?) leases proposal.
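
Purely to illustrate the behaviour being proposed (every name here is an assumption, not the real DFS client API): catch the lease-expiration error, renew the lease, and retry the addBlock call once.

    // Hypothetical sketch of renew-and-retry on a stale lease; not Hadoop code.
    import java.io.IOException;

    class LeaseExpiredException extends IOException {}

    interface LeaseClient {
        void renewLease() throws IOException;
        void addBlock(String path) throws IOException;   // may throw LeaseExpiredException
    }

    class BlockWriter {
        private final LeaseClient client;

        BlockWriter(LeaseClient client) { this.client = client; }

        void addBlockWithRenew(String path) throws IOException {
            try {
                client.addBlock(path);
            } catch (LeaseExpiredException e) {
                // The lease may only be stale on the namenode: renew it and retry
                // once instead of failing the whole write.
                client.renewLease();
                client.addBlock(path);
            }
        }
    }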

[jira] Commented: (HADOOP-489) Seperating user logs from system logs in map reduce

2006-09-29 Thread Owen O'Malley (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-489?page=comments#action_12438810 ] Owen O'Malley commented on HADOOP-489: Certainly, my grand plan includes grep= with context lines, but I think it is worth doing the first iteration without ...

[jira] Updated: (HADOOP-513) IllegalStateException is thrown by TaskTracker

2006-09-29 Thread Owen O'Malley (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-513?page=all ] Owen O'Malley updated HADOOP-513: Attachment: map-out-servlet.patch. This patch replaces the getMapOutput.jsp with a Servlet. The problem was that the getMapOutput.jsp was closing the output ...
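
As a rough sketch of why a servlet helps here (this is not the attached map-out-servlet.patch; the parameter name and file lookup are made up): a plain servlet streams the bytes itself and keeps control of the response OutputStream, whereas a JSP goes through the page-writer machinery.

    // Illustrative servlet that streams a map output file; hypothetical names.
    import java.io.*;
    import javax.servlet.ServletException;
    import javax.servlet.http.*;

    public class MapOutputServletSketch extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            String mapId = req.getParameter("map");       // which map task's output
            File outputFile = resolveOutputFile(mapId);   // hypothetical lookup

            resp.setContentType("application/octet-stream");
            resp.setContentLength((int) outputFile.length());
            try (InputStream in = new FileInputStream(outputFile);
                 OutputStream out = resp.getOutputStream()) {
                byte[] buf = new byte[64 * 1024];
                int n;
                while ((n = in.read(buf)) > 0) {
                    out.write(buf, 0, n);
                }
            }
        }

        private File resolveOutputFile(String mapId) {
            // Placeholder: map the request parameter to a local map output file.
            return new File("/tmp/mapred/local", mapId + ".out");
        }
    }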

[jira] Updated: (HADOOP-513) IllegalStateException is thrown by TaskTracker

2006-09-29 Thread Doug Cutting (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-513?page=all ] Doug Cutting updated HADOOP-513: Status: Resolved (was: Patch Available) Resolution: Fixed. I just committed this. Thanks, Owen!

[jira] Updated: (HADOOP-513) IllegalStateException is thrown by TaskTracker

2006-09-29 Thread Owen O'Malley (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-513?page=all ] Owen O'Malley updated HADOOP-513: Status: Patch Available (was: In Progress) Fix Version/s: 0.7.0

[jira] Updated: (HADOOP-423) file paths are not normalized

2006-09-29 Thread Doug Cutting (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-423?page=all ] Doug Cutting updated HADOOP-423: Status: Resolved (was: Patch Available) Resolution: Fixed. I just committed this. Thanks, Wendy!

[jira] Updated: (HADOOP-489) Seperating user logs from system logs in map reduce

2006-09-29 Thread Sameer Paranjpye (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-489?page=all ] Sameer Paranjpye updated HADOOP-489: I'd also like to see a grep= argument to the getUserLogs jsp. Maybe this should be structured like the namenode and datanode browse jsps. The getUserLogs jsp ...
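
For illustration, a small filter showing what a grep= parameter with leading context lines could do; the class, method, and parameter names are assumptions, not the actual getUserLogs page.

    // Illustrative grep-with-context filter; only lines *before* a match are
    // kept as context, to keep the sketch short. Hypothetical names.
    import java.io.*;
    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.regex.Pattern;

    public class LogGrepSketch {
        public static void grep(File logFile, String regex, int context, PrintWriter out)
                throws IOException {
            Pattern pattern = Pattern.compile(regex);
            Deque<String> before = new ArrayDeque<>();   // sliding window of prior lines
            try (BufferedReader reader = new BufferedReader(new FileReader(logFile))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    if (pattern.matcher(line).find()) {
                        for (String prev : before) {
                            out.println(prev);           // context lines before the match
                        }
                        before.clear();
                        out.println(line);               // the matching line itself
                    } else {
                        before.addLast(line);
                        if (before.size() > context) {
                            before.removeFirst();
                        }
                    }
                }
            }
        }
    }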

[jira] Created: (HADOOP-565) Upgrade Jetty to 6.x

2006-09-29 Thread Owen O'Malley (JIRA)
Upgrade Jetty to 6.x Key: HADOOP-565 URL: http://issues.apache.org/jira/browse/HADOOP-565 Project: Hadoop Issue Type: Improvement Components: mapred Reporter: Owen O'Malley Assigned To: Owen O'Malley

[jira] Work started: (HADOOP-513) IllegalStateException is thrown by TaskTracker

2006-09-29 Thread Owen O'Malley (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-513?page=all ] Work on HADOOP-513 started by Owen O'Malley.

[jira] Assigned: (HADOOP-513) IllegalStateException is thrown by TaskTracker

2006-09-29 Thread Owen O'Malley (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-513?page=all ] Owen O'Malley reassigned HADOOP-513: Assignee: Owen O'Malley

Re: Creating splits/tasks at the client

2006-09-29 Thread Doug Cutting
Benjamin Reed wrote: Split will write the hosts first, so in the JobTracker, when you get the byte array representing the Split, any fields from the subclass will follow the Split serialized bytes. The JobTracker can skip the Type in the bytes representing the serialized Split and then deserialize ...

Re: Creating splits/tasks at the client

2006-09-29 Thread Doug Cutting
Owen O'Malley wrote: Of course, once we allow user-defined InputSplits we will be back in exactly the same boat of running user code on the JobTracker, unless we also ship over the preferred hosts for each InputFormat too. So, to entirely avoid user code in the job tracker we'd need a final ...

Re: Creating splits/tasks at the client

2006-09-29 Thread Benjamin Reed
No, even with user-defined Splits we don't need to use user code in the JobTracker if we make Split a Writable class that has the hosts array. Split will write the hosts first, so in the JobTracker, when you get the byte array representing the Split, any fields from the subclass will follow the Split serialized bytes ...
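
A sketch of the layout being proposed (the classes below are simplified stand-ins, not the real Hadoop types): the base Split serializes its host array first, so the JobTracker can read the locations from the byte array without ever loading the user's subclass.

    // Illustrative only: Writable-style write/readFields that put the hosts
    // first; the subclass's fields follow and can stay opaque to the JobTracker.
    import java.io.*;

    class Split {
        String[] hosts = new String[0];

        // Subclasses append their own fields *after* calling super.write().
        public void write(DataOutput out) throws IOException {
            out.writeInt(hosts.length);
            for (String h : hosts) {
                out.writeUTF(h);
            }
        }

        public void readFields(DataInput in) throws IOException {
            hosts = new String[in.readInt()];
            for (int i = 0; i < hosts.length; i++) {
                hosts[i] = in.readUTF();
            }
        }
    }

    class JobTrackerSide {
        // The JobTracker needs only the hosts; the trailing subclass bytes can
        // be handed to the task untouched.
        static String[] readHosts(byte[] serializedSplit) throws IOException {
            DataInputStream in =
                new DataInputStream(new ByteArrayInputStream(serializedSplit));
            Split base = new Split();
            base.readFields(in);   // consumes only the host prefix
            return base.hosts;
        }
    }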

Re: Creating splits/tasks at the client

2006-09-29 Thread Owen O'Malley
On Sep 29, 2006, at 12:20 AM, Benjamin Reed wrote: Please correct me if I'm reading the code incorrectly, but it seems like submitJob puts the submitted job on the jobInitQueue which is immediately dequeued by the JobInitThread and then initTasks() will get the file splits and create Tasks ...

[jira] Updated: (HADOOP-239) job tracker WI drops jobs after 24 hours

2006-09-29 Thread Sanjay Dahiya (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-239?page=all ] Sanjay Dahiya updated HADOOP-239: Attachment: Hadoop-239_2.patch. Here is an updated patch.

Re: Creating splits/tasks at the client

2006-09-29 Thread Benjamin Reed
Please correct me if I'm reading the code incorrectly, but it seems like submitJob puts the submitted job on the jobInitQueue, which is immediately dequeued by the JobInitThread, and then initTasks() will get the file splits and create Tasks. Thus, it doesn't seem like there is any difference in me ...
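
A minimal sketch of the flow described in this message (submitJob, jobInitQueue, JobInitThread and initTasks are the names used above; the surrounding classes are simplified stand-ins, not the real JobTracker):

    // Simplified stand-in for the submit/init hand-off described above.
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    class JobInProgressSketch {
        void initTasks() {
            // In the real code this reads the file splits and creates the Tasks.
        }
    }

    class JobTrackerSketch {
        private final BlockingQueue<JobInProgressSketch> jobInitQueue =
            new LinkedBlockingQueue<>();

        JobTrackerSketch() {
            Thread jobInitThread = new Thread(() -> {
                try {
                    while (true) {
                        // Dequeued almost immediately after submission, as noted above.
                        jobInitQueue.take().initTasks();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }, "JobInitThread");
            jobInitThread.setDaemon(true);
            jobInitThread.start();
        }

        void submitJob(JobInProgressSketch job) {
            jobInitQueue.add(job);   // hand the job off to the init thread
        }
    }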