[ http://issues.apache.org/jira/browse/HADOOP-552?page=all ]
Owen O'Malley updated HADOOP-552:
-
Status: Patch Available (was: Open)
> getMapOutput doesn't reliably detect errors and throw to the caller
> -
[ http://issues.apache.org/jira/browse/HADOOP-552?page=all ]
Owen O'Malley updated HADOOP-552:
-
Attachment: size-check.patch
Rewrite of MapOutputLocation.getFile to handle errors better, including
checking the content-length and deleting the partial
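The check described in the patch summary can be sketched roughly as follows. This is a hedged illustration, not the actual patch: the class and method names (SizeCheck, copyAndCheck) are invented, and the real code operates on an HTTP connection's Content-Length header rather than a plain long. The idea is simply to compare bytes received against bytes promised, and to delete the partial file on a mismatch so a truncated transfer fails loudly instead of being used.

```java
import java.io.*;

// Illustrative sketch (names invented): copy a map output to a local
// file, then validate the byte count against the advertised length.
public class SizeCheck {
    // Returns true if the full expected length arrived; on a short (or
    // long) transfer the partial file is deleted and false is returned.
    public static boolean copyAndCheck(InputStream in, File dest,
                                       long expectedLength) throws IOException {
        long received = 0;
        try (OutputStream out = new FileOutputStream(dest)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) > 0) {
                out.write(buf, 0, n);
                received += n;
            }
        }
        if (received != expectedLength) {
            dest.delete();            // drop the partial output
            return false;
        }
        return true;
    }
}
```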
[ http://issues.apache.org/jira/browse/HADOOP-563?page=comments#action_12438813 ]
Sameer Paranjpye commented on HADOOP-563:
-
+1 for the losable (maybe we should call them stale?) leases proposal
> DFS client should try to re-new lease if
[ http://issues.apache.org/jira/browse/HADOOP-489?page=comments#action_12438810 ]
Owen O'Malley commented on HADOOP-489:
--
Certainly, my grand plan includes grep= with context lines, but I
think it is worth doing the first iteration without
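A "grep= with context lines" parameter like the one discussed here would behave much like grep -C: keep every matching line plus a window of lines around it. The sketch below is hypothetical (the class and method names are invented, and the real jsp/servlet would work on a log stream, not a List), but it shows the core filtering logic.

```java
import java.util.*;

// Illustrative sketch (names invented) of pattern filtering with
// context lines, in the spirit of `grep -C`.
public class LogGrep {
    static List<String> grep(List<String> lines, String pattern, int context) {
        boolean[] keep = new boolean[lines.size()];
        for (int i = 0; i < lines.size(); i++) {
            if (lines.get(i).contains(pattern)) {
                // Mark the match and up to 'context' lines on each side.
                for (int j = Math.max(0, i - context);
                     j <= Math.min(lines.size() - 1, i + context); j++) {
                    keep[j] = true;
                }
            }
        }
        List<String> out = new ArrayList<>();
        for (int i = 0; i < lines.size(); i++) {
            if (keep[i]) out.add(lines.get(i));
        }
        return out;
    }
}
```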
[ http://issues.apache.org/jira/browse/HADOOP-513?page=all ]
Owen O'Malley updated HADOOP-513:
-
Attachment: map-out-servlet.patch
This patch replaces the getMapOutput.jsp with a Servlet.
The problem was that the getMapOutput.jsp was closing the output
[ http://issues.apache.org/jira/browse/HADOOP-513?page=all ]
Doug Cutting updated HADOOP-513:
Status: Resolved (was: Patch Available)
Resolution: Fixed
I just committed this. Thanks, Owen!
> IllegalStateException is thrown by TaskTracker
> ---
[ http://issues.apache.org/jira/browse/HADOOP-513?page=all ]
Owen O'Malley updated HADOOP-513:
-
Status: Patch Available (was: In Progress)
Fix Version/s: 0.7.0
> IllegalStateException is thrown by TaskTracker
> ---
[ http://issues.apache.org/jira/browse/HADOOP-423?page=all ]
Doug Cutting updated HADOOP-423:
Status: Resolved (was: Patch Available)
Resolution: Fixed
I just committed this. Thanks, Wendy!
> file paths are not normalized
> ---
[ http://issues.apache.org/jira/browse/HADOOP-489?page=all ]
Sameer Paranjpye updated HADOOP-489:
I'd also like to see a grep= argument to the getUserLogs jsp.
Maybe this should be structured like the namenode and datanode browse jsps. The
getUserLogs j
Upgrade Jetty to 6.x
Key: HADOOP-565
URL: http://issues.apache.org/jira/browse/HADOOP-565
Project: Hadoop
Issue Type: Improvement
Components: mapred
Reporter: Owen O'Malley
Assigned To: Owe
[ http://issues.apache.org/jira/browse/HADOOP-513?page=all ]
Work on HADOOP-513 started by Owen O'Malley.
> IllegalStateException is thrown by TaskTracker
> --
>
> Key: HADOOP-513
> URL: http://issues.apache.org/jira
[ http://issues.apache.org/jira/browse/HADOOP-513?page=all ]
Owen O'Malley reassigned HADOOP-513:
Assignee: Owen O'Malley
> IllegalStateException is thrown by TaskTracker
> --
>
> Key: HADOO
Benjamin Reed wrote:
Split will write the hosts first, so in the JobTracker, when you get the
byte array representing the Split, any fields from the sub class will
follow the Split serialized bytes. The JobTracker can skip the Type in
the bytes representing the serialized Split and then deseriali
Owen O'Malley wrote:
Of course, once we allow user-defined InputSplits we
will be back in exactly the same boat of running user-code on the
JobTracker, unless we also ship over the preferred hosts for each
InputFormat too.
So, to entirely avoid user code in the job tracker we'd need a final
No, even with user defined Splits we don't need to use user code in the
JobTracker if we make Split a Writable class that has the hosts array.
Split will write the hosts first, so in the JobTracker, when you get the
byte array representing the Split, any fields from the sub class will
follow the S
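The framing Benjamin describes in this thread can be sketched concretely. This is a simplified illustration, not the actual Hadoop Split/Writable code: the class and method names are invented, and a real Writable would implement write(DataOutput)/readFields(DataInput). The point is only the wire layout: the host list comes first, so the JobTracker can read the hosts and stop, never touching (or needing the class for) the user-defined bytes that follow.

```java
import java.io.*;

// Illustrative sketch (names invented): a base Split writes its
// preferred hosts first; any subclass fields follow in the stream.
public class SplitFraming {
    // What a base Split.write() might emit: host count, each host,
    // then the user-defined payload last.
    static byte[] serialize(String[] hosts, byte[] subclassFields) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeInt(hosts.length);
        for (String h : hosts) out.writeUTF(h);
        out.write(subclassFields);      // subclass bytes come after the hosts
        return bytes.toByteArray();
    }

    // What the JobTracker would do: read only the hosts, skip the rest,
    // so no user code is ever loaded on the JobTracker.
    static String[] readHosts(byte[] serialized) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(serialized));
        String[] hosts = new String[in.readInt()];
        for (int i = 0; i < hosts.length; i++) hosts[i] = in.readUTF();
        return hosts;                   // subclass bytes left unread
    }
}
```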
On Sep 29, 2006, at 12:20 AM, Benjamin Reed wrote:
Please correct me if I'm reading the code incorrectly, but it seems
like submitJob puts the submitted job on the jobInitQueue which is
immediately dequeued by the JobInitThread and then initTasks() will
get
the file splits and create Tasks
[ http://issues.apache.org/jira/browse/HADOOP-239?page=all ]
Sanjay Dahiya updated HADOOP-239:
-
Attachment: Hadoop-239_2.patch
Here is an updated patch.
> job tracker WI drops jobs after 24 hours
Please correct me if I'm reading the code incorrectly, but it seems
like submitJob puts the submitted job on the jobInitQueue which is
immediately dequeued by the JobInitThread and then initTasks() will get
the file splits and create Tasks. Thus, it doesn't seem like there is
any difference in me
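The hand-off Benjamin is reading out of the code, submitJob enqueuing onto jobInitQueue and a single JobInitThread dequeuing and calling initTasks(), is a standard producer/consumer queue. The sketch below is illustrative only: the names mirror the ones in the thread but the types and structure are invented, and initTasks() is replaced by recording the job id.

```java
import java.util.*;
import java.util.concurrent.*;

// Illustrative sketch (names mirror the discussion, structure invented):
// submitJob produces onto a queue; one init thread consumes and "inits".
public class JobInitSketch {
    private final BlockingQueue<String> jobInitQueue = new LinkedBlockingQueue<>();
    final List<String> initialized = Collections.synchronizedList(new ArrayList<>());

    public void submitJob(String jobId) {
        jobInitQueue.add(jobId);        // producer side: just enqueue
    }

    // Stand-in for the JobInitThread loop: take a job, run initTasks().
    public Thread startInitThread() {
        Thread t = new Thread(() -> {
            try {
                while (true) {
                    String job = jobInitQueue.take();
                    initialized.add(job);   // where initTasks() would run
                }
            } catch (InterruptedException e) {
                // interrupted: shut down the init loop
            }
        });
        t.setDaemon(true);
        t.start();
        return t;
    }
}
```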