[ https://issues.apache.org/jira/browse/HADOOP-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12636566#action_12636566 ]

Amar Kamat commented on HADOOP-4261:
------------------------------------

+1, looks fine to me from the recovery point of view. One general comment
- Can we have an enum in {{JobHistory}} (say {{TaskType}}) and pass a {{TaskType}}
as a parameter instead of {{isCleanup}}, {{isSetup}}, etc.? Can you check whether
that is possible? The reason is that in the future we can simply grow the enum and
make the appropriate calls.
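
A minimal sketch of the suggestion, assuming nothing about the actual patch: a {{TaskType}} enum nested in {{JobHistory}}, passed as a single parameter where the logging calls currently take {{isCleanup}}/{{isSetup}} flags. The method name and signature below ({{logTaskStarted}}) are illustrative, not the real API.

{code:java}
public class JobHistory {

  /** Kind of task being logged; new kinds can be added here without touching call sites. */
  public enum TaskType {
    MAP, REDUCE, JOB_SETUP, JOB_CLEANUP
  }

  /**
   * Illustrative replacement for a call like
   * logStarted(taskId, isCleanup, isSetup, startTime):
   * callers pass a TaskType instead of a set of booleans.
   */
  public static void logTaskStarted(String taskId, TaskType type, long startTime) {
    System.out.println(startTime + " " + type + " task started: " + taskId);
  }
}
{code}

Growing the enum then only means adding a constant and handling it where needed, rather than threading another boolean through every signature.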

> Jobs failing in the init stage will never cleanup
> -------------------------------------------------
>
>                 Key: HADOOP-4261
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4261
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: Amar Kamat
>            Assignee: Amareshwari Sriramadasu
>            Priority: Blocker
>             Fix For: 0.19.0
>
>         Attachments: patch-4261.txt, patch-4261.txt, patch-4261.txt
>
>
> Pre-HADOOP-3150, if the job failed in the init stage, {{job.kill()}} was
> called. This used to make sure that the job was cleaned up w.r.t.:
> - status set to KILLED/FAILED
> - job files deleted from the system dir
> - job history files closed
> - jobtracker made aware of this through {{jobTracker.finalizeJob()}}
> - data structures cleaned up via {{JobInProgress.garbageCollect()}}
> Now if the job fails in the init stage, {{job.fail()}} is called, which doesn't
> do the cleanup. HADOOP-3150 introduces cleanup tasks which are launched once
> the job completes, i.e. is killed/failed/succeeded. The JobTracker will never
> consider this job for scheduling, as the job will remain in the {{PREP}} state
> forever.
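
For context, a self-contained sketch of the cleanup path described above; every class and method name here is hypothetical and only mirrors the listed steps, it is not the actual {{JobInProgress}} code.

{code:java}
class InitFailureCleanupSketch {

  enum RunState { PREP, RUNNING, FAILED, KILLED }

  private RunState state = RunState.PREP;

  /** What the old job.kill() path did, and what job.fail() now skips on init failure. */
  void terminateAfterInitFailure() {
    state = RunState.FAILED;          // status set to KILLED/FAILED
    deleteJobFilesFromSystemDir();    // job files removed from the system dir
    closeJobHistoryFiles();           // job history files closed
    notifyJobTrackerFinalizeJob();    // jobTracker.finalizeJob() equivalent
    garbageCollect();                 // JobInProgress.garbageCollect() equivalent
  }

  private void deleteJobFilesFromSystemDir() { /* remove the job's files under the system dir */ }
  private void closeJobHistoryFiles()        { /* flush and close history writers */ }
  private void notifyJobTrackerFinalizeJob() { /* tell the JobTracker the job is done */ }
  private void garbageCollect()              { /* drop in-memory job structures */ }
}
{code}

Without some equivalent of this path running on init failure, none of the steps happen and the job is left in {{PREP}}.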

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
