[ 
https://issues.apache.org/jira/browse/HADOOP-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12636583#action_12636583
 ] 

Amar Kamat commented on HADOOP-4261:
------------------------------------

bq. Having a separate enum for TaskType will break the existing checks, such as 
Task.get(Keys.TASK_TYPE).equals(Values.MAP.name())
What I meant was to pass an enum instead of booleans. What goes into the job 
history stays the same, and hence what gets retrieved also stays the same. 
Something like
{code}
// before
public static void logStarted(TaskAttemptID taskAttemptId, long startTime,
                              String trackerName, int httpPort,
                              boolean isCleanup, boolean isSetup) {}

// after
public static void logStarted(TaskAttemptID taskAttemptId, long startTime,
                              String trackerName, int httpPort,
                              TaskType type) {}
{code}
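To be concrete, a minimal sketch of what such a TaskType enum could look like 
(the enum does not exist in trunk yet; the constant names are deliberately 
chosen to match the existing JobHistory.Values entries so that type.name() 
writes exactly the strings the booleans used to produce):
{code}
// Hypothetical sketch: constant names line up with Values.MAP, Values.REDUCE,
// Values.CLEANUP and Values.SETUP, so the job history format is unchanged.
public enum TaskType {
  MAP, REDUCE, CLEANUP, SETUP
}
{code}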
and 
{code}
// before
isCleanup ? Values.CLEANUP.name()
          : isSetup ? Values.SETUP.name()
                    : Values.MAP.name(),

// after
type.name()
{code}
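At the call site in JobInProgress the two booleans would then collapse into a 
single value, along these lines (only a sketch, not the actual patch; the 
local flag names are made up):
{code}
// Hypothetical call site: pick the enum constant once in the JIP and pass it
// through, instead of threading two booleans into the history logger.
TaskType type = isCleanupTask ? TaskType.CLEANUP
              : isSetupTask   ? TaskType.SETUP
              : TaskType.MAP;
logStarted(taskAttemptId, startTime, trackerName, httpPort, type);
{code}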

But you could also do something like
{code}
// alternative: keep a plain String parameter
public static void logStarted(TaskAttemptID taskAttemptId, long startTime,
                              String trackerName, int httpPort,
                              String taskType) {}
{code}
and pass the appropriate task type from the JIP.
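In that case the string would be resolved up front in the JIP, for example 
(again just a sketch, with made-up flag names):
{code}
// Hypothetical: JobInProgress picks the string itself and the JobHistory
// signature keeps a plain String parameter.
String taskType = isCleanupTask ? Values.CLEANUP.name()
                : isSetupTask   ? Values.SETUP.name()
                : Values.MAP.name();
logStarted(taskAttemptId, startTime, trackerName, httpPort, taskType);
{code}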

But if this turns out to be a big change then we can do it in HADOOP-4122 as 
Amareshwari suggested.

> Jobs failing in the init stage will never cleanup
> -------------------------------------------------
>
>                 Key: HADOOP-4261
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4261
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: Amar Kamat
>            Assignee: Amareshwari Sriramadasu
>            Priority: Blocker
>             Fix For: 0.19.0
>
>         Attachments: patch-4261.txt, patch-4261.txt, patch-4261.txt
>
>
> Pre HADOOP-3150, if the job fails in the init stage, {{job.kill()}} was 
> called. This used to make sure that the job was cleaned up w.r.t.:
> - status set to KILLED/FAILED
> - job files from the system dir are deleted
> - closing of job history files
> - making jobtracker aware of this through {{jobTracker.finalizeJob()}}
> - cleaning up the data structures via {{JobInProgress.garbageCollect()}}
> Now if the job fails in the init stage, {{job.fail()}} is called, which doesn't 
> do the cleanup. HADOOP-3150 introduces cleanup tasks which are launched once 
> the job completes, i.e. is killed/failed/succeeded. The JobTracker will never 
> consider this job for scheduling as the job will be in the {{PREP}} state 
> forever.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
