I think TaskInProgress takes care of cleanup after a job is killed; it has a 
cleanup flow for killed/failed jobs as well as successful ones. You might also 
like to look into JobTracker, TaskTracker, and HeartbeatResponse.
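
For cleaning up your own resources inside the task, one defensive pattern is to 
pair close() with a JVM shutdown hook, since close() is only reached on the 
normal path. The following is just a minimal sketch against the old 
org.apache.hadoop.mapred API in 0.19.x; the scratch file and the 
releaseResources() helper are hypothetical placeholders, and a shutdown hook is 
best-effort only (it will not run if the child process is terminated forcibly).

import java.io.IOException;
import java.io.RandomAccessFile;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class CleanupMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, LongWritable> {

  // Hypothetical external resource that must be released.
  private RandomAccessFile scratch;
  private Thread cleanupHook;

  public void configure(JobConf job) {
    try {
      scratch = new RandomAccessFile("scratch.tmp", "rw");
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
    // Best-effort safety net: runs when the task JVM exits normally or on a
    // polite kill, but not if the process is killed outright.
    cleanupHook = new Thread() {
      public void run() {
        releaseResources();
      }
    };
    Runtime.getRuntime().addShutdownHook(cleanupHook);
  }

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, LongWritable> output,
                  Reporter reporter) throws IOException {
    output.collect(value, key);
  }

  // Normal cleanup path; only reached by successful task attempts.
  public void close() throws IOException {
    Runtime.getRuntime().removeShutdownHook(cleanupHook);
    releaseResources();
  }

  private synchronized void releaseResources() {
    if (scratch == null) {
      return;
    }
    try {
      scratch.close();
    } catch (IOException ignored) {
      // nothing more we can do on the way out
    }
    scratch = null;
  }
}

Whether the hook actually fires on a Windows cluster depends on how the 
TaskTracker kills the child process, so the framework-side flow above 
(TaskInProgress and friends) is still the authoritative place to look.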

Cheers,
/R

On 1/20/10 10:45 AM, "#YONG YONG CHENG#" <aarnc...@pmail.ntu.edu.sg> wrote:

Good Day,

I am using Hadoop 0.19.1 on a Windows cluster. Everything works fine, but I 
have a programming question.

For a successful map task, the close() method of the map task is invoked.
But for a failed or killed map task, close() is not invoked.

I arrived at this finding by putting a println statement in the close() method 
of the map task.
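
For example, something along these lines (the class and key/value types here 
are only illustrative):

public class TestMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, LongWritable> {

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, LongWritable> output,
                  Reporter reporter) throws IOException {
    output.collect(value, key);
  }

  public void close() throws IOException {
    // Appears in the task logs only for successful attempts.
    System.out.println("close() called for this map task");
  }
}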

My question is: how do I clean up the resources used by a map task if it fails 
or is killed?

Thanks,
