[ https://issues.apache.org/jira/browse/MAPREDUCE-430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12747514#action_12747514 ]

Arun C Murthy commented on MAPREDUCE-430:
-----------------------------------------

Aaron - I apologize for being unclear and assuming people got the context. 

The distinction arises from the desire to weed out node/hardware failures 
(disk, corrupt RAM/NIC, etc.) as opposed to application errors, so that we can 
quickly and accurately penalize the node (e.g. blacklist the tasktracker). 
Currently we use _all_ task failures uniformly to penalize the tasktracker... 
clearly, penalizing a tasktracker for an application error (e.g. one that 
results in an OOM) is injudicious. Does that make sense?
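
To make that concrete, here is a minimal sketch of the kind of classification 
being proposed. This is not Hadoop's actual blacklisting code; FailureKind, 
Blacklist, recordFailure, and the strike threshold are all hypothetical names 
and values chosen for illustration:

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch, not Hadoop's real code: classify a task failure
// before charging it against a tasktracker's blacklist count.
enum FailureKind { NODE, APPLICATION }

class Blacklist {
  private static final int THRESHOLD = 4;  // assumed strike limit
  private final Map<String, Integer> strikes = new HashMap<String, Integer>();

  // Only node-level failures count toward blacklisting the tracker;
  // application errors (e.g. an OOM in user code) are not charged to it.
  void recordFailure(String tracker, Throwable cause) {
    if (classify(cause) == FailureKind.NODE) {
      Integer prev = strikes.get(tracker);
      int count = (prev == null ? 0 : prev) + 1;
      strikes.put(tracker, count);
      if (count >= THRESHOLD) {
        System.out.println("blacklisting tracker " + tracker);
      }
    }
  }

  // Crude heuristic for illustration: I/O errors suggest a bad disk or
  // NIC (node problem); anything else is treated as an application problem.
  private FailureKind classify(Throwable cause) {
    if (cause instanceof IOException) {
      return FailureKind.NODE;
    }
    return FailureKind.APPLICATION;
  }
}

The point is simply that only node-level failures feed the blacklist counter, 
while application errors are charged to the job rather than the node.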

> Task stuck in cleanup with OutOfMemoryErrors
> --------------------------------------------
>
>                 Key: MAPREDUCE-430
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-430
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>            Reporter: Amareshwari Sriramadasu
>            Assignee: Amar Kamat
>             Fix For: 0.20.1
>
>         Attachments: MAPREDUCE-430-v1.11.patch, 
> MAPREDUCE-430-v1.12-branch-0.20.patch, MAPREDUCE-430-v1.12.patch, 
> MAPREDUCE-430-v1.6-branch-0.20.patch, MAPREDUCE-430-v1.6.patch, 
> MAPREDUCE-430-v1.7.patch, MAPREDUCE-430-v1.8.patch
>
>
> Observed a task with an OutOfMemory error, stuck in cleanup.
