[ https://issues.apache.org/jira/browse/HADOOP-1144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12492727 ]
Doug Cutting commented on HADOOP-1144:
--------------------------------------
bq. Christian: could be made configurable separately for mappers and reducers
I agree that it makes sense to have separate parameters for map and reduce,
something like mapred.max.map.failures.percent and
mapred.max.reduce.failures.percent. These should be settable from JobConf.
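For illustration only, a minimal driver sketch showing how such settings might be passed through JobConf; the property names are just the ones proposed above and are not final, and no dedicated setter methods are assumed to exist yet:

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class FailureTolerantJob {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(FailureTolerantJob.class);
        conf.setJobName("failure-tolerant-example");
        // Proposed knobs (names not final): tolerate up to 5% failed map
        // tasks and no failed reduce tasks before failing the whole job.
        conf.setInt("mapred.max.map.failures.percent", 5);
        conf.setInt("mapred.max.reduce.failures.percent", 0);
        // Input/output paths and mapper/reducer classes would be set here
        // as in any other job driver.
        JobClient.runJob(conf);
      }
    }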
bq. Owen: need [..] an interface to determine how many of the maps and reduces
failed.
Could we use counters for this?
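If counters were used, a client could read them back once the job finishes, roughly as sketched below; the counter group and names ("Job Counters", "Failed maps", "Launched maps") are guesses for illustration, not a confirmed interface:

    import org.apache.hadoop.mapred.Counters;
    import org.apache.hadoop.mapred.RunningJob;

    public class FailureReport {
      // Illustrative only: computes the failed-map percentage from counters,
      // assuming the framework were to publish these counts under a
      // "Job Counters" group.
      static double failedMapPercent(RunningJob job) throws java.io.IOException {
        Counters counters = job.getCounters();
        long failed = counters.getGroup("Job Counters").getCounter("Failed maps");
        long launched = counters.getGroup("Job Counters").getCounter("Launched maps");
        return launched == 0 ? 0.0 : 100.0 * failed / launched;
      }
    }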
> Hadoop should allow a configurable percentage of failed map tasks before
> declaring a job failed.
> ------------------------------------------------------------------------------------------------
>
> Key: HADOOP-1144
> URL: https://issues.apache.org/jira/browse/HADOOP-1144
> Project: Hadoop
> Issue Type: Improvement
> Components: mapred
> Affects Versions: 0.12.0
> Reporter: Christian Kunz
> Assigned To: Arun C Murthy
> Fix For: 0.13.0
>
>
> In our environment it can happen that some map tasks fail repeatedly
> because of corrupt input data, which is sometimes non-critical as long as
> the amount is limited. In this case it is annoying that the whole Hadoop job
> fails and cannot be restarted until the corrupt data are identified and
> eliminated from the input. It would be extremely helpful if the job
> configuration allowed specifying how many map tasks are permitted to fail.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.