Hi,
Please check the values of mapreduce.map.maxattempts and
mapreduce.reduce.maxattempts. If you'd like to tolerate the error only
in specific jobs, you can use the -D option to change the
configuration as follows:
bin/hadoop jar job.jar -Dmapreduce.map.maxattempts=10
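If you want the change cluster-wide rather than per job, the same keys could instead go in mapred-site.xml. A minimal sketch, with illustrative values (not recommendations):

```xml
<!-- mapred-site.xml: raise the per-task attempt limits (values are illustrative) -->
<configuration>
  <property>
    <name>mapreduce.map.maxattempts</name>
    <value>10</value>
  </property>
  <property>
    <name>mapreduce.reduce.maxattempts</name>
    <value>10</value>
  </property>
</configuration>
```

Note that for -D to be picked up on the command line, the job driver has to parse generic options (e.g. via ToolRunner).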
Thanks,
- Tsuyoshi
Check the parameter yarn.app.mapreduce.client.max-retries.
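That property could also be set in mapred-site.xml; a sketch, with an illustrative value (check mapred-default.xml for your version's default):

```xml
<property>
  <name>yarn.app.mapreduce.client.max-retries</name>
  <!-- illustrative value; controls client retries to the MR ApplicationMaster -->
  <value>5</value>
</property>
```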
On 8/18/14, parnab kumar wrote:
> Hi All,
>
> I am running a job where there are between 1300-1400 map tasks. Some
> map tasks fail due to some error. When 4 such maps fail the job naturally
> gets killed. How to ignore the failed tasks and go around executing the
> other map tasks?
Hi All,
I am running a job where there are between 1300-1400 map tasks. Some
map tasks fail due to some error. When 4 such maps fail the job naturally
gets killed. How to ignore the failed tasks and go around executing the
other map tasks? I am okay with losing some data for the failed tasks.