Note that yarn.app.mapreduce.client.max-retries only controls how many times the job client retries contacting the ApplicationMaster; it does not change how many task failures the job tolerates. For your case, look at mapreduce.map.failures.maxpercent, which lets a job succeed even when a given percentage of its map tasks fail (there is an equivalent JobConf#setMaxMapTaskFailuresPercent call in the Java API).
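A minimal sketch, assuming an MRv2 job on a stock Hadoop 2.x setup (the 5% threshold is just an illustrative value; check the property name against your version's mapred-default.xml):

```xml
<!-- Job-level setting: allow up to 5% of map tasks to fail
     without failing the whole job. Records processed by the
     failed tasks are simply lost, which the poster said is
     acceptable. -->
<property>
  <name>mapreduce.map.failures.maxpercent</name>
  <value>5</value>
</property>
```

Equivalently, set it per job in code before submission, e.g. job.getConfiguration().setInt("mapreduce.map.failures.maxpercent", 5).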
On 8/18/14, parnab kumar <parnab.2...@gmail.com> wrote:
> Hi All,
>
> I am running a job with between 1300 and 1400 map tasks. Some map
> tasks fail due to errors, and when 4 such maps fail, the job gets
> killed. How can I ignore the failed tasks and go on executing the
> other map tasks? I am okay with losing some data for the failed
> tasks.
>
> Thanks,
> Parnab