Is it possible that using too many mappers causes issues in Hadoop 0.17.1? I
have an input data directory with 100 files in it, and I am running a job that
takes these files as input. When I set "-jobconf mapred.map.tasks=200" in
the job invocation, it seems like some mappers receive "empty" inputs, which
my binary does not handle cleanly. When I leave mapred.map.tasks unset, the
job runs fine, and many mappers are still used because the input files are
already split manually. Can anyone offer an explanation? Has the handling of
this parameter changed between 0.16.4 and 0.17.1?
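
For reference, the invocation looks roughly like this (the paths and binary
names below are placeholders, not the real ones; dropping the last -jobconf
line is what makes the job run cleanly):

    hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-0.17.1-streaming.jar \
        -input  /user/ashish/input_dir \
        -output /user/ashish/output_dir \
        -mapper my_mapper_binary \
        -reducer my_reducer_binary \
        -jobconf mapred.map.tasks=200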
Ashish
