Hi,

I've been running Terasort on Hadoop-2.0.4.

Every run, a small number of map tasks (4 or 5) fail because their
containers run beyond the virtual memory limit.

I've set mapreduce.map.memory.mb to a safe value (2560MB), and most
TaskAttempts succeed with it, but the failed maps report the default
1024MB instead.
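
For reference, the override was applied in mapred-site.xml roughly like
this (the property names are the standard Hadoop 2.x ones; the java.opts
value is just an assumed heap cap below the container size):

```xml
<!-- mapred-site.xml: per-map container memory request -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2560</value>
</property>
<!-- assumed: keep the map JVM heap below the container request -->
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx2048m</value>
</property>
```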

My question is: why do a small number of containers get the default
memory value rather than the user-configured one?

Any thoughts ?

Thanks,
Manu Zhang
