You can find the reason in the JobTracker (JT) logs (e.g. memory or timeout 
restrictions) and adjust the timeout (mapred.task.timeout) or the memory 
parameters accordingly. Refer to 
http://hadoop.apache.org/common/docs/r0.20.0/cluster_setup.html
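For instance, a sketch of what the relevant mapred-site.xml entries might look 
like (the values below are illustrative assumptions, not recommendations; tune 
them to your workload):

  <!-- mapred-site.xml: illustrative values only -->
  <property>
    <name>mapred.task.timeout</name>
    <!-- Milliseconds before a task that reports no progress is killed.
         Default is 600000 (10 min); 0 disables the timeout entirely. -->
    <value>1200000</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <!-- Heap size for each task JVM; raise this if the JT/TT logs show
         tasks dying with out-of-memory errors under load. -->
    <value>-Xmx1024m</value>
  </property>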
Cheers,
/R

On 1/29/10 12:22 PM, "john li" <lij...@gmail.com> wrote:

When Hadoop is running multiple jobs concurrently, that is, when the cluster is
busy, some jobs always have killed tasks, although the jobs succeed in the end.

Can anybody tell me why?

--
Regards
Junyong
