Always have killed or failed tasks in jobs when running multiple jobs concurrently

2010-01-28 Thread john li
When Hadoop runs multiple jobs concurrently, that is, when Hadoop is busy, there are always killed tasks in some of the jobs, although the jobs succeed in the end. Can anybody tell me why? -- Regards, Junyong

Re: Always have killed or failed tasks in jobs when running multiple jobs concurrently

2010-01-28 Thread Wang Xu
On Fri, Jan 29, 2010 at 2:52 PM, john li lij...@gmail.com wrote: When Hadoop runs multiple jobs concurrently, that is, when Hadoop is busy, there are always killed tasks in some of the jobs, although the jobs succeed in the end. Can anybody tell me why? If the tasks are only killed (not failed), don't mind it. The JobTracker schedules idle ...
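The reply above is truncated in the archive; the behavior it starts to describe matches Hadoop's speculative execution, in which the JobTracker schedules a duplicate attempt of a slow task on an idle slot and, once one attempt finishes, kills the redundant one (counted as KILLED, not FAILED), so the job still succeeds. If those extra killed attempts are unwanted, here is a minimal sketch, assuming the Hadoop 0.20 mapred API, of turning speculative execution off; the class name and job setup are placeholders:

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class NoSpeculationJob {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(NoSpeculationJob.class);
    conf.setJobName("no-speculation-example"); // placeholder name
    // Speculative execution races a second attempt of a slow task against the
    // first; whichever finishes first wins and the loser is killed. Disabling
    // it trades straggler mitigation for fewer killed attempts on a busy cluster.
    conf.setMapSpeculativeExecution(false);
    conf.setReduceSpeculativeExecution(false);
    // ... set mapper/reducer classes and input/output paths here ...
    JobClient.runJob(conf);
  }
}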

Re: Always have killed or failed tasks in jobs when running multiple jobs concurrently

2010-01-28 Thread Rekha Joshi
You can find out the reason from the JobTracker (JT) logs (e.g. memory or timeout restrictions) and adjust the timeout (mapred.task.timeout) or the memory parameters accordingly. Refer to http://hadoop.apache.org/common/docs/r0.20.0/cluster_setup.html Cheers, /R On 1/29/10 12:22 PM, john li lij...@gmail.com wrote:
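A minimal sketch, assuming the Hadoop 0.20 configuration API, of adjusting the timeout and memory parameters named above; the values are illustrative, not recommendations:

import org.apache.hadoop.mapred.JobConf;

public class TaskTuning {
  public static JobConf tunedConf() {
    JobConf conf = new JobConf();
    // A task that reports no progress for this many milliseconds is killed;
    // the default is 600000 (10 minutes), and 0 disables the timeout.
    conf.setLong("mapred.task.timeout", 1200000L); // 20 minutes
    // Heap size for each child task JVM (maps and reduces) in 0.20.
    conf.set("mapred.child.java.opts", "-Xmx512m");
    return conf;
  }
}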