In the current stable releases, this is available at the task level, with a
default of 10 minutes of non-responsiveness per task. It is controlled
per-job via "mapred.task.timeout".

There is no built-in feature that lets you monitor and enforce a timeout
on the job execution itself, however (though it should be easy to do
yourself) -- how do you imagine this being useful compared to per-task
timeouts, which help unstick jobs, or eventually fail them, when they are
improperly written (causing them to hang and not report any status for
the timeout period)?
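
An external watchdog along these lines should work, for instance (a rough
sketch only -- the script name, arguments and timeout value are made up;
it relies on the standard "hadoop job -list" and "hadoop job -kill"
commands):

  #!/bin/sh
  # job-timeout.sh: kill a job if it is still running past a wall-clock limit.
  # Usage: ./job-timeout.sh <job_id> <timeout_in_seconds>
  JOB_ID="$1"
  TIMEOUT="$2"

  sleep "$TIMEOUT"

  # If the job still appears in the running-jobs list, kill it.
  if hadoop job -list | grep -q "$JOB_ID"; then
    hadoop job -kill "$JOB_ID"
  fi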

On Mon, Jan 30, 2012 at 12:36 PM, praveenesh kumar <praveen...@gmail.com> wrote:
> Is there any way through which we can kill hadoop jobs that are taking
> too long to execute ?
>
> What I want to achieve is - If some job is running more than
> "_some_predefined_timeout_limit", it should be killed automatically.
>
> Is it possible to achieve this, through shell scripts or any other way ?
>
> Thanks,
> Praveenesh



-- 
Harsh J
Customer Ops. Engineer, Cloudera
