You might want to take a look at the kill command: "hadoop job -kill
<jobid>".

Prashant

On Sun, Jan 29, 2012 at 11:06 PM, praveenesh kumar <praveen...@gmail.com> wrote:

> Is there any way to kill Hadoop jobs that are taking too long to
> execute?
>
> What I want to achieve is: if a job runs longer than
> "_some_predefined_timeout_limit", it should be killed automatically.
>
> Is it possible to achieve this through shell scripts or in any other way?
>
> Thanks,
> Praveenesh
>