Hi,

So, it seems that when you kill a Hadoop streaming job, it doesn't
kill the underlying processes; it only stops the job from consuming new
input. In the case of a long-running input (say, someone not using
streaming as they probably should), this is less than ideal. Is there
any way to quickly kill the job without ssh'ing into the machines
running the task?
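
For what it's worth, here's a minimal sketch (not Hadoop-specific) of the behavior I mean: killing a parent process does not automatically kill its children, which get reparented and keep running. The `sleep 30` stands in for a long-running streaming task.

```python
import os
import signal
import subprocess
import time

# Start a shell (the "parent") that backgrounds a long-running child
# and prints the child's PID so we can check on it later.
parent = subprocess.Popen(
    ["/bin/sh", "-c", "sleep 30 & echo $!; wait"],
    stdout=subprocess.PIPE,
)
child_pid = int(parent.stdout.readline())

# Kill the parent, the way a job kill stops the controlling process.
parent.terminate()
parent.wait()
time.sleep(0.5)

# The child is orphaned but still alive: signal 0 probes without killing.
try:
    os.kill(child_pid, 0)
    child_alive = True
except OSError:
    child_alive = False

# Clean up the orphan by hand -- the manual step I'd like to avoid.
os.kill(child_pid, signal.SIGKILL)
```

After the parent dies, `child_alive` is still true, which is exactly the situation on the task machines.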

Thanks,
David Hall
