That's true: if the scheduler waits until the control task is RUNNING before
doing anything else, this problem goes away. There's then also no need to rely
on the order in which tasks are launched on the executor.
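To illustrate the gating idea, here is a minimal sketch of scheduler-side logic that defers a kill until the task has been reported RUNNING. The class and method names (GatedScheduler, request_kill, FakeDriver) are illustrative, not the real Mesos scheduler API:

```python
# Hypothetical sketch: a scheduler that queues a kill request until the
# control task has been reported RUNNING, so the kill cannot race ahead
# of the launch.

TASK_RUNNING = "TASK_RUNNING"

class GatedScheduler:
    def __init__(self, driver):
        self.driver = driver
        self.states = {}          # task_id -> last reported state
        self.pending_kills = set()

    def status_update(self, task_id, state):
        self.states[task_id] = state
        # Flush any kill that was deferred while the task was still staging.
        if state == TASK_RUNNING and task_id in self.pending_kills:
            self.pending_kills.discard(task_id)
            self.driver.kill_task(task_id)

    def request_kill(self, task_id):
        if self.states.get(task_id) == TASK_RUNNING:
            self.driver.kill_task(task_id)
        else:
            self.pending_kills.add(task_id)

class FakeDriver:
    """Stand-in for the driver, recording kills for demonstration."""
    def __init__(self):
        self.killed = []
    def kill_task(self, task_id):
        self.killed.append(task_id)
```

A kill requested before the RUNNING update arrives is simply held back and delivered once the status update comes in.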
Thanks everyone!
On Tue, Sep 30, 2014 at 5:51 PM, Benjamin Mahler
Thanks Vinod. I missed that issue when searching!
I did consider sending a shutdown task, though my worry was that there may be
cases where the task might not launch, perhaps due to resource starvation
and/or no offers being received. Presumably it would not be correct to store
the original
Why can't the executor just commit suicide if all running tasks are killed?
If you're simultaneously launching two tasks for each executor, you'll only see
this race if you kill very quickly after launching. Your scheduler is informed
when both tasks are running as well, so that could gate the
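The executor-suicide idea can be sketched roughly as follows: the executor tracks its live tasks and stops itself once the last one is killed. This is a hypothetical illustration, not the real ExecutorDriver interface; all names here are made up:

```python
# Hypothetical sketch: an executor that commits suicide once all of its
# running tasks have been killed, removing the need for an explicit
# shutdownExecutor() driver call.

class SelfTerminatingExecutor:
    def __init__(self, driver):
        self.driver = driver
        self.running = set()

    def launch_task(self, task_id):
        self.running.add(task_id)

    def kill_task(self, task_id):
        self.running.discard(task_id)
        # Nothing left to run: shut the executor down.
        if not self.running:
            self.driver.stop()

class FakeDriver:
    """Stand-in for the driver, recording whether stop() was called."""
    def __init__(self):
        self.stopped = False
    def stop(self):
        self.stopped = True
```

With two tasks launched, killing one leaves the executor alive; killing the second triggers the shutdown.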
Hi,
I've been making some modifications to the Hadoop framework recently and
have come up against a brick wall. I'm wondering if the concept of killing
an executor from a framework has been discussed before?
Currently we are launching two tasks for each Hadoop TaskTracker, one that
has a bit of
Adding a shutdownExecutor() driver call has been discussed before.
https://issues.apache.org/jira/browse/MESOS-330
As a workaround, have you considered sending a special kill task as a
signal to the executor to commit suicide?
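The suggested workaround might look something like the sketch below: the scheduler launches a specially named "poison pill" task, and the executor treats it as a shutdown signal rather than real work. The sentinel name and all class names are assumptions for illustration only:

```python
# Hypothetical sketch of the kill-task workaround: a task with an agreed
# sentinel name tells the executor to shut down instead of doing work.

SUICIDE_TASK_NAME = "__shutdown__"   # assumed sentinel, agreed out of band

class SignalAwareExecutor:
    def __init__(self, driver):
        self.driver = driver
        self.launched = []

    def launch_task(self, task_id, task_name):
        if task_name == SUICIDE_TASK_NAME:
            # The special task is a signal, not work: stop the executor.
            self.driver.stop()
            return
        self.launched.append(task_id)  # placeholder for real TaskTracker work

class FakeDriver:
    """Stand-in for the driver, recording whether stop() was called."""
    def __init__(self):
        self.stopped = False
    def stop(self):
        self.stopped = True
```

One caveat with this approach, as raised earlier in the thread, is that the sentinel task itself may never launch if no suitable offers arrive.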
On Mon, Sep 29, 2014 at 5:27 PM, Tom Arnfeld t...@duedil.com