[ https://issues.apache.org/jira/browse/MESOS-6026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15419779#comment-15419779 ]

Benjamin Mahler commented on MESOS-6026:
----------------------------------------

Took a look with [~vinodkone] and [~kaysoky]; it appears the race is that the 
second terminal update (TASK_FAILED) in this case races past the first one 
(TASK_FINISHED) because in {{_statusUpdate}} it does not have to wait for 
{{containerizer->update()}} to release the resources (which is a bug! resources 
should be released before we notify the world that the task is terminal):

https://github.com/apache/mesos/blob/a536cef67d2f250f60eb2f991e62402dae0590a1/src/slave/slave.cpp#L3457-L3496
{code}
void Slave::_statusUpdate(
    StatusUpdate update,
    const Option<process::UPID>& pid,
    const ExecutorID& executorId,
    const Future<ContainerStatus>& future)
{
  ...

  executor->updateTaskState(status);

  // Handle the task appropriately if it is terminated.
  // TODO(vinod): Revisit these semantics when we disallow duplicate
  // terminal updates (e.g., when slave recovery is always enabled).
  if (protobuf::isTerminalState(status.state()) &&
      (executor->queuedTasks.contains(status.task_id()) ||
       executor->launchedTasks.contains(status.task_id()))) {
    executor->terminateTask(status.task_id(), status);

    // XXX: TASK_FINISHED has to wait for the containerizer
    // to update the resources.
    containerizer->update(executor->containerId, executor->resources)
      .onAny(defer(self(),
                   &Slave::__statusUpdate,
                   lambda::_1,
                   update,
                   pid,
                   executor->id,
                   executor->containerId,
                   executor->checkpoint));
  } else {
    // XXX: TASK_FAILED races through directly to __statusUpdate!
    __statusUpdate(None(),
                   update,
                   pid,
                   executor->id,
                   executor->containerId,
                   executor->checkpoint);
  }
{code}
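
To make the ordering concrete, here is a minimal standalone libprocess sketch 
(everything in it -- the {{RaceProcess}} class, its members, and the printed 
strings -- is illustrative, not Mesos code) showing how a continuation deferred 
behind a future loses to a direct call on the same process, just like 
TASK_FINISHED vs. TASK_FAILED above:

{code}
// race.cpp -- illustrative sketch only, not Mesos code.
#include <iostream>
#include <string>

#include <process/defer.hpp>
#include <process/dispatch.hpp>
#include <process/future.hpp>
#include <process/process.hpp>

#include <stout/lambda.hpp>
#include <stout/nothing.hpp>

using process::defer;
using process::Future;
using process::Promise;

class RaceProcess : public process::Process<RaceProcess>
{
public:
  void run()
  {
    // "TASK_FINISHED" path: completion is deferred until `update` is
    // satisfied, so it gets re-enqueued onto this process's event queue.
    update.future().onAny(
        defer(self(), &RaceProcess::complete, lambda::_1, "TASK_FINISHED"));

    // "TASK_FAILED" path: runs synchronously and overtakes the above.
    complete(Nothing(), "TASK_FAILED");

    update.set(Nothing());
  }

  void complete(const Future<Nothing>&, const std::string& state)
  {
    std::cout << "Completing " << state << std::endl;
  }

private:
  Promise<Nothing> update; // Stands in for `containerizer->update()`.
};


int main()
{
  RaceProcess process;
  process::spawn(process);

  // Prints "Completing TASK_FAILED" before "Completing TASK_FINISHED":
  // the deferred continuation is enqueued behind the direct call.
  process::dispatch(process, &RaceProcess::run).await();

  process::terminate(process, false); // Drain queued events first.
  process::wait(process);
  return 0;
}
{code}

Because {{defer}} turns the continuation into a dispatch onto the process's 
event queue, any path that calls {{__statusUpdate}} synchronously in the 
meantime gets to run first.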

We should fix this race. Probably the simplest fix is to drop double-terminal 
updates (notice the TODO above), log a warning when we do, and still ACK them 
in case some executors retry these updates indefinitely.
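
Roughly, the drop might slot into {{_statusUpdate}} like this (a sketch only: 
it assumes {{executor->terminatedTasks}} is how we would detect the duplicate, 
and {{acknowledgeStatusUpdate()}} is a hypothetical helper standing in for the 
existing acknowledgement path):

{code}
  // Sketch: before executor->updateTaskState(status).
  if (protobuf::isTerminalState(status.state()) &&
      executor->terminatedTasks.contains(status.task_id())) {
    LOG(WARNING) << "Dropping duplicate terminal update " << update
                 << " for task " << status.task_id()
                 << " of framework " << update.framework_id();

    // Still ACK the update so that executors which retry terminal
    // updates indefinitely can make progress.
    // NOTE: `acknowledgeStatusUpdate` is hypothetical; a real change
    // would reuse the existing acknowledgement path.
    acknowledgeStatusUpdate(update, pid);
    return;
  }
{code}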

> Tasks mistakenly marked as FAILED due to race b/w 
> sendExecutorTerminatedStatusUpdate() and _statusUpdate()
> ------------------------------------------------------------------------------------------------------------------
>
>                 Key: MESOS-6026
>                 URL: https://issues.apache.org/jira/browse/MESOS-6026
>             Project: Mesos
>          Issue Type: Bug
>          Components: slave
>            Reporter: Kapil Arya
>              Labels: mesosphere
>
> Tasks are mistakenly marked as FAILED due to a race between 
> sendExecutorTerminatedStatusUpdate() and _statusUpdate() that happens when 
> the task has just finished and the executor is exiting.
> Here is an example of slave log messages:
> {code}
> Aug 10 21:32:53 ip-10-10-0-205 mesos-slave[20413]: I0810 21:32:53.959374 20418 slave.cpp:3211] Handling status update TASK_FINISHED (UUID: fd79d0bd-4ece-41dc-bced-b93491f6bb2e) for task 291 of framework 340dfe26-a09f-4857-85b8-faba5f8d95df-0008 from executor(1)@10.10.0.205:53504
> Aug 10 21:32:53 ip-10-10-0-205 mesos-slave[20413]: I0810 21:32:53.959604 20418 slave.cpp:3732] executor(1)@10.10.0.205:53504 exited
> Aug 10 21:32:53 ip-10-10-0-205 mesos-slave[20413]: I0810 21:32:53.959643 20418 slave.cpp:4089] Executor '291' of framework 340dfe26-a09f-4857-85b8-faba5f8d95df-0008 exited with status 0
> Aug 10 21:32:53 ip-10-10-0-205 mesos-slave[20413]: I0810 21:32:53.959744 20418 slave.cpp:3211] Handling status update TASK_FAILED (UUID: b94722fb-1658-4936-b604-6d642ffe20a0) for task 291 of framework 340dfe26-a09f-4857-85b8-faba5f8d95df-0008 from @0.0.0.0:0
> {code}
> As can be seen, the task is marked TASK_FAILED after the executor has 
> exited, even though it had already reached TASK_FINISHED.


