[ https://issues.apache.org/jira/browse/MESOS-4999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15206443#comment-15206443 ]
Sergey Galkin commented on MESOS-4999:
--------------------------------------

The tasks stuck in RUNNING are a Marathon issue. I stopped Marathon, deleted the framework in Mesos, deleted /marathon/state/framework:id from ZooKeeper, and started Marathon again; Marathon created a new framework and 10 new tasks, but I don't see the applications in Marathon (a rough sketch of the recovery commands is at the end of this message):

{code:bash}
╰─➤ curl -v http://172.20.8.34:8080/env-testing/marathon/v2/apps
*   Trying 172.20.8.34...
* Connected to 172.20.8.34 (172.20.8.34) port 8080 (#0)
> GET /env-testing/marathon/v2/apps HTTP/1.1
> Host: 172.20.8.34:8080
> User-Agent: curl/7.47.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.4.6 (Ubuntu)
< Date: Tue, 22 Mar 2016 14:26:17 GMT
< Content-Type: application/json; qs=2
< Transfer-Encoding: chunked
< Connection: keep-alive
< Cache-Control: no-cache, no-store, must-revalidate
< X-Marathon-Leader: http://172.20.9.51:8080
< Expires: 0
< Pragma: no-cache
< X-Marathon-Via: 1.1 172.20.9.50:8080
<
* Connection #0 to host 172.20.8.34 left intact
{"apps":[]}
{code}

> Mesos (or Marathon) lost tasks
> ------------------------------
>
>                 Key: MESOS-4999
>                 URL: https://issues.apache.org/jira/browse/MESOS-4999
>             Project: Mesos
>          Issue Type: Bug
>    Affects Versions: 0.27.2
>         Environment: mesos - 0.27.0
> marathon - 0.15.2
> 189 mesos slaves with Ubuntu 14.04.2 on HP ProLiant DL380 Gen9,
> CPU - 2 x Intel(R) Xeon(R) CPU E5-2680 v3 @2.50GHz (48 cores (with hyperthreading)),
> RAM - 264G,
> Storage - 3.0T on RAID on HP Smart Array P840 Controller,
> HDD - 12 x HP EH0600JDYTL,
> Network - 2 x Intel Corporation Ethernet 10G 2P X710
>            Reporter: Sergey Galkin
>         Attachments: mesos-nodes.png
>
> After many create/delete cycles of applications with Docker instances through the Marathon API, I have a lot of lost tasks after the last *deletion of all applications in Marathon*.
> They fall into three types:
> 1. Tasks stuck in STAGED status. I don't see these tasks in 'docker ps' on the slave, and _service docker restart_ on the Mesos slave did not fix them.
> 2. Tasks stuck in RUNNING because Docker hangs and cannot remove the instances (stdout fills with
> {code}
> Killing docker task
> Shutting down
> Killing docker task
> Shutting down
> {code}
> ); _docker stop ID_ hangs, and these tasks can be fixed by _service docker restart_ on the Mesos slave.
> 3. Tasks still RUNNING after _service docker restart_ on the Mesos slave.
> Screenshot attached.
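For reference, the recovery described in the comment above looks roughly like the sketch below. The master address, ZooKeeper host, and framework ID are placeholders, not values recorded from this cluster, and on older Mesos releases the teardown endpoint was exposed as /master/shutdown instead of /master/teardown.

{code:bash}
# Sketch of the recovery steps; <mesos-master>, <zk-host>, and
# <old-framework-id> are placeholders, not values from this cluster.

# 1. Stop Marathon so it cannot re-register while the state is cleaned up.
sudo service marathon stop

# 2. Tear down the stale framework on the Mesos master.
curl -X POST http://<mesos-master>:5050/master/teardown \
     -d 'frameworkId=<old-framework-id>'

# 3. Delete the stored framework ID from ZooKeeper so Marathon
#    registers as a brand-new framework on the next start.
zkCli.sh -server <zk-host>:2181 delete /marathon/state/framework:id

# 4. Start Marathon again; it registers a new framework.
sudo service marathon start
{code}

Note that after these steps the new framework starts empty, which matches the {{"apps":[]}} response shown in the curl output above.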