Hi Adam,
that's exactly what happened. Thanks a lot for the explanation and
suggestion. Now Mesos is clean again :)
On 03/31/16 03:51, Adam Bordelon wrote:
I suspect that after your maintenance operation, Marathon may have
registered with a new frameworkId and launched its own copies of your tasks
(why you see double). However, the old Marathon frameworkId probably has a
failover_timeout of a week, so it will continue to be considered
"registered", but
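One way to get rid of such a stale registration is the master's /master/teardown HTTP endpoint (available in Mesos 0.25), which takes a POST with the framework ID to shut down. A minimal sketch below; the master address and framework ID are hypothetical placeholders, and the request is built but not actually sent unless you uncomment the last call:

```python
# Sketch: tear down a stale framework registration via the Mesos master's
# /master/teardown endpoint. The host/port and framework ID used in __main__
# are hypothetical placeholders.
from urllib import request, parse

def build_teardown_request(master, framework_id):
    """Return a POST request asking the master to tear down framework_id."""
    data = parse.urlencode({"frameworkId": framework_id}).encode()
    return request.Request("http://%s/master/teardown" % master, data=data)

if __name__ == "__main__":
    # Placeholder values; substitute your master and the old Marathon frameworkId.
    req = build_teardown_request("mesos-master.example:5050",
                                 "20160330-000000-0-0000-0001")
    # Uncomment to actually send the teardown request:
    # request.urlopen(req)
    print(req.full_url)
```

Once the old framework is torn down, the master kills its tasks, so there is no need to kill the orphaned task PIDs by hand.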
Hi haosdent,
thanks for your reply. It is actually very weird; this is the first time I
have seen this situation in around a year of using Mesos.
I am pasting here the truncated output you asked for. It shows one
of the tasks with "Failed" state under "Active tasks":
{
"executor_id": "",
> "Active tasks" with status "Failed"
A bit weird here. According to my test, it should exist under "Completed
Tasks". If possible, could you show your /master/state endpoint result? I
think the frameworks node in the state response would be helpful for
analyzing the problem.
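To inspect that frameworks node, you can fetch /master/state and group tasks by framework ID; two Marathon registrations each carrying copies of the same app would then be obvious. A small sketch below; the `state` dict is a tiny hand-made sample mimicking the endpoint's structure (real output is much larger):

```python
# Sketch: summarize the "frameworks" node of the /master/state JSON.
# The sample `state` dict is illustrative, not real endpoint output.
import json

def tasks_by_framework(state):
    """Map framework id -> list of (task name, task state) for active tasks."""
    out = {}
    for fw in state.get("frameworks", []):
        out[fw["id"]] = [(t["name"], t["state"]) for t in fw.get("tasks", [])]
    return out

state = {
    "frameworks": [
        {"id": "fw-new", "tasks": [{"name": "app", "state": "TASK_RUNNING"}]},
        {"id": "fw-old", "tasks": [{"name": "app", "state": "TASK_FAILED"}]},
    ]
}

print(json.dumps(tasks_by_framework(state), indent=2))
```

Against a live master you would feed it the parsed JSON from `http://<master>:5050/master/state` instead of the sample dict.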
On Wed, Mar 30, 2016 at 6:26 PM,
Hi all,
after maintenance carried out on a Mesos cluster (0.25) using Marathon
(0.10) as the only scheduler, I ended up with double the tasks for
each application, but Marathon was recognizing only half of them.
To get rid of these orphaned tasks, I did a "kill PID" on them,
so th