Hi,
I am trying to get a DC/OS job to run a Docker container on a Windows agent
machine. DC/OS was installed using the AWS CloudFormation template here:
https://downloads.dcos.io/dcos/stable/aws.html (single master).
The Windows agent is started with:
C:\mesos\mesos\build\src\mesos-agent.exe --attributes=os:windows
--containerizers=docker,mesos --hostname=10.19.10.206 --ip=10.19.10.206
--master=zk://10.22.1.94:2181/mesos --work_dir=c:\mesos\work_dir
--launcher_dir=c:\mesos\mesos\build\src --log_dir=c:\mesos\logs

And a simple job works, created via dcos.activestate.com -> Job -> New:
{
  "id": "mywindowstest01",
  "labels": {},
  "run": {
    "cpus": 0.01,
    "mem": 128,
    "disk": 0,
    "cmd": "C:\\Windows\\System32\\cmd.exe /c echo helloworld > c:\\mesos\\work_dir\\helloworld2",
    "env": {},
    "placement": {
      "constraints": [
        {
          "attribute": "os",
          "operator": "EQ",
          "value": "windows"
        }
      ]
    },
    "artifacts": [],
    "maxLaunchDelay": 3600,
    "volumes": [],
    "restart": {
      "policy": "NEVER"
    }
  },
  "schedules": []
}

Running it creates c:\mesos\work_dir\helloworld2 as expected.

The Windows agent has Docker CE installed and is set to run Windows containers
(I tried Linux containers as well and hit the same problem, but for the
purpose of this question let's stick to Windows containers).
I confirmed that it is possible to run a Windows container manually, directly on
Windows 10, by starting a PowerShell as Administrator and running:

docker run -ti microsoft/windowsservercore
docker run microsoft/windowsservercore

Both commands create a new container (verified with "docker ps"; for the first
command I also get a cmd.exe shell inside the container).
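One more check that may be worth doing here (my own suggestion, not something already verified above): the Mesos agent talks to Docker over a Windows named pipe (`--docker_socket="//./pipe/docker_engine"` in the agent's startup flags, visible in the log further down), so it can be useful to confirm Docker answers on that exact pipe, not just via the default CLI context:

```shell
# Sanity check, run in an elevated PowerShell on the agent:
# address the Docker engine explicitly over the same named pipe
# the Mesos agent is configured to use.
docker -H npipe:////./pipe/docker_engine version
docker -H npipe:////./pipe/docker_engine info
```

If these fail while plain `docker run` works, the agent and the CLI are talking to different engines (e.g. after switching between Linux and Windows container modes).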
Now the problem: trying to run a container from DC/OS does not work:

dcos job add a.json

with the json:

{
  "id": "myattempt11",
  "labels": {},
  "run": {
    "env": {},
    "cpus": 1.00,
    "mem": 512,
    "disk": 1000,
    "placement": {
      "constraints": [
        {
          "attribute": "os",
          "operator": "EQ",
          "value": "windows"
        }
      ]
    },
    "artifacts": [],
    "maxLaunchDelay": 3600,
    "docker": {
      "image": "microsoft/windowsservercore"
    },
    "restart": {
      "policy": "NEVER"
    }
  },
  "schedules": []
}
Submitting and running it:

# dcos job add a.json
# dcos job run myattempt11
Run ID: 20180202203339zVpxc
The log of the Mesos agent on Windows shows activity but not much information
about the problem (see TASK_FAILED near the end):
Log file created at: 2018/02/02 12:52:47
Running on machine: DESKTOP-JJK06UJ
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0202 12:52:47.330880  8388 logging.cpp:201] INFO level logging started!
I0202 12:52:47.335886  8388 main.cpp:365] Build: 2017-12-20 23:35:42 UTC by Anne S Bell
I0202 12:52:47.335886  8388 main.cpp:366] Version: 1.5.0
I0202 12:52:47.337895  8388 main.cpp:373] Git SHA: 327726d3c7272806c8f3c3b7479758c26e55fd43
I0202 12:52:47.358888  8388 resolver.cpp:69] Creating default secret resolver
I0202 12:52:47.574883  8388 containerizer.cpp:304] Using isolation { windows/cpu, filesystem/windows, windows/mem, environment_secret }
I0202 12:52:47.577883  8388 provisioner.cpp:299] Using default backend 'copy'
I0202 12:52:47.596886  3348 slave.cpp:262] Mesos agent started on (1)@10.19.10.206:5051
I0202 12:52:47.597883  3348 slave.cpp:263] Flags at startup: --appc_simple_discovery_uri_prefix="http://" --appc_store_dir="C:\Users\activeit\AppData\Local\Temp\mesos\store\appc" --attributes="os:windows" --authenticate_http_readonly="false" --authenticate_http_readwrite="false" --authenticatee="crammd5" --authentication_backoff_factor="1secs" --authorizer="local" --container_disk_watch_interval="15secs" --containerizers="docker,mesos" --default_role="*" --disk_watch_interval="1mins" --docker="docker" --docker_kill_orphans="true" --docker_registry="https://registry-1.docker.io" --docker_remove_delay="6hrs" --docker_socket="//./pipe/docker_engine" --docker_stop_timeout="0ns" --docker_store_dir="C:\Users\activeit\AppData\Local\Temp\mesos\store\docker" --docker_volume_checkpoint_dir="/var/run/mesos/isolators/docker/volume" --enforce_container_disk_quota="false" --executor_registration_timeout="1mins" --executor_reregistration_timeout="2secs" --executor_shutdown_grace_period="5secs" --fetcher_cache_dir="C:\Users\activeit\AppData\Local\Temp\mesos\fetch" --fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1weeks" --gc_disk_headroom="0.1" --hadoop_home="" --help="false" --hostname="10.19.10.206" --hostname_lookup="true" --http_command_executor="false" --http_heartbeat_interval="30secs" --initialize_driver_logging="true" --ip="10.19.10.206" --isolation="windows/cpu,windows/mem" --launcher="windows" --launcher_dir="c:\mesos\mesos\build\src" --log_dir="c:\mesos\logs" --logbufsecs="0" --logging_level="INFO" --master="zk://10.22.1.94:2181/mesos" --max_completed_executors_per_framework="150" --oversubscribed_resources_interval="15secs" --port="5051" --qos_correction_interval_min="0ns" --quiet="false" --reconfiguration_policy="equal" --recover="reconnect" --recovery_timeout="15mins" --registration_backoff_factor="1secs" --runtime_dir="C:\ProgramData\mesos\runtime" --sandbox_directory="C:\mesos\sandbox" --strict="true" --version="false" --work_dir="c:\mesos\work_dir" --zk_session_timeout="10secs"
I0202 12:52:47.604887  3348 slave.cpp:612] Agent resources: [{"name":"cpus","scalar":{"value":4.0},"type":"SCALAR"},{"name":"mem","scalar":{"value":15290.0},"type":"SCALAR"},{"name":"disk","scalar":{"value":470301.0},"type":"SCALAR"},{"name":"ports","ranges":{"range":[{"begin":31000,"end":32000}]},"type":"RANGES"}]
I0202 12:52:47.725885  3348 slave.cpp:620] Agent attributes: [ os=windows ]
I0202 12:52:47.727886  3348 slave.cpp:629] Agent hostname: 10.19.10.206
I0202 12:52:47.735886  7652 task_status_update_manager.cpp:181] Pausing sending task status updates
I0202 12:52:47.738890  4052 group.cpp:341] Group process (zookeeper-group(1)@10.19.10.206:5051) connected to ZooKeeper
I0202 12:52:47.739887  4052 group.cpp:831] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I0202 12:52:47.740885  4052 group.cpp:419] Trying to create path '/mesos' in ZooKeeper
I0202 12:52:47.773885  5168 state.cpp:66] Recovering state from 'c:\mesos\work_dir\meta'
E0202 12:52:47.773885  3348 slave.cpp:1009] Failed to attach 'c:\mesos\logs\mesos-agent.exe.INFO' to virtual path '/slave/log': Failed to get realpath of 'c:\mesos\logs\mesos-agent.exe.INFO': Failed to get attributes for file 'c:\mesos\logs\mesos-agent.exe.INFO': The system cannot find the file specified.
I0202 12:52:47.774884  5168 state.cpp:724] No committed checkpointed resources found at 'c:\mesos\work_dir\meta\resources\resources.info'
I0202 12:52:47.779883  5168 state.cpp:110] Failed to find the latest agent from 'c:\mesos\work_dir\meta'
I0202 12:52:47.781888  3528 task_status_update_manager.cpp:207] Recovering task status update manager
I0202 12:52:47.782883  3348 docker.cpp:890] Recovering Docker containers
I0202 12:52:47.782883  7652 containerizer.cpp:674] Recovering containerizer
I0202 12:52:47.807888  3768 provisioner.cpp:495] Provisioner recovery complete
I0202 12:52:47.891667  5168 detector.cpp:152] Detected a new leader: (id='1171')
I0202 12:52:47.892666  7652 group.cpp:700] Trying to get '/mesos/json.info_0000001171' in ZooKeeper
I0202 12:52:47.970657  5168 zookeeper.cpp:262] A new leading master (UPID=master@10.22.1.94:5050) is detected
I0202 12:52:48.011252  7652 slave.cpp:6776] Finished recovery
I0202 12:52:48.020246  3768 task_status_update_manager.cpp:181] Pausing sending task status updates
I0202 12:52:48.020246  7652 slave.cpp:1055] New master detected at master@10.22.1.94:5050
I0202 12:52:48.021251  7652 slave.cpp:1099] No credentials provided. Attempting to register without authentication
I0202 12:52:48.023254  7652 slave.cpp:1110] Detecting new master
I0202 12:52:48.330085  4052 slave.cpp:1275] Registered with master master@10.22.1.94:5050; given agent ID a0664e60-846a-42d0-9586-cf97e997eba3-S0
I0202 12:52:48.331082  5168 task_status_update_manager.cpp:188] Resuming sending task status updates
I0202 12:52:48.348086  4052 slave.cpp:1352] Forwarding agent update {"offer_operations":{},"resource_version_uuid":{"value":"DEVEk\/KOR5KLtmOgVG9qvw=="},"slave_id":{"value":"a0664e60-846a-42d0-9586-cf97e997eba3-S0"},"update_oversubscribed_resources":true}
W0202 12:52:48.351085  4052 slave.cpp:1334] Already registered with master master@10.22.1.94:5050
I0202 12:52:48.356086  4052 slave.cpp:1352] Forwarding agent update {"offer_operations":{},"resource_version_uuid":{"value":"DEVEk\/KOR5KLtmOgVG9qvw=="},"slave_id":{"value":"a0664e60-846a-42d0-9586-cf97e997eba3-S0"},"update_oversubscribed_resources":true}
W0202 12:52:48.358086  4052 slave.cpp:1334] Already registered with master master@10.22.1.94:5050
I0202 12:52:48.359086  4052 slave.cpp:1352] Forwarding agent update {"offer_operations":{},"resource_version_uuid":{"value":"DEVEk\/KOR5KLtmOgVG9qvw=="},"slave_id":{"value":"a0664e60-846a-42d0-9586-cf97e997eba3-S0"},"update_oversubscribed_resources":true}
W0202 12:52:48.362089  4052 slave.cpp:1334] Already registered with master master@10.22.1.94:5050
I0202 12:52:48.363085  4052 slave.cpp:1352] Forwarding agent update {"offer_operations":{},"resource_version_uuid":{"value":"DEVEk\/KOR5KLtmOgVG9qvw=="},"slave_id":{"value":"a0664e60-846a-42d0-9586-cf97e997eba3-S0"},"update_oversubscribed_resources":true}
W0202 12:52:48.364082  4052 slave.cpp:1334] Already registered with master master@10.22.1.94:5050
I0202 12:52:48.365085  4052 slave.cpp:1352] Forwarding agent update {"offer_operations":{},"resource_version_uuid":{"value":"DEVEk\/KOR5KLtmOgVG9qvw=="},"slave_id":{"value":"a0664e60-846a-42d0-9586-cf97e997eba3-S0"},"update_oversubscribed_resources":true}
I0202 12:52:50.938498  7652 slave.cpp:1831] Got assigned task 'myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88' for framework 0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000
I0202 12:52:50.962504  7652 slave.cpp:2101] Authorizing task 'myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88' for framework 0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000
I0202 12:52:50.965504  3768 slave.cpp:2494] Launching task 'myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88' for framework 0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000
I0202 12:52:50.988512  3768 slave.cpp:8373] Launching executor 'myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88' of framework 0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000 with resources [{"allocation_info":{"role":"*"},"name":"cpus","scalar":{"value":0.1},"type":"SCALAR"},{"allocation_info":{"role":"*"},"name":"mem","scalar":{"value":32.0},"type":"SCALAR"}] in work directory 'c:\mesos\work_dir\slaves\a0664e60-846a-42d0-9586-cf97e997eba3-S0\frameworks\0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000\executors\myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88\runs\74298e92-9700-486d-b211-a42e5fd0bf85'
I0202 12:52:50.995501  3768 slave.cpp:3046] Launching container 74298e92-9700-486d-b211-a42e5fd0bf85 for executor 'myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88' of framework 0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000
I0202 12:52:51.010500  3768 slave.cpp:2580] Queued task 'myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88' for executor 'myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88' of framework 0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000
I0202 12:52:51.017498  3348 docker.cpp:1144] Starting container '74298e92-9700-486d-b211-a42e5fd0bf85' for task 'myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88' (and executor 'myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88') of framework 0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000
I0202 12:52:53.731667  1104 docker.cpp:784] Checkpointing pid 7732 to 'c:\mesos\work_dir\meta\slaves\a0664e60-846a-42d0-9586-cf97e997eba3-S0\frameworks\0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000\executors\myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88\runs\74298e92-9700-486d-b211-a42e5fd0bf85\pids\forked.pid'
I0202 12:52:53.894371  4052 slave.cpp:4314] Got registration for executor 'myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88' of framework 0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000 from executor(1)@10.19.10.206:49855
I0202 12:52:53.911371  1104 docker.cpp:1627] Ignoring updating container 74298e92-9700-486d-b211-a42e5fd0bf85 because resources passed to update are identical to existing resources
I0202 12:52:53.914371  3768 slave.cpp:2785] Sending queued task 'myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88' to executor 'myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88' of framework 0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000 at executor(1)@10.19.10.206:49855
I0202 12:52:53.931371  7652 slave.cpp:4771] Handling status update TASK_STARTING (Status UUID: ef5adc2f-6f66-44c3-bc98-7697c1315ebf) for task myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88 of framework 0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000 from executor(1)@10.19.10.206:49855
I0202 12:52:53.942371  5168 task_status_update_manager.cpp:328] Received task status update TASK_STARTING (Status UUID: ef5adc2f-6f66-44c3-bc98-7697c1315ebf) for task myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88 of framework 0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000
I0202 12:52:53.948371  5168 task_status_update_manager.cpp:842] Checkpointing UPDATE for task status update TASK_STARTING (Status UUID: ef5adc2f-6f66-44c3-bc98-7697c1315ebf) for task myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88 of framework 0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000
I0202 12:52:53.950371  1104 slave.cpp:5254] Forwarding the update TASK_STARTING (Status UUID: ef5adc2f-6f66-44c3-bc98-7697c1315ebf) for task myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88 of framework 0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000 to master@10.22.1.94:5050
I0202 12:52:53.953371  1104 slave.cpp:5163] Sending acknowledgement for status update TASK_STARTING (Status UUID: ef5adc2f-6f66-44c3-bc98-7697c1315ebf) for task myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88 of framework 0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000 to executor(1)@10.19.10.206:49855
I0202 12:52:54.049816  3348 task_status_update_manager.cpp:401] Received task status update acknowledgement (UUID: ef5adc2f-6f66-44c3-bc98-7697c1315ebf) for task myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88 of framework 0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000
I0202 12:52:54.051817  3348 task_status_update_manager.cpp:842] Checkpointing ACK for task status update TASK_STARTING (Status UUID: ef5adc2f-6f66-44c3-bc98-7697c1315ebf) for task myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88 of framework 0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000
I0202 12:52:59.255755  4052 slave.cpp:4771] Handling status update TASK_FAILED (Status UUID: c0775c86-4f1b-44a6-ae8f-347486f6fa9f) for task myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88 of framework 0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000 from executor(1)@10.19.10.206:49855
I0202 12:52:59.260759  4052 task_status_update_manager.cpp:328] Received task status update TASK_FAILED (Status UUID: c0775c86-4f1b-44a6-ae8f-347486f6fa9f) for task myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88 of framework 0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000
I0202 12:52:59.261757  4052 task_status_update_manager.cpp:842] Checkpointing UPDATE for task status update TASK_FAILED (Status UUID: c0775c86-4f1b-44a6-ae8f-347486f6fa9f) for task myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88 of framework 0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000
I0202 12:52:59.263756  5168 slave.cpp:5254] Forwarding the update TASK_FAILED (Status UUID: c0775c86-4f1b-44a6-ae8f-347486f6fa9f) for task myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88 of framework 0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000 to master@10.22.1.94:5050
I0202 12:52:59.265756  5168 slave.cpp:5163] Sending acknowledgement for status update TASK_FAILED (Status UUID: c0775c86-4f1b-44a6-ae8f-347486f6fa9f) for task myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88 of framework 0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000 to executor(1)@10.19.10.206:49855
I0202 12:52:59.367189  7052 task_status_update_manager.cpp:401] Received task status update acknowledgement (UUID: c0775c86-4f1b-44a6-ae8f-347486f6fa9f) for task myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88 of framework 0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000
I0202 12:52:59.368187  7052 task_status_update_manager.cpp:842] Checkpointing ACK for task status update TASK_FAILED (Status UUID: c0775c86-4f1b-44a6-ae8f-347486f6fa9f) for task myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88 of framework 0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000
I0202 12:53:00.261153  4052 slave.cpp:5386] Got exited event for executor(1)@10.19.10.206:49855
I0202 12:53:00.471400  7052 docker.cpp:2415] Executor for container 74298e92-9700-486d-b211-a42e5fd0bf85 has exited
I0202 12:53:00.472362  7052 docker.cpp:2186] Destroying container 74298e92-9700-486d-b211-a42e5fd0bf85 in RUNNING state
I0202 12:53:00.474362  7052 docker.cpp:2236] Running docker stop on container 74298e92-9700-486d-b211-a42e5fd0bf85
I0202 12:53:00.477478  3348 slave.cpp:5795] Executor 'myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88' of framework 0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000 exited with status 0
I0202 12:53:00.478476  3348 slave.cpp:5899] Cleaning up executor 'myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88' of framework 0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000 at executor(1)@10.19.10.206:49855
I0202 12:53:00.481472  4052 gc.cpp:90] Scheduling 'c:\mesos\work_dir\slaves\a0664e60-846a-42d0-9586-cf97e997eba3-S0\frameworks\0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000\executors\myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88\runs\74298e92-9700-486d-b211-a42e5fd0bf85' for gc 6.99989026072889days in the future
I0202 12:53:00.483475  3528 gc.cpp:90] Scheduling 'c:\mesos\work_dir\slaves\a0664e60-846a-42d0-9586-cf97e997eba3-S0\frameworks\0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000\executors\myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88' for gc 6.99987866347259days in the future
I0202 12:53:00.484474  5168 gc.cpp:90] Scheduling 'c:\mesos\work_dir\meta\slaves\a0664e60-846a-42d0-9586-cf97e997eba3-S0\frameworks\0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000\executors\myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88\runs\74298e92-9700-486d-b211-a42e5fd0bf85' for gc 6.99999439265185days in the future
I0202 12:53:00.485474  5168 gc.cpp:90] Scheduling 'c:\mesos\work_dir\meta\slaves\a0664e60-846a-42d0-9586-cf97e997eba3-S0\frameworks\0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000\executors\myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88' for gc 6.99987864033482days in the future
I0202 12:53:00.485474  3348 slave.cpp:6006] Cleaning up framework 0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000
I0202 12:53:00.486479  1104 task_status_update_manager.cpp:289] Closing task status update streams for framework 0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000
I0202 12:53:00.487473  3768 gc.cpp:90] Scheduling 'c:\mesos\work_dir\slaves\a0664e60-846a-42d0-9586-cf97e997eba3-S0\frameworks\0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000' for gc 6.9998786172days in the future
I0202 12:53:00.488477  3768 gc.cpp:90] Scheduling 'c:\mesos\work_dir\meta\slaves\a0664e60-846a-42d0-9586-cf97e997eba3-S0\frameworks\0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000' for gc 6.99987860557926days in the future
I0202 12:53:47.742332  7052 slave.cpp:6314] Current disk usage 24.73%. Max allowed age: 4.568714599279827days
I0202 12:54:01.675030  7052 slave.cpp:6222] Framework 0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000 seems to have exited. Ignoring registration timeout for executor 'myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88'
I0202 12:54:03.169529  3348 slave.cpp:970] Received SIGUSR1 signal; unregistering and shutting down
I0202 12:54:03.170536  3348 slave.cpp:931] Agent terminating
I0202 12:54:03.199530  3308 process.cpp:887] Failed to accept socket: future discarded

The DC/OS web UI (Jobs -> myattempt11 -> Run History) shows no additional
information either.
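For completeness, the places I can still check (assuming the sandbox layout implied by the work directory printed in the agent log above; all IDs below are from this particular run and will differ every time):

```powershell
# On the Windows agent, in PowerShell: inspect the executor's sandbox for
# this run. $run is the work directory from the "Launching executor ...
# in work directory '...'" log line above.
$run = 'c:\mesos\work_dir\slaves\a0664e60-846a-42d0-9586-cf97e997eba3-S0\frameworks\0ca2eae6-8912-4f6a-984a-d501ac02ff88-0000\executors\myattempt11_20180202203339zVpxc.07298e1c-085b-11e8-bc6d-ae95ed0c8d88\runs\74298e92-9700-486d-b211-a42e5fd0bf85'
Get-ChildItem $run
Get-Content "$run\stdout"
Get-Content "$run\stderr"

# And ask Docker directly what happened to the container Mesos created
# (the docker containerizer names its containers with a "mesos-" prefix):
docker ps -a --filter "name=mesos-"
```

The stdout/stderr files in the sandbox usually carry the actual failure reason that the agent log summarizes as TASK_FAILED.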

Are there any good troubleshooting tips, ideas on what to try, or more
informative logs to look at when running a Docker container on Windows via
Mesos?

Are there more suitable orchestration tools for running Windows Docker
containers in a cluster?
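P.S. One variant still on my to-try list (an untested sketch; the cmd value is an illustration and I don't know yet whether it changes anything): Metronome accepts a cmd alongside docker, so giving the container an explicit command would rule out the image's default entrypoint as the culprit:

```json
{
  "id": "myattempt12",
  "labels": {},
  "run": {
    "env": {},
    "cpus": 1.00,
    "mem": 512,
    "disk": 1000,
    "cmd": "cmd.exe /c echo hello-from-container",
    "docker": {
      "image": "microsoft/windowsservercore"
    },
    "placement": {
      "constraints": [
        { "attribute": "os", "operator": "EQ", "value": "windows" }
      ]
    },
    "restart": { "policy": "NEVER" }
  },
  "schedules": []
}
```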
