-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/63953/#review193480
-----------------------------------------------------------



FAIL: Some Mesos tests failed.

Reviews applied: `['63953']`

Failed command: `D:\DCOS\mesos\src\mesos-tests.exe --verbose`

All the build artifacts are available at:
http://dcos-win.westus.cloudapp.azure.com/mesos-build/review/63953

Relevant logs:

- [mesos-tests-stdout.log](http://dcos-win.westus.cloudapp.azure.com/mesos-build/review/63953/logs/mesos-tests-stdout.log):

```

[----------] 1 test from IsolationFlag/CpuIsolatorTest
[ RUN      ] IsolationFlag/CpuIsolatorTest.ROOT_UserCpuUsage/0
[       OK ] IsolationFlag/CpuIsolatorTest.ROOT_UserCpuUsage/0 (2307 ms)
[----------] 1 test from IsolationFlag/CpuIsolatorTest (2331 ms total)

[----------] 1 test from IsolationFlag/MemoryIsolatorTest
[ RUN      ] IsolationFlag/MemoryIsolatorTest.ROOT_MemUsage/0
[       OK ] IsolationFlag/MemoryIsolatorTest.ROOT_MemUsage/0 (2358 ms)
[----------] 1 test from IsolationFlag/MemoryIsolatorTest (2380 ms total)

[----------] Global test environment tear-down
[==========] 829 tests from 84 test cases ran. (302987 ms total)
[  PASSED  ] 819 tests.
[  FAILED  ] 10 tests, listed below:
[  FAILED  ] OfferOperationStatusUpdateManagerTest.UpdateAndAckNonTerminalUpdate
[  FAILED  ] OfferOperationStatusUpdateManagerTest.RecoverCheckpointedStream
[  FAILED  ] OfferOperationStatusUpdateManagerTest.RecoverEmptyFile
[  FAILED  ] OfferOperationStatusUpdateManagerTest.RecoverTerminatedStream
[  FAILED  ] OfferOperationStatusUpdateManagerTest.IgnoreDuplicateUpdate
[  FAILED  ] OfferOperationStatusUpdateManagerTest.IgnoreDuplicateUpdateAfterRecover
[  FAILED  ] OfferOperationStatusUpdateManagerTest.RejectDuplicateAck
[  FAILED  ] OfferOperationStatusUpdateManagerTest.RejectDuplicateAckAfterRecover
[  FAILED  ] OfferOperationStatusUpdateManagerTest.NonStrictRecoveryCorruptedFile
[  FAILED  ] SlaveTest.ResourceProviderPublishAll

10 FAILED TESTS
  YOU HAVE 204 DISABLED TESTS

```

- [mesos-tests-stderr.log](http://dcos-win.westus.cloudapp.azure.com/mesos-build/review/63953/logs/mesos-tests-stderr.log):

```
I1212 01:37:02.820304  8228 master.cpp:10114] Updating the state of task 270402a7-df4c-4d4f-9c0b-b466840ec473 of framework 97
I1212 01:37:02.163326  9000 exec.cpp:162] Version: 1.5.0
I1212 01:37:02.185326  5300 exec.cpp:237] Executor registered on agent 970bff38-a78e-49a2-922e-8e1d332cd7d5-S0
I1212 01:37:02.188302  5336 executor.cpp:171] Received SUBSCRIBED event
I1212 01:37:02.192327  5336 executor.cpp:175] Subscribed executor on build-srv-04.zq4gs31qjdiunm1ryi1452nvnh.dx.internal.cloudapp.net
I1212 01:37:02.193325  5336 executor.cpp:171] Received LAUNCH event
I1212 01:37:02.197322  5336 executor.cpp:637] Starting task 270402a7-df4c-4d4f-9c0b-b466840ec473
I1212 01:37:02.271325  5336 executor.cpp:477] Running 'D:\DCOS\mesos\src\mesos-containerizer.exe launch <POSSIBLY-SENSITIVE-DATA>'
I1212 01:37:02.796293  5336 executor.cpp:650] Forked command at 6904
I1212 01:37:02.822293  6160 exec.cpp:435] Executor asked to shutdown
I1212 01:37:02.823293  8720 executor.cpp:171] Received SHUTDOWN event
I1212 01:37:02.823293  8720 executor.cpp:747] Shutting down
I1212 01:37:02.823293  8720 executor.cpp:854] Sending SIGTERM to process tree at pid 60bff38-a78e-49a2-922e-8e1d332cd7d5-0000 (latest state: TASK_KILLED, status update state: TASK_KILLED)
I1212 01:37:02.820304  8692 slave.cpp:3400] Shutting down framework 970bff38-a78e-49a2-922e-8e1d332cd7d5-0000
I1212 01:37:02.821293  8692 slave.cpp:6091] Shutting down executor '270402a7-df4c-4d4f-9c0b-b466840ec473' of framework 970bff38-a78e-49a2-922e-8e1d332cd7d5-0000 at executor(1)@10.3.1.5:54077
I1212 01:37:02.822293  8692 slave.cpp:909] Agent terminating
W1212 01:37:02.822293  8692 slave.cpp:3396] Ignoring shutdown framework 970bff38-a78e-49a2-922e-8e1d332cd7d5-0000 because it is terminating
I1212 01:37:02.823293  8228 master.cpp:10220] Removing task 270402a7-df4c-4d4f-9c0b-b466840ec473 with resources cpus(allocated: *):4; mem(allocated: *):2048; disk(allocated: *):1024; ports(allocated: *):[31000-32000] of framework 970bff38-a78e-49a2-922e-8e1d332cd7d5-0000 on agent 970bff38-a78e-49a2-922e-8e1d332cd7d5-S0 at slave(326)@10.3.1.5:54056 (build-srv-04.zq4gs31qjdiunm1ryi1452nvnh.dx.internal.cloudapp.net)
I1212 01:37:02.825301  8820 containerizer.cpp:2328] Destroying container 1969a483-0fca-4b28-afad-65c33170c43e in RUNNING state
I1212 01:37:02.825301  8228 master.cpp:1305] Agent 970bff38-a78e-49a2-922e-8e1d332cd7d5-S0 at slave(326)@10.3.1.5:54056 (build-srv-04.zq4gs31qjdiunm1ryi1452nvnh.dx.internal.cloudapp.net) disconnected
I1212 01:37:02.826293  8228 master.cpp:3364] Disconnecting agent 970bff38-a78e-49a2-922e-8e1d332cd7d5-S0 at slave(326)@10.3.1.5:54056 (build-srv-04.zq4gs31qjdiunm1ryi1452nvnh.dx.internal.cloudapp.net)
I1212 01:37:02.826293  8820 containerizer.cpp:2944] Transitioning the state of container 1969a483-0fca-4b28-afad-65c33170c43e from RUNNING to DESTROYING
I1212 01:37:02.826293  8228 master.cpp:3383] Deactivating agent 970bff38-a78e-49a2-922e-8e1d332cd7d5-S0 at slave(326)@10.3.1.5:54056 (build-srv-04.zq4gs31qjdiunm1ryi1452nvnh.dx.internal.cloudapp.net)
I1212 01:37:02.826293  6788 hierarchical.cpp:344] Removed framework 970bff38-a78e-49a2-922e-8e1d332cd7d5-0000
I1212 01:37:02.826293  6788 hierarchical.cpp:762] Agent 970bff38-a78e-49a2-922e-8e1d332cd7d5-S0 deactivated
I1212 01:37:02.826293  8820 launcher.cpp:156] Asked to destroy container 1969a483-0fca-4b28-afad-65c33170c43e
I1212 01:37:02.925451  1376 containerizer.cpp:2781] Container 1969a483-0fca-4b28-afad-65c33170c43e has exited
I1212 01:37:02.952891  5268 master.cpp:1147] Master terminating
I1212 01:37:02.954892  8228 hierarchical.cpp:605] Removed agent 970bff38-a78e-49a2-922e-8e1d332cd7d5-S0
I1212 01:37:03.259255  1640 process.cpp:887] Failed to accept socket: future discarded
```

- Mesos Reviewbot Windows


On Dec. 6, 2017, 2:47 p.m., Armand Grillet wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/63953/
> -----------------------------------------------------------
> 
> (Updated Dec. 6, 2017, 2:47 p.m.)
> 
> 
> Review request for mesos and Alexander Rukletsov.
> 
> 
> Bugs: MESOS-7361
>     https://issues.apache.org/jira/browse/MESOS-7361
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> This change adjusts the log level based on the container class.
> If the class is `DEBUG`, we print the log entry at verbose
> level 1; otherwise we print it at the `INFO` level.
> 
> We use the added macro in the Mesos containerizer so that COMMAND
> checks produce fewer `INFO` log lines (15 lines instead of 26 before).
> 
> 
> Diffs
> -----
> 
>   src/slave/containerizer/mesos/containerizer.hpp 
> e2739e017cb8dda37d94ad809ca1bd461f308bfb 
>   src/slave/containerizer/mesos/containerizer.cpp 
> 7f3b86d87cf82429c2627d4a32eb0d5adbcc3f29 
> 
> 
> Diff: https://reviews.apache.org/r/63953/diff/6/
> 
> 
> Testing
> -------
> 
> Started a Mesos cluster and used `mesos-execute` with the following task group
> to verify that the behaviour after this patch matches expectations:
> 
> ```
> {
>   "tasks": [
>     {
>       "name": "Name of the task",
>       "task_id": {
>         "value": "task-group"
>       },
>       "agent_id": {
>         "value": ""
>       },
>       "resources": [
>         {
>           "name": "cpus",
>           "type": "SCALAR",
>           "scalar": {
>             "value": 0.01
>           }
>         },
>         {
>           "name": "mem",
>           "type": "SCALAR",
>           "scalar": {
>             "value": 2
>           }
>         }
>       ],
>       "command": {
>         "value": "sleep 1000"
>       },
>       "check": {
>         "type": "COMMAND",
>         "command": {
>           "command": {
>             "value": "echo \"Bonjour\""
>           },
>           "uris": []
>         }
>       }
>     }
>   ]
> }
> ```
> 
> And:
> ```
> $ nice make check
> ```
> 
> 
> Thanks,
> 
> Armand Grillet
> 
>
