[ https://issues.apache.org/jira/browse/MESOS-4736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15159308#comment-15159308 ]
Joseph Wu commented on MESOS-4736:
----------------------------------

This seems to be a problem with how Docker mounts volumes specified via {{-v <host path>:<container path>}}. Docker must perform a recursive bind mount for the persistent volume to show up correctly, but it appears to only do a plain bind mount.

CentOS 6 only supports up to Docker 1.7.1. I'll try downgrading another OS (CentOS 7) to Docker 1.7.1 and see whether the same failure exists.

> DockerContainerizerTest.ROOT_DOCKER_LaunchWithPersistentVolumes is flaky
> ------------------------------------------------------------------------
>
> Key: MESOS-4736
> URL: https://issues.apache.org/jira/browse/MESOS-4736
> Project: Mesos
> Issue Type: Bug
> Affects Versions: 0.28.0
> Environment: Centos6 + GCC 4.9 on AWS
> Reporter: Joseph Wu
> Assignee: Joseph Wu
> Labels: flaky, mesosphere, test
>
> This test passes consistently on other OS's, but fails consistently on CentOS 6.
>
> Verbose logs from test failure:
> {code}
> [ RUN ] DockerContainerizerTest.ROOT_DOCKER_LaunchWithPersistentVolumes
> I0222 18:16:12.327957 26681 leveldb.cpp:174] Opened db in 7.466102ms
> I0222 18:16:12.330528 26681 leveldb.cpp:181] Compacted db in 2.540139ms
> I0222 18:16:12.330580 26681 leveldb.cpp:196] Created db iterator in 16908ns
> I0222 18:16:12.330592 26681 leveldb.cpp:202] Seeked to beginning of db in 1403ns
> I0222 18:16:12.330600 26681 leveldb.cpp:271] Iterated through 0 keys in the db in 315ns
> I0222 18:16:12.330634 26681 replica.cpp:779] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned
> I0222 18:16:12.331082 26698 recover.cpp:447] Starting replica recovery
> I0222 18:16:12.331289 26698 recover.cpp:473] Replica is in EMPTY status
> I0222 18:16:12.332162 26703 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from (13761)@172.30.2.148:35274
> I0222 18:16:12.332701 26701 recover.cpp:193] Received a recover response from a replica in EMPTY status
> I0222 18:16:12.333230
26699 recover.cpp:564] Updating replica status to > STARTING > I0222 18:16:12.334102 26698 master.cpp:376] Master > 652149b4-3932-4d8b-ba6f-8c9d9045be70 (ip-172-30-2-148.mesosphere.io) started > on 172.30.2.148:35274 > I0222 18:16:12.334116 26698 master.cpp:378] Flags at startup: --acls="" > --allocation_interval="1secs" --allocator="HierarchicalDRF" > --authenticate="true" --authenticate_http="true" --authenticate_slaves="true" > --authenticators="crammd5" --authorizers="local" > --credentials="/tmp/QEhLBS/credentials" --framework_sorter="drf" > --help="false" --hostname_lookup="true" --http_authenticators="basic" > --initialize_driver_logging="true" --log_auto_initialize="true" > --logbufsecs="0" --logging_level="INFO" --max_completed_frameworks="50" > --max_completed_tasks_per_framework="1000" --max_slave_ping_timeouts="5" > --quiet="false" --recovery_slave_removal_limit="100%" > --registry="replicated_log" --registry_fetch_timeout="1mins" > --registry_store_timeout="100secs" --registry_strict="true" > --root_submissions="true" --slave_ping_timeout="15secs" > --slave_reregister_timeout="10mins" --user_sorter="drf" --version="false" > --webui_dir="/usr/local/share/mesos/webui" --work_dir="/tmp/QEhLBS/master" > --zk_session_timeout="10secs" > I0222 18:16:12.334354 26698 master.cpp:423] Master only allowing > authenticated frameworks to register > I0222 18:16:12.334363 26698 master.cpp:428] Master only allowing > authenticated slaves to register > I0222 18:16:12.334369 26698 credentials.hpp:35] Loading credentials for > authentication from '/tmp/QEhLBS/credentials' > I0222 18:16:12.335366 26698 master.cpp:468] Using default 'crammd5' > authenticator > I0222 18:16:12.335492 26698 master.cpp:537] Using default 'basic' HTTP > authenticator > I0222 18:16:12.335623 26698 master.cpp:571] Authorization enabled > I0222 18:16:12.335752 26703 leveldb.cpp:304] Persisting metadata (8 bytes) to > leveldb took 2.314693ms > I0222 18:16:12.335769 26700 whitelist_watcher.cpp:77] No 
whitelist given > I0222 18:16:12.335778 26703 replica.cpp:320] Persisted replica status to > STARTING > I0222 18:16:12.335821 26697 hierarchical.cpp:144] Initialized hierarchical > allocator process > I0222 18:16:12.335965 26701 recover.cpp:473] Replica is in STARTING status > I0222 18:16:12.336771 26703 replica.cpp:673] Replica in STARTING status > received a broadcasted recover request from (13763)@172.30.2.148:35274 > I0222 18:16:12.337191 26696 recover.cpp:193] Received a recover response from > a replica in STARTING status > I0222 18:16:12.337635 26700 recover.cpp:564] Updating replica status to VOTING > I0222 18:16:12.337671 26703 master.cpp:1712] The newly elected leader is > master@172.30.2.148:35274 with id 652149b4-3932-4d8b-ba6f-8c9d9045be70 > I0222 18:16:12.337698 26703 master.cpp:1725] Elected as the leading master! > I0222 18:16:12.337713 26703 master.cpp:1470] Recovering from registrar > I0222 18:16:12.337828 26696 registrar.cpp:307] Recovering registrar > I0222 18:16:12.339972 26702 leveldb.cpp:304] Persisting metadata (8 bytes) to > leveldb took 2.06039ms > I0222 18:16:12.339994 26702 replica.cpp:320] Persisted replica status to > VOTING > I0222 18:16:12.340082 26700 recover.cpp:578] Successfully joined the Paxos > group > I0222 18:16:12.340267 26700 recover.cpp:462] Recover process terminated > I0222 18:16:12.340591 26699 log.cpp:659] Attempting to start the writer > I0222 18:16:12.341594 26698 replica.cpp:493] Replica received implicit > promise request from (13764)@172.30.2.148:35274 with proposal 1 > I0222 18:16:12.343598 26698 leveldb.cpp:304] Persisting metadata (8 bytes) to > leveldb took 1.97941ms > I0222 18:16:12.343619 26698 replica.cpp:342] Persisted promised to 1 > I0222 18:16:12.344182 26698 coordinator.cpp:238] Coordinator attempting to > fill missing positions > I0222 18:16:12.345285 26702 replica.cpp:388] Replica received explicit > promise request from (13765)@172.30.2.148:35274 for position 0 with proposal 2 > I0222 
18:16:12.347275 26702 leveldb.cpp:341] Persisting action (8 bytes) to > leveldb took 1.960198ms > I0222 18:16:12.347296 26702 replica.cpp:712] Persisted action at 0 > I0222 18:16:12.348201 26703 replica.cpp:537] Replica received write request > for position 0 from (13766)@172.30.2.148:35274 > I0222 18:16:12.348247 26703 leveldb.cpp:436] Reading position from leveldb > took 21399ns > I0222 18:16:12.350667 26703 leveldb.cpp:341] Persisting action (14 bytes) to > leveldb took 2.39166ms > I0222 18:16:12.350690 26703 replica.cpp:712] Persisted action at 0 > I0222 18:16:12.351191 26696 replica.cpp:691] Replica received learned notice > for position 0 from @0.0.0.0:0 > I0222 18:16:12.353152 26696 leveldb.cpp:341] Persisting action (16 bytes) to > leveldb took 1.935798ms > I0222 18:16:12.353173 26696 replica.cpp:712] Persisted action at 0 > I0222 18:16:12.353188 26696 replica.cpp:697] Replica learned NOP action at > position 0 > I0222 18:16:12.353639 26696 log.cpp:675] Writer started with ending position 0 > I0222 18:16:12.354508 26697 leveldb.cpp:436] Reading position from leveldb > took 25625ns > I0222 18:16:12.355274 26696 registrar.cpp:340] Successfully fetched the > registry (0B) in 17.406976ms > I0222 18:16:12.355357 26696 registrar.cpp:439] Applied 1 operations in > 20977ns; attempting to update the 'registry' > I0222 18:16:12.355929 26697 log.cpp:683] Attempting to append 210 bytes to > the log > I0222 18:16:12.356032 26703 coordinator.cpp:348] Coordinator attempting to > write APPEND action at position 1 > I0222 18:16:12.356657 26698 replica.cpp:537] Replica received write request > for position 1 from (13767)@172.30.2.148:35274 > I0222 18:16:12.358566 26698 leveldb.cpp:341] Persisting action (229 bytes) to > leveldb took 1.881945ms > I0222 18:16:12.358588 26698 replica.cpp:712] Persisted action at 1 > I0222 18:16:12.359081 26697 replica.cpp:691] Replica received learned notice > for position 1 from @0.0.0.0:0 > I0222 18:16:12.361002 26697 leveldb.cpp:341] 
Persisting action (231 bytes) to > leveldb took 1.894331ms > I0222 18:16:12.361023 26697 replica.cpp:712] Persisted action at 1 > I0222 18:16:12.361038 26697 replica.cpp:697] Replica learned APPEND action at > position 1 > I0222 18:16:12.361883 26697 registrar.cpp:484] Successfully updated the > 'registry' in 6.482944ms > I0222 18:16:12.361981 26697 registrar.cpp:370] Successfully recovered > registrar > I0222 18:16:12.362052 26701 log.cpp:702] Attempting to truncate the log to 1 > I0222 18:16:12.362167 26703 coordinator.cpp:348] Coordinator attempting to > write TRUNCATE action at position 2 > I0222 18:16:12.362421 26696 master.cpp:1522] Recovered 0 slaves from the > Registry (171B) ; allowing 10mins for slaves to re-register > I0222 18:16:12.362447 26698 hierarchical.cpp:171] Skipping recovery of > hierarchical allocator: nothing to recover > I0222 18:16:12.362911 26701 replica.cpp:537] Replica received write request > for position 2 from (13768)@172.30.2.148:35274 > I0222 18:16:12.364760 26701 leveldb.cpp:341] Persisting action (16 bytes) to > leveldb took 1.819954ms > I0222 18:16:12.364783 26701 replica.cpp:712] Persisted action at 2 > I0222 18:16:12.365384 26697 replica.cpp:691] Replica received learned notice > for position 2 from @0.0.0.0:0 > I0222 18:16:12.367961 26697 leveldb.cpp:341] Persisting action (18 bytes) to > leveldb took 2.55143ms > I0222 18:16:12.368015 26697 leveldb.cpp:399] Deleting ~1 keys from leveldb > took 28196ns > I0222 18:16:12.368028 26697 replica.cpp:712] Persisted action at 2 > I0222 18:16:12.368044 26697 replica.cpp:697] Replica learned TRUNCATE action > at position 2 > I0222 18:16:12.376824 26703 slave.cpp:193] Slave started on > 396)@172.30.2.148:35274 > I0222 18:16:12.376838 26703 slave.cpp:194] Flags at startup: > --appc_simple_discovery_uri_prefix="http://" > --appc_store_dir="/tmp/mesos/store/appc" --authenticatee="crammd5" > --cgroups_cpu_enable_pids_and_tids_count="false" --cgroups_enable_cfs="false" > 
--cgroups_hierarchy="/sys/fs/cgroup" --cgroups_limit_swap="false" > --cgroups_root="mesos" --container_disk_watch_interval="15secs" > --containerizers="mesos" > --credential="/tmp/DockerContainerizerTest_ROOT_DOCKER_LaunchWithPersistentVolumes_U5vZX1/credential" > --default_role="*" --disk_watch_interval="1mins" --docker="docker" > --docker_auth_server="https://auth.docker.io" --docker_kill_orphans="true" > --docker_puller_timeout="60" --docker_registry="https://registry-1.docker.io" > --docker_remove_delay="6hrs" --docker_socket="/var/run/docker.sock" > --docker_stop_timeout="0ns" --docker_store_dir="/tmp/mesos/store/docker" > --enforce_container_disk_quota="false" > --executor_registration_timeout="1mins" > --executor_shutdown_grace_period="5secs" > --fetcher_cache_dir="/tmp/DockerContainerizerTest_ROOT_DOCKER_LaunchWithPersistentVolumes_U5vZX1/fetch" > --fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1weeks" > --gc_disk_headroom="0.1" --hadoop_home="" --help="false" > --hostname_lookup="true" --image_provisioner_backend="copy" > --initialize_driver_logging="true" --isolation="posix/cpu,posix/mem" > --launcher_dir="/mnt/teamcity/work/4240ba9ddd0997c3/build/src" > --logbufsecs="0" --logging_level="INFO" > --oversubscribed_resources_interval="15secs" --perf_duration="10secs" > --perf_interval="1mins" --qos_correction_interval_min="0ns" --quiet="false" > --recover="reconnect" --recovery_timeout="15mins" > --registration_backoff_factor="10ms" > --resources="cpu:2;mem:2048;disk(role1):2048" > --revocable_cpu_low_priority="true" --sandbox_directory="/mnt/mesos/sandbox" > --strict="true" --switch_user="true" --systemd_enable_support="true" > --systemd_runtime_directory="/run/systemd/system" --version="false" > --work_dir="/tmp/DockerContainerizerTest_ROOT_DOCKER_LaunchWithPersistentVolumes_U5vZX1" > I0222 18:16:12.377109 26703 credentials.hpp:83] Loading credential for > authentication from > 
'/tmp/DockerContainerizerTest_ROOT_DOCKER_LaunchWithPersistentVolumes_U5vZX1/credential' > I0222 18:16:12.377300 26703 slave.cpp:324] Slave using credential for: > test-principal > I0222 18:16:12.377439 26703 resources.cpp:576] Parsing resources as JSON > failed: cpu:2;mem:2048;disk(role1):2048 > Trying semicolon-delimited string format instead > I0222 18:16:12.377804 26703 slave.cpp:464] Slave resources: cpu(*):2; > mem(*):2048; disk(role1):2048; cpus(*):8; ports(*):[31000-32000] > I0222 18:16:12.377881 26703 slave.cpp:472] Slave attributes: [ ] > I0222 18:16:12.377889 26703 slave.cpp:477] Slave hostname: > ip-172-30-2-148.mesosphere.io > I0222 18:16:12.378779 26701 state.cpp:58] Recovering state from > '/tmp/DockerContainerizerTest_ROOT_DOCKER_LaunchWithPersistentVolumes_U5vZX1/meta' > I0222 18:16:12.379092 26697 status_update_manager.cpp:200] Recovering status > update manager > I0222 18:16:12.379156 26681 sched.cpp:222] Version: 0.28.0 > I0222 18:16:12.379250 26697 docker.cpp:722] Recovering Docker containers > I0222 18:16:12.379421 26703 slave.cpp:4565] Finished recovery > I0222 18:16:12.379627 26700 sched.cpp:326] New master detected at > master@172.30.2.148:35274 > I0222 18:16:12.379735 26703 slave.cpp:4737] Querying resource estimator for > oversubscribable resources > I0222 18:16:12.379765 26700 sched.cpp:382] Authenticating with master > master@172.30.2.148:35274 > I0222 18:16:12.379781 26700 sched.cpp:389] Using default CRAM-MD5 > authenticatee > I0222 18:16:12.379964 26696 status_update_manager.cpp:174] Pausing sending > status updates > I0222 18:16:12.379992 26702 authenticatee.cpp:121] Creating new client SASL > connection > I0222 18:16:12.380030 26697 slave.cpp:796] New master detected at > master@172.30.2.148:35274 > I0222 18:16:12.380106 26697 slave.cpp:859] Authenticating with master > master@172.30.2.148:35274 > I0222 18:16:12.380127 26697 slave.cpp:864] Using default CRAM-MD5 > authenticatee > I0222 18:16:12.380188 26699 master.cpp:5526] 
Authenticating > scheduler-1850b1cd-3396-4479-b2f3-47ee6c3fa270@172.30.2.148:35274 > I0222 18:16:12.380269 26700 authenticator.cpp:413] Starting authentication > session for crammd5_authenticatee(832)@172.30.2.148:35274 > I0222 18:16:12.380280 26698 authenticatee.cpp:121] Creating new client SASL > connection > I0222 18:16:12.380307 26697 slave.cpp:832] Detecting new master > I0222 18:16:12.380450 26697 slave.cpp:4751] Received oversubscribable > resources from the resource estimator > I0222 18:16:12.380452 26699 master.cpp:5526] Authenticating > slave(396)@172.30.2.148:35274 > I0222 18:16:12.380506 26698 authenticator.cpp:98] Creating new server SASL > connection > I0222 18:16:12.380540 26697 authenticator.cpp:413] Starting authentication > session for crammd5_authenticatee(833)@172.30.2.148:35274 > I0222 18:16:12.380635 26700 authenticatee.cpp:212] Received SASL > authentication mechanisms: CRAM-MD5 > I0222 18:16:12.380659 26700 authenticatee.cpp:238] Attempting to authenticate > with mechanism 'CRAM-MD5' > I0222 18:16:12.380762 26700 authenticator.cpp:203] Received SASL > authentication start > I0222 18:16:12.380765 26701 authenticator.cpp:98] Creating new server SASL > connection > I0222 18:16:12.380843 26700 authenticator.cpp:325] Authentication requires > more steps > I0222 18:16:12.380911 26698 authenticatee.cpp:212] Received SASL > authentication mechanisms: CRAM-MD5 > I0222 18:16:12.380931 26702 authenticatee.cpp:258] Received SASL > authentication step > I0222 18:16:12.380936 26698 authenticatee.cpp:238] Attempting to authenticate > with mechanism 'CRAM-MD5' > I0222 18:16:12.381036 26702 authenticator.cpp:231] Received SASL > authentication step > I0222 18:16:12.381052 26698 authenticator.cpp:203] Received SASL > authentication start > I0222 18:16:12.381062 26702 auxprop.cpp:107] Request to lookup properties for > user: 'test-principal' realm: 'ip-172-30-2-148' server FQDN: > 'ip-172-30-2-148' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false > 
I0222 18:16:12.381072 26702 auxprop.cpp:179] Looking up auxiliary property > '*userPassword' > I0222 18:16:12.381104 26702 auxprop.cpp:179] Looking up auxiliary property > '*cmusaslsecretCRAM-MD5' > I0222 18:16:12.381104 26698 authenticator.cpp:325] Authentication requires > more steps > I0222 18:16:12.381134 26702 auxprop.cpp:107] Request to lookup properties for > user: 'test-principal' realm: 'ip-172-30-2-148' server FQDN: > 'ip-172-30-2-148' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true > I0222 18:16:12.381142 26702 auxprop.cpp:129] Skipping auxiliary property > '*userPassword' since SASL_AUXPROP_AUTHZID == true > I0222 18:16:12.381147 26702 auxprop.cpp:129] Skipping auxiliary property > '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true > I0222 18:16:12.381162 26702 authenticator.cpp:317] Authentication success > I0222 18:16:12.381184 26698 authenticatee.cpp:258] Received SASL > authentication step > I0222 18:16:12.381247 26699 authenticatee.cpp:298] Authentication success > I0222 18:16:12.381283 26696 authenticator.cpp:231] Received SASL > authentication step > I0222 18:16:12.381311 26696 auxprop.cpp:107] Request to lookup properties for > user: 'test-principal' realm: 'ip-172-30-2-148' server FQDN: > 'ip-172-30-2-148' SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false > I0222 18:16:12.381325 26696 auxprop.cpp:179] Looking up auxiliary property > '*userPassword' > I0222 18:16:12.381319 26701 master.cpp:5556] Successfully authenticated > principal 'test-principal' at > scheduler-1850b1cd-3396-4479-b2f3-47ee6c3fa270@172.30.2.148:35274 > I0222 18:16:12.381345 26700 authenticator.cpp:431] Authentication session > cleanup for crammd5_authenticatee(832)@172.30.2.148:35274 > I0222 18:16:12.381361 26696 auxprop.cpp:179] Looking up auxiliary property > '*cmusaslsecretCRAM-MD5' > I0222 18:16:12.381397 26696 auxprop.cpp:107] Request to lookup properties for > user: 'test-principal' realm: 'ip-172-30-2-148' server FQDN: > 'ip-172-30-2-148' 
SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true > I0222 18:16:12.381413 26696 auxprop.cpp:129] Skipping auxiliary property > '*userPassword' since SASL_AUXPROP_AUTHZID == true > I0222 18:16:12.381422 26696 auxprop.cpp:129] Skipping auxiliary property > '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true > I0222 18:16:12.381441 26696 authenticator.cpp:317] Authentication success > I0222 18:16:12.381548 26698 sched.cpp:471] Successfully authenticated with > master master@172.30.2.148:35274 > I0222 18:16:12.381563 26698 sched.cpp:776] Sending SUBSCRIBE call to > master@172.30.2.148:35274 > I0222 18:16:12.381634 26700 authenticatee.cpp:298] Authentication success > I0222 18:16:12.381660 26698 sched.cpp:809] Will retry registration in > 770.60771ms if necessary > I0222 18:16:12.381675 26697 master.cpp:5556] Successfully authenticated > principal 'test-principal' at slave(396)@172.30.2.148:35274 > I0222 18:16:12.381734 26702 authenticator.cpp:431] Authentication session > cleanup for crammd5_authenticatee(833)@172.30.2.148:35274 > I0222 18:16:12.381811 26697 master.cpp:2280] Received SUBSCRIBE call for > framework 'default' at > scheduler-1850b1cd-3396-4479-b2f3-47ee6c3fa270@172.30.2.148:35274 > I0222 18:16:12.381882 26697 master.cpp:1751] Authorizing framework principal > 'test-principal' to receive offers for role 'role1' > I0222 18:16:12.382004 26698 slave.cpp:927] Successfully authenticated with > master master@172.30.2.148:35274 > I0222 18:16:12.382123 26698 slave.cpp:1321] Will retry registration in > 8.1941ms if necessary > I0222 18:16:12.382282 26701 master.cpp:4240] Registering slave at > slave(396)@172.30.2.148:35274 (ip-172-30-2-148.mesosphere.io) with id > 652149b4-3932-4d8b-ba6f-8c9d9045be70-S0 > I0222 18:16:12.382482 26701 master.cpp:2351] Subscribing framework default > with checkpointing disabled and capabilities [ ] > I0222 18:16:12.382612 26703 registrar.cpp:439] Applied 1 operations in > 46327ns; attempting to update the 'registry' > 
I0222 18:16:12.382829 26699 hierarchical.cpp:265] Added framework > 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 > I0222 18:16:12.382910 26699 hierarchical.cpp:1434] No resources available to > allocate! > I0222 18:16:12.382915 26701 sched.cpp:703] Framework registered with > 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 > I0222 18:16:12.382936 26699 hierarchical.cpp:1529] No inverse offers to send > out! > I0222 18:16:12.382953 26699 hierarchical.cpp:1127] Performed allocation for 0 > slaves in 89949ns > I0222 18:16:12.382982 26701 sched.cpp:717] Scheduler::registered took 46498ns > I0222 18:16:12.383536 26698 log.cpp:683] Attempting to append 423 bytes to > the log > I0222 18:16:12.383628 26699 coordinator.cpp:348] Coordinator attempting to > write APPEND action at position 3 > I0222 18:16:12.384196 26700 replica.cpp:537] Replica received write request > for position 3 from (13775)@172.30.2.148:35274 > I0222 18:16:12.386602 26700 leveldb.cpp:341] Persisting action (442 bytes) to > leveldb took 2.377119ms > I0222 18:16:12.386625 26700 replica.cpp:712] Persisted action at 3 > I0222 18:16:12.387104 26698 replica.cpp:691] Replica received learned notice > for position 3 from @0.0.0.0:0 > I0222 18:16:12.389159 26698 leveldb.cpp:341] Persisting action (444 bytes) to > leveldb took 2.032301ms > I0222 18:16:12.389181 26698 replica.cpp:712] Persisted action at 3 > I0222 18:16:12.389196 26698 replica.cpp:697] Replica learned APPEND action at > position 3 > I0222 18:16:12.390281 26698 registrar.cpp:484] Successfully updated the > 'registry' in 7.619072ms > I0222 18:16:12.390444 26702 log.cpp:702] Attempting to truncate the log to 3 > I0222 18:16:12.390569 26701 coordinator.cpp:348] Coordinator attempting to > write TRUNCATE action at position 4 > I0222 18:16:12.390904 26701 slave.cpp:3482] Received ping from > slave-observer(364)@172.30.2.148:35274 > I0222 18:16:12.391054 26700 master.cpp:4308] Registered slave > 652149b4-3932-4d8b-ba6f-8c9d9045be70-S0 at 
slave(396)@172.30.2.148:35274 > (ip-172-30-2-148.mesosphere.io) with cpu(*):2; mem(*):2048; disk(role1):2048; > cpus(*):8; ports(*):[31000-32000] > I0222 18:16:12.391144 26703 slave.cpp:971] Registered with master > master@172.30.2.148:35274; given slave ID > 652149b4-3932-4d8b-ba6f-8c9d9045be70-S0 > I0222 18:16:12.391168 26703 fetcher.cpp:81] Clearing fetcher cache > I0222 18:16:12.391238 26700 replica.cpp:537] Replica received write request > for position 4 from (13776)@172.30.2.148:35274 > I0222 18:16:12.391263 26701 status_update_manager.cpp:181] Resuming sending > status updates > I0222 18:16:12.391304 26697 hierarchical.cpp:473] Added slave > 652149b4-3932-4d8b-ba6f-8c9d9045be70-S0 (ip-172-30-2-148.mesosphere.io) with > cpu(*):2; mem(*):2048; disk(role1):2048; cpus(*):8; ports(*):[31000-32000] > (allocated: ) > I0222 18:16:12.391388 26703 slave.cpp:994] Checkpointing SlaveInfo to > '/tmp/DockerContainerizerTest_ROOT_DOCKER_LaunchWithPersistentVolumes_U5vZX1/meta/slaves/652149b4-3932-4d8b-ba6f-8c9d9045be70-S0/slave.info' > I0222 18:16:12.391636 26703 slave.cpp:1030] Forwarding total oversubscribed > resources > I0222 18:16:12.391772 26699 master.cpp:4649] Received update of slave > 652149b4-3932-4d8b-ba6f-8c9d9045be70-S0 at slave(396)@172.30.2.148:35274 > (ip-172-30-2-148.mesosphere.io) with total oversubscribed resources > I0222 18:16:12.392011 26697 hierarchical.cpp:1529] No inverse offers to send > out! 
> I0222 18:16:12.392053 26697 hierarchical.cpp:1147] Performed allocation for > slave 652149b4-3932-4d8b-ba6f-8c9d9045be70-S0 in 708377ns > I0222 18:16:12.392307 26703 master.cpp:5355] Sending 1 offers to framework > 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 (default) at > scheduler-1850b1cd-3396-4479-b2f3-47ee6c3fa270@172.30.2.148:35274 > I0222 18:16:12.392374 26697 hierarchical.cpp:531] Slave > 652149b4-3932-4d8b-ba6f-8c9d9045be70-S0 (ip-172-30-2-148.mesosphere.io) > updated with oversubscribed resources (total: cpu(*):2; mem(*):2048; > disk(role1):2048; cpus(*):8; ports(*):[31000-32000], allocated: > disk(role1):2048; cpu(*):2; mem(*):2048; cpus(*):8; ports(*):[31000-32000]) > I0222 18:16:12.392500 26697 hierarchical.cpp:1434] No resources available to > allocate! > I0222 18:16:12.392531 26697 hierarchical.cpp:1529] No inverse offers to send > out! > I0222 18:16:12.392556 26697 hierarchical.cpp:1147] Performed allocation for > slave 652149b4-3932-4d8b-ba6f-8c9d9045be70-S0 in 136779ns > I0222 18:16:12.392704 26701 sched.cpp:873] Scheduler::resourceOffers took > 94330ns > I0222 18:16:12.393086 26681 resources.cpp:576] Parsing resources as JSON > failed: cpus:1;mem:64; > Trying semicolon-delimited string format instead > I0222 18:16:12.393600 26700 leveldb.cpp:341] Persisting action (16 bytes) to > leveldb took 2.326382ms > I0222 18:16:12.393625 26700 replica.cpp:712] Persisted action at 4 > I0222 18:16:12.394162 26696 replica.cpp:691] Replica received learned notice > for position 4 from @0.0.0.0:0 > I0222 18:16:12.394533 26701 master.cpp:3138] Processing ACCEPT call for > offers: [ 652149b4-3932-4d8b-ba6f-8c9d9045be70-O0 ] on slave > 652149b4-3932-4d8b-ba6f-8c9d9045be70-S0 at slave(396)@172.30.2.148:35274 > (ip-172-30-2-148.mesosphere.io) for framework > 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 (default) at > scheduler-1850b1cd-3396-4479-b2f3-47ee6c3fa270@172.30.2.148:35274 > I0222 18:16:12.394567 26701 master.cpp:2926] Authorizing principal > 'test-principal' 
to create volumes > I0222 18:16:12.394628 26701 master.cpp:2825] Authorizing framework principal > 'test-principal' to launch task 1 as user 'root' > I0222 18:16:12.395519 26701 master.cpp:3467] Applying CREATE operation for > volumes disk(role1)[id1:path1]:64 from framework > 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 (default) at > scheduler-1850b1cd-3396-4479-b2f3-47ee6c3fa270@172.30.2.148:35274 to slave > 652149b4-3932-4d8b-ba6f-8c9d9045be70-S0 at slave(396)@172.30.2.148:35274 > (ip-172-30-2-148.mesosphere.io) > I0222 18:16:12.395808 26701 master.cpp:6589] Sending checkpointed resources > disk(role1)[id1:path1]:64 to slave 652149b4-3932-4d8b-ba6f-8c9d9045be70-S0 at > slave(396)@172.30.2.148:35274 (ip-172-30-2-148.mesosphere.io) > I0222 18:16:12.396316 26696 leveldb.cpp:341] Persisting action (18 bytes) to > leveldb took 2.130659ms > I0222 18:16:12.396317 26703 slave.cpp:2341] Updated checkpointed resources > from to disk(role1)[id1:path1]:64 > I0222 18:16:12.396368 26696 leveldb.cpp:399] Deleting ~2 keys from leveldb > took 30004ns > I0222 18:16:12.396381 26696 replica.cpp:712] Persisted action at 4 > I0222 18:16:12.396397 26696 replica.cpp:697] Replica learned TRUNCATE action > at position 4 > I0222 18:16:12.396533 26701 master.hpp:176] Adding task 1 with resources > cpus(*):1; mem(*):64; disk(role1)[id1:path1]:64 on slave > 652149b4-3932-4d8b-ba6f-8c9d9045be70-S0 (ip-172-30-2-148.mesosphere.io) > I0222 18:16:12.396680 26701 master.cpp:3623] Launching task 1 of framework > 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 (default) at > scheduler-1850b1cd-3396-4479-b2f3-47ee6c3fa270@172.30.2.148:35274 with > resources cpus(*):1; mem(*):64; disk(role1)[id1:path1]:64 on slave > 652149b4-3932-4d8b-ba6f-8c9d9045be70-S0 at slave(396)@172.30.2.148:35274 > (ip-172-30-2-148.mesosphere.io) > I0222 18:16:12.397009 26696 slave.cpp:1361] Got assigned task 1 for framework > 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 > I0222 18:16:12.397143 26696 resources.cpp:576] Parsing resources 
as JSON > failed: cpus:0.1;mem:32 > Trying semicolon-delimited string format instead > I0222 18:16:12.397306 26699 hierarchical.cpp:653] Updated allocation of > framework 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 on slave > 652149b4-3932-4d8b-ba6f-8c9d9045be70-S0 from disk(role1):2048; cpu(*):2; > mem(*):2048; cpus(*):8; ports(*):[31000-32000] to disk(role1):1984; cpu(*):2; > mem(*):2048; cpus(*):8; ports(*):[31000-32000]; disk(role1)[id1:path1]:64 > I0222 18:16:12.397625 26699 hierarchical.cpp:892] Recovered disk(role1):1984; > cpu(*):2; mem(*):1984; cpus(*):7; ports(*):[31000-32000] (total: cpu(*):2; > mem(*):2048; disk(role1):1984; cpus(*):8; ports(*):[31000-32000]; > disk(role1)[id1:path1]:64, allocated: disk(role1)[id1:path1]:64; cpus(*):1; > mem(*):64) on slave 652149b4-3932-4d8b-ba6f-8c9d9045be70-S0 from framework > 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 > I0222 18:16:12.397857 26696 slave.cpp:1480] Launching task 1 for framework > 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 > I0222 18:16:12.397943 26696 resources.cpp:576] Parsing resources as JSON > failed: cpus:0.1;mem:32 > Trying semicolon-delimited string format instead > I0222 18:16:12.398560 26696 paths.cpp:474] Trying to chown > '/tmp/DockerContainerizerTest_ROOT_DOCKER_LaunchWithPersistentVolumes_U5vZX1/slaves/652149b4-3932-4d8b-ba6f-8c9d9045be70-S0/frameworks/652149b4-3932-4d8b-ba6f-8c9d9045be70-0000/executors/1/runs/207172a3-0ebd-4faa-946b-75a829fc75fc' > to user 'root' > I0222 18:16:12.403491 26696 slave.cpp:5367] Launching executor 1 of framework > 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 with resources cpus(*):0.1; > mem(*):32 in work directory > '/tmp/DockerContainerizerTest_ROOT_DOCKER_LaunchWithPersistentVolumes_U5vZX1/slaves/652149b4-3932-4d8b-ba6f-8c9d9045be70-S0/frameworks/652149b4-3932-4d8b-ba6f-8c9d9045be70-0000/executors/1/runs/207172a3-0ebd-4faa-946b-75a829fc75fc' > I0222 18:16:12.404115 26696 slave.cpp:1698] Queuing task '1' for executor '1' > of framework 
652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 > I0222 18:16:12.405709 26696 slave.cpp:749] Successfully attached file > '/tmp/DockerContainerizerTest_ROOT_DOCKER_LaunchWithPersistentVolumes_U5vZX1/slaves/652149b4-3932-4d8b-ba6f-8c9d9045be70-S0/frameworks/652149b4-3932-4d8b-ba6f-8c9d9045be70-0000/executors/1/runs/207172a3-0ebd-4faa-946b-75a829fc75fc' > I0222 18:16:12.408308 26697 docker.cpp:1019] Starting container > '207172a3-0ebd-4faa-946b-75a829fc75fc' for task '1' (and executor '1') of > framework '652149b4-3932-4d8b-ba6f-8c9d9045be70-0000' > I0222 18:16:12.408592 26697 docker.cpp:1053] Running docker -H > unix:///var/run/docker.sock inspect alpine:latest > I0222 18:16:12.520663 26702 docker.cpp:390] Docker pull alpine completed > I0222 18:16:12.520853 26702 docker.cpp:479] Changing the ownership of the > persistent volume at > '/tmp/DockerContainerizerTest_ROOT_DOCKER_LaunchWithPersistentVolumes_U5vZX1/volumes/roles/role1/id1' > with uid 0 and gid 0 > I0222 18:16:12.524782 26702 docker.cpp:500] Mounting > '/tmp/DockerContainerizerTest_ROOT_DOCKER_LaunchWithPersistentVolumes_U5vZX1/volumes/roles/role1/id1' > to > '/tmp/DockerContainerizerTest_ROOT_DOCKER_LaunchWithPersistentVolumes_U5vZX1/slaves/652149b4-3932-4d8b-ba6f-8c9d9045be70-S0/frameworks/652149b4-3932-4d8b-ba6f-8c9d9045be70-0000/executors/1/runs/207172a3-0ebd-4faa-946b-75a829fc75fc/path1' > for persistent volume disk(role1)[id1:path1]:64 of container > 207172a3-0ebd-4faa-946b-75a829fc75fc > I0222 18:16:12.580834 26700 slave.cpp:2643] Got registration for executor '1' > of framework 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 from > executor(1)@172.30.2.148:56026 > I0222 18:16:12.581961 26699 docker.cpp:1299] Ignoring updating container > '207172a3-0ebd-4faa-946b-75a829fc75fc' with resources passed to update is > identical to existing resources > I0222 18:16:12.582307 26698 slave.cpp:1863] Sending queued task '1' to > executor '1' of framework 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 at > 
executor(1)@172.30.2.148:56026 > I0222 18:16:13.295573 26703 slave.cpp:3002] Handling status update > TASK_RUNNING (UUID: 0f6cc8cc-cd72-4bda-ba53-2e573ea1e0a0) for task 1 of > framework 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 from > executor(1)@172.30.2.148:56026 > I0222 18:16:13.295940 26703 slave.cpp:3002] Handling status update > TASK_FINISHED (UUID: ed5a5eb7-65cc-42fa-bb85-3aaf65d86e6b) for task 1 of > framework 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 from > executor(1)@172.30.2.148:56026 > I0222 18:16:13.296381 26701 status_update_manager.cpp:320] Received status > update TASK_RUNNING (UUID: 0f6cc8cc-cd72-4bda-ba53-2e573ea1e0a0) for task 1 > of framework 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 > I0222 18:16:13.296422 26701 status_update_manager.cpp:497] Creating > StatusUpdate stream for task 1 of framework > 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 > I0222 18:16:13.296422 26703 slave.cpp:5677] Terminating task 1 > I0222 18:16:13.296839 26701 status_update_manager.cpp:374] Forwarding update > TASK_RUNNING (UUID: 0f6cc8cc-cd72-4bda-ba53-2e573ea1e0a0) for task 1 of > framework 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 to the slave > I0222 18:16:13.296902 26702 docker.cpp:766] Running docker -H > unix:///var/run/docker.sock inspect > mesos-652149b4-3932-4d8b-ba6f-8c9d9045be70-S0.207172a3-0ebd-4faa-946b-75a829fc75fc > I0222 18:16:13.299427 26699 slave.cpp:3400] Forwarding the update > TASK_RUNNING (UUID: 0f6cc8cc-cd72-4bda-ba53-2e573ea1e0a0) for task 1 of > framework 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 to > master@172.30.2.148:35274 > I0222 18:16:13.299921 26699 slave.cpp:3294] Status update manager > successfully handled status update TASK_RUNNING (UUID: > 0f6cc8cc-cd72-4bda-ba53-2e573ea1e0a0) for task 1 of framework > 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 > I0222 18:16:13.299969 26699 slave.cpp:3310] Sending acknowledgement for > status update TASK_RUNNING (UUID: 0f6cc8cc-cd72-4bda-ba53-2e573ea1e0a0) for > task 1 of framework 
652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 to > executor(1)@172.30.2.148:56026 > I0222 18:16:13.300130 26696 master.cpp:4794] Status update TASK_RUNNING > (UUID: 0f6cc8cc-cd72-4bda-ba53-2e573ea1e0a0) for task 1 of framework > 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 from slave > 652149b4-3932-4d8b-ba6f-8c9d9045be70-S0 at slave(396)@172.30.2.148:35274 > (ip-172-30-2-148.mesosphere.io) > I0222 18:16:13.300176 26696 master.cpp:4842] Forwarding status update > TASK_RUNNING (UUID: 0f6cc8cc-cd72-4bda-ba53-2e573ea1e0a0) for task 1 of > framework 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 > I0222 18:16:13.300375 26696 master.cpp:6450] Updating the state of task 1 of > framework 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 (latest state: > TASK_FINISHED, status update state: TASK_RUNNING) > I0222 18:16:13.300765 26703 sched.cpp:981] Scheduler::statusUpdate took > 164263ns > I0222 18:16:13.300962 26700 hierarchical.cpp:892] Recovered cpus(*):1; > mem(*):64; disk(role1)[id1:path1]:64 (total: cpu(*):2; mem(*):2048; > disk(role1):1984; cpus(*):8; ports(*):[31000-32000]; > disk(role1)[id1:path1]:64, allocated: ) on slave > 652149b4-3932-4d8b-ba6f-8c9d9045be70-S0 from framework > 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 > I0222 18:16:13.301178 26699 master.cpp:3952] Processing ACKNOWLEDGE call > 0f6cc8cc-cd72-4bda-ba53-2e573ea1e0a0 for task 1 of framework > 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 (default) at > scheduler-1850b1cd-3396-4479-b2f3-47ee6c3fa270@172.30.2.148:35274 on slave > 652149b4-3932-4d8b-ba6f-8c9d9045be70-S0 > I0222 18:16:13.301450 26699 status_update_manager.cpp:392] Received status > update acknowledgement (UUID: 0f6cc8cc-cd72-4bda-ba53-2e573ea1e0a0) for task > 1 of framework 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 > I0222 18:16:13.301697 26701 slave.cpp:2412] Status update manager > successfully handled status update acknowledgement (UUID: > 0f6cc8cc-cd72-4bda-ba53-2e573ea1e0a0) for task 1 of framework > 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 > I0222 
18:16:13.327133 26697 status_update_manager.cpp:320] Received status > update TASK_FINISHED (UUID: ed5a5eb7-65cc-42fa-bb85-3aaf65d86e6b) for task 1 > of framework 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 > I0222 18:16:13.327280 26697 status_update_manager.cpp:374] Forwarding update > TASK_FINISHED (UUID: ed5a5eb7-65cc-42fa-bb85-3aaf65d86e6b) for task 1 of > framework 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 to the slave > I0222 18:16:13.327481 26696 slave.cpp:3400] Forwarding the update > TASK_FINISHED (UUID: ed5a5eb7-65cc-42fa-bb85-3aaf65d86e6b) for task 1 of > framework 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 to > master@172.30.2.148:35274 > I0222 18:16:13.327621 26696 slave.cpp:3294] Status update manager > successfully handled status update TASK_FINISHED (UUID: > ed5a5eb7-65cc-42fa-bb85-3aaf65d86e6b) for task 1 of framework > 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 > I0222 18:16:13.327679 26696 slave.cpp:3310] Sending acknowledgement for > status update TASK_FINISHED (UUID: ed5a5eb7-65cc-42fa-bb85-3aaf65d86e6b) for > task 1 of framework 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 to > executor(1)@172.30.2.148:56026 > I0222 18:16:13.327800 26698 master.cpp:4794] Status update TASK_FINISHED > (UUID: ed5a5eb7-65cc-42fa-bb85-3aaf65d86e6b) for task 1 of framework > 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 from slave > 652149b4-3932-4d8b-ba6f-8c9d9045be70-S0 at slave(396)@172.30.2.148:35274 > (ip-172-30-2-148.mesosphere.io) > I0222 18:16:13.327850 26698 master.cpp:4842] Forwarding status update > TASK_FINISHED (UUID: ed5a5eb7-65cc-42fa-bb85-3aaf65d86e6b) for task 1 of > framework 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 > I0222 18:16:13.327977 26698 master.cpp:6450] Updating the state of task 1 of > framework 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 (latest state: > TASK_FINISHED, status update state: TASK_FINISHED) > I0222 18:16:13.328248 26699 sched.cpp:981] Scheduler::statusUpdate took > 100279ns > I0222 18:16:13.328588 26700 master.cpp:3952] Processing 
ACKNOWLEDGE call > ed5a5eb7-65cc-42fa-bb85-3aaf65d86e6b for task 1 of framework > 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 (default) at > scheduler-1850b1cd-3396-4479-b2f3-47ee6c3fa270@172.30.2.148:35274 on slave > 652149b4-3932-4d8b-ba6f-8c9d9045be70-S0 > I0222 18:16:13.328662 26681 sched.cpp:1903] Asked to stop the driver > I0222 18:16:13.328630 26700 master.cpp:6516] Removing task 1 with resources > cpus(*):1; mem(*):64; disk(role1)[id1:path1]:64 of framework > 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 on slave > 652149b4-3932-4d8b-ba6f-8c9d9045be70-S0 at slave(396)@172.30.2.148:35274 > (ip-172-30-2-148.mesosphere.io) > I0222 18:16:13.328747 26697 sched.cpp:1143] Stopping framework > '652149b4-3932-4d8b-ba6f-8c9d9045be70-0000' > I0222 18:16:13.329064 26696 status_update_manager.cpp:392] Received status > update acknowledgement (UUID: ed5a5eb7-65cc-42fa-bb85-3aaf65d86e6b) for task > 1 of framework 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 > I0222 18:16:13.329069 26700 master.cpp:5926] Processing TEARDOWN call for > framework 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 (default) at > scheduler-1850b1cd-3396-4479-b2f3-47ee6c3fa270@172.30.2.148:35274 > I0222 18:16:13.329100 26700 master.cpp:5938] Removing framework > 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 (default) at > scheduler-1850b1cd-3396-4479-b2f3-47ee6c3fa270@172.30.2.148:35274 > I0222 18:16:13.329200 26696 status_update_manager.cpp:528] Cleaning up status > update stream for task 1 of framework > 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 > I0222 18:16:13.329218 26703 hierarchical.cpp:375] Deactivated framework > 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 > I0222 18:16:13.329309 26697 slave.cpp:2079] Asked to shut down framework > 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 by master@172.30.2.148:35274 > I0222 18:16:13.329346 26697 slave.cpp:2104] Shutting down framework > 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 > I0222 18:16:13.329418 26697 slave.cpp:4198] Shutting down executor '1' of > framework 
652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 at > executor(1)@172.30.2.148:56026 > I0222 18:16:13.329578 26699 hierarchical.cpp:326] Removed framework > 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 > I0222 18:16:13.329684 26697 slave.cpp:2412] Status update manager > successfully handled status update acknowledgement (UUID: > ed5a5eb7-65cc-42fa-bb85-3aaf65d86e6b) for task 1 of framework > 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 > I0222 18:16:13.329733 26697 slave.cpp:5718] Completing task 1 > I0222 18:16:13.337236 26703 hierarchical.cpp:1434] No resources available to > allocate! > I0222 18:16:13.337266 26703 hierarchical.cpp:1127] Performed allocation for 1 > slaves in 153077ns > I0222 18:16:14.297827 26702 slave.cpp:3528] executor(1)@172.30.2.148:56026 > exited > I0222 18:16:14.332489 26697 docker.cpp:1915] Executor for container > '207172a3-0ebd-4faa-946b-75a829fc75fc' has exited > I0222 18:16:14.332512 26697 docker.cpp:1679] Destroying container > '207172a3-0ebd-4faa-946b-75a829fc75fc' > I0222 18:16:14.332600 26697 docker.cpp:1807] Running docker stop on container > '207172a3-0ebd-4faa-946b-75a829fc75fc' > I0222 18:16:14.333111 26697 docker.cpp:908] Unmounting volume for container > '207172a3-0ebd-4faa-946b-75a829fc75fc' > I0222 18:16:14.333288 26700 slave.cpp:3886] Executor '1' of framework > 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 exited with status 0 > I0222 18:16:14.333340 26700 slave.cpp:3990] Cleaning up executor '1' of > framework 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 at > executor(1)@172.30.2.148:56026 > I0222 18:16:14.333603 26703 gc.cpp:54] Scheduling > '/tmp/DockerContainerizerTest_ROOT_DOCKER_LaunchWithPersistentVolumes_U5vZX1/slaves/652149b4-3932-4d8b-ba6f-8c9d9045be70-S0/frameworks/652149b4-3932-4d8b-ba6f-8c9d9045be70-0000/executors/1/runs/207172a3-0ebd-4faa-946b-75a829fc75fc' > for gc 6.99999614056593days in the future > I0222 18:16:14.333669 26681 docker.cpp:766] Running docker -H > unix:///var/run/docker.sock inspect > 
mesos-652149b4-3932-4d8b-ba6f-8c9d9045be70-S0.207172a3-0ebd-4faa-946b-75a829fc75fc > I0222 18:16:14.333704 26700 slave.cpp:4078] Cleaning up framework > 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 > I0222 18:16:14.333726 26703 gc.cpp:54] Scheduling > '/tmp/DockerContainerizerTest_ROOT_DOCKER_LaunchWithPersistentVolumes_U5vZX1/slaves/652149b4-3932-4d8b-ba6f-8c9d9045be70-S0/frameworks/652149b4-3932-4d8b-ba6f-8c9d9045be70-0000/executors/1' > for gc 6.99999613825185days in the future > I0222 18:16:14.336545 26703 gc.cpp:54] Scheduling > '/tmp/DockerContainerizerTest_ROOT_DOCKER_LaunchWithPersistentVolumes_U5vZX1/slaves/652149b4-3932-4d8b-ba6f-8c9d9045be70-S0/frameworks/652149b4-3932-4d8b-ba6f-8c9d9045be70-0000' > for gc 6.9999961115763days in the future > I0222 18:16:14.336699 26701 status_update_manager.cpp:282] Closing status > update streams for framework 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 > I0222 18:16:14.338240 26699 hierarchical.cpp:1434] No resources available to > allocate! > I0222 18:16:14.338270 26699 hierarchical.cpp:1127] Performed allocation for 1 > slaves in 191822ns > I0222 18:16:14.635416 26681 docker.cpp:766] Running docker -H > unix:///var/run/docker.sock inspect > mesos-652149b4-3932-4d8b-ba6f-8c9d9045be70-S0.207172a3-0ebd-4faa-946b-75a829fc75fc > I0222 18:16:14.940042 26681 docker.cpp:766] Running docker -H > unix:///var/run/docker.sock inspect > mesos-652149b4-3932-4d8b-ba6f-8c9d9045be70-S0.207172a3-0ebd-4faa-946b-75a829fc75fc > I0222 18:16:15.245256 26681 docker.cpp:766] Running docker -H > unix:///var/run/docker.sock inspect > mesos-652149b4-3932-4d8b-ba6f-8c9d9045be70-S0.207172a3-0ebd-4faa-946b-75a829fc75fc > I0222 18:16:15.339015 26697 hierarchical.cpp:1434] No resources available to > allocate! 
> I0222 18:16:15.339053 26697 hierarchical.cpp:1127] Performed allocation for 1 > slaves in 265093ns > I0222 18:16:15.549804 26681 docker.cpp:766] Running docker -H > unix:///var/run/docker.sock inspect > mesos-652149b4-3932-4d8b-ba6f-8c9d9045be70-S0.207172a3-0ebd-4faa-946b-75a829fc75fc > I0222 18:16:15.854646 26681 docker.cpp:766] Running docker -H > unix:///var/run/docker.sock inspect > mesos-652149b4-3932-4d8b-ba6f-8c9d9045be70-S0.207172a3-0ebd-4faa-946b-75a829fc75fc > I0222 18:16:16.159210 26681 docker.cpp:766] Running docker -H > unix:///var/run/docker.sock inspect > mesos-652149b4-3932-4d8b-ba6f-8c9d9045be70-S0.207172a3-0ebd-4faa-946b-75a829fc75fc > I0222 18:16:16.339910 26698 hierarchical.cpp:1434] No resources available to > allocate! > I0222 18:16:16.339951 26698 hierarchical.cpp:1127] Performed allocation for 1 > slaves in 255857ns > I0222 18:16:16.463809 26681 docker.cpp:766] Running docker -H > unix:///var/run/docker.sock inspect > mesos-652149b4-3932-4d8b-ba6f-8c9d9045be70-S0.207172a3-0ebd-4faa-946b-75a829fc75fc > I0222 18:16:16.768708 26681 docker.cpp:766] Running docker -H > unix:///var/run/docker.sock inspect > mesos-652149b4-3932-4d8b-ba6f-8c9d9045be70-S0.207172a3-0ebd-4faa-946b-75a829fc75fc > I0222 18:16:17.073479 26681 docker.cpp:766] Running docker -H > unix:///var/run/docker.sock inspect > mesos-652149b4-3932-4d8b-ba6f-8c9d9045be70-S0.207172a3-0ebd-4faa-946b-75a829fc75fc > I0222 18:16:17.340798 26696 hierarchical.cpp:1434] No resources available to > allocate! 
> I0222 18:16:17.340864 26696 hierarchical.cpp:1127] Performed allocation for 1 > slaves in 260467ns > I0222 18:16:17.377902 26681 docker.cpp:766] Running docker -H > unix:///var/run/docker.sock inspect > mesos-652149b4-3932-4d8b-ba6f-8c9d9045be70-S0.207172a3-0ebd-4faa-946b-75a829fc75fc > I0222 18:16:17.683398 26681 docker.cpp:766] Running docker -H > unix:///var/run/docker.sock inspect > mesos-652149b4-3932-4d8b-ba6f-8c9d9045be70-S0.207172a3-0ebd-4faa-946b-75a829fc75fc > I0222 18:16:17.988231 26681 docker.cpp:766] Running docker -H > unix:///var/run/docker.sock inspect > mesos-652149b4-3932-4d8b-ba6f-8c9d9045be70-S0.207172a3-0ebd-4faa-946b-75a829fc75fc > I0222 18:16:18.292505 26681 docker.cpp:766] Running docker -H > unix:///var/run/docker.sock inspect > mesos-652149b4-3932-4d8b-ba6f-8c9d9045be70-S0.207172a3-0ebd-4faa-946b-75a829fc75fc > I0222 18:16:18.330112 26700 slave.cpp:4231] Framework > 652149b4-3932-4d8b-ba6f-8c9d9045be70-0000 seems to have exited. Ignoring > shutdown timeout for executor '1' > I0222 18:16:18.341600 26702 hierarchical.cpp:1434] No resources available to > allocate! > I0222 18:16:18.341634 26702 hierarchical.cpp:1127] Performed allocation for 1 > slaves in 252012ns > I0222 18:16:18.596279 26681 docker.cpp:766] Running docker -H > unix:///var/run/docker.sock inspect > mesos-652149b4-3932-4d8b-ba6f-8c9d9045be70-S0.207172a3-0ebd-4faa-946b-75a829fc75fc > I0222 18:16:18.901157 26681 docker.cpp:766] Running docker -H > unix:///var/run/docker.sock inspect > mesos-652149b4-3932-4d8b-ba6f-8c9d9045be70-S0.207172a3-0ebd-4faa-946b-75a829fc75fc > I0222 18:16:19.204834 26681 docker.cpp:766] Running docker -H > unix:///var/run/docker.sock inspect > mesos-652149b4-3932-4d8b-ba6f-8c9d9045be70-S0.207172a3-0ebd-4faa-946b-75a829fc75fc > I0222 18:16:19.342326 26699 hierarchical.cpp:1434] No resources available to > allocate! 
> I0222 18:16:19.342358 26699 hierarchical.cpp:1127] Performed allocation for 1 > slaves in 186829ns > I0222 18:16:19.508533 26681 docker.cpp:766] Running docker -H > unix:///var/run/docker.sock inspect > mesos-652149b4-3932-4d8b-ba6f-8c9d9045be70-S0.207172a3-0ebd-4faa-946b-75a829fc75fc > I0222 18:16:19.812255 26681 docker.cpp:766] Running docker -H > unix:///var/run/docker.sock inspect > mesos-652149b4-3932-4d8b-ba6f-8c9d9045be70-S0.207172a3-0ebd-4faa-946b-75a829fc75fc > I0222 18:16:20.116345 26681 docker.cpp:766] Running docker -H > unix:///var/run/docker.sock inspect > mesos-652149b4-3932-4d8b-ba6f-8c9d9045be70-S0.207172a3-0ebd-4faa-946b-75a829fc75fc > I0222 18:16:20.343556 26698 hierarchical.cpp:1434] No resources available to > allocate! > I0222 18:16:20.343588 26698 hierarchical.cpp:1127] Performed allocation for 1 > slaves in 194704ns > I0222 18:16:20.420814 26681 docker.cpp:766] Running docker -H > unix:///var/run/docker.sock inspect > mesos-652149b4-3932-4d8b-ba6f-8c9d9045be70-S0.207172a3-0ebd-4faa-946b-75a829fc75fc > I0222 18:16:20.724819 26681 docker.cpp:766] Running docker -H > unix:///var/run/docker.sock inspect > mesos-652149b4-3932-4d8b-ba6f-8c9d9045be70-S0.207172a3-0ebd-4faa-946b-75a829fc75fc > I0222 18:16:21.029549 26681 docker.cpp:766] Running docker -H > unix:///var/run/docker.sock inspect > mesos-652149b4-3932-4d8b-ba6f-8c9d9045be70-S0.207172a3-0ebd-4faa-946b-75a829fc75fc > I0222 18:16:21.334319 26681 docker.cpp:766] Running docker -H > unix:///var/run/docker.sock inspect > mesos-652149b4-3932-4d8b-ba6f-8c9d9045be70-S0.207172a3-0ebd-4faa-946b-75a829fc75fc > I0222 18:16:21.344859 26702 hierarchical.cpp:1434] No resources available to > allocate! 
> I0222 18:16:21.344892 26702 hierarchical.cpp:1127] Performed allocation for 1 > slaves in 241099ns > I0222 18:16:21.638164 26681 docker.cpp:766] Running docker -H > unix:///var/run/docker.sock inspect > mesos-652149b4-3932-4d8b-ba6f-8c9d9045be70-S0.207172a3-0ebd-4faa-946b-75a829fc75fc > ../../src/tests/containerizer/docker_containerizer_tests.cpp:1434: Failure > os::read(path::join(volumePath, "file")): Failed to open file > '/tmp/DockerContainerizerTest_ROOT_DOCKER_LaunchWithPersistentVolumes_U5vZX1/volumes/roles/role1/id1/file': > No such file or directory > I0222 18:16:21.943008 26703 master.cpp:1027] Master terminating > I0222 18:16:21.943635 26696 hierarchical.cpp:505] Removed slave > 652149b4-3932-4d8b-ba6f-8c9d9045be70-S0 > I0222 18:16:21.943989 26702 slave.cpp:3528] master@172.30.2.148:35274 exited > W0222 18:16:21.944016 26702 slave.cpp:3531] Master disconnected! Waiting for > a new master to be elected > I0222 18:16:21.948807 26699 slave.cpp:668] Slave terminating > I0222 18:16:21.951902 26681 docker.cpp:885] Running docker -H > unix:///var/run/docker.sock ps -a > I0222 18:16:22.044273 26698 docker.cpp:766] Running docker -H > unix:///var/run/docker.sock inspect > mesos-652149b4-3932-4d8b-ba6f-8c9d9045be70-S0.207172a3-0ebd-4faa-946b-75a829fc75fc > I0222 18:16:22.148877 26681 docker.cpp:727] Running docker -H > unix:///var/run/docker.sock rm -f -v > 422bfef31d51d2d3d2aafcf49b3e502654354bd98a98b076f4089b9a8e274d05 > [ FAILED ] DockerContainerizerTest.ROOT_DOCKER_LaunchWithPersistentVolumes > (10535 ms) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
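A note on the suspected mechanism, for anyone reproducing this: the failing assertion above ({{os::read(path::join(volumePath, "file"))}}) means a file written inside the container under the volume path never appeared on the host side. That is consistent with the comment's hypothesis: Mesos bind-mounts the persistent volume *under* the sandbox, and if Docker 1.7.1's {{-v}} then does a plain (non-recursive) bind of the sandbox, the persistent-volume submount is not carried into the container, so writes land in the empty underlying directory. The mount-propagation difference can be sketched without Docker or root using an unprivileged user+mount namespace (assumes Linux with util-linux {{unshare}}; all paths below are made up for illustration, not the test's actual paths):

```shell
# Sketch: why a plain bind mount hides a submount while a recursive bind keeps it.
# Runs in a throwaway user+mount namespace (unshare -rm), so no real root needed.
unshare -rm sh -s <<'EOF'
set -e
base=$(mktemp -d)
mkdir -p "$base/sandbox/vol" "$base/volume" "$base/container"
echo hello > "$base/volume/file"

# Step 1 (Mesos side): mount the persistent volume into the sandbox.
mount --bind "$base/volume" "$base/sandbox/vol"

# Step 2a (Docker 1.7.1 behavior, per the hypothesis): PLAIN bind of the sandbox.
# The submount at sandbox/vol is NOT replicated; container/vol is an empty dir.
mount --bind "$base/sandbox" "$base/container"
ls "$base/container/vol"        # prints nothing: the file is invisible
umount "$base/container"

# Step 2b (what is needed): RECURSIVE bind, which carries submounts along.
mount --rbind "$base/sandbox" "$base/container"
cat "$base/container/vol/file"  # prints: hello
EOF
```

If the sketch holds, downgrading a newer OS to Docker 1.7.1 (as proposed above) should reproduce the failure there too, since the behavior depends on the Docker version's bind-mount handling rather than on CentOS 6 itself.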