[ https://issues.apache.org/jira/browse/MESOS-1594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Greg Mann reassigned MESOS-1594:
--------------------------------

    Assignee: Greg Mann

> SlaveRecoveryTest/0.ReconcileKillTask is flaky
> ----------------------------------------------
>
>                  Key: MESOS-1594
>                  URL: https://issues.apache.org/jira/browse/MESOS-1594
>              Project: Mesos
>           Issue Type: Bug
>           Components: test
>     Affects Versions: 0.20.0
>          Environment: Ubuntu 12.10 with GCC
>             Reporter: Vinod Kone
>             Assignee: Greg Mann
>               Labels: flaky
>
> Observed this on Jenkins.
> {code}
> [ RUN      ] SlaveRecoveryTest/0.ReconcileKillTask
> Using temporary directory '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_3zJ6DG'
> I0714 15:08:43.915114 27216 leveldb.cpp:176] Opened db in 474.695188ms
> I0714 15:08:43.933645 27216 leveldb.cpp:183] Compacted db in 18.068942ms
> I0714 15:08:43.934129 27216 leveldb.cpp:198] Created db iterator in 7860ns
> I0714 15:08:43.934439 27216 leveldb.cpp:204] Seeked to beginning of db in 2560ns
> I0714 15:08:43.934779 27216 leveldb.cpp:273] Iterated through 0 keys in the db in 1400ns
> I0714 15:08:43.935098 27216 replica.cpp:741] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned
> I0714 15:08:43.936027 27238 recover.cpp:425] Starting replica recovery
> I0714 15:08:43.936225 27238 recover.cpp:451] Replica is in EMPTY status
> I0714 15:08:43.936867 27238 replica.cpp:638] Replica in EMPTY status received a broadcasted recover request
> I0714 15:08:43.937049 27238 recover.cpp:188] Received a recover response from a replica in EMPTY status
> I0714 15:08:43.937232 27238 recover.cpp:542] Updating replica status to STARTING
> I0714 15:08:43.945600 27235 master.cpp:288] Master 20140714-150843-16842879-55850-27216 (quantal) started on 127.0.1.1:55850
> I0714 15:08:43.945643 27235 master.cpp:325] Master only allowing authenticated frameworks to register
> I0714 15:08:43.945651 27235 master.cpp:330] Master only allowing authenticated slaves to register
> I0714 15:08:43.945658 27235 credentials.hpp:36] Loading credentials for authentication from '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_3zJ6DG/credentials'
> I0714 15:08:43.945808 27235 master.cpp:359] Authorization enabled
> I0714 15:08:43.946369 27235 hierarchical_allocator_process.hpp:301] Initializing hierarchical allocator process with master : master@127.0.1.1:55850
> I0714 15:08:43.946419 27235 master.cpp:122] No whitelist given. Advertising offers for all slaves
> I0714 15:08:43.946614 27235 master.cpp:1128] The newly elected leader is master@127.0.1.1:55850 with id 20140714-150843-16842879-55850-27216
> I0714 15:08:43.946630 27235 master.cpp:1141] Elected as the leading master!
> I0714 15:08:43.946637 27235 master.cpp:959] Recovering from registrar
> I0714 15:08:43.946707 27235 registrar.cpp:313] Recovering registrar
> I0714 15:08:43.957895 27238 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 20.529301ms
> I0714 15:08:43.957978 27238 replica.cpp:320] Persisted replica status to STARTING
> I0714 15:08:43.958142 27238 recover.cpp:451] Replica is in STARTING status
> I0714 15:08:43.958664 27238 replica.cpp:638] Replica in STARTING status received a broadcasted recover request
> I0714 15:08:43.958762 27238 recover.cpp:188] Received a recover response from a replica in STARTING status
> I0714 15:08:43.958945 27238 recover.cpp:542] Updating replica status to VOTING
> I0714 15:08:43.975685 27238 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 16.646136ms
> I0714 15:08:43.976367 27238 replica.cpp:320] Persisted replica status to VOTING
> I0714 15:08:43.976824 27241 recover.cpp:556] Successfully joined the Paxos group
> I0714 15:08:43.977072 27242 recover.cpp:440] Recover process terminated
> I0714 15:08:43.980590 27236 log.cpp:656] Attempting to start the writer
> I0714 15:08:43.981385 27236 replica.cpp:474] Replica received implicit promise request with proposal 1
> I0714 15:08:43.999141 27236 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 17.705787ms
> I0714 15:08:43.999222 27236 replica.cpp:342] Persisted promised to 1
> I0714 15:08:44.004451 27240 coordinator.cpp:230] Coordinator attemping to fill missing position
> I0714 15:08:44.004914 27240 replica.cpp:375] Replica received explicit promise request for position 0 with proposal 2
> I0714 15:08:44.021456 27240 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 16.499775ms
> I0714 15:08:44.021533 27240 replica.cpp:676] Persisted action at 0
> I0714 15:08:44.022006 27240 replica.cpp:508] Replica received write request for position 0
> I0714 15:08:44.022043 27240 leveldb.cpp:438] Reading position from leveldb took 21376ns
> I0714 15:08:44.035969 27240 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 13.885907ms
> I0714 15:08:44.036365 27240 replica.cpp:676] Persisted action at 0
> I0714 15:08:44.040156 27238 replica.cpp:655] Replica received learned notice for position 0
> I0714 15:08:44.058082 27238 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 17.860707ms
> I0714 15:08:44.058161 27238 replica.cpp:676] Persisted action at 0
> I0714 15:08:44.058176 27238 replica.cpp:661] Replica learned NOP action at position 0
> I0714 15:08:44.058526 27238 log.cpp:672] Writer started with ending position 0
> I0714 15:08:44.058872 27238 leveldb.cpp:438] Reading position from leveldb took 25660ns
> I0714 15:08:44.060556 27238 registrar.cpp:346] Successfully fetched the registry (0B)
> I0714 15:08:44.060845 27238 registrar.cpp:422] Attempting to update the 'registry'
> I0714 15:08:44.062304 27238 log.cpp:680] Attempting to append 120 bytes to the log
> I0714 15:08:44.062866 27236 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 1
> I0714 15:08:44.063154 27236 replica.cpp:508] Replica received write request for position 1
> I0714 15:08:44.082813 27236 leveldb.cpp:343] Persisting action (137 bytes) to leveldb took 19.61683ms
> I0714 15:08:44.082890 27236 replica.cpp:676] Persisted action at 1
> I0714 15:08:44.083256 27236 replica.cpp:655] Replica received learned notice for position 1
> I0714 15:08:44.097398 27236 leveldb.cpp:343] Persisting action (139 bytes) to leveldb took 14.104796ms
> I0714 15:08:44.097475 27236 replica.cpp:676] Persisted action at 1
> I0714 15:08:44.097488 27236 replica.cpp:661] Replica learned APPEND action at position 1
> I0714 15:08:44.098569 27236 registrar.cpp:479] Successfully updated 'registry'
> I0714 15:08:44.098906 27240 log.cpp:699] Attempting to truncate the log to 1
> I0714 15:08:44.099608 27240 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 2
> I0714 15:08:44.100005 27240 replica.cpp:508] Replica received write request for position 2
> I0714 15:08:44.100566 27236 registrar.cpp:372] Successfully recovered registrar
> I0714 15:08:44.101227 27239 master.cpp:986] Recovered 0 slaves from the Registry (84B) ; allowing 10mins for slaves to re-register
> I0714 15:08:44.118376 27240 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 18.329495ms
> I0714 15:08:44.118455 27240 replica.cpp:676] Persisted action at 2
> I0714 15:08:44.122258 27242 replica.cpp:655] Replica received learned notice for position 2
> I0714 15:08:44.137336 27242 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 15.023553ms
> I0714 15:08:44.137460 27242 leveldb.cpp:401] Deleting ~1 keys from leveldb took 55049ns
> I0714 15:08:44.137480 27242 replica.cpp:676] Persisted action at 2
> I0714 15:08:44.137492 27242 replica.cpp:661] Replica learned TRUNCATE action at position 2
> I0714 15:08:44.143729 27216 containerizer.cpp:124] Using isolation: posix/cpu,posix/mem
> I0714 15:08:44.145934 27242 slave.cpp:168] Slave started on 43)@127.0.1.1:55850
> I0714 15:08:44.145953 27242 credentials.hpp:84] Loading credential for authentication from '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/credential'
> I0714 15:08:44.146040 27242 slave.cpp:266] Slave using credential for: test-principal
> I0714 15:08:44.146136 27242 slave.cpp:279] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]
> I0714 15:08:44.146198 27242 slave.cpp:324] Slave hostname: quantal
> I0714 15:08:44.146209 27242 slave.cpp:325] Slave checkpoint: true
> I0714 15:08:44.146708 27242 state.cpp:33] Recovering state from '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/meta'
> I0714 15:08:44.146824 27242 status_update_manager.cpp:193] Recovering status update manager
> I0714 15:08:44.146901 27242 containerizer.cpp:287] Recovering containerizer
> I0714 15:08:44.147228 27242 slave.cpp:3126] Finished recovery
> I0714 15:08:44.147531 27242 slave.cpp:599] New master detected at master@127.0.1.1:55850
> I0714 15:08:44.147562 27242 slave.cpp:675] Authenticating with master master@127.0.1.1:55850
> I0714 15:08:44.147614 27242 slave.cpp:648] Detecting new master
> I0714 15:08:44.147652 27242 status_update_manager.cpp:167] New master detected at master@127.0.1.1:55850
> I0714 15:08:44.147691 27242 authenticatee.hpp:128] Creating new client SASL connection
> I0714 15:08:44.148533 27235 master.cpp:3507] Authenticating slave(43)@127.0.1.1:55850
> I0714 15:08:44.148666 27235 authenticator.hpp:156] Creating new server SASL connection
> I0714 15:08:44.149054 27242 authenticatee.hpp:219] Received SASL authentication mechanisms: CRAM-MD5
> I0714 15:08:44.149447 27242 authenticatee.hpp:245] Attempting to authenticate with mechanism 'CRAM-MD5'
> I0714 15:08:44.149917 27236 authenticator.hpp:262] Received SASL authentication start
> I0714 15:08:44.149974 27236 authenticator.hpp:384] Authentication requires more steps
> I0714 15:08:44.150208 27242 authenticatee.hpp:265] Received SASL authentication step
> I0714 15:08:44.150720 27239 authenticator.hpp:290] Received SASL authentication step
> I0714 15:08:44.150749 27239 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'quantal' server FQDN: 'quantal' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false
> I0714 15:08:44.150758 27239 auxprop.cpp:153] Looking up auxiliary property '*userPassword'
> I0714 15:08:44.150771 27239 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5'
> I0714 15:08:44.150781 27239 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'quantal' server FQDN: 'quantal' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true
> I0714 15:08:44.150787 27239 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true
> I0714 15:08:44.150792 27239 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
> I0714 15:08:44.150804 27239 authenticator.hpp:376] Authentication success
> I0714 15:08:44.150848 27239 master.cpp:3547] Successfully authenticated principal 'test-principal' at slave(43)@127.0.1.1:55850
> I0714 15:08:44.157696 27242 authenticatee.hpp:305] Authentication success
> I0714 15:08:44.158855 27242 slave.cpp:732] Successfully authenticated with master master@127.0.1.1:55850
> I0714 15:08:44.158936 27242 slave.cpp:970] Will retry registration in 10.352612ms if necessary
> I0714 15:08:44.161813 27216 sched.cpp:139] Version: 0.20.0
> I0714 15:08:44.162608 27236 sched.cpp:235] New master detected at master@127.0.1.1:55850
> I0714 15:08:44.162637 27236 sched.cpp:285] Authenticating with master master@127.0.1.1:55850
> I0714 15:08:44.162747 27236 authenticatee.hpp:128] Creating new client SASL connection
> I0714 15:08:44.163506 27239 master.cpp:2789] Registering slave at slave(43)@127.0.1.1:55850 (quantal) with id 20140714-150843-16842879-55850-27216-0
> I0714 15:08:44.164086 27238 registrar.cpp:422] Attempting to update the 'registry'
> I0714 15:08:44.165694 27238 log.cpp:680] Attempting to append 295 bytes to the log
> I0714 15:08:44.166231 27240 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 3
> I0714 15:08:44.166517 27240 replica.cpp:508] Replica received write request for position 3
> I0714 15:08:44.167199 27239 master.cpp:3507] Authenticating scheduler-225679c4-a9fd-4119-9deb-c7712eba37e1@127.0.1.1:55850
> I0714 15:08:44.167867 27241 authenticator.hpp:156] Creating new server SASL connection
> I0714 15:08:44.168058 27241 authenticatee.hpp:219] Received SASL authentication mechanisms: CRAM-MD5
> I0714 15:08:44.168081 27241 authenticatee.hpp:245] Attempting to authenticate with mechanism 'CRAM-MD5'
> I0714 15:08:44.168107 27241 authenticator.hpp:262] Received SASL authentication start
> I0714 15:08:44.168149 27241 authenticator.hpp:384] Authentication requires more steps
> I0714 15:08:44.168176 27241 authenticatee.hpp:265] Received SASL authentication step
> I0714 15:08:44.168215 27241 authenticator.hpp:290] Received SASL authentication step
> I0714 15:08:44.168233 27241 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'quantal' server FQDN: 'quantal' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false
> I0714 15:08:44.168793 27241 auxprop.cpp:153] Looking up auxiliary property '*userPassword'
> I0714 15:08:44.168820 27241 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5'
> I0714 15:08:44.168834 27241 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'quantal' server FQDN: 'quantal' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true
> I0714 15:08:44.168840 27241 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true
> I0714 15:08:44.168845 27241 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
> I0714 15:08:44.168858 27241 authenticator.hpp:376] Authentication success
> I0714 15:08:44.168895 27241 authenticatee.hpp:305] Authentication success
> I0714 15:08:44.168970 27241 sched.cpp:359] Successfully authenticated with master master@127.0.1.1:55850
> I0714 15:08:44.168987 27241 sched.cpp:478] Sending registration request to master@127.0.1.1:55850
> I0714 15:08:44.169426 27239 master.cpp:1239] Queuing up registration request from scheduler-225679c4-a9fd-4119-9deb-c7712eba37e1@127.0.1.1:55850 because authentication is still in progress
> I0714 15:08:44.169958 27239 master.cpp:3547] Successfully authenticated principal 'test-principal' at scheduler-225679c4-a9fd-4119-9deb-c7712eba37e1@127.0.1.1:55850
> I0714 15:08:44.170440 27241 slave.cpp:970] Will retry registration in 8.76707ms if necessary
> I0714 15:08:44.175359 27239 master.cpp:2777] Ignoring register slave message from slave(43)@127.0.1.1:55850 (quantal) as admission is already in progress
> I0714 15:08:44.175916 27239 master.cpp:1247] Received registration request from scheduler-225679c4-a9fd-4119-9deb-c7712eba37e1@127.0.1.1:55850
> I0714 15:08:44.176298 27239 master.cpp:1207] Authorizing framework principal 'test-principal' to receive offers for role '*'
> I0714 15:08:44.176858 27239 master.cpp:1306] Registering framework 20140714-150843-16842879-55850-27216-0000 at scheduler-225679c4-a9fd-4119-9deb-c7712eba37e1@127.0.1.1:55850
> I0714 15:08:44.177408 27236 sched.cpp:409] Framework registered with 20140714-150843-16842879-55850-27216-0000
> I0714 15:08:44.177443 27236 sched.cpp:423] Scheduler::registered took 12527ns
> I0714 15:08:44.177727 27241 hierarchical_allocator_process.hpp:331] Added framework 20140714-150843-16842879-55850-27216-0000
> I0714 15:08:44.177747 27241 hierarchical_allocator_process.hpp:724] No resources available to allocate!
> I0714 15:08:44.177753 27241 hierarchical_allocator_process.hpp:686] Performed allocation for 0 slaves in 8120ns
> I0714 15:08:44.179908 27241 slave.cpp:970] Will retry registration in 66.781028ms if necessary
> I0714 15:08:44.180007 27241 master.cpp:2777] Ignoring register slave message from slave(43)@127.0.1.1:55850 (quantal) as admission is already in progress
> I0714 15:08:44.183082 27240 leveldb.cpp:343] Persisting action (314 bytes) to leveldb took 16.533189ms
> I0714 15:08:44.183125 27240 replica.cpp:676] Persisted action at 3
> I0714 15:08:44.183465 27240 replica.cpp:655] Replica received learned notice for position 3
> I0714 15:08:44.203276 27240 leveldb.cpp:343] Persisting action (316 bytes) to leveldb took 19.768951ms
> I0714 15:08:44.203376 27240 replica.cpp:676] Persisted action at 3
> I0714 15:08:44.203392 27240 replica.cpp:661] Replica learned APPEND action at position 3
> I0714 15:08:44.204033 27240 registrar.cpp:479] Successfully updated 'registry'
> I0714 15:08:44.204138 27240 log.cpp:699] Attempting to truncate the log to 3
> I0714 15:08:44.204221 27240 master.cpp:2829] Registered slave 20140714-150843-16842879-55850-27216-0 at slave(43)@127.0.1.1:55850 (quantal)
> I0714 15:08:44.204241 27240 master.cpp:3975] Adding slave 20140714-150843-16842879-55850-27216-0 at slave(43)@127.0.1.1:55850 (quantal) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]
> I0714 15:08:44.204387 27240 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 4
> I0714 15:08:44.204489 27240 slave.cpp:766] Registered with master master@127.0.1.1:55850; given slave ID 20140714-150843-16842879-55850-27216-0
> I0714 15:08:44.204745 27240 slave.cpp:779] Checkpointing SlaveInfo to '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/meta/slaves/20140714-150843-16842879-55850-27216-0/slave.info'
> I0714 15:08:44.204954 27240 hierarchical_allocator_process.hpp:444] Added slave 20140714-150843-16842879-55850-27216-0 (quantal) with cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (and cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] available)
> I0714 15:08:44.205023 27240 hierarchical_allocator_process.hpp:750] Offering cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20140714-150843-16842879-55850-27216-0 to framework 20140714-150843-16842879-55850-27216-0000
> I0714 15:08:44.205122 27240 hierarchical_allocator_process.hpp:706] Performed allocation for slave 20140714-150843-16842879-55850-27216-0 in 131192ns
> I0714 15:08:44.205189 27240 slave.cpp:2323] Received ping from slave-observer(32)@127.0.1.1:55850
> I0714 15:08:44.205258 27240 master.hpp:801] Adding offer 20140714-150843-16842879-55850-27216-0 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20140714-150843-16842879-55850-27216-0 (quantal)
> I0714 15:08:44.205303 27240 master.cpp:3454] Sending 1 offers to framework 20140714-150843-16842879-55850-27216-0000
> I0714 15:08:44.205469 27240 sched.cpp:546] Scheduler::resourceOffers took 23591ns
> I0714 15:08:44.206351 27241 replica.cpp:508] Replica received write request for position 4
> I0714 15:08:44.208353 27237 master.hpp:811] Removing offer 20140714-150843-16842879-55850-27216-0 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20140714-150843-16842879-55850-27216-0 (quantal)
> I0714 15:08:44.208436 27237 master.cpp:2133] Processing reply for offers: [ 20140714-150843-16842879-55850-27216-0 ] on slave 20140714-150843-16842879-55850-27216-0 at slave(43)@127.0.1.1:55850 (quantal) for framework 20140714-150843-16842879-55850-27216-0000
> I0714 15:08:44.208472 27237 master.cpp:2219] Authorizing framework principal 'test-principal' to launch task 4a6783aa-8d07-46e3-8399-2a5d047f0021 as user 'jenkins'
> I0714 15:08:44.208909 27237 master.hpp:773] Adding task 4a6783aa-8d07-46e3-8399-2a5d047f0021 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20140714-150843-16842879-55850-27216-0 (quantal)
> I0714 15:08:44.208947 27237 master.cpp:2285] Launching task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 with resources cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave 20140714-150843-16842879-55850-27216-0 at slave(43)@127.0.1.1:55850 (quantal)
> I0714 15:08:44.209090 27237 slave.cpp:1001] Got assigned task 4a6783aa-8d07-46e3-8399-2a5d047f0021 for framework 20140714-150843-16842879-55850-27216-0000
> I0714 15:08:44.209190 27237 slave.cpp:3398] Checkpointing FrameworkInfo to '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/meta/slaves/20140714-150843-16842879-55850-27216-0/frameworks/20140714-150843-16842879-55850-27216-0000/framework.info'
> I0714 15:08:44.209413 27237 slave.cpp:3405] Checkpointing framework pid 'scheduler-225679c4-a9fd-4119-9deb-c7712eba37e1@127.0.1.1:55850' to '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/meta/slaves/20140714-150843-16842879-55850-27216-0/frameworks/20140714-150843-16842879-55850-27216-0000/framework.pid'
> I0714 15:08:44.209710 27237 slave.cpp:1111] Launching task 4a6783aa-8d07-46e3-8399-2a5d047f0021 for framework 20140714-150843-16842879-55850-27216-0000
> I0714 15:08:44.210978 27237 slave.cpp:3720] Checkpointing ExecutorInfo to '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/meta/slaves/20140714-150843-16842879-55850-27216-0/frameworks/20140714-150843-16842879-55850-27216-0000/executors/4a6783aa-8d07-46e3-8399-2a5d047f0021/executor.info'
> I0714 15:08:44.211520 27237 slave.cpp:3835] Checkpointing TaskInfo to '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/meta/slaves/20140714-150843-16842879-55850-27216-0/frameworks/20140714-150843-16842879-55850-27216-0000/executors/4a6783aa-8d07-46e3-8399-2a5d047f0021/runs/19c466f8-bb5a-4842-a152-f585ff88762a/tasks/4a6783aa-8d07-46e3-8399-2a5d047f0021/task.info'
> I0714 15:08:44.211714 27237 slave.cpp:1221] Queuing task '4a6783aa-8d07-46e3-8399-2a5d047f0021' for executor 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework '20140714-150843-16842879-55850-27216-0000
> I0714 15:08:44.211937 27236 containerizer.cpp:427] Starting container '19c466f8-bb5a-4842-a152-f585ff88762a' for executor '4a6783aa-8d07-46e3-8399-2a5d047f0021' of framework '20140714-150843-16842879-55850-27216-0000'
> I0714 15:08:44.212242 27236 slave.cpp:560] Successfully attached file '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/slaves/20140714-150843-16842879-55850-27216-0/frameworks/20140714-150843-16842879-55850-27216-0000/executors/4a6783aa-8d07-46e3-8399-2a5d047f0021/runs/19c466f8-bb5a-4842-a152-f585ff88762a'
> I0714 15:08:44.216187 27236 launcher.cpp:137] Forked child with pid '28451' for container '19c466f8-bb5a-4842-a152-f585ff88762a'
> I0714 15:08:44.217281 27236 containerizer.cpp:705] Checkpointing executor's forked pid 28451 to '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/meta/slaves/20140714-150843-16842879-55850-27216-0/frameworks/20140714-150843-16842879-55850-27216-0000/executors/4a6783aa-8d07-46e3-8399-2a5d047f0021/runs/19c466f8-bb5a-4842-a152-f585ff88762a/pids/forked.pid'
> I0714 15:08:44.219408 27236 containerizer.cpp:537] Fetching URIs for container '19c466f8-bb5a-4842-a152-f585ff88762a' using command '/var/jenkins/workspace/mesos-ubuntu-12.10-gcc/src/mesos-fetcher'
> I0714 15:08:44.223963 27241 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 17.554461ms
> I0714 15:08:44.224501 27241 replica.cpp:676] Persisted action at 4
> I0714 15:08:44.225051 27241 replica.cpp:655] Replica received learned notice for position 4
> I0714 15:08:44.242923 27241 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 17.806547ms
> I0714 15:08:44.243057 27241 leveldb.cpp:401] Deleting ~2 keys from leveldb took 57154ns
> I0714 15:08:44.243078 27241 replica.cpp:676] Persisted action at 4
> I0714 15:08:44.243096 27241 replica.cpp:661] Replica learned TRUNCATE action at position 4
> I0714 15:08:44.401140 27241 slave.cpp:2468] Monitoring executor '4a6783aa-8d07-46e3-8399-2a5d047f0021' of framework '20140714-150843-16842879-55850-27216-0000' in container '19c466f8-bb5a-4842-a152-f585ff88762a'
> WARNING: Logging before InitGoogleLogging() is written to STDERR
> I0714 15:08:44.434221 28486 process.cpp:1671] libprocess is initialized on 127.0.1.1:34669 for 8 cpus
> I0714 15:08:44.436146 28486 exec.cpp:131] Version: 0.20.0
> I0714 15:08:44.438555 28500 exec.cpp:181] Executor started at: executor(1)@127.0.1.1:34669 with pid 28486
> I0714 15:08:44.440846 27241 slave.cpp:1732] Got registration for executor '4a6783aa-8d07-46e3-8399-2a5d047f0021' of framework 20140714-150843-16842879-55850-27216-0000
> I0714 15:08:44.440917 27241 slave.cpp:1817] Checkpointing executor pid 'executor(1)@127.0.1.1:34669' to '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/meta/slaves/20140714-150843-16842879-55850-27216-0/frameworks/20140714-150843-16842879-55850-27216-0000/executors/4a6783aa-8d07-46e3-8399-2a5d047f0021/runs/19c466f8-bb5a-4842-a152-f585ff88762a/pids/libprocess.pid'
> I0714 15:08:44.442373 27243 process.cpp:1098] Socket closed while receiving
> I0714 15:08:44.442790 27241 slave.cpp:1851] Flushing queued task 4a6783aa-8d07-46e3-8399-2a5d047f0021 for executor '4a6783aa-8d07-46e3-8399-2a5d047f0021' of framework 20140714-150843-16842879-55850-27216-0000
> I0714 15:08:44.443192 27243 process.cpp:1098] Socket closed while receiving
> I0714 15:08:44.443994 28508 process.cpp:1037] Socket closed while receiving
> I0714 15:08:44.444144 28508 process.cpp:1037] Socket closed while receiving
> I0714 15:08:44.444741 28500 exec.cpp:205] Executor registered on slave 20140714-150843-16842879-55850-27216-0
> Registered executor on quantal
> I0714 15:08:44.446338 28500 exec.cpp:217] Executor::registered took 534236ns
> I0714 15:08:44.446715 28500 exec.cpp:292] Executor asked to run task '4a6783aa-8d07-46e3-8399-2a5d047f0021'
> Starting task 4a6783aa-8d07-46e3-8399-2a5d047f0021
> I0714 15:08:44.447548 28500 exec.cpp:301] Executor::launchTask took 584306ns
> sh -c 'sleep 1000'
> Forked command at 28509
> I0714 15:08:44.451202 28506 exec.cpp:524] Executor sending status update TASK_RUNNING (UUID: 323fc20a-b5b8-475d-8752-b1f853797f55) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000
> I0714 15:08:44.452327 27239 slave.cpp:2086] Handling status update TASK_RUNNING (UUID: 323fc20a-b5b8-475d-8752-b1f853797f55) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 from executor(1)@127.0.1.1:34669
> I0714 15:08:44.452503 27239 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: 323fc20a-b5b8-475d-8752-b1f853797f55) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000
> I0714 15:08:44.452520 27239 status_update_manager.cpp:499] Creating StatusUpdate stream for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000
> I0714 15:08:44.452775 27239 status_update_manager.hpp:342] Checkpointing UPDATE for status update TASK_RUNNING (UUID: 323fc20a-b5b8-475d-8752-b1f853797f55) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000
> I0714 15:08:44.472384 27239 status_update_manager.cpp:373] Forwarding status update TASK_RUNNING (UUID: 323fc20a-b5b8-475d-8752-b1f853797f55) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 to master@127.0.1.1:55850
> I0714 15:08:44.472764 27237 master.cpp:3115] Status update TASK_RUNNING (UUID: 323fc20a-b5b8-475d-8752-b1f853797f55) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 from slave 20140714-150843-16842879-55850-27216-0 at slave(43)@127.0.1.1:55850 (quantal)
> I0714 15:08:44.472854 27237 sched.cpp:637] Scheduler::statusUpdate took 17656ns
> I0714 15:08:44.472920 27237 master.cpp:2639] Forwarding status update acknowledgement 323fc20a-b5b8-475d-8752-b1f853797f55 for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 to slave 20140714-150843-16842879-55850-27216-0 at slave(43)@127.0.1.1:55850 (quantal)
> I0714 15:08:44.473122 27239 status_update_manager.cpp:398] Received status update acknowledgement (UUID: 323fc20a-b5b8-475d-8752-b1f853797f55) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000
> I0714 15:08:44.473146 27239 status_update_manager.hpp:342] Checkpointing ACK for status update TASK_RUNNING (UUID: 323fc20a-b5b8-475d-8752-b1f853797f55) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000
> I0714 15:08:44.473244 27237 slave.cpp:2244] Status update manager successfully handled status update TASK_RUNNING (UUID: 323fc20a-b5b8-475d-8752-b1f853797f55) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000
> I0714 15:08:44.473258 27237 slave.cpp:2250] Sending acknowledgement for status update TASK_RUNNING (UUID: 323fc20a-b5b8-475d-8752-b1f853797f55) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 to executor(1)@127.0.1.1:34669
> I0714 15:08:44.473567 27243 process.cpp:1098] Socket closed while receiving
> I0714 15:08:44.474095 28508 process.cpp:1037] Socket closed while receiving
> I0714 15:08:44.474676 28502 exec.cpp:338] Executor received status update acknowledgement 323fc20a-b5b8-475d-8752-b1f853797f55 for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000
> I0714 15:08:44.491111 27239 slave.cpp:1672] Status update manager successfully handled status update acknowledgement (UUID: 323fc20a-b5b8-475d-8752-b1f853797f55) for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000
> I0714 15:08:44.491761 27216 slave.cpp:484] Slave terminating
> I0714 15:08:44.492559 27216 containerizer.cpp:124] Using isolation: posix/cpu,posix/mem
> I0714 15:08:44.494635 27237 master.cpp:766] Slave 20140714-150843-16842879-55850-27216-0 at slave(43)@127.0.1.1:55850 (quantal) disconnected
> I0714 15:08:44.494663 27237 master.cpp:1608] Disconnecting slave 20140714-150843-16842879-55850-27216-0
> I0714 15:08:44.495120 27237 slave.cpp:168] Slave started on 44)@127.0.1.1:55850
> I0714 15:08:44.495133 27237 credentials.hpp:84] Loading credential for authentication from '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/credential'
> I0714 15:08:44.495226 27237 slave.cpp:266] Slave using credential for: test-principal
> I0714 15:08:44.495322 27237 slave.cpp:279] Slave resources: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]
> I0714 15:08:44.495407 27237 slave.cpp:324] Slave hostname: quantal
> I0714 15:08:44.495419 27237 slave.cpp:325] Slave checkpoint: true
> I0714 15:08:44.495939 27242 master.cpp:2469] Asked to kill task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000
> I0714 15:08:44.496207 27238 state.cpp:33] Recovering state from '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/meta'
> I0714 15:08:44.498291 27240 slave.cpp:3194] Recovering framework 20140714-150843-16842879-55850-27216-0000
> I0714 15:08:44.498325 27240 slave.cpp:3570] Recovering executor '4a6783aa-8d07-46e3-8399-2a5d047f0021' of framework 20140714-150843-16842879-55850-27216-0000
> I0714 15:08:44.498940 27240 status_update_manager.cpp:193] Recovering status update manager
> I0714 15:08:44.498956 27240 status_update_manager.cpp:201] Recovering executor '4a6783aa-8d07-46e3-8399-2a5d047f0021' of framework 20140714-150843-16842879-55850-27216-0000
> I0714 15:08:44.498975 27240 status_update_manager.cpp:499] Creating StatusUpdate stream for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000
> I0714 15:08:44.499092 27240 status_update_manager.hpp:306] Replaying status update stream for task 4a6783aa-8d07-46e3-8399-2a5d047f0021
> I0714 15:08:44.499241 27240 slave.cpp:560] Successfully attached file '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/slaves/20140714-150843-16842879-55850-27216-0/frameworks/20140714-150843-16842879-55850-27216-0000/executors/4a6783aa-8d07-46e3-8399-2a5d047f0021/runs/19c466f8-bb5a-4842-a152-f585ff88762a'
> I0714 15:08:44.499433 27240 containerizer.cpp:287] Recovering containerizer
> I0714 15:08:44.499457 27240 containerizer.cpp:329] Recovering container '19c466f8-bb5a-4842-a152-f585ff88762a' for executor '4a6783aa-8d07-46e3-8399-2a5d047f0021' of framework 20140714-150843-16842879-55850-27216-0000
> I0714 15:08:44.495811 27237 hierarchical_allocator_process.hpp:483] Slave 20140714-150843-16842879-55850-27216-0 disconnected
> I0714 15:08:44.501255 27240 slave.cpp:3067] Sending reconnect request to executor 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000 at executor(1)@127.0.1.1:34669
> I0714 15:08:44.502030 28501 exec.cpp:251] Received reconnect request from slave 20140714-150843-16842879-55850-27216-0
> I0714 15:08:44.502627 27243 process.cpp:1098] Socket closed while receiving
> I0714 15:08:44.502681 28508 process.cpp:1037] Socket closed while receiving
> I0714 15:08:44.503211 27240 slave.cpp:1911] Re-registering executor 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework 20140714-150843-16842879-55850-27216-0000
> I0714 15:08:44.504238 28501 exec.cpp:228] Executor re-registered on slave 20140714-150843-16842879-55850-27216-0
> I0714 15:08:44.505033 28501 exec.cpp:240] Executor::reregistered took 45053ns
> Re-registered executor on quantal
> I0714 15:08:44.505507 27243 process.cpp:1098] Socket closed while receiving
> I0714 15:08:44.505558 28508 process.cpp:1037] Socket closed while receiving
> I0714 15:08:44.948043 27241 hierarchical_allocator_process.hpp:686] Performed allocation for 1 slaves in 124255ns
> I0714 15:08:45.948671 27237 hierarchical_allocator_process.hpp:686] Performed allocation for 1 slaves in 61521ns
> I0714 15:08:46.503978 27238 slave.cpp:2035] Cleaning up un-reregistered executors
> I0714 15:08:46.504050 27238 slave.cpp:3126] Finished recovery
> I0714 15:08:46.504590 27238 slave.cpp:599] New master detected at master@127.0.1.1:55850
> I0714 15:08:46.504639 27238 slave.cpp:675] Authenticating with master master@127.0.1.1:55850
> I0714 15:08:46.504729 27238 slave.cpp:648] Detecting new master
> I0714 15:08:46.504772 27238 status_update_manager.cpp:167] New master detected at master@127.0.1.1:55850
> I0714 15:08:46.504863 27238 authenticatee.hpp:128] Creating new client SASL connection
> I0714 15:08:46.505091 27238 master.cpp:3507] Authenticating slave(44)@127.0.1.1:55850
> I0714 15:08:46.505239 27238 authenticator.hpp:156] Creating new server SASL connection
> I0714 15:08:46.505363 27238 authenticatee.hpp:219] Received SASL authentication mechanisms: CRAM-MD5
> I0714 15:08:46.505393 27238 authenticatee.hpp:245] Attempting to authenticate with mechanism 'CRAM-MD5'
> I0714 15:08:46.505420 27238 authenticator.hpp:262] Received SASL authentication start
> I0714 15:08:46.505476 27238 authenticator.hpp:384] Authentication requires more steps
> I0714 15:08:46.505506 27238 authenticatee.hpp:265] Received SASL authentication step
> I0714 15:08:46.505542 27238 authenticator.hpp:290] Received SASL authentication step
> I0714 15:08:46.505558 27238 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'quantal' server FQDN: 'quantal' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false
> I0714 15:08:46.505566 27238 auxprop.cpp:153] Looking up auxiliary property '*userPassword'
> I0714 15:08:46.505584 27238 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5'
> I0714 15:08:46.505595 27238 auxprop.cpp:81] Request to lookup properties for user: 'test-principal' realm: 'quantal' server FQDN: 'quantal' SASL_AUXPROP_VERIFY_AGAINST_HASH: false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true
> I0714 15:08:46.505601 27238 auxprop.cpp:103] Skipping auxiliary property '*userPassword' since SASL_AUXPROP_AUTHZID == true
> I0714 15:08:46.505606 27238 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5' since SASL_AUXPROP_AUTHZID == true
> I0714 15:08:46.505616 27238 authenticator.hpp:376] Authentication success
> I0714 15:08:46.505646 27238 authenticatee.hpp:305] Authentication success
> I0714 15:08:46.505671 27238 master.cpp:3547] Successfully authenticated principal 'test-principal' at slave(44)@127.0.1.1:55850
> I0714 15:08:46.505769 27238 slave.cpp:732] Successfully authenticated with master master@127.0.1.1:55850
> I0714 15:08:46.505873 27238 slave.cpp:970] Will retry registration in 17.903094ms if necessary
> W0714 15:08:46.505991 27238 master.cpp:2904] Slave at slave(44)@127.0.1.1:55850 (quantal) is being allowed to re-register with an already in use id (20140714-150843-16842879-55850-27216-0)
> W0714 15:08:46.506063 27238 master.cpp:3679] Slave 20140714-150843-16842879-55850-27216-0 at slave(44)@127.0.1.1:55850 (quantal) has non-terminal task 4a6783aa-8d07-46e3-8399-2a5d047f0021 that is supposed to be killed. Killing it now!
> I0714 15:08:46.506150 27238 slave.cpp:816] Re-registered with master > master@127.0.1.1:55850 > I0714 15:08:46.506186 27238 slave.cpp:1277] Asked to kill task > 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework > 20140714-150843-16842879-55850-27216-0000 > I0714 15:08:46.507275 27241 hierarchical_allocator_process.hpp:497] Slave > 20140714-150843-16842879-55850-27216-0 reconnected > I0714 15:08:46.508061 28504 exec.cpp:312] Executor asked to kill task > '4a6783aa-8d07-46e3-8399-2a5d047f0021' > I0714 15:08:46.508117 28504 exec.cpp:321] Executor::killTask took 24954ns > Shutting down > Sending SIGTERM to process tree at pid 28509 > I0714 15:08:46.512238 27243 process.cpp:1098] Socket closed while receiving > I0714 15:08:46.512508 27238 slave.cpp:1582] Updating framework > 20140714-150843-16842879-55850-27216-0000 pid to > scheduler-225679c4-a9fd-4119-9deb-c7712eba37e1@127.0.1.1:55850 > I0714 15:08:46.512846 27238 slave.cpp:1590] Checkpointing framework pid > 'scheduler-225679c4-a9fd-4119-9deb-c7712eba37e1@127.0.1.1:55850' to > '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/meta/slaves/20140714-150843-16842879-55850-27216-0/frameworks/20140714-150843-16842879-55850-27216-0000/framework.pid' > I0714 15:08:46.513419 28508 process.cpp:1037] Socket closed while receiving > Killing the following process trees: > [ > -+- 28509 sh -c sleep 1000 > \--- 28510 sleep 1000 > ] > Command terminated with signal Terminated (pid: 28509) > I0714 15:08:46.940232 28506 exec.cpp:524] Executor sending status update > TASK_KILLED (UUID: e3a5f8fd-eefc-42c6-94a7-086c93c01968) for task > 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework > 20140714-150843-16842879-55850-27216-0000 > I0714 15:08:46.940918 27240 slave.cpp:2086] Handling status update > TASK_KILLED (UUID: e3a5f8fd-eefc-42c6-94a7-086c93c01968) for task > 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework > 20140714-150843-16842879-55850-27216-0000 from executor(1)@127.0.1.1:34669 > I0714 15:08:46.940979 27240 slave.cpp:3768] 
Terminating task > 4a6783aa-8d07-46e3-8399-2a5d047f0021 > I0714 15:08:46.941603 27240 status_update_manager.cpp:320] Received status > update TASK_KILLED (UUID: e3a5f8fd-eefc-42c6-94a7-086c93c01968) for task > 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework > 20140714-150843-16842879-55850-27216-0000 > I0714 15:08:46.941644 27240 status_update_manager.hpp:342] Checkpointing > UPDATE for status update TASK_KILLED (UUID: > e3a5f8fd-eefc-42c6-94a7-086c93c01968) for task > 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework > 20140714-150843-16842879-55850-27216-0000 > I0714 15:08:46.949417 27236 hierarchical_allocator_process.hpp:686] Performed > allocation for 1 slaves in 63926ns > I0714 15:08:46.965200 27240 status_update_manager.cpp:373] Forwarding status > update TASK_KILLED (UUID: e3a5f8fd-eefc-42c6-94a7-086c93c01968) for task > 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework > 20140714-150843-16842879-55850-27216-0000 to master@127.0.1.1:55850 > I0714 15:08:46.965625 27239 master.cpp:3115] Status update TASK_KILLED (UUID: > e3a5f8fd-eefc-42c6-94a7-086c93c01968) for task > 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework > 20140714-150843-16842879-55850-27216-0000 from slave > 20140714-150843-16842879-55850-27216-0 at slave(44)@127.0.1.1:55850 (quantal) > I0714 15:08:46.965724 27239 master.hpp:791] Removing task > 4a6783aa-8d07-46e3-8399-2a5d047f0021 with resources cpus(*):2; mem(*):1024; > disk(*):1024; ports(*):[31000-32000] on slave > 20140714-150843-16842879-55850-27216-0 (quantal) > I0714 15:08:46.965903 27239 sched.cpp:637] Scheduler::statusUpdate took > 39326ns > I0714 15:08:46.966022 27239 hierarchical_allocator_process.hpp:635] Recovered > cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] (total > allocatable: cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000]) on > slave 20140714-150843-16842879-55850-27216-0 from framework > 20140714-150843-16842879-55850-27216-0000 > I0714 15:08:46.966120 27239 master.cpp:2639] Forwarding 
status update > acknowledgement e3a5f8fd-eefc-42c6-94a7-086c93c01968 for task > 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework > 20140714-150843-16842879-55850-27216-0000 to slave > 20140714-150843-16842879-55850-27216-0 at slave(44)@127.0.1.1:55850 (quantal) > I0714 15:08:46.966501 27241 slave.cpp:2244] Status update manager > successfully handled status update TASK_KILLED (UUID: > e3a5f8fd-eefc-42c6-94a7-086c93c01968) for task > 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework > 20140714-150843-16842879-55850-27216-0000 > I0714 15:08:46.966519 27241 slave.cpp:2250] Sending acknowledgement for > status update TASK_KILLED (UUID: e3a5f8fd-eefc-42c6-94a7-086c93c01968) for > task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework > 20140714-150843-16842879-55850-27216-0000 to executor(1)@127.0.1.1:34669 > I0714 15:08:46.966754 27240 status_update_manager.cpp:398] Received status > update acknowledgement (UUID: e3a5f8fd-eefc-42c6-94a7-086c93c01968) for task > 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework > 20140714-150843-16842879-55850-27216-0000 > I0714 15:08:46.966785 27240 status_update_manager.hpp:342] Checkpointing ACK > for status update TASK_KILLED (UUID: e3a5f8fd-eefc-42c6-94a7-086c93c01968) > for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework > 20140714-150843-16842879-55850-27216-0000 > I0714 15:08:46.967386 28500 exec.cpp:338] Executor received status update > acknowledgement e3a5f8fd-eefc-42c6-94a7-086c93c01968 for task > 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework > 20140714-150843-16842879-55850-27216-0000 > I0714 15:08:46.967562 27243 process.cpp:1098] Socket closed while receiving > I0714 15:08:46.968147 28508 process.cpp:1037] Socket closed while receiving > I0714 15:08:46.984608 27240 status_update_manager.cpp:530] Cleaning up status > update stream for task 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework > 20140714-150843-16842879-55850-27216-0000 > I0714 15:08:46.985239 27236 slave.cpp:1672] Status update manager > 
successfully handled status update acknowledgement (UUID: > e3a5f8fd-eefc-42c6-94a7-086c93c01968) for task > 4a6783aa-8d07-46e3-8399-2a5d047f0021 of framework > 20140714-150843-16842879-55850-27216-0000 > I0714 15:08:46.985280 27236 slave.cpp:3810] Completing task > 4a6783aa-8d07-46e3-8399-2a5d047f0021 > I0714 15:08:47.940703 27243 process.cpp:1037] Socket closed while receiving > I0714 15:08:47.940984 27238 containerizer.cpp:1019] Executor for container > '19c466f8-bb5a-4842-a152-f585ff88762a' has exited > I0714 15:08:47.941007 27238 containerizer.cpp:903] Destroying container > '19c466f8-bb5a-4842-a152-f585ff88762a' > I0714 15:08:47.950192 27241 hierarchical_allocator_process.hpp:750] Offering > cpus(*):2; mem(*):1024; disk(*):1024; ports(*):[31000-32000] on slave > 20140714-150843-16842879-55850-27216-0 to framework > 20140714-150843-16842879-55850-27216-0000 > I0714 15:08:47.950405 27241 hierarchical_allocator_process.hpp:686] Performed > allocation for 1 slaves in 320604ns > I0714 15:08:47.950518 27241 master.hpp:801] Adding offer > 20140714-150843-16842879-55850-27216-1 with resources cpus(*):2; mem(*):1024; > disk(*):1024; ports(*):[31000-32000] on slave > 20140714-150843-16842879-55850-27216-0 (quantal) > I0714 15:08:47.950572 27241 master.cpp:3454] Sending 1 offers to framework > 20140714-150843-16842879-55850-27216-0000 > I0714 15:08:47.950774 27241 sched.cpp:546] Scheduler::resourceOffers took > 37944ns > I0714 15:08:47.951179 27216 master.cpp:625] Master terminating > I0714 15:08:47.951263 27216 master.hpp:811] Removing offer > 20140714-150843-16842879-55850-27216-1 with resources cpus(*):2; mem(*):1024; > disk(*):1024; ports(*):[31000-32000] on slave > 20140714-150843-16842879-55850-27216-0 (quantal) > I0714 15:08:47.953447 27242 sched.cpp:747] Stopping framework > '20140714-150843-16842879-55850-27216-0000' > I0714 15:08:47.953547 27242 slave.cpp:2330] master@127.0.1.1:55850 exited > W0714 15:08:47.953567 27242 slave.cpp:2333] Master disconnected! 
Waiting for > a new master to be elected > I0714 15:08:47.964512 27238 slave.cpp:2526] Executor > '4a6783aa-8d07-46e3-8399-2a5d047f0021' of framework > 20140714-150843-16842879-55850-27216-0000 exited with status 0 > I0714 15:08:47.968690 27238 slave.cpp:2660] Cleaning up executor > '4a6783aa-8d07-46e3-8399-2a5d047f0021' of framework > 20140714-150843-16842879-55850-27216-0000 > I0714 15:08:47.969348 27236 gc.cpp:56] Scheduling > '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/slaves/20140714-150843-16842879-55850-27216-0/frameworks/20140714-150843-16842879-55850-27216-0000/executors/4a6783aa-8d07-46e3-8399-2a5d047f0021/runs/19c466f8-bb5a-4842-a152-f585ff88762a' > for gc 6.99998878298667days in the future > I0714 15:08:47.969751 27241 gc.cpp:56] Scheduling > '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/slaves/20140714-150843-16842879-55850-27216-0/frameworks/20140714-150843-16842879-55850-27216-0000/executors/4a6783aa-8d07-46e3-8399-2a5d047f0021' > for gc 6.99998877682963days in the future > I0714 15:08:47.970082 27239 gc.cpp:56] Scheduling > '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/meta/slaves/20140714-150843-16842879-55850-27216-0/frameworks/20140714-150843-16842879-55850-27216-0000/executors/4a6783aa-8d07-46e3-8399-2a5d047f0021/runs/19c466f8-bb5a-4842-a152-f585ff88762a' > for gc 6.99998877336889days in the future > I0714 15:08:47.970379 27242 gc.cpp:56] Scheduling > '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/meta/slaves/20140714-150843-16842879-55850-27216-0/frameworks/20140714-150843-16842879-55850-27216-0000/executors/4a6783aa-8d07-46e3-8399-2a5d047f0021' > for gc 6.99998876968889days in the future > I0714 15:08:47.970587 27238 slave.cpp:2735] Cleaning up framework > 20140714-150843-16842879-55850-27216-0000 > I0714 15:08:47.970960 27237 status_update_manager.cpp:282] Closing status > update streams for framework 20140714-150843-16842879-55850-27216-0000 > I0714 15:08:47.971225 27236 gc.cpp:56] Scheduling > 
'/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/slaves/20140714-150843-16842879-55850-27216-0/frameworks/20140714-150843-16842879-55850-27216-0000' > for gc 6.99998875966519days in the future > I0714 15:08:47.971549 27241 gc.cpp:56] Scheduling > '/tmp/SlaveRecoveryTest_0_ReconcileKillTask_Zl9DUt/meta/slaves/20140714-150843-16842879-55850-27216-0/frameworks/20140714-150843-16842879-55850-27216-0000' > for gc 6.99998875612148days in the future > W0714 15:08:47.975971 27235 containerizer.cpp:893] Ignoring destroy of > unknown container: 19c466f8-bb5a-4842-a152-f585ff88762a > ./tests/cluster.hpp:530: Failure > (wait).failure(): Unknown container: 19c466f8-bb5a-4842-a152-f585ff88762a > # > # A fatal error has been detected by the Java Runtime Environment: > # > # SIGSEGV (0xb) at pc=0x00000000005a0299, pid=27216, tid=47907931709760 > # > # JRE version: OpenJDK Runtime Environment (7.0_55-b14) (build 1.7.0_55-b14) > # Java VM: OpenJDK 64-Bit Server VM (24.51-b03 mixed mode linux-amd64 > compressed oops) > # Problematic frame: > # C [lt-mesos-tests+0x1a0299] mlock@@GLIBC_2.2.5+0x1a0299 > # > # Failed to write core dump. Core dumps have been disabled. To enable core > dumping, try "ulimit -c unlimited" before starting Java again > # > # An error report file with more information is saved as: > # /var/jenkins/workspace/mesos-ubuntu-12.10-gcc/src/hs_err_pid27216.log > # > # If you would like to submit a bug report, please include > # instructions on how to reproduce the bug and visit: > # http://icedtea.classpath.org/bugzilla > # The crash happened outside the Java Virtual Machine in native code. > # See problematic frame for where to report the bug. 
#
make[3]: *** [check-local] Aborted
make[3]: Leaving directory `/var/jenkins/workspace/mesos-ubuntu-12.10-gcc/src'
make[2]: *** [check-am] Error 2
make[2]: Leaving directory `/var/jenkins/workspace/mesos-ubuntu-12.10-gcc/src'
make[1]: *** [check] Error 2
make[1]: Leaving directory `/var/jenkins/workspace/mesos-ubuntu-12.10-gcc/src'
make: *** [check-recursive] Error 1
Build step 'Execute shell' marked build as failure
Sending e-mails to: kernel-t...@twitter.com apache-me...@twitter.com
Finished: FAILURE
{code}

-- 
This message was sent by Atlassian JIRA
(v6.3.4#6332)