Mesos 0.20.1 release status
We are targeting 17 tickets, including improvements and bug fixes. 4 of them are still in progress:

http://s.apache.org/mesos-0.20.1-unresolved-issues

I'll cut the tag for RC1 and send it for voting once these issues are reviewed/submitted, or at 9/15 @6pm PDT, whichever comes first! Any issues still open at that point will be moved to the next release.

If you have questions, let me know.

Thank you,
--
Regards,
Bhuvan Arumugam
www.livecipher.com
Re: Mesos 0.20.1 release status
Tim, sorry. I meant today, 9/16 @6pm PDT.

Vinod, yes, Adam is helping me push the CHANGELOG. I'll work with him to create the tag, do the mvn push, etc.

Jie, I'm hoping to go with tags. I'll create a tag for 0.20.1-rc1. For new RC builds (if any), I'll create a new tag from the previous RC and cherry-pick the bug fixes.

Team, if you are working on any of these tickets, please follow up with reviewers to +1 the patch. Patches that are not merged before 6pm may miss this release. I'd prefer not to delay the release schedule, considering the next major release, 0.21.0, will be out in 3-4 weeks.

http://s.apache.org/mesos-0.20.1-unresolved-issues

On Mon, Sep 15, 2014 at 11:35 PM, Timothy Chen tnac...@gmail.com wrote:
> It's already past 9/16 6 pm PDT?
>
> Tim

--
Regards,
Bhuvan Arumugam
www.livecipher.com
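The RC flow described above (tag rc1, then cut rc2 from the previous RC plus cherry-picked fixes) can be sketched in a throwaway repository; the version numbers and commit messages below are illustrative, not the actual release commits:

```shell
# Demonstrate the RC tagging flow in a scratch repo (illustrative versions).
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m "base"
git tag 0.20.1-rc1                      # first release candidate

# A bug fix lands after rc1 was cut:
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m "bug fix"
fix=$(git rev-parse HEAD)

# Cut rc2 starting from the previous RC, cherry-picking only the fix:
git checkout -q 0.20.1-rc1
git -c user.name=t -c user.email=t@example.com cherry-pick --allow-empty "$fix"
git tag 0.20.1-rc2
git tag --list
```

Starting each new RC from the previous RC (rather than from master) keeps unreviewed master changes out of the release, which is the point of the cherry-pick step.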
Re: Mesos 0.20.1 release status
Release update: we have finalized the issues/commits that will make it into this release. We are waiting to merge these 2 patches before we cut 0.20.1 RC1:

https://reviews.apache.org/r/25403/
https://reviews.apache.org/r/25523/

--
Regards,
Bhuvan Arumugam
www.livecipher.com
Re: Review Request 25270: Enable bridge network in Mesos
On Sept. 15, 2014, 6 a.m., Derek Zhang wrote:
> Ship It!

Any more volunteers to bless/submit this patch?

- Bhuvan

-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25270/#review53312
-----------------------------------------------------------

On Sept. 11, 2014, 5:48 p.m., Timothy Chen wrote:
> (Updated Sept. 11, 2014, 5:48 p.m.)
>
> Review request for mesos, Benjamin Hindman, Jie Yu, and Timothy St. Clair.
>
> Bugs: MESOS-1621
>     https://issues.apache.org/jira/browse/MESOS-1621
>
> Repository: mesos-git
>
> Description
> -----------
> Review: https://reviews.apache.org/r/25270
>
> Diffs
> -----
>   include/mesos/mesos.proto dea51f94d130c131421c43e7fd774ceb8941f501
>   src/docker/docker.cpp af51ac9058382aede61b09e06e312ad2ce6de03e
>   src/slave/slave.cpp 1b3dc7370a2441e4159aa5ee552b64ca5e511e96
>   src/tests/docker_containerizer_tests.cpp 8654f9c787bd207f6a7b821651e0c083bea9dc8a
>   src/tests/docker_tests.cpp 826a8c1ef1b3089d416e5775fa2cf4e5cb0c26d1
>
> Diff: https://reviews.apache.org/r/25270/diff/
>
> Testing
> -------
> make check
>
> Thanks,
> Timothy Chen
Contributor role in Jira
Vinod, I'm working with Adam Bordelon to manage the 0.20.1 release. Can you grant me Contributor access in Jira to manage versions and changelogs?

Thank you,
--
Regards,
Bhuvan Arumugam
www.livecipher.com
Re: Review Request 25237: Avoid Docker pull on each run
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25237/#review53297
-----------------------------------------------------------

Timothy Chen: do you have an ETA for a revised patch? This would help in planning the 0.20.1 release.

- Bhuvan Arumugam

On Sept. 1, 2014, 7:16 p.m., Timothy Chen wrote:
> (Updated Sept. 1, 2014, 7:16 p.m.)
>
> Review request for mesos, Benjamin Hindman and Jie Yu.
>
> Repository: mesos-git
>
> Description
> -----------
> Avoid Docker pull on each run. Currently each Docker run will run a docker
> pull, which calls the docker registry each time. To avoid this, this patch
> adds a docker inspect on the image and skips calling pull if the image
> already exists.
>
> Diffs
> -----
>   src/slave/containerizer/docker.cpp 0febbac5df4126f6c8d9a06dd0ba1668d041b34a
>
> Diff: https://reviews.apache.org/r/25237/diff/
>
> Testing
> -------
> make check
>
> Thanks,
> Timothy Chen
Re: Review Request 25270: Enable bridge network in Mesos
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25270/#review53298
-----------------------------------------------------------

src/tests/docker_containerizer_tests.cpp
https://reviews.apache.org/r/25270/#comment92903

Timothy Chen: do you intend to use the image mesosphere/test-executor?

- Bhuvan Arumugam

On Sept. 11, 2014, 5:48 p.m., Timothy Chen wrote:
> (Updated Sept. 11, 2014, 5:48 p.m.)
>
> Review request for mesos, Benjamin Hindman, Jie Yu, and Timothy St. Clair.
>
> Bugs: MESOS-1621
>     https://issues.apache.org/jira/browse/MESOS-1621
>
> Repository: mesos-git
>
> Diff: https://reviews.apache.org/r/25270/diff/
>
> Testing
> -------
> make check
>
> Thanks,
> Timothy Chen
Re: Differentiate user requests protobuf messages
On Mon, Aug 25, 2014 at 5:03 PM, Vinod Kone vinodk...@gmail.com wrote:
> See my answers inline.
>
>> Based on what you say, it looks like there are more HTTP endpoints (r/w)
>> exposed to slaves and frameworks, like /shutdown. We don't want to
>> implement auth for these endpoints atm.
>
> Yes. There are more user-visible endpoints. See master:port/help for the
> list of endpoints.
>
>> That said, I think we should authenticate /master/state.json only.
>
> For authorizing static HTTP endpoints, we could resurrect some code that
> didn't make it into 0.20.0. See the diff here
> (https://github.com/apache/mesos/commit/a5cc9b435aad080a79230f0366a6ce77116c95a4)
> and let me know if that is what you are looking for.

It's more for authorizing frameworks. We're looking for authenticating users for certain HTTP endpoints. If I understand right, the above patch expects certain authz credentials, like principal/role for framework registration and principal/user for running tasks. For web requests, I don't think angularJS is exchanging any credential or query param with the master. It always GETs a specific json.

We are thinking of solving this problem by running a proxy in front of each mesos master and implementing authN based on path and/or User-Agent. We'll keep you posted if we want any changes to be made in mesos.

> Note, the HTTP endpoints exposed by the master for web requests do not
> impact the internal HTTP endpoints used for communicating with
> frameworks/slaves.

Good to know. This will make our implementation simple, as we don't want to authenticate HTTP requests from frameworks/slaves.

Thank you,
--
Regards,
Bhuvan Arumugam
www.livecipher.com
Re: Differentiate user requests protobuf messages
On Mon, Aug 25, 2014 at 10:40 AM, Vinod Kone vinodk...@gmail.com wrote:
> Hey Bhuvan,
>
> The ShutdownFramework ACL is an example of authN/authZ of an HTTP endpoint
> (/shutdown) from a user perspective. Depending on what HTTP endpoints you
> are planning to auth, we could conceivably add more ACLs or add a generic
> HTTP endpoint ACL. Of course, this still doesn't give you sessions,
> caching, or encryption.

Vinod, we want to authenticate all web requests; they are all read-only. Irrespective of the links/tabs we click {/slaves, /frameworks, /offers}, the server always returns the same json, /master/state.json. The angularjs does the filtering based on the user action.

Based on what you say, it looks like there are more HTTP endpoints (r/w) exposed to slaves and frameworks, like /shutdown. We don't want to implement auth for these endpoints atm.

That said, I think we should authenticate /master/state.json only. Can I assume this can be implemented in the Master::Http::state method, using process::http::Request and process::http::Response? Or does the slave/framework use the /master/state.json endpoint? Any changes to this method will not affect the protobuf message exchange between master and slave/framework, I think. Correct me if I'm wrong.

--
Regards,
Bhuvan Arumugam
www.livecipher.com
Differentiate user requests protobuf messages
Hello,

We use the auth/authz implementation for frameworks and slaves. They are neat! This thread is about auth for the web UI, between master and user.

We are implementing authentication for the master web UI (port 5050). The master seems to serve both user requests and protobuf messages from slaves/frameworks on the same port, right? We want to authenticate user requests only. Is there a way to differentiate these messages?

Based on how these messages can be differentiated, we are thinking of running the mesos master behind a proxy, apache or apache traffic server, primarily for 2 reasons:

1. authentication. The auth could be implemented through an apache module or ATS plugin.
2. security. Serve user requests through https.

If we use ATS, it may also solve the caching problem, but we aren't solving that problem right now.

Making changes to mesos to address these concerns doesn't look neat. Mesos seems to return a complete json blob, and all the magic is done on the client side, in angularjs. The mesos master isn't a full-fledged http server. It's not meant to keep track of user sessions; dealing with http cookies/headers/redirection is non-trivial.

Is anyone running the mesos master behind a proxy, or has anyone solved the same problem differently?

--
Regards,
Bhuvan Arumugam
www.livecipher.com
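As a sketch of the proxy option described above (server name, certificate paths, and the htpasswd file are all hypothetical, and this is an untested illustration rather than a recommended configuration), an apache front end could terminate https and require Basic auth only for the UI endpoint, while frameworks/slaves keep talking to port 5050 directly:

```apache
# Hypothetical vhost: authenticate browser traffic to state.json,
# proxy everything else through to the mesos master on 5050.
<VirtualHost *:443>
    ServerName mesos.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/mesos.example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/mesos.example.com.key

    <Location "/master/state.json">
        AuthType Basic
        AuthName "Mesos UI"
        AuthUserFile /etc/httpd/mesos.htpasswd
        Require valid-user
    </Location>

    ProxyPass        "/" "http://127.0.0.1:5050/"
    ProxyPassReverse "/" "http://127.0.0.1:5050/"
</VirtualHost>
```

Because slaves and frameworks reach the master on 5050 directly, only browser traffic routed through the proxy is affected, which matches the "authenticate user requests only" goal of the thread.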
Re: Review Request 23104: Added documentation for authorization.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23104/#review50236
-----------------------------------------------------------

docs/authorization.md
https://reviews.apache.org/r/23104/#comment87886

Vinod, can you clarify the difference between the "users" and "roles" objects? In a framework like aurora, the role is the unix user that runs the task on the mesos slave. In this situation, how are they different?

- Bhuvan Arumugam

On June 27, 2014, 9:52 p.m., Vinod Kone wrote:
> (Updated June 27, 2014, 9:52 p.m.)
>
> Review request for mesos, Benjamin Hindman and Ben Mahler.
>
> Bugs: MESOS-1480
>     https://issues.apache.org/jira/browse/MESOS-1480
>
> Repository: mesos-git
>
> Description
> -----------
> Let me know what you think.
>
> Diffs
> -----
>   docs/authorization.md PRE-CREATION
>
> Diff: https://reviews.apache.org/r/23104/diff/
>
> Testing
> -------
>
> Thanks,
> Vinod Kone
[jira] [Commented] (MESOS-1524) Implement Docker support in Mesos
[ https://issues.apache.org/jira/browse/MESOS-1524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039922#comment-14039922 ]

Bhuvan Arumugam commented on MESOS-1524:
----------------------------------------

{code}
Mounting the sandbox directory can be problematic if the path contains a colon, due to docker's CLI parser.
{code}

This issue is fixed in {{0.18.0}}. Sandbox directories created by newer mesos slaves don't contain a colon.

{code}
Docker doesn't actually pull an image if it exists in its local cache. This is problematic because if you update a tag on a docker registry (e.g. a private one), a slave that has previously launched an executor with that image won't download the new one... so you can get various inconsistencies. The only way around this currently is to pull first.
{code}

[~tarnfeld] Docker can ignore the local cache and pull from a private or upstream registry when using the {{--no-cache}} option. With this flag, the image/tag will be pulled from the registry even though it was built on the same host earlier. I'm not saying it's the way to go, but it's an option. This option is exposed to the {{docker build}} command, and hopefully it'll be exposed in other commands like {{docker run}}.

{code}
--no-cache=false    Do not use cache when building the image
{code}

Implement Docker support in Mesos
---------------------------------

                Key: MESOS-1524
                URL: https://issues.apache.org/jira/browse/MESOS-1524
            Project: Mesos
         Issue Type: Epic
           Reporter: Tobi Knaup
           Assignee: Benjamin Hindman

There have been two projects to add Docker support to Mesos, first via an executor, and more recently via an external containerizer written in Python - Deimos: https://github.com/mesosphere/deimos

We've got a lot of feedback from folks who use Docker and Mesos, and the main wish was to make Docker a first-class citizen in Mesos instead of a plugin that needs to be installed separately. Mesos has been using Linux containers for a long time, first via LXC, then via cgroups, and now also via the external containerizer. For a long time it wasn't clear what the winning technology would be, but with Docker becoming the de-facto standard for handling containers, I think Mesos should make it a first-class citizen and part of core. Let's use this JIRA to track wishes/feedback on the implementation.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
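The cache-vs-pull behavior discussed in the comment above can be illustrated with the docker CLI (the image name is hypothetical, and a docker daemon is required, so this is illustration only, not a tested recipe):

```shell
# Illustration only; registry.example.com/team/executor is a hypothetical image.
# --no-cache is a `docker build` flag: rebuild without reusing cached layers.
docker build --no-cache -t registry.example.com/team/executor:latest .

# `docker run` has no --no-cache equivalent; to guarantee a fresh image, the
# workaround described in this thread is an explicit pull before running:
docker pull registry.example.com/team/executor:latest
docker run --rm registry.example.com/team/executor:latest
```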
[jira] [Comment Edited] (MESOS-1524) Implement Docker support in Mesos
[ https://issues.apache.org/jira/browse/MESOS-1524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039922#comment-14039922 ]

Bhuvan Arumugam edited comment on MESOS-1524 at 6/21/14 6:57 PM:
-----------------------------------------------------------------

{code}
Mounting the sandbox directory can be problematic if the path contains a colon, due to docker's CLI parser.
{code}

This issue is fixed in {{0.18.0}}. Sandbox directories created by newer mesos slaves don't contain a colon.

{code}
Docker doesn't actually pull an image if it exists in its local cache. This is problematic because if you update a tag on a docker registry (e.g. a private one), a slave that has previously launched an executor with that image won't download the new one... so you can get various inconsistencies. The only way around this currently is to pull first.
{code}

[~tarnfeld] Docker can ignore the local cache and pull from a private or upstream registry when using the {{--no-cache}} option. With this flag, the image/tag will be pulled from the registry even though it was built on the same host earlier. I'm not saying it's the way to go, but it's an option. This option is exposed to the {{docker build}} command, and hopefully it'll be exposed in other commands like {{docker run}} soon.

{code}
--no-cache=false    Do not use cache when building the image
{code}

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Created] (MESOS-1377) --work_dir mandatory for mesos-master
Bhuvan Arumugam created MESOS-1377:
--------------------------------------

             Summary: --work_dir mandatory for mesos-master
                 Key: MESOS-1377
                 URL: https://issues.apache.org/jira/browse/MESOS-1377
             Project: Mesos
          Issue Type: Documentation
          Components: documentation
    Affects Versions: 0.19.0
            Reporter: Bhuvan Arumugam

In v0.19.0, the default persistence strategy for the registry changed from in-memory to replicated_log. This means:

a) the {{--work_dir}} option is mandatory. The replicated logs reside under {{work_dir/replicated_log}}.
b) the {{--quorum}} option is mandatory if zookeeper is used.

Document these changes in http://mesos.apache.org/documentation/latest/configuration/.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
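Concretely, a 0.19.0 master invocation with the now-required options might look like this (the zookeeper hosts and paths are hypothetical, so this is an illustration of the flags, not a runnable deployment):

```shell
# With the replicated_log registry (the 0.19.0 default), both --work_dir and
# --quorum (when zookeeper is used) are required. Hosts/paths hypothetical.
mesos-master \
  --zk=zk://zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/mesos \
  --quorum=2 \
  --work_dir=/var/lib/mesos

# The replicated log then lives under the work dir:
#   /var/lib/mesos/replicated_log
```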
[jira] [Created] (MESOS-1196) create annotated tag for v0.19.0
Bhuvan Arumugam created MESOS-1196:
--------------------------------------

             Summary: create annotated tag for v0.19.0
                 Key: MESOS-1196
                 URL: https://issues.apache.org/jira/browse/MESOS-1196
             Project: Mesos
          Issue Type: Task
          Components: release
            Reporter: Bhuvan Arumugam

To facilitate setting up CI for the mesos repository, we should create an annotated tag at the beginning of each release. This is a follow-up to http://www.mail-archive.com/dev@mesos.apache.org/msg10915.html

Can you:

a) create one based on this hash, 99985d27857fb5a10b26ded8da1a36100780d18b, wherein master pointed to the 0.19.0 release?
b) document the step to create an annotated tag at the beginning of every release
c) document the step to create a lightweight tag for every RC release

--
This message was sent by Atlassian JIRA
(v6.2#6252)
Fwd: [proposal] Annotated tags for mesos release
Hello,

Here's a proposal on how we tag our releases. This is primarily to help continuous integration and stay in line with other communities. It should also help git describe do the right thing.

For those who don't have the context: currently we don't create tags at the beginning of a release. We create one (non-annotated) for every RC release. This doesn't help when we want to set up a CI system for Mesos. There is no way to find the current version/release going by tags, unless we hack around and parse the configure.ac script.

Current behavior:

rainbow:mesos bhuvan$ git describe
fatal: No annotated tags can describe '185dba5d8d52034ac6a8e29c2686f0f7dc4cf102'.
However, there were unannotated tags: try --tags.
rainbow:mesos bhuvan$ git describe --tags
0.18.0-rc6

The proposal is to create an annotated tag at the beginning of every release and a lightweight tag for RC releases. This way, when we set up CI to build/package Mesos, we could find the current version/release using git describe.

New behavior, or something along these lines (note the tag 0.19.0 is created locally in my repository):

rainbow:mesos bhuvan$ git tag -a 0.19.0 -m '0.19.0 release' 99985d27
rainbow:mesos bhuvan$ git describe
0.19.0-220-gca84d5f

If there is no disagreement, I'll file a ticket to create one for the 0.19.0 release and document the release/tagging steps for future releases.

--
Regards,
Bhuvan Arumugam
www.livecipher.com
[proposal] Annotated tags for mesos release
Hello,

Here's a proposal on how we tag our releases. This is primarily to help continuous integration and stay in line with other communities. It should also help git describe do the right thing.

For those who don't have the context: currently we don't create tags at the beginning of a release. We create one (non-annotated) for every RC release. This doesn't help when we want to set up a CI system for Mesos. There is no way to find the current version/release going by tags, unless we hack around and parse the configure.ac script.

rainbow:mesos bhuvan$ git describe
fatal: No annotated tags can describe '185dba5d8d52034ac6a8e29c2686f0f7dc4cf102'.
However, there were unannotated tags: try --tags.
rainbow:mesos bhuvan$ git describe --tags
0.18.0-rc6

The proposal is to create an annotated tag at the beginning of every release and a lightweight tag for RC releases. This way, when we set up CI to build/package Mesos, we could find the current version/release using git describe. Something along these lines (note the tag 0.19.0 is created locally in my repository):

rainbow:mesos bhuvan$ git tag -a 0.19.0 -m '0.19.0 release' 99985d27
rainbow:mesos bhuvan$ git describe
0.19.0-220-gca84d5f

If there is no disagreement, I'll file a ticket to create one for the 0.19.0 release and document the release/tagging steps for future releases.

--
Regards,
Bhuvan Arumugam
www.livecipher.com
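The proposed scheme can be exercised end to end in a scratch repository (the commits below are illustrative; the tag names follow the proposal):

```shell
# Annotated tag for the release, lightweight tag for the RC, as proposed.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m "start"

git tag 0.19.0-rc1            # lightweight tag, as used for RC builds
git describe --tags           # only found with --tags: prints 0.19.0-rc1

git tag -a 0.19.0 -m "0.19.0 release"   # annotated tag, as proposed
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m "post-release work"
git describe                  # prints 0.19.0-1-g<sha>; no --tags needed
```

Because plain git describe only considers annotated tags, a CI system can derive the current version from it without parsing configure.ac, which is the motivation stated above.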
[jira] [Commented] (MESOS-995) Extend Subprocess to support environment variables, changing user and working directory
[ https://issues.apache.org/jira/browse/MESOS-995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13958945#comment-13958945 ]

Bhuvan Arumugam commented on MESOS-995:
---------------------------------------

Nevermind. I'm unable to reproduce this failure. I'll report back if it is reproducible.

Extend Subprocess to support environment variables, changing user and working directory
---------------------------------------------------------------------------------------

                Key: MESOS-995
                URL: https://issues.apache.org/jira/browse/MESOS-995
            Project: Mesos
         Issue Type: Improvement
         Components: libprocess
           Reporter: Ian Downes
           Assignee: Dominic Hamon
           Priority: Minor
            Fix For: 0.19.0

These are frequently needed so we should support them in Subprocess.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Created] (MESOS-1191) OsTest and ProcTest unit tests flaky
Bhuvan Arumugam created MESOS-1191:
--------------------------------------

             Summary: OsTest and ProcTest unit tests flaky
                 Key: MESOS-1191
                 URL: https://issues.apache.org/jira/browse/MESOS-1191
             Project: Mesos
          Issue Type: Bug
          Components: build
    Affects Versions: 0.19.0
            Reporter: Bhuvan Arumugam
            Priority: Minor

It doesn't happen all the time.

{code}
$ make check -j3
.
.
[ RUN      ] ProcTest.MultipleThreads
../../../../3rdparty/libprocess/3rdparty/stout/tests/proc_tests.cpp:181: Failure
Value of: (procThreads).get()
  Actual: { 10050, 10053 }
Expected: childThreads
Which is: { 0 }
[  FAILED  ] ProcTest.MultipleThreads (2 ms)
.
.
[ RUN      ] OsTest.children
../../../../3rdparty/libprocess/3rdparty/stout/tests/os_tests.cpp:361: Failure
Value of: children.get().size()
  Actual: 1
Expected: 0u
Which is: 0
../../../../3rdparty/libprocess/3rdparty/stout/tests/os_tests.cpp:379: Failure
Value of: children.get().size()
  Actual: 2
Expected: 1u
Which is: 1
[  FAILED  ] OsTest.children (31 ms)
[ RUN      ] OsTest.process
[       OK ] OsTest.process (0 ms)
[ RUN      ] OsTest.processes
[       OK ] OsTest.processes (11 ms)
[ RUN      ] OsTest.killtree
[       OK ] OsTest.killtree (64 ms)
[ RUN      ] OsTest.pstree
../../../../3rdparty/libprocess/3rdparty/stout/tests/os_tests.cpp:604: Failure
Value of: tree.get().children.size()
  Actual: 1
Expected: 0u
Which is: 0
-+- 10048 ./stout-tests
 \--- 10050 ()
[  FAILED  ] OsTest.pstree (21 ms)
.
.
[----------] Global test environment tear-down
[==========] 118 tests from 24 test cases ran. (480 ms total)
[  PASSED  ] 115 tests.
[  FAILED  ] 3 tests, listed below:
[  FAILED  ] ProcTest.MultipleThreads
[  FAILED  ] OsTest.children
[  FAILED  ] OsTest.pstree
{code}

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Commented] (MESOS-1191) OsTest and ProcTest unit tests flaky
[ https://issues.apache.org/jira/browse/MESOS-1191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13959186#comment-13959186 ]

Bhuvan Arumugam commented on MESOS-1191:
----------------------------------------

It also happens with the review-bot (https://reviews.apache.org/r/19259/) at times.

OsTest and ProcTest unit tests flaky
------------------------------------

                Key: MESOS-1191
                URL: https://issues.apache.org/jira/browse/MESOS-1191
            Project: Mesos
          Issue Type: Bug
          Components: build
    Affects Versions: 0.19.0
            Reporter: Bhuvan Arumugam
            Priority: Minor
             Labels: test-fail

It doesn't happen all the time.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Commented] (MESOS-995) Extend Subprocess to support environment variables, changing user and working directory
[ https://issues.apache.org/jira/browse/MESOS-995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13959187#comment-13959187 ]

Bhuvan Arumugam commented on MESOS-995:
---------------------------------------

Filed MESOS-1191 to track the fix for the flaky tests.

Extend Subprocess to support environment variables, changing user and working directory
---------------------------------------------------------------------------------------

                Key: MESOS-995
                URL: https://issues.apache.org/jira/browse/MESOS-995
            Project: Mesos
         Issue Type: Improvement
         Components: libprocess
           Reporter: Ian Downes
           Assignee: Dominic Hamon
           Priority: Minor
            Fix For: 0.19.0

These are frequently needed so we should support them in Subprocess.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Updated] (MESOS-1191) ProcTest unit tests flaky
[ https://issues.apache.org/jira/browse/MESOS-1191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bhuvan Arumugam updated MESOS-1191:
-----------------------------------

    Summary: ProcTest unit tests flaky  (was: OsTest and ProcTest unit tests flaky)

ProcTest unit tests flaky
-------------------------

                Key: MESOS-1191
                URL: https://issues.apache.org/jira/browse/MESOS-1191
            Project: Mesos
          Issue Type: Bug
          Components: test
    Affects Versions: 0.19.0
            Reporter: Bhuvan Arumugam
            Priority: Minor

It doesn't happen all the time.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Commented] (MESOS-995) Extend Subprocess to support environment variables, changing user and working directory
[ https://issues.apache.org/jira/browse/MESOS-995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13958383#comment-13958383 ]

Bhuvan Arumugam commented on MESOS-995:
---------------------------------------

It seems to break the OsTest and ProcTest unit tests. Let me know if I should upload verbose log output, if it isn't obvious.

$ make check -j3
.
.
[ RUN      ] ProcTest.MultipleThreads
../../../../3rdparty/libprocess/3rdparty/stout/tests/proc_tests.cpp:181: Failure
Value of: (procThreads).get()
  Actual: { 10050, 10053 }
Expected: childThreads
Which is: { 0 }
[  FAILED  ] ProcTest.MultipleThreads (2 ms)
.
.
[ RUN      ] OsTest.children
../../../../3rdparty/libprocess/3rdparty/stout/tests/os_tests.cpp:361: Failure
Value of: children.get().size()
  Actual: 1
Expected: 0u
Which is: 0
../../../../3rdparty/libprocess/3rdparty/stout/tests/os_tests.cpp:379: Failure
Value of: children.get().size()
  Actual: 2
Expected: 1u
Which is: 1
[  FAILED  ] OsTest.children (31 ms)
[ RUN      ] OsTest.process
[       OK ] OsTest.process (0 ms)
[ RUN      ] OsTest.processes
[       OK ] OsTest.processes (11 ms)
[ RUN      ] OsTest.killtree
[       OK ] OsTest.killtree (64 ms)
[ RUN      ] OsTest.pstree
../../../../3rdparty/libprocess/3rdparty/stout/tests/os_tests.cpp:604: Failure
Value of: tree.get().children.size()
  Actual: 1
Expected: 0u
Which is: 0
-+- 10048 ./stout-tests
 \--- 10050 ()
[  FAILED  ] OsTest.pstree (21 ms)
.
.
[----------] Global test environment tear-down
[==========] 118 tests from 24 test cases ran. (480 ms total)
[  PASSED  ] 115 tests.
[  FAILED  ] 3 tests, listed below:
[  FAILED  ] ProcTest.MultipleThreads
[  FAILED  ] OsTest.children
[  FAILED  ] OsTest.pstree

Extend Subprocess to support environment variables, changing user and working directory
----------------------------------------------------------------------------------------

                Key: MESOS-995
                URL: https://issues.apache.org/jira/browse/MESOS-995
            Project: Mesos
         Issue Type: Improvement
         Components: libprocess
           Reporter: Ian Downes
           Assignee: Dominic Hamon
           Priority: Minor
            Fix For: 0.19.0

These are frequently needed, so we should support them in Subprocess.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Commented] (MESOS-1156) make check-local fail on OEL6
[ https://issues.apache.org/jira/browse/MESOS-1156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951178#comment-13951178 ]

Bhuvan Arumugam commented on MESOS-1156:
----------------------------------------

Thank you, Ben. That fixed it. Leaving the ticket open to fix the docs and/or the Getting Started guide.

make check-local fail on OEL6
-----------------------------

                Key: MESOS-1156
                URL: https://issues.apache.org/jira/browse/MESOS-1156
            Project: Mesos
         Issue Type: Bug
         Components: build
   Affects Versions: 0.19.0
        Environment: Oracle Linux 6.5
           Reporter: Bhuvan Arumugam
        Attachments: MESOS-1156-test.txt

{code}
[bhuvan@build mesos]$ uname -a
Linux build 2.6.32-431.el6.x86_64 #1 SMP Wed Nov 20 23:56:07 PST 2013 x86_64 x86_64 x86_64 GNU/Linux
[bhuvan@build mesos]$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.5 (Santiago)
{code}

{code}
$ make check -j3
.
.
make check-local
make[3]: Entering directory `/Volumes/apple/mesos/mesos/build/src'
./mesos-tests
Source directory: /Volumes/apple/mesos/mesos
Build directory: /Volumes/apple/mesos/mesos/build
Note: Google Test filter = *-CgroupsAnyHierarchyTest.ROOT_CGROUPS_Enabled:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Subsystems:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Mounted:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Get:CgroupsAnyHierarchyTest.ROOT_CGROUPS_NestedCgroups:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Tasks:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Read:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Write:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Cfs_Big_Quota:CgroupsAnyHierarchyWithCpuMemoryTest.ROOT_CGROUPS_Busy:CgroupsAnyHierarchyWithCpuMemoryTest.ROOT_CGROUPS_SubsystemsHierarchy:CgroupsAnyHierarchyWithCpuMemoryTest.ROOT_CGROUPS_MountedSubsystems:CgroupsAnyHierarchyWithCpuMemoryTest.ROOT_CGROUPS_CreateRemove:CgroupsAnyHierarchyWithCpuMemoryTest.ROOT_CGROUPS_Listen:CgroupsNoHierarchyTest.ROOT_CGROUPS_NOHIERARCHY_MountUnmountHierarchy:CgroupsAnyHierarchyWithCpuAcctMemoryTest.ROOT_CGROUPS_Stat:CgroupsAnyHierarchyWithFreezerTest.ROOT_CGROUPS_Freeze:CgroupsAnyHierarchyWithFreezerTest.ROOT_CGROUPS_Kill:CgroupsAnyHierarchyWithFreezerTest.ROOT_CGROUPS_Destroy:ContainerizerTest.ROOT_CGROUPS_BalloonFramework:CpuIsolatorTest/1.UserCpuUsage:CpuIsolatorTest/1.SystemCpuUsage:LimitedCpuIsolatorTest.ROOT_CGROUPS_Cfs:LimitedCpuIsolatorTest.ROOT_CGROUPS_Cfs_Big_Quota:MemIsolatorTest/0.MemUsage:MemIsolatorTest/1.MemUsage:
[==========] Running 280 tests from 49 test cases.
[----------] Global test environment set-up.
[----------] 2 tests from AllocatorZooKeeperTest/0, where TypeParam = mesos::internal::master::allocator::HierarchicalAllocatorProcess<mesos::internal::master::allocator::DRFSorter, mesos::internal::master::allocator::DRFSorter>
[ RUN      ] AllocatorZooKeeperTest/0.FrameworkReregistersFirst
../../src/tests/allocator_zookeeper_tests.cpp:147: Failure
Failed to wait 10secs for status
2014-03-28 01:00:19,559:23815(0x7f33fbfff700):ZOO_ERROR@handle_socket_error_msg@1643: Socket [127.0.0.1:40511] zk retcode=-7, errno=110(Connection timed out): connection to 127.0.0.1:40511 timed out (exceeded timeout by 2993ms)
2014-03-28 01:00:19,559:23815(0x7f3419ac2700):ZOO_ERROR@handle_socket_error_msg@1643: Socket [127.0.0.1:40511] zk retcode=-7, errno=110(Connection timed out): connection to 127.0.0.1:40511 timed out (exceeded timeout by 2994ms)
2014-03-28 01:00:19,559:23815(0x7f33fabfd700):ZOO_ERROR@handle_socket_error_msg@1643: Socket [127.0.0.1:40511] zk retcode=-7, errno=110(Connection timed out): connection to 127.0.0.1:40511 timed out (exceeded timeout by 2993ms)
2014-03-28 01:00:19,559:23815(0x7f341aec4700):ZOO_ERROR@handle_socket_error_msg@1643: Socket [127.0.0.1:40511] zk retcode=-7, errno=110(Connection timed out): connection to 127.0.0.1:40511 timed out (exceeded timeout by 2993ms)
2014-03-28 01:00:22,896:23815(0x7f3419ac2700):ZOO_ERROR@handle_socket_error_msg@1739: Socket [127.0.0.1:40511] zk retcode=-112, errno=116(Stale file handle): sessionId=0x145062af1430001 has expired.
Lost leadership... committing suicide!
2014-03-28 01:00:22,897:23815(0x7f341aec4700):ZOO_ERROR@handle_socket_error_msg@1739: Socket [127.0.0.1:40511] zk retcode=-112, errno=116(Stale file handle): sessionId=0x145062af143 has expired.
../../3rdparty/libprocess/include/process/gmock.hpp:138: ERROR: this mock object (used in test AllocatorZooKeeperTest/0.FrameworkReregistersFirst) should be deleted but never is. Its address is @0x2a391e8.
../../src/tests/containerizer.cpp:183: ERROR: this mock object (used in test AllocatorZooKeeperTest/0.FrameworkReregistersFirst) should be deleted but never is. Its address is @0x2a47220.
../../src/tests/allocator_zookeeper_tests.cpp:128: ERROR: this mock object (used in test AllocatorZooKeeperTest/0
{code}
[jira] [Created] (MESOS-1156) make check-local fail on OEL6
Bhuvan Arumugam created MESOS-1156:
-----------------------------------

            Summary: make check-local fail on OEL6
                Key: MESOS-1156
                URL: https://issues.apache.org/jira/browse/MESOS-1156
            Project: Mesos
         Issue Type: Bug
         Components: build
   Affects Versions: 0.19.0
        Environment: Oracle Linux 6.5
           Reporter: Bhuvan Arumugam

[bhuvan@build mesos]$ uname -a
Linux build 2.6.32-431.el6.x86_64 #1 SMP Wed Nov 20 23:56:07 PST 2013 x86_64 x86_64 x86_64 GNU/Linux
[bhuvan@build mesos]$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.5 (Santiago)

$ make check -j3
.
.
make check-local
make[3]: Entering directory `/Volumes/apple/mesos/mesos/build/src'
./mesos-tests
Source directory: /Volumes/apple/mesos/mesos
Build directory: /Volumes/apple/mesos/mesos/build
Note: Google Test filter = *-CgroupsAnyHierarchyTest.ROOT_CGROUPS_Enabled:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Subsystems:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Mounted:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Get:CgroupsAnyHierarchyTest.ROOT_CGROUPS_NestedCgroups:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Tasks:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Read:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Write:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Cfs_Big_Quota:CgroupsAnyHierarchyWithCpuMemoryTest.ROOT_CGROUPS_Busy:CgroupsAnyHierarchyWithCpuMemoryTest.ROOT_CGROUPS_SubsystemsHierarchy:CgroupsAnyHierarchyWithCpuMemoryTest.ROOT_CGROUPS_MountedSubsystems:CgroupsAnyHierarchyWithCpuMemoryTest.ROOT_CGROUPS_CreateRemove:CgroupsAnyHierarchyWithCpuMemoryTest.ROOT_CGROUPS_Listen:CgroupsNoHierarchyTest.ROOT_CGROUPS_NOHIERARCHY_MountUnmountHierarchy:CgroupsAnyHierarchyWithCpuAcctMemoryTest.ROOT_CGROUPS_Stat:CgroupsAnyHierarchyWithFreezerTest.ROOT_CGROUPS_Freeze:CgroupsAnyHierarchyWithFreezerTest.ROOT_CGROUPS_Kill:CgroupsAnyHierarchyWithFreezerTest.ROOT_CGROUPS_Destroy:ContainerizerTest.ROOT_CGROUPS_BalloonFramework:CpuIsolatorTest/1.UserCpuUsage:CpuIsolatorTest/1.SystemCpuUsage:LimitedCpuIsolatorTest.ROOT_CGROUPS_Cfs:LimitedCpuIsolatorTest.ROOT_CGROUPS_Cfs_Big_Quota:MemIsolatorTest/0.MemUsage:MemIsolatorTest/1.MemUsage:
[==========] Running 280 tests from 49 test cases.
[----------] Global test environment set-up.
[----------] 2 tests from AllocatorZooKeeperTest/0, where TypeParam = mesos::internal::master::allocator::HierarchicalAllocatorProcess<mesos::internal::master::allocator::DRFSorter, mesos::internal::master::allocator::DRFSorter>
[ RUN      ] AllocatorZooKeeperTest/0.FrameworkReregistersFirst
../../src/tests/allocator_zookeeper_tests.cpp:147: Failure
Failed to wait 10secs for status
2014-03-28 01:00:19,559:23815(0x7f33fbfff700):ZOO_ERROR@handle_socket_error_msg@1643: Socket [127.0.0.1:40511] zk retcode=-7, errno=110(Connection timed out): connection to 127.0.0.1:40511 timed out (exceeded timeout by 2993ms)
2014-03-28 01:00:19,559:23815(0x7f3419ac2700):ZOO_ERROR@handle_socket_error_msg@1643: Socket [127.0.0.1:40511] zk retcode=-7, errno=110(Connection timed out): connection to 127.0.0.1:40511 timed out (exceeded timeout by 2994ms)
2014-03-28 01:00:19,559:23815(0x7f33fabfd700):ZOO_ERROR@handle_socket_error_msg@1643: Socket [127.0.0.1:40511] zk retcode=-7, errno=110(Connection timed out): connection to 127.0.0.1:40511 timed out (exceeded timeout by 2993ms)
2014-03-28 01:00:19,559:23815(0x7f341aec4700):ZOO_ERROR@handle_socket_error_msg@1643: Socket [127.0.0.1:40511] zk retcode=-7, errno=110(Connection timed out): connection to 127.0.0.1:40511 timed out (exceeded timeout by 2993ms)
2014-03-28 01:00:22,896:23815(0x7f3419ac2700):ZOO_ERROR@handle_socket_error_msg@1739: Socket [127.0.0.1:40511] zk retcode=-112, errno=116(Stale file handle): sessionId=0x145062af1430001 has expired.
Lost leadership... committing suicide!
2014-03-28 01:00:22,897:23815(0x7f341aec4700):ZOO_ERROR@handle_socket_error_msg@1739: Socket [127.0.0.1:40511] zk retcode=-112, errno=116(Stale file handle): sessionId=0x145062af143 has expired.
../../3rdparty/libprocess/include/process/gmock.hpp:138: ERROR: this mock object (used in test AllocatorZooKeeperTest/0.FrameworkReregistersFirst) should be deleted but never is. Its address is @0x2a391e8.
../../src/tests/containerizer.cpp:183: ERROR: this mock object (used in test AllocatorZooKeeperTest/0.FrameworkReregistersFirst) should be deleted but never is. Its address is @0x2a47220.
../../src/tests/allocator_zookeeper_tests.cpp:128: ERROR: this mock object (used in test AllocatorZooKeeperTest/0.FrameworkReregistersFirst) should be deleted but never is. Its address is @0x7fff0bc53630.
../../src/tests/allocator_zookeeper_tests.cpp:136: ERROR: this mock object (used in test AllocatorZooKeeperTest/0.FrameworkReregistersFirst) should be deleted but never is. Its address is @0x7fff0bc53ba0.
ERROR: 4 leaked mock objects found at program exit.
make[3]: *** [check-local] Error 1
make[3]: Leaving directory `/Volumes/apple
[jira] [Updated] (MESOS-1156) make check-local fail on OEL6
[ https://issues.apache.org/jira/browse/MESOS-1156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bhuvan Arumugam updated MESOS-1156:
-----------------------------------
    Attachment: MESOS-1156-test.txt

Please find attached the output of the following command:

$ cd mesos/build
$ ./bin/mesos-tests.sh --verbose

I think I have all the sasl rpms installed:

[bhuvan@build ~]$ rpm -qa | grep sasl -i
libgsasl-1.4.0-4.el6.x86_64
cyrus-sasl-plain-2.1.23-13.el6_3.1.x86_64
cyrus-sasl-2.1.23-13.el6_3.1.x86_64
cyrus-sasl-devel-2.1.23-13.el6_3.1.x86_64
libgsasl-devel-1.4.0-4.el6.x86_64
cyrus-sasl-lib-2.1.23-13.el6_3.1.x86_64

make check-local fail on OEL6
-----------------------------

                Key: MESOS-1156
                URL: https://issues.apache.org/jira/browse/MESOS-1156
            Project: Mesos
         Issue Type: Bug
         Components: build
   Affects Versions: 0.19.0
        Environment: Oracle Linux 6.5
           Reporter: Bhuvan Arumugam
        Attachments: MESOS-1156-test.txt

[bhuvan@build mesos]$ uname -a
Linux build 2.6.32-431.el6.x86_64 #1 SMP Wed Nov 20 23:56:07 PST 2013 x86_64 x86_64 x86_64 GNU/Linux
[bhuvan@build mesos]$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.5 (Santiago)

$ make check -j3
.
.
make check-local
make[3]: Entering directory `/Volumes/apple/mesos/mesos/build/src'
./mesos-tests
Source directory: /Volumes/apple/mesos/mesos
Build directory: /Volumes/apple/mesos/mesos/build
Note: Google Test filter = *-CgroupsAnyHierarchyTest.ROOT_CGROUPS_Enabled:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Subsystems:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Mounted:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Get:CgroupsAnyHierarchyTest.ROOT_CGROUPS_NestedCgroups:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Tasks:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Read:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Write:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Cfs_Big_Quota:CgroupsAnyHierarchyWithCpuMemoryTest.ROOT_CGROUPS_Busy:CgroupsAnyHierarchyWithCpuMemoryTest.ROOT_CGROUPS_SubsystemsHierarchy:CgroupsAnyHierarchyWithCpuMemoryTest.ROOT_CGROUPS_MountedSubsystems:CgroupsAnyHierarchyWithCpuMemoryTest.ROOT_CGROUPS_CreateRemove:CgroupsAnyHierarchyWithCpuMemoryTest.ROOT_CGROUPS_Listen:CgroupsNoHierarchyTest.ROOT_CGROUPS_NOHIERARCHY_MountUnmountHierarchy:CgroupsAnyHierarchyWithCpuAcctMemoryTest.ROOT_CGROUPS_Stat:CgroupsAnyHierarchyWithFreezerTest.ROOT_CGROUPS_Freeze:CgroupsAnyHierarchyWithFreezerTest.ROOT_CGROUPS_Kill:CgroupsAnyHierarchyWithFreezerTest.ROOT_CGROUPS_Destroy:ContainerizerTest.ROOT_CGROUPS_BalloonFramework:CpuIsolatorTest/1.UserCpuUsage:CpuIsolatorTest/1.SystemCpuUsage:LimitedCpuIsolatorTest.ROOT_CGROUPS_Cfs:LimitedCpuIsolatorTest.ROOT_CGROUPS_Cfs_Big_Quota:MemIsolatorTest/0.MemUsage:MemIsolatorTest/1.MemUsage:
[==========] Running 280 tests from 49 test cases.
[----------] Global test environment set-up.
[----------] 2 tests from AllocatorZooKeeperTest/0, where TypeParam = mesos::internal::master::allocator::HierarchicalAllocatorProcess<mesos::internal::master::allocator::DRFSorter, mesos::internal::master::allocator::DRFSorter>
[ RUN      ] AllocatorZooKeeperTest/0.FrameworkReregistersFirst
../../src/tests/allocator_zookeeper_tests.cpp:147: Failure
Failed to wait 10secs for status
2014-03-28 01:00:19,559:23815(0x7f33fbfff700):ZOO_ERROR@handle_socket_error_msg@1643: Socket [127.0.0.1:40511] zk retcode=-7, errno=110(Connection timed out): connection to 127.0.0.1:40511 timed out (exceeded timeout by 2993ms)
2014-03-28 01:00:19,559:23815(0x7f3419ac2700):ZOO_ERROR@handle_socket_error_msg@1643: Socket [127.0.0.1:40511] zk retcode=-7, errno=110(Connection timed out): connection to 127.0.0.1:40511 timed out (exceeded timeout by 2994ms)
2014-03-28 01:00:19,559:23815(0x7f33fabfd700):ZOO_ERROR@handle_socket_error_msg@1643: Socket [127.0.0.1:40511] zk retcode=-7, errno=110(Connection timed out): connection to 127.0.0.1:40511 timed out (exceeded timeout by 2993ms)
2014-03-28 01:00:19,559:23815(0x7f341aec4700):ZOO_ERROR@handle_socket_error_msg@1643: Socket [127.0.0.1:40511] zk retcode=-7, errno=110(Connection timed out): connection to 127.0.0.1:40511 timed out (exceeded timeout by 2993ms)
2014-03-28 01:00:22,896:23815(0x7f3419ac2700):ZOO_ERROR@handle_socket_error_msg@1739: Socket [127.0.0.1:40511] zk retcode=-112, errno=116(Stale file handle): sessionId=0x145062af1430001 has expired.
Lost leadership... committing suicide!
2014-03-28 01:00:22,897:23815(0x7f341aec4700):ZOO_ERROR@handle_socket_error_msg@1739: Socket [127.0.0.1:40511] zk retcode=-112, errno=116(Stale file handle): sessionId=0x145062af143 has expired.
../../3rdparty/libprocess/include/process/gmock.hpp:138: ERROR: this mock object (used in test AllocatorZooKeeperTest/0.FrameworkReregistersFirst) should be deleted but never is. Its address is @0x2a391e8.
../../src/tests/containerizer.cpp:183: ERROR
[jira] [Comment Edited] (MESOS-1156) make check-local fail on OEL6
[ https://issues.apache.org/jira/browse/MESOS-1156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13950270#comment-13950270 ] Bhuvan Arumugam edited comment on MESOS-1156 at 3/28/14 1:58 AM: - Please find attached the output of following command: {{{ $ cd mesos/build $ ./bin/mesos-tests.sh --verbose }}} I think i've all the sasl rpms installed: {{{ [bhuvan@build ~]$ rpm -qa | grep sasl -i libgsasl-1.4.0-4.el6.x86_64 cyrus-sasl-plain-2.1.23-13.el6_3.1.x86_64 cyrus-sasl-2.1.23-13.el6_3.1.x86_64 cyrus-sasl-devel-2.1.23-13.el6_3.1.x86_64 libgsasl-devel-1.4.0-4.el6.x86_64 cyrus-sasl-lib-2.1.23-13.el6_3.1.x86_64 }}} was (Author: bhuvan): Please find attached the output of following command: $ cd mesos/build $ ./bin/mesos-tests.sh --verbose I think i've all the sasl rpms installed: [bhuvan@build ~]$ rpm -qa | grep sasl -i libgsasl-1.4.0-4.el6.x86_64 cyrus-sasl-plain-2.1.23-13.el6_3.1.x86_64 cyrus-sasl-2.1.23-13.el6_3.1.x86_64 cyrus-sasl-devel-2.1.23-13.el6_3.1.x86_64 libgsasl-devel-1.4.0-4.el6.x86_64 cyrus-sasl-lib-2.1.23-13.el6_3.1.x86_64 make check-local fail on OEL6 - Key: MESOS-1156 URL: https://issues.apache.org/jira/browse/MESOS-1156 Project: Mesos Issue Type: Bug Components: build Affects Versions: 0.19.0 Environment: Oracle Linux 6.5 Reporter: Bhuvan Arumugam Attachments: MESOS-1156-test.txt [bhuvan@build mesos]$ uname -a Linux build 2.6.32-431.el6.x86_64 #1 SMP Wed Nov 20 23:56:07 PST 2013 x86_64 x86_64 x86_64 GNU/Linux [bhuvan@build mesos]$ cat /etc/redhat-release Red Hat Enterprise Linux Server release 6.5 (Santiago) $ make check -j3 . . 
make check-local
make[3]: Entering directory `/Volumes/apple/mesos/mesos/build/src'
./mesos-tests
Source directory: /Volumes/apple/mesos/mesos
Build directory: /Volumes/apple/mesos/mesos/build
Note: Google Test filter = *-CgroupsAnyHierarchyTest.ROOT_CGROUPS_Enabled:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Subsystems:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Mounted:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Get:CgroupsAnyHierarchyTest.ROOT_CGROUPS_NestedCgroups:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Tasks:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Read:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Write:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Cfs_Big_Quota:CgroupsAnyHierarchyWithCpuMemoryTest.ROOT_CGROUPS_Busy:CgroupsAnyHierarchyWithCpuMemoryTest.ROOT_CGROUPS_SubsystemsHierarchy:CgroupsAnyHierarchyWithCpuMemoryTest.ROOT_CGROUPS_MountedSubsystems:CgroupsAnyHierarchyWithCpuMemoryTest.ROOT_CGROUPS_CreateRemove:CgroupsAnyHierarchyWithCpuMemoryTest.ROOT_CGROUPS_Listen:CgroupsNoHierarchyTest.ROOT_CGROUPS_NOHIERARCHY_MountUnmountHierarchy:CgroupsAnyHierarchyWithCpuAcctMemoryTest.ROOT_CGROUPS_Stat:CgroupsAnyHierarchyWithFreezerTest.ROOT_CGROUPS_Freeze:CgroupsAnyHierarchyWithFreezerTest.ROOT_CGROUPS_Kill:CgroupsAnyHierarchyWithFreezerTest.ROOT_CGROUPS_Destroy:ContainerizerTest.ROOT_CGROUPS_BalloonFramework:CpuIsolatorTest/1.UserCpuUsage:CpuIsolatorTest/1.SystemCpuUsage:LimitedCpuIsolatorTest.ROOT_CGROUPS_Cfs:LimitedCpuIsolatorTest.ROOT_CGROUPS_Cfs_Big_Quota:MemIsolatorTest/0.MemUsage:MemIsolatorTest/1.MemUsage
[==========] Running 280 tests from 49 test cases.
[----------] Global test environment set-up.
[----------] 2 tests from AllocatorZooKeeperTest/0, where TypeParam = mesos::internal::master::allocator::HierarchicalAllocatorProcess<mesos::internal::master::allocator::DRFSorter, mesos::internal::master::allocator::DRFSorter>
[ RUN      ] AllocatorZooKeeperTest/0.FrameworkReregistersFirst
../../src/tests/allocator_zookeeper_tests.cpp:147: Failure
Failed to wait 10secs for status
2014-03-28 01:00:19,559:23815(0x7f33fbfff700):ZOO_ERROR@handle_socket_error_msg@1643: Socket [127.0.0.1:40511] zk retcode=-7, errno=110(Connection timed out): connection to 127.0.0.1:40511 timed out (exceeded timeout by 2993ms)
2014-03-28 01:00:19,559:23815(0x7f3419ac2700):ZOO_ERROR@handle_socket_error_msg@1643: Socket [127.0.0.1:40511] zk retcode=-7, errno=110(Connection timed out): connection to 127.0.0.1:40511 timed out (exceeded timeout by 2994ms)
2014-03-28 01:00:19,559:23815(0x7f33fabfd700):ZOO_ERROR@handle_socket_error_msg@1643: Socket [127.0.0.1:40511] zk retcode=-7, errno=110(Connection timed out): connection to 127.0.0.1:40511 timed out (exceeded timeout by 2993ms)
2014-03-28 01:00:19,559:23815(0x7f341aec4700):ZOO_ERROR@handle_socket_error_msg@1643: Socket [127.0.0.1:40511] zk retcode=-7, errno=110(Connection timed out): connection to 127.0.0.1:40511 timed out (exceeded timeout by 2993ms)
2014-03-28 01:00:22,896:23815(0x7f3419ac2700):ZOO_ERROR@handle_socket_error_msg@1739: Socket [127.0.0.1:40511] zk retcode=-112, errno=116(Stale file handle): sessionId=0x145062af1430001 has
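[Editor's note] The comment above verifies the SASL packages by eyeballing `rpm -qa` output. A minimal sketch of the same check, run against the quoted package list rather than a live system; which names count as "required" is an assumption on my part, not something the report states:

```shell
# Quoted rpm -qa output from the comment above.
installed='libgsasl-1.4.0-4.el6.x86_64
cyrus-sasl-plain-2.1.23-13.el6_3.1.x86_64
cyrus-sasl-2.1.23-13.el6_3.1.x86_64
cyrus-sasl-devel-2.1.23-13.el6_3.1.x86_64
libgsasl-devel-1.4.0-4.el6.x86_64
cyrus-sasl-lib-2.1.23-13.el6_3.1.x86_64'

# Hypothetical "required" set; confirm each name appears with a version suffix.
for pkg in cyrus-sasl-lib cyrus-sasl-devel cyrus-sasl-plain; do
  if printf '%s\n' "$installed" | grep -q "^${pkg}-[0-9]"; then
    echo "${pkg}: present"
  else
    echo "${pkg}: MISSING"
  fi
done
```

On a live box the heredoc-style variable would be replaced by `installed=$(rpm -qa)`.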
[jira] [Updated] (MESOS-1156) make check-local fail on OEL6
[ https://issues.apache.org/jira/browse/MESOS-1156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bhuvan Arumugam updated MESOS-1156:
-----------------------------------

Description:
{code}
[bhuvan@build mesos]$ uname -a
Linux build 2.6.32-431.el6.x86_64 #1 SMP Wed Nov 20 23:56:07 PST 2013 x86_64 x86_64 x86_64 GNU/Linux
[bhuvan@build mesos]$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.5 (Santiago)
{code}
{code}
$ make check -j3
.
.
make check-local
make[3]: Entering directory `/Volumes/apple/mesos/mesos/build/src'
./mesos-tests
Source directory: /Volumes/apple/mesos/mesos
Build directory: /Volumes/apple/mesos/mesos/build
Note: Google Test filter = *-CgroupsAnyHierarchyTest.ROOT_CGROUPS_Enabled:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Subsystems:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Mounted:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Get:CgroupsAnyHierarchyTest.ROOT_CGROUPS_NestedCgroups:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Tasks:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Read:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Write:CgroupsAnyHierarchyTest.ROOT_CGROUPS_Cfs_Big_Quota:CgroupsAnyHierarchyWithCpuMemoryTest.ROOT_CGROUPS_Busy:CgroupsAnyHierarchyWithCpuMemoryTest.ROOT_CGROUPS_SubsystemsHierarchy:CgroupsAnyHierarchyWithCpuMemoryTest.ROOT_CGROUPS_MountedSubsystems:CgroupsAnyHierarchyWithCpuMemoryTest.ROOT_CGROUPS_CreateRemove:CgroupsAnyHierarchyWithCpuMemoryTest.ROOT_CGROUPS_Listen:CgroupsNoHierarchyTest.ROOT_CGROUPS_NOHIERARCHY_MountUnmountHierarchy:CgroupsAnyHierarchyWithCpuAcctMemoryTest.ROOT_CGROUPS_Stat:CgroupsAnyHierarchyWithFreezerTest.ROOT_CGROUPS_Freeze:CgroupsAnyHierarchyWithFreezerTest.ROOT_CGROUPS_Kill:CgroupsAnyHierarchyWithFreezerTest.ROOT_CGROUPS_Destroy:ContainerizerTest.ROOT_CGROUPS_BalloonFramework:CpuIsolatorTest/1.UserCpuUsage:CpuIsolatorTest/1.SystemCpuUsage:LimitedCpuIsolatorTest.ROOT_CGROUPS_Cfs:LimitedCpuIsolatorTest.ROOT_CGROUPS_Cfs_Big_Quota:MemIsolatorTest/0.MemUsage:MemIsolatorTest/1.MemUsage
[==========] Running 280 tests from 49 test cases.
[----------] Global test environment set-up.
[----------] 2 tests from AllocatorZooKeeperTest/0, where TypeParam = mesos::internal::master::allocator::HierarchicalAllocatorProcess<mesos::internal::master::allocator::DRFSorter, mesos::internal::master::allocator::DRFSorter>
[ RUN      ] AllocatorZooKeeperTest/0.FrameworkReregistersFirst
../../src/tests/allocator_zookeeper_tests.cpp:147: Failure
Failed to wait 10secs for status
2014-03-28 01:00:19,559:23815(0x7f33fbfff700):ZOO_ERROR@handle_socket_error_msg@1643: Socket [127.0.0.1:40511] zk retcode=-7, errno=110(Connection timed out): connection to 127.0.0.1:40511 timed out (exceeded timeout by 2993ms)
2014-03-28 01:00:19,559:23815(0x7f3419ac2700):ZOO_ERROR@handle_socket_error_msg@1643: Socket [127.0.0.1:40511] zk retcode=-7, errno=110(Connection timed out): connection to 127.0.0.1:40511 timed out (exceeded timeout by 2994ms)
2014-03-28 01:00:19,559:23815(0x7f33fabfd700):ZOO_ERROR@handle_socket_error_msg@1643: Socket [127.0.0.1:40511] zk retcode=-7, errno=110(Connection timed out): connection to 127.0.0.1:40511 timed out (exceeded timeout by 2993ms)
2014-03-28 01:00:19,559:23815(0x7f341aec4700):ZOO_ERROR@handle_socket_error_msg@1643: Socket [127.0.0.1:40511] zk retcode=-7, errno=110(Connection timed out): connection to 127.0.0.1:40511 timed out (exceeded timeout by 2993ms)
2014-03-28 01:00:22,896:23815(0x7f3419ac2700):ZOO_ERROR@handle_socket_error_msg@1739: Socket [127.0.0.1:40511] zk retcode=-112, errno=116(Stale file handle): sessionId=0x145062af1430001 has expired.
Lost leadership... committing suicide!
2014-03-28 01:00:22,897:23815(0x7f341aec4700):ZOO_ERROR@handle_socket_error_msg@1739: Socket [127.0.0.1:40511] zk retcode=-112, errno=116(Stale file handle): sessionId=0x145062af143 has expired.
../../3rdparty/libprocess/include/process/gmock.hpp:138: ERROR: this mock object (used in test AllocatorZooKeeperTest/0.FrameworkReregistersFirst) should be deleted but never is. Its address is @0x2a391e8.
../../src/tests/containerizer.cpp:183: ERROR: this mock object (used in test AllocatorZooKeeperTest/0.FrameworkReregistersFirst) should be deleted but never is. Its address is @0x2a47220.
../../src/tests/allocator_zookeeper_tests.cpp:128: ERROR: this mock object (used in test AllocatorZooKeeperTest/0.FrameworkReregistersFirst) should be deleted but never is. Its address is @0x7fff0bc53630.
../../src/tests/allocator_zookeeper_tests.cpp:136: ERROR: this mock object (used in test AllocatorZooKeeperTest/0.FrameworkReregistersFirst) should be deleted but never is. Its address is @0x7fff0bc53ba0.
ERROR: 4 leaked mock objects found at program exit.
make[3]: *** [check-local] Error 1
make[3]: Leaving directory `/Volumes/apple/mesos/mesos/build/src'
make[2]: *** [check-am] Error 2
make[2]: Leaving directory `/Volumes/apple/mesos/mesos/build/src'
make[1]: *** [check] Error 2
make[1]: Leaving directory
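[Editor's note] The leaked-mock errors are a side effect of the earlier wait timeout: the test aborts before its mocks are torn down, so the root cause is the ZooKeeper timeout, not the leaks. A minimal sketch for rerunning only the failing test, assuming `mesos-tests.sh` forwards flags to the underlying Google Test binary (the script path is taken from the commands quoted above; here the command is only echoed, not executed):

```shell
# Construct a rerun command for just the failing ZooKeeper allocator test.
# --gtest_filter is standard Google Test; forwarding via mesos-tests.sh is
# an assumption based on the invocation quoted in the report.
FILTER='AllocatorZooKeeperTest/0.FrameworkReregistersFirst'
echo "./bin/mesos-tests.sh --verbose --gtest_filter=${FILTER}"
```

Running the test in isolation, repeatedly, would show whether the 10-second wait is a flaky-environment timeout or a deterministic failure.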