[jira] [Commented] (MESOS-5151) Marathon Pass Dynamic Value with Parameters Resource in Docker Configuration

2016-04-10 Thread Jesada Gonkratoke (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-5151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234513#comment-15234513
 ] 

Jesada Gonkratoke commented on MESOS-5151:
--

Hi Greg Mann,
Thank you for the fast response. In my case, I want to use Marathon to scale 
new Docker containers, and each container must connect to the service 
discovery (Consul) running on its own Docker host, so I need to pass the 
current host's IP. With the Docker CLI I can add the host's hostname to a 
container, e.g. docker run -d -ti --name="test" --add-host 
"dockerhost:$(hostname -i)" centos:7.2, but I cannot do this with Marathon. I 
think this is important: when we need to run Docker containers in a flexible 
way, this becomes a blocker. In another case, I need to ship logs from each 
container to fluentd with a unique identifier, which I can do with the Docker 
CLI, e.g. docker run --log-driver=fluentd --log-opt 
fluentd-tag=docker.{{.ID}} ubuntu echo "...". I cannot do this with Marathon 
either.
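For reference, here is a rough sketch of what I mean (illustrative only; the 
container name, image, and fluentd tag are placeholders, not values from a 
real deployment):
{code}
# 1) Inject the Docker host's IP into the container via --add-host.
#    $(hostname -i) is expanded by the shell on the host at launch time.
docker run -d -ti --name="test" \
  --add-host "dockerhost:$(hostname -i)" \
  centos:7.2

# 2) Ship container logs to fluentd, tagged per container ID.
#    {{.ID}} is a Docker log-tag template resolved for each container.
docker run --log-driver=fluentd \
  --log-opt fluentd-tag=docker.{{.ID}} \
  ubuntu echo "..."

# With Marathon's "parameters" array the value is passed through as-is, so
# (as far as I understand) "$(hostname -i)" would reach Docker as a literal
# string rather than being expanded on the agent host.
{code}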

I am currently promoting the Mesos solution in my company, so I want to be 
sure the Mesos team can help me with this flexibility.


Best Regards,
Jesada Gonkratoke

> Marathon Pass Dynamic Value with Parameters Resource in Docker Configuration
> 
>
> Key: MESOS-5151
> URL: https://issues.apache.org/jira/browse/MESOS-5151
> Project: Mesos
>  Issue Type: Wish
>  Components: docker
>Affects Versions: 0.28.0
> Environment: software
>Reporter: Jesada Gonkratoke
>
> "parameters": [
>{ "key": "add-host", "value": "dockerhost:$(hostname -i)" }
>   ]
> },
> # I want to add dynamic host ip



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (MESOS-4459) Implement AuthN handling on the scheduler library

2016-04-10 Thread Anand Mazumdar (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-4459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anand Mazumdar reassigned MESOS-4459:
-

Assignee: Anand Mazumdar

> Implement AuthN handling on the scheduler library
> -
>
> Key: MESOS-4459
> URL: https://issues.apache.org/jira/browse/MESOS-4459
> Project: Mesos
>  Issue Type: Task
>Reporter: Anand Mazumdar
>Assignee: Anand Mazumdar
>  Labels: mesosphere
>
> Currently, we do not have the ability of passing {{Credentials}} via the 
> scheduler library. Once the master supports AuthN handling for the 
> {{/scheduler}} endpoint, we would need to add this support to the library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MESOS-3923) Implement AuthN handling in Master for the Scheduler endpoint

2016-04-10 Thread Anand Mazumdar (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-3923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anand Mazumdar updated MESOS-3923:
--
Shepherd: Vinod Kone
  Sprint: Mesosphere Sprint 33

> Implement AuthN handling in Master for the Scheduler endpoint
> -
>
> Key: MESOS-3923
> URL: https://issues.apache.org/jira/browse/MESOS-3923
> Project: Mesos
>  Issue Type: Bug
>  Components: framework, HTTP API, master
>Affects Versions: 0.25.0
>Reporter: Ben Whitehead
>Assignee: Anand Mazumdar
>  Labels: mesosphere
>
> If authentication (AuthN) is enabled on a master, frameworks attempting to use 
> the HTTP Scheduler API can't register.
> {code}
> $ cat /tmp/subscribe-943257503176798091.bin | http --print=HhBb --stream 
> --pretty=colors --auth verification:password1 POST :5050/api/v1/scheduler 
> Accept:application/x-protobuf Content-Type:application/x-protobuf
> POST /api/v1/scheduler HTTP/1.1
> Connection: keep-alive
> Content-Type: application/x-protobuf
> Accept-Encoding: gzip, deflate
> Accept: application/x-protobuf
> Content-Length: 126
> User-Agent: HTTPie/0.9.0
> Host: localhost:5050
> Authorization: Basic dmVyaWZpY2F0aW9uOnBhc3N3b3JkMQ==
> +-+
> | NOTE: binary data not shown in terminal |
> +-+
> HTTP/1.1 401 Unauthorized
> Date: Fri, 13 Nov 2015 20:00:45 GMT
> WWW-authenticate: Basic realm="Mesos master"
> Content-Length: 65
> HTTP schedulers are not supported when authentication is required
> {code}
> Authorization (AuthZ) is already supported for HTTP-based frameworks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MESOS-4459) Implement AuthN handling on the scheduler library

2016-04-10 Thread Anand Mazumdar (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-4459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anand Mazumdar updated MESOS-4459:
--
Sprint: Mesosphere Sprint 33

> Implement AuthN handling on the scheduler library
> -
>
> Key: MESOS-4459
> URL: https://issues.apache.org/jira/browse/MESOS-4459
> Project: Mesos
>  Issue Type: Task
>Reporter: Anand Mazumdar
>  Labels: mesosphere
>
> Currently, we do not have the ability of passing {{Credentials}} via the 
> scheduler library. Once the master supports AuthN handling for the 
> {{/scheduler}} endpoint, we would need to add this support to the library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MESOS-5148) Supporting Container Images in Mesos Containerizer doesn't work by using marathon api

2016-04-10 Thread haosdent (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-5148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234460#comment-15234460
 ] 

haosdent commented on MESOS-5148:
-

Do you mean logging into the container that was created through Marathon? Yes, 
you can always log in, but the container you log into did not use the image 
you specified in the JSON file.

> Supporting Container Images in Mesos Containerizer doesn't work by using 
> marathon api
> -
>
> Key: MESOS-5148
> URL: https://issues.apache.org/jira/browse/MESOS-5148
> Project: Mesos
>  Issue Type: Bug
>Reporter: wangqun
>
> Hi
> I use the Marathon API to create tasks to test supporting container 
> images in the Mesos containerizer.
> My steps are the following:
> 1) to run the process in master node.
> sudo /usr/sbin/mesos-master --zk=zk://10.0.0.4:2181/mesos --port=5050 
> --log_dir=/var/log/mesos --cluster=mesosbay --hostname=10.0.0.4 --ip=10.0.0.4 
> --quorum=1 --work_dir=/var/lib/mesos
> 2) to run the process in slave node.
> sudo /usr/sbin/mesos-slave --master=zk://10.0.0.4:2181/mesos 
> --log_dir=/var/log/mesos --containerizers=docker,mesos 
> --executor_registration_timeout=5mins --hostname=10.0.0.5 --ip=10.0.0.5 
> --isolation=docker/runtime,filesystem/linux --work_dir=/tmp/mesos/slave 
> --image_providers=docker --executor_environment_variables="{}"
> 3) to create one json file to specify the container to be managed by mesos.
> sudo  touch mesos.json
> sudo vim  mesos.json
> {
>   "container": {
> "type": "MESOS",
> "docker": {
>   "image": "library/redis"
> }
>   },
>   "id": "ubuntumesos",
>   "instances": 1,
>   "cpus": 0.5,
>   "mem": 512,
>   "uris": [],
>   "cmd": "ping 8.8.8.8"
> }
> 4)sudo curl -X POST -H "Content-Type: application/json" 
> localhost:8080/v2/apps -d...@mesos.json
> 5)sudo  curl http://localhost:8080/v2/tasks
> {"tasks":[{"id":"ubuntumesos.fc1879be-fc9f-11e5-81e0-024294de4967","host":"10.0.0.5","ipAddresses":[],"ports":[31597],"startedAt":"2016-04-07T09:06:24.900Z","stagedAt":"2016-04-07T09:06:16.611Z","version":"2016-04-07T09:06:14.354Z","slaveId":"058fb5a7-9273-4bfa-83bb-8cb091621e19-S1","appId":"/ubuntumesos","servicePorts":[1]}]}
> 6) sudo docker run -ti --net=host redis redis-cli  
> Could not connect to Redis at 127.0.0.1:6379: Connection refused
> not connected> 
> 7)
> I0409 01:43:48.774868 3492 slave.cpp:3886] Executor 
> 'ubuntumesos.a0b45838-fdf0-11e5-8b4b-0242e2dedfce' of framework 
> ffb72d7c-dd63-4c30-abea-bb746ab2c326- exited with status 0
> I0409 01:43:48.781307 3492 slave.cpp:3990] Cleaning up executor 
> 'ubuntumesos.a0b45838-fdf0-11e5-8b4b-0242e2dedfce' of framework 
> ffb72d7c-dd63-4c30-abea-bb746ab2c326- at executor(1)@10.0.0.5:60134
> I0409 01:43:48.808364 3492 slave.cpp:4078] Cleaning up framework 
> ffb72d7c-dd63-4c30-abea-bb746ab2c326-
> I0409 01:43:48.811336 3493 gc.cpp:55] Scheduling 
> '/tmp/mesos/slave/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-/executors/ubuntumesos.a0b45838-fdf0-11e5-8b4b-0242e2dedfce/runs/24d0872d-1ba1-4384-be11-a20c82893ea4'
>  for gc 6.9070953778days in the future
> I0409 01:43:48.817401 3493 gc.cpp:55] Scheduling 
> '/tmp/mesos/slave/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-/executors/ubuntumesos.a0b45838-fdf0-11e5-8b4b-0242e2dedfce'
>  for gc 6.9065992889days in the future
> I0409 01:43:48.823158 3493 gc.cpp:55] Scheduling 
> '/tmp/mesos/slave/meta/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-/executors/ubuntumesos.a0b45838-fdf0-11e5-8b4b-0242e2dedfce/runs/24d0872d-1ba1-4384-be11-a20c82893ea4'
>  for gc 6.9065273185days in the future
> I0409 01:43:48.826216 3491 status_update_manager.cpp:282] Closing status 
> update streams for framework ffb72d7c-dd63-4c30-abea-bb746ab2c326-
> I0409 01:43:48.835602 3493 gc.cpp:55] Scheduling 
> '/tmp/mesos/slave/meta/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-/executors/ubuntumesos.a0b45838-fdf0-11e5-8b4b-0242e2dedfce'
>  for gc 6.9064716444days in the future
> I0409 01:43:48.838580 3493 gc.cpp:55] Scheduling 
> '/tmp/mesos/slave/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-'
>  for gc 6.9041064889days in the future
> I0409 01:43:48.844699 3493 gc.cpp:55] Scheduling 
> '/tmp/mesos/slave/meta/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-'
>  for gc 6.902654163days in the future
> I0409 01:44:01.623440 3494 slave.cpp:4374] Current disk usage 27.10%. Max 
> allowed age: 4.403153217546436days
> I0409 01:44:32.339310 3494 slave.cpp:1361] Got assigned task 

[jira] [Commented] (MESOS-5148) Supporting Container Images in Mesos Containerizer doesn't work by using marathon api

2016-04-10 Thread wangqun (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-5148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234456#comment-15234456
 ] 

wangqun commented on MESOS-5148:


[~haosd...@gmail.com] I'm sorry for discussing my problem in JIRA; I'm still 
new to Mesos and not familiar with it. I will follow your suggestion next time.

I tested it again. I first created the Mesos containerizer task from 
mesos.json via the Marathon API. Indeed, there is no image to be found under 
/tmp/mesos/store/docker, so I can confirm that the Marathon API does not 
support container images in the Mesos containerizer. But why can I log into 
the container? Can you tell me the reason?

Thanks.

> Supporting Container Images in Mesos Containerizer doesn't work by using 
> marathon api
> -
>
> Key: MESOS-5148
> URL: https://issues.apache.org/jira/browse/MESOS-5148
> Project: Mesos
>  Issue Type: Bug
>Reporter: wangqun
>
> Hi
> I use the Marathon API to create tasks to test supporting container 
> images in the Mesos containerizer.
> My steps are the following:
> 1) to run the process in master node.
> sudo /usr/sbin/mesos-master --zk=zk://10.0.0.4:2181/mesos --port=5050 
> --log_dir=/var/log/mesos --cluster=mesosbay --hostname=10.0.0.4 --ip=10.0.0.4 
> --quorum=1 --work_dir=/var/lib/mesos
> 2) to run the process in slave node.
> sudo /usr/sbin/mesos-slave --master=zk://10.0.0.4:2181/mesos 
> --log_dir=/var/log/mesos --containerizers=docker,mesos 
> --executor_registration_timeout=5mins --hostname=10.0.0.5 --ip=10.0.0.5 
> --isolation=docker/runtime,filesystem/linux --work_dir=/tmp/mesos/slave 
> --image_providers=docker --executor_environment_variables="{}"
> 3) to create one json file to specify the container to be managed by mesos.
> sudo  touch mesos.json
> sudo vim  mesos.json
> {
>   "container": {
> "type": "MESOS",
> "docker": {
>   "image": "library/redis"
> }
>   },
>   "id": "ubuntumesos",
>   "instances": 1,
>   "cpus": 0.5,
>   "mem": 512,
>   "uris": [],
>   "cmd": "ping 8.8.8.8"
> }
> 4)sudo curl -X POST -H "Content-Type: application/json" 
> localhost:8080/v2/apps -d...@mesos.json
> 5)sudo  curl http://localhost:8080/v2/tasks
> {"tasks":[{"id":"ubuntumesos.fc1879be-fc9f-11e5-81e0-024294de4967","host":"10.0.0.5","ipAddresses":[],"ports":[31597],"startedAt":"2016-04-07T09:06:24.900Z","stagedAt":"2016-04-07T09:06:16.611Z","version":"2016-04-07T09:06:14.354Z","slaveId":"058fb5a7-9273-4bfa-83bb-8cb091621e19-S1","appId":"/ubuntumesos","servicePorts":[1]}]}
> 6) sudo docker run -ti --net=host redis redis-cli  
> Could not connect to Redis at 127.0.0.1:6379: Connection refused
> not connected> 
> 7)
> I0409 01:43:48.774868 3492 slave.cpp:3886] Executor 
> 'ubuntumesos.a0b45838-fdf0-11e5-8b4b-0242e2dedfce' of framework 
> ffb72d7c-dd63-4c30-abea-bb746ab2c326- exited with status 0
> I0409 01:43:48.781307 3492 slave.cpp:3990] Cleaning up executor 
> 'ubuntumesos.a0b45838-fdf0-11e5-8b4b-0242e2dedfce' of framework 
> ffb72d7c-dd63-4c30-abea-bb746ab2c326- at executor(1)@10.0.0.5:60134
> I0409 01:43:48.808364 3492 slave.cpp:4078] Cleaning up framework 
> ffb72d7c-dd63-4c30-abea-bb746ab2c326-
> I0409 01:43:48.811336 3493 gc.cpp:55] Scheduling 
> '/tmp/mesos/slave/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-/executors/ubuntumesos.a0b45838-fdf0-11e5-8b4b-0242e2dedfce/runs/24d0872d-1ba1-4384-be11-a20c82893ea4'
>  for gc 6.9070953778days in the future
> I0409 01:43:48.817401 3493 gc.cpp:55] Scheduling 
> '/tmp/mesos/slave/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-/executors/ubuntumesos.a0b45838-fdf0-11e5-8b4b-0242e2dedfce'
>  for gc 6.9065992889days in the future
> I0409 01:43:48.823158 3493 gc.cpp:55] Scheduling 
> '/tmp/mesos/slave/meta/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-/executors/ubuntumesos.a0b45838-fdf0-11e5-8b4b-0242e2dedfce/runs/24d0872d-1ba1-4384-be11-a20c82893ea4'
>  for gc 6.9065273185days in the future
> I0409 01:43:48.826216 3491 status_update_manager.cpp:282] Closing status 
> update streams for framework ffb72d7c-dd63-4c30-abea-bb746ab2c326-
> I0409 01:43:48.835602 3493 gc.cpp:55] Scheduling 
> '/tmp/mesos/slave/meta/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-/executors/ubuntumesos.a0b45838-fdf0-11e5-8b4b-0242e2dedfce'
>  for gc 6.9064716444days in the future
> I0409 01:43:48.838580 3493 gc.cpp:55] Scheduling 
> '/tmp/mesos/slave/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-'
>  for gc 6.9041064889days in the future
> I0409 01:43:48.844699 3493 gc.cpp:55] Scheduling 
> 

[jira] [Commented] (MESOS-5048) MesosContainerizerSlaveRecoveryTest.ResourceStatistics is flaky

2016-04-10 Thread Jian Qiu (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-5048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234451#comment-15234451
 ] 

Jian Qiu commented on MESOS-5048:
-

Yes, that is what I run on my local machine, and I simply use ../configure. 
It happens almost every time I run ./bin/mesos-tests.sh 
--gtest_filter=MesosContainerizerSlaveRecoveryTest.ResourceStatistics 
--gtest_repeat=100 --gtest_break_on_failure. I also saw it once in RB.

> MesosContainerizerSlaveRecoveryTest.ResourceStatistics is flaky
> ---
>
> Key: MESOS-5048
> URL: https://issues.apache.org/jira/browse/MESOS-5048
> Project: Mesos
>  Issue Type: Bug
>  Components: tests
>Affects Versions: 0.28.0
> Environment: Ubuntu 15.04
>Reporter: Jian Qiu
>  Labels: flaky-test
>
> ./mesos-tests.sh 
> --gtest_filter=MesosContainerizerSlaveRecoveryTest.ResourceStatistics 
> --gtest_repeat=100 --gtest_break_on_failure
> This was found in RB and reproduced on my local machine. There are two types 
> of failures. However, the failure does not appear when verbose logging is enabled...
> {code}
> ../../src/tests/environment.cpp:790: Failure
> Failed
> Tests completed with child processes remaining:
> -+- 1446 /mesos/mesos-0.29.0/_build/src/.libs/lt-mesos-tests 
>  \-+- 9171 sh -c /mesos/mesos-0.29.0/_build/src/mesos-executor 
>\--- 9185 /mesos/mesos-0.29.0/_build/src/.libs/lt-mesos-executor 
> {code}
> And
> {code}
> I0328 15:42:36.982471  5687 exec.cpp:150] Version: 0.29.0
> I0328 15:42:37.008765  5708 exec.cpp:225] Executor registered on slave 
> 731fb93b-26fe-4c7c-a543-fc76f106a62e-S0
> Registered executor on mesos
> ../../src/tests/slave_recovery_tests.cpp:3506: Failure
> Value of: containers.get().size()
>   Actual: 0
> Expected: 1u
> Which is: 1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MESOS-5159) Add test to verify error when requesting fractional GPUs

2016-04-10 Thread Kevin Klues (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-5159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234449#comment-15234449
 ] 

Kevin Klues commented on MESOS-5159:


Currently they return a TASK_FAILED because we don't do the validation on the 
master, but rather in the prepare state of the isolator.  I talked with 
[~bmahler] about this on Friday though, and I think we are going to move it. We 
were hoping to avoid doing a special case validation for GPUs, but it seems 
pretty unavoidable if we want to provide intuitive semantics.
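For illustration, the current behavior can be reproduced with something along 
these lines (a rough sketch; the exact {{mesos-execute}} flags and resource 
string are examples and may differ by version):
{code}
# Request a fractional GPU for a trivial task. Today the task launches and the
# isolator rejects it at prepare time, which surfaces as TASK_FAILED; after
# the planned change the master would reject the request up front.
mesos-execute \
  --master=127.0.0.1:5050 \
  --name=fractional-gpu-test \
  --command="nvidia-smi" \
  --resources="cpus:0.1;mem:128;gpus:0.5"
{code}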

> Add test to verify error when requesting fractional GPUs
> 
>
> Key: MESOS-5159
> URL: https://issues.apache.org/jira/browse/MESOS-5159
> Project: Mesos
>  Issue Type: Task
>Reporter: Kevin Klues
>Assignee: Kevin Klues
>  Labels: gpu, mesosphere
>
> Fractional GPU requests should immediately cause a TASK_FAILED without ever 
> launching the task.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MESOS-5163) LKVM Containerization

2016-04-10 Thread Vaibhav Khanduja (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-5163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234446#comment-15234446
 ] 

Vaibhav Khanduja commented on MESOS-5163:
-

MESOS-2717 seems to have a bigger agenda: supporting KVM VMs under Mesos. 
Supporting KVM might also bring in support for lkvm, but it is important to 
address the big picture of PaaS and to support or extend only those options 
that build up the PaaS story.

> LKVM Containerization
> -
>
> Key: MESOS-5163
> URL: https://issues.apache.org/jira/browse/MESOS-5163
> Project: Mesos
>  Issue Type: Epic
>  Components: containerization
>Reporter: Vaibhav Khanduja
>  Labels: container, containerizer
>
> LKVM is a lightweight, kernel-based hypervisor. Since the hypervisor is 
> eventually designed to land inside the kernel code, it may be a good step to 
> consider supporting it as one of the container options. LKVM comes with the 
> advantage of being a lightweight container along with its own kernel 
> footprint, and having a separate kernel footprint goes a long way toward 
> solving the security issues of containers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MESOS-5163) LKVM Containerization

2016-04-10 Thread Vaibhav Khanduja (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-5163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234440#comment-15234440
 ] 

Vaibhav Khanduja commented on MESOS-5163:
-

Clear Containers is not a commercial technology; through it, Intel is 
demonstrating how lkvm can be used to support lightweight, kernel-based 
containers.

> LKVM Containerization
> -
>
> Key: MESOS-5163
> URL: https://issues.apache.org/jira/browse/MESOS-5163
> Project: Mesos
>  Issue Type: Epic
>  Components: containerization
>Reporter: Vaibhav Khanduja
>  Labels: container, containerizer
>
> LKVM is a lightweight, kernel-based hypervisor. Since the hypervisor is 
> eventually designed to land inside the kernel code, it may be a good step to 
> consider supporting it as one of the container options. LKVM comes with the 
> advantage of being a lightweight container along with its own kernel 
> footprint, and having a separate kernel footprint goes a long way toward 
> solving the security issues of containers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MESOS-5163) LKVM Containerization

2016-04-10 Thread Guangya Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-5163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234434#comment-15234434
 ] 

Guangya Liu commented on MESOS-5163:


I think this also relates to MESOS-2717; we can move these into the same 
category.

> LKVM Containerization
> -
>
> Key: MESOS-5163
> URL: https://issues.apache.org/jira/browse/MESOS-5163
> Project: Mesos
>  Issue Type: Epic
>  Components: containerization
>Reporter: Vaibhav Khanduja
>  Labels: container, containerizer
>
> LKVM is a lightweight, kernel-based hypervisor. Since the hypervisor is 
> eventually designed to land inside the kernel code, it may be a good step to 
> consider supporting it as one of the container options. LKVM comes with the 
> advantage of being a lightweight container along with its own kernel 
> footprint, and having a separate kernel footprint goes a long way toward 
> solving the security issues of containers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MESOS-4891) Add a '/containers' endpoint to the agent to list all the active containers.

2016-04-10 Thread Jay Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-4891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234422#comment-15234422
 ] 

Jay Guo commented on MESOS-4891:


Docs updated and patch submitted! Please take a look here: 
https://reviews.apache.org/r/45014/

Thanks!!

> Add a '/containers' endpoint to the agent to list all the active containers.
> 
>
> Key: MESOS-4891
> URL: https://issues.apache.org/jira/browse/MESOS-4891
> Project: Mesos
>  Issue Type: Improvement
>  Components: slave
>Reporter: Jie Yu
>Assignee: Jay Guo
>  Labels: mesosphere
>
> This endpoint will be similar to /monitor/statistics.json endpoint, but it'll 
> also contain the 'container_status' about the container (see ContainerStatus 
> in mesos.proto). We'll eventually deprecate the /monitor/statistics.json 
> endpoint.
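For illustration, a hypothetical query once the endpoint lands (a sketch only; 
the agent host placeholder and default port 5051 are assumptions, and the 
exact response schema is still under review):
{code}
# Ask the agent for its active containers; the response would mirror
# /monitor/statistics.json plus a per-container 'container_status' section.
curl -s http://<agent-host>:5051/containers
{code}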



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MESOS-5163) LKVM Containerization

2016-04-10 Thread Fan Du (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-5163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234420#comment-15234420
 ] 

Fan Du commented on MESOS-5163:
---

[~vaibhav_khanduja]
Does this ticket is intened for Intel Clear Container, which based on lkvm?

> LKVM Containerization
> -
>
> Key: MESOS-5163
> URL: https://issues.apache.org/jira/browse/MESOS-5163
> Project: Mesos
>  Issue Type: Epic
>  Components: containerization
>Reporter: Vaibhav Khanduja
>  Labels: container, containerizer
>
> LKVM is a lightweight, kernel-based hypervisor. Since the hypervisor is 
> eventually designed to land inside the kernel code, it may be a good step to 
> consider supporting it as one of the container options. LKVM comes with the 
> advantage of being a lightweight container along with its own kernel 
> footprint, and having a separate kernel footprint goes a long way toward 
> solving the security issues of containers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (MESOS-5163) LKVM Containerization

2016-04-10 Thread Fan Du (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-5163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234420#comment-15234420
 ] 

Fan Du edited comment on MESOS-5163 at 4/11/16 2:31 AM:


[~vaibhav_khanduja]
Is this ticket intended for Intel Clear Containers, which is based on lkvm?


was (Author: fan.du):
[~vaibhav_khanduja]
Does this ticket is intened for Intel Clear Container, which based on lkvm?

> LKVM Containerization
> -
>
> Key: MESOS-5163
> URL: https://issues.apache.org/jira/browse/MESOS-5163
> Project: Mesos
>  Issue Type: Epic
>  Components: containerization
>Reporter: Vaibhav Khanduja
>  Labels: container, containerizer
>
> LKVM is a lightweight, kernel-based hypervisor. Since the hypervisor is 
> eventually designed to land inside the kernel code, it may be a good step to 
> consider supporting it as one of the container options. LKVM comes with the 
> advantage of being a lightweight container along with its own kernel 
> footprint, and having a separate kernel footprint goes a long way toward 
> solving the security issues of containers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MESOS-5163) LKVM Containerization

2016-04-10 Thread haosdent (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-5163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

haosdent updated MESOS-5163:

Labels: container containerizer  (was: )

> LKVM Containerization
> -
>
> Key: MESOS-5163
> URL: https://issues.apache.org/jira/browse/MESOS-5163
> Project: Mesos
>  Issue Type: Epic
>  Components: containerization
>Reporter: Vaibhav Khanduja
>  Labels: container, containerizer
>
> LKVM is a lightweight, kernel-based hypervisor. Since the hypervisor is 
> eventually designed to land inside the kernel code, it may be a good step to 
> consider supporting it as one of the container options. LKVM comes with the 
> advantage of being a lightweight container along with its own kernel 
> footprint, and having a separate kernel footprint goes a long way toward 
> solving the security issues of containers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MESOS-5163) LKVM Containerization

2016-04-10 Thread Vaibhav Khanduja (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-5163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Khanduja updated MESOS-5163:

Description: LKVM is a lightweight, kernel-based hypervisor. Since the 
hypervisor is eventually designed to land inside the kernel code, it may be a 
good step to consider supporting it as one of the container options. LKVM 
comes with the advantage of being a lightweight container along with its own 
kernel footprint, and having a separate kernel footprint goes a long way 
toward solving the security issues of containers.

> LKVM Containerization
> -
>
> Key: MESOS-5163
> URL: https://issues.apache.org/jira/browse/MESOS-5163
> Project: Mesos
>  Issue Type: Improvement
>  Components: containerization
>Reporter: Vaibhav Khanduja
>
> LKVM is a lightweight, kernel-based hypervisor. Since the hypervisor is 
> eventually designed to land inside the kernel code, it may be a good step to 
> consider supporting it as one of the container options. LKVM comes with the 
> advantage of being a lightweight container along with its own kernel 
> footprint, and having a separate kernel footprint goes a long way toward 
> solving the security issues of containers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (MESOS-5163) LKVM Containerization

2016-04-10 Thread Vaibhav Khanduja (JIRA)
Vaibhav Khanduja created MESOS-5163:
---

 Summary: LKVM Containerization
 Key: MESOS-5163
 URL: https://issues.apache.org/jira/browse/MESOS-5163
 Project: Mesos
  Issue Type: Improvement
  Components: containerization
Reporter: Vaibhav Khanduja






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MESOS-4922) Setup proper /etc/hostname, /etc/hosts and /etc/resolv.conf for containers in network/cni isolator.

2016-04-10 Thread Avinash Sridharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-4922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Avinash Sridharan updated MESOS-4922:
-
Story Points: 5  (was: 1)

> Setup proper /etc/hostname, /etc/hosts and /etc/resolv.conf for containers in 
> network/cni isolator.
> ---
>
> Key: MESOS-4922
> URL: https://issues.apache.org/jira/browse/MESOS-4922
> Project: Mesos
>  Issue Type: Bug
>  Components: isolation
>Reporter: Qian Zhang
>Assignee: Avinash Sridharan
>  Labels: mesosphere
>
> The network/cni isolator needs to properly set up /etc/hostname and /etc/hosts 
> for the container with a hostname (e.g., randomly generated) and the assigned 
> IP returned by the CNI plugin.
> We should consider the following cases:
> 1) container is using host filesystem
> 2) container is using a different filesystem
> 3) custom executor and command executor
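To make the first case concrete, a rough sketch of the files the isolator 
would need to prepare (the rootfs path, hostname, and IP below are 
placeholders, not the isolator's actual implementation):
{code}
# Hypothetical example: write a generated hostname and the CNI-assigned IP
# into the container's /etc files. ROOTFS, CTR_HOSTNAME, and CTR_IP are
# placeholder values.
ROOTFS=/path/to/container/rootfs
CTR_HOSTNAME=3b0f8e2a
CTR_IP=192.168.1.2

echo "${CTR_HOSTNAME}" > "${ROOTFS}/etc/hostname"
cat > "${ROOTFS}/etc/hosts" <<EOF
127.0.0.1       localhost
${CTR_IP}       ${CTR_HOSTNAME}
EOF
cp /etc/resolv.conf "${ROOTFS}/etc/resolv.conf"
{code}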



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MESOS-4922) Setup proper /etc/hostname, /etc/hosts and /etc/resolv.conf for containers in network/cni isolator.

2016-04-10 Thread Avinash Sridharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-4922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Avinash Sridharan updated MESOS-4922:
-
Summary: Setup proper /etc/hostname, /etc/hosts and /etc/resolv.conf for 
containers in network/cni isolator.  (was: Setup proper /etc/hostname and 
/etc/hosts for containers in network/cni isolator.)

> Setup proper /etc/hostname, /etc/hosts and /etc/resolv.conf for containers in 
> network/cni isolator.
> ---
>
> Key: MESOS-4922
> URL: https://issues.apache.org/jira/browse/MESOS-4922
> Project: Mesos
>  Issue Type: Bug
>  Components: isolation
>Reporter: Qian Zhang
>Assignee: Avinash Sridharan
>  Labels: mesosphere
>
> The network/cni isolator needs to properly set up /etc/hostname and /etc/hosts 
> for the container with a hostname (e.g., randomly generated) and the assigned 
> IP returned by the CNI plugin.
> We should consider the following cases:
> 1) container is using host filesystem
> 2) container is using a different filesystem
> 3) custom executor and command executor



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MESOS-5148) Supporting Container Images in Mesos Containerizer doesn't work by using marathon api

2016-04-10 Thread Tim Anderegg (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-5148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234197#comment-15234197
 ] 

Tim Anderegg commented on MESOS-5148:
-

[~wangqun] Sorry, I misled you: I thought Marathon already supported the recent 
updates to the Mesos containerizer (i.e. support for Docker images), but as 
[~kaysoky] points out, it doesn't. The Marathon API theoretically supports 
something like this, but the implementation isn't there yet.

> Supporting Container Images in Mesos Containerizer doesn't work by using 
> marathon api
> -
>
> Key: MESOS-5148
> URL: https://issues.apache.org/jira/browse/MESOS-5148
> Project: Mesos
>  Issue Type: Bug
>Reporter: wangqun
>
> Hi
> I use the Marathon API to create tasks to test supporting container 
> images in the Mesos containerizer.
> My steps are the following:
> 1) to run the process in master node.
> sudo /usr/sbin/mesos-master --zk=zk://10.0.0.4:2181/mesos --port=5050 
> --log_dir=/var/log/mesos --cluster=mesosbay --hostname=10.0.0.4 --ip=10.0.0.4 
> --quorum=1 --work_dir=/var/lib/mesos
> 2) to run the process in slave node.
> sudo /usr/sbin/mesos-slave --master=zk://10.0.0.4:2181/mesos 
> --log_dir=/var/log/mesos --containerizers=docker,mesos 
> --executor_registration_timeout=5mins --hostname=10.0.0.5 --ip=10.0.0.5 
> --isolation=docker/runtime,filesystem/linux --work_dir=/tmp/mesos/slave 
> --image_providers=docker --executor_environment_variables="{}"
> 3) to create one json file to specify the container to be managed by mesos.
> sudo  touch mesos.json
> sudo vim  mesos.json
> {
>   "container": {
> "type": "MESOS",
> "docker": {
>   "image": "library/redis"
> }
>   },
>   "id": "ubuntumesos",
>   "instances": 1,
>   "cpus": 0.5,
>   "mem": 512,
>   "uris": [],
>   "cmd": "ping 8.8.8.8"
> }
> 4)sudo curl -X POST -H "Content-Type: application/json" 
> localhost:8080/v2/apps -d...@mesos.json
> 5)sudo  curl http://localhost:8080/v2/tasks
> {"tasks":[{"id":"ubuntumesos.fc1879be-fc9f-11e5-81e0-024294de4967","host":"10.0.0.5","ipAddresses":[],"ports":[31597],"startedAt":"2016-04-07T09:06:24.900Z","stagedAt":"2016-04-07T09:06:16.611Z","version":"2016-04-07T09:06:14.354Z","slaveId":"058fb5a7-9273-4bfa-83bb-8cb091621e19-S1","appId":"/ubuntumesos","servicePorts":[1]}]}
> 6) sudo docker run -ti --net=host redis redis-cli  
> Could not connect to Redis at 127.0.0.1:6379: Connection refused
> not connected> 
> 7)
> I0409 01:43:48.774868 3492 slave.cpp:3886] Executor 
> 'ubuntumesos.a0b45838-fdf0-11e5-8b4b-0242e2dedfce' of framework 
> ffb72d7c-dd63-4c30-abea-bb746ab2c326- exited with status 0
> I0409 01:43:48.781307 3492 slave.cpp:3990] Cleaning up executor 
> 'ubuntumesos.a0b45838-fdf0-11e5-8b4b-0242e2dedfce' of framework 
> ffb72d7c-dd63-4c30-abea-bb746ab2c326- at executor(1)@10.0.0.5:60134
> I0409 01:43:48.808364 3492 slave.cpp:4078] Cleaning up framework 
> ffb72d7c-dd63-4c30-abea-bb746ab2c326-
> I0409 01:43:48.811336 3493 gc.cpp:55] Scheduling 
> '/tmp/mesos/slave/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-/executors/ubuntumesos.a0b45838-fdf0-11e5-8b4b-0242e2dedfce/runs/24d0872d-1ba1-4384-be11-a20c82893ea4'
>  for gc 6.9070953778days in the future
> I0409 01:43:48.817401 3493 gc.cpp:55] Scheduling 
> '/tmp/mesos/slave/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-/executors/ubuntumesos.a0b45838-fdf0-11e5-8b4b-0242e2dedfce'
>  for gc 6.9065992889days in the future
> I0409 01:43:48.823158 3493 gc.cpp:55] Scheduling 
> '/tmp/mesos/slave/meta/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-/executors/ubuntumesos.a0b45838-fdf0-11e5-8b4b-0242e2dedfce/runs/24d0872d-1ba1-4384-be11-a20c82893ea4'
>  for gc 6.9065273185days in the future
> I0409 01:43:48.826216 3491 status_update_manager.cpp:282] Closing status 
> update streams for framework ffb72d7c-dd63-4c30-abea-bb746ab2c326-
> I0409 01:43:48.835602 3493 gc.cpp:55] Scheduling 
> '/tmp/mesos/slave/meta/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-/executors/ubuntumesos.a0b45838-fdf0-11e5-8b4b-0242e2dedfce'
>  for gc 6.9064716444days in the future
> I0409 01:43:48.838580 3493 gc.cpp:55] Scheduling 
> '/tmp/mesos/slave/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-'
>  for gc 6.9041064889days in the future
> I0409 01:43:48.844699 3493 gc.cpp:55] Scheduling 
> '/tmp/mesos/slave/meta/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-'
>  for gc 6.902654163days in the future
> I0409 01:44:01.623440 3494 slave.cpp:4374] Current disk 

[jira] [Commented] (MESOS-4705) Slave failed to sample container with perf event

2016-04-10 Thread haosdent (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-4705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234063#comment-15234063
 ] 

haosdent commented on MESOS-4705:
-

[~bmahler] Thanks a lot! [~fan.du] Would you mind updating the patch according 
to Ben's suggestions? Or I can help update it and have it credited to you.

> Slave failed to sample container with perf event
> 
>
> Key: MESOS-4705
> URL: https://issues.apache.org/jira/browse/MESOS-4705
> Project: Mesos
>  Issue Type: Bug
>  Components: cgroups, isolation
>Affects Versions: 0.27.1
>Reporter: Fan Du
>Assignee: Fan Du
>
> When sampling a container with perf events on CentOS 7 with kernel 
> 3.10.0-123.el7.x86_64, the slave complained with the error below:
> {code}
> E0218 16:32:00.591181  8376 perf_event.cpp:408] Failed to get perf sample: 
> Failed to parse perf sample: Failed to parse perf sample line 
> '25871993253,,cycles,mesos/5f23ffca-87ed-4ff6-84f2-6ec3d4098ab8,10059827422,100.00':
>  Unexpected number of fields
> {code}
> It's caused by the current perf format [assumption | 
> https://git-wip-us.apache.org/repos/asf?p=mesos.git;a=blob;f=src/linux/perf.cpp;h=1c113a2b3f57877e132bbd65e01fb2f045132128;hb=HEAD#l430]
>  made for kernel versions below 3.12.
> On the 3.10.0-123.el7.x86_64 kernel, the format has 6 tokens, as below:
> value,unit,event,cgroup,running,ratio
> A local modification fixed this error on my test bed; please review this 
> ticket.
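For context, a minimal sketch of the kind of perf invocation involved 
(assuming perf's CSV output mode; the exact arguments built by the perf_event 
isolator may differ):
{code}
# Sample the 'cycles' event for a Mesos container cgroup in CSV mode (-x,).
# On kernel 3.10.0-123.el7.x86_64 each output line carries six fields:
#   value,unit,event,cgroup,running,ratio
# e.g. 25871993253,,cycles,mesos/5f23ffca-...,10059827422,100.00
perf stat -a -x, -e cycles \
  --cgroup=mesos/<container-id> \
  -- sleep 1
{code}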



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MESOS-5148) Supporting Container Images in Mesos Containerizer doesn't work by using marathon api

2016-04-10 Thread haosdent (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-5148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234059#comment-15234059
 ] 

haosdent commented on MESOS-5148:
-

[~wangqun] For
{quote}
1)sudo docker images
I don't find the redis image.
{quote}
You cannot use the docker command to find the image when you use a Docker 
image with the {{MesosContainerizer}}; the image is stored under the directory 
set by the {{--docker_store_dir}} flag, whose default value is 
{{"/tmp/mesos/store/docker"}}.
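A quick way to check is shown below (assuming the default 
{{--docker_store_dir}}; adjust the path if the agent was started with a 
different value):
{code}
# List whatever the Mesos containerizer's Docker image provider has stored
# locally; an image pulled through the Mesos containerizer shows up here
# rather than in `docker images`.
ls -R /tmp/mesos/store/docker
{code}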

{quote}
Can it validate marathon api support mesos containerizer?
{quote}

No, it cannot, because you started your container through {{mesos-execute}} 
here instead of Marathon. By the way, would you mind sending further questions 
to the user mailing list instead of continuing to post them in JIRA? This 
ticket has already been closed, and JIRA usually isn't the right place to 
discuss usage questions. Thank you for your understanding!

> Supporting Container Images in Mesos Containerizer doesn't work by using 
> marathon api
> -
>
> Key: MESOS-5148
> URL: https://issues.apache.org/jira/browse/MESOS-5148
> Project: Mesos
>  Issue Type: Bug
>Reporter: wangqun
>
> Hi
> I use the Marathon API to create tasks to test supporting container 
> images in the Mesos containerizer.
> My steps are the following:
> 1) to run the process in master node.
> sudo /usr/sbin/mesos-master --zk=zk://10.0.0.4:2181/mesos --port=5050 
> --log_dir=/var/log/mesos --cluster=mesosbay --hostname=10.0.0.4 --ip=10.0.0.4 
> --quorum=1 --work_dir=/var/lib/mesos
> 2) to run the process in slave node.
> sudo /usr/sbin/mesos-slave --master=zk://10.0.0.4:2181/mesos 
> --log_dir=/var/log/mesos --containerizers=docker,mesos 
> --executor_registration_timeout=5mins --hostname=10.0.0.5 --ip=10.0.0.5 
> --isolation=docker/runtime,filesystem/linux --work_dir=/tmp/mesos/slave 
> --image_providers=docker --executor_environment_variables="{}"
> 3) to create one json file to specify the container to be managed by mesos.
> sudo  touch mesos.json
> sudo vim  mesos.json
> {
>   "container": {
> "type": "MESOS",
> "docker": {
>   "image": "library/redis"
> }
>   },
>   "id": "ubuntumesos",
>   "instances": 1,
>   "cpus": 0.5,
>   "mem": 512,
>   "uris": [],
>   "cmd": "ping 8.8.8.8"
> }
> 4)sudo curl -X POST -H "Content-Type: application/json" 
> localhost:8080/v2/apps -d...@mesos.json
> 5)sudo  curl http://localhost:8080/v2/tasks
> {"tasks":[{"id":"ubuntumesos.fc1879be-fc9f-11e5-81e0-024294de4967","host":"10.0.0.5","ipAddresses":[],"ports":[31597],"startedAt":"2016-04-07T09:06:24.900Z","stagedAt":"2016-04-07T09:06:16.611Z","version":"2016-04-07T09:06:14.354Z","slaveId":"058fb5a7-9273-4bfa-83bb-8cb091621e19-S1","appId":"/ubuntumesos","servicePorts":[1]}]}
> 6) sudo docker run -ti --net=host redis redis-cli  
> Could not connect to Redis at 127.0.0.1:6379: Connection refused
> not connected> 
> 7)
> I0409 01:43:48.774868 3492 slave.cpp:3886] Executor 
> 'ubuntumesos.a0b45838-fdf0-11e5-8b4b-0242e2dedfce' of framework 
> ffb72d7c-dd63-4c30-abea-bb746ab2c326- exited with status 0
> I0409 01:43:48.781307 3492 slave.cpp:3990] Cleaning up executor 
> 'ubuntumesos.a0b45838-fdf0-11e5-8b4b-0242e2dedfce' of framework 
> ffb72d7c-dd63-4c30-abea-bb746ab2c326- at executor(1)@10.0.0.5:60134
> I0409 01:43:48.808364 3492 slave.cpp:4078] Cleaning up framework 
> ffb72d7c-dd63-4c30-abea-bb746ab2c326-
> I0409 01:43:48.811336 3493 gc.cpp:55] Scheduling 
> '/tmp/mesos/slave/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-/executors/ubuntumesos.a0b45838-fdf0-11e5-8b4b-0242e2dedfce/runs/24d0872d-1ba1-4384-be11-a20c82893ea4'
>  for gc 6.9070953778days in the future
> I0409 01:43:48.817401 3493 gc.cpp:55] Scheduling 
> '/tmp/mesos/slave/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-/executors/ubuntumesos.a0b45838-fdf0-11e5-8b4b-0242e2dedfce'
>  for gc 6.9065992889days in the future
> I0409 01:43:48.823158 3493 gc.cpp:55] Scheduling 
> '/tmp/mesos/slave/meta/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-/executors/ubuntumesos.a0b45838-fdf0-11e5-8b4b-0242e2dedfce/runs/24d0872d-1ba1-4384-be11-a20c82893ea4'
>  for gc 6.9065273185days in the future
> I0409 01:43:48.826216 3491 status_update_manager.cpp:282] Closing status 
> update streams for framework ffb72d7c-dd63-4c30-abea-bb746ab2c326-
> I0409 01:43:48.835602 3493 gc.cpp:55] Scheduling 
> '/tmp/mesos/slave/meta/slaves/da0e09ff-d5b2-4680-bd7e-b58a2a206497-S0/frameworks/ffb72d7c-dd63-4c30-abea-bb746ab2c326-/executors/ubuntumesos.a0b45838-fdf0-11e5-8b4b-0242e2dedfce'
>  for gc 6.9064716444days in the future
> I0409 01:43:48.838580 3493 gc.cpp:55] 

[jira] [Updated] (MESOS-2043) framework auth fail with timeout error and never get authenticated

2016-04-10 Thread Adam B (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-2043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam B updated MESOS-2043:
--
Sprint:   (was: Mesosphere Sprint 33)

> framework auth fail with timeout error and never get authenticated
> --
>
> Key: MESOS-2043
> URL: https://issues.apache.org/jira/browse/MESOS-2043
> Project: Mesos
>  Issue Type: Bug
>  Components: master, scheduler driver, security, slave
>Affects Versions: 0.21.0
>Reporter: Bhuvan Arumugam
>Assignee: Greg Mann
>Priority: Critical
>  Labels: mesosphere, security
> Fix For: 0.29.0
>
> Attachments: aurora-scheduler.20141104-1606-1706.log, master.log, 
> mesos-master.20141104-1606-1706.log, slave.log
>
>
> I'm facing this issue in master as of 
> https://github.com/apache/mesos/commit/74ea59e144d131814c66972fb0cc14784d3503d4
> As [~adam-mesos] mentioned in IRC, this sounds similar to MESOS-1866. I'm 
> running 1 master and 1 scheduler (aurora). The framework authentication fail 
> due to time out:
> error on mesos master:
> {code}
> I1104 19:37:17.741449  8329 master.cpp:3874] Authenticating 
> scheduler-d2d4437b-d375-4467-a583-362152fe065a@SCHEDULER_IP:8083
> I1104 19:37:17.741585  8329 master.cpp:3885] Using default CRAM-MD5 
> authenticator
> I1104 19:37:17.742106  8336 authenticator.hpp:169] Creating new server SASL 
> connection
> W1104 19:37:22.742959  8329 master.cpp:3953] Authentication timed out
> W1104 19:37:22.743548  8329 master.cpp:3930] Failed to authenticate 
> scheduler-d2d4437b-d375-4467-a583-362152fe065a@SCHEDULER_IP:8083: 
> Authentication discarded
> {code}
> scheduler error:
> {code}
> I1104 19:38:57.885486 49012 sched.cpp:283] Authenticating with master 
> master@MASTER_IP:PORT
> I1104 19:38:57.885928 49002 authenticatee.hpp:133] Creating new client SASL 
> connection
> I1104 19:38:57.890581 49007 authenticatee.hpp:224] Received SASL 
> authentication mechanisms: CRAM-MD5
> I1104 19:38:57.890656 49007 authenticatee.hpp:250] Attempting to authenticate 
> with mechanism 'CRAM-MD5'
> W1104 19:39:02.891196 49005 sched.cpp:378] Authentication timed out
> I1104 19:39:02.891850 49018 sched.cpp:338] Failed to authenticate with master 
> master@MASTER_IP:PORT: Authentication discarded
> {code}
> Looks like 2 instances {{scheduler-20f88a53-5945-4977-b5af-28f6c52d3c94}} & 
> {{scheduler-d2d4437b-d375-4467-a583-362152fe065a}} of same framework is 
> trying to authenticate and fail.
> {code}
> W1104 19:36:30.769420  8319 master.cpp:3930] Failed to authenticate 
> scheduler-20f88a53-5945-4977-b5af-28f6c52d3c94@SCHEDULER_IP:8083: Failed to 
> communicate with authenticatee
> I1104 19:36:42.701441  8328 master.cpp:3860] Queuing up authentication 
> request from scheduler-d2d4437b-d375-4467-a583-362152fe065a@SCHEDULER_IP:8083 
> because authentication is still in progress
> {code}
> Restarting master and scheduler didn't fix it. 
> This particular issue happen with 1 master and 1 scheduler after MESOS-1866 
> is fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MESOS-2043) framework auth fail with timeout error and never get authenticated

2016-04-10 Thread Adam B (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-2043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam B updated MESOS-2043:
--
Sprint: Mesosphere Sprint 33

> framework auth fail with timeout error and never get authenticated
> --
>
> Key: MESOS-2043
> URL: https://issues.apache.org/jira/browse/MESOS-2043
> Project: Mesos
>  Issue Type: Bug
>  Components: master, scheduler driver, security, slave
>Affects Versions: 0.21.0
>Reporter: Bhuvan Arumugam
>Assignee: Greg Mann
>Priority: Critical
>  Labels: mesosphere, security
> Fix For: 0.29.0
>
> Attachments: aurora-scheduler.20141104-1606-1706.log, master.log, 
> mesos-master.20141104-1606-1706.log, slave.log
>
>
> I'm facing this issue in master as of 
> https://github.com/apache/mesos/commit/74ea59e144d131814c66972fb0cc14784d3503d4
> As [~adam-mesos] mentioned in IRC, this sounds similar to MESOS-1866. I'm 
> running 1 master and 1 scheduler (aurora). The framework authentication fail 
> due to time out:
> error on mesos master:
> {code}
> I1104 19:37:17.741449  8329 master.cpp:3874] Authenticating 
> scheduler-d2d4437b-d375-4467-a583-362152fe065a@SCHEDULER_IP:8083
> I1104 19:37:17.741585  8329 master.cpp:3885] Using default CRAM-MD5 
> authenticator
> I1104 19:37:17.742106  8336 authenticator.hpp:169] Creating new server SASL 
> connection
> W1104 19:37:22.742959  8329 master.cpp:3953] Authentication timed out
> W1104 19:37:22.743548  8329 master.cpp:3930] Failed to authenticate 
> scheduler-d2d4437b-d375-4467-a583-362152fe065a@SCHEDULER_IP:8083: 
> Authentication discarded
> {code}
> scheduler error:
> {code}
> I1104 19:38:57.885486 49012 sched.cpp:283] Authenticating with master 
> master@MASTER_IP:PORT
> I1104 19:38:57.885928 49002 authenticatee.hpp:133] Creating new client SASL 
> connection
> I1104 19:38:57.890581 49007 authenticatee.hpp:224] Received SASL 
> authentication mechanisms: CRAM-MD5
> I1104 19:38:57.890656 49007 authenticatee.hpp:250] Attempting to authenticate 
> with mechanism 'CRAM-MD5'
> W1104 19:39:02.891196 49005 sched.cpp:378] Authentication timed out
> I1104 19:39:02.891850 49018 sched.cpp:338] Failed to authenticate with master 
> master@MASTER_IP:PORT: Authentication discarded
> {code}
> Looks like 2 instances {{scheduler-20f88a53-5945-4977-b5af-28f6c52d3c94}} & 
> {{scheduler-d2d4437b-d375-4467-a583-362152fe065a}} of same framework is 
> trying to authenticate and fail.
> {code}
> W1104 19:36:30.769420  8319 master.cpp:3930] Failed to authenticate 
> scheduler-20f88a53-5945-4977-b5af-28f6c52d3c94@SCHEDULER_IP:8083: Failed to 
> communicate with authenticatee
> I1104 19:36:42.701441  8328 master.cpp:3860] Queuing up authentication 
> request from scheduler-d2d4437b-d375-4467-a583-362152fe065a@SCHEDULER_IP:8083 
> because authentication is still in progress
> {code}
> Restarting master and scheduler didn't fix it. 
> This particular issue happen with 1 master and 1 scheduler after MESOS-1866 
> is fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MESOS-5152) Add authentication to agent's /monitor/statistics endpoint

2016-04-10 Thread Adam B (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-5152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam B updated MESOS-5152:
--
Assignee: Benjamin Bannier

> Add authentication to agent's /monitor/statistics endpoint
> --
>
> Key: MESOS-5152
> URL: https://issues.apache.org/jira/browse/MESOS-5152
> Project: Mesos
>  Issue Type: Task
>  Components: security, slave
>Reporter: Adam B
>Assignee: Benjamin Bannier
>  Labels: authentication, mesosphere, security
> Fix For: 0.29.0
>
>
> Operators may want to enforce that only authenticated users (and subsequently 
> only specific authorized users) be able to view per-executor resource usage 
> statistics.
> Since this endpoint is handled by the ResourceMonitorProcess, I would expect 
> the work necessary to be similar to what was done for /files or /registry 
> endpoint authn.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MESOS-4316) Support get non-default weights by /weights

2016-04-10 Thread Adam B (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-4316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234055#comment-15234055
 ] 

Adam B commented on MESOS-4316:
---

I committed these first two patches, but we still need at least one test for 
the positive path, where somebody requests to GET /weights, and is returned 
some basic values. This issue cannot be closed until such a test has been 
created.

commit 0e5680c51ce7479f294cc654007a007f2eaeb05d
Author: Yongqiao Wang 
Date:   Sun Apr 10 00:40:50 2016 -0700

Added authentication test for /weights GET request.

Review: https://reviews.apache.org/r/45203/

commit a28f9e5e4180c26cd708cdf6dada3ba24813926d
Author: Yongqiao Wang 
Date:   Sun Apr 10 00:19:43 2016 -0700

Supported querying weight infos via /weights.

Review: https://reviews.apache.org/r/44512/
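For the positive path, the behavior being exercised looks roughly like this (a 
hand-written sketch, assuming a local master with basic-auth credentials; the 
payload and response are illustrative, not output from a real run):
{code}
# Set a non-default weight for a role, then read it back via GET /weights.
curl -s -u principal:secret -X PUT \
  -d '[{"role":"roleA","weight":2.5}]' \
  http://127.0.0.1:5050/weights

curl -s -u principal:secret http://127.0.0.1:5050/weights
# Expected: a JSON array containing the non-default weights, e.g.
# [{"role":"roleA","weight":2.5}]
{code}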


> Support get non-default weights by /weights
> ---
>
> Key: MESOS-4316
> URL: https://issues.apache.org/jira/browse/MESOS-4316
> Project: Mesos
>  Issue Type: Task
>Reporter: Yongqiao Wang
>Assignee: Yongqiao Wang
>Priority: Minor
>  Labels: mesosphere
> Fix For: 0.29.0
>
>
> Like /quota, we should also add query logic for /weights to keep consistent. 
> Then /roles no longer needs to show weight information.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MESOS-5152) Add authentication to agent's /monitor/statistics endpoint

2016-04-10 Thread Adam B (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-5152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam B updated MESOS-5152:
--
Fix Version/s: 0.29.0

> Add authentication to agent's /monitor/statistics endpoint
> --
>
> Key: MESOS-5152
> URL: https://issues.apache.org/jira/browse/MESOS-5152
> Project: Mesos
>  Issue Type: Task
>  Components: security, slave
>Reporter: Adam B
>  Labels: authentication, mesosphere, security
> Fix For: 0.29.0
>
>
> Operators may want to enforce that only authenticated users (and subsequently 
> only specific authorized users) be able to view per-executor resource usage 
> statistics.
> Since this endpoint is handled by the ResourceMonitorProcess, I would expect 
> the work necessary to be similar to what was done for /files or /registry 
> endpoint authn.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MESOS-4951) Enable actors to pass an authentication realm to libprocess

2016-04-10 Thread Adam B (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-4951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam B updated MESOS-4951:
--
Fix Version/s: 0.29.0

> Enable actors to pass an authentication realm to libprocess
> ---
>
> Key: MESOS-4951
> URL: https://issues.apache.org/jira/browse/MESOS-4951
> Project: Mesos
>  Issue Type: Improvement
>  Components: libprocess, slave
>Reporter: Greg Mann
>Assignee: Greg Mann
>  Labels: authentication, http, mesosphere, security
> Fix For: 0.29.0
>
>
> To prepare for MESOS-4902, the Mesos master and agent need a way to pass the 
> desired authentication realm to libprocess. Since some endpoints (like 
> {{/profiler/*}}) get installed in libprocess, the master/agent should be able 
> to specify during initialization what authentication realm the 
> libprocess-level endpoints will be authenticated under.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MESOS-5153) Sandboxes contents should be protected from unauthorized users

2016-04-10 Thread Adam B (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam B updated MESOS-5153:
--
Fix Version/s: 0.29.0

> Sandboxes contents should be protected from unauthorized users
> --
>
> Key: MESOS-5153
> URL: https://issues.apache.org/jira/browse/MESOS-5153
> Project: Mesos
>  Issue Type: Bug
>  Components: security, slave
>Reporter: Alexander Rojas
>Assignee: Alexander Rojas
>  Labels: mesosphere, security
> Fix For: 0.29.0
>
>
> MESOS-4956 introduced authentication support for the sandboxes. However, 
> authentication can only go as far as telling whether a user is known to 
> Mesos or not. An additional step is necessary to verify whether the known 
> user is allowed to execute the requested operation on the sandbox 
> (browse, read, download, debug).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MESOS-4902) Add authentication to libprocess endpoints

2016-04-10 Thread Adam B (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-4902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam B updated MESOS-4902:
--
Fix Version/s: 0.29.0

> Add authentication to libprocess endpoints
> --
>
> Key: MESOS-4902
> URL: https://issues.apache.org/jira/browse/MESOS-4902
> Project: Mesos
>  Issue Type: Improvement
>  Components: HTTP API
>Reporter: Greg Mann
>Assignee: Greg Mann
>  Labels: authentication, http, mesosphere, security
> Fix For: 0.29.0
>
>
> In addition to the endpoints addressed by MESOS-4850 and MESOS-5152, the 
> following endpoints would also benefit from HTTP authentication:
> * {{/profiler/*}}
> * {{/logging/toggle}}
> * {{/metrics/snapshot}}
> * {{/system/stats.json}}
> Adding HTTP authentication to these endpoints is a bit more complicated 
> because they are defined at the libprocess level.
> While working on MESOS-4850, it became apparent that since our tests use the 
> same instance of libprocess for both master and agent, different default 
> authentication realms must be used for master/agent so that HTTP 
> authentication can be independently enabled/disabled for each.
> We should establish a mechanism for making an endpoint authenticated that 
> allows us to:
> 1) Install an endpoint like {{/files}}, whose code is shared by the master 
> and agent, with different authentication realms for the master and agent
> 2) Avoid hard-coding a default authentication realm into libprocess, to 
> permit the use of different authentication realms for the master and agent 
> and to keep application-level concerns from leaking into libprocess
> Another option would be to use a single default authentication realm and 
> always enable or disable HTTP authentication for *both* the master and agent 
> in tests. However, this wouldn't allow us to test scenarios where HTTP 
> authentication is enabled on one but disabled on the other.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MESOS-5142) Add agent flags for HTTP authorization

2016-04-10 Thread Adam B (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-5142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam B updated MESOS-5142:
--
Fix Version/s: 0.29.0

> Add agent flags for HTTP authorization
> --
>
> Key: MESOS-5142
> URL: https://issues.apache.org/jira/browse/MESOS-5142
> Project: Mesos
>  Issue Type: Bug
>  Components: security, slave
>Reporter: Jan Schlicht
>Assignee: Jan Schlicht
>  Labels: mesosphere, security
> Fix For: 0.29.0
>
>
> Flags should be added to the agent to:
> 1. Enable authorization ({{--authorizers}})
> 2. Provide ACLs ({{--acls}})



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MESOS-4932) Propose Design for Authorization based filtering for endpoints.

2016-04-10 Thread Adam B (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-4932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam B updated MESOS-4932:
--
Fix Version/s: 0.29.0

> Propose Design for Authorization based filtering for endpoints.
> ---
>
> Key: MESOS-4932
> URL: https://issues.apache.org/jira/browse/MESOS-4932
> Project: Mesos
>  Issue Type: Task
>  Components: security
>Reporter: Joerg Schad
>Assignee: Joerg Schad
>  Labels: authorization, mesosphere, security
> Fix For: 0.29.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MESOS-4931) Authorization based filtering for endpoints.

2016-04-10 Thread Adam B (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-4931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam B updated MESOS-4931:
--
Fix Version/s: 0.29.0

> Authorization based filtering for endpoints.
> 
>
> Key: MESOS-4931
> URL: https://issues.apache.org/jira/browse/MESOS-4931
> Project: Mesos
>  Issue Type: Epic
>  Components: security
>Reporter: Joerg Schad
>  Labels: authorization, mesosphere, security
> Fix For: 0.29.0
>
>
> Some endpoints, such as {{/state}}, should be filtered depending on which 
> information the user is authorized to see. For example, a user should only be 
> able to see the tasks they are authorized to see.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MESOS-5153) Sandboxes contents should be protected from unauthorized users

2016-04-10 Thread Adam B (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam B updated MESOS-5153:
--
Sprint: Mesosphere Sprint 33

> Sandboxes contents should be protected from unauthorized users
> --
>
> Key: MESOS-5153
> URL: https://issues.apache.org/jira/browse/MESOS-5153
> Project: Mesos
>  Issue Type: Bug
>  Components: security, slave
>Reporter: Alexander Rojas
>Assignee: Alexander Rojas
>  Labels: mesosphere, security
>
> MESOS-4956 introduced authentication support for the sandboxes. However, 
> authentication can only go as far as telling whether a user is known to 
> Mesos or not. An additional step is necessary to verify whether the 
> known user is allowed to execute the requested operation on the sandbox 
> (browse, read, download, debug).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MESOS-5152) Add authentication to agent's /monitor/statistics endpoint

2016-04-10 Thread Adam B (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-5152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam B updated MESOS-5152:
--
Sprint: Mesosphere Sprint 33

> Add authentication to agent's /monitor/statistics endpoint
> --
>
> Key: MESOS-5152
> URL: https://issues.apache.org/jira/browse/MESOS-5152
> Project: Mesos
>  Issue Type: Task
>  Components: security, slave
>Reporter: Adam B
>  Labels: authentication, mesosphere, security
>
> Operators may want to enforce that only authenticated users (and subsequently 
> only specific authorized users) be able to view per-executor resource usage 
> statistics.
> Since this endpoint is handled by the ResourceMonitorProcess, I would expect 
> the work necessary to be similar to what was done for /files or /registry 
> endpoint authn.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MESOS-4785) Reorganize ACL subject/object descriptions

2016-04-10 Thread Adam B (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam B updated MESOS-4785:
--
Sprint: Mesosphere Sprint 33

> Reorganize ACL subject/object descriptions
> --
>
> Key: MESOS-4785
> URL: https://issues.apache.org/jira/browse/MESOS-4785
> Project: Mesos
>  Issue Type: Documentation
>  Components: documentation
>Reporter: Greg Mann
>Assignee: Alexander Rojas
>  Labels: documentation, mesosphere, security
> Fix For: 0.29.0
>
>
> The authorization documentation would benefit from a reorganization of the 
> ACL subject/object descriptions. Instead of simple lists of the available 
> subjects and objects, it would be nice to see a table showing which subject 
> and object is used with each action.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MESOS-4902) Add authentication to libprocess endpoints

2016-04-10 Thread Adam B (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-4902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam B updated MESOS-4902:
--
Sprint: Mesosphere Sprint 33

> Add authentication to libprocess endpoints
> --
>
> Key: MESOS-4902
> URL: https://issues.apache.org/jira/browse/MESOS-4902
> Project: Mesos
>  Issue Type: Improvement
>  Components: HTTP API
>Reporter: Greg Mann
>Assignee: Greg Mann
>  Labels: authentication, http, mesosphere, security
>
> In addition to the endpoints addressed by MESOS-4850 and MESOS-5152, the 
> following endpoints would also benefit from HTTP authentication:
> * {{/profiler/*}}
> * {{/logging/toggle}}
> * {{/metrics/snapshot}}
> * {{/system/stats.json}}
> Adding HTTP authentication to these endpoints is a bit more complicated 
> because they are defined at the libprocess level.
> While working on MESOS-4850, it became apparent that since our tests use the 
> same instance of libprocess for both master and agent, different default 
> authentication realms must be used for master/agent so that HTTP 
> authentication can be independently enabled/disabled for each.
> We should establish a mechanism for making an endpoint authenticated that 
> allows us to:
> 1) Install an endpoint like {{/files}}, whose code is shared by the master 
> and agent, with different authentication realms for the master and agent
> 2) Avoid hard-coding a default authentication realm into libprocess, to 
> permit the use of different authentication realms for the master and agent 
> and to keep application-level concerns from leaking into libprocess
> Another option would be to use a single default authentication realm and 
> always enable or disable HTTP authentication for *both* the master and agent 
> in tests. However, this wouldn't allow us to test scenarios where HTTP 
> authentication is enabled on one but disabled on the other.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MESOS-4951) Enable actors to pass an authentication realm to libprocess

2016-04-10 Thread Adam B (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-4951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam B updated MESOS-4951:
--
Sprint: Mesosphere Sprint 33

> Enable actors to pass an authentication realm to libprocess
> ---
>
> Key: MESOS-4951
> URL: https://issues.apache.org/jira/browse/MESOS-4951
> Project: Mesos
>  Issue Type: Improvement
>  Components: libprocess, slave
>Reporter: Greg Mann
>Assignee: Greg Mann
>  Labels: authentication, http, mesosphere, security
>
> To prepare for MESOS-4902, the Mesos master and agent need a way to pass the 
> desired authentication realm to libprocess. Since some endpoints (like 
> {{/profiler/*}}) get installed in libprocess, the master/agent should be able 
> to specify during initialization what authentication realm the 
> libprocess-level endpoints will be authenticated under.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MESOS-3553) LIBPROCESS_IP not passed when executor's environment is specified

2016-04-10 Thread Adam B (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-3553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233997#comment-15233997
 ] 

Adam B commented on MESOS-3553:
---

[~jieyu], I believe this came up with a customer that does not have DNS in 
their cluster, so all agents are configured with LIBPROCESS_IP manually. Rather 
than passing the IP from the agent's environment into the executor's, would you 
suggest that the operator set LIBPROCESS_IP in the 
executor_environment_variables flag? Or do we just need to ensure that we 
prefer a LIBPROCESS_IP explicitly passed in CommandInfo.environment, as Niklas 
suggested in his post-commit comment?
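
For concreteness, a sketch of the two alternatives mentioned above, with placeholder values for a DNS-less cluster:

{code}
# Option A: the operator sets LIBPROCESS_IP explicitly in the
# executor_environment_variables flag (10.0.0.5 stands in for the agent's IP):
mesos-slave \
  --master=zk://zk.example.com:2181/mesos \
  --ip=10.0.0.5 \
  --executor_environment_variables='{"LIBPROCESS_IP": "10.0.0.5", "PATH": "/bin:/usr/bin"}'

# Option B: the framework sets LIBPROCESS_IP per task via
# CommandInfo.environment, in which case the agent would need to prefer the
# explicitly passed value over anything it injects itself.
{code}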

> LIBPROCESS_IP not passed when executor's environment is specified
> -
>
> Key: MESOS-3553
> URL: https://issues.apache.org/jira/browse/MESOS-3553
> Project: Mesos
>  Issue Type: Bug
>Affects Versions: 0.24.1
>Reporter: Greg Mann
>Assignee: Greg Mann
>  Labels: mesosphere
> Fix For: 0.26.0
>
>
> When the executor's environment is specified explicitly via 
> {{\-\-executor_environment_variables}}, {{LIBPROCESS_IP}} will not be passed, 
> leading to errors in some cases - for example, when no DNS is available.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (MESOS-2717) Qemu/KVM containerizer

2016-04-10 Thread Alex Glikson (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-2717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233927#comment-15233927
 ] 

Alex Glikson edited comment on MESOS-2717 at 4/10/16 6:45 AM:
--

Great initiative!
I have an idea which is likely to raise some eyebrows.
Given that lots of effort has already been invested in managing a cluster of 
KVM/Qemu hypervisors, with OpenStack being a very notable example, why don't we 
piggy-back on that, and have a 'framework' and a 'containerizer' that delegate 
to OpenStack APIs? We actually have a prototype of this working (with some 
limitations), and can share impressions/code if folks are interested.

Regards,
Alex



was (Author: glikson):
Great initiative!
I have an idea which is likely to raise some eyebrows.
Assuming that lots of effort has already been invested in managing a cluster of 
KVM/Qemu hypervisors, with OpenStack being a very notable example, why don't we 
piggy-back on that, and have a 'framework' and a 'containerizer' that delegate 
to OpenStack APIs? We actually have a prototype of this working (with some 
limitations), and can share impressions/code if folks are interested.

Regards,
Alex


> Qemu/KVM containerizer
> --
>
> Key: MESOS-2717
> URL: https://issues.apache.org/jira/browse/MESOS-2717
> Project: Mesos
>  Issue Type: Wish
>  Components: containerization
>Reporter: Pierre-Yves Ritschard
>Assignee: Abhishek Dasgupta
>
> I think it would make sense for Mesos to have the ability to treat 
> hypervisors as containerizers and the most sensible one to start with would 
> probably be Qemu/KVM.
> There are a few workloads that can require full-fledged VMs (the most obvious 
> one being Windows workloads).
> The containerization code is well decoupled and seems simple enough; I can 
> definitely take a shot at it. VMs do bring some questions with them; here is 
> my take on them:
> 1. Routing, network strategy
> ==
> The simplest approach here might very well be to go for bridged networks
> and leave the setup and inter-slave routing up to the administrator.
> 2. IP Address assignment
> 
> At first, it can be up to the frameworks to deal with IP assignment.
> The simplest way to address this could be to have an executor running
> on slaves that provide the qemu/kvm containerizer, which would run a DHCP 
> server and collect IP + MAC address resources from the slaves. While it may be 
> up to the frameworks to provide this, an example should most likely be provided.
> 3. VM Templates
> ==
> VM templates should probably leverage the fetcher and could thus be copied 
> locally or fetched from HTTP(S) / HDFS.
> 4. Resource limiting
> 
> Mapping resource constraints to the qemu command line is probably the easiest 
> part; additional command-line arguments should also be fetchable. For Unix VMs, 
> the sandbox could show the output of the serial console.
> 5. Libvirt / plain Qemu
> =
> I tend to favor limiting the number of hoops to jump through and would thus 
> investigate working directly with Qemu, maintaining an open connection to the 
> monitor to check status.
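
Purely as an illustration of point 4 above (not part of the original proposal), the resource-to-command-line mapping could be as direct as the following sketch, assuming an offer of cpus:2 and mem:2048 and with placeholder paths and bridge name:

{code}
# Hypothetical qemu invocation derived from the offered resources:
#   -smp comes from the cpus resource, -m from the mem resource (in MB).
qemu-system-x86_64 \
  -enable-kvm \
  -smp 2 \
  -m 2048 \
  -drive file=template.qcow2,format=qcow2 \
  -netdev bridge,id=net0,br=br0 \
  -device virtio-net-pci,netdev=net0 \
  -serial file:serial.log \
  -nographic
# serial.log would sit in the task sandbox, giving the serial-console output
# mentioned in point 4; the bridge br0 corresponds to point 1's bridged setup.
{code}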



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MESOS-2717) Qemu/KVM containerizer

2016-04-10 Thread Alex Glikson (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-2717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233927#comment-15233927
 ] 

Alex Glikson commented on MESOS-2717:
-

Great initiative!
I have an idea which is likely to raise some eyebrows.
Assuming that lots of effort has already been invested in managing a cluster of 
KVM/Qemu hypervisors, with OpenStack being a very notable example, why don't we 
piggy-back on that, and have a 'framework' and a 'containerizer' that delegate 
to OpenStack APIs? We actually have a prototype of this working (with some 
limitations), and can share impressions/code if folks are interested.

Regards,
Alex


> Qemu/KVM containerizer
> --
>
> Key: MESOS-2717
> URL: https://issues.apache.org/jira/browse/MESOS-2717
> Project: Mesos
>  Issue Type: Wish
>  Components: containerization
>Reporter: Pierre-Yves Ritschard
>Assignee: Abhishek Dasgupta
>
> I think it would make sense for Mesos to have the ability to treat 
> hypervisors as containerizers and the most sensible one to start with would 
> probably be Qemu/KVM.
> There are a few workloads that can require full-fledged VMs (the most obvious 
> one being Windows workloads).
> The containerization code is well decoupled and seems simple enough; I can 
> definitely take a shot at it. VMs do bring some questions with them; here is 
> my take on them:
> 1. Routing, network strategy
> ==
> The simplest approach here might very well be to go for bridged networks
> and leave the setup and inter-slave routing up to the administrator.
> 2. IP Address assignment
> 
> At first, it can be up to the frameworks to deal with IP assignment.
> The simplest way to address this could be to have an executor running
> on slaves that provide the qemu/kvm containerizer, which would run a DHCP 
> server and collect IP + MAC address resources from the slaves. While it may be 
> up to the frameworks to provide this, an example should most likely be provided.
> 3. VM Templates
> ==
> VM templates should probably leverage the fetcher and could thus be copied 
> locally or fetched from HTTP(S) / HDFS.
> 4. Resource limiting
> 
> Mapping resource constraints to the qemu command line is probably the easiest 
> part; additional command-line arguments should also be fetchable. For Unix VMs, 
> the sandbox could show the output of the serial console.
> 5. Libvirt / plain Qemu
> =
> I tend to favor limiting the number of hoops to jump through and would thus 
> investigate working directly with Qemu, maintaining an open connection to the 
> monitor to check status.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MESOS-5162) Commit message hook behaves incorrectly when a message includes a "*".

2016-04-10 Thread Michael Park (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-5162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Park updated MESOS-5162:

Description: 
If there is a "\*" in a commit message (there often is when we have bulleted 
lists), then due to the current use of {{echo $LINE}}, the unquoted {{$LINE}} is 
subject to glob expansion: bash treats the "*" as a pattern and expands it into 
the list of files/directories in the current directory.

In order to avoid this mess, we need to wrap such variables in quotes, like so: 
{{echo "$LINE"}}.

  was:
If there is a "*" in a commit message (there often is when we have bulleted 
lists), due to the current use of {{echo $LINE}}, the {{$LINE}} gets expanded 
with a "*" in it, which becomes a matcher in bash and therefore subsequently 
gets expanded into the list of files/directories in the current directory.

In order to avoid this mess, we need to wrap such variables in quotes, like so: 
{{echo "$LINE"}}.


> Commit message hook behaves incorrectly when a message includes a "*".
> --
>
> Key: MESOS-5162
> URL: https://issues.apache.org/jira/browse/MESOS-5162
> Project: Mesos
>  Issue Type: Bug
>Reporter: Michael Park
>Assignee: Michael Park
>  Labels: mesosphere
>
> If there is a "\*" in a commit message (there often is when we have bulleted 
> lists), then due to the current use of {{echo $LINE}}, the unquoted {{$LINE}} is 
> subject to glob expansion: bash treats the "*" as a pattern and expands it into 
> the list of files/directories in the current directory.
> In order to avoid this mess, we need to wrap such variables in quotes, like 
> so: {{echo "$LINE"}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (MESOS-5162) Commit message hook behaves incorrectly when a message includes a "*".

2016-04-10 Thread Michael Park (JIRA)
Michael Park created MESOS-5162:
---

 Summary: Commit message hook behaves incorrectly when a message 
includes a "*".
 Key: MESOS-5162
 URL: https://issues.apache.org/jira/browse/MESOS-5162
 Project: Mesos
  Issue Type: Bug
Reporter: Michael Park
Assignee: Michael Park


If there is a "*" in a commit message (there often is when we have bulleted 
lists), then due to the current use of {{echo $LINE}}, the unquoted {{$LINE}} is 
subject to glob expansion: bash treats the "*" as a pattern and expands it into 
the list of files/directories in the current directory.

In order to avoid this mess, we need to wrap such variables in quotes, like so: 
{{echo "$LINE"}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)