Re: [openstack-dev] [magnum] Temporarily remove swarm func test from gate

2016-01-19 Thread Egor Guz
Ton,

I believe so; I will create a separate patch from 
https://review.openstack.org/#/c/251158/
Also, we need to explore the possibility of creating a volume on the /dev/vda2 
device (it has about 5G of free space).
Unfortunately, Atomic has very little documentation, so the plan is to use a 
Cinder volume until we can figure out a better way.

—
Egor

On Jan 18, 2016, at 22:27, Ton Ngo <t...@us.ibm.com> wrote:


Hi Egor,
Do we need to add a Cinder volume to the master nodes for Kubernetes as well? 
We did not run Docker on the master node before, so the volume was not needed.
Ton Ngo,


Hongbin Lu ---01/18/2016 12:29:09 PM---Hi Egor, Thanks for 
investigating on the issue. I will review the patch. Agreed. We can definitely e

From: Hongbin Lu <hongbin...@huawei.com>
To: Egor Guz <e...@walmartlabs.com>, OpenStack 
Development Mailing List <openstack-dev@lists.openstack.org>
Date: 01/18/2016 12:29 PM
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test from 
gate





Hi Egor,

Thanks for investigating the issue. I will review the patch. Agreed. We can 
definitely enable the swarm tests if everything works fine.

Best regards,
Hongbin

-Original Message-
From: Egor Guz [mailto:e...@walmartlabs.com]
Sent: January-18-16 2:42 PM
To: OpenStack Development Mailing List
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test from 
gate

Hongbin,

I did some digging and found that the Docker storage driver wasn't configured 
correctly on the agent nodes.
Also, it looks like the Atomic folks recommend using dedicated volumes for DeviceMapper 
(http://www.projectatomic.io/blog/2015/06/notes-on-fedora-centos-and-docker-storage-drivers/).
So I added a Cinder volume for the master as well (I tried creating volumes on local 
storage, but there isn't even enough space for a 1G volume).
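
For illustration, a minimal docker-py sketch (the tcp endpoint, port and node
address are assumptions, not part of the gate setup) showing how one might
verify which storage driver a node's Docker daemon actually ended up with:

import docker

# hypothetical agent-node address; the daemon must be reachable over tcp
client = docker.Client(base_url='tcp://10.0.0.4:2375')
info = client.info()
# 'Driver' should report 'devicemapper'; 'DriverStatus' lists the backing
# pool/data device, which shows whether a dedicated volume is really in use
print(info['Driver'])
print(info['DriverStatus'])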

Please take a look at https://review.openstack.org/#/c/267996. I did around 12 
gate runs and got only 2 failures (the tests cannot connect to the master, but all 
container logs look alright, e.g. 
http://logs.openstack.org/96/267996/3/check/gate-functional-dsvm-magnum-swarm/d8d855b/console.html#_2016-01-18_04_31_17_312);
we have similar error rates with Kub. So after merging this code we can try to 
enable voting for the Swarm tests. Thoughts?

—
Egor

On Jan 8, 2016, at 12:01, Hongbin Lu 
<hongbin...@huawei.com> wrote:

There are other symptoms as well, about which I have no idea without a deep dive.

-Original Message-
From: Egor Guz [mailto:e...@walmartlabs.com]
Sent: January-08-16 2:14 PM
To: 
openstack-dev@lists.openstack.org
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test from 
gate

Hongbin,

I believe most failures are related to the container tests. Maybe we should comment 
only those out and keep the Swarm cluster provisioning.
Thoughts?

—
Egor

On Jan 8, 2016, at 06:37, Hongbin Lu 
<hongbin...@huawei.com> wrote:

Done: https://review.openstack.org/#/c/264998/

Best regards,
Hongbin

-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: January-07-16 10:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test from 
gate

Hongbin,

I’m not aware of any viable options besides using a nonvoting gate job. Are 
there other alternatives to consider? If not, let’s proceed with that approach.

Adrian

On Jan 7, 2016, at 3:34 PM, Hongbin Lu 
<hongbin...@huawei.com> wrote:

Clark,

That is true. The check pipeline must pass in order to enter the gate pipeline. 
Here is the problem we are facing: a patch that was able to pass the check 
pipeline is blocked in the gate pipeline due to the instability of the test. The 
removal of the unstable test from the gate pipeline aims to unblock the patches that 
have already passed the check.

An alternative is to remove the unstable test from the check pipeline as well, or 
mark it as a non-voting test. If that is what the team prefers, I will adjust the 
review accordingly.

Best regards,
Hongbin

-Original Message-
From: Clark Boylan [mailto:cboy...@sapwetik.org]
Sent: January-07-16 6:04 PM
To: 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test from 
gate

On Thu

Re: [openstack-dev] [magnum] Temporarily remove swarm func test from gate

2016-01-18 Thread Egor Guz
Hongbin,

I did some digging and found that the Docker storage driver wasn't configured 
correctly on the agent nodes.
Also, it looks like the Atomic folks recommend using dedicated volumes for DeviceMapper 
(http://www.projectatomic.io/blog/2015/06/notes-on-fedora-centos-and-docker-storage-drivers/).
So I added a Cinder volume for the master as well (I tried creating volumes on local 
storage, but there isn't even enough space for a 1G volume).

Please take a look at https://review.openstack.org/#/c/267996. I did around 12 
gate runs and got only 2 failures (the tests cannot connect to the master, but all 
container logs look alright, e.g. 
http://logs.openstack.org/96/267996/3/check/gate-functional-dsvm-magnum-swarm/d8d855b/console.html#_2016-01-18_04_31_17_312);
we have similar error rates with Kub. So after merging this code we can try to 
enable voting for the Swarm tests. Thoughts?

—
Egor

On Jan 8, 2016, at 12:01, Hongbin Lu 
<hongbin...@huawei.com> wrote:

There are other symptoms as well, about which I have no idea without a deep dive.

-Original Message-
From: Egor Guz [mailto:e...@walmartlabs.com]
Sent: January-08-16 2:14 PM
To: openstack-dev@lists.openstack.org
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test from 
gate

Hongbin,

I believe most failures are related to the container tests. Maybe we should comment 
only those out and keep the Swarm cluster provisioning.
Thoughts?

—
Egor

On Jan 8, 2016, at 06:37, Hongbin Lu 
<hongbin...@huawei.com> wrote:

Done: https://review.openstack.org/#/c/264998/

Best regards,
Hongbin

-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: January-07-16 10:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test from 
gate

Hongbin,

I’m not aware of any viable options besides using a nonvoting gate job. Are 
there other alternatives to consider? If not, let’s proceed with that approach.

Adrian

On Jan 7, 2016, at 3:34 PM, Hongbin Lu 
<hongbin...@huawei.com> wrote:

Clark,

That is true. The check pipeline must pass in order to enter the gate pipeline. 
Here is the problem we are facing: a patch that was able to pass the check 
pipeline is blocked in the gate pipeline due to the instability of the test. The 
removal of the unstable test from the gate pipeline aims to unblock the patches that 
have already passed the check.

An alternative is to remove the unstable test from the check pipeline as well, or 
mark it as a non-voting test. If that is what the team prefers, I will adjust the 
review accordingly.

Best regards,
Hongbin

-Original Message-
From: Clark Boylan [mailto:cboy...@sapwetik.org]
Sent: January-07-16 6:04 PM
To: 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test from 
gate

On Thu, Jan 7, 2016, at 02:59 PM, Hongbin Lu wrote:
Hi folks,

It looks like the swarm func test is currently unstable, which negatively impacts 
the patch submission workflow. I propose to remove it from the Jenkins gate (but 
keep it in the Jenkins check) until it becomes stable.
Please find the details in the review
(https://review.openstack.org/#/c/264998/) and let me know if you have any 
concern.

Removing it from gate but not from check doesn't necessarily help much because 
you can only enter the gate pipeline once the change has a +1 from Jenkins. 
Jenkins applies the +1 after check tests pass.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [magnum] Temporarily remove swarm func test from gate

2016-01-08 Thread Egor Guz
Hongbin,

I believe most failures are related to the container tests. Maybe we should comment 
only those out and keep the Swarm cluster provisioning.
Thoughts?

—
Egor

On Jan 8, 2016, at 06:37, Hongbin Lu wrote:

Done: https://review.openstack.org/#/c/264998/

Best regards,
Hongbin

-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: January-07-16 10:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test from 
gate

Hongbin,

I’m not aware of any viable options besides using a nonvoting gate job. Are 
there other alternatives to consider? If not, let’s proceed with that approach.

Adrian

On Jan 7, 2016, at 3:34 PM, Hongbin Lu wrote:

Clark,

That is true. The check pipeline must pass in order to enter the gate pipeline. 
Here is the problem we are facing: a patch that was able to pass the check 
pipeline is blocked in the gate pipeline due to the instability of the test. The 
removal of the unstable test from the gate pipeline aims to unblock the patches that 
have already passed the check.

An alternative is to remove the unstable test from the check pipeline as well, or 
mark it as a non-voting test. If that is what the team prefers, I will adjust the 
review accordingly.

Best regards,
Hongbin

-Original Message-
From: Clark Boylan [mailto:cboy...@sapwetik.org]
Sent: January-07-16 6:04 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func
test from gate

On Thu, Jan 7, 2016, at 02:59 PM, Hongbin Lu wrote:
Hi folks,

It looks like the swarm func test is currently unstable, which negatively
impacts the patch submission workflow. I propose to remove it from the
Jenkins gate (but keep it in the Jenkins check) until it becomes stable.
Please find the details in the review
(https://review.openstack.org/#/c/264998/) and let me know if you
have any concerns.

Removing it from gate but not from check doesn't necessarily help much because 
you can only enter the gate pipeline once the change has a +1 from Jenkins. 
Jenkins applies the +1 after check tests pass.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum][Kuryr] Help with using docker-python client in gate

2015-12-17 Thread Egor Guz
Gal,

I think you need to set up your Docker environment to allow running the CLI without sudo 
permissions (https://docs.docker.com/engine/installation/ubuntulinux/).
Or use a tcp socket instead (https://docs.docker.com/v1.8/articles/basics/); 
Magnum/Swarm/docker-machine use this approach all the time.
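
A minimal sketch of the tcp-socket variant, assuming the daemon has been started
with a tcp listener on port 2375 (the host and port here are illustrative, not
part of the gate setup):

import docker

docker_client = docker.Client(base_url='tcp://127.0.0.1:2375')
# same call as in the failing test, just over tcp instead of the unix socket
docker_client.create_network(name='fakenet', driver='kuryr')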

—
Egor

On Dec 17, 2015, at 07:53, Gal Sagie wrote:

Hello Everyone,

We are trying to add some gate testing for Kuryr and hopefully convert these 
also to Rally
plugins.

What I am facing in the gate right now is this:
I configure the docker client:


self.docker_client = docker.Client(
    base_url='unix://var/run/docker.sock')


And call this:
self.docker_client.create_network(name='fakenet', driver='kuryr')


This works locally, and I also tried to run this code with a different user.

But on the gate this fails:
http://logs.openstack.org/79/258379/5/check/gate-kuryr-dsvm-fullstack-nv/f46ebdb/

2015-12-17 05:22:16.851 |
2015-12-17 05:22:16.852 | {0} kuryr.tests.fullstack.test_network.NetworkTest.test_create_delete_network [0.093287s] ... FAILED
2015-12-17 05:22:16.854 |
2015-12-17 05:22:16.855 | Captured traceback:
2015-12-17 05:22:16.856 | ~~~
2015-12-17 05:22:16.857 | Traceback (most recent call last):
2015-12-17 05:22:16.859 |   File "kuryr/tests/fullstack/test_network.py", line 27, in test_create_delete_network
2015-12-17 05:22:16.860 |     self.docker_client.create_network(name='fakenet', driver='kuryr')
2015-12-17 05:22:16.861 |   File "/opt/stack/new/kuryr/.tox/fullstack/local/lib/python2.7/site-packages/docker/utils/decorators.py", line 35, in wrapper
2015-12-17 05:22:16.862 |     return f(self, *args, **kwargs)
2015-12-17 05:22:16.864 |   File "/opt/stack/new/kuryr/.tox/fullstack/local/lib/python2.7/site-packages/docker/api/network.py", line 28, in create_network
2015-12-17 05:22:16.865 |     res = self._post_json(url, data=data)
2015-12-17 05:22:16.866 |   File "/opt/stack/new/kuryr/.tox/fullstack/local/lib/python2.7/site-packages/docker/client.py", line 166, in _post_json
2015-12-17 05:22:16.867 |     return self._post(url, data=json.dumps(data2), **kwargs)
2015-12-17 05:22:16.870 |   File "/opt/stack/new/kuryr/.tox/fullstack/local/lib/python2.7/site-packages/docker/client.py", line 107, in _post
2015-12-17 05:22:16.871 |     return self.post(url, **self._set_request_timeout(kwargs))
2015-12-17 05:22:16.873 |

Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-15 Thread Egor Guz
Vilobh/Tim, could you elaborate on your use cases around Magnum quota?

My concern is that users will easily get lost in quotas ;) e.g. we already have 
nova/cinder/neutron and Kub/Mesos (framework) quotas.

There are two use cases:
- a tenant has its own list of bays/clusters (nova/cinder/neutron quotas will 
apply)
- an operator provisions a shared cluster and relies on Kub/Mesos (framework) quota 
management

Also, users have full access to the native tools (Kub/Marathon/Swarm); how will quota 
be applied in this case?

—
Egor

From: Vilobh Meshram
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Tuesday, December 15, 2015 at 11:11
To: "OpenStack Development Mailing List (not for usage questions)"
Cc: "OpenStack Mailing List (not for usage questions)", Belmiro Moreira
Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

IMHO for Magnum and Nested Quota we need more discussion before proceeding 
ahead because :-

1. The main intent of hierarchical multi tenancy is creating a hierarchy of 
projects (so that it's easier for the cloud provider to manage different 
projects) and nested quota driver being able to validate and impose those 
restrictions.
2. The tenancy boundary in Magnum is the bay. Bays offer both a management and 
security isolation between multiple tenants.
3. In Magnum there is no intent to share a single bay between multiple tenants.

So I would like to have a discussion on whether Nested Quota approach fits in 
our/Magnum's design and how will the resources be distributed in the hierarchy. 
I will include it in our Magnum weekly meeting agenda.

I have in fact drafted a blueprint for it some time back [1].

I am a huge supporter of hierarchical projects and nested quota approaches (as, 
if done correctly, they IMHO minimize the admin pain of managing quotas); I just 
wanted to see a cleaner way we can get this done for Magnum.

JFYI, I am the primary author of Cinder Nested Quota [2]  and co-author of Nova 
Nested Quota[3] so I am familiar with the approach taken in both.

Thoughts ?

-Vilobh

[1]  Magnum Nested Quota : 
https://blueprints.launchpad.net/magnum/+spec/nested-quota-magnum
[2] Cinder Nested Quota Driver : https://review.openstack.org/#/c/205369/
[3] Nova Nested Quota Driver : https://review.openstack.org/#/c/242626/

On Tue, Dec 15, 2015 at 10:10 AM, Tim Bell wrote:
Thanks… it is really important from a user experience perspective that we keep the nested 
quota implementations in sync so we don't have different semantics.

Tim

From: Adrian Otto 
[mailto:adrian.o...@rackspace.com]
Sent: 15 December 2015 18:44
To: OpenStack Development Mailing List (not for usage questions)
Cc: OpenStack Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

Vilobh,

Thanks for advancing this important topic. I took a look at what Tim referenced 
about how Nova is implementing nested quotas, and it seems to me that's something we 
could fold into our design as well. Do you agree?

Adrian

On Dec 14, 2015, at 10:22 PM, Tim Bell wrote:

Can we have nested project quotas in from the beginning ? Nested projects are 
in Keystone V3 from Kilo onwards and retrofitting this is hard work.

For details, see the Nova functions at 
https://review.openstack.org/#/c/242626/. Cinder now also has similar functions.

Tim

From: Vilobh Meshram [mailto:vilobhmeshram.openst...@gmail.com]
Sent: 15 December 2015 01:59
To: OpenStack Development Mailing List (not for usage questions); 
OpenStack Mailing List (not for usage questions)
Subject: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

Hi All,

Currently, it is possible to create an unlimited number of resources like 
bay/pod/service/. In Magnum, there should be a limitation on a user or project 
creating Magnum resources,
and the limitation should be configurable [1].

I proposed following design :-

1. Introduce new table magnum.quotas
++--+--+-+-++
| Field  | Type | Null | Key | Default | Extra  |
++--+--+-+-++
| id | int(11)  | NO   | PRI | NULL| 

Re: [openstack-dev] [Infra][nova][magnum] Jenkins failed quite often for "Cannot set up guest memory 'pc.ram': Cannot allocate memory"

2015-12-15 Thread Egor Guz
Clark,


What about the ephemeral storage on the OVH VMs? I have seen many storage-related errors (see 
full output below) these days.
Basically, it means Docker cannot create a storage device on the local drive.

-- Logs begin at Mon 2015-12-14 06:40:09 UTC, end at Mon 2015-12-14 07:00:38 UTC. --
Dec 14 06:45:50 te-egw4i5xthw-0-nmaiwpjhkqg6-kube-minion-5emvszmbwpi2 systemd[1]: Stopped Docker Application Container Engine.
Dec 14 06:47:54 te-egw4i5xthw-0-nmaiwpjhkqg6-kube-minion-5emvszmbwpi2 systemd[1]: Starting Docker Application Container Engine...
Dec 14 06:48:00 te-egw4i5xthw-0-nmaiwpjhkqg6-kube-minion-5emvszmbwpi2 docker[1022]: Warning: '-d' is deprecated, it will be removed soon. See usage.
Dec 14 06:48:00 te-egw4i5xthw-0-nmaiwpjhkqg6-kube-minion-5emvszmbwpi2 docker[1022]: time="2015-12-14T06:48:00Z" level=warning msg="please use 'docker daemon' instead."
Dec 14 06:48:03 te-egw4i5xthw-0-nmaiwpjhkqg6-kube-minion-5emvszmbwpi2 docker[1022]: time="2015-12-14T06:48:03.447936206Z" level=info msg="Listening for HTTP on unix (/var/run/docker.sock)"
Dec 14 06:48:06 te-egw4i5xthw-0-nmaiwpjhkqg6-kube-minion-5emvszmbwpi2 docker[1022]: time="2015-12-14T06:48:06.280086735Z" level=fatal msg="Error starting daemon: error initializing graphdriver: Non existing device docker-docker--pool"
Dec 14 06:48:06 te-egw4i5xthw-0-nmaiwpjhkqg6-kube-minion-5emvszmbwpi2 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Dec 14 06:48:06 te-egw4i5xthw-0-nmaiwpjhkqg6-kube-minion-5emvszmbwpi2 systemd[1]: Failed to start Docker Application Container Engine.
Dec 14 06:48:06 te-egw4i5xthw-0-nmaiwpjhkqg6-kube-minion-5emvszmbwpi2 systemd[1]: Unit docker.service entered failed state.
Dec 14 06:48:06 te-egw4i5xthw-0-nmaiwpjhkqg6-kube-minion-5emvszmbwpi2 systemd[1]: docker.service failed.


http://logs.openstack.org/58/251158/3/check/gate-functional-dsvm-magnum-k8s/5ed0e01/logs/bay-nodes/worker-test_replication_controller_apis-172.24.5.11/docker.txt.gz


—
Egor




On 12/13/15, 10:51, "Clark Boylan" wrote:

>On Sat, Dec 12, 2015, at 02:16 PM, Hongbin Lu wrote:
>> Hi,
>> 
>> As Kai Qiang mentioned, magnum gate recently had a bunch of random
>> failures, which occurred on creating a nova instance with 2G of RAM.
>> According to the error message, it seems that the hypervisor tried to
>> allocate memory to the nova instance but couldn’t find enough free memory
>> in the host. However, by adding a few “nova hypervisor-show XX” before,
>> during, and right after the test, it showed that the host has 6G of free
>> RAM, which is far more than 2G. Here is a snapshot of the output [1]. You
>> can find the full log here [2].
>If you look at the dstat log
>http://logs.openstack.org/07/244907/5/check/gate-functional-dsvm-magnum-k8s/5305d7a/logs/screen-dstat.txt.gz
>the host has nowhere near 6GB free memory and less than 2GB. I think you
>actually are just running out of memory.
>> 
>> Another observation is that most of the failure happened on a node with
>> name “devstack-trusty-ovh-*” (You can verify it by entering a query [3]
>> at http://logstash.openstack.org/ ). It seems that the 

Re: [openstack-dev] [magnum] Using docker container to run COE daemons

2015-11-27 Thread Egor Guz
Jay,

"A/B testing" for PROD Infra sounds very cool ;) (we are doing it with business 
apps all the time, but stuck with canary, incremental rollout or blue-green (if 
we have enough capacity ;)) deployments for infra), do you mind share details 
how are you doing it? My concern is that you need at least to change container 
version and restart container/service, it sounds like typical configuration 
push.

I agree with Hongbin's concerns about blindly moving everything into containers. 
Actually, we are moving everything into containers for LAB/DEV environments 
because it allows us to test/play with different versions/configs, but that's not 
the case for PROD, because we try to avoid adding extra complexity (e.g. the need to 
monitor the Docker daemon itself). And building a new image (the current process) is 
pretty trivial these days.

Have you tested the slave/agent inside a container? I was under the impression that it 
didn't work until somebody from the Kolla team pointed me to 
https://hub.docker.com/u/mesoscloud/.
Also, I believe you can try your approach without any changes to the existing 
template, because it just starts services and adds configuration. So you can build an 
image which has the same services as Docker containers, with
volumes mapped to the config folders on the host.

―
Egor

From: Jay Lau
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, November 26, 2015 at 16:02
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum] Using docker container to run COE daemons

One of the benefits of running daemons in a docker container is that the cluster 
can be upgraded more easily. Take mesos as an example: if I can make mesos run 
in a container, then when updating the mesos slave with some hot fixes, I can upgrade 
the mesos slave to a new version in a gray upgrade, i.e. A/B test etc.

On Fri, Nov 27, 2015 at 12:01 AM, Hongbin Lu wrote:
Jay,

Agree and disagree. Containerizing some COE daemons will facilitate version 
upgrades and maintenance. However, I don't think it is correct to blindly 
containerize everything unless there is an investigation performed to 
understand the benefits and costs of doing that. Quoting Egor, the common 
practice in k8s is to containerize everything except the kubelet, because it seems 
it is just too hard to containerize everything. In the case of mesos, I am not 
sure if it is a good idea to move everything to containers, given the fact that 
it is relatively easy to manage and upgrade Debian packages on Ubuntu. However, 
in the new CoreOS mesos bay [1], mesos daemons will run in containers.

In summary, I think the correct strategy is to selectively containerize some 
COE daemons, but we don’t have to containerize *all* COE daemons.

[1] https://blueprints.launchpad.net/magnum/+spec/mesos-bay-with-coreos

Best regards,
Hongbin

From: Jay Lau [mailto:jay.lau@gmail.com]
Sent: November-26-15 2:06 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Using docker container to run COE daemons

Thanks Kai Qing, I filed a bp for mesos bay here 
https://blueprints.launchpad.net/magnum/+spec/mesos-in-container

On Thu, Nov 26, 2015 at 8:11 AM, Kai Qiang Wu wrote:

Hi Jay,

For the Kubernetes COE container work, I think @Hua Wang is doing that.

For the swarm COE, swarm already has the master and agent running in containers.

For mesos, there is no container work until now. Maybe someone has 
already drafted a BP on it? Not quite sure.



Thanks

Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!

Jay Lau ---26/11/2015 07:15:59 am---Hi, It is becoming more and more popular to use docker container 
run some

From: Jay Lau
To: OpenStack Development Mailing List
Date: 26/11/2015 07:15 am
Subject: [openstack-dev] [magnum] Using docker container to run COE daemons





Hi,

It is becoming more and more popular to use docker 

Re: [openstack-dev] [magnum]storage for docker-bootstrap

2015-11-27 Thread Egor Guz
Wanghua,

I don't think moving flannel into a container is a good idea. This setup is great 
for a dev environment, but it becomes too complex from an operator's point of view (you 
add an extra Docker daemon and need an extra Cinder volume for this daemon; also
keep in mind that it makes sense to keep the etcd data folder on Cinder storage as well, 
because etcd is a database). flannel is just three files without extra 
dependencies, and it's much easier to download it during cloud-init ;)

I agree that we have pain building Fedora Atomic images, but instead of 
simplifying this process we should switch to other, more "friendly" images (e.g. 
Fedora/CentOS/Ubuntu) which we can easily build with the disk image builder.
Also, we can fix the CoreOS template (I believe people ask about it more than 
Atomic), but we may face issues similar to Atomic's when we try to 
integrate non-CoreOS products (e.g. Calico or Weave).

―
Egor

From: 王华
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, November 26, 2015 at 00:15
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum]storage for docker-bootstrap

Hi Hongbin,

The docker daemon on the master node stores data in /dev/mapper/atomicos-docker--data and 
metadata in /dev/mapper/atomicos-docker--meta. 
/dev/mapper/atomicos-docker--data and /dev/mapper/atomicos-docker--meta are 
logical volumes. The docker daemon on the minion node stores data in the Cinder volume, but 
/dev/mapper/atomicos-docker--data and /dev/mapper/atomicos-docker--meta are not 
used. If we want to leverage a Cinder volume for docker on the master, should we drop 
/dev/mapper/atomicos-docker--data and /dev/mapper/atomicos-docker--meta? I 
think it is not necessary to allocate a Cinder volume. It is enough to allocate 
two logical volumes for docker, because only etcd, flannel and k8s run in that docker 
daemon, which does not need a large amount of storage.

Best regards,
Wanghua

On Thu, Nov 26, 2015 at 12:40 AM, Hongbin Lu wrote:
Here is a bit more context.

Currently, in the k8s and swarm bays, some required binaries (i.e. etcd and flannel) 
are built into the image and run on the host. We are exploring the possibility of 
containerizing some of these system components. The rationales are that (i) it is 
infeasible to build custom packages into an Atomic image and (ii) it is 
infeasible to upgrade an individual component. For example, if there is a bug in the 
current version of flannel and we know the bug was fixed in the next version, 
we need to upgrade flannel by building a new image, which is a tedious process.

To containerize flannel, we need a second docker daemon, called 
docker-bootstrap [1]. In this setup, pods are running on the main docker 
daemon, and flannel and etcd are running on the second docker daemon. The 
reason is that flannel needs to manage the network of the main docker daemon, 
so it needs to run on a separated daemon.

Daneyon, I think it requires separate storage because it needs to run a 
separate docker daemon (unless there is a way to make two docker daemons share 
the same storage).

Wanghua, is it possible to leverage a Cinder volume for that? Leveraging external 
storage is always preferred [2].

[1] 
http://kubernetes.io/v1.1/docs/getting-started-guides/docker-multinode.html#bootstrap-docker
[2] http://www.projectatomic.io/docs/docker-storage-recommendation/
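
For illustration, a minimal docker-py sketch of the two-daemon setup described 
above; the bootstrap socket path follows the docker-multinode guide in [1] and 
is an assumption, not something Magnum configures today:

import docker

# main daemon: runs the pods/containers users care about
main = docker.Client(base_url='unix://var/run/docker.sock')
# bootstrap daemon: runs flannel and etcd, so it needs its own storage
bootstrap = docker.Client(base_url='unix://var/run/docker-bootstrap.sock')

# each daemon reports its own (separate) storage backend
print(main.info()['Driver'])
print(bootstrap.info()['Driver'])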

Best regards,
Hongbin

From: Daneyon Hansen (danehans) 
[mailto:daneh...@cisco.com]
Sent: November-25-15 11:10 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]storage for docker-bootstrap



From: 王华
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Wednesday, November 25, 2015 at 5:00 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [magnum]storage for docker-bootstrap

Hi all,

I am working on containerizing etcd and flannel, but I met a problem. As 
described in [1], we need a docker-bootstrap. Docker and docker-bootstrap cannot 
use the same storage, so we need some disk space for it.

I reviewed [1] and I do not see where the bootstrap docker instance requires 
separate storage.

The docker daemon on the master node stores data in /dev/mapper/atomicos-docker--data and 
metadata in /dev/mapper/atomicos-docker--meta. The disk space left is too small 
for docker-bootstrap. Even if the root_gb of the instance flavor is 20G, only 
8G can be used in our image. I want to make it bigger. One way is 

Re: [openstack-dev] [magnum] Mesos Conductor

2015-11-19 Thread Egor Guz
+1, I found that 'kubectl create -f FILENAME' 
(https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/user-guide/kubectl/kubectl_create.md)
 works very well for different types of objects, and I think we should try to use 
it.

But I think we should support two use cases:
 - 'magnum container-create', with a simple list of options which works for 
Swarm/Mesos/Kub. It will be a good option for users who just want to try 
containers.
 - 'magnum create ', with a file which has a Swarm/Mesos/Kub-specific payload.

―
Egor

From: Adrian Otto
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, November 19, 2015 at 10:36
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum] Mesos Conductor

I’m open to allowing magnum to pass a blob of data (such as a lump of JSON or 
YAML) to the Bay's native API. That approach strikes a balance that’s 
appropriate.

Adrian

On Nov 19, 2015, at 10:01 AM, bharath thiruveedula wrote:

Hi,

In the present scenario, we can have a mesos conductor with the existing 
attributes [1]. Or we can add extra options like 'portMappings', 'instances', 
'uris' [2]. The other option is to take a JSON file as input to 'magnum 
container-create' and dispatch it to the corresponding conductor, which 
will handle the JSON input. Let me know your opinions.


Regards
Bharath T




[1]https://goo.gl/f46b4H
[2]https://mesosphere.github.io/marathon/docs/application-basics.html

To: openstack-dev@lists.openstack.org
From: wk...@cn.ibm.com
Date: Thu, 19 Nov 2015 10:47:33 +0800
Subject: Re: [openstack-dev] [magnum] Mesos Conductor

@bharath,

1) Actually, if you mean using container-create(delete) on a mesos bay for 
apps: I am not sure how different the docker interface and the 
mesos interface are. One point: when you introduce that feature, please do not 
make the docker container interface more complicated than it is now. I worry 
that it would confuse end-users more than the unified benefits are worth (maybe add an 
optional parameter to pass a JSON file to create containers in mesos).

2) For the unified interface, I think it needs more thought. We should not bring 
more trouble to end-users by making them learn new concepts or interfaces, unless we could 
have a clearer interface; but different COEs vary a lot. It is very challenging.



Thanks

Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!

bharath thiruveedula ---19/11/2015 10:31:58 am---@hongbin, @adrian I agree with you. So can we go 
ahead with magnum container-create(delete) ... for

From:  bharath thiruveedula
To:  OpenStack Development Mailing List (not for usage questions)
Date:  19/11/2015 10:31 am
Subject:  Re: [openstack-dev] [magnum] Mesos Conductor





@hongbin, @adrian I agree with you. So can we go ahead with magnum 
container-create(delete) ... for the mesos bay (which actually creates a 
mesos (marathon) app internally)?

@jay, yes, we have multiple frameworks which are using the mesos lib. But the mesos bay 
we are creating uses marathon. And we had a discussion in IRC on this topic, and 
I was asked to implement the initial version for marathon. And I agree with you to 
have a unified client interface for creating pods/apps.

Regards
Bharath T


Date: Thu, 19 Nov 2015 10:01:35 +0800
From: jay.lau@gmail.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] Mesos Conductor

+1.

One problem I want to mention is that for mesos integration, we cannot be limited 
to Marathon + Mesos, as there are many frameworks that can run on top of Mesos, such 
as Chronos, Kubernetes, etc. We may need to consider more for Mesos integration, 
as there is a huge ecosystem built on top of Mesos.

On Thu, Nov 19, 2015 at 8:26 AM, Adrian Otto 

Re: [openstack-dev] [magnum] Autoscaling both clusters and containers

2015-11-17 Thread Egor Guz
Ryan

I haven't seen any proposals/implementations from Mesos/Swarm (but I am not 
following the Mesos and Swarm communities very closely these days).
But Kubernetes 1.1 has pod autoscaling 
(https://github.com/kubernetes/kubernetes/blob/master/docs/design/horizontal-pod-autoscaler.md),
which should cover container auto-scaling. Also there is a PR for cluster 
auto-scaling (https://github.com/kubernetes/kubernetes/pull/15304), which
has an implementation for GCE, but OpenStack support can be added as well.

—
Egor

From: Ton Ngo
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Tuesday, November 17, 2015 at 16:58
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum] Autoscaling both clusters and containers


Hi Ryan,
There was a talk at the last Summit on this topic, exploring the options with 
Magnum, Senlin, Heat, and Kubernetes:
https://www.openstack.org/summit/tokyo-2015/videos/presentation/exploring-magnum-and-senlin-integration-for-autoscaling-containers
A demo was shown with Senlin interfacing to Magnum to autoscale.
There was also a Magnum design session to discuss this same topic. The use 
cases are similar to what you describe. Because the subject is complex, there 
are many moving parts, and multiple teams/projects are involved, one outcome of 
the design session is that we will write a spec on autoscaling containers and 
clusters. A patch should be coming soon, so it would be great to have your input 
on the spec.
Ton,

Ryan Rossiter ---11/17/2015 02:05:48 PM---Hi all, I was having a discussion with a teammate with respect to 
container

From: Ryan Rossiter
To: openstack-dev@lists.openstack.org
Date: 11/17/2015 02:05 PM
Subject: [openstack-dev] [magnum] Autoscaling both clusters and containers





Hi all,

I was having a discussion with a teammate with respect to container
scaling. He likes the aspect of nova-docker that allows you to scale
(essentially) infinitely almost instantly, assuming you are using a
large pool of compute hosts. In the case of Magnum, if I'm a container
user, I don't want to be paying for a ton of vms that just sit idle, but
I also want to have enough vms to handle my scale when I infrequently
need it. But above all, when I need scale, I don't want to suddenly have
to go boot vms and wait for them to start up when I really need it.

I saw [1] which discusses container scaling, but I'm thinking we can
take this one step further. If I don't want to pay for a lot of vms when
I'm not using them, could I set up an autoscale policy that allows my
cluster to expand when my container concentration gets too high on my
existing cluster? It's kind of a case of nested autoscaling. The
containers are scaled based on request demand, and the cluster vms are
scaled based on container count.
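
A hedged sketch of that nested-autoscaling idea in Python (all names and the
density target are illustrative, not an existing Magnum or Senlin API):

import math

def desired_bay_size(container_count, max_containers_per_node=30, min_nodes=1):
    """Node count needed to keep container density under the target."""
    wanted = int(math.ceil(container_count / float(max_containers_per_node)))
    return max(min_nodes, wanted)

# e.g. 95 containers with a density target of 30 per node -> resize the bay to 4 nodes
print(desired_bay_size(95))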

I'm unsure of the details of Senlin, but at least looking at Heat
autoscaling [2], this would not be very hard to add to the Magnum
templates, and we would forward those on through the bay API. (I figure
we would do this through the bay, not baymodel, because I can see
similar clusters that would want to be scaled differently).

Let me know if I'm totally crazy or if this is a good idea (or if you
guys have already talked about this before). I would be interested in
your feedback.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-November/078628.html
[2] https://wiki.openstack.org/wiki/Heat/AutoScaling#AutoScaling_API

--
Thanks,

Ryan Rossiter (rlrossit)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][heat][magnum] LBaaS of Neutron

2015-11-16 Thread Egor Guz
Eli,

You are correct that Swarm supports only an active/passive deployment model, but 
according to the Docker documentation 
https://docs.docker.com/swarm/multi-manager-setup/
even a replica can handle user requests: "You can use the docker command on any 
Docker Swarm primary manager or any replica."

That means "round-robin" should work.

—
Egor

From: "Qiao,Liyong" >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Sunday, November 15, 2015 at 23:50
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [neutron][heat][magnum] LBaaS of Neutron

Hi Sergey

Thanks for your information, it really helps.
Actually, I am from the Magnum team and we are using Heat to do orchestration of the 
docker swarm bay.

The Swarm master only supports A-P mode (active-passive); I wonder if there is any 
workaround to
implement my requirement:

VIP : 192.168.0.10

master-1 192.168.0.100 (A)
master-2 192.168.0.101 (P)

If I want to make the VIP always connect to master-1 (since it is in A mode),
and only switch to master-2 when master-1 is down, what should I do?
---

Below link is the heat template of k8s(k8s supports A-A mode, so it can use 
ROUND_ROBIN).
https://github.com/openstack/magnum/blob/master/magnum/templates/kubernetes/kubecluster.yaml#L343

P.S Copy to Magnum team.

thanks
Eli.


On 2015年11月16日 15:15, Sergey Kraynev wrote:
On 16 November 2015 at 09:46, Qiao,Liyong wrote:
Hi, I have some questions about neutron LBaaS.

As seen from the wiki, the load balancer only supports:


Table 4.6. Load Balancing Algorithms

Name
LEAST_CONNECTIONS
ROUND_ROBIN

https://wiki.openstack.org/wiki/Neutron/LBaaS/API

Think about the case where I have an A-P mode HA setup:

VIP : 192.168.0.10

master-1 192.168.0.100 (A)
master-2 192.168.0.101 (P)

If I want to make the VIP always connect to master-1 (since it is in A mode),
and only switch to master-2 when master-1 is down, what should I do?
Is there any plan to support more algorithms for neutron LBaaS?

BTW, the usage is from heat:

  etcd_pool:
type: OS::Neutron::Pool
properties:
  protocol: HTTP
  monitors: [{get_resource: etcd_monitor}]
  subnet: {get_resource: fixed_subnet}
  lb_method: ROUND_ROBIN
  vip:
protocol_port: 2379



thanks,
Eli.

--
BR, Eli(Li Yong)Qiao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Hi, Qiao,Liyong

I cannot speak to the LBaaS team's plans for supporting additional algorithms 
:)
AFAIK, they do not plan to add it to the v1 API.
As I understand it, it may be discussed as part of the v2 API [1].

In Heat we have a related BP [2], with several patches in review. So if it 
is implemented on the Neutron side, we may add such functionality too.

[1] http://developer.openstack.org/api-ref-networking-v2-ext.html#lbaas-v2.0
[2] https://blueprints.launchpad.net/heat/+spec/lbaasv2-suport

--
Regards,
Sergey.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
BR, Eli(Li Yong)Qiao
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum][Testing] Reduce Functional testing on gate.

2015-11-12 Thread Egor Guz
Eli,

First of all, I would like to say thank you for your effort (I have never seen so 
many patch sets ;)), but I don't think we should remove the "tls_disabled=True" 
tests from the gates now (maybe in L).
It's still a very commonly used feature and a backup plan if TLS doesn't work for 
some reason.

I think grouping tests per pipeline is a good idea and we should definitely follow 
it.

—
Egor

From: "Qiao,Liyong" >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Wednesday, November 11, 2015 at 23:02
To: 
"openstack-dev@lists.openstack.org" 
>
Subject: [openstack-dev] [Magnum][Testing] Reduce Functional testing on gate.

Hello all:

I will give an update on the Magnum functional testing status. Functional/integration 
testing
is important to us: since we change/modify the Heat templates rapidly, we need to
verify that the modifications are correct, so we need to cover all the templates Magnum 
has.
Currently we only have k8s testing (only tested with the Atomic image); we need to
add more, like swarm (WIP) and mesos (planned). Also, we may need to support the COS 
image.
Lots of work needs to be done.

Regarding the functional testing time cost, we discussed it during the Tokyo summit;
Adrian expected that we can reduce the time cost to 20 min.

I did some analysis of the functional/integration testing in the gate pipeline.
The stages are as follows:
taking k8s functional testing as an example, we have the following test cases:

1) baymodel creation
2) bay(tls_disabled=True) creation/deletion
3) bay(tls_disabled=False) creation to test the k8s API, deleting it after 
testing.

For each stage, the time cost is as follows:

  *   devstack prepare: 5-6 mins
  *   Running devstack: 15 mins (includes downloading the atomic image)
  *   1) and 2) 15 mins
  *   3) 15 +3 mins

In total about 60 mins; a current example is 1h 05m 57s,
see 
http://logs.openstack.org/10/243910/1/check/gate-functional-dsvm-magnum-k8s/5e61039/console.html
for all timestamps.

I don't think it is possible to reduce the time to 20 mins, since the devstack setup 
will take 20 mins already.

To reduce time, I suggest creating only 1 bay per pipeline and doing various kinds 
of testing
on this bay; if we want to test some specific bay (for example, network_driver 
etc.), create
a new pipeline.

So, I think we can delete 2), since 3) will do similar things (create/delete); 
the difference is that
3) uses tls_disabled=False. What do you think?
See https://review.openstack.org/244378 for the time cost; it will be reduced to 45 
min (48m 50s in the example).

=
For other related functional testing work:
I've done the split of functional testing per COE; we have pipelines as:

  *   gate-functional-dsvm-magnum-api 30 mins
  *   gate-functional-dsvm-magnum-k8s 60 mins

And for the swarm pipeline, patches are done and under review now (works fine on the gate)
https://review.openstack.org/244391
https://review.openstack.org/226125



--
BR, Eli(Li Yong)Qiao
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Mesos orchestration as discussed at mid cycle (action required from core reviewers)

2015-11-03 Thread Egor Guz
Michal/Steve,


could you elaborate about choosing Marathon vs Aurora vs custom scheduler
(to implement very precise control around placement/failures/etc)?

—
Egor


On 11/2/15, 22:44, "Michal Rostecki" wrote:

>Hi,
>
>+1 to what Steven said about Kubernetes.
>
>I'd like to add that these 3 things (pid=host, net=host, -v) are
>supported by Marathon, so probably it's much less problematic for us
>than Kubernetes at this moment.
>
>Regards,
>Michal
>
>On 11/03/2015 12:18 AM, Steven Dake (stdake) wrote:
>> Gosh,
>>
>> Kubernetes as an underlay is an interesting idea.  We tried it for the
>> first 6 months of Kolla¹s existence and it almost killed the project.
>>   Essentially kubernetes lacks support for pid=host, net=host, and ­v
>> bind mounting.  All 3 are required to deliver an operational OpenStack.
>>
>> This is why current Kolla goes with a bare metal underlay ­ all docker
>> options we need are available.
>>
>> Regards
>> -steve
>>
>>
>> From: Georgy Okrokvertskhov
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> Date: Monday, November 2, 2015 at 3:47 PM
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> Subject: Re: [openstack-dev] [kolla] Mesos orchestration as discussed at
>> mid cycle (action required from core reviewers)
>>
>> Hi Steve,
>>
>> Thank you for the update. This is really interesting direction for
>>Kolla.
>> I agree with Jeff. It is interesting to see what other frameworks will
>> be used. I suspect Marathon framework is under consideration as it adds
>> most of the application centric functionality like HA\restarter, scaling
>> and rolling-restarts\upgrades. Kubernetes might be also a good candidate
>> for that.
>>
>> Thanks
>> Gosha
>>
>> On Mon, Nov 2, 2015 at 2:00 PM, Jeff Peeler wrote:
>>
>> On Mon, Nov 2, 2015 at 12:02 PM, Steven Dake (stdake)
>> wrote:
>> > Hey folks,
>> >
>> > We had an informal vote at the mid cycle from the core reviewers,
>>and it was
>> > a majority vote, so we went ahead and started the process of the
>> > introduction of mesos orchestration into Kolla.
>> >
>> > For background for our few core reviewers that couldn¹t make it
>>and the
>> > broader community, Angus Salkeld has committed himself and 3
>>other Mirantis
>> > engineers full time to investigate if Mesos could be used as an
>> > orchestration engine in place of Ansible.  We are NOT dropping
>>our Ansible
>> > implementation in the short or long term.  Kolla will continue to
>>lead with
>> > Ansible.  At some point in Mitaka or the N cycle we may move the
>>ansible
>> > bits to a repository called "kolla-ansible" and the kolla
>>repository would
>> > end up containing the containers only.
>> >
>> > The general consensus was that if folks wanted to add additional
>> > orchestration systems for Kolla, they were free to do so if they
>>did the
>> > development and made a commitment to maintaining one core
>>reviewer team with
>> > broad expertise among the core reviewer team of how these various
>>systems
>> > work.
>> >
>> > Angus has agreed to the following
>> >
>> > A new team called "kolla-mesos-core" with 2 members.  One of the
>>members is
>> > Angus Salkeld, the other is selected by Angus Salkeld since this
>>is a cookie
>> > cutter empty repository.  This is typical of how new projects
>>would operate,
>> > but we don't want a code dump and instead want an integrated core
>>team.  To
>> > prevent a situation which the current Ansible expertise shy away
>>from the
>> > Mesos implementation, the core reviewer team has committed to
>>reviewing the
>> > mesos code to get a feel for it.
>> > Over the next 6-8 weeks these two folks will strive to join the
>>Kolla core
>> > team by typical means 1) irc participation 2) code generation 3)
>>effective
>> > and quality reviews 4) mailing list participation
>> > Angus will create a technical specification which will we will
>>roll-call
>> > voted and only accepted once a majority of core review team is
>>satisfied
>> > with the solution.
>> > The kolla-mesos deliverable will be under Kolla governance and be
>>managed by
>> > the Kolla core reviewer team after the kolla-mesos-core team is
>>deprecated.
>> > If the experiment fails, kolla-mesos will be placed in the attic.
>> There is
>> > no specific window for the experiments, it is really up to Angus
>>to decide
>> > if the technique is viable down the road.
>> > For the purpose of voting, the kolla-mesos-core team 

Re: [openstack-dev] [magnum]generate config files by magnum

2015-11-02 Thread Egor Guz
Steve, actually Kub is moving to a fully containerized model where you need only the 
kubelet running on the host 
(https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant-single.html) 
and all other services come in containers (e.g. the UI, 
http://kubernetes.io/v1.0/docs/user-guide/ui.html). So we will have only etcd, 
flannel and the kubelet preinstalled, and the kubelet will start all necessary containers 
(e.g. https://review.openstack.org/#/c/240818/).

Wanghua, we discussed concerns about the current Fedora Atomic images during the 
summit and there are some action points:
1. Fix the CoreOS template. I started working on it, but it will take some time 
because we need to coordinate it with the template refactoring 
(https://review.openstack.org/#/c/211771/).
2. Try to minimize the Fedora Atomic image (Ton will take a look at it).
3. Build an Ubuntu image/template (I or Ton will pick it up, feel free to join ;)).

―
Egor

From: "Steven Dake (stdake)" >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Monday, November 2, 2015 at 06:37
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [magnum]generate config files by magnum

The reason we don't rely on cloud-init more than we already do (sed is run via 
cloud-init) is because many modern distros like CentOS and Fedora Atomic have 
many parts of the host OS as read-only.

I prefer the structure as it is.

Regards
-steve


From: 王华
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Monday, November 2, 2015 at 12:41 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [magnum]generate config files by magnum

Hi forks,

Magnum needs to prepare config files for k8s and docker and add these services 
to systemd. Now we use "sed" to replace some parameters in the config files. This 
method has a disadvantage: the Magnum code depends on a specific image. Users may 
want to create images by themselves, and the config files in their images may be 
different from ours. I think magnum shouldn't depend on the config files in 
the image; these config files should be generated by magnum. What magnum needs 
should be just the installation of k8s, docker, etc. Maybe we can use 
cloud-init to install the software automatically, so that we don't need to 
create images and what we need is just an image with cloud-init.

Regards,
Wang Hua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Coe components status

2015-10-21 Thread Egor Guz
Vikas,

Could you clarify what you mean by 'status'? I don't see this command in 
kubectl, so I assume it is get or describe?
Also for Docker, is it info, inspect or stats? We can get app/container details 
through the Marathon API in Mesos, but it very much depends on what information we are 
looking for ;)

My two cents: I think we should implement/find common ground between 'kub 
describe', 'docker inspect' and 'curl http://${MASTER_IP}:8080/v2/tasks' first. 
These commands
are very useful for troubleshooting.
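
For example, a minimal sketch of pulling task details from the Marathon endpoint 
mentioned above (the requests library and the master address are assumptions here):

import requests

master_ip = '10.0.0.5'  # hypothetical Marathon/Mesos master address
resp = requests.get('http://%s:8080/v2/tasks' % master_ip,
                    headers={'Accept': 'application/json'})
for task in resp.json().get('tasks', []):
    print(task['appId'], task['host'], task.get('startedAt'))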

About a 'magnum container' command for all COEs, we should definitely discuss 
this topic during the summit. But the challenge here is that the Marathon/Mesos app/container 
definition is
very different from the Kub model.

—
Egor

From: Vikas Choudhary
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Tuesday, October 20, 2015 at 20:56
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [Magnum] Coe components status

@Eli,

I will look into how to support this feature for the other COEs also (mesos and 
swarm). But anyway, Magnum's goal is to provide users *at least* what the other COEs 
are providing (if not something extra). All COEs don't have common features, so 
we can't be very strict about providing common interface APIs for all COEs. For 
example, "magnum container" commands work only with swarm, not k8s or mesos.
It would not be justified if k8s provides a way to monitor at a more granular 
level but magnum does not allow users to use it just because the other COEs do not 
provide this feature.

I agree that it would be nice if we could support this feature for all. I would prefer 
to start with k8s first and, if a similar feature is supported by mesos and swarm 
also, incrementally implement that as well.

Regards
Vikas Choudhary

On Wed, Oct 21, 2015 at 6:50 AM, Qiao,Liyong wrote:
Hi Vikas,
Thanks for proposing this change. I wonder if you can show some examples for the 
other COEs we currently support:
swarm, mesos?

If we propose a public API like you proposed, we'd better support all COEs 
instead of being COE-specific.

thanks
Eli.


On 2015年10月20日 18:14, Vikas Choudhary wrote:
Hi Team,

I would appreciate any opinion/concern regarding "coe-component-status" feature 
implementation [1].

For example, in k8s, using the API api/v1/namespaces/{namespace}/componentstatuses,
the status of each k8s component can be queried. My approach would be to provide a
command in magnum like "magnum coe-component-status", leveraging the COE-provided
REST API, and the result would be shown to the user.

[1] https://blueprints.launchpad.net/magnum/+spec/coe-component-status
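
For reference, a small sketch of what such a command could surface from the k8s
API (the master address and the insecure port are assumptions):

    import requests

    MASTER_IP = "10.0.0.5"  # assumed kube master address

    # ComponentStatus covers the scheduler, controller-manager and etcd members.
    resp = requests.get("http://%s:8080/api/v1/componentstatuses" % MASTER_IP)
    for item in resp.json().get("items", []):
        name = item["metadata"]["name"]
        conditions = ", ".join("%s=%s" % (c["type"], c["status"])
                               for c in item.get("conditions", []))
        print("%-25s %s" % (name, conditions))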



-Vikas Choudhary





--
BR, Eli(Li Yong)Qiao





Re: [openstack-dev] [magnum] Document adding --memory option to create containers

2015-10-08 Thread Egor Guz
Adrian,

I agree with Steve, otherwise it’s hard to find the balance of what should go
into the quick start guide (e.g. many operators worry about cpu or I/O instead
of memory).
Also I believe auto-scaling deserves its own detailed document.

—
Egor

From: Adrian Otto
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, October 8, 2015 at 13:04
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum] Document adding --memory option to create 
containers

Steve,

I agree with the concept of a simple quickstart doc, but there also needs to be 
a comprehensive user guide, which does not yet exist. In the absence of the 
user guide, the quick start is the void where this stuff is starting to land. 
We simply need to put together a magnum reference document, and start moving 
content into that.

Adrian

On Oct 8, 2015, at 12:54 PM, Steven Dake (stdake) wrote:

The quickstart guide should be dead dead dead dead simple.  The goal of the
quickstart guide isn’t to teach people best practices around Magnum.  It is to
get a developer operational and give them the feeling that Magnum can
be worked on.  The goal of any quickstart guide should be to encourage the
thinking that a person involving themselves with the project the quickstart
guide represents is a good use of the person’s limited time on the planet.

Regards
-steve


From: Hongbin Lu
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, October 8, 2015 at 9:00 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [magnum] Document adding --memory option to create 
containers

Hi team,

I want to move the discussion in the review below to here, so that we can get
more feedback:

https://review.openstack.org/#/c/232175/

In summary, magnum recently added support for specifying the memory size of
containers. The specification of the memory size is optional, and the COE won’t
reserve any memory for containers with an unspecified memory size. The debate
is whether we should document this optional parameter in the quickstart guide.
Below are the positions of both sides:

Pros:
· It is good practice to always specify the memory size, because
containers with an unspecified memory size won’t have a QoS guarantee.
· The in-development autoscaling feature [1] will query the memory size
of each container to estimate the residual capacity and trigger scaling
accordingly. Containers with an unspecified memory size will be treated as
taking 0 memory, which negatively affects the scaling decision (see the sketch
below).
Cons:
· The quickstart guide should be kept as simple as possible, so it is
not a good idea to have the optional parameter in the guide.

Thoughts?

[1] https://blueprints.launchpad.net/magnum/+spec/autoscale-bay
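
To illustrate the autoscaling concern above, here is a purely hypothetical
sketch of a residual-capacity estimate; the names and numbers are made up and
this is not the autoscaler's actual code.

    def residual_memory_mb(node_capacity_mb, containers):
        # Containers without a declared memory size count as 0, so the bay
        # looks emptier than it really is and scale-up triggers too late.
        reserved = sum(c.get("memory_mb") or 0 for c in containers)
        return node_capacity_mb - reserved

    containers = [
        {"name": "web", "memory_mb": 512},
        {"name": "worker", "memory_mb": None},   # created without --memory
    ]
    print(residual_memory_mb(4096, containers))  # 3584, though 'worker' uses RAM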




Re: [openstack-dev] [Neutron][Kuryr] Kuryr Open Tasks

2015-10-07 Thread Egor Guz
Gal, thx a lot. I have created the poll
http://doodle.com/poll/udpdw77evdpnsaq6 where everyone can vote for a time slot.

—
Egor


From: Gal Sagie
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Tuesday, October 6, 2015 at 12:08
To: "OpenStack Development Mailing List (not for usage questions)", Eran Gampel,
Antoni Segura Puimedon, Irena Berezovsky, Mohammad Banikazemi, Taku Fukushima,
Salvatore Orlando, sky fei, "digambarpati...@yahoo.co.in", Digambar Patil
Subject: [openstack-dev] [Neutron][Kuryr] Kuryr Open Tasks

Hello All,

I have opened a Trello board to track all Kuryr assigned tasks and their 
assignee.
In addition to all the non assigned tasks we have defined.

You can visit and look at the board here [1].
Please email back if I missed you or any task that you are working on, or a task
that you think needs to be on that list.

This is only a temporary solution until we get everything organised; we plan to
track everything with launchpad bugs (and the assigned blueprints).

If you see any task from this list which doesn't have an assignee and you feel
you have the time and the desire to contribute, please contact me and I will
provide guidance.

Thanks
Gal

[1] https://trello.com/b/cbIAXrQ2/project-kuryr



Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-10-03 Thread Egor Guz
Kris,

We are facing similar challenges/questions, and here are some thoughts. We
cannot ignore scalability limits: Kub ~ 100 nodes (there are plans to support
1K next year), Swarm ~ ??? (I never heard even about 100 nodes, definitely not
ready for production yet (happy to be wrong ;))), Mesos ~ 100K nodes, but it has
scalability issues with many schedulers (e.g. each team develops/uses their
own framework (Marathon/Aurora)). It looks like small clusters are the
better/safer option today (even if you need to pay for an additional control
plane), but I believe the situation will change in the next twelve months.

—
Egor

From: "Kris G. Lindgren" >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Wednesday, September 30, 2015 at 16:26
To: 
"openstack-dev@lists.openstack.org" 
>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

We are looking at deploying magnum as an answer for how we do containers
company wide at GoDaddy.  I am going to agree with both you and Josh.

I agree that managing one large system is going to be a pain and past experience
tells me this won't be practical/scale; however, from experience I also know
exactly the pain Josh is talking about.

We currently have ~4k projects in our internal openstack cloud; about 1/4 of
the projects are currently doing some form of containers on their own, with
more joining every day.  If all of these projects were to convert over to the
current magnum configuration, we would suddenly be attempting to
support/configure ~1k magnum clusters.  Considering that everyone will want it
HA, we are looking at a minimum of 2 kube nodes per cluster + lbaas vips +
floating ips.  From a capacity standpoint this is an excessive amount of
duplicated infrastructure to spin up in projects where people may be running
10–20 containers per project.  From an operator support perspective this is a
special level of hell that I do not want to get into.  Even if I am off by
75%, 250 still sucks.

From my point of view, an ideal use case for companies like ours
(yahoo/godaddy) would be to support hierarchical projects in magnum.
That way we could create a project for each department, and then the subteams
of those departments can have their own projects.  We create a bay per
department.  Sub-projects, if they want to, can support creation of their own
bays (but support of the kube cluster would then fall to that team).  When a
sub-project spins up a pod on a bay, minions get created inside that team's sub
projects and the containers in that pod run on the capacity that was spun up
under that project; the minions for each pod would be in a scaling group and
as such grow/shrink as dictated by load.

The above would make it so we support a minimal, yet imho reasonable,
number of kube clusters, give people who can't/don’t want to fall in line with
the provided resources a way to make their own, and still offer a "good enough
for a single company" level of multi-tenancy.

>Joshua,
>
>If you share resources, you give up multi-tenancy.  No COE system has the
>concept of multi-tenancy (kubernetes has some basic implementation but it
>is totally insecure).  Not only does multi-tenancy have to “look like” it
>offers multiple tenants isolation, but it actually has to deliver the
>goods.
>
>I understand that at first glance a company like Yahoo may not want separate
>bays for their various applications because of the perceived administrative
>overhead. I would then challenge Yahoo to go deploy a COE like kubernetes
>(which has no multi-tenancy or a very basic implementation of such) and get
>it to work with hundreds of different competing applications. I would
>speculate the administrative overhead of getting all that to work would be
>greater than the administrative overhead of simply doing a bay create for the
>various tenants.
>
>Placing tenancy inside a COE seems interesting, but no COE does that today.
>Maybe in the future they will. Magnum was designed to present an integration
>point between COEs and OpenStack today, not five years down the road. It's not
>as if we took shortcuts to get to where we are.
>
>I will grant you that density is lower with the current design of Magnum vs a
>full on integration with OpenStack within the COE itself. However, that model
>which is what I believe you proposed is a huge design change to each COE
>which would overly complicate the COE at the gain of increased density. I
>personally don’t feel that pain is worth the gain.


___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-29 Thread Egor Guz
definitely ;), but there are some thoughts on Tom’s email.

I agree that we shouldn't reinvent apis, but I don’t think Magnum should only
focus on deployment (I feel we will become another Puppet/Chef/Ansible module
if we do that) :)
I believe our goal should be to seamlessly integrate Kub/Mesos/Swarm into the
OpenStack ecosystem (Neutron/Cinder/Barbican/etc.) even if we need to step into
the Kub/Mesos/Swarm communities for that.

―
Egor

From: Adrian Otto <adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, September 29, 2015 at 08:44
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This is definitely a topic we should cover in Tokyo.

On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans) 
<daneh...@cisco.com<mailto:daneh...@cisco.com>> wrote:


+1

From: Tom Cammann <tom.camm...@hpe.com<mailto:tom.camm...@hpe.com>>
Reply-To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, September 29, 2015 at 2:22 AM
To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This has been my thinking in the last couple of months to completely deprecate 
the COE specific APIs such as pod/service/rc and container.

As we now support Mesos, Kubernetes and Docker Swarm, it's going to be very
difficult and probably a wasted effort trying to consolidate their separate
APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration Engine 
Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote:
Would it make sense to ask the opposite of Wanghua's question: should 
pod/service/rc be deprecated if the user can easily get to the k8s api?
Even if we want to orchestrate these in a Heat template, the corresponding heat 
resources can just interface with k8s instead of Magnum.
Ton Ngo,

Egor Guz ---09/28/2015 10:20:02 PM---Also I belive docker compose 
is just command line tool which doesn’t have any api or scheduling feat

From: Egor Guz <e...@walmartlabs.com><mailto:e...@walmartlabs.com>
To: 
"openstack-dev@lists.openstack.org"<mailto:openstack-dev@lists.openstack.org> 
<openstack-dev@lists.openstack.org><mailto:openstack-dev@lists.openstack.org>
Date: 09/28/2015 10:20 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?




Also I believe docker compose is just a command line tool which doesn’t have any
API or scheduling features.
But during last Docker Conf hackathon PayPal folks implemented docker compose 
executor for Mesos (https://github.com/mohitsoni/compose-executor)
which can give you pod like experience.

―
Egor

From: Adrian Otto 
<adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com><mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org><mailto:openstack-dev@lists.openstack.org>>
Date: Monday, September 28, 2015 at 22:03
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org><mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Wanghua,

I do follow your logic, but docker-compose only needs the docker API to 
operate. We are intentionally avoiding re-inventing the wheel. Our goal is not 
to replace docker swarm (or other existing systems), but to complement it/them. 
We want to offer users of Docker the richness of native APIs and supporting 
tools. This way they will not need to compromise features or wait longer for us 
to implement each new feature as it is added. Keep in mind that our pod, 
service, and replication controller resources pre-date this philosophy. If we 
started out with the current approach, those would not exist in Magnum.

Thanks,

Adrian

On Sep 28, 2015, at 8:32 PM, 王华 
<wanghua.hum...@gmail.com<mailto:wanghua.hum...@gmail.com><mailto:wanghua.hum...@gmail.com>>
 wrote:

Hi folks,

Magnum now exposes service, pod, etc to users in kubernetes coe, but exposes 
container in swarm coe. As I know, swarm is only a scheduler of container, 
which is like 

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-28 Thread Egor Guz
Also I believe docker compose is just a command line tool which doesn’t have any
API or scheduling features.
But during last Docker Conf hackathon PayPal folks implemented docker compose 
executor for Mesos (https://github.com/mohitsoni/compose-executor)
which can give you pod like experience.

―
Egor

From: Adrian Otto
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Monday, September 28, 2015 at 22:03
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Wanghua,

I do follow your logic, but docker-compose only needs the docker API to 
operate. We are intentionally avoiding re-inventing the wheel. Our goal is not 
to replace docker swarm (or other existing systems), but to complement it/them. 
We want to offer users of Docker the richness of native APIs and supporting 
tools. This way they will not need to compromise features or wait longer for us 
to implement each new feature as it is added. Keep in mind that our pod, 
service, and replication controller resources pre-date this philosophy. If we 
started out with the current approach, those would not exist in Magnum.

Thanks,

Adrian

On Sep 28, 2015, at 8:32 PM, 王华 wrote:

Hi folks,

Magnum now exposes service, pod, etc. to users in the kubernetes coe, but exposes
container in the swarm coe. As far as I know, swarm is only a scheduler of
containers, which is like nova in openstack. Docker compose is an orchestration
program, which is like heat in openstack. k8s is the combination of scheduler and
orchestration. So I think it is better to expose the apis in compose to users,
which are at the same level as k8s.


Regards
Wanghua


Re: [openstack-dev] [magnum] [Kuryr] Handling password for k8s

2015-09-21 Thread Egor Guz
+1 to Hongbin’s concerns about exposing passwords. I think we should start
with a dedicated kub user in the magnum config and move to keystone domains after.

I am just wondering how the Kuryr team is planning to solve a similar issue (I
believe the libnetwork driver requires Neutron’s credentials). Can someone
comment on it?

—
Egor

From: "Steven Dake (stdake)" >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Sunday, September 20, 2015 at 19:34
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [magnum] Handling password for k8s

Hongbin,

I believe the domain approach is the preferred approach for the solution long
term.  It will require more R&D to execute than other options but will also be
completely secure.

Regards
-steve


From: Hongbin Lu
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Sunday, September 20, 2015 at 4:26 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum] Handling password for k8s

Hi Ton,

If I understand your proposal correctly, it means the inputted password will be
exposed to users in the same tenant (since the password is passed as a stack
parameter, which is exposed within the tenant). If users are not admin, they don’t
have the privilege to create a temp user. As a result, users have to expose their
own password to create a bay, which is suboptimal.

A slight amendment is to have the operator create a user that is dedicated to
communication between k8s and the neutron load balancer service. The password of
the user can be written into the config file, picked up by the conductor and
passed to heat. The drawback is that there is no multi-tenancy for the openstack
load balancer service, since all bays will share the same credential.

Another solution I can think of is to have magnum create a keystone domain
[1] for each bay (using the admin credential in the config file), and assign the
bay’s owner to that domain. As a result, the user will have the privilege to
create a bay user within that domain. It seems Heat supports native keystone
resources [2], which makes the administration of keystone users much easier. The
drawback is that the implementation is more complicated.

[1] https://wiki.openstack.org/wiki/Domains
[2] 
http://specs.openstack.org/openstack/heat-specs/specs/kilo/keystone-resources.html
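
As a rough sketch of that third option, using python-keystoneclient v3 calls;
the names and credentials are illustrative and this is not an actual Magnum
implementation:

    from keystoneclient.v3 import client as keystone_client

    # Admin credentials would come from magnum's config file.
    admin = keystone_client.Client(
        auth_url="http://keystone:5000/v3",
        username="magnum-admin",
        password="secret",
        project_name="admin",
        user_domain_name="Default",
        project_domain_name="Default",
    )

    # One domain per bay; the bay's owner is granted a role in it so they can
    # create the dedicated bay user themselves.
    domain = admin.domains.create(name="magnum-bay-1234", description="bay 1234")
    role = admin.roles.find(name="Member")
    admin.roles.grant(role, user="<bay-owner-user-id>", domain=domain)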

Best regards,
Hongbin

From: Ton Ngo [mailto:t...@us.ibm.com]
Sent: September-20-15 2:08 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Handling password for k8s


Hi everyone,
I am running into a potential issue in implementing the support for load 
balancer in k8s services. After a chat with sdake, I would like to run this by 
the team for feedback/suggestion.
First let me give a little background for context. In the current k8s cluster, 
all k8s pods and services run within a private subnet (on Flannel) and they can 
access each other but they cannot be accessed from external network. The way to 
publish an endpoint to the external network is by specifying this attribute in 
your service manifest:
type: LoadBalancer
Then k8s will talk to OpenStack Neutron to create the load balancer pool, 
members, VIP, monitor. The user would associate the VIP with a floating IP and 
then the endpoint of the service would be accessible from the external internet.
To talk to Neutron, k8s needs the user credential and this is stored in a 
config file on the master node. This includes the username, tenant name, 
password. When k8s starts up, it will load the config file and create an 
authenticated client with Keystone.
The issue we need to find a good solution for is how to handle the password. 
With the current effort on security to make Magnum production-ready, we want to 
make sure to handle the password properly.
Ideally, the best solution is to pass the authenticated token to k8s to use,
but this will require a sizeable change upstream in k8s. We have good reason to
pursue this but it will take time.
For now, my current implementation is as follows:

  1.  In a bay-create, magnum client adds the password to the API call 
(normally it authenticates and sends the token)
  2.  The conductor picks it up and uses it as an input parameter to the heat 
templates
  3.  When configuring the master node, the password is saved in the config 
file for k8s services.
  4.  Magnum does not store the password internally.
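
A minimal sketch of step 2, using python-heatclient with illustrative stack and
parameter names (the real templates and parameter names may differ):

    from heatclient import client as heat_client
    from heatclient.common import template_utils

    heat = heat_client.Client('1', endpoint="http://heat:8004/v1/<tenant-id>",
                              token="<user-token>")

    files, template = template_utils.get_template_contents(
        template_file="kubecluster.yaml")

    # The password travels only as a stack parameter and ends up in the k8s
    # config file on the master node; magnum itself does not store it.
    heat.stacks.create(
        stack_name="kube-bay-1234",
        template=template,
        files=files,
        parameters={
            "user_name": "demo",
            "tenant_name": "demo",
            "user_password": "<password-from-bay-create>",
        },
    )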

This is probably not ideal, but it would let us proceed for now. We can 
deprecate it later when we have a 

Re: [openstack-dev] [magnum] Discovery

2015-09-17 Thread Egor Guz
+1 for stopping use of the public discovery endpoint; most private cloud VMs
don’t have access to the internet, and the operator must run an etcd instance
somewhere just for discovery.

—
Egor

From: Andrew Melton
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, September 17, 2015 at 12:06
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum] Discovery


Hey Daneyon,


I'm fairly partial towards #2 as well. Though, I'm wondering if it's possible
to take it a step further. Could we run etcd in each Bay without using the
public discovery endpoint? And then configure Swarm to simply use the internal
etcd as its discovery mechanism? This could cut one of our external service
dependencies and make it easier to run Magnum in an environment with locked
down public internet access.


Anyways, I think #2 could be a good start that we could iterate on later if 
need be.


--Andrew



From: Daneyon Hansen (danehans)
Sent: Wednesday, September 16, 2015 11:26 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Discovery

All,

While implementing the flannel --network-driver for swarm, I have come across 
an issue that requires feedback from the community. Here is the breakdown of 
the issue:

  1.  Flannel [1] requires etcd to store network configuration. Meeting this 
requirement is simple for the kubernetes bay types since kubernetes requires 
etcd.
  2.  A discovery process is needed for bootstrapping etcd. Magnum implements 
the public discovery option [2].
  3.  A discovery process is also required to bootstrap a swarm bay type. 
Again, Magnum implements a publicly hosted (Docker Hub) option [3].
  4.  Magnum API exposes the discovery_url attribute that is leveraged by swarm 
and etcd discovery.
  5.  Etcd cannot be implemented in swarm because discovery_url is associated
with swarm’s discovery process and not etcd.

Here are a few options on how to overcome this obstacle:

  1.  Make the discovery_url more specific, for example etcd_discovery_url and
swarm_discovery_url. However, this option would needlessly expose both
discovery URLs to all bay types.
  2.  Swarm supports etcd as a discovery backend. This would mean discovery is 
similar for both bay types. With both bay types using the same mechanism for 
discovery, it will be easier to provide a private discovery option in the 
future.
  3.  Do not support flannel as a network-driver for k8s bay types. This would 
require adding support for a different driver that supports multi-host 
networking such as libnetwork. Note: libnetwork is only implemented in the 
Docker experimental release: 
https://github.com/docker/docker/tree/master/experimental.

I lean towards #2, but there may be other options, so feel free to share your
thoughts. I would like to obtain feedback from the community before proceeding
in a particular direction.

[1] https://github.com/coreos/flannel
[2] 
https://github.com/coreos/etcd/blob/master/Documentation/discovery_protocol.md
[3] https://docs.docker.com/swarm/discovery/
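
For context, a small sketch of the two discovery styles; the public URL follows
the protocol documented in [2], while the internal form is only an assumption
about how option #2 could look:

    import requests

    # Current approach: ask the public etcd discovery service for a one-off
    # cluster URL and pass it to the bay as discovery_url.
    public_url = requests.get("https://discovery.etcd.io/new?size=1").text
    print(public_url)  # e.g. https://discovery.etcd.io/<token>

    # Option #2: point swarm at the bay's own etcd instead of any public
    # service, e.g. a discovery string like this (illustrative address):
    internal_discovery = "etcd://10.0.0.5:2379/swarm"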

Regards,
Daneyon Hansen


Re: [openstack-dev] [magnum] versioned objects changes

2015-08-28 Thread Egor Guz
Adrian, agree with your points. But I think we should discuss it during the 
next team meeting and address/answer all concerns which team members may have. 
Grzegorz, can you join?

—
Egor

From: Adrian Otto <adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>>
Reply-To: OpenStack Development Mailing List (not for usage questions)
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Friday, August 28, 2015 at 18:51
To: OpenStack Development Mailing List (not for usage questions)
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] versioned objects changes

We are going to merge this work. I understand and respect Hongbin's position, 
but I respectfully disagree. When we are presented with ways to implement low 
overhead best practices like versioned objects, we will. It's not that hard to 
bump the version of an object when you change it. I like having systemic 
enforcement of that.
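
For anyone new to o.vo, a minimal sketch of what "bumping the version" looks
like; the object and fields are illustrative, not an actual Magnum model:

    from oslo_versionedobjects import base
    from oslo_versionedobjects import fields


    @base.VersionedObjectRegistry.register
    class Bay(base.VersionedObject):
        # Version 1.0: initial version
        # Version 1.1: added 'node_count'  <-- bump whenever the fields change
        VERSION = '1.1'

        fields = {
            'uuid': fields.UUIDField(),
            'name': fields.StringField(nullable=True),
            'node_count': fields.IntegerField(default=1),
        }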

On the subject of review 211057, if you submit a review to remove comments
that is purely stylistic in nature, then you are inviting a discussion of style
with our reviewers, and deserve to make that patch stylistically perfect.

If that patch had actual code in it that made Magnum better, and several
reviewers voted against the format of the comments, that would be stupid, and I
would +2 it in spite of any -1 votes as long as it meets our rules for
submission (like it must have a bug number).

Finally, meaningful -1 votes are valuable, and should not be viewed as a waste 
of effort. That's what we do as a team to help each other continually improve, 
and to make Magnum something we can all be proud of. With all that said, if you 
only have a stylistic comment, that should be a -0 vote with a comment, not a 
-1. If you are making stylistic and material comments together, that's fine, 
use a -1 vote.

Thanks,

Adrian

On Aug 28, 2015, at 5:21 PM, Davanum Srinivas
<dava...@gmail.com<mailto:dava...@gmail.com>> wrote:

Hongbin,

We are hearing the best advice available from the folks who started the 
library, evangelized it across nova, ironic, heat, neutron etc.

If we can spend so much time and energy (*FOUR* -1's on a review which just 
changes some commented lines - https://review.openstack.org/#/c/211057/) then 
we can and should clearly do better in things that really matter in the long 
run.

If we get into the rhythm of doing the right things and figuring out the steps 
needed right from the get go, it will pay off in the future.

My 2 cents.

Thanks,
Dims

PS: Note that I used "we" wearing my magnum core hat and not the o.vo/oslo core
hat :)

On Fri, Aug 28, 2015 at 6:52 PM, Dan Smith
<d...@danplanet.com<mailto:d...@danplanet.com>> wrote:
 If you want my inexperienced opinion, a young project is the perfect
 time to start this.

^--- This ---^

 I understand that something like [2] will cause a test to fail when you
 make a major change to a versioned object. But you *want* that. It helps
 reviewers more easily catch contributors to say You need to update the
 version, because the hash changed. The sooner you start using versioned
 objects in the way they are designed, the smaller the upfront cost, and
 it will also be a major savings later on if something like [1] pops up.

...and the way it will be the least overhead is if it's part of the
culture of contributors and reviewers. It's infinitely harder to make
the culture shift after everyone is used to not having to think about
upgrades, not to mention the technical recovery Ryan mentioned.

It's not my call for Magnum, but long-term thinking definitely pays off
in this particular area.

--Dan





--
Davanum Srinivas :: https://twitter.com/dims


Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Egor Guz
+1 for non-Barbican support first; unfortunately Barbican is not very well
adopted in existing installations.

Madhuri, also please keep in mind we should come up with a solution which will
work with Swarm and Mesos as well in the future.

—
Egor

From: Madhuri Rai <madhuri.ra...@gmail.com<mailto:madhuri.ra...@gmail.com>>
Reply-To: OpenStack Development Mailing List (not for usage questions)
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Monday, June 15, 2015 at 0:47
To: OpenStack Development Mailing List (not for usage questions)
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

Hi,

Thanks Adrian for the quick response. Please find my response inline.

On Mon, Jun 15, 2015 at 3:09 PM, Adrian Otto
<adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>> wrote:
Madhuri,

On Jun 14, 2015, at 10:30 PM, Madhuri Rai
<madhuri.ra...@gmail.com<mailto:madhuri.ra...@gmail.com>> wrote:

Hi All,

This is to bring the blueprint
secure-kubernetes (https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes)
into discussion. I have been trying to figure out what the possible change
areas could be to support this feature in Magnum. Below is just a rough idea on
how to proceed further on it.

This task can be further broken in smaller pieces.

1. Add support for TLS in python-k8sclient.
The current auto-generated code doesn't support TLS. So this work will be to 
add TLS support in kubernetes python APIs.

2. Add support for Barbican in Magnum.
Barbican will be used to store all the keys and certificates.

Keep in mind that not all clouds will support Barbican yet, so this approach 
could impair adoption of Magnum until Barbican is universally supported. It 
might be worth considering a solution that would generate all keys on the 
client, and copy them to the Bay master for communication with other Bay nodes. 
This is less secure than using Barbican, but would allow for use of Magnum 
before Barbican is adopted.

+1, I agree. One question here: we are trying to secure the communication
between magnum-conductor and kube-apiserver, right?
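
For illustration, a minimal sketch of TLS client-certificate authentication
against the kube-apiserver; the file paths and the secure port are assumptions:

    import requests

    # Client certificate and key issued for magnum-conductor; the CA that
    # signed the kube-apiserver certificate is used to verify the server.
    KEY = "/etc/magnum/certs/conductor.key"     # assumed paths
    CERT = "/etc/magnum/certs/conductor.crt"
    CA = "/etc/magnum/certs/bay-ca.crt"

    resp = requests.get(
        "https://10.0.0.5:6443/api/v1/nodes",   # assumed secure API endpoint
        cert=(CERT, KEY),
        verify=CA,
    )
    print(resp.status_code)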


If both methods were supported, the Barbican method should be the default, and 
we should put warning messages in the config file so that when the 
administrator relaxes the setting to use the non-Barbican configuration he/she 
is made aware that it requires a less secure mode of operation.

In the non-Barbican support, the client will generate the keys and pass the
location of the key to the magnum services. Then again the heat template will
copy and configure the kubernetes services on the master node. Same as the step
below.


My suggestion is to completely implement the Barbican support first, and follow 
up that implementation with a non-Barbican option as a second iteration for the 
feature.

How about implementing the non-Barbican support first, as this would be easy to
implement, so that we can first concentrate on Points 1 and 3? And then after
that, we can work on Barbican support with more insight.

Another possibility would be for Magnum to use its own private installation of 
Barbican in cases where it is not available in the service catalog. I dislike 
this option because it creates an operational burden for maintaining the 
private Barbican service, and additional complexities with securing it.

In my opinion, installation of Barbican should be independent of Magnum. My
idea here is, if a user wants to store his/her keys in Barbican then he/she will
install it.
We will have a config parameter like store_secure: when True it means we have to
store the keys in Barbican, otherwise not.
What do you think?

3. Add support of TLS in Magnum.
This work mainly involves supporting the use of key and certificates in magnum 
to support TLS.

The user generates the keys and certificates and stores them in Barbican. Now
there are two ways to access these keys while creating a bay.

Rather than “the user generates the keys…”, perhaps it might be better to word
that as “the magnum client library code generates the keys for the user…”.

It is the user here. In my opinion, there could be users who don't want to use
the magnum client but rather the APIs directly; in that case the user will
generate the keys themselves.

In our first implementation, we can support the user generating the keys and 
then later client generating the keys.

1. Heat will access Barbican directly.
While creating a bay, the user will provide this key and the heat templates will
fetch this key from Barbican.

I think you mean that Heat will use the Barbican key to fetch the TLS key for 
accessing the native API service running on the Bay.
Yes.

2. Magnum-conductor accesses Barbican.
While creating a bay, the user will provide this key and then Magnum-conductor
will fetch this key from Barbican and provide this key to heat.

Then heat will copy these files onto the kubernetes master node. Then the bay
will use this key to start the Kubernetes services signed with 

Re: [openstack-dev] [Magnum] Discuss configurable-coe-api-port Blueprint

2015-06-10 Thread Egor Guz
Kai,

+1 for adding it to baymodel, but I don’t see many use cases where people need to
change it. And if they really need to change it they can always modify the heat
template.
-1 for opening it just for admins. I think everyone who creates a model should
be able to specify it, the same way as dns-nameserver for example.

―
Egor

From: Kai Qiang Wu <wk...@cn.ibm.com<mailto:wk...@cn.ibm.com>>
Reply-To: OpenStack Development Mailing List (not for usage questions)
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, June 10, 2015 at 18:35
To: "openstack-dev@lists.openstack.org"
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [Magnum] Discuss configurable-coe-api-port Blueprint


I’m moving this whiteboard to the ML so we can have some discussion to refine 
it, and then go back and update the whiteboard.

Source: https://blueprints.launchpad.net/magnum/+spec/configurable-coe-api-port


@Sdake and I have had some discussion, but we may need more input from your side.


1. Keep apiserver_port in baymodel. It may only allow admins (if we involve
policy) to create that baymodel, which gives less flexibility for other users.


2. apiserver_port was designed into baymodel; moving it from baymodel to bay is a
big change. Are there other, better ways? (This may also apply to other
configuration fields, like dns-nameserver etc.)



Thanks



Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com<mailto:wk...@cn.ibm.com>
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!