Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-19 Thread Daly, Louise M
Thanks Liping!

Yes, we are using the Kuryr IPAM driver in the PoC code; otherwise we would have to
manage the IP addresses manually.
We chose IPVLAN mostly because of the hardware limit on MAC addresses on the NIC
(i.e. when the number of virtual devices created on a master interface exceeds the
NIC's MAC capacity, the NIC falls back to promiscuous mode and performance becomes
a concern).

Thanks,
Louise

From: Liping Mao (limao) [mailto:li...@cisco.com]
Sent: Monday, September 19, 2016 11:08 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

Thanks for your reply; your PoC is cool, Louise!
Can you already use --ipam-driver=kuryr so that kuryr-ipam handles this in your
PoC code?
BTW, is there any reason you chose ipvlan rather than macvlan?

Regards,
Liping Mao

From: Daly, Louise M <louise.m.d...@intel.com>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
Date: Monday, September 19, 2016, 5:26 PM
To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

Hi Liping,

I am also on the team working on the ipvlan proposal and I will try and answer 
your question the best I can.

Judging from your blog post, I think you understand the proposal correctly. The
steps you mentioned are very similar to the code PoC with ipvlan.

These are the steps we followed when we manually set up an IPVLAN PoC; they do
not include IPAM, as we managed the IP addresses manually.

1. Create two VMs on a private network.

2. Create a Docker ipvlan network (e.g. docker network create -d ipvlan
--subnet=192.168.0.0/24 --gateway=192.168.0.1 -o ipvlan_mode=l2 -o parent=ens3
ipvl_net).

3. Run some containers on the network (e.g. docker run --net=ipvl_net
--ip=192.168.0.15 -it --rm alpine /bin/sh).

4. Associate the IP addresses with the VM port.
Containers can now ping each other and VMs on the same network.
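Step 4 above is the only one listed without a command; on a Mitaka-era deployment it can be done with the Neutron CLI via allowed-address-pairs. A hedged sketch — the port ID and addresses are illustrative, and the exact client flags may vary by client version:

```shell
# Look up the VM's Neutron port (filter syntax may differ between client versions;
# 192.168.0.10 is an assumed fixed IP for the VM).
neutron port-list --fixed-ips ip_address=192.168.0.10

# Add the container's IP as an allowed address pair on the VM port, so Neutron's
# anti-spoofing rules permit traffic from 192.168.0.15 on this port.
# <VM_PORT_ID> is the ID returned by the command above.
neutron port-update <VM_PORT_ID> \
    --allowed-address-pairs type=dict list=true ip_address=192.168.0.15
```

These commands require a live Neutron endpoint and credentials, so they are a sketch of the workflow rather than a copy-paste recipe.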

In the code PoC we use kuryr-ipam for IP address management, so the extra step of
creating the ports in your version has been incorporated into the code PoC.

Thanks,
Louise
From: Liping Mao (limao) [mailto:li...@cisco.com]
Sent: Sunday, September 18, 2016 3:00 PM
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

Hi Ivan,

I tried your proposal with manual steps on Mitaka; I used netns (instead of a
Docker container) and macvlan (instead of ipvlan) in my test:
https://lipingmao.github.io/2016/09/18/kuryr_macvlan_ipvlan_datapath_poc.html

Did I understand it correctly? Any comments will be much appreciated.
Thanks.

Regards,
Liping Mao

From: Liping Mao <li...@cisco.com>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
Date: Tuesday, September 13, 2016, 7:56 PM
To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal


Hi Ivan,

It sounds cool!

For the security group and allowed address pairs:
maybe we can disable port-security, because all the Docker containers in one VM
will share the one security group on the VM port. I'm not sure how to apply a
security group per container; disabling port-security may be one of the choices,
and then allowed address pairs are not needed in this case.


Regards,
Liping Mao

On September 12, 2016, at 19:31, Coughlan, Ivan <ivan.cough...@intel.com> wrote:



Overview
Kuryr proposes to address the issues of double encapsulation and exposure of
containers as Neutron entities when containers are running within VMs.
As an alternative to vlan-aware-vms and the use of OVS within the VM, we propose to:
- Use allowed-address-pairs configuration for the VM Neutron port
- Use IPVLAN for wiring the containers within the VM
In this way we:
- Achieve an efficient data path to the container within the VM
- Better leverage OpenStack EPA (Enhanced Platform Awareness) features to
accelerate the data path (more details below)
- Mitigate the risk of vlan-aware-vms not making it into Neutron in time
- Provide a solution that works on existing and previous OpenStack releases
This work should be done in a way permitting the user to optionally select this
feature.

Required Changes
The four main changes we have identified in the current Kuryr codebase are as follows:
* Introduce an option to enable the “IPVLAN in VM” use case. This can be achieved
with a config file option or possibly a command-line argument. The IPVLAN master
interface must also be identified.
* When using the “IPVLAN in VM” use case, Kuryr should no longer create a new port
in Neutron or the associated veth pairs. Instead, Kuryr will create a new IPVLAN
slave interface on top of the VM’s master interface and pass this slave interface
to the container netns.
* When using the “IPVLAN in VM” use case, the VM’s port ID needs to be identified
so we can associate the additional IPVLAN addresses with the port. This can be
achieved by querying Neutron’s show-port function and passing the VM’s IP address.
* When using the “IPVLAN in VM” use case, Kuryr should associate the additional
IPVLAN addresses with the VM’s port. This can be achieved using Neutron’s
allowed-address-pairs flag in the port-update function. We intend to make use of
Kuryr’s existing IPAM functionality to request these IPs from Neutron.

Asks
We wish to discuss the pros and cons.
For example, the exposure of containers as proper Neutron entities and the utility
of Neutron’s allowed-address-pairs are not yet well understood.
We also wish to understand whether this approach is acceptable for Kuryr.

EPA
The Enhanced Platform Awareness initiative is a continuous program to enable
fine-tuning of the platform for virtualized network functions.
This is done by exposing the processor and platform capabilities through the
management and orchestration layers.
When a virtual network function is instantiated by an Enhanced Platform Awareness
enabled orchestrator, the application requirements can be more efficiently matched
with the platform capabilities.
http://itpeernetwork.intel.com/openstack-kilo-release-is-shaping-up-to-be-a-milestone-for-enhanced-platform-awareness/
https://networkbuilders.intel.com/docs/OpenStack_EPA.pdf
https://www.brighttalk.com/webcast/12229/181563/epa-features-in-openstack-kilo

Regards,
Ivan

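The slave-interface wiring described in the proposal (an IPVLAN slave created on the VM's master interface and handed to the container netns) can be sketched with plain `ip` commands, using a netns to stand in for the container; the interface and namespace names are illustrative:

```shell
# Create an IPVLAN slave (L2 mode) on top of the VM's master interface ens3.
ip link add link ens3 name ipvl0 type ipvlan mode l2

# Stand-in for the container's network namespace; move the slave into it.
ip netns add container1
ip link set ipvl0 netns container1

# Inside the namespace, configure the address allocated by Kuryr's IPAM
# (192.168.0.15/24 and the gateway are illustrative values).
ip netns exec container1 ip addr add 192.168.0.15/24 dev ipvl0
ip netns exec container1 ip link set ipvl0 up
ip netns exec container1 ip route add default via 192.168.0.1
```

These commands need root and a real master interface, so they are a sketch of what Kuryr would automate rather than a test script.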
Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Vikas Choudhary
On Wed, Sep 14, 2016 at 11:22 AM, Liping Mao (limao) wrote:

> > You have a valid point regarding ipvlan support in newer kernel versions
> > but IIUC overlay mode might not help if nic has a limit on max number of
> > macs that it supports in hardware.
> >
> > for example: http://www.brocade.com/content/html/en/configuration-guide/fastiron-08030b-securityguide/GUID-ED71C989-6295-4175-8CFE-7EABDEE83E1F.html
>
> Thanks vikas point out this. Yes, it may cause problem if the mac of
> containers expose to hardware switch.
> In overlay case, AFAIK, hw should not learn container mac as it is in
> vxlan(gre) encapsulation.
>

gotcha, thanks Liping.

What is your opinion on the unicast-MAC limit that some drivers impose, which
can enable promiscuous mode on the VM if the number of macvlan interfaces
crosses a certain limit, and thus may degrade performance by accepting all the
multicast/broadcast traffic within the subnet?

ipvlan has problems with DHCP and IPv6. I think it is a topic worth
discussing.

-Vikas
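The driver limit under discussion comes down to how the two link types assign MAC addresses: macvlan slaves each get their own (typically randomly generated) MAC, while ipvlan slaves reuse the parent's MAC, which is why ipvlan sidesteps the NIC's unicast MAC filter limits. A quick sketch with illustrative interface names:

```shell
# macvlan: each slave gets its own MAC address; with many slaves the NIC's
# unicast MAC filter can overflow, pushing the NIC into promiscuous mode.
ip link add link eth0 name mvl0 type macvlan mode bridge
ip link show mvl0    # MAC differs from eth0's

# ipvlan: slaves share the parent's MAC, so only one MAC hits the NIC filter.
ip link add link eth0 name ipvl0 type ipvlan mode l2
ip link show ipvl0   # MAC is identical to eth0's
```

Requires root and a real parent interface; shown here only to make the MAC behavior concrete.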

Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Liping Mao (limao)
> You have a valid point regarding ipvlan support in newer kernel versions but
> IIUC overlay mode might not help if nic has a limit on max number of macs that
> it supports in hardware.
>
> for example: http://www.brocade.com/content/html/en/configuration-guide/fastiron-08030b-securityguide/GUID-ED71C989-6295-4175-8CFE-7EABDEE83E1F.html

Thanks, Vikas, for pointing this out. Yes, it may cause problems if the MACs of
containers are exposed to a hardware switch.
In the overlay case, AFAIK, the hardware should not learn container MACs, as
they are inside the VXLAN (GRE) encapsulation.


Regards,
Liping Mao

From: Vikas Choudhary <choudharyvika...@gmail.com>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
Date: Wednesday, September 14, 2016, 1:10 PM
To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal



On Wed, Sep 14, 2016 at 10:33 AM, Vikas Choudhary <choudharyvika...@gmail.com> wrote:


On Wed, Sep 14, 2016 at 9:39 AM, Liping Mao (limao) <li...@cisco.com> wrote:
> Though, not the best person to comment on macvlan vs ipvlan, one limitation 
> of macvlan is that on physical interfaces, maximum possible number of random 
> mac generations may not cope-up with large number of containers on same vm.

Thanks, yes, it is a limitation, Vikas.
This happens if you use VLAN as the tenant network. If the tenant network uses
overlay mode, it may be a little better for the MAC problem.
The reason I mention macvlan as one possible choice is that ipvlan needs a very
new kernel; it may be a bit hard to use in a prod env (AFAIK).

You have a valid point regarding ipvlan support in newer kernel versions but 
IIUC overlay mode might not help if nic has a limit on max number of macs that 
it supports in hardware.
   for example: http://www.brocade.com/content/html/en/configuration-guide/fastiron-08030b-securityguide/GUID-ED71C989-6295-4175-8CFE-7EABDEE83E1F.html




Regards,
Liping Mao

From: Vikas Choudhary <choudharyvika...@gmail.com>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
Date: Wednesday, September 14, 2016, 11:50 AM

To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal
Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal



On Wed, Sep 14, 2016 at 7:10 AM, Liping Mao (limao) <li...@cisco.com> wrote:
Hi Ivan and Gary,

Maybe we can use macvlan, as ipvlan needs a very new kernel.
allowed-address-pairs can also allow different MACs in the VM.
Do we consider macvlan here? Thanks.

Though, not the best person to comment on macvlan vs ipvlan, one limitation of 
macvlan is that on physical interfaces, maximum possible number of random mac 
generations may not cope-up with large number of containers on same vm.


Regards,
Liping Mao

From: Liping Mao <li...@cisco.com>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
Date: Tuesday, September 13, 2016, 9:09 PM
To: OpenStack List <openstack-dev@lists.openstack.org>

Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

Hi Gary,

I mean that maybe this can be one of the choices.

A security group applies per Neutron port; in this case, all the Docker
containers on one VM will share one Neutron port (if I understand correctly), so
they will share the security group on that port. It is not per-container
security groups, and I am not sure how to use security groups in this case.

Regards,
Liping Mao

On September 13, 2016, at 20:31, Loughnane, Gary <gary.loughn...@intel.com> wrote:

Hi Liping,

Thank you for the feedback!

Do you mean to make disabling security groups an optional configuration for
Kuryr?
Do you have any opinion on the consequences/acceptability of disabling SGs?

Regards,
Gary


Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Vikas Choudhary
On Wed, Sep 14, 2016 at 10:33 AM, Vikas Choudhary <
choudharyvika...@gmail.com> wrote:

>
>
> On Wed, Sep 14, 2016 at 9:39 AM, Liping Mao (limao) 
> wrote:
>
>> > Though, not the best person to comment on macvlan vs ipvlan, one
>> limitation of macvlan is that on physical interfaces, maximum possible
>> number of random mac generations may not cope-up with large number of
>> containers on same vm.
>>
>> Thanks, yes, it is a limitation, Vikas.
>> This happened if you use vlan as tenant network. If tenant network use
>> overlay mode, maybe it will be a little bit better for the mac problem.
>> The reason why I mention macvlan can be one of choice is because ipvlan
>> need a very new kernel , it maybe a little bit hard to use in prod
>> env(AFAIK).
>>
>
> You have a valid point regarding ipvlan support in newer kernel versions
> but IIUC overlay mode might not help if nic has a limit on max number of
> macs that it supports in hardware.
>
   for example:
http://www.brocade.com/content/html/en/configuration-guide/fastiron-08030b-securityguide/GUID-ED71C989-6295-4175-8CFE-7EABDEE83E1F.html
<http://www.brocade.com/content/html/en/configuration-guide/fastiron-08030b-securityguide/GUID-ED71C989-6295-4175-8CFE-7EABDEE83E1F.html>

>
>

>
>
>>
>> Regards,
>> Liping Mao
>>
>> From: Vikas Choudhary 
>> Reply-To: OpenStack List 
>> Date: 2016年9月14日 星期三 上午11:50
>>
>> To: OpenStack List 
>> Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal
>>
>>
>>
>> On Wed, Sep 14, 2016 at 7:10 AM, Liping Mao (limao) 
>> wrote:
>>
>>> Hi Ivan and Gary,
>>>
>>> maybe we can use macvlan as ipvlan need very new kernel.
>>> allow-address-pairs can aslo allow different mac in vm.
>>> Do we consider macvlan here? Thanks.
>>>
>>
>> Though, not the best person to comment on macvlan vs ipvlan, one
>> limitation of macvlan is that on physical interfaces, maximum possible
>> number of random mac generations may not cope-up with large number of
>> containers on same vm.
>>
>>
>>>
>>> Regards,
>>> Liping Mao
>>>
>>> From: Liping Mao 
>>> Reply-To: OpenStack List 
>>> Date: 2016年9月13日 星期二 下午9:09
>>> To: OpenStack List 
>>>
>>> Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal
>>>
>>> Hi Gary,
>>>
>>> I mean maybe that can be one choice in my mind.
>>>
>>> Security Group is for each neutron port,in this case,all the docker on
>>> one vm will share one neutron port(if I understand correct),then they will
>>> share the security group on that port,it is not per container per security
>>> group,not sure how to use security group in this case?
>>>
>>> Regards,
>>> Liping Mao
>>>
>>> 在 2016年9月13日,20:31,Loughnane, Gary  写道:
>>>
>>> Hi Liping,
>>>
>>>
>>>
>>> Thank you for the feedback!
>>>
>>>
>>>
>>> Do you mean to have disabled security groups as an optional
>>> configuration for Kuryr?
>>>
>>> Do you have any opinion on the consequences/acceptability of disabling
>>> SG?
>>>
>>>
>>>
>>> Regards,
>>>
>>> Gary
>>>
>>>
>>>
>>> *From:* Liping Mao (limao) [mailto:li...@cisco.com ]
>>> *Sent:* Tuesday, September 13, 2016 12:56 PM
>>> *To:* OpenStack Development Mailing List (not for usage questions) <
>>> openstack-dev@lists.openstack.org>
>>> *Subject:* Re: [openstack-dev] [Kuryr] IPVLAN data path proposal
>>>
>>>
>>>
>>> Hi Ivan,
>>>
>>>
>>>
>>> It sounds cool!
>>>
>>>
>>>
>>> for security group and allowed address pair,
>>>
>>> Maybe we can disable port-security, because all the Docker containers in
>>> one VM will share one security group on the VM port. I'm not sure how to
>>> use a security group per container; disabling port-security may be one
>>> choice. Then allowed-address-pairs would not be needed in this case.
>>>
>>>
>>>
>>>
>>>
>>> Regards,
>>>
>>> Liping Mao
>>>
>>>
>>> On 12 Sep 2016, at 19:31, Coughlan, Ivan  wrote:
>>>
>>>
>>>
>>> *Overview*
>>>
>>> Kuryr proposes to address the issues of double encapsulation and
>> exposure of containers as neutron entities when containers are running
>> within VMs.

Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Vikas Choudhary
On Wed, Sep 14, 2016 at 9:39 AM, Liping Mao (limao)  wrote:

> > Though I am not the best person to comment on macvlan vs ipvlan, one
> limitation of macvlan is that on physical interfaces the maximum number
> of randomly generated MACs may not keep up with a large number of
> containers on the same VM.
>
> Thanks, yes, it is a limitation, Vikas.
> This happens if you use VLAN as the tenant network. If the tenant network
> uses overlay mode, the MAC problem may be a little better.
> The reason I mention macvlan as one choice is that ipvlan needs a very new
> kernel; it may be a bit hard to use in a prod env (AFAIK).
>

You have a valid point regarding ipvlan support in newer kernel versions,
but IIUC overlay mode might not help if the NIC has a limit on the max
number of MACs that it supports in hardware.



>
> Regards,
> Liping Mao
>
> From: Vikas Choudhary 
> Reply-To: OpenStack List 
> Date: Wednesday, 14 September 2016, 11:50 AM
>
> To: OpenStack List 
> Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal
>
>
>
> On Wed, Sep 14, 2016 at 7:10 AM, Liping Mao (limao) 
> wrote:
>
>> Hi Ivan and Gary,
>>
>> Maybe we can use macvlan, as ipvlan needs a very new kernel.
>> allowed-address-pairs can also allow different MACs in a VM.
>> Do we consider macvlan here? Thanks.
>>
>
> Though I am not the best person to comment on macvlan vs ipvlan, one
> limitation of macvlan is that on physical interfaces the maximum number
> of randomly generated MACs may not keep up with a large number of
> containers on the same VM.
>
>
>>
>> Regards,
>> Liping Mao
>>
>> From: Liping Mao 
>> Reply-To: OpenStack List 
>> Date: Tuesday, 13 September 2016, 9:09 PM
>> To: OpenStack List 
>>
>> Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal
>>
>> Hi Gary,
>>
>> I mean maybe that can be one choice in my mind.
>>
>> A security group applies per Neutron port. In this case, all the Docker
>> containers on one VM will share one Neutron port (if I understand
>> correctly), so they will share the security group on that port. It is not
>> one security group per container, so I am not sure how to use security
>> groups in this case.
>>
>> Regards,
>> Liping Mao
>>
>> On 13 Sep 2016, at 20:31, Loughnane, Gary  wrote:
>>
>> Hi Liping,
>>
>>
>>
>> Thank you for the feedback!
>>
>>
>>
>> Do you mean to have disabled security groups as an optional configuration
>> for Kuryr?
>>
>> Do you have any opinion on the consequences/acceptability of disabling SG?
>>
>>
>>
>> Regards,
>>
>> Gary
>>
>>
>>
>> *From:* Liping Mao (limao) [mailto:li...@cisco.com ]
>> *Sent:* Tuesday, September 13, 2016 12:56 PM
>> *To:* OpenStack Development Mailing List (not for usage questions) <
>> openstack-dev@lists.openstack.org>
>> *Subject:* Re: [openstack-dev] [Kuryr] IPVLAN data path proposal
>>
>>
>>
>> Hi Ivan,
>>
>>
>>
>> It sounds cool!
>>
>>
>>
>> for security group and allowed address pair,
>>
>> Maybe we can disable port-security, because all the Docker containers in
>> one VM will share one security group on the VM port. I'm not sure how to
>> use a security group per container; disabling port-security may be one
>> choice. Then allowed-address-pairs would not be needed in this case.
>>
>>
>>
>>
>>
>> Regards,
>>
>> Liping Mao
>>
>>
>> On 12 Sep 2016, at 19:31, Coughlan, Ivan  wrote:
>>
>>
>>
>> *Overview*
>>
>> Kuryr proposes to address the issues of double encapsulation and exposure
>> of containers as neutron entities when containers are running within VMs.
>>
>> As an alternative to the vlan-aware-vms and use of ovs within the VM, we
>> propose to:
>>
>> -  Use allowed-address-pairs configuration for the VM neutron
>> port
>>
>> -  Use IPVLAN for wiring the Containers within VM
>>
>>
>>
>> In this way:
>>
>> -  Achieve efficient data path to container within VM
>>
>> -  Better leverage OpenStack EPA(Enhanced Platform Awareness)
>> features to accelerate the data path (more details below)
>>
>> -  Mitigate the risk of vlan-aware-vms not making neutron in time
>>
>> -  Provide a solution that works on existing and previous
>> openstack releases
>>
>>
>>
>> This work should be done in a way permitting the user to optionally
>> select this feature.
>>

Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Liping Mao (limao)
> Though I am not the best person to comment on macvlan vs ipvlan, one
> limitation of macvlan is that on physical interfaces the maximum number of
> randomly generated MACs may not keep up with a large number of containers
> on the same VM.

Thanks, yes, it is a limitation, Vikas.
This happens if you use VLAN as the tenant network. If the tenant network uses
overlay mode, the MAC problem may be a little better.
The reason I mention macvlan as one choice is that ipvlan needs a very new
kernel; it may be a bit hard to use in a prod env (AFAIK).
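Since the kernel requirement is the recurring concern here, a driver could guard the "IPVLAN in VM" mode with a simple kernel-version check. A minimal sketch in Python; the 3.19 floor is an assumption based on when the ipvlan driver was merged upstream, so verify it against your distro's kernel:

```python
# Quick sanity check that the host kernel is new enough for IPVLAN.
# The ipvlan driver landed around Linux 3.19; treat that as the floor
# here (an assumption -- confirm against your distro's changelog).
import re

IPVLAN_MIN_KERNEL = (3, 19)

def kernel_supports_ipvlan(release):
    """Parse a `uname -r` style string, e.g. '4.4.0-210-generic'."""
    m = re.match(r"(\d+)\.(\d+)", release)
    if not m:
        return False
    return (int(m.group(1)), int(m.group(2))) >= IPVLAN_MIN_KERNEL

# On a live host one would pass platform.release() instead of a literal.
print(kernel_supports_ipvlan("4.4.0-210-generic"))   # True
print(kernel_supports_ipvlan("3.10.0-1160.el7"))     # False (e.g. RHEL 7)
```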

Regards,
Liping Mao

From: Vikas Choudhary 
mailto:choudharyvika...@gmail.com>>
Reply-To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, 14 September 2016, 11:50 AM
To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal



On Wed, Sep 14, 2016 at 7:10 AM, Liping Mao (limao) 
mailto:li...@cisco.com>> wrote:
Hi Ivan and Gary,

Maybe we can use macvlan, as ipvlan needs a very new kernel.
allowed-address-pairs can also allow different MACs in a VM.
Do we consider macvlan here? Thanks.

Though I am not the best person to comment on macvlan vs ipvlan, one limitation
of macvlan is that on physical interfaces the maximum number of randomly
generated MACs may not keep up with a large number of containers on the same VM.


Regards,
Liping Mao

From: Liping Mao mailto:li...@cisco.com>>
Reply-To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, 13 September 2016, 9:09 PM
To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>

Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

Hi Gary,

I mean maybe that can be one choice in my mind.

A security group applies per Neutron port. In this case, all the Docker
containers on one VM will share one Neutron port (if I understand correctly),
so they will share the security group on that port. It is not one security
group per container, so I am not sure how to use security groups in this case.

Regards,
Liping Mao

On 13 Sep 2016, at 20:31, Loughnane, Gary
mailto:gary.loughn...@intel.com>> wrote:

Hi Liping,

Thank you for the feedback!

Do you mean to have disabled security groups as an optional configuration for 
Kuryr?
Do you have any opinion on the consequences/acceptability of disabling SG?

Regards,
Gary

From: Liping Mao (limao) [mailto:li...@cisco.com]
Sent: Tuesday, September 13, 2016 12:56 PM
To: OpenStack Development Mailing List (not for usage questions) 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

Hi Ivan,

It sounds cool!

for security group and allowed address pair,
Maybe we can disable port-security, because all the Docker containers in one VM
will share one security group on the VM port. I'm not sure how to use a
security group per container; disabling port-security may be one choice. Then
allowed-address-pairs would not be needed in this case.


Regards,
Liping Mao

On 12 Sep 2016, at 19:31, Coughlan, Ivan
mailto:ivan.cough...@intel.com>> wrote:

Overview
Kuryr proposes to address the issues of double encapsulation and exposure of 
containers as neutron entities when containers are running within VMs.
As an alternative to the vlan-aware-vms and use of ovs within the VM, we 
propose to:

-  Use allowed-address-pairs configuration for the VM neutron port

-  Use IPVLAN for wiring the Containers within VM

In this way:

-  Achieve efficient data path to container within VM

-  Better leverage OpenStack EPA(Enhanced Platform Awareness) features 
to accelerate the data path (more details below)

-  Mitigate the risk of vlan-aware-vms not making neutron in time

-  Provide a solution that works on existing and previous openstack 
releases

This work should be done in a way permitting the user to optionally select this 
feature.


Required Changes
The four main changes we have identified in the current kuryr codebase are as 
follows:

· Introduce an option of enabling “IPVLAN in VM” use case. This can be
achieved by using a config file option or possibly passing a command line
argument. The IPVLAN master interface must also be identified.

· If using “IPVLAN in VM” use case, Kuryr should no longer create a new
port in Neutron or the associated VEth pairs. Instead, Kuryr will create a new
IPVLAN slave interface on top of the VM’s master interface and pass this slave
interface to the Container netns.

· If using “IPVLAN in VM” use case, the VM’s port ID needs to be
identified so we can associate the additional IPVLAN addresses with the port.
This can be achieved by querying Neutron’s show-port function and passing the
VMs IP address.

· If using “IPVLAN in VM” use case, Kuryr should associate the
additional IPVLAN addresses with the VMs port. This can be achieved using
Neutron’s allowed-address-pairs flag in the port-update function. We intend to
make use of Kuryr’s existing IPAM functionality to request these IPs from
Neutron.
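The last two changes above (a show-port lookup by the VM's IP, then a port-update adding allowed-address-pairs) can be sketched roughly as follows. The helper names are hypothetical; only the commented-out `update_port` call mirrors the python-neutronclient API, and the live calls are left out since they need real credentials:

```python
# Sketch: find the VM's Neutron port by its fixed IP, then build the
# port-update body that adds a container's IPVLAN address to the port's
# allowed-address-pairs (keeping pairs already present).

def find_port_id(ports, vm_ip):
    """`ports` is the 'ports' list from a Neutron list/show-port response."""
    for port in ports:
        if any(ip["ip_address"] == vm_ip for ip in port.get("fixed_ips", [])):
            return port["id"]
    return None

def add_address_pair(existing_pairs, container_ip):
    """Build the update body, deduplicating the container IP if present."""
    pairs = [p for p in existing_pairs if p["ip_address"] != container_ip]
    pairs.append({"ip_address": container_ip})
    return {"port": {"allowed_address_pairs": pairs}}

ports = [{"id": "p1", "fixed_ips": [{"ip_address": "192.168.0.5"}],
          "allowed_address_pairs": []}]
port_id = find_port_id(ports, "192.168.0.5")
body = add_address_pair([], "192.168.0.15")
print(port_id, body)
# With real credentials this would be roughly:
#   neutron = neutronclient.v2_0.client.Client(...)
#   neutron.update_port(port_id, body)
```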

Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Vikas Choudhary
On Wed, Sep 14, 2016 at 7:10 AM, Liping Mao (limao)  wrote:

> Hi Ivan and Gary,
>
> Maybe we can use macvlan, as ipvlan needs a very new kernel.
> allowed-address-pairs can also allow different MACs in a VM.
> Do we consider macvlan here? Thanks.
>

Though I am not the best person to comment on macvlan vs ipvlan, one limitation
of macvlan is that on physical interfaces the maximum number of randomly
generated MACs may not keep up with a large number of containers on the
same VM.


>
> Regards,
> Liping Mao
>
> From: Liping Mao 
> Reply-To: OpenStack List 
> Date: Tuesday, 13 September 2016, 9:09 PM
> To: OpenStack List 
>
> Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal
>
> Hi Gary,
>
> I mean maybe that can be one choice in my mind.
>
> A security group applies per Neutron port. In this case, all the Docker
> containers on one VM will share one Neutron port (if I understand correctly),
> so they will share the security group on that port. It is not one security
> group per container, so I am not sure how to use security groups in this case.
>
> Regards,
> Liping Mao
>
> On 13 Sep 2016, at 20:31, Loughnane, Gary  wrote:
>
> Hi Liping,
>
>
>
> Thank you for the feedback!
>
>
>
> Do you mean to have disabled security groups as an optional configuration
> for Kuryr?
>
> Do you have any opinion on the consequences/acceptability of disabling SG?
>
>
>
> Regards,
>
> Gary
>
>
>
> *From:* Liping Mao (limao) [mailto:li...@cisco.com ]
> *Sent:* Tuesday, September 13, 2016 12:56 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [Kuryr] IPVLAN data path proposal
>
>
>
> Hi Ivan,
>
>
>
> It sounds cool!
>
>
>
> for security group and allowed address pair,
>
> Maybe we can disable port-security, because all the Docker containers in one
> VM will share one security group on the VM port. I'm not sure how to use a
> security group per container; disabling port-security may be one choice.
> Then allowed-address-pairs would not be needed in this case.
>
>
>
>
>
> Regards,
>
> Liping Mao
>
>
> On 12 Sep 2016, at 19:31, Coughlan, Ivan  wrote:
>
>
>
> *Overview*
>
> Kuryr proposes to address the issues of double encapsulation and exposure
> of containers as neutron entities when containers are running within VMs.
>
> As an alternative to the vlan-aware-vms and use of ovs within the VM, we
> propose to:
>
> -  Use allowed-address-pairs configuration for the VM neutron port
>
> -  Use IPVLAN for wiring the Containers within VM
>
>
>
> In this way:
>
> -  Achieve efficient data path to container within VM
>
> -  Better leverage OpenStack EPA(Enhanced Platform Awareness)
> features to accelerate the data path (more details below)
>
> -  Mitigate the risk of vlan-aware-vms not making neutron in time
>
> -  Provide a solution that works on existing and previous
> openstack releases
>
>
>
> This work should be done in a way permitting the user to optionally select
> this feature.
>
>
>
>
> *Required Changes*
>
> The four main changes we have identified in the current kuryr codebase are
> as follows:
>
> · Introduce an option of enabling “IPVLAN in VM” use case. This
> can be achieved by using a config file option or possibly passing a command
> line argument. The IPVLAN master interface must also be identified.
>
> · If using “IPVLAN in VM” use case, Kuryr should no longer create
> a new port in Neutron or the associated VEth pairs. Instead, Kuryr will
> create a new IPVLAN slave interface on top of the VM’s master interface and
> pass this slave interface to the Container netns.
>
> · If using “IPVLAN in VM” use case, the VM’s port ID needs to be
> identified so we can associate the additional IPVLAN addresses with the
> port. This can be achieved by querying Neutron’s show-port function and
> passing the VMs IP address.
>
> · If using “IPVLAN in VM” use case, Kuryr should associate the
> additional IPVLAN addresses with the VMs port. This can be achieved using
> Neutron’s allowed-address-pairs flag in the port-update function. We
> intend to make use of Kuryr’s existing IPAM functionality to request these
> IPs from Neutron.
>
>
>
> *Asks*
>
> We wish to discuss the pros and cons.
>
> For example, containers exposure as proper neutron entities and the
> utility of neutron’s allowed-address-pairs is not yet well understood.
>
>
>
> We also wish to understand if this approach is acceptable for kuryr?
>
>
>
>
>
> *EPA*
>
> The Enhanced Platform Awareness initiative is a continuous program to
> enable fine-tuning of the platform for virtualized network functions.

Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Vikas Choudhary
On Tue, Sep 13, 2016 at 11:13 PM, Antoni Segura Puimedon  wrote:

> On Tue, Sep 13, 2016 at 5:05 PM, Hongbin Lu  wrote:
> >
> >
> > On Tue, Sep 13, 2016 at 2:10 AM, Vikas Choudhary
> >  wrote:
> >>
> >>
> >>
> >> On Mon, Sep 12, 2016 at 9:17 PM, Hongbin Lu 
> wrote:
> >>>
> >>> Ivan,
> >>>
> >>> Thanks for the proposal. From Magnum's point of view, this proposal
> >>> doesn't seem to require storing neutron/rabbitmq credentials in tenant
> >>> VMs, which is more desirable. I am looking forward to the PoC.
> >>
> >>
> >> Hongbin, can you please elaborate on why this will not require storing
> >> neutron credentials?
> >> For example, in the libnetwork case, neutron commands like "show_port" and
> >> "update_port" will still need to be invoked from inside the VM.
> >
> >
> > In a typical COE cluster, there are master nodes and work (minion/slave)
> > nodes. Regarding credentials, the following is optimal:
> > * Avoid storing credentials in work nodes. If credentials have to be
> stored,
> > move them to master nodes if we can (containers are running in work
> nodes so
> > credentials stored there have a higher risk). A question for you,
> neutron's
> > commands like "show_port" and "update_port" need to be invoked from work
> > nodes or master nodes?
> > * If credentials have to be stored, scope them with least privilege
> (Magnum
> > uses Keystone trust for this purpose).
>
> I think that with the ipvlan proposal you probably can do without having
> to call
>
Vikas:

To me it looks like the 'from where to make neutron calls' part is the same in
both approaches (address-pairs and vlan-aware-vms). Which neutron API calls
are made will differ (no neutron port creation in the ipvlan approach, rather
port_update), but whether we make those calls from inside the worker VM or the
master VM depends on the choice of 'neutron communication mode' ('rest_driver'
or 'rpc_driver').
Please correct me if I understood something wrong.


> those two. IIUC the proposal the binding on the VM, taking libnetwork
> as an example
>  would be:
>
> 1. docker sends a request to kuryr-libnetwork running in container-in-vm
> mode.
> 2. kuryr-libnetwork forwards the request to a kuryr daemon that has
> the necessary
> credentials to talk to neutron (it could run either in the master node
> or in the compute
> node just like there is the dhcp agent, i.e., with one foot on the VM
> network and one
> on the underlay).
> 3. The kuryr daemon makes the allowed-address-pair requests to Neutron
> and returns
> the result to the kuryr-libnetwork in the VM, at which point the VM
> port can already
> send and receive data for the container.
> 4. kuryr-libnetwork in the VM creates an ipvlan virtual device and
> assigns it the IP
> returned by the kuryr daemon.
>
> >
> >>
> >>
> >> Overall I liked this approach given its simplicity over vlan-aware-vms.
> >>
> >> -VikasC
> >>>
> >>>
> >>> Best regards,
> >>> Hongbin
> >>>
> >>> On Mon, Sep 12, 2016 at 7:29 AM, Coughlan, Ivan <
> ivan.cough...@intel.com>
> >>> wrote:
> 
> 
> 
>  Overview
> 
>  Kuryr proposes to address the issues of double encapsulation and
>  exposure of containers as neutron entities when containers are running
>  within VMs.
> 
>  As an alternative to the vlan-aware-vms and use of ovs within the VM,
> we
>  propose to:
> 
>  -  Use allowed-address-pairs configuration for the VM neutron
>  port
> 
>  -  Use IPVLAN for wiring the Containers within VM
> 
> 
> 
>  In this way:
> 
>  -  Achieve efficient data path to container within VM
> 
>  -  Better leverage OpenStack EPA(Enhanced Platform Awareness)
>  features to accelerate the data path (more details below)
> 
>  -  Mitigate the risk of vlan-aware-vms not making neutron in
>  time
> 
>  -  Provide a solution that works on existing and previous
>  openstack releases
> 
> 
> 
>  This work should be done in a way permitting the user to optionally
>  select this feature.
> 
> 
> 
> 
> 
>  Required Changes
> 
>  The four main changes we have identified in the current kuryr codebase
>  are as follows:
> 
>  · Introduce an option of enabling “IPVLAN in VM” use case.
> This
>  can be achieved by using a config file option or possibly passing a
> command
>  line argument. The IPVLAN master interface must also be identified.
> 
>  · If using “IPVLAN in VM” use case, Kuryr should no longer
>  create a new port in Neutron or the associated VEth pairs. Instead,
> Kuryr
>  will create a new IPVLAN slave interface on top of the VM’s master
> interface
>  and pass this slave interface to the Container netns.
> 
>  · If using “IPVLAN in VM” use case, the VM’s port ID needs to
> be
>  identified so we can associate the additional IPVLAN addresses with
> 

Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Vikas Choudhary
On Tue, Sep 13, 2016 at 5:26 PM, Liping Mao (limao)  wrote:

> Hi Ivan,
>
> It sounds cool!
>
> for security group and allowed address pair,
> Maybe we can disable port-security, because all the Docker containers in one
> VM will share one security group on the VM port. I'm not sure how to use a
> security group per container; disabling port-security may be one choice.
> Then allowed-address-pairs would not be needed in this case.
>
Vikas:

Can you please elaborate "maybe just disable port-security can be one of
the choice. then do not need allowed address pairs in this case" ?

Are you suggesting a solution where by disabling port security, each
container can have its own security group? Would you mind please explaining
a bit more for me ?


>
> Regards,
> Liping Mao
>
> On 12 Sep 2016, at 19:31, Coughlan, Ivan  wrote:
>
>
>
> *Overview*
>
> Kuryr proposes to address the issues of double encapsulation and exposure
> of containers as neutron entities when containers are running within VMs.
>
> As an alternative to the vlan-aware-vms and use of ovs within the VM, we
> propose to:
>
> -  Use allowed-address-pairs configuration for the VM neutron port
>
> -  Use IPVLAN for wiring the Containers within VM
>
>
>
> In this way:
>
> -  Achieve efficient data path to container within VM
>
> -  Better leverage OpenStack EPA(Enhanced Platform Awareness)
> features to accelerate the data path (more details below)
>
> -  Mitigate the risk of vlan-aware-vms not making neutron in time
>
> -  Provide a solution that works on existing and previous
> openstack releases
>
>
>
> This work should be done in a way permitting the user to optionally select
> this feature.
>
>
>
>
> *Required Changes*
>
> The four main changes we have identified in the current kuryr codebase are
> as follows:
>
> · Introduce an option of enabling “IPVLAN in VM” use case. This
> can be achieved by using a config file option or possibly passing a command
> line argument. The IPVLAN master interface must also be identified.
>
> · If using “IPVLAN in VM” use case, Kuryr should no longer create
> a new port in Neutron or the associated VEth pairs. Instead, Kuryr will
> create a new IPVLAN slave interface on top of the VM’s master interface and
> pass this slave interface to the Container netns.
>
> · If using “IPVLAN in VM” use case, the VM’s port ID needs to be
> identified so we can associate the additional IPVLAN addresses with the
> port. This can be achieved by querying Neutron’s show-port function and
> passing the VMs IP address.
>
> · If using “IPVLAN in VM” use case, Kuryr should associate the
> additional IPVLAN addresses with the VMs port. This can be achieved using
> Neutron’s allowed-address-pairs flag in the port-update function. We
> intend to make use of Kuryr’s existing IPAM functionality to request these
> IPs from Neutron.
>
>
>
> *Asks*
>
> We wish to discuss the pros and cons.
>
> For example, containers exposure as proper neutron entities and the
> utility of neutron’s allowed-address-pairs is not yet well understood.
>
>
>
> We also wish to understand if this approach is acceptable for kuryr?
>
>
>
>
>
> *EPA*
>
> The Enhanced Platform Awareness initiative is a continuous program to
> enable fine-tuning of the platform for virtualized network functions.
>
> This is done by exposing the processor and platform capabilities through
> the management and orchestration layers.
>
> When a virtual network function is instantiated by an Enhanced Platform
> Awareness enabled orchestrator, the application requirements can be more
> efficiently matched with the platform capabilities.
>
> http://itpeernetwork.intel.com/openstack-kilo-release-is-sha
> ping-up-to-be-a-milestone-for-enhanced-platform-awareness/
>
> https://networkbuilders.intel.com/docs/OpenStack_EPA.pdf
>
> https://www.brighttalk.com/webcast/12229/181563/epa-features
> -in-openstack-kilo
>
>
>
>
>
> Regards,
>
> Ivan….
>
> --
> Intel Research and Development Ireland Limited
> Registered in Ireland
> Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
> Registered Number: 308263
>
> This e-mail and any attachments may contain confidential material for the
> sole use of the intended recipient(s). Any review or distribution by others
> is strictly prohibited. If you are not the intended recipient, please
> contact the sender and delete all copies.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Vikas Choudhary
On Tue, Sep 13, 2016 at 8:35 PM, Hongbin Lu  wrote:

>
>
> On Tue, Sep 13, 2016 at 2:10 AM, Vikas Choudhary <
> choudharyvika...@gmail.com> wrote:
>
>>
>>
>> On Mon, Sep 12, 2016 at 9:17 PM, Hongbin Lu  wrote:
>>
>>> Ivan,
>>>
>>> Thanks for the proposal. From Magnum's point of view, this proposal
>>> doesn't seem to require storing neutron/rabbitmq credentials in tenant
>>> VMs, which is more desirable. I am looking forward to the PoC.
>>>
>>
>> Hongbin, can you please elaborate on why this will not require storing
>> neutron credentials?
>> For example, in the libnetwork case, neutron commands like "show_port" and
>> "update_port" will still need to be invoked from inside the VM.
>>
>
> In a typical COE cluster, there are master nodes and work (minion/slave)
> nodes. Regarding credentials, the following is optimal:
> * Avoid storing credentials in work nodes. If credentials have to be
> stored, move them to master nodes if we can (containers are running in work
> nodes so credentials stored there have a higher risk). A question for you,
> neutron's commands like "show_port" and "update_port" need to be invoked
> from work nodes or master nodes?
>

VIKAS>> That will depend on kuryr configuration. There will be two choices:

   1. use 'rest_driver' for neutron communication (making calls directly
   from where the libnetwork driver is running; it could be a VM or baremetal)
   2. use 'rpc_driver'. The flow that Toni described assumes rpc_driver is
   used, so as he explained, kuryr-libnetwork in the VM will talk to the
   kuryr daemon over RPC for neutron services.

IMO, the above part will be common to both approaches, address-pairs based
or vlan-aware-vms based.
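As a rough illustration of the configuration choice described above, the communication mode could be picked from a config value at startup. Aside from the 'rest_driver'/'rpc_driver' names taken from this thread, everything here (option name, classes) is hypothetical, not actual Kuryr code:

```python
# Hypothetical dispatch of the neutron communication mode by config value.

class RestDriver:
    """Call Neutron's REST API directly from wherever kuryr-libnetwork runs."""
    mode = "rest"

class RpcDriver:
    """Forward requests over RPC to a kuryr daemon that holds the credentials."""
    mode = "rpc"

DRIVERS = {"rest_driver": RestDriver, "rpc_driver": RpcDriver}

def load_neutron_driver(conf):
    # "neutron_communication_mode" is an assumed option name.
    name = conf.get("neutron_communication_mode", "rest_driver")
    try:
        return DRIVERS[name]()
    except KeyError:
        raise ValueError("unknown neutron communication mode: %s" % name)

driver = load_neutron_driver({"neutron_communication_mode": "rpc_driver"})
print(driver.mode)  # rpc
```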


> * If credentials have to be stored, scope them with least privilege (Magnum
> uses Keystone trust for this purpose).
>
>
>>
>> Overall I liked this approach given its simplicity over vlan-aware-vms.
>>
>> -VikasC
>>
>>>
>>> Best regards,
>>> Hongbin
>>>
>>> On Mon, Sep 12, 2016 at 7:29 AM, Coughlan, Ivan >> > wrote:
>>>


 *Overview*

 Kuryr proposes to address the issues of double encapsulation and
 exposure of containers as neutron entities when containers are running
 within VMs.

 As an alternative to the vlan-aware-vms and use of ovs within the VM,
 we propose to:

 -  Use allowed-address-pairs configuration for the VM neutron
 port

 -  Use IPVLAN for wiring the Containers within VM



 In this way:

 -  Achieve efficient data path to container within VM

 -  Better leverage OpenStack EPA(Enhanced Platform Awareness)
 features to accelerate the data path (more details below)

 -  Mitigate the risk of vlan-aware-vms not making neutron in
 time

 -  Provide a solution that works on existing and previous
 openstack releases



 This work should be done in a way permitting the user to optionally
 select this feature.




 *Required Changes*

 The four main changes we have identified in the current kuryr codebase
 are as follows:

 · Introduce an option of enabling “IPVLAN in VM” use case.
 This can be achieved by using a config file option or possibly passing a
 command line argument. The IPVLAN master interface must also be identified.

 · If using “IPVLAN in VM” use case, Kuryr should no longer
 create a new port in Neutron or the associated VEth pairs. Instead, Kuryr
 will create a new IPVLAN slave interface on top of the VM’s master
 interface and pass this slave interface to the Container netns.

 · If using “IPVLAN in VM” use case, the VM’s port ID needs to
 be identified so we can associate the additional IPVLAN addresses with the
 port. This can be achieved by querying Neutron’s show-port function and
 passing the VMs IP address.

 · If using “IPVLAN in VM” use case, Kuryr should associate the
 additional IPVLAN addresses with the VMs port. This can be achieved using
 Neutron’s allowed-address-pairs flag in the port-update function. We
 intend to make use of Kuryr’s existing IPAM functionality to request these
 IPs from Neutron.



 *Asks*

 We wish to discuss the pros and cons.

 For example, containers exposure as proper neutron entities and the
 utility of neutron’s allowed-address-pairs is not yet well understood.



 We also wish to understand if this approach is acceptable for kuryr?





 *EPA*

 The Enhanced Platform Awareness initiative is a continuous program to
 enable fine-tuning of the platform for virtualized network functions.

 This is done by exposing the processor and platform capabilities
 through the management and orchestration layers.

When a virtual network function is instantiated by an Enhanced Platform
Awareness enabled orchestrator, the application requirements can be more
efficiently matched with the platform capabilities.

Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Liping Mao (limao)
Hi Ivan and Gary,

Maybe we can use macvlan, as ipvlan needs a very new kernel.
allowed-address-pairs can also allow different MACs in a VM.
Do we consider macvlan here? Thanks.

Regards,
Liping Mao

From: Liping Mao mailto:li...@cisco.com>>
Reply-To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, 13 September 2016, 9:09 PM
To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

Hi Gary,

I mean maybe that can be one choice in my mind.

A security group applies per Neutron port. In this case, all the Docker
containers on one VM will share one Neutron port (if I understand correctly),
so they will share the security group on that port. It is not one security
group per container, so I am not sure how to use security groups in this case.

Regards,
Liping Mao

On 13 Sep 2016, at 20:31, Loughnane, Gary
mailto:gary.loughn...@intel.com>> wrote:

Hi Liping,

Thank you for the feedback!

Do you mean to have disabled security groups as an optional configuration for 
Kuryr?
Do you have any opinion on the consequences/acceptability of disabling SG?

Regards,
Gary

From: Liping Mao (limao) [mailto:li...@cisco.com]
Sent: Tuesday, September 13, 2016 12:56 PM
To: OpenStack Development Mailing List (not for usage questions) 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

Hi Ivan,

It sounds cool!

for security group and allowed address pair,
Maybe we can disable port-security, because all the Docker containers in one VM
will share one security group on the VM port. I'm not sure how to use a
security group per container; disabling port-security may be one choice. Then
allowed-address-pairs would not be needed in this case.


Regards,
Liping Mao

On 12 Sep 2016, at 19:31, Coughlan, Ivan
mailto:ivan.cough...@intel.com>> wrote:

Overview
Kuryr proposes to address the issues of double encapsulation and exposure of 
containers as neutron entities when containers are running within VMs.
As an alternative to the vlan-aware-vms and use of ovs within the VM, we 
propose to:

-  Use allowed-address-pairs configuration for the VM neutron port

-  Use IPVLAN for wiring the Containers within VM

In this way:

-  Achieve efficient data path to container within VM

-  Better leverage OpenStack EPA(Enhanced Platform Awareness) features 
to accelerate the data path (more details below)

-  Mitigate the risk of vlan-aware-vms not making neutron in time

-  Provide a solution that works on existing and previous openstack 
releases

This work should be done in a way permitting the user to optionally select this 
feature.


Required Changes
The four main changes we have identified in the current kuryr codebase are as 
follows:

· Introduce an option of enabling “IPVLAN in VM” use case. This can be
achieved by using a config file option or possibly passing a command line 
argument. The IPVLAN master interface must also be identified.

· If using “IPVLAN in VM” use case, Kuryr should no longer create a new
port in Neutron or the associated VEth pairs. Instead, Kuryr will create a new 
IPVLAN slave interface on top of the VM’s master interface and pass this slave 
interface to the Container netns.

・ If using “IPVLAN in VM” use case, the VM’s port ID needs to be 
identified so we can associate the additional IPVLAN addresses with the port. 
This can be achieved by querying Neutron’s show-port function and passing the 
VMs IP address.

・ If using “IPVLAN in VM” use case, Kuryr should associate the 
additional IPVLAN addresses with the VMs port. This can be achieved using 
Neutron’s allowed-address-pairs flag in the port-update function. We intend to 
make use of Kuryr’s existing IPAM functionality to request these IPs from 
Neutron.
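To make the last two changes concrete, here is a minimal sketch (assuming python-neutronclient for the actual API calls; the helper names are ours, not existing Kuryr code) of finding the VM port by its IP and building the allowed-address-pairs update body:

```python
# Hypothetical sketch of the two Neutron interactions described above.
# find_vm_port_id scans the ports returned by Neutron's port listing for
# one whose fixed IPs contain the VM's address; build_allowed_pairs_body
# appends the container IPs as allowed-address-pairs on that port.

def find_vm_port_id(ports, vm_ip):
    """Return the ID of the port owning vm_ip, or None if not found."""
    for port in ports:
        for fixed_ip in port.get('fixed_ips', []):
            if fixed_ip.get('ip_address') == vm_ip:
                return port['id']
    return None

def build_allowed_pairs_body(existing_pairs, container_ips):
    """Merge the container IPs into the port's allowed-address-pairs."""
    pairs = list(existing_pairs)
    pairs.extend({'ip_address': ip} for ip in container_ips)
    return {'port': {'allowed_address_pairs': pairs}}

# With a real client this would be roughly:
#   port_id = find_vm_port_id(neutron.list_ports()['ports'], vm_ip)
#   neutron.update_port(port_id, build_allowed_pairs_body(old_pairs, new_ips))
```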

Asks
We wish to discuss the pros and cons.
For example, containers exposure as proper neutron entities and the utility of 
neutron’s allowed-address-pairs is not yet well understood.

We also wish to understand whether this approach is acceptable for kuryr.


EPA
The Enhanced Platform Awareness initiative is a continuous program to enable 
fine-tuning of the platform for virtualized network functions.
This is done by exposing the processor and platform capabilities through the 
management and orchestration layers.
When a virtual network function is instantiated by an Enhanced Platform 
Awareness enabled orchestrator, the application requirements can be more 
efficiently matched with the platform capabilities.
http://itpeernetwork.intel.com/openstack-kilo-release-is-shaping-up-to-be-a-milestone-for-enhanced-platform-awareness/
https://networkbuilders.intel.com/docs/OpenStack_EPA.pdf
https://www.brighttalk.com/webcast/12229/181563/epa-features-in-openstack-kilo


Regards,
Ivan….

--
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Off

Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Hongbin Lu
Sounds good! Thanks for the clarification.

Best regards,
Hongbin

On Tue, Sep 13, 2016 at 1:43 PM, Antoni Segura Puimedon 
wrote:

> On Tue, Sep 13, 2016 at 5:05 PM, Hongbin Lu  wrote:
> >
> >
> > On Tue, Sep 13, 2016 at 2:10 AM, Vikas Choudhary
> >  wrote:
> >>
> >>
> >>
> >> On Mon, Sep 12, 2016 at 9:17 PM, Hongbin Lu 
> wrote:
> >>>
> >>> Ivan,
> >>>
> >>> Thanks for the proposal. From Magnum's point of view, this proposal
> >>> doesn't seem to require to store neutron/rabbitmq credentials in
> tenant VMs
> >>> which is more desirable. I am looking forward to the PoC.
> >>
> >>
> >> Hongbin, can you please elaborate on how this will not require storing
> >> neutron credentials?
> >> For example, in the libnetwork case, neutron commands like "show_port" and
> >> "update_port" will still need to be invoked from inside the VM.
> >
> >
> > In a typical COE cluster, there are master nodes and worker (minion/slave)
> > nodes. Regarding credentials, the following is optimal:
> > * Avoid storing credentials on worker nodes. If credentials have to be
> > stored, move them to master nodes if we can (containers run on worker
> > nodes, so credentials stored there carry a higher risk). A question for
> > you: do neutron commands like "show_port" and "update_port" need to be
> > invoked from worker nodes or master nodes?
> > * If credentials have to be stored, scope them with least privilege
> > (Magnum uses Keystone trust for this purpose).
>
> I think that with the ipvlan proposal you can probably do without calling
> those two. If I understand the proposal correctly, the binding on the VM,
> taking libnetwork as an example, would be:
>
> 1. docker sends a request to kuryr-libnetwork running in container-in-VM
> mode.
> 2. kuryr-libnetwork forwards the request to a kuryr daemon that has the
> necessary credentials to talk to neutron (it could run either on the
> master node or on the compute node, just as the dhcp agent does, i.e.,
> with one foot on the VM network and one on the underlay).
> 3. The kuryr daemon makes the allowed-address-pair requests to Neutron
> and returns the result to kuryr-libnetwork in the VM, at which point the
> VM port can already send and receive data for the container.
> 4. kuryr-libnetwork in the VM creates an ipvlan virtual device and
> assigns it the IP returned by the kuryr daemon.
>
> >
> >>
> >>
> >> Overall I liked this approach given its simplicity over vlan-aware-vms.
> >>
> >> -VikasC
> >>>
> >>>
> >>> Best regards,
> >>> Hongbin
> >>>
> >>> On Mon, Sep 12, 2016 at 7:29 AM, Coughlan, Ivan <
> ivan.cough...@intel.com>
> >>> wrote:
> 
> 
> 
>  Overview
> 
>  Kuryr proposes to address the issues of double encapsulation and
>  exposure of containers as neutron entities when containers are running
>  within VMs.
> 
>  As an alternative to the vlan-aware-vms and use of ovs within the VM,
> we
>  propose to:
> 
>  -  Use allowed-address-pairs configuration for the VM neutron
>  port
> 
>  -  Use IPVLAN for wiring the Containers within VM
> 
> 
> 
>  In this way:
> 
>  -  Achieve efficient data path to container within VM
> 
>  -  Better leverage OpenStack EPA(Enhanced Platform Awareness)
>  features to accelerate the data path (more details below)
> 
>  -  Mitigate the risk of vlan-aware-vms not making neutron in
>  time
> 
>  -  Provide a solution that works on existing and previous
>  openstack releases
> 
> 
> 
>  This work should be done in a way permitting the user to optionally
>  select this feature.
> 
> 
> 
> 
> 
>  Required Changes
> 
>  The four main changes we have identified in the current kuryr codebase
>  are as follows:
> 
>  · Introduce an option of enabling “IPVLAN in VM” use case.
> This
>  can be achieved by using a config file option or possibly passing a
> command
>  line argument. The IPVLAN master interface must also be identified.
> 
>  · If using “IPVLAN in VM” use case, Kuryr should no longer
>  create a new port in Neutron or the associated VEth pairs. Instead,
> Kuryr
>  will create a new IPVLAN slave interface on top of the VM’s master
> interface
>  and pass this slave interface to the Container netns.
> 
>  · If using “IPVLAN in VM” use case, the VM’s port ID needs to
> be
>  identified so we can associate the additional IPVLAN addresses with
> the
>  port. This can be achieved by querying Neutron’s show-port function
> and
>  passing the VMs IP address.
> 
>  · If using “IPVLAN in VM” use case, Kuryr should associate the
>  additional IPVLAN addresses with the VMs port. This can be achieved
> using
>  Neutron’s allowed-address-pairs flag in the port-update function. We
> intend
>  to make use of Kuryr’s existing 

Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Antoni Segura Puimedon
On Tue, Sep 13, 2016 at 5:05 PM, Hongbin Lu  wrote:
>
>
> On Tue, Sep 13, 2016 at 2:10 AM, Vikas Choudhary
>  wrote:
>>
>>
>>
>> On Mon, Sep 12, 2016 at 9:17 PM, Hongbin Lu  wrote:
>>>
>>> Ivan,
>>>
>>> Thanks for the proposal. From Magnum's point of view, this proposal
>>> doesn't seem to require to store neutron/rabbitmq credentials in tenant VMs
>>> which is more desirable. I am looking forward to the PoC.
>>
>>
> >> Hongbin, can you please elaborate on how this will not require storing
> >> neutron credentials?
> >> For example, in the libnetwork case, neutron commands like "show_port" and
> >> "update_port" will still need to be invoked from inside the VM.
>
>
> In a typical COE cluster, there are master nodes and worker (minion/slave)
> nodes. Regarding credentials, the following is optimal:
> * Avoid storing credentials on worker nodes. If credentials have to be
> stored, move them to master nodes if we can (containers run on worker
> nodes, so credentials stored there carry a higher risk). A question for
> you: do neutron commands like "show_port" and "update_port" need to be
> invoked from worker nodes or master nodes?
> * If credentials have to be stored, scope them with least privilege
> (Magnum uses Keystone trust for this purpose).

I think that with the ipvlan proposal you can probably do without calling
those two. If I understand the proposal correctly, the binding on the VM,
taking libnetwork as an example, would be:

1. docker sends a request to kuryr-libnetwork running in container-in-VM mode.
2. kuryr-libnetwork forwards the request to a kuryr daemon that has the
necessary credentials to talk to neutron (it could run either on the master
node or on the compute node, just as the dhcp agent does, i.e., with one foot
on the VM network and one on the underlay).
3. The kuryr daemon makes the allowed-address-pair requests to Neutron and
returns the result to kuryr-libnetwork in the VM, at which point the VM port
can already send and receive data for the container.
4. kuryr-libnetwork in the VM creates an ipvlan virtual device and assigns it
the IP returned by the kuryr daemon.
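Step 4 above can be sketched as follows. This merely composes the usual iproute2 commands (interface and namespace names are illustrative) and does not claim to be the kuryr-libnetwork implementation:

```python
def ipvlan_wiring_cmds(master, slave, netns, ip_cidr):
    """Compose the iproute2 commands that create an ipvlan slave on the
    VM's master interface, move it into the container netns, and assign
    the IP returned by the kuryr daemon."""
    return [
        f"ip link add link {master} name {slave} type ipvlan mode l2",
        f"ip link set {slave} netns {netns}",
        f"ip netns exec {netns} ip addr add {ip_cidr} dev {slave}",
        f"ip netns exec {netns} ip link set {slave} up",
    ]

# Example wiring for a container getting 192.168.0.15/24 via master ens3:
for cmd in ipvlan_wiring_cmds("ens3", "ipvl0", "cont-ns", "192.168.0.15/24"):
    print(cmd)
```

Running the composed commands would require root inside the VM; in practice the device would be created with a netlink library rather than shelling out.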

>
>>
>>
>> Overall I liked this approach given its simplicity over vlan-aware-vms.
>>
>> -VikasC
>>>
>>>
>>> Best regards,
>>> Hongbin
>>>
>>> On Mon, Sep 12, 2016 at 7:29 AM, Coughlan, Ivan 
>>> wrote:



 Overview

 Kuryr proposes to address the issues of double encapsulation and
 exposure of containers as neutron entities when containers are running
 within VMs.

 As an alternative to the vlan-aware-vms and use of ovs within the VM, we
 propose to:

 -  Use allowed-address-pairs configuration for the VM neutron
 port

 -  Use IPVLAN for wiring the Containers within VM



 In this way:

 -  Achieve efficient data path to container within VM

 -  Better leverage OpenStack EPA(Enhanced Platform Awareness)
 features to accelerate the data path (more details below)

 -  Mitigate the risk of vlan-aware-vms not making neutron in
 time

 -  Provide a solution that works on existing and previous
 openstack releases



 This work should be done in a way permitting the user to optionally
 select this feature.





 Required Changes

 The four main changes we have identified in the current kuryr codebase
 are as follows:

 · Introduce an option of enabling “IPVLAN in VM” use case. This
 can be achieved by using a config file option or possibly passing a command
 line argument. The IPVLAN master interface must also be identified.

 · If using “IPVLAN in VM” use case, Kuryr should no longer
 create a new port in Neutron or the associated VEth pairs. Instead, Kuryr
 will create a new IPVLAN slave interface on top of the VM’s master 
 interface
 and pass this slave interface to the Container netns.

 · If using “IPVLAN in VM” use case, the VM’s port ID needs to be
 identified so we can associate the additional IPVLAN addresses with the
 port. This can be achieved by querying Neutron’s show-port function and
 passing the VMs IP address.

 · If using “IPVLAN in VM” use case, Kuryr should associate the
 additional IPVLAN addresses with the VMs port. This can be achieved using
 Neutron’s allowed-address-pairs flag in the port-update function. We intend
 to make use of Kuryr’s existing IPAM functionality to request these IPs 
 from
 Neutron.



 Asks

 We wish to discuss the pros and cons.

 For example, containers exposure as proper neutron entities and the
 utility of neutron’s allowed-address-pairs is not yet well understood.



 We also wish to understand if this approach is acceptable for kuryr?





 EPA

 The Enhanced Platform Awarene

Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Hongbin Lu
On Tue, Sep 13, 2016 at 2:10 AM, Vikas Choudhary  wrote:

>
>
> On Mon, Sep 12, 2016 at 9:17 PM, Hongbin Lu  wrote:
>
>> Ivan,
>>
>> Thanks for the proposal. From Magnum's point of view, this proposal
>> doesn't seem to require to store neutron/rabbitmq credentials in tenant VMs
>> which is more desirable. I am looking forward to the PoC.
>>
>
> Hongbin, can you please elaborate on how this will not require storing
> neutron credentials?
> For example, in the libnetwork case, neutron commands like "show_port" and
> "update_port" will still need to be invoked from inside the VM.
>

In a typical COE cluster, there are master nodes and worker (minion/slave)
nodes. Regarding credentials, the following is optimal:
* Avoid storing credentials on worker nodes. If credentials have to be
stored, move them to master nodes if we can (containers run on worker nodes,
so credentials stored there carry a higher risk). A question for you: do
neutron commands like "show_port" and "update_port" need to be invoked from
worker nodes or master nodes?
* If credentials have to be stored, scope them with least privilege (Magnum
uses Keystone trust for this purpose).


>
> Overall I liked this approach given its simplicity over vlan-aware-vms.
>
> -VikasC
>
>>
>> Best regards,
>> Hongbin
>>
>> On Mon, Sep 12, 2016 at 7:29 AM, Coughlan, Ivan 
>> wrote:
>>
>>>
>>>
>>> *Overview*
>>>
>>> Kuryr proposes to address the issues of double encapsulation and
>>> exposure of containers as neutron entities when containers are running
>>> within VMs.
>>>
>>> As an alternative to the vlan-aware-vms and use of ovs within the VM, we
>>> propose to:
>>>
>>> -  Use allowed-address-pairs configuration for the VM neutron
>>> port
>>>
>>> -  Use IPVLAN for wiring the Containers within VM
>>>
>>>
>>>
>>> In this way:
>>>
>>> -  Achieve efficient data path to container within VM
>>>
>>> -  Better leverage OpenStack EPA(Enhanced Platform Awareness)
>>> features to accelerate the data path (more details below)
>>>
>>> -  Mitigate the risk of vlan-aware-vms not making neutron in
>>> time
>>>
>>> -  Provide a solution that works on existing and previous
>>> openstack releases
>>>
>>>
>>>
>>> This work should be done in a way permitting the user to optionally
>>> select this feature.
>>>
>>>
>>>
>>>
>>> *Required Changes*
>>>
>>> The four main changes we have identified in the current kuryr codebase
>>> are as follows:
>>>
>>> · Introduce an option of enabling “IPVLAN in VM” use case. This
>>> can be achieved by using a config file option or possibly passing a command
>>> line argument. The IPVLAN master interface must also be identified.
>>>
>>> · If using “IPVLAN in VM” use case, Kuryr should no longer
>>> create a new port in Neutron or the associated VEth pairs. Instead, Kuryr
>>> will create a new IPVLAN slave interface on top of the VM’s master
>>> interface and pass this slave interface to the Container netns.
>>>
>>> · If using “IPVLAN in VM” use case, the VM’s port ID needs to
>>> be identified so we can associate the additional IPVLAN addresses with the
>>> port. This can be achieved by querying Neutron’s show-port function and
>>> passing the VMs IP address.
>>>
>>> · If using “IPVLAN in VM” use case, Kuryr should associate the
>>> additional IPVLAN addresses with the VMs port. This can be achieved using
>>> Neutron’s allowed-address-pairs flag in the port-update function. We
>>> intend to make use of Kuryr’s existing IPAM functionality to request these
>>> IPs from Neutron.
>>>
>>>
>>>
>>> *Asks*
>>>
>>> We wish to discuss the pros and cons.
>>>
>>> For example, containers exposure as proper neutron entities and the
>>> utility of neutron’s allowed-address-pairs is not yet well understood.
>>>
>>>
>>>
>>> We also wish to understand if this approach is acceptable for kuryr?
>>>
>>>
>>>
>>>
>>>
>>> *EPA*
>>>
>>> The Enhanced Platform Awareness initiative is a continuous program to
>>> enable fine-tuning of the platform for virtualized network functions.
>>>
>>> This is done by exposing the processor and platform capabilities through
>>> the management and orchestration layers.
>>>
>>> When a virtual network function is instantiated by an Enhanced Platform
>>> Awareness enabled orchestrator, the application requirements can be more
>>> efficiently matched with the platform capabilities.
>>>
>>> http://itpeernetwork.intel.com/openstack-kilo-release-is-sha
>>> ping-up-to-be-a-milestone-for-enhanced-platform-awareness/
>>>
>>> https://networkbuilders.intel.com/docs/OpenStack_EPA.pdf
>>>
>>> https://www.brighttalk.com/webcast/12229/181563/epa-features
>>> -in-openstack-kilo
>>>
>>>
>>>
>>>
>>>
>>> Regards,
>>>
>>> Ivan….
>>>
>>> --
>>> Intel Research and Development Ireland Limited
>>> Registered in Ireland
>>> Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
>>> Registered Number: 308263
>>>
>>> This e-

Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Liping Mao (limao)
Hi Gary,

I mean that maybe it could be one choice, in my mind.

A security group applies to a neutron port. In this case, all the Docker
containers on one VM will share one neutron port (if I understand correctly),
so they will share the security group on that port; it is not one security
group per container. I'm not sure how to use security groups in this case.

Regards,
Liping Mao

On 13 September 2016, at 20:31, Loughnane, Gary 
mailto:gary.loughn...@intel.com>> wrote:

Hi Liping,

Thank you for the feedback!

Do you mean to have disabled security groups as an optional configuration for 
Kuryr?
Do you have any opinion on the consequences/acceptability of disabling SG?

Regards,
Gary

From: Liping Mao (limao) [mailto:li...@cisco.com]
Sent: Tuesday, September 13, 2016 12:56 PM
To: OpenStack Development Mailing List (not for usage questions) 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

Hi Ivan,

It sounds cool!

For security groups and allowed address pairs:
maybe we can disable port-security, because all the Docker containers in one
VM will share the security group on the VM port. I'm not sure how to apply a
security group per container, so disabling port-security could be one option;
then allowed-address-pairs would not be needed in this case.


Regards,
Liping Mao

On 12 September 2016, at 19:31, Coughlan, Ivan 
mailto:ivan.cough...@intel.com>> wrote:

Overview
Kuryr proposes to address the issues of double encapsulation and exposure of 
containers as neutron entities when containers are running within VMs.
As an alternative to the vlan-aware-vms and use of ovs within the VM, we 
propose to:

-  Use allowed-address-pairs configuration for the VM neutron port

-  Use IPVLAN for wiring the Containers within VM

In this way:

-  Achieve efficient data path to container within VM

-  Better leverage OpenStack EPA(Enhanced Platform Awareness) features 
to accelerate the data path (more details below)

-  Mitigate the risk of vlan-aware-vms not making neutron in time

-  Provide a solution that works on existing and previous openstack 
releases

This work should be done in a way permitting the user to optionally select this 
feature.


Required Changes
The four main changes we have identified in the current kuryr codebase are as 
follows:

* Introduce an option of enabling "IPVLAN in VM" use case. This can be 
achieved by using a config file option or possibly passing a command line 
argument. The IPVLAN master interface must also be identified.

* If using "IPVLAN in VM" use case, Kuryr should no longer create a new 
port in Neutron or the associated VEth pairs. Instead, Kuryr will create a new 
IPVLAN slave interface on top of the VM's master interface and pass this slave 
interface to the Container netns.

* If using "IPVLAN in VM" use case, the VM's port ID needs to be 
identified so we can associate the additional IPVLAN addresses with the port. 
This can be achieved by querying Neutron's show-port function and passing the 
VMs IP address.

* If using "IPVLAN in VM" use case, Kuryr should associate the 
additional IPVLAN addresses with the VMs port. This can be achieved using 
Neutron's allowed-address-pairs flag in the port-update function. We intend to 
make use of Kuryr's existing IPAM functionality to request these IPs from 
Neutron.

Asks
We wish to discuss the pros and cons.
For example, containers exposure as proper neutron entities and the utility of 
neutron's allowed-address-pairs is not yet well understood.

We also wish to understand if this approach is acceptable for kuryr?


EPA
The Enhanced Platform Awareness initiative is a continuous program to enable 
fine-tuning of the platform for virtualized network functions.
This is done by exposing the processor and platform capabilities through the 
management and orchestration layers.
When a virtual network function is instantiated by an Enhanced Platform 
Awareness enabled orchestrator, the application requirements can be more 
efficiently matched with the platform capabilities.
http://itpeernetwork.intel.com/openstack-kilo-release-is-shaping-up-to-be-a-milestone-for-enhanced-platform-awareness/
https://networkbuilders.intel.com/docs/OpenStack_EPA.pdf
https://www.brighttalk.com/webcast/12229/181563/epa-features-in-openstack-kilo


Regards,
Ivan

--
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263

This e-mail and any attachments may contain confidential material for the sole 
use of the intended recipient(s). Any review or distribution by others is 
strictly prohibited. If you are not the intended recipient, please contact the 
sender and delete all copies.
__

Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Loughnane, Gary
Hi Liping,

Thank you for the feedback!

Do you mean to have disabled security groups as an optional configuration for 
Kuryr?
Do you have any opinion on the consequences/acceptability of disabling SG?

Regards,
Gary

From: Liping Mao (limao) [mailto:li...@cisco.com]
Sent: Tuesday, September 13, 2016 12:56 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

Hi Ivan,

It sounds cool!

For security groups and allowed address pairs:
maybe we can disable port-security, because all the Docker containers in one
VM will share the security group on the VM port. I'm not sure how to apply a
security group per container, so disabling port-security could be one option;
then allowed-address-pairs would not be needed in this case.


Regards,
Liping Mao

On 12 September 2016, at 19:31, Coughlan, Ivan 
mailto:ivan.cough...@intel.com>> wrote:

Overview
Kuryr proposes to address the issues of double encapsulation and exposure of 
containers as neutron entities when containers are running within VMs.
As an alternative to the vlan-aware-vms and use of ovs within the VM, we 
propose to:

-  Use allowed-address-pairs configuration for the VM neutron port

-  Use IPVLAN for wiring the Containers within VM

In this way:

-  Achieve efficient data path to container within VM

-  Better leverage OpenStack EPA(Enhanced Platform Awareness) features 
to accelerate the data path (more details below)

-  Mitigate the risk of vlan-aware-vms not making neutron in time

-  Provide a solution that works on existing and previous openstack 
releases

This work should be done in a way permitting the user to optionally select this 
feature.


Required Changes
The four main changes we have identified in the current kuryr codebase are as 
follows:

* Introduce an option of enabling “IPVLAN in VM” use case. This can be 
achieved by using a config file option or possibly passing a command line 
argument. The IPVLAN master interface must also be identified.

* If using “IPVLAN in VM” use case, Kuryr should no longer create a new 
port in Neutron or the associated VEth pairs. Instead, Kuryr will create a new 
IPVLAN slave interface on top of the VM’s master interface and pass this slave 
interface to the Container netns.

* If using “IPVLAN in VM” use case, the VM’s port ID needs to be 
identified so we can associate the additional IPVLAN addresses with the port. 
This can be achieved by querying Neutron’s show-port function and passing the 
VMs IP address.

* If using “IPVLAN in VM” use case, Kuryr should associate the 
additional IPVLAN addresses with the VMs port. This can be achieved using 
Neutron’s allowed-address-pairs flag in the port-update function. We intend to 
make use of Kuryr’s existing IPAM functionality to request these IPs from 
Neutron.

Asks
We wish to discuss the pros and cons.
For example, containers exposure as proper neutron entities and the utility of 
neutron’s allowed-address-pairs is not yet well understood.

We also wish to understand if this approach is acceptable for kuryr?


EPA
The Enhanced Platform Awareness initiative is a continuous program to enable 
fine-tuning of the platform for virtualized network functions.
This is done by exposing the processor and platform capabilities through the 
management and orchestration layers.
When a virtual network function is instantiated by an Enhanced Platform 
Awareness enabled orchestrator, the application requirements can be more 
efficiently matched with the platform capabilities.
http://itpeernetwork.intel.com/openstack-kilo-release-is-shaping-up-to-be-a-milestone-for-enhanced-platform-awareness/
https://networkbuilders.intel.com/docs/OpenStack_EPA.pdf
https://www.brighttalk.com/webcast/12229/181563/epa-features-in-openstack-kilo


Regards,
Ivan….

--
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263

This e-mail and any attachments may contain confidential material for the sole 
use of the intended recipient(s). Any review or distribution by others is 
strictly prohibited. If you are not the intended recipient, please contact the 
sender and delete all copies.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org<mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
--
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263


This e-mail and any attachments may contain confiden

Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Liping Mao (limao)
Hi Ivan,

It sounds cool!

For security groups and allowed address pairs:
maybe we can disable port-security, because all the Docker containers in one
VM will share the security group on the VM port. I'm not sure how to apply a
security group per container, so disabling port-security could be one option;
then allowed-address-pairs would not be needed in this case.


Regards,
Liping Mao

On 12 September 2016, at 19:31, Coughlan, Ivan 
mailto:ivan.cough...@intel.com>> wrote:


Overview
Kuryr proposes to address the issues of double encapsulation and exposure of 
containers as neutron entities when containers are running within VMs.
As an alternative to the vlan-aware-vms and use of ovs within the VM, we 
propose to:

-  Use allowed-address-pairs configuration for the VM neutron port

-  Use IPVLAN for wiring the Containers within VM

In this way:

-  Achieve efficient data path to container within VM

-  Better leverage OpenStack EPA(Enhanced Platform Awareness) features 
to accelerate the data path (more details below)

-  Mitigate the risk of vlan-aware-vms not making neutron in time

-  Provide a solution that works on existing and previous openstack 
releases

This work should be done in a way permitting the user to optionally select this 
feature.


Required Changes
The four main changes we have identified in the current kuryr codebase are as 
follows:

* Introduce an option of enabling "IPVLAN in VM" use case. This can be 
achieved by using a config file option or possibly passing a command line 
argument. The IPVLAN master interface must also be identified.

* If using "IPVLAN in VM" use case, Kuryr should no longer create a new 
port in Neutron or the associated VEth pairs. Instead, Kuryr will create a new 
IPVLAN slave interface on top of the VM's master interface and pass this slave 
interface to the Container netns.

* If using "IPVLAN in VM" use case, the VM's port ID needs to be 
identified so we can associate the additional IPVLAN addresses with the port. 
This can be achieved by querying Neutron's show-port function and passing the 
VMs IP address.

* If using "IPVLAN in VM" use case, Kuryr should associate the 
additional IPVLAN addresses with the VMs port. This can be achieved using 
Neutron's allowed-address-pairs flag in the port-update function. We intend to 
make use of Kuryr's existing IPAM functionality to request these IPs from 
Neutron.

Asks
We wish to discuss the pros and cons.
For example, containers exposure as proper neutron entities and the utility of 
neutron's allowed-address-pairs is not yet well understood.

We also wish to understand if this approach is acceptable for kuryr?


EPA
The Enhanced Platform Awareness initiative is a continuous program to enable 
fine-tuning of the platform for virtualized network functions.
This is done by exposing the processor and platform capabilities through the 
management and orchestration layers.
When a virtual network function is instantiated by an Enhanced Platform 
Awareness enabled orchestrator, the application requirements can be more 
efficiently matched with the platform capabilities.
http://itpeernetwork.intel.com/openstack-kilo-release-is-shaping-up-to-be-a-milestone-for-enhanced-platform-awareness/
https://networkbuilders.intel.com/docs/OpenStack_EPA.pdf
https://www.brighttalk.com/webcast/12229/181563/epa-features-in-openstack-kilo


Regards,
Ivan

--
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263

This e-mail and any attachments may contain confidential material for the sole 
use of the intended recipient(s). Any review or distribution by others is 
strictly prohibited. If you are not the intended recipient, please contact the 
sender and delete all copies.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-12 Thread Vikas Choudhary
On Mon, Sep 12, 2016 at 9:17 PM, Hongbin Lu  wrote:

> Ivan,
>
> Thanks for the proposal. From Magnum's point of view, this proposal
> doesn't seem to require to store neutron/rabbitmq credentials in tenant VMs
> which is more desirable. I am looking forward to the PoC.
>

Hongbin, can you please elaborate on how this will not require storing
neutron credentials?
For example, in the libnetwork case, neutron commands like "show_port" and
"update_port" will still need to be invoked from inside the VM.

Overall I liked this approach given its simplicity over vlan-aware-vms.

-VikasC

> [...]

Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-12 Thread Hongbin Lu
Ivan,

Thanks for the proposal. From Magnum's point of view, this proposal doesn't
seem to require storing neutron/rabbitmq credentials in tenant VMs, which
is more desirable. I am looking forward to the PoC.

Best regards,
Hongbin

On Mon, Sep 12, 2016 at 7:29 AM, Coughlan, Ivan 
wrote:

> [...]


Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-12 Thread Irena Berezovsky
Hi Ivan,
The approach looks very interesting and seems a reasonable effort to make it
work with kuryr as an alternative to the 'VLAN aware VM' approach.
Having containers presented as neutron entities has its value, especially for
visibility/monitoring (i.e. mirroring) and security (i.e. applying security
groups).
Still, I do think that for the short term, this approach is a good way to
provide Container-in-VM support.
I think it's worth submitting a devref to kuryr to move forward.
BR,
Irena

On Mon, Sep 12, 2016 at 2:29 PM, Coughlan, Ivan 
wrote:

> [...]


Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-12 Thread Antoni Segura Puimedon
On Mon, Sep 12, 2016 at 1:42 PM, Antoni Segura Puimedon
 wrote:
> On Mon, Sep 12, 2016 at 1:29 PM, Coughlan, Ivan  
> wrote:
>> [...]
>> We also wish to understand if this approach is acceptable for kuryr?

My vote is that it is acceptable to work on introducing such a mode to
kuryr-libnetwork (and later to kuryr-kubernetes).

Could we get a link to the current PoC and set a meeting for an
upstreaming plan?


>
> Thanks Ivan, adding discussion about this to the weekly IRC meeting. Maybe 
> it's
> a bit tight for all the participants to get comfortable enough with
> the specifics
> to take a decision today, but let's bring the topic to the table and give an
> answer during this week.
>
>> [...]


Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-12 Thread Antoni Segura Puimedon
On Mon, Sep 12, 2016 at 1:29 PM, Coughlan, Ivan  wrote:
>
> [...]
> We also wish to understand if this approach is acceptable for kuryr?

Thanks Ivan, adding discussion about this to the weekly IRC meeting. Maybe
it's a bit tight for all the participants to get comfortable enough with the
specifics to make a decision today, but let's bring the topic to the table
and give an answer during this week.

> [...]


[openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-12 Thread Coughlan, Ivan

Overview
Kuryr proposes to address the issues of double encapsulation and exposure of 
containers as neutron entities when containers are running within VMs.
As an alternative to the vlan-aware-vms and use of ovs within the VM, we 
propose to:

-  Use allowed-address-pairs configuration for the VM neutron port

-  Use IPVLAN for wiring the Containers within VM

In this way:

-  Achieve efficient data path to container within VM

-  Better leverage OpenStack EPA (Enhanced Platform Awareness) features 
to accelerate the data path (more details below)

-  Mitigate the risk of vlan-aware-vms not making neutron in time

-  Provide a solution that works on existing and previous openstack 
releases

This work should be done in a way permitting the user to optionally select this 
feature.
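To make the feature optional, the driver only needs a flag plus the name of the IPVLAN master interface. A minimal sketch, assuming a hypothetical [ipvlan] config section (Kuryr's real options would be defined with oslo.config, not configparser):

```python
import configparser

# Hypothetical config; the section and option names are illustrative.
SAMPLE_CONF = """
[ipvlan]
enabled = true
master_interface = ens3
"""

def ipvlan_settings(text):
    """Return (enabled, master_iface) for the 'IPVLAN in VM' mode."""
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    enabled = cfg.getboolean("ipvlan", "enabled", fallback=False)
    master = cfg.get("ipvlan", "master_interface", fallback=None)
    # The master interface must be identified whenever the mode is on.
    if enabled and not master:
        raise ValueError("IPVLAN mode requires a master interface")
    return enabled, master

print(ipvlan_settings(SAMPLE_CONF))  # (True, 'ens3')
```

When the flag is off, the driver would fall back to the existing veth/Neutron-port path unchanged.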


Required Changes
The four main changes we have identified in the current kuryr codebase are as 
follows:

* Introduce an option of enabling "IPVLAN in VM" use case. This can be 
achieved by using a config file option or possibly passing a command line 
argument. The IPVLAN master interface must also be identified.

* If using "IPVLAN in VM" use case, Kuryr should no longer create a new 
port in Neutron or the associated VEth pairs. Instead, Kuryr will create a new 
IPVLAN slave interface on top of the VM's master interface and pass this slave 
interface to the Container netns.

* If using "IPVLAN in VM" use case, the VM's port ID needs to be 
identified so we can associate the additional IPVLAN addresses with the port. 
This can be achieved by querying Neutron's show-port function and passing the 
VM's IP address.

* If using "IPVLAN in VM" use case, Kuryr should associate the 
additional IPVLAN addresses with the VM's port. This can be achieved using 
Neutron's allowed-address-pairs flag in the port-update function. We intend to 
make use of Kuryr's existing IPAM functionality to request these IPs from 
Neutron.
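The last two changes reduce to: locate the VM's Neutron port by its fixed IP, then build a port-update body that appends each container address to allowed_address_pairs. A pure-data sketch of that logic (helper names are illustrative, not Kuryr's actual API; since IPVLAN slaves share the master's MAC, the VM port's MAC is reused for each pair):

```python
def find_port_by_ip(ports, vm_ip):
    """Mimic filtering a Neutron list-ports result by a fixed IP address."""
    for port in ports:
        if any(f["ip_address"] == vm_ip for f in port.get("fixed_ips", [])):
            return port
    raise LookupError("no port with fixed IP %s" % vm_ip)

def add_address_pair(port, container_ip):
    """Return a port-update body with container_ip merged in, no duplicates."""
    pairs = list(port.get("allowed_address_pairs", []))
    if not any(p["ip_address"] == container_ip for p in pairs):
        # IPVLAN slaves inherit the master's MAC, so reuse the port's MAC.
        pairs.append({"ip_address": container_ip,
                      "mac_address": port["mac_address"]})
    return {"port": {"allowed_address_pairs": pairs}}

ports = [{
    "id": "abc-123",
    "mac_address": "fa:16:3e:aa:bb:cc",
    "fixed_ips": [{"ip_address": "192.168.0.5"}],
    "allowed_address_pairs": [],
}]
vm_port = find_port_by_ip(ports, "192.168.0.5")
body = add_address_pair(vm_port, "192.168.0.15")
# body would then be passed to neutron's port-update call for vm_port["id"]
```

The actual implementation would issue the show-port and update-port calls through the Neutron client; this sketch only shows the payload shaping.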

Asks
We wish to discuss the pros and cons.
For example, the exposure of containers as proper neutron entities and the 
utility of neutron's allowed-address-pairs are not yet well understood.

We also wish to understand whether this approach is acceptable for kuryr.


EPA
The Enhanced Platform Awareness initiative is a continuous program to enable 
fine-tuning of the platform for virtualized network functions.
This is done by exposing the processor and platform capabilities through the 
management and orchestration layers.
When a virtual network function is instantiated by an Enhanced Platform 
Awareness enabled orchestrator, the application requirements can be more 
efficiently matched with the platform capabilities.
http://itpeernetwork.intel.com/openstack-kilo-release-is-shaping-up-to-be-a-milestone-for-enhanced-platform-awareness/
https://networkbuilders.intel.com/docs/OpenStack_EPA.pdf
https://www.brighttalk.com/webcast/12229/181563/epa-features-in-openstack-kilo


Regards,
Ivan
--
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263


This e-mail and any attachments may contain confidential material for the sole
use of the intended recipient(s). Any review or distribution by others is
strictly prohibited. If you are not the intended recipient, please contact the
sender and delete all copies.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev