Re: [openstack-dev] [nova] Contrail VIF TAP plugging broken

2018-02-22 Thread Édouard Thuleau
Thanks a lot Matt, Jay and Dan for your responsiveness and your time.

Édouard.

On Wed, Feb 21, 2018 at 9:50 PM, Matt Riedemann  wrote:

> On 2/21/2018 4:30 AM, Édouard Thuleau wrote:
>
>> Hi Seán, Michael,
>>
>> Since patch [1] moved Contrail VIF plugging under privsep, Nova fails to
>> plug the TAP device on the Contrail software switch (named vrouter) [2]. I
>> proposed a fix at the beginning of the year [3], but it is still pending
>> approval even though it got a couple of +1s and no negative feedback.
>> That's why I'm writing this email to get your attention.
>> That issue appeared during the Queens development cycle and we need to
>> fix it before the release (I hope we are not too late).
>> Contrail has already started to move to os-vif drivers [4]. A first VIF
>> type driver exists for the DPDK case [5]; we plan to do the same for the
>> TAP case in the R release and remove the Nova VIF plugging code for the
>> vrouter.
>>
>> [1] https://review.openstack.org/#/c/515916/
>> [2] https://bugs.launchpad.net/nova/+bug/1742963
>> [3] https://review.openstack.org/#/c/533212/
>> [4] https://github.com/Juniper/contrail-nova-vif-driver
>> [5] https://review.openstack.org/#/c/441183/
>>
>> Regards,
>> Édouard.
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> Approved the change on master and working on the backport to
> stable/queens. We'll be cutting an RC3 tomorrow so I'll make sure this gets
> into that.
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Contrail VIF TAP plugging broken

2018-02-21 Thread Édouard Thuleau
Hi Seán, Michael,

Since patch [1] moved Contrail VIF plugging under privsep, Nova fails to
plug the TAP device on the Contrail software switch (named vrouter) [2]. I
proposed a fix at the beginning of the year [3], but it is still pending
approval even though it got a couple of +1s and no negative feedback.
That's why I'm writing this email to get your attention.
That issue appeared during the Queens development cycle and we need to fix
it before the release (I hope we are not too late).
Contrail has already started to move to os-vif drivers [4]. A first VIF type
driver exists for the DPDK case [5]; we plan to do the same for the TAP case
in the R release and remove the Nova VIF plugging code for the vrouter.

[1] https://review.openstack.org/#/c/515916/
[2] https://bugs.launchpad.net/nova/+bug/1742963
[3] https://review.openstack.org/#/c/533212/
[4] https://github.com/Juniper/contrail-nova-vif-driver
[5] https://review.openstack.org/#/c/441183/
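For context, "under privsep" means the plug call now runs through an
oslo.privsep entrypoint in a privileged helper process instead of going
through rootwrap/sudo. Below is a minimal, hypothetical sketch of that
pattern; the context name, the function, and the vrouter-port-control flags
are illustrative and not the actual Nova or os-vif code:

from oslo_concurrency import processutils
from oslo_privsep import capabilities, priv_context

# Hypothetical privsep context for vrouter TAP plugging; names and the
# capability set are examples only.
vrouter_pctxt = priv_context.PrivContext(
    'vif_plug_vrouter',
    cfg_section='vif_plug_vrouter_privileged',
    pypath=__name__ + '.vrouter_pctxt',
    capabilities=[capabilities.CAP_NET_ADMIN],
)


@vrouter_pctxt.entrypoint
def plug_tap(dev_name, instance_uuid):
    # Runs inside the privileged helper process, not via sudo/rootwrap; the
    # command and flags below are placeholders for the real vrouter port call.
    processutils.execute('vrouter-port-control', '--oper=add',
                         '--tap_name=%s' % dev_name,
                         '--instance_uuid=%s' % instance_uuid)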

Regards,
Édouard.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Do we still support core plugin not based on the ML2 framework?

2017-06-26 Thread Édouard Thuleau
Hi Armando,

I opened a Launchpad bug [1]. I'll try to propose a patch on one of
the service plugins to enable a pluggable backend driver.
I'll look at how we can add tests to check that service plugins work with a
dummy core plugin not based on the Neutron model.

[1] https://bugs.launchpad.net/neutron/+bug/1700651

Édouard.

On Thu, Jun 22, 2017 at 11:40 PM, Armando M.  wrote:
>
>
> On 22 June 2017 at 17:24, Édouard Thuleau  wrote:
>>
>> Hi Armando,
>>
>> I did not open any bug report. But if a core plugin implements only
>> the NeutronPluginBaseV2 interface [1] and not the NeutronDbPluginV2
>> interface [2], most of the service plugins in that list will be
>> initialized without any errors (only the timestamp plugin fails to
>> initialize because it tries to do DB stuff in its constructor [3]).
>> All API extensions of those service plugins are then listed as supported,
>> but none of them work. Resources are not extended (tag, revision,
>> auto-allocate) and some API extensions return 404
>> (network-ip-availability or flavors).
>>
>> What I proposed is to improve all the service plugins in that list
>> to support pluggable backend drivers (thanks to the Neutron
>> service driver mechanism [4]) and to use, by default, a driver based on
>> the Neutron DB (as it is implemented today). That would permit a core
>> plugin which does not implement the Neutron DB model to provide its own
>> driver. But until all service plugins are fixed, I proposed a
>> workaround to disable them.
>
>
> I would recommend against the workaround of disabling them because of the
> stated rationale.
>
> Can you open a bug report, potentially when you're ready to file a fix (or
> enable someone else to take ownership of the fix)? This way we can have a
> more effective conversation either on the bug report or code review.
>
> Thanks,
> Armando
>
>>
>>
>> [1]
>> https://github.com/openstack/neutron/blob/master/neutron/neutron_plugin_base_v2.py#L30
>> [2]
>> https://github.com/openstack/neutron/blob/master/neutron/db/db_base_plugin_v2.py#L124
>> [3]
>> https://github.com/openstack/neutron/blob/master/neutron/services/timestamp/timestamp_plugin.py#L32
>> [4]
>> https://github.com/openstack/neutron/blob/master/neutron/services/service_base.py#L27
>>
>> Édouard.
>>
>> On Thu, Jun 22, 2017 at 12:29 AM, Armando M.  wrote:
>> >
>> >
>> > On 21 June 2017 at 17:40, Édouard Thuleau 
>> > wrote:
>> >>
>> >> Hi,
>> >>
>> >> @Chaoyi,
>> >> I don't want to change the core plugin interface. But I'm not sure we
>> >> are talking about the same interface. I had a very quick look into the
>> >> tricircle code and I think it uses the NeutronDbPluginV2 interface [1]
>> >> which implements the Neutron DB model. Our Contrail Neutron plugin
>> >> implements the NeutronPluginBaseV2 interface [2]. Anyway,
>> >> NeutronDbPluginV2 is inheriting from NeutronPluginBaseV2 [3].
>> >> Thanks for the pointer to the stadium paragraph.
>> >
>> >
>> > Is there any bug report that captures the actual error you're facing?
>> > Out of
>> > the list of plugins that have been added to that list over time, most
>> > work
>> > just exercising the core plugin API, and we can look into the ones that
>> > don't to figure out whether we overlooked some design abstractions
>> > during
>> > code review.
>> >
>> >>
>> >>
>> >> @Kevin,
>> >> Service plugins loaded by default are defined in a constant list [4]
>> >> and I don't see how I can prevent a default service plugin from being
>> >> loaded [5].
>> >>
>> >> [1]
>> >>
>> >> https://github.com/openstack/tricircle/blob/master/tricircle/network/central_plugin.py#L128
>> >> [2]
>> >>
>> >> https://github.com/Juniper/contrail-neutron-plugin/blob/master/neutron_plugin_contrail/plugins/opencontrail/contrail_plugin_base.py#L113
>> >> [3]
>> >>
>> >> https://github.com/openstack/neutron/blob/master/neutron/db/db_base_plugin_v2.py#L125
>> >> [4]
>> >>
>> >> https://github.com/openstack/neutron/blob/master/neutron/plugins/common/constants.py#L43
>> >> [5]
>> >>
>> >> https://github.com/openstack/neutron/blob/master/neutron/manager.py#L190
>> >>
>> >> Édouard.
>> >>
>> >> On Wed, Jun 21, 

Re: [openstack-dev] [neutron] Do we still support core plugin not based on the ML2 framework?

2017-06-22 Thread Édouard Thuleau
Hi Armando,

I did not open any bug report. But if a core plugin implements only
the NeutronPluginBaseV2 interface [1] and not the NeutronDbPluginV2
interface [2], most of the service plugins in that list will be
initialized without any errors (only the timestamp plugin fails to
initialize because it tries to do DB stuff in its constructor [3]).
All API extensions of those service plugins are then listed as supported,
but none of them work. Resources are not extended (tag, revision,
auto-allocate) and some API extensions return 404
(network-ip-availability or flavors).

What I proposed is to improve all the service plugins in that list
to support pluggable backend drivers (thanks to the Neutron
service driver mechanism [4]) and to use, by default, a driver based on
the Neutron DB (as it is implemented today). That would permit a core
plugin which does not implement the Neutron DB model to provide its own
driver. But until all service plugins are fixed, I proposed a
workaround to disable them (see the sketch after the links below).

[1] 
https://github.com/openstack/neutron/blob/master/neutron/neutron_plugin_base_v2.py#L30
[2] 
https://github.com/openstack/neutron/blob/master/neutron/db/db_base_plugin_v2.py#L124
[3] 
https://github.com/openstack/neutron/blob/master/neutron/services/timestamp/timestamp_plugin.py#L32
[4] 
https://github.com/openstack/neutron/blob/master/neutron/services/service_base.py#L27
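To make the proposal concrete, here is a rough, hypothetical sketch (the
plugin and the 'TAG' service type label are illustrative, not existing
Neutron code) of a service plugin delegating to a backend driver loaded
through the service driver mechanism referenced in [4]:

from neutron.services import service_base


class TagServicePlugin(object):
    """Hypothetical service plugin delegating to a pluggable backend."""

    def __init__(self):
        # load_drivers() reads the [service_providers] section of the
        # Neutron configuration and returns the loaded drivers plus the
        # name of the default provider (the DB-backed one in this proposal).
        self.drivers, self.default_provider = service_base.load_drivers(
            'TAG', self)

    def _driver(self):
        return self.drivers[self.default_provider]

    def update_tag(self, context, resource, resource_id, tag_id):
        # Delegate to whatever backend the deployer configured; a non-DB
        # core plugin would ship its own driver here.
        return self._driver().update_tag(context, resource, resource_id,
                                         tag_id)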

Édouard.

On Thu, Jun 22, 2017 at 12:29 AM, Armando M.  wrote:
>
>
> On 21 June 2017 at 17:40, Édouard Thuleau  wrote:
>>
>> Hi,
>>
>> @Chaoyi,
>> I don't want to change the core plugin interface. But I'm not sure we
>> are talking about the same interface. I had a very quick look into the
>> tricircle code and I think it uses the NeutronDbPluginV2 interface [1]
>> which implements the Neutron DB model. Our Contrail Neutron plugin
>> implements the NeutronPluginBaseV2 interface [2]. Anyway,
>> NeutronDbPluginV2 is inheriting from NeutronPluginBaseV2 [3].
>> Thanks for the pointer to the stadium paragraph.
>
>
> Is there any bug report that captures the actual error you're facing? Out of
> the list of plugins that have been added to that list over time, most work
> just exercising the core plugin API, and we can look into the ones that
> don't to figure out whether we overlooked some design abstractions during
> code review.
>
>>
>>
>> @Kevin,
>> Service plugins loaded by default are defined in a constant list [4]
>> and I don't see how I can prevent a default service plugin from being
>> loaded [5].
>>
>> [1]
>> https://github.com/openstack/tricircle/blob/master/tricircle/network/central_plugin.py#L128
>> [2]
>> https://github.com/Juniper/contrail-neutron-plugin/blob/master/neutron_plugin_contrail/plugins/opencontrail/contrail_plugin_base.py#L113
>> [3]
>> https://github.com/openstack/neutron/blob/master/neutron/db/db_base_plugin_v2.py#L125
>> [4]
>> https://github.com/openstack/neutron/blob/master/neutron/plugins/common/constants.py#L43
>> [5]
>> https://github.com/openstack/neutron/blob/master/neutron/manager.py#L190
>>
>> Édouard.
>>
>> On Wed, Jun 21, 2017 at 11:22 AM, Kevin Benton  wrote:
>> > Why not just delete the service plugins you don't support from the
>> > default
>> > plugins dict?
>> >
>> > On Wed, Jun 21, 2017 at 1:45 AM, Édouard Thuleau
>> > 
>> > wrote:
>> >>
>> >> OK, we would like to help on that. How can we start?
>> >>
>> >> I think the issue I raised in this thread must be the first point to
>> >> address, and my second proposition seems to be the correct one. What do
>> >> you think?
>> >> But it will need some time, and I am not sure we'll be able to fix all
>> >> service plugins loaded by default before the next Pike release.
>> >>
>> >> I would like to propose a workaround until all default service plugins
>> >> are compatible with non-DB core plugins. We can continue to load the
>> >> default service plugins list but authorize a core plugin to disable
>> >> it completely with a private attribute on the core plugin class, as is
>> >> done for bulk/pagination/sorting operations.
>> >>
>> >> Of course, we need to add the ability to detect any regression on
>> >> that. I think unit tests will help, and we can also work on a
>> >> functional test based on a fake non-DB core plugin.
>> >>
>> >> Regards,
>> >> Édouard.
>> >>
>> >> On Tue, Jun 20, 2017 at 12:09 AM, Kevin Benton 
>> >> wrote:
>> >> > 

Re: [openstack-dev] [neutron] Do we still support core plugin not based on the ML2 framework?

2017-06-21 Thread Édouard Thuleau
Hi,

@Chaoyi,
I don't want to change the core plugin interface. But I'm not sure we
are talking about the same interface. I had a very quick look into the
tricircle code and I think it uses the NeutronDbPluginV2 interface [1]
which implements the Neutron DB model. Our Contrail Neutron plugin
implements the NeutronPluginBaseV2 interface [2]. Anyway,
NeutronDbPluginV2 is inheriting from NeutronPluginBaseV2 [3].
Thanks for the pointer to the stadium paragraph.

@Kevin,
Service plugins loaded by default are defined in a constant list [4]
and I don't see how I can prevent a default service plugin from being
loaded [5].

[1] 
https://github.com/openstack/tricircle/blob/master/tricircle/network/central_plugin.py#L128
[2] 
https://github.com/Juniper/contrail-neutron-plugin/blob/master/neutron_plugin_contrail/plugins/opencontrail/contrail_plugin_base.py#L113
[3] 
https://github.com/openstack/neutron/blob/master/neutron/db/db_base_plugin_v2.py#L125
[4] 
https://github.com/openstack/neutron/blob/master/neutron/plugins/common/constants.py#L43
[5] https://github.com/openstack/neutron/blob/master/neutron/manager.py#L190

Édouard.

On Wed, Jun 21, 2017 at 11:22 AM, Kevin Benton  wrote:
> Why not just delete the service plugins you don't support from the default
> plugins dict?
>
> On Wed, Jun 21, 2017 at 1:45 AM, Édouard Thuleau 
> wrote:
>>
>> OK, we would like to help on that. How can we start?
>>
>> I think the issue I raised in this thread must be the first point to
>> address, and my second proposition seems to be the correct one. What do
>> you think?
>> But it will need some time, and I am not sure we'll be able to fix all
>> service plugins loaded by default before the next Pike release.
>>
>> I would like to propose a workaround until all default service plugins
>> are compatible with non-DB core plugins. We can continue to load the
>> default service plugins list but authorize a core plugin to disable
>> it completely with a private attribute on the core plugin class, as is
>> done for bulk/pagination/sorting operations.
>>
>> Of course, we need to add the ability to detect any regression on
>> that. I think unit tests will help, and we can also work on a
>> functional test based on a fake non-DB core plugin.
>>
>> Regards,
>> Édouard.
>>
>> On Tue, Jun 20, 2017 at 12:09 AM, Kevin Benton  wrote:
>> > The issue is mainly developer resources. Everyone currently working
>> > upstream
>> > doesn't have the bandwidth to keep adding/reviewing the layers of
>> > interfaces
>> > to make the DB optional that go untested. (None of the projects that
>> > would
>> > use them run a CI system that reports results on Neutron patches.)
>> >
>> > I think we can certainly accept patches to do the things you are
>> > proposing,
>> > but there is no guarantee that it won't regress to being DB-dependent
>> > until
>> > there is something reporting results back telling us when it breaks.
>> >
>> > So it's not that the community is against non-DB core plugins, it's just
>> > that the people developing those plugins don't participate in the
>> > community
>> > to ensure they work.
>> >
>> > Cheers
>> >
>> >
>> > On Mon, Jun 19, 2017 at 2:15 AM, Édouard Thuleau
>> > 
>> > wrote:
>> >>
>> >> Oops, sent too fast, sorry. I try again.
>> >>
>> >> Hi,
>> >>
>> >> Since Mitaka release, a default service plugins list is loaded when
>> >> Neutron
>> >> server starts [1]. That list is not editable and was extended with few
>> >> services
>> >> [2]. But all of them rely on the Neutron DB model.
>> >>
>> >> If a core driver is not based on the ML2 core plugin framework or not
>> >> based on
>> >> the 'neutron.db.models_v2' class, all that service plugins will not
>> >> work.
>> >>
>> >> So my first question is Does Neutron still support core plugin not
>> >> based
>> >> on ML2
>> >> or 'neutron.db.models_v2' class?
>> >>
>> >> If yes, I would like to propose two solutions:
>> >> - permits core plugin to overload the service plugin class by it's own
>> >> implementation and continuing to use the actual Neutron db based
>> >> services
>> >> as
>> >> default.
>> >> - modifying all default plugin service to use service plugin driver
>> >> framework [3], and set the act

Re: [openstack-dev] [neutron] Do we still support core plugin not based on the ML2 framework?

2017-06-21 Thread Édouard Thuleau
OK, we would like to help on that. How can we start?

I think the issue I raised in this thread must be the first point to
address, and my second proposition seems to be the correct one. What do
you think?
But it will need some time, and I am not sure we'll be able to fix all
service plugins loaded by default before the next Pike release.

I would like to propose a workaround until all default service plugins are
compatible with non-DB core plugins. We can continue to load the
default service plugins list but authorize a core plugin to disable
it completely with a private attribute on the core plugin class, as is
done for bulk/pagination/sorting operations (see the sketch below).
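A minimal sketch of what that workaround could look like; the attribute
name, the plugin list and the loading helper are all hypothetical, only
mirroring the existing __native_*_support flag style mentioned above:

# Hypothetical illustration of the proposed opt-out flag; neither the
# attribute name nor this loading helper exists in Neutron today.
DEFAULT_SERVICE_PLUGINS = ['tag', 'timestamp_core', 'network_ip_availability',
                           'flavors', 'auto_allocate']


class NonDbCorePlugin(object):
    # Mirrors the style of the __native_bulk_support / __native_pagination
    # flags used to advertise bulk/pagination/sorting support.
    _supports_default_service_plugins = False


def service_plugins_to_load(core_plugin, configured_plugins):
    """Skip the DB-based default service plugins when the core plugin opts out."""
    plugins = list(configured_plugins)
    if getattr(core_plugin, '_supports_default_service_plugins', True):
        plugins.extend(DEFAULT_SERVICE_PLUGINS)
    return plugins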

Of course, we need to add the ability to detect any regression on
that. I think unit tests will help, and we can also work on a
functional test based on a fake non-DB core plugin.

Regards,
Édouard.

On Tue, Jun 20, 2017 at 12:09 AM, Kevin Benton  wrote:
> The issue is mainly developer resources. Everyone currently working upstream
> doesn't have the bandwidth to keep adding/reviewing the layers of interfaces
> to make the DB optional that go untested. (None of the projects that would
> use them run a CI system that reports results on Neutron patches.)
>
> I think we can certainly accept patches to do the things you are proposing,
> but there is no guarantee that it won't regress to being DB-dependent until
> there is something reporting results back telling us when it breaks.
>
> So it's not that the community is against non-DB core plugins, it's just
> that the people developing those plugins don't participate in the community
> to ensure they work.
>
> Cheers
>
>
> On Mon, Jun 19, 2017 at 2:15 AM, Édouard Thuleau 
> wrote:
>>
>> Oops, sent too fast, sorry. I'll try again.
>>
>> Hi,
>>
>> Since the Mitaka release, a default service plugins list is loaded when
>> the Neutron server starts [1]. That list is not editable and was extended
>> with a few services [2]. But all of them rely on the Neutron DB model.
>>
>> If a core driver is not based on the ML2 core plugin framework or on
>> the 'neutron.db.models_v2' class, none of those service plugins will work.
>>
>> So my first question is: does Neutron still support core plugins not based
>> on ML2 or on the 'neutron.db.models_v2' class?
>>
>> If yes, I would like to propose two solutions:
>> - permit a core plugin to overload the service plugin class with its own
>> implementation, while continuing to use the current Neutron DB based
>> services as the default;
>> - modify all default service plugins to use the service plugin driver
>> framework [3], and set the current Neutron DB based implementation as the
>> default driver for the services. That permits core drivers not based on the
>> Neutron DB to specify a driver. This solution was adopted in the
>> networking-bgpvpn project, where we can find two abstract driver classes:
>> one for core drivers based on the Neutron DB model [4] and one used by core
>> drivers not based on the DB [5], such as the Contrail driver [6].
>>
>> [1]
>> https://github.com/openstack/neutron/commit/aadf2f30f84dff3d85f380a7ff4e16dbbb0c6bb0#diff-9169a6595980d19b2649d5bedfff05ce
>> [2]
>> https://github.com/openstack/neutron/blob/master/neutron/plugins/common/constants.py#L43
>> [3]
>> https://github.com/openstack/neutron/blob/master/neutron/services/service_base.py#L27
>> [4]
>> https://github.com/openstack/networking-bgpvpn/blob/master/networking_bgpvpn/neutron/services/service_drivers/driver_api.py#L226
>> [5]
>> https://github.com/openstack/networking-bgpvpn/blob/master/networking_bgpvpn/neutron/services/service_drivers/driver_api.py#L23
>> [6]
>> https://github.com/Juniper/contrail-neutron-plugin/blob/master/neutron_plugin_contrail/plugins/opencontrail/networking_bgpvpn/contrail.py#L36
>>
>> Regards,
>> Édouard.
>>
>> On Mon, Jun 19, 2017 at 10:47 AM, Édouard Thuleau
>>  wrote:
>> > Hi,
>> > Since Mitaka release [1], a default service plugins list is loaded
>> > when Neutron server starts. That list is not editable and was extended
>> > with few services [2]. But none of th
>> >
>> > [1]
>> > https://github.com/openstack/neutron/commit/aadf2f30f84dff3d85f380a7ff4e16dbbb0c6bb0#diff-9169a6595980d19b2649d5bedfff05ce
>> > [2]
>> > https://github.com/openstack/neutron/blob/master/neutron/plugins/common/constants.py#L43
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>>

Re: [openstack-dev] [neutron] Do we still support core plugin not based on the ML2 framework?

2017-06-19 Thread Édouard Thuleau
Oops, sent too fast, sorry. I'll try again.

Hi,

Since the Mitaka release, a default service plugins list is loaded when the
Neutron server starts [1]. That list is not editable and was extended with a
few services [2]. But all of them rely on the Neutron DB model.

If a core driver is not based on the ML2 core plugin framework or on the
'neutron.db.models_v2' class, none of those service plugins will work.

So my first question is: does Neutron still support core plugins not based on
ML2 or on the 'neutron.db.models_v2' class?

If yes, I would like to propose two solutions:
- permit a core plugin to overload the service plugin class with its own
implementation, while continuing to use the current Neutron DB based services
as the default;
- modify all default service plugins to use the service plugin driver
framework [3], and set the current Neutron DB based implementation as the
default driver for the services. That permits core drivers not based on the
Neutron DB to specify a driver. This solution was adopted in the
networking-bgpvpn project, where we can find two abstract driver classes: one
for core drivers based on the Neutron DB model [4] and one used by core
drivers not based on the DB [5], such as the Contrail driver [6] (see the
sketch after the links below).

[1] 
https://github.com/openstack/neutron/commit/aadf2f30f84dff3d85f380a7ff4e16dbbb0c6bb0#diff-9169a6595980d19b2649d5bedfff05ce
[2] 
https://github.com/openstack/neutron/blob/master/neutron/plugins/common/constants.py#L43
[3] 
https://github.com/openstack/neutron/blob/master/neutron/services/service_base.py#L27
[4] 
https://github.com/openstack/networking-bgpvpn/blob/master/networking_bgpvpn/neutron/services/service_drivers/driver_api.py#L226
[5] 
https://github.com/openstack/networking-bgpvpn/blob/master/networking_bgpvpn/neutron/services/service_drivers/driver_api.py#L23
[6] 
https://github.com/Juniper/contrail-neutron-plugin/blob/master/neutron_plugin_contrail/plugins/opencontrail/networking_bgpvpn/contrail.py#L36
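As a rough illustration of that second option, here is a simplified sketch of
the networking-bgpvpn pattern: a backend-agnostic abstract driver, a
DB-backed specialization used as the default, and a non-DB driver implementing
the same contract. Class and method names are simplified and do not match
driver_api.py exactly:

import abc


class BGPVPNDriverBase(abc.ABC):
    """Backend-agnostic contract: makes no assumption about storage."""

    @abc.abstractmethod
    def create_bgpvpn(self, context, bgpvpn):
        pass

    @abc.abstractmethod
    def get_bgpvpns(self, context, filters=None, fields=None):
        pass


class BGPVPNDriverDB(BGPVPNDriverBase):
    """Default flavour: persists resources in the Neutron DB, then lets the
    backend react through a post-create hook."""

    def create_bgpvpn(self, context, bgpvpn):
        record = dict(bgpvpn, id='generated-uuid')  # stand-in for a DB row
        self.create_bgpvpn_postcommit(context, record)
        return record

    def create_bgpvpn_postcommit(self, context, bgpvpn):
        pass  # backend-specific work goes here

    def get_bgpvpns(self, context, filters=None, fields=None):
        return []  # would query the Neutron DB


class ContrailBGPVPNDriver(BGPVPNDriverBase):
    """A non-DB backend implements the same contract against its own API,
    which is the spirit of the Contrail driver [6]."""

    def create_bgpvpn(self, context, bgpvpn):
        return bgpvpn  # would call the Contrail API instead of the DB

    def get_bgpvpns(self, context, filters=None, fields=None):
        return []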

Regards,
Édouard.

On Mon, Jun 19, 2017 at 10:47 AM, Édouard Thuleau
 wrote:
> Hi,
> Since Mitaka release [1], a default service plugins list is loaded
> when Neutron server starts. That list is not editable and was extended
> with few services [2]. But none of th
>
> [1] 
> https://github.com/openstack/neutron/commit/aadf2f30f84dff3d85f380a7ff4e16dbbb0c6bb0#diff-9169a6595980d19b2649d5bedfff05ce
> [2] 
> https://github.com/openstack/neutron/blob/master/neutron/plugins/common/constants.py#L43

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Do we still support core plugin not based on the ML2 framework?

2017-06-19 Thread Édouard Thuleau
Hi,
Since Mitaka release [1], a default service plugins list is loaded
when Neutron server starts. That list is not editable and was extended
with few services [2]. But none of th

[1] 
https://github.com/openstack/neutron/commit/aadf2f30f84dff3d85f380a7ff4e16dbbb0c6bb0#diff-9169a6595980d19b2649d5bedfff05ce
[2] 
https://github.com/openstack/neutron/blob/master/neutron/plugins/common/constants.py#L43

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][linux bridge]

2016-07-08 Thread Édouard Thuleau
Hi,

I'm not close to Neutron's discussions, but did you think of having a look at
pyroute2 [1]?
"Pyroute2 is a pure Python netlink and Linux network configuration library.
It requires only Python stdlib, no 3rd party libraries."
It permits creating bridges and adding interfaces easily [2] (and much more);
a quick sketch follows the links below.

[1] https://github.com/svinota/pyroute2
[2]
https://github.com/svinota/pyroute2/blob/a76d2efd8966ec5b6cc713dc5d909b5cd070a9a8/benchmark/ipdb.py#L16
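For illustration, a minimal sketch of bridge creation with pyroute2's IPRoute
API; the interface names here are only examples:

from pyroute2 import IPRoute

ip = IPRoute()
# create a Linux bridge via netlink, without shelling out to brctl
ip.link('add', ifname='br-demo', kind='bridge')
# look up the bridge and an existing port interface by name
br_idx = ip.link_lookup(ifname='br-demo')[0]
port_idx = ip.link_lookup(ifname='eth0')[0]
# enslave the interface to the bridge and bring the bridge up
ip.link('set', index=port_idx, master=br_idx)
ip.link('set', index=br_idx, state='up')
ip.close()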

On Fri, Jul 8, 2016 at 8:05 AM, Brandon Logan 
wrote:

> pybrctl repo is at: https://github.com/udragon/pybrctl
> It is in pypi.
>
> Looks like a wrapper around the shell brctl commands.  I don't think it
> would buy us anything more than what moving neutron's current
> implementation of doing brctl commands to neutron-lib would do.  In
> fact, it might end up costing more.  That's just my very uninformed
> opinion though.
>
> Thanks,
> Brandon
>
> On Thu, 2016-07-07 at 23:59 +, Bhatia, Manjeet S wrote:
> > Hi,
> >
> > There is work in progress for pure python driven linux network
> > configuration. I think most
> > of work will be done with this patch https://review.openstack.org/#/c
> > /155631/ . The only
> > thing left after this will be linux bridge configuration, Which I
> > would like to discuss with
> > community. There are two ways at the moment I can think to do that
> > implementation,
> > First, use pybrctl which may need some changes in library itself in
> > order for full support.
> > It will clean up the code from neutron. But looking pybrctl code
> > which is just executing
> > Shell commands, another solution which Brandon Logan discussed is
> > move the existing
> > Code for executing those commands to neutron-lib, which I think is
> > better solution. I would
> > like to have views of community, especially people working neutron-
> > lib about moving
> > python code for executing brctl commands to neutron-lib.
> >
> >
> > Thanks and Regards !
> > Manjeet Singh Bhatia
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Private external network

2014-10-14 Thread Édouard Thuleau
Hi Salvatore,

I would like to propose a blueprint for the next Neutron release that permits
dedicating an external network to a tenant. For that, I thought to rethink
the conjunction of the two attributes `shared`
and `router:external` of the network resource.

I saw that you already initiated work on that topic [1][2], but the bp
was un-targeted in favour of an alternative approach which might be more
complete. Was that alternative released, or is it a work in progress? I ask
to be sure not to duplicate work/effort.

[1]
https://blueprints.launchpad.net/neutron/+spec/sharing-model-for-external-networks
[2]
https://wiki.openstack.org/wiki/Neutron/sharing-model-for-external-networks

Regards,
Édouard.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Performance of security group

2014-08-21 Thread Édouard Thuleau
Nice job! That's awesome.

Thanks,
Édouard.


On Thu, Aug 21, 2014 at 8:02 AM, Miguel Angel Ajo Pelayo <
mangel...@redhat.com> wrote:

> Thank you shihanzhang!,
>
> I can't believe I didn't realize the ipset part spec was accepted I live
> on my own bubble... I will be reviewing and testing/helping on that part
> too during the next few days,  I was too concentrated in the RPC part.
>
>
> Best regards,
>
> - Original Message -
> > hi neutroner!
> > my patch about BP:
> >
> https://blueprints.launchpad.net/openstack/?searchtext=add-ipset-to-security
> > need install ipset in devstack, I have commit the patch:
> > https://review.openstack.org/#/c/113453/, who can help me review it,
> thanks
> > very much!
> >
> > Best regards,
> > shihanzhang
> >
> >
> >
> >
> > At 2014-08-21 10:47:59, "Martinx - ジェームズ" 
> wrote:
> >
> >
> >
> > +1 "NFTablesDriver"!
> >
> > Also, NFTables, AFAIK, improves IDS systems, like Suricata, for example:
> > https://home.regit.org/2014/02/suricata-and-nftables/
> >
> > Then, I'm wondering here... What benefits might come for OpenStack Nova /
> > Neutron, if it comes with a NFTables driver, instead of the current
> > IPTables?!
> >
> > * Efficient Security Group design?
> > * Better FWaaS, maybe with NAT(44/66) support?
> > * Native support for IPv6, with the defamed NAT66 built-in, simpler
> "Floating
> > IP" implementation, for both v4 and v6 networks under a single
> > implementation ( I don't like NAT66, I prefer a `routed Floating IPv6`
> > version ) ?
> > * Metadata over IPv6 still using NAT(66) ( I don't like NAT66 ), single
> > implementation?
> > * Suricata-as-a-Service?!
> >
> > It sounds pretty cool! :-)
> >
> >
> > On 20 August 2014 23:16, Baohua Yang < yangbao...@gmail.com > wrote:
> >
> >
> >
> > Great!
> > We met similar problems.
> > The current mechanisms produce too many iptables rules, and it's hard to
> > debug.
> > Really look forward to seeing a more efficient security group design.
> >
> >
> > On Thu, Jul 10, 2014 at 11:44 PM, Kyle Mestery <
> mest...@noironetworks.com >
> > wrote:
> >
> >
> >
> > On Thu, Jul 10, 2014 at 4:30 AM, shihanzhang < ayshihanzh...@126.com >
> wrote:
> > >
> > > With the deployment 'nova + neutron + openvswitch', when we bulk create
> > > about 500 VM with a default security group, the CPU usage of
> neutron-server
> > > and openvswitch agent is very high, especially the CPU usage of
> openvswitch
> > > agent will be 100%, this will cause creating VMs failed.
> > >
> > > With the method discussed in mailist:
> > >
> > > 1) ipset optimization ( https://review.openstack.org/#/c/100761/ )
> > >
> > > 3) sg rpc optimization (with fanout)
> > > ( https://review.openstack.org/#/c/104522/ )
> > >
> > > I have implement these two scheme in my deployment, when we again bulk
> > > create about 500 VM with a default security group, the CPU usage of
> > > openvswitch agent will reduce to 10%, even lower than 10%, so I think
> the
> > > iprovement of these two options are very efficient.
> > >
> > > Who can help us to review our spec?
> > >
> > This is great work! These are on my list of things to review in detail
> > soon, but given the Neutron sprint this week, I haven't had time yet.
> > I'll try to remedy that by the weekend.
> >
> > Thanks!
> > Kyle
> >
> > > Best regards,
> > > shihanzhang
> > >
> > >
> > >
> > >
> > >
> > > At 2014-07-03 10:08:21, "Ihar Hrachyshka" < ihrac...@redhat.com >
> wrote:
> > >>-BEGIN PGP SIGNED MESSAGE-
> > >>Hash: SHA512
> > >>
> > >>Oh, so you have the enhancement implemented? Great! Any numbers that
> > >>shows how much we gain from that?
> > >>
> > >>/Ihar
> > >>
> > >>On 03/07/14 02:49, shihanzhang wrote:
> > >>> Hi, Miguel Angel Ajo! Yes, the ipset implementation is ready, today
> > >>> I will modify my spec, when the spec is approved, I will commit the
> > >>> codes as soon as possilbe!
> > >>>
> > >>>
> > >>>
> > >>>
> > >>>
> > >>> At 2014-07-02 10:12:34, "Miguel Angel Ajo" < majop...@redhat.com >
> > >>> wrote:
> > 
> >  Nice Shihanzhang,
> > 
> >  Do you mean the ipset implementation is ready, or just the
> >  spec?.
> > 
> > 
> >  For the SG group refactor, I don't worry about who does it, or
> >  who takes the credit, but I believe it's important we address
> >  this bottleneck during Juno trying to match nova's scalability.
> > 
> >  Best regards, Miguel Ángel.
> > 
> > 
> >  On 07/02/2014 02:50 PM, shihanzhang wrote:
> > > hi Miguel Ángel and Ihar Hrachyshka, I agree with you that
> > > split the work in several specs, I have finished the work (
> > > ipset optimization), you can do 'sg rpc optimization (without
> > > fanout)'. as the third part(sg rpc optimization (with fanout)),
> > > I think we need talk about it, because just using ipset to
> > > optimize security group agent codes does not bring the best
> > > results!
> > >
> > > Best regards, shihanzhang.
> > >
> > >
> > 

Re: [openstack-dev] Performance of security group

2014-06-30 Thread Édouard Thuleau
Yes, using a fanout topic per VNI is another big improvement we
could make.
That would fit perfectly with the l2-pop mechanism driver.
Of course, it needs a specific call on start/re-sync to get the initial
state. That is actually done by the l2-pop MD if the uptime of an agent is
less than the 'agent_boot_time' flag [1]. A tiny sketch of the per-VNI topic
idea follows the link below.

[1]
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/l2pop/mech_driver.py#L181
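A minimal sketch of scoping the l2population fanout per network instead of
one global topic, loosely based on the old rpc proxy API quoted further down;
the function and topic naming are illustrative only:

def notify_fdb_update(rpc_proxy, context, method, fdb_entries, network_id):
    # Only agents subscribed to this network's topic would consume the cast.
    topic = '%s.%s' % ('l2population', network_id)
    rpc_proxy.fanout_cast(context,
                          rpc_proxy.make_msg(method, fdb_entries=fdb_entries),
                          topic=topic)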

Édouard.


On Fri, Jun 27, 2014 at 3:43 AM, joehuang  wrote:

> Interesting idea to optimize the performance.
>
> Not only security group rule will leads to fanout message load, we need to
> review and check to see if all fanout usegae in Neutron could be optimized.
>
> For example, L2 population:
>
> self.fanout_cast(context,
>   self.make_msg(method, fdb_entries=fdb_entries),
>   topic=self.topic_l2pop_update)
>
> it would be better to use network+l2pop_update as the topic, and only the
> agents which there are VMs running on it will consume the message.
>
> Best Regards
> Chaoyi Huang( Joe Huang)
>
> -Original Message-
> From: Miguel Angel Ajo Pelayo [mailto:mangel...@redhat.com]
> Sent: 27 June 2014 1:33
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron]Performance of security group
>
> - Original Message -
> > @Nachi: Yes, that could be a good improvement to factorize the RPC
> > mechanism.
> >
> > Another idea:
> > What about creating an RPC topic per security group (what about the RPC
> > topic scalability, though?) to which an agent subscribes if one of its
> > ports is associated with the security group?
> >
> > Regards,
> > Édouard.
> >
> >
>
>
> Hmm, Interesting,
>
> @Nachi, I'm not sure I fully understood:
>
>
> SG_LIST [ SG1, SG2]
> SG_RULE_LIST = [SG_Rule1, SG_Rule2] ..
> port[SG_ID1, SG_ID2], port2 , port3
>
>
> Probably we may need to include also the SG_IP_LIST = [SG_IP1, SG_IP2] ...
>
>
> and let the agent do all the combination work.
>
> Something like this could make sense?
>
> Security_Groups = {SG1:{IPs:[],RULES:[]},
>SG2:{IPs:[],RULES:[]}
>   }
>
> Ports = {Port1:[SG1, SG2], Port2: [SG1]  }
>
>
> @Edouard, actually I like the idea of having the agent subscribed
> to security groups they have ports on... That would remove the need to
> include
> all the security groups information on every call...
>
> But would need another call to get the full information of a set of
> security groups
> at start/resync if we don't already have any.
>
>
> >
> > On Fri, Jun 20, 2014 at 4:04 AM, shihanzhang < ayshihanzh...@126.com >
> wrote:
> >
> >
> >
> > hi Miguel Ángel,
> > I am very agree with you about the following point:
> > >  * physical implementation on the hosts (ipsets, nftables, ... )
> > --this can reduce the load of compute node.
> > >  * rpc communication mechanisms.
> > -- this can reduce the load of neutron server
> > can you help me to review my BP specs?
> >
> >
> >
> >
> >
> >
> >
> > At 2014-06-19 10:11:34, "Miguel Angel Ajo Pelayo" < mangel...@redhat.com
> >
> > wrote:
> > >
> > >  Hi it's a very interesting topic, I was getting ready to raise
> > >the same concerns about our security groups implementation, shihanzhang
> > >thank you for starting this topic.
> > >
> > >  Not only at low level where (with our default security group
> > >rules -allow all incoming from 'default' sg- the iptable rules
> > >will grow in ~X^2 for a tenant, and, the
> "security_group_rules_for_devices"
> > >rpc call from ovs-agent to neutron-server grows to message sizes of
> >100MB,
> > >generating serious scalability issues or timeouts/retries that
> > >totally break neutron service.
> > >
> > >   (example trace of that RPC call with a few instances
> > > http://www.fpaste.org/104401/14008522/ )
> > >
> > >  I believe that we also need to review the RPC calling mechanism
> > >for the OVS agent here, there are several possible approaches to
> breaking
> > >down (or/and CIDR compressing) the information we return via this api
> call.
> > >
> > >
> > >   So we have to look at two things here:
> > >
> > >  * physical implementation on the hosts (ipsets, nftables, ... )
> > >  * rpc communication mechanisms.
> > >
> > >   Best regards,
> > >Miguel Ángel.
> > >
> > >- Mensaje original -
> > >
> > >> Do you though about nftables that will replace {ip,ip6,arp,eb}tables?
> > >> It also based on the rule set mechanism.
> > >> The issue in that proposition, it's only stable since the begin of the
> > >> year
> > >> and on Linux kernel 3.13.
> > >> But there lot of pros I don't list here (leverage iptables limitation,
> > >> efficient update rule, rule set, standardization of netfilter
> > >> commands...).
> > >
> > >> Édouard.
> > >
> > >> On Thu, Jun 19, 2014 at 8:25 AM, henry hly < henry4...@gmail.com >
> wrote:
> > >
> > >> > we have done some tests, but have different result: the performance
> is
> > >> > nearly
> > >> > the same for empty and 5k rules in iptable, bu

Re: [openstack-dev] Jenkins failure

2014-06-25 Thread Édouard Thuleau
Yes, the wrong mailing list.
Sorry for the noise.

Édouard.


On Wed, Jun 25, 2014 at 6:18 PM, Anita Kuno  wrote:

> On 06/25/2014 12:07 PM, Édouard Thuleau wrote:
> > Hi,
> >
> > I got a Jenkins failure on that small fix [1] on OpenContrail.
> > Here are the last lines of the console output:
> >
> > 2014-06-25 07:02:55
> > RunUnitTest(["build/debug/bgp/rtarget/test/rtarget_table_test.log"],
> > ["build/debug/bgp/rtarget/test/rtarget_table_test"])
> > 2014-06-25 07:02:56
> >
> /home/jenkins/workspace/ci-contrail-controller-unittest/repo/build/debug/bgp/rtarget/test/rtarget_table_test
> > FAIL
> > 2014-06-25 07:02:56 scons: ***
> > [build/debug/bgp/rtarget/test/rtarget_table_test.log] Error -4
> > 2014-06-25 07:02:56 scons: building terminated because of errors.
> > 2014-06-25 07:02:59 Build step 'Execute shell' marked build as failure
> > 2014-06-25 07:03:00 Finished: FAILURE
> >
> > I don't think that failure is related to my patch. What can I do?
> >
> > [1] https://review.opencontrail.org/#/c/526/
> >
> > Regards,
> > Édouard.
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> This seems to be a gerrit system that is not OpenStack's gerrit system.
> Have you tried to evaluate this situation with the maintainers of this
> gerrit system?
>
> There are many reasons it might fail but the maintainers of the gerrit
> system you are using is probably the best place to begin.
>
> If this is a system question that relates to a third party ci system
> that interacts with OpenStack's gerrit, please post to the infra mailing
> list at openstack-in...@lists.openstack.org
>
> Thanks Édouard,
> Anita.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Jenkins failure

2014-06-25 Thread Édouard Thuleau
Hi,

I got a Jenkins failure on that small fix [1] on OpenContrail.
Here are the last lines of the console output:

2014-06-25 07:02:55
RunUnitTest(["build/debug/bgp/rtarget/test/rtarget_table_test.log"],
["build/debug/bgp/rtarget/test/rtarget_table_test"])
2014-06-25 07:02:56
/home/jenkins/workspace/ci-contrail-controller-unittest/repo/build/debug/bgp/rtarget/test/rtarget_table_test
FAIL
2014-06-25 07:02:56 scons: ***
[build/debug/bgp/rtarget/test/rtarget_table_test.log] Error -4
2014-06-25 07:02:56 scons: building terminated because of errors.
2014-06-25 07:02:59 Build step 'Execute shell' marked build as failure
2014-06-25 07:03:00 Finished: FAILURE

I don't think that failure is related to my patch. What can I do?

[1] https://review.opencontrail.org/#/c/526/

Regards,
Édouard.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Performance of security group

2014-06-23 Thread Édouard Thuleau
@Nachi: Yes, that could be a good improvement to factorize the RPC mechanism.

Another idea:
What about creating an RPC topic per security group (what about the RPC topic
scalability, though?) to which an agent subscribes if one of its ports is
associated with the security group? A rough sketch follows.
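A minimal sketch of the idea on the agent side, assuming the
neutron.common.topics helpers of that era; the topic naming and the endpoint
wiring are illustrative only:

from neutron.common import topics

SG_TOPIC = 'security_group'


def subscribe_to_sg_topics(connection, endpoints, port_sg_ids):
    # One fanout consumer per security group the agent hosts ports for,
    # instead of a single global security-group update topic.
    for sg_id in set(port_sg_ids):
        topic = topics.get_topic_name(topics.AGENT,
                                      '%s-%s' % (SG_TOPIC, sg_id),
                                      topics.UPDATE)
        connection.create_consumer(topic, endpoints, fanout=True)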

Regards,
Édouard.



On Fri, Jun 20, 2014 at 4:04 AM, shihanzhang  wrote:

> hi Miguel Ángel,
> I am very agree with you about the following point:
>
> >  * physical implementation on the hosts (ipsets, nftables, ... )
>
> --this can reduce the load of compute node.
> >  * rpc communication mechanisms.
>
>   --this can reduce the load of neutron server
>
> can you help me to review my BP specs?
>
>
>
>
>
>
>
>
> At 2014-06-19 10:11:34, "Miguel Angel Ajo Pelayo"  
> wrote:
> >
> >  Hi it's a very interesting topic, I was getting ready to raise
> >the same concerns about our security groups implementation, shihanzhang
> >thank you for starting this topic.
> >
> >  Not only at low level where (with our default security group
> >rules -allow all incoming from 'default' sg- the iptable rules
> >will grow in ~X^2 for a tenant, and, the "security_group_rules_for_devices"
> >rpc call from ovs-agent to neutron-server grows to message sizes of >100MB,
> >generating serious scalability issues or timeouts/retries that
> >totally break neutron service.
> >
> >   (example trace of that RPC call with a few instances
> >http://www.fpaste.org/104401/14008522/)
> >
> >  I believe that we also need to review the RPC calling mechanism
> >for the OVS agent here, there are several possible approaches to breaking
> >down (or/and CIDR compressing) the information we return via this api call.
> >
> >
> >   So we have to look at two things here:
> >
> >  * physical implementation on the hosts (ipsets, nftables, ... )
> >  * rpc communication mechanisms.
> >
> >   Best regards,
> >Miguel Ángel.
> >
> >- Mensaje original -
> >
> >> Did you think about nftables, which will replace {ip,ip6,arp,eb}tables?
> >> It is also based on the rule set mechanism.
> >> The issue with that proposition is that it has only been stable since the
> >> beginning of the year and requires Linux kernel 3.13.
> >> But there are lots of pros I won't list here (it lifts iptables
> >> limitations, efficient rule updates, rule sets, standardization of
> >> netfilter commands...).
> >
> >> Édouard.
> >
> >> On Thu, Jun 19, 2014 at 8:25 AM, henry hly < henry4...@gmail.com > wrote:
> >
> >> > we have done some tests, but have different result: the performance is
> >> > nearly
> >> > the same for empty and 5k rules in iptable, but huge gap between
> >> > enable/disable iptable hook on linux bridge
> >>
> >
> >> > On Thu, Jun 19, 2014 at 11:21 AM, shihanzhang < ayshihanzh...@126.com >
> >> > wrote:
> >>
> >
> >> > > Now I have not get accurate test data, but I can confirm the following
> >> > > points:
> >> >
> >>
> >> > > 1. In compute node, the iptable's chain of a VM is liner, iptable 
> >> > > filter
> >> > > it
> >> > > one by one, if a VM in default security group and this default security
> >> > > group have many members, but ipset chain is set, the time ipset filter
> >> > > one
> >> > > and many member is not much difference.
> >> >
> >>
> >> > > 2. when the iptable rule is very large, the probability of failure that
> >> > > iptable-save save the iptable rule is very large.
> >> >
> >>
> >
> >> > > At 2014-06-19 10:55:56, "Kevin Benton" < blak...@gmail.com > wrote:
> >> >
> >>
> >
> >> > > > This sounds like a good idea to handle some of the performance issues
> >> > > > until
> >> > > > the ovs firewall can be implemented down the the line.
> >> > >
> >> >
> >>
> >> > > > Do you have any performance comparisons?
> >> > >
> >> >
> >>
> >> > > > On Jun 18, 2014 7:46 PM, "shihanzhang" < ayshihanzh...@126.com > 
> >> > > > wrote:
> >> > >
> >> >
> >>
> >
> >> > > > > Hello all,
> >> > > >
> >> > >
> >> >
> >>
> >
> >> > > > > Now in neutron, it use iptable implementing security group, but the
> >> > > > > performance of this implementation is very poor, there is a bug:
> >> > > > > https://bugs.launchpad.net/neutron/+bug/1302272 to reflect this
> >> > > > > problem.
> >> > > > > In
> >> > > > > his test, w ith default security groups(which has remote security
> >> > > > > group),
> >> > > > > beyond 250-300 VMs, there were around 6k Iptable rules on evry
> >> > > > > compute
> >> > > > > node,
> >> > > > > although his patch can reduce the processing time, but it don't 
> >> > > > > solve
> >> > > > > this
> >> > > > > problem fundamentally. I have commit a BP to solve this problem:
> >> > > > > https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security
> >> > > >
> >> > >
> >> >
> >>
> >> > > > > There are other people interested in this it?
> >> > > >
> >> > >
> >> >
> >>
> >
> >> > > > > ___
> >> > > >
> >> > >
> >> >
> >>
> >> > > > > OpenStack-dev mailing list
> >> > > >
> >> > >
> >> >
> >>
> >> > > > > OpenStack-dev@lists.openstack.org
> >> > > >
> >> > >
> >> >
> >>
>

Re: [openstack-dev] [neutron]Performance of security group

2014-06-19 Thread Édouard Thuleau
Did you think about nftables, which will replace {ip,ip6,arp,eb}tables?
It is also based on the rule set mechanism.
The issue with that proposition is that it has only been stable since the
beginning of the year and requires Linux kernel 3.13.
But there are lots of pros I won't list here (it lifts iptables limitations,
efficient rule updates, rule sets, standardization of netfilter commands...).

Édouard.


On Thu, Jun 19, 2014 at 8:25 AM, henry hly  wrote:

> we have done some tests, but have different result: the performance is
> nearly the same for empty and 5k rules in iptable, but huge gap between
> enable/disable iptable hook on linux bridge
>
>
> On Thu, Jun 19, 2014 at 11:21 AM, shihanzhang 
> wrote:
>
>> Now I have not get accurate test data, but I  can confirm the following
>> points:
>> 1. In compute node, the iptable's chain of a VM is liner, iptable filter
>> it one by one, if a VM in default security group and this default security
>> group have many members, but ipset chain is set, the time ipset filter one
>> and many member is not much difference.
>> 2. when the iptable rule is very large, the probability of  failure  that  
>> iptable-save
>> save the iptable rule  is very large.
>>
>>
>>
>>
>>
>> At 2014-06-19 10:55:56, "Kevin Benton"  wrote:
>>
>> This sounds like a good idea to handle some of the performance issues
>> until the ovs firewall can be implemented down the the line.
>> Do you have any performance comparisons?
>> On Jun 18, 2014 7:46 PM, "shihanzhang"  wrote:
>>
>>> Hello all,
>>>
>>> Now in neutron, it use iptable implementing security group, but the
>>> performance of this  implementation is very poor, there is a bug:
>>> https://bugs.launchpad.net/neutron/+bug/1302272 to reflect this
>>> problem. In his test, with default security groups(which has remote
>>> security group), beyond 250-300 VMs, there were around 6k Iptable rules on
>>> evry compute node, although his patch can reduce the processing time, but
>>> it don't solve this problem fundamentally. I have commit a BP to solve
>>> this problem:
>>> https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security
>>> 
>>> There are other people interested in this it?
>>>
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] OVS 2.1.0 is available but not the Neutron ARP responder

2014-03-25 Thread Édouard Thuleau
Hi all,

As promised, here is the blog post [1] about running devstack in containers.

[1]
http://dev.cloudwatt.com/en/blog/running-devstack-into-linux-containers.html

Regards,
Edouard.
On 21 March 2014 14:12, "Kyle Mestery"  wrote:

> Getting this type of functional testing into the gate would be pretty
> phenomenal.
> Thanks for your continued efforts here Mathieu! If there is anything I can
> do to
> help here, let me know. One other concern here is that the infra team may
> have
> issues running a version of OVS which isn't packaged into Ubuntu/CentOS.
> Keep
> that in mind as well.
>
> Edourard, I look forward to your blog, please share it here once you've
> written it!
>
> Thanks,
> Kyle
>
>
>
> On Fri, Mar 21, 2014 at 6:15 AM, Édouard Thuleau wrote:
>
>> Thanks Mathieu for your support and work on the CI to enable multi-node.
>>
>> I wrote a blog post about how to run a devstack development environment
>> with LXC.
>> I hope it will be published next week.
>>
>> Just adding a pointer: OVS has supported network namespaces for 2 years
>> now [1].
>>
>> [1]
>> http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=commitdiff;h=2a4999f3f33467f4fa22ed6e5b06350615fb2dac
>>
>> Regards,
>> Édouard.
>>
>>
>> On Fri, Mar 21, 2014 at 11:31 AM, Mathieu Rohon 
>> wrote:
>>
>>> Hi edouard,
>>>
>>> thanks for the information. I would love to see your patch getting
>>> merged to have l2-population MD fully functional with an OVS based
>>> deployment. Moreover, this patch has a minimal impact on neutron,
>>> since the code is used only if l2-population MD is used in the ML2
>>> plugin.
>>>
>>> markmcclain was concerned that no functional testing is done, but
>>> L2-population MD needs mutlinode deployment to be tested. A deployment
>>> based on a single VM won't create overlay tunnels, which is a
>>> mandatory technology to have l2-population activated.
>>> The Opensatck-CI is not able, for the moment, to run job based on
>>> multi-node deployment. We proposed an evolution of devstack to have a
>>> multinode deployment based on a single VM which launch compute nodes
>>> in LXC containers [1], but this evolution has been refused by
>>> Opensatck-CI since there is other ways to run multinode setup with
>>> devstack, and LXC container is not compatible with iscsi and probably
>>> ovs [2][3].
>>>
>>> One way to have functional test for this feature would be to deploy
>>> 3rd party testing environment, but it would be a pity to have to
>>> maintain a 3rd party to test some functionalities which are not based
>>> on 3rd party equipments. So we are currently learning about the
>>> Openstack-CI tools to propose some evolutions to have mutinode setup
>>> inside the gate [4]. There are a lot of way to implement it
>>> (node-pools evolution, usage of tripleO, of Heat [5]), and we don't
>>> know which one would be the easiest, and so the one we have to work on
>>> to have the multinode feature available ASAP.
>>>
>>> This feature looks very important for Neutron, at least to test
>>> overlay tunneling. I thinks it's very important for nova too, to test
>>> live-migration.
>>>
>>>
>>> [1]https://blueprints.launchpad.net/devstack/+spec/lxc-computes
>>> [2]https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1226855
>>> [3]
>>> http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-02-18-19.01.log.html
>>> [4]
>>> https://www.mail-archive.com/openstack-infra@lists.openstack.org/msg00968.html
>>> [5]
>>> http://lists.openstack.org/pipermail/openstack-infra/2013-July/000128.html
>>>
>>> On Fri, Mar 21, 2014 at 10:08 AM, Édouard Thuleau 
>>> wrote:
>>> > Hi,
>>> >
>>> > Just to inform you that the new OVS release 2.1.0 was done yesterday
>>> [1].
>>> > This release contains new features and significant performance
>>> improvements
>>> > [2].
>>> >
>>> > And in that new features, one [3] was use to add local ARP responder
>>> with
>>> > OVS agent and the plugin ML2 with the MD l2-pop [4]. Perhaps, it's
>>> time to
>>> > reconsider that review?
>>> >
>>> > [1] https://www.mail-archive.com/discuss@openvswitch.org/msg09251.html
>>> > [2] http://openvswitch.org/releases/NEWS-2.1.0
>>> > [3]

Re: [openstack-dev] [Neutron] OVS 2.1.0 is available but not the Neutron ARP responder

2014-03-21 Thread Édouard Thuleau
Thanks Mathieu for your support and work on the CI to enable multi-node.

I wrote a blog post about how to run a devstack development environment with
LXC.
I hope it will be published next week.

Just adding a pointer: OVS has supported network namespaces for 2 years now
[1].

[1]
http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=commitdiff;h=2a4999f3f33467f4fa22ed6e5b06350615fb2dac

Regards,
Édouard.


On Fri, Mar 21, 2014 at 11:31 AM, Mathieu Rohon wrote:

> Hi edouard,
>
> thanks for the information. I would love to see your patch getting
> merged to have l2-population MD fully functional with an OVS based
> deployment. Moreover, this patch has a minimal impact on neutron,
> since the code is used only if l2-population MD is used in the ML2
> plugin.
>
> markmcclain was concerned that no functional testing is done, but
> L2-population MD needs mutlinode deployment to be tested. A deployment
> based on a single VM won't create overlay tunnels, which is a
> mandatory technology to have l2-population activated.
> The Opensatck-CI is not able, for the moment, to run job based on
> multi-node deployment. We proposed an evolution of devstack to have a
> multinode deployment based on a single VM which launch compute nodes
> in LXC containers [1], but this evolution has been refused by
> Opensatck-CI since there is other ways to run multinode setup with
> devstack, and LXC container is not compatible with iscsi and probably
> ovs [2][3].
>
> One way to have functional test for this feature would be to deploy
> 3rd party testing environment, but it would be a pity to have to
> maintain a 3rd party to test some functionalities which are not based
> on 3rd party equipments. So we are currently learning about the
> Openstack-CI tools to propose some evolutions to have mutinode setup
> inside the gate [4]. There are a lot of way to implement it
> (node-pools evolution, usage of tripleO, of Heat [5]), and we don't
> know which one would be the easiest, and so the one we have to work on
> to have the multinode feature available ASAP.
>
> This feature looks very important for Neutron, at least to test
> overlay tunneling. I thinks it's very important for nova too, to test
> live-migration.
>
>
> [1]https://blueprints.launchpad.net/devstack/+spec/lxc-computes
> [2]https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1226855
> [3]
> http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-02-18-19.01.log.html
> [4]
> https://www.mail-archive.com/openstack-infra@lists.openstack.org/msg00968.html
> [5]
> http://lists.openstack.org/pipermail/openstack-infra/2013-July/000128.html
>
> On Fri, Mar 21, 2014 at 10:08 AM, Édouard Thuleau 
> wrote:
> > Hi,
> >
> > Just to inform you that the new OVS release 2.1.0 was published yesterday
> > [1]. This release contains new features and significant performance
> > improvements [2].
> >
> > Among those new features, one [3] was used to add a local ARP responder
> > to the OVS agent with the ML2 plugin and the l2-pop MD [4]. Perhaps it's
> > time to reconsider that review?
> >
> > [1] https://www.mail-archive.com/discuss@openvswitch.org/msg09251.html
> > [2] http://openvswitch.org/releases/NEWS-2.1.0
> > [3]
> >
> http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=commitdiff;h=f6c8a6b163af343c66aea54953553d84863835f7
> > [4] https://review.openstack.org/#/c/49227/
> >
> > Regards,
> > Édouard.
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] OVS 2.1.0 is available but not the Neutron ARP responder

2014-03-21 Thread Édouard Thuleau
Hi,

Just to inform you that the new OVS release, 2.1.0, came out yesterday [1].
This release contains new features and significant performance improvements
[2].

Among those new features, one [3] was used to add a local ARP responder to
the OVS agent with the ML2 plugin and the l2-pop MD [4]. Perhaps it's time to
reconsider that review?

[1] https://www.mail-archive.com/discuss@openvswitch.org/msg09251.html
[2] http://openvswitch.org/releases/NEWS-2.1.0
[3]
http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=commitdiff;h=f6c8a6b163af343c66aea54953553d84863835f7
[4] https://review.openstack.org/#/c/49227/
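
For reference, the kind of rule that review [4] would install on br-tun looks
roughly like the following. This is a hand-written sketch, not the exact flow
from the patch: the table number, VLAN, MAC and IP values are made up, and it
assumes the NXM ARP move/load actions from [3] are available (hence OVS 2.1):

# Answer ARP requests for 10.0.0.5 locally instead of flooding them over
# the tunnels (example values only).
ovs-ofctl add-flow br-tun "table=21,priority=1,dl_vlan=1,arp,arp_tpa=10.0.0.5,\
actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],mod_dl_src:fa:16:3e:00:00:01,\
load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],\
move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0xfa163e000001->NXM_NX_ARP_SHA[],\
load:0x0a000005->NXM_OF_ARP_SPA[],in_port"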

Regards,
Édouard.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] FFE request: L3 HA VRRP

2014-03-07 Thread Édouard Thuleau
+1
I think it should merge as experimental for Icehouse, to let the community
try it and stabilize it during the Juno cycle. Then, for the Juno release, we
will be able to announce it as stable.

Furthermore, the next piece of work will be to distribute the L3 functions at
the edge (compute nodes), called DVR, and this VRRP work will still be needed
for that [1]. So if we merge L3 HA VRRP as experimental in I to be stable in
J, we could also propose an experimental DVR solution for J and a stable one
for K.

[1]
https://docs.google.com/drawings/d/1GGwbLa72n8c2T3SBApKK7uJ6WLTSRa7erTI_3QNj5Bg/edit

Regards,
Édouard.


On Thu, Mar 6, 2014 at 4:27 PM, Sylvain Afchain <
sylvain.afch...@enovance.com> wrote:

> Hi all,
>
> I would like to request a FFE for the following patches of the L3 HA VRRP
> BP :
>
> https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
>
> https://review.openstack.org/#/c/64553/
> https://review.openstack.org/#/c/66347/
> https://review.openstack.org/#/c/68142/
> https://review.openstack.org/#/c/70700/
>
> These should be low risk since HA is not enabled by default.
> The server side code has been developed as an extension which minimizes
> risk.
> The agent side code introduces a bit more changes but only to filter
> whether to apply the
> new HA behavior.
>
> I think it's a good idea to have this feature in Icehouse, perhaps even
> marked as experimental,
> especially considering the demand for HA in real world deployments.
>
> Here is a doc to test it :
>
>
> https://docs.google.com/document/d/1P2OnlKAGMeSZTbGENNAKOse6B2TRXJ8keUMVvtUCUSM/edit#heading=h.xjip6aepu7ug
>
> -Sylvain
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2]

2014-03-07 Thread Édouard Thuleau
Yes, being able to load extensions from a mechanism driver sounds good.

But another problem I think we have with the ML2 plugin is the list of
extensions it supports by default [1].
Extensions should only be loaded by MDs, and the ML2 plugin itself should
only implement the Neutron core API.
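
For illustration only, something along these lines is what I have in mind;
this is purely hypothetical, no such attribute exists in the ML2 plugin today:

# Hypothetical sketch: a mechanism driver declares the extension aliases it
# implements, and the plugin aggregates them instead of hard-coding
# _supported_extension_aliases.
from neutron.plugins.ml2 import driver_api as api

class MyVendorMechanismDriver(api.MechanismDriver):
    # aliases this driver would add to the plugin's supported extensions
    supported_extension_aliases = ['my-vendor-profile']

    def initialize(self):
        pass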

Any thoughts?
Édouard.

[1]
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L87



On Fri, Mar 7, 2014 at 8:32 AM, Akihiro Motoki  wrote:

> Hi,
>
> I think it is better to continue the discussion here. It is a good log :-)
>
> Eugine and I talked the related topic to allow drivers to load
> extensions)  in Icehouse Summit
> but I could not have enough time to work on it during Icehouse.
> I am still interested in implementing it and will register a blueprint on
> it.
>
> etherpad in icehouse summit has baseline thought on how to achieve it.
> https://etherpad.openstack.org/p/icehouse-neutron-vendor-extension
> I hope it is a good start point of the discussion.
>
> Thanks,
> Akihiro
>
> On Fri, Mar 7, 2014 at 4:07 PM, Nader Lahouti 
> wrote:
> > Hi Kyle,
> >
> > Just wanted to clarify: Should I continue using this mailing list to
> post my
> > question/concerns about ML2? Please advise.
> >
> > Thanks,
> > Nader.
> >
> >
> >
> > On Thu, Mar 6, 2014 at 1:50 PM, Kyle Mestery 
> > wrote:
> >>
> >> Thanks Edgar, I think this is the appropriate place to continue this
> >> discussion.
> >>
> >>
> >> On Thu, Mar 6, 2014 at 2:52 PM, Edgar Magana 
> wrote:
> >>>
> >>> Nader,
> >>>
> >>> I would encourage you to first discuss the possible extension with the
> >>> ML2 team. Rober and Kyle are leading this effort and they have a IRC
> meeting
> >>> every week:
> >>> https://wiki.openstack.org/wiki/Meetings#ML2_Network_sub-team_meeting
> >>>
> >>> Bring your concerns on this meeting and get the right feedback.
> >>>
> >>> Thanks,
> >>>
> >>> Edgar
> >>>
> >>> From: Nader Lahouti 
> >>> Reply-To: OpenStack List 
> >>> Date: Thursday, March 6, 2014 12:14 PM
> >>> To: OpenStack List 
> >>> Subject: Re: [openstack-dev] [Neutron][ML2]
> >>>
> >>> Hi Aaron,
> >>>
> >>> I appreciate your reply.
> >>>
> >>> Here is some more details on what I'm trying to do:
> >>> I need to add new attribute to the network resource using extensions
> >>> (i.e. network config profile) and use it in the mechanism driver (in
> the
> >>> create_network_precommit/postcommit).
> >>> If I use current implementation of Ml2Plugin, when a call is made to
> >>> mechanism driver's create_network_precommit/postcommit the new
> attribute is
> >>> not included in the 'mech_context'
> >>> Here is code from Ml2Plugin:
> >>> class Ml2Plugin(...):
> >>> ...
> >>>def create_network(self, context, network):
> >>> net_data = network['network']
> >>> ...
> >>> with session.begin(subtransactions=True):
> >>> self._ensure_default_security_group(context, tenant_id)
> >>> result = super(Ml2Plugin, self).create_network(context,
> >>> network)
> >>> network_id = result['id']
> >>> ...
> >>> mech_context = driver_context.NetworkContext(self, context,
> >>> result)
> >>>
> self.mechanism_manager.create_network_precommit(mech_context)
> >>>
> >>> Also need to include new extension in the
>  _supported_extension_aliases.
> >>>
> >>> So to avoid changes in the existing code, I was going to create my own
> >>> plugin (which will be very similar to Ml2Plugin) and use it as
> core_plugin.
> >>>
> >>> Please advise the right solution implementing that.
> >>>
> >>> Regards,
> >>> Nader.
> >>>
> >>>
> >>> On Wed, Mar 5, 2014 at 11:49 PM, Aaron Rosen 
> >>> wrote:
> 
>  Hi Nader,
> 
>  Devstack's default plugin is ML2. Usually you wouldn't 'inherit' one
>  plugin in another. I'm guessing  you probably wire a driver that ML2
> can use
>  though it's hard to tell from the information you've provided what
> you're
>  trying to do.
> 
>  Best,
> 
>  Aaron
> 
> 
>  On Wed, Mar 5, 2014 at 10:42 PM, Nader Lahouti <
> nader.laho...@gmail.com>
>  wrote:
> >
> > Hi All,
> >
> > I have a question regarding ML2 plugin in neutron:
> > My understanding is that, 'Ml2Plugin' is the default core_plugin for
> > neutron ML2. We can use either the default plugin or our own plugin
> (i.e.
> > my_ml2_core_plugin that can be inherited from Ml2Plugin) and use it
> as
> > core_plugin.
> >
> > Is my understanding correct?
> >
> >
> > Regards,
> > Nader.
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> 
>  ___
>  OpenStack-dev mailing list
>  OpenStack-dev@lists.openstack.org
>  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> >>>
> >>> 

Re: [openstack-dev] [Neutron] Does l2-pop sync fdb on agent start ?

2014-02-27 Thread Édouard Thuleau
Yes, the agent syncs the fdb on startup thanks to the flag 'agent_boot_time'
(default 180 seconds).
The plugin compares it with the agent uptime (the difference between the
agent start time (agent.started_at) and its last heartbeat timestamp), and if
the uptime is smaller, the plugin sends all the fdb entries of the network.
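
In other words, the plugin-side check is roughly the following (a simplified
sketch, not the exact l2-pop code):

# Simplified sketch of the plugin-side decision; agent_boot_time is the
# option from the [l2pop] section (default 180 seconds).
def agent_restarted(agent, agent_boot_time=180):
    # 'started_at' and 'heartbeat_timestamp' are datetimes reported by the agent
    uptime = agent['heartbeat_timestamp'] - agent['started_at']
    # a small uptime means the agent just (re)started, so the plugin sends
    # the whole fdb of the network instead of only the incremental entries
    return uptime.total_seconds() < agent_boot_time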

Édouard.


On Thu, Feb 27, 2014 at 4:53 AM, Zang MingJie  wrote:

> Hi all,
>
> I found my ovs-agent has missed some tunnels on br-tun. I have l2-pop
> enabled, if some fdb entries is added while the agent is down, can it be
> added back once the agent is back ?
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Can somebody describe the all the rolls about networks' admin_state_up

2014-02-25 Thread Édouard Thuleau
A thread [1] was also initiated on the ML by Sylvain, but there have been no
answers/comments so far.

[1] http://openstack.markmail.org/thread/qy6ikldtq2o4imzl

Édouard.


On Mon, Feb 24, 2014 at 9:35 AM, 黎林果  wrote:

> Thanks you very much.
>
> "IMHO when admin_state_up is false that entity should be down, meaning
> network should be down.
> otherwise what it the usage of admin_state_up ? same is true for port
> admin_state_up"
>
> It likes switch's power button?
>
> 2014-02-24 16:03 GMT+08:00 Assaf Muller :
> >
> >
> > - Original Message -
> >> Hi,
> >>
> >> I want to know the admin_state_up attribute about networks but I
> >> have not found any describes.
> >>
> >> Can you help me to understand it? Thank you very much.
> >>
> >
> > There's a discussion about this in this bug [1].
> > From what I gather, nobody knows what admin_state_up is actually supposed
> > to do with respect to networks.
> >
> > [1] https://bugs.launchpad.net/neutron/+bug/1237807
> >
> >>
> >> Regard,
> >>
> >> Lee Li
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Ensure that configured gateway is on subnet by default

2014-02-20 Thread Édouard Thuleau
Ah yes, I completely forgot about the IPv6 case.
Sorry, please disregard this thread.

Édouard.


On Thu, Feb 20, 2014 at 3:34 PM, Veiga, Anthony <
anthony_ve...@cable.comcast.com> wrote:

>  This would break IPv6.  The gateway address, according to RFC 4861[1]
> Section 4.2 regarding Router Advertisements: "Source Address MUST be the
> link-local address assigned to the interface from which this message is
> sent".  This means that if you configure a subnet with a Globally Unique
> Address scope, the gateway by definition cannot be in the configured
> subnet.  Please don't force this option, as it will break work going on in
> the Neutron IPv6 sub-team.
> -Anthony
>
>  [1] http://tools.ietf.org/html/rfc4861
>
>   Hi,
>
>  Neutron permits to set a gateway IP outside of the subnet cidr by
> default. And, thanks to the garyk's patch [1], it's possible to change this
> default behavior with config flag 'force_gateway_on_subnet'.
>
>  This flag was added to keep the backward compatibility for people who
> need to set the gateway outside of the subnet.
>
>  I think this behavior does not reflect the classic usage of subnets. So
> I propose to update the default value of the flag 'force_gateway_on_subnet'
> to True.
>
>  Any thought?
>
>  [1] https://review.openstack.org/#/c/19048/
>
>  Regards,
> Édouard.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Ensure that configured gateway is on subnet by default

2014-02-20 Thread Édouard Thuleau
Looking back, perhaps we should remove that flag and only authorize the
admin user to set the gateway IP outside of the subnet CIDR (for tricky
network setups), just as only the admin user can create provider networks.
And require regular users to set the gateway IP inside the subnet CIDR.

Édouard.


On Thu, Feb 20, 2014 at 3:15 PM, Édouard Thuleau  wrote:

> Hi,
>
> Neutron permits to set a gateway IP outside of the subnet cidr by default.
> And, thanks to the garyk's patch [1], it's possible to change this default
> behavior with config flag 'force_gateway_on_subnet'.
>
> This flag was added to keep the backward compatibility for people who need
> to set the gateway outside of the subnet.
>
> I think this behavior does not reflect the classic usage of subnets. So I
> propose to update the default value of the flag 'force_gateway_on_subnet'
> to True.
>
> Any thought?
>
> [1] https://review.openstack.org/#/c/19048/
>
> Regards,
> Édouard.
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Ensure that configured gateway is on subnet by default

2014-02-20 Thread Édouard Thuleau
Hi,

Neutron permits setting a gateway IP outside of the subnet CIDR by default.
And, thanks to garyk's patch [1], it's possible to change this default
behavior with the config flag 'force_gateway_on_subnet'.

This flag was added to keep backward compatibility for people who need
to set the gateway outside of the subnet.

I think this behavior does not reflect the typical usage of subnets, so I
propose updating the default value of the flag 'force_gateway_on_subnet'
to True.
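
Deployments that really do need a gateway outside of the subnet CIDR could
still opt out once the default flips, e.g. (sketch):

# neutron.conf — keep the old behaviour explicitly (sketch)
[DEFAULT]
force_gateway_on_subnet = False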

Any thoughts?

[1] https://review.openstack.org/#/c/19048/

Regards,
Édouard.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] ARP Proxy in l2-population Mechanism Driver for OVS

2014-02-13 Thread Édouard Thuleau
Hi,

On Havana, a local ARP responder is available if you use ML2 with the
l2-pop MD and the Linux Bridge agent (it is natively implemented by the
Linux kernel VXLAN module).
It's not (yet [1]) available with the OVS agent. The proposed OVS
implementation uses new OVS flows integrated in the 2.1 branch.
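
For anyone who wants to try the Linux Bridge variant on Havana, the
configuration is roughly the following (a sketch only; adapt the VNI range
and local_ip to your deployment, and put the [vxlan] section in the Linux
Bridge agent's config file):

# ml2_conf.ini (server side) — sketch
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population

[ml2_type_vxlan]
vni_ranges = 1001:2000

# Linux Bridge agent — sketch
[vxlan]
enable_vxlan = True
local_ip = 192.0.2.10
l2_population = True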

Just a few remarks about the ML2 l2-pop MD. Two important bugs persist:
- One [2] impacts the l2-pop MD with all agents (Linux Bridge and OVS). It is
merged on trunk and waiting to be backported [3].
- Another one [4] impacts only the OVS agent and is still waiting for review.

[1] https://review.openstack.org/#/c/49227/
[2] https://review.openstack.org/#/c/63913/
[3] https://review.openstack.org/#/c/71821/
[4] https://review.openstack.org/#/c/63917/

Édouard.


On Thu, Feb 13, 2014 at 4:57 AM, Nick Ma  wrote:

> Hi all,
>
> I'm running a OpenStack Havana cloud on pre-production stage using
> Neutron ML2 VxLAN. I'd like to incorporate l2-population to get rid of
> tunnel broadcast.
>
> However, it seems that ARP Proxy has NOT been implemented yet for Open
> vSwitch for Havana and also the latest master branch.
>
> I find that ebtables arpreply can do it and then put some corresponding
> flow rules into OVS.
>
> Could anyone provide more hints on how to implement it in l2-pop?
>
> thanks,
>
> --
>
> Nick Ma
> skywalker.n...@gmail.com
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [ML2] l2-pop bugs review

2014-02-07 Thread Édouard Thuleau
Thanks to you, Kyle.
I proposed a backport [1] of review 63913, which you approved.

[1] https://review.openstack.org/#/c/71821/

Édouard.


On Thu, Feb 6, 2014 at 6:04 PM, Kyle Mestery wrote:

>
> On Feb 6, 2014, at 3:09 AM, Édouard Thuleau  wrote:
>
> > Hi all,
> >
> > Just to point 2 reviews [1] & [2] I submitted to correct l2-pop
> > mechanism driver into the ML2 plugin.
> > I had some reviews and +1 but they doesn't progress anymore.
> > Could you check them ?
> > I also like to backport them for stable Havana branch.
> >
> > [1] https://review.openstack.org/#/c/63917/
> > [2] https://review.openstack.org/#/c/63913/
> >
> Hi Edouard:
>
> I'll take a look at these later today, thanks for bringing them to
> my attention!
>
> Kyle
>
> > Thanks,
> > Édouard.
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] [ML2] l2-pop bugs review

2014-02-06 Thread Édouard Thuleau
Hi all,

Just to point out 2 reviews [1] & [2] I submitted to fix the l2-pop
mechanism driver in the ML2 plugin.
They got some reviews and +1s but they aren't progressing anymore.
Could you check them?
I would also like to backport them to the stable Havana branch.

[1] https://review.openstack.org/#/c/63917/
[2] https://review.openstack.org/#/c/63913/

Thanks,
Édouard.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DHCP Agent Reliability

2013-12-04 Thread Édouard Thuleau
There is also another bug you could link to / mark as a duplicate of
#1192381: https://bugs.launchpad.net/neutron/+bug/1185916.
I proposed a fix, but it was not the right approach, so I abandoned it.

Édouard.

On Wed, Dec 4, 2013 at 10:43 PM, Carl Baldwin  wrote:
> I have offered up https://review.openstack.org/#/c/60082/ as a
> backport to Havana.  Interest was expressed in the blueprint for doing
> this even before this thread.  If there is consensus for this as the
> stop-gap then it is there for the merging.  However, I do not want to
> discourage discussion of other stop-gap solutions like what Maru
> proposed in the original post.
>
> Carl
>
> On Wed, Dec 4, 2013 at 9:12 AM, Ashok Kumaran  
> wrote:
>>
>>
>>
>> On Wed, Dec 4, 2013 at 8:30 PM, Maru Newby  wrote:
>>>
>>>
>>> On Dec 4, 2013, at 8:55 AM, Carl Baldwin  wrote:
>>>
>>> > Stephen, all,
>>> >
>>> > I agree that there may be some opportunity to split things out a bit.
>>> > However, I'm not sure what the best way will be.  I recall that Mark
>>> > mentioned breaking out the processes that handle API requests and RPC
>>> > from each other at the summit.  Anyway, it is something that has been
>>> > discussed.
>>> >
>>> > I actually wanted to point out that the neutron server now has the
>>> > ability to run a configurable number of sub-processes to handle a
>>> > heavier load.  Introduced with this commit:
>>> >
>>> > https://review.openstack.org/#/c/37131/
>>> >
>>> > Set api_workers to something > 1 and restart the server.
>>> >
>>> > The server can also be run on more than one physical host in
>>> > combination with multiple child processes.
>>>
>>> I completely misunderstood the import of the commit in question.  Being
>>> able to run the wsgi server(s) out of process is a nice improvement, thank
>>> you for making it happen.  Has there been any discussion around making the
>>> default for api_workers > 0 (at least 1) to ensure that the default
>>> configuration separates wsgi and rpc load?  This also seems like a great
>>> candidate for backporting to havana and maybe even grizzly, although
>>> api_workers should probably be defaulted to 0 in those cases.
>>
>>
>> +1 for backporting the api_workers feature to havana as well as Grizzly :)
>>>
>>>
>>> FYI, I re-ran the test that attempted to boot 75 micro VM's simultaneously
>>> with api_workers = 2, with mixed results.  The increased wsgi throughput
>>> resulted in almost half of the boot requests failing with 500 errors due to
>>> QueuePool errors (https://bugs.launchpad.net/neutron/+bug/1160442) in
>>> Neutron.  It also appears that maximizing the number of wsgi requests has
>>> the side-effect of increasing the RPC load on the main process, and this
>>> means that the problem of dhcp notifications being dropped is little
>>> improved.  I intend to submit a fix that ensures that notifications are sent
>>> regardless of agent status, in any case.
>>>
>>>
>>> m.
>>>
>>> >
>>> > Carl
>>> >
>>> > On Tue, Dec 3, 2013 at 9:47 AM, Stephen Gran
>>> >  wrote:
>>> >> On 03/12/13 16:08, Maru Newby wrote:
>>> >>>
>>> >>> I've been investigating a bug that is preventing VM's from receiving
>>> >>> IP
>>> >>> addresses when a Neutron service is under high load:
>>> >>>
>>> >>> https://bugs.launchpad.net/neutron/+bug/1192381
>>> >>>
>>> >>> High load causes the DHCP agent's status updates to be delayed,
>>> >>> causing
>>> >>> the Neutron service to assume that the agent is down.  This results in
>>> >>> the
>>> >>> Neutron service not sending notifications of port addition to the DHCP
>>> >>> agent.  At present, the notifications are simply dropped.  A simple
>>> >>> fix is
>>> >>> to send notifications regardless of agent status.  Does anybody have
>>> >>> any
>>> >>> objections to this stop-gap approach?  I'm not clear on the
>>> >>> implications of
>>> >>> sending notifications to agents that are down, but I'm hoping for a
>>> >>> simple
>>> >>> fix that can be backported to both havana and grizzly (yes, this bug
>>> >>> has
>>> >>> been with us that long).
>>> >>>
>>> >>> Fixing this problem for real, though, will likely be more involved.
>>> >>> The
>>> >>> proposal to replace the current wsgi framework with Pecan may increase
>>> >>> the
>>> >>> Neutron service's scalability, but should we continue to use a 'fire
>>> >>> and
>>> >>> forget' approach to notification?  Being able to track the success or
>>> >>> failure of a given action outside of the logs would seem pretty
>>> >>> important,
>>> >>> and allow for more effective coordination with Nova than is currently
>>> >>> possible.
>>> >>
>>> >>
>>> >> It strikes me that we ask an awful lot of a single neutron-server
>>> >> instance -
>>> >> it has to take state updates from all the agents, it has to do
>>> >> scheduling,
>>> >> it has to respond to API requests, and it has to communicate about
>>> >> actual
>>> >> changes with the agents.
>>> >>
>>> >> Maybe breaking some of these out the way nova has a scheduler and a
>>> >> conductor and so on might be a good model (I

Re: [openstack-dev] Reg : Security groups implementation using openflows in quantum ovs plugin

2013-11-30 Thread Édouard Thuleau
And what do you think about the performance issue I mentioned?
Do you have any thoughts on improving the wildcarding so we can benefit from
the megaflow feature?

Édouard.

On Fri, Nov 29, 2013 at 1:11 PM, Zang MingJie  wrote:
> On Fri, Nov 29, 2013 at 2:25 PM, Jian Wen  wrote:
>> I don't think we can implement a stateful firewall[1] now.
>
> I don't think we need a stateful firewall, a stateless one should work
> well. If the stateful conntrack is completed in the future, we can
> also take benefit from it.
>
>>
>> Once connection tracking capability[2] is added to the Linux OVS, we
>> could start to implement the ovs-firewall-driver blueprint.
>>
>> [1] http://en.wikipedia.org/wiki/Stateful_firewall
>> [2]
>> http://wiki.xenproject.org/wiki/Xen_Development_Projects#Add_connection_tracking_capability_to_the_Linux_OVS
>>
>>
>> On Tue, Nov 26, 2013 at 2:23 AM, Mike Wilson  wrote:
>>>
>>> Adding Jun to this thread since gmail is failing him.
>>>
>>>
>>> On Tue, Nov 19, 2013 at 10:44 AM, Amir Sadoughi
>>>  wrote:

 Yes, my work has been on ML2 with neutron-openvswitch-agent.  I’m
 interested to see what Jun Park has. I might have something ready before he
 is available again, but would like to collaborate regardless.

 Amir



 On Nov 19, 2013, at 3:31 AM, Kanthi P  wrote:

 Hi All,

 Thanks for the response!
 Amir,Mike: Is your implementation being done according to ML2 plugin

 Regards,
 Kanthi


 On Tue, Nov 19, 2013 at 1:43 AM, Mike Wilson 
 wrote:
>
> Hi Kanthi,
>
> Just to reiterate what Kyle said, we do have an internal implementation
> using flows that looks very similar to security groups. Jun Park was the 
> guy
> that wrote this and is looking to get it upstreamed. I think he'll be back
> in the office late next week. I'll point him to this thread when he's 
> back.
>
> -Mike
>
>
> On Mon, Nov 18, 2013 at 3:39 PM, Kyle Mestery (kmestery)
>  wrote:
>>
>> On Nov 18, 2013, at 4:26 PM, Kanthi P 
>> wrote:
>> > Hi All,
>> >
>> > We are planning to implement quantum security groups using openflows
>> > for ovs plugin instead of iptables which is the case now.
>> >
>> > Doing so we can avoid the extra linux bridge which is connected
>> > between the vnet device and the ovs bridge, which is given as a work 
>> > around
>> > since ovs bridge is not compatible with iptables.
>> >
>> > We are planning to create a blueprint and work on it. Could you
>> > please share your views on this
>> >
>> Hi Kanthi:
>>
>> Overall, this idea is interesting and removing those extra bridges
>> would certainly be nice. Some people at Bluehost gave a talk at the 
>> Summit
>> [1] in which they explained they have done something similar, you may 
>> want
>> to reach out to them since they have code for this internally already.
>>
>> The OVS plugin is in feature freeze during Icehouse, and will be
>> deprecated in favor of ML2 [2] at the end of Icehouse. I would advise 
>> you to
>> retarget your work at ML2 when running with the OVS agent instead. The
>> Neutron team will not accept new features into the OVS plugin anymore.
>>
>> Thanks,
>> Kyle
>>
>> [1]
>> http://www.openstack.org/summit/openstack-summit-hong-kong-2013/session-videos/presentation/towards-truly-open-and-commoditized-software-defined-networks-in-openstack
>> [2] https://wiki.openstack.org/wiki/Neutron/ML2
>>
>> > Thanks,
>> > Kanthi
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> --
>> Cheers,
>> Jian
>>
>> ___
>> OpenStack-dev

Re: [openstack-dev] Fw: [Neutron][IPv6] Meeting logs from the first IRC meeting

2013-11-28 Thread Édouard Thuleau
A temporary workaround while waiting for a fix for bug:
https://bugs.launchpad.net/nova/+bug/1112912
(https://review.openstack.org/#/c/21946/)

diff --git a/nova/virt/libvirt/vif.py b/nova/virt/libvirt/vif.py
index 5bf0dba..5fd041c 100644
--- a/nova/virt/libvirt/vif.py
+++ b/nova/virt/libvirt/vif.py
@@ -159,7 +159,7 @@ class LibvirtGenericVIFDriver(LibvirtBaseVIFDriver):
         # has already applied firewall filtering itself.
         if CONF.firewall_driver != "nova.virt.firewall.NoopFirewallDriver":
             return True
-        return False
+        return True
 
     def get_config_bridge(self, instance, vif, image_meta, inst_type):
         """Get VIF configurations for bridge type."""
@@ -173,8 +173,8 @@ class LibvirtGenericVIFDriver(LibvirtBaseVIFDriver):
 
         mac_id = vif['address'].replace(':', '')
         name = "nova-instance-" + instance['name'] + "-" + mac_id
-        if self.get_firewall_required():
-            conf.filtername = name
         designer.set_vif_bandwidth_config(conf, inst_type)
 
         return conf

On Tue, Nov 26, 2013 at 10:42 PM, Collins, Sean (Contractor)
 wrote:
> On Tue, Nov 26, 2013 at 06:07:07PM +0800, Da Zhao Y Yu wrote:
>> Sean, what about your progress? I saw your code change jekins still in
>> failed status.
>
> Hi,
>
> I've been busy tracking down the IPv6 issue in our lab environment -
> we were using the Hybrid OVS driver in our Nova.conf and that was
> breaking IPv6 - so we changed over to the Generic VIF driver,
> only to hit the bug https://bugs.launchpad.net/devstack/+bug/1252620
> where the Security Group API doesn't work.
>
> Which leaves you with the following choices:
>
> A) Working V6 with the Generic VIF driver, but no Security Groups
> B) Working Security Groups but no V6, with the hybrid VIF driver.
>
> I'm going to try and see if I can make some of the patches against the
> hybrid driver work and get V6 working.
>
> --
> Sean M. Collins
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reg : Security groups implementation using openflows in quantum ovs plugin

2013-11-19 Thread Édouard Thuleau
Hi,

It's an interesting feature.
But just to understand: what is the issue with the current implementation
based on iptables and the Linux bridge?

The OVS release 1.11.0 implements a new feature called 'megaflows'
which reduces the number of kernel/userspace crossings.
Currently, the OVS Neutron agent uses the simple default "normal" flow (a
plain MAC-learning switch), for which the wildcarding (used by megaflows)
works very well (just the L2 headers are matched).
But if we implement security groups as OVS flows, performance will be
reduced: the wildcarding will be worse (L2, L3 and L4 headers will be matched).
Here [1] and [2] are posts from the OVS mailing list that explain that.
Perhaps we can create smarter security group OVS flows to improve the
wildcarding.
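
To illustrate the point (a hand-written example, not taken from any existing
driver; the port number and addresses are made up), a security-group-like
rule has to match on L3/L4 fields:

# Hypothetical per-port rules an OVS-based security group driver could
# install: allow SSH to the VM, drop everything else.
ovs-ofctl add-flow br-int \
  "priority=100,in_port=5,tcp,nw_dst=10.0.0.5,tp_dst=22,actions=normal"
ovs-ofctl add-flow br-int "priority=1,in_port=5,actions=drop"

Because the first rule matches on nw_dst/tp_dst, the kernel datapath can no
longer wildcard the L3/L4 headers for that traffic, so the megaflow benefit
largely disappears.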

Another improvement I see is the simplification of the interfaces on the
compute node: all the qbr, qvo and qvb interfaces would disappear.
But another simple improvement there could be to use only one Linux bridge
per network instead of one per VNIC, and continue to use the Linux bridge
and iptables.

Another problem is that OVS flows are not stateful [3], but is statefulness
really necessary?

[1] http://www.mail-archive.com/discuss@openvswitch.org/msg07715.html
[2] http://www.mail-archive.com/discuss@openvswitch.org/msg07582.html
[3] http://www.mail-archive.com/discuss@openvswitch.org/msg01919.html

Édouard.


On Tue, Nov 19, 2013 at 10:31 AM, Kanthi P wrote:

> Hi All,
>
> Thanks for the response!
> Amir,Mike: Is your implementation being done according to ML2 plugin
>
> Regards,
> Kanthi
>
>
> On Tue, Nov 19, 2013 at 1:43 AM, Mike Wilson  wrote:
>
>> Hi Kanthi,
>>
>> Just to reiterate what Kyle said, we do have an internal implementation
>> using flows that looks very similar to security groups. Jun Park was the
>> guy that wrote this and is looking to get it upstreamed. I think he'll be
>> back in the office late next week. I'll point him to this thread when he's
>> back.
>>
>> -Mike
>>
>>
>> On Mon, Nov 18, 2013 at 3:39 PM, Kyle Mestery (kmestery) <
>> kmest...@cisco.com> wrote:
>>
>>> On Nov 18, 2013, at 4:26 PM, Kanthi P  wrote:
>>> > Hi All,
>>> >
>>> > We are planning to implement quantum security groups using openflows
>>> for ovs plugin instead of iptables which is the case now.
>>> >
>>> > Doing so we can avoid the extra linux bridge which is connected
>>> between the vnet device and the ovs bridge, which is given as a work around
>>> since ovs bridge is not compatible with iptables.
>>> >
>>> > We are planning to create a blueprint and work on it. Could you please
>>> share your views on this
>>> >
>>> Hi Kanthi:
>>>
>>> Overall, this idea is interesting and removing those extra bridges would
>>> certainly be nice. Some people at Bluehost gave a talk at the Summit [1] in
>>> which they explained they have done something similar, you may want to
>>> reach out to them since they have code for this internally already.
>>>
>>> The OVS plugin is in feature freeze during Icehouse, and will be
>>> deprecated in favor of ML2 [2] at the end of Icehouse. I would advise you
>>> to retarget your work at ML2 when running with the OVS agent instead. The
>>> Neutron team will not accept new features into the OVS plugin anymore.
>>>
>>> Thanks,
>>> Kyle
>>>
>>> [1]
>>> http://www.openstack.org/summit/openstack-summit-hong-kong-2013/session-videos/presentation/towards-truly-open-and-commoditized-software-defined-networks-in-openstack
>>> [2] https://wiki.openstack.org/wiki/Neutron/ML2
>>>
>>> > Thanks,
>>> > Kanthi
>>> > ___
>>> > OpenStack-dev mailing list
>>> > OpenStack-dev@lists.openstack.org
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Linux Bridge MTU bug when the VXLAN tunneling is used

2013-10-20 Thread Édouard Thuleau
I had a look at fixing that and I didn't find a simple way.

The minimal MTU can be deduced by the LB agent from the value found on the
bridge, and the LB agent can set it on the veth interface connected to that
bridge. But there is no easy way to set it on the other side of the veth, in
the namespace: the LB agent knows neither the name of the other end of the
veth nor the name of the namespace. Furthermore, I'm not sure it's a good
idea to modify the networking of a namespace that is not managed by the LB
agent.

Another simple solution is to set a global config flag defining the minimal
MTU, which all agents that create veths would use to set the interface MTU.
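
Something along these lines is what I have in mind (a sketch only; the option
name is just an illustration, I haven't checked what the agents honour today):

# neutron.conf — sketch
[DEFAULT]
# MTU every agent would apply to the veth/tap devices it creates,
# i.e. the physical MTU minus the 50-octet VXLAN overhead.
network_device_mtu = 1450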

I opened a bug to discuss: https://bugs.launchpad.net/neutron/+bug/1242534

Regards,
Édouard.


On Sun, Oct 20, 2013 at 5:29 PM, Salvatore Orlando wrote:

> It might be worth both documenting this limitation on the admin guide and
> provide a fix which we should backport to havana too.
> It sounds like the fix should not be too extensive, so the backport should
> be easily feasible.
>
> Regards,
> Salvatore
>
>
> On 18 October 2013 21:50, Édouard Thuleau  wrote:
>
>> Hi all,
>>
>> I made some tests with the ML2 plugin and the Linux Bridge agent with
>> VXLAN tunneling.
>>
>> By default, physical interface (used for VXLAN tunneling) has an MTU of
>> 1500 octets. And when LB agent creates a VXLAN interface, the MTU is
>> automatically 50 octets less than the physical interface (so 1450 octets)
>> [1]. Therefore, the bridge use to plug tap of VM, veth from network
>> namespaces (l3 or dhcp) and VXLAN interface has an MTU of 1450 octets
>> (Linux bridges take minimum of all the underlying ports [2]).
>>
>> So the bridge could only forward packets of length smaller than 1450
>> octets to VXLAN interface [3].
>>
>> But the veth interfaces used to link network namespaces and bridges are
>> spawn by l3 and dhcp agents (and perhaps other agents) with an MTU of 1500
>> octets. So, packets which arriving from them are dropped if they need to be
>> forwarded to the VXLAN interface.
>>
>> A simple workaround is to increase by 50 at least the MTU of the physical
>> interface to harmonize MTU between interfaces. But by default (without MTU
>> customizing), the LB/VXLAN mode have strange behavior (cannot make curl
>> from server behind a router or execute command with verbose output in SSH
>> through a floating IP (SSH connection works)...)
>>
>> So my question is, do you think we need to open a bug and find a fix for
>> that ? Or do we need to put warning in docs (and logs perhaps)?
>>
>> [1]
>> http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/net/vxlan.c#n2437
>> [2]
>> http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/bridge/br_if.c#n402
>> [3]
>> http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/bridge/br_forward.c#n74
>>
>> Édouard.
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Linux Bridge MTU bug when the VXLAN tunneling is used

2013-10-18 Thread Édouard Thuleau
Hi all,

I ran some tests with the ML2 plugin and the Linux Bridge agent using VXLAN
tunneling.

By default, the physical interface (used for VXLAN tunneling) has an MTU of
1500 octets. And when the LB agent creates a VXLAN interface, its MTU is
automatically 50 octets less than that of the physical interface (so 1450
octets) [1]. Therefore, the bridge used to plug the VM tap devices, the veths
from the network namespaces (l3 or dhcp) and the VXLAN interface has an MTU
of 1450 octets (Linux bridges take the minimum MTU of all the underlying
ports [2]).

So the bridge can only forward packets shorter than 1450 octets to the VXLAN
interface [3].

But the veth interfaces used to link the network namespaces and the bridges
are spawned by the l3 and dhcp agents (and perhaps other agents) with an MTU
of 1500 octets. So packets arriving from them are dropped if they need to be
forwarded to the VXLAN interface.

A simple workaround is to increase the MTU of the physical interface by at
least 50 octets to harmonize the MTUs between interfaces. But by default
(without any MTU customization), the LB/VXLAN mode shows strange behavior
(you cannot curl a server behind a router, or run a command with verbose
output over SSH through a floating IP, even though the SSH connection itself
works...).
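
Concretely, on every node carrying VXLAN traffic, something like the
following (example interface name; make it persistent in your distro's
network configuration):

# raise the physical/tunnel interface MTU by at least the 50-octet VXLAN
# overhead so the VXLAN interface, and thus the bridge, stays at 1500
ip link set dev eth1 mtu 1550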

So my question is: do you think we need to open a bug and find a fix for
that? Or do we need to put a warning in the docs (and perhaps the logs)?

[1]
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/net/vxlan.c#n2437
[2]
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/bridge/br_if.c#n402
[3]
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/bridge/br_forward.c#n74

Édouard.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev