[openstack-dev] [neutron] Update_port can not remove allocation from auto-addressed subnets

2016-05-20 Thread Pavel Bondar
Hi,

Currently the update_port workflow does not let a user remove IP
addresses from auto-addressed (SLAAC) subnets. This prevents me from
implementing a complete fix for [1].

Typically an IP address is removed from a port by updating the
'fixed_ips' list so that the address is no longer in it.
But for auto-addressed subnets, when an IP address is removed from
'fixed_ips' and port_update is called, the SLAAC address is not removed
because of [2].
This area was significantly reworked during Liberty, but the same
behavior has been preserved since at least Kilo [3].

To make subnet deletion comply with the IPAM interface [1], any IP
address deallocation has to be done via the IPAM interface
(update_port), but update_port currently skips deallocation of SLAAC
addresses.

So I am looking for advice on a way to allow deallocation of SLAAC
addresses via update_port.

I see several possible solutions, but none of them are ideal, so please
let me know if you see a better one:
- Add an additional parameter like 'allow_slaac_deletion' to the
update_port method, and pass it through
update_port -> update_port_with_ips -> _update_ips_for_port ->
_get_changed_ips_for_port to alter the behavior in [2]. This involves
changing the parameters of the API-exposed method update_port, so I am
not sure it would be accepted.
- Another way is to introduce a new key for the 'fixed_ips' list.
Currently an entry can have 'subnet_id' and 'ip_address' as keys. We
could add a new key like 'delete_subnet_id' to force-delete allocations
from SLAAC subnets. This way there is no need to change the parameters
of a bunch of methods.
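To illustrate the second option, here is a hypothetical sketch of how
_get_changed_ips_for_port-style logic could honor a 'delete_subnet_id'
key. The key name comes from the proposal above; everything else
(function names, data shapes) is illustrative, not Neutron's actual code:

```python
# Sketch of option 2: a 'delete_subnet_id' key in 'fixed_ips' forces
# deallocation even for auto-addressed (SLAAC) subnets.
AUTO_ADDRESS_MODE = 'slaac'

def split_changed_ips(original_ips, new_fixed_ips, subnet_modes):
    """Return (add, remove) lists, honoring explicit SLAAC removals.

    original_ips: current [{'subnet_id', 'ip_address'}] entries.
    new_fixed_ips: requested entries; may contain 'delete_subnet_id'.
    subnet_modes: subnet_id -> IPv6 address mode ('slaac' or None).
    """
    forced_delete = {f['delete_subnet_id']
                     for f in new_fixed_ips if 'delete_subnet_id' in f}
    requested = {(f['subnet_id'], f.get('ip_address'))
                 for f in new_fixed_ips if 'subnet_id' in f}
    original_keys = {(i['subnet_id'], i['ip_address'])
                     for i in original_ips}

    remove = []
    for ip in original_ips:
        key = (ip['subnet_id'], ip['ip_address'])
        if ip['subnet_id'] in forced_delete:
            remove.append(ip)          # explicit SLAAC removal
        elif key not in requested:
            # today's behavior: SLAAC addresses are silently kept
            if subnet_modes.get(ip['subnet_id']) != AUTO_ADDRESS_MODE:
                remove.append(ip)
    add = [f for f in new_fixed_ips
           if 'subnet_id' in f
           and (f['subnet_id'], f.get('ip_address')) not in original_keys]
    return add, remove
```

The point is that no method signature changes: the marker travels inside
the 'fixed_ips' payload that already reaches the IPAM code.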

Please share your thoughts about the ways to fix it.

[1] https://bugs.launchpad.net/neutron/+bug/1564335
[2]
https://github.com/openstack/neutron/blob/f494de47fcef7776f7d29d5ceb2cc4db96bd1efd/neutron/db/ipam_backend_mixin.py#L435
[3]
https://github.com/openstack/neutron/blob/stable/kilo/neutron/db/db_base_plugin_v2.py#L444

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [ipam] Migration to pluggable IPAM

2016-04-08 Thread Pavel Bondar
I believe that next week I will upload code for the one-time switch to
pluggable IPAM using a pure alembic migration.

Thanks,
Pavel

On 05.04.2016 22:05, Carl Baldwin wrote:
> I think the only thing standing in our way is this bug [1].  Ryan
> Tidwell and I are working on this.
>
> Carl
>
> [1] https://bugs.launchpad.net/neutron/+bug/1543094
>
> On Mon, Apr 4, 2016 at 3:48 PM, John Belamaric  
> wrote:
>> I was on vacation last week so I am just seeing this now. I agree that now 
>> that we are in Newton we should just remove the non-pluggable code and move 
>> forward with the migration.
>>
>> John
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [ipam] Migration to pluggable IPAM

2016-03-30 Thread Pavel Bondar
On 12.02.2016 15:01, Ihar Hrachyshka wrote:
> Salvatore Orlando  wrote:
>
>> On 11 February 2016 at 20:17, John Belamaric
>>  wrote:
>>
>>> On Feb 11, 2016, at 12:04 PM, Armando M.  wrote:
>>>
>>>
>>>
>>> On 11 February 2016 at 07:01, John Belamaric
>>>  wrote:
>>>
>>>
>>>
>>> It is only internal implementation changes.
>>>
>>> That's not entirely true, is it? There are config variables to
>>> change and it opens up the possibility of a scenario that the
>>> operator may not care about.
>>>
>>
>> If we were to remove the non-pluggable version altogether, then the
>> default for ipam_driver would switch from None to internal.
>> Therefore, there would be no config file changes needed.
>>
>> I think this is correct.
>> Assuming the migration path to Neutron will include the data
>> transformation from built-in to pluggable IPAM, do we just remove the
>> old code and models?
>> On the other hand do you think it might make sense to give operators
>> a chance to rollback - perhaps just in case some nasty bug pops up?
>
> They can always revert to a previous release. And if we enable the new
> implementation start of Newton, we’ll have enough time to fix bugs
> that will pop up in gate.
>
We are now in early Newton, so it is a good time to discuss the plan
for pluggable IPAM for this release cycle.

Kevin Benton commented on the review page for the current migration to
the pluggable approach [1]:
>
> IMO this cannot be optional. It's going to be a nightmare to try to
> support two IPAM systems that people may have switched between at
> various points in time. I would much rather go all-in on the upgrade
> by making it automatic with alembic and removing the option to use the
> legacy IPAM code completely.
>
> I've already been bitten by testing the new IPAM code with the config
> option and switching back which resulted in undeletable subnets. Now
> we can always yell at anyone that changes the config option like I
> did, but it takes a lot of energy to yell at users and they don't care
> for it much. :)
>
> Even ignoring the support issue, consider schema changes. This
> migration script will have to be constantly updated to work with
> whatever the current state of the schema is on both sets of ipam
> tables. Without constant in-tree testing enforcing that, we are one
> schema change away from this script breaking.
>
> So let's bite the bullet and make this a normal contract migration.
> Either the new ipam system is stable enough for us to commit to
> supporting it and fix whatever bugs it may have, or we need to remove
> it from the tree. Supporting both systems is unsustainable.
>
This sounds reasonable to me. It simplifies support and testing (testing
both implementations in the gate with full coverage is not easy).
From a user perspective there should be no visible changes between
pluggable and non-pluggable IPAM.
And making the switch early in the release cycle gives us enough time
to fix any bugs we find in the pluggable implementation.

Right now we have some open bugs in the pluggable code [2], but they
are still possible to fix.
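Kevin's "normal contract migration" would, very roughly, have to map the
built-in rows onto the pluggable-IPAM tables. A pure-data sketch of that
mapping step follows; table and column names here are illustrative, not
the actual patch, and in the real migration this would run against the
DB inside an alembic contract script:

```python
# Sketch: given rows from the built-in tables, build the rows to insert
# into the pluggable IPAM tables (illustrative schema).
import uuid

def build_pluggable_rows(subnets, allocations):
    """Map built-in subnets/allocations to pluggable-IPAM rows.

    subnets: list of {'id': ...} dicts from the 'subnets' table.
    allocations: list of {'subnet_id': ..., 'ip_address': ...} dicts
    from the 'ipallocations' table.
    """
    ipam_subnets = []
    subnet_map = {}   # neutron subnet id -> new ipam subnet id
    for s in subnets:
        ipam_id = uuid.uuid4().hex
        subnet_map[s['id']] = ipam_id
        ipam_subnets.append({'id': ipam_id,
                             'neutron_subnet_id': s['id']})
    ipam_allocations = [
        {'ipam_subnet_id': subnet_map[a['subnet_id']],
         'ip_address': a['ip_address'],
         'status': 'ALLOCATED'}
        for a in allocations
    ]
    return ipam_subnets, ipam_allocations
```

Because the structures are nearly identical, the copy is mostly an id
remapping, which is what makes the one-time alembic switch plausible.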

Does it make sense to you?

[1] https://review.openstack.org/#/c/277767/
[2] https://bugs.launchpad.net/neutron/+bug/1543094
>> What's the team level of confidence in the robustness of the
>> reference IPAM driver?
>>
>> Salvatore
>>
>>
>>
>>
>> John
>>
>>
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [ipam] Migration to pluggable IPAM

2016-02-15 Thread Pavel Bondar
On 13.02.2016 02:42, Carl Baldwin wrote:
> On Fri, Feb 12, 2016 at 5:01 AM, Ihar Hrachyshka  wrote:
 It is only internal implementation changes.

 That's not entirely true, is it? There are config variables to change and
 it opens up the possibility of a scenario that the operator may not care
 about.

>>> If we were to remove the non-pluggable version altogether, then the
>>> default for ipam_driver would switch from None to internal. Therefore, there
>>> would be no config file changes needed.
>>>
>>> I think this is correct.
>>> Assuming the migration path to Neutron will include the data
>>> transformation from built-in to pluggable IPAM, do we just remove the old
>>> code and models?
>>> On the other hand do you think it might make sense to give operators a
>>> chance to rollback - perhaps just in case some nasty bug pops up?
>>
>> They can always revert to a previous release. And if we enable the new
>> implementation start of Newton, we’ll have enough time to fix bugs that will
>> pop up in gate.
> So, to do this, we have to consider two classes of current users.
> Since the pluggable implementation has been available, I think that we
> have to assume someone might be using it.  Someone could easily have
> turned it on in a green-field deployment.  If we push the offline
> migration in to Mitaka as per my last email then we'll likely get a
> few more of these but it doesn't really matter, the point is that I
> think we need to assume that they exist.
>
> 1) Users of the old baked-in implementation
>   - Their current data is stored in the old tables.
>
> 2) User of the new pluggable implementation
>  - Their current data is stored in the new tables.
>
> So, how does an unconditional migration work?  We can't just copy the
> old tables to the new tables because we might clobber data for the
> users in #2.  I've already heard that conditional migrations are a
> pain and shouldn't be considered.  This seems like a problem.
>
> I had an idea that I wanted to share but I'm warning you, it sounds a
> little crazy even to me.  But, maybe it could work.  Read through it
> for entertainment purposes if nothing else.
>
> Instead of migrating data from the old tables to the new.  What if we
> migrated the old tables in place in a patch set that removed all of
> the old code?  The table structure is nearly identical, right?  The
> differences, I imagine, could be easily handled by an alembic
> migration.  Correct me if I'm wrong.
>
> Now, we still have a difference between users in groups #1 and #2
> above.  To keep them separate, we would call the new built-in
> pluggable driver "built-in", "neutron", or whatever.  The name isn't
> important except that it can't be "internal".
>
> 1) Users who were migrated to the new baked-in implementation.
>   - Their current data is still in the old tables but they have been
> migrated to look just like the new tables.
>   - They have still not set "ipam_driver" in their config so they get
> the new default of "built-in".
>
> 2) Early adopters of built-in pluggable ipam
>   - Their current data is still in the new tables
>   - They have their config set to "internal" already
>
> So, now we have to deal with two identical pluggable implementations:
> one called "built-in" and the other called "internal" but otherwise
> they're identical in every way.  So, to handle this, could we
> parameterize the plugin so that they share exactly the same code while
> "internal" is deprecated?  Just the table names would be
> parameterized.
>
> We have to eventually reconcile users in group #1 with #2.  But, now
> that the tables are identical we could provide an offline migration
> that might be as simple as deleting the "old" tables and renaming the
> "new" tables.  Now, only users in group #2 are required to perform an
> offline migration.
>
> Carl
Hi Carl,

Your idea sounds workable to me. However, I think a simpler way exists.

A significant part of the 'built-in' IPAM tables continues to be updated
even when the reference IPAM driver (or any other driver) is used.
This happens because those tables are API-exposed, so they still have to
receive updates.

To be specific, the following 'built-in' models are related to IPAM in
some way:
- Subnet;
- IPAllocationPool;
- IPAllocation;
- IPAvailabilityRange.
Only IPAvailabilityRange stops receiving updates when the switch to the
pluggable IPAM backend occurs. And IPAvailabilityRange can be rebuilt
from the information in the IPAllocationPool and IPAllocation models [1].
This gives us the ability to rebuild all the needed IPAM information in
the 'built-in' tables even if an IPAM driver is used.

I am trying to implement this approach in the current version of the
migration to pluggable IPAM [2] to allow a safe switch back from the
pluggable to the 'built-in' implementation.
The '--rebuild' flag forces IP availability range recalculation relying
only on the tables that stay up to date regardless of the backend
currently in use.
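The rebuild idea can be shown with plain data: the availability ranges
are simply the allocation-pool ranges minus the allocated addresses. A
simplified sketch (a naive per-address scan over plain tuples, not the
real DB-model code):

```python
# Sketch of '--rebuild': recompute IP availability ranges from
# IPAllocationPool and IPAllocation data alone.
import ipaddress

def rebuild_availability_ranges(pools, allocated):
    """pools: [(first_ip, last_ip)]; allocated: iterable of IP strings.

    Returns the free (first_ip, last_ip) ranges within the pools.
    """
    taken = {int(ipaddress.ip_address(ip)) for ip in allocated}
    ranges = []
    for first, last in pools:
        start = int(ipaddress.ip_address(first))
        end = int(ipaddress.ip_address(last))
        cur = None                      # start of the current free run
        for n in range(start, end + 1):
            if n in taken:
                if cur is not None:
                    ranges.append((str(ipaddress.ip_address(cur)),
                                   str(ipaddress.ip_address(n - 1))))
                    cur = None
            elif cur is None:
                cur = n
        if cur is not None:
            ranges.append((str(ipaddress.ip_address(cur)),
                           str(ipaddress.ip_address(end))))
    return ranges
```

Since IPAllocationPool and IPAllocation stay current under either
backend, this recalculation is safe whichever implementation wrote them.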

So to remove built-in ipam implementation we need:
- 

Re: [openstack-dev] [neutron] [ipam] Migration to pluggable IPAM

2016-02-08 Thread Pavel Bondar
On 06.02.2016 00:03, Salvatore Orlando wrote:
>
>
> On 5 February 2016 at 17:58, Neil Jerram <neil.jer...@metaswitch.com> wrote:
>
> On 05/02/16 16:31, Pavel Bondar wrote:
> > On 05.02.2016 12:28, Salvatore Orlando wrote:
> >>
> >>
> >> On 5 February 2016 at 04:12, Armando M. <arma...@gmail.com> wrote:
> >>
> >>
> >>
> >> On 4 February 2016 at 08:22, John Belamaric <jbelama...@infoblox.com> wrote:
> >>
> >>
> >> > On Feb 4, 2016, at 11:09 AM, Carl Baldwin <c...@ecbaldwin.net> wrote:
> >> >
> >> > On Thu, Feb 4, 2016 at 7:23 AM, Pavel Bondar <pbon...@infoblox.com> wrote:
> >> >> I am trying to bring more attention to [1] to make
> final decision on
> >> >> approach to use.
> >> >> There are a few point that are not 100% clear for me
> at this point.
> >> >>
> >> >> 1) Do we plan to switch all current clouds to
> pluggable ipam
> >> >> implementation in Mitaka?
>
> I possibly shouldn't comment at all, as I don't know the history, and
> wasn't around when the fundamental design decisions here were
> being made.
>
> However, it seems a shame to me that this was done in a way that
> needs a
> DB migration at all.  (And I would have thought it possible for the
> default pluggable IPAM driver to use the same DB state as the
> non-pluggable IPAM backend, given that it is delivering the same
> semantics.)  Without that, I believe it should be a no-brainer to
> switch
> unconditionally to the pluggable IPAM backend.
>
>
> This was indeed the first implementation attempt that we made, but it
> failed spectacularly as we wrestled with different foreign key
> relationships in the original and new model.
> Pavel has all the details, but after careful considerations we decided
> to adopt a specular model with different db tables.
Yeah, we had this chicken-and-egg problem on subnet creation.

On the one hand, the IPAM driver's create_subnet has to be called
*before* creating the neutron subnet, because for AnySubnetRequest the
IPAM driver is responsible for selecting the subnet's CIDR.

On the other hand, during the IPAM driver's create_subnet call the
availability ranges have to be created, but they are linked to the
neutron subnet by a foreign key (with allocation pools in the middle).
So the availability ranges could not be created before the neutron
subnet due to the FK constraint in the old tables.

To solve this chicken-and-egg problem it was decided to use tables for
the reference driver that have no FK to the neutron subnet. That allowed
calling the IPAM driver's create_subnet (and creating the availability
ranges) before creating the neutron subnet.
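The resulting call ordering can be illustrated with a toy sketch
(hypothetical classes and in-memory dicts standing in for tables; only
the order of the two steps matters):

```python
# Sketch: the driver owns its own "tables" with no FK to the neutron
# subnet, so it can run first and pick the CIDR.
class FakeIpamDriver:
    def __init__(self):
        self.driver_subnets = {}   # driver-side table, FK-free

    def create_subnet(self, request_cidr=None):
        # For an "any" request the driver itself picks the CIDR, so it
        # must run before the neutron subnet row exists.
        cidr = request_cidr or '10.0.0.0/24'
        self.driver_subnets[cidr] = {'availability': [cidr]}
        return cidr

def create_subnet(driver, neutron_db):
    cidr = driver.create_subnet()      # 1. driver first: may choose CIDR
    neutron_db[cidr] = {'cidr': cidr}  # 2. then the neutron subnet row
    return cidr
```

With FK-bound availability ranges, step 1 could not write its rows until
step 2 existed, which is exactly the cycle described above.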
>  
>
>
> Sorry if that's unhelpful...
>
> Neil
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> <http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [ipam] Migration to pluggable IPAM

2016-02-05 Thread Pavel Bondar
On 05.02.2016 12:28, Salvatore Orlando wrote:
>
>
> On 5 February 2016 at 04:12, Armando M. <arma...@gmail.com> wrote:
>
>
>
> On 4 February 2016 at 08:22, John Belamaric <jbelama...@infoblox.com> wrote:
>
>
> > On Feb 4, 2016, at 11:09 AM, Carl Baldwin <c...@ecbaldwin.net> wrote:
> >
> > On Thu, Feb 4, 2016 at 7:23 AM, Pavel Bondar <pbon...@infoblox.com> wrote:
> >> I am trying to bring more attention to [1] to make final
> decision on
> >> approach to use.
> >> There are a few point that are not 100% clear for me at
> this point.
> >>
> >> 1) Do we plan to switch all current clouds to pluggable ipam
> >> implementation in Mitaka?
> >
> > I think our plan originally was only to deprecate the
> non-pluggable
> > implementation in Mitaka and remove it in Newton.  However,
> this is
> > worth some more consideration.  The pluggable version of the
> reference
> > implementation should, in theory, be at parity with the current
> > non-pluggable implementation.  We've tested it before and shown
> > parity.  What we're missing is regular testing in the gate
> to ensure
> > it continues this way.
> >
>
> Yes, it certainly should be at parity, and gate testing to
> ensure it would be best.
>
> >> yes -->
> >> Then data migration can be done as alembic_migration and it
> is what
> >> currently implemented in [2] PS54.
> >> In this case during upgrade from Liberty to Mitaka all
> users are
> >> unconditionally switched to reference ipam driver
> >> from built-in ipam implementation.
> >> If operator wants to continue using build-in ipam
> implementation it can
> >> manually turn off ipam_driver in neutron.conf
> >> immediately after upgrade (data is not deleted from old
> tables).
> >
> > This has a certain appeal to it.  I think the migration will be
> > straight-forward since the table structure doesn't really
> change much.
> > Doing this as an alembic migration would be the easiest from an
> > upgrade point of view because it fits seamlessly in to our
> current
> > upgrade strategy.
> >
> > If we go this way, we should get this in soon so that we can
> get the
> > gate and others running with this code for the remainder of
> the cycle.
> >
>
> If we do this, and the operator reverts back to the
> non-pluggable version,
> then we will leave stale records in the new IPAM tables. At
> the very least,
> we would need a way to clean those up and to migrate at a
> later time.
>
> >> no -->
> >> Operator is free to choose whether it will switch to
> pluggable ipam
> >> implementation
> >> and when. And it leads to no automatic data migration.
> >> In this case operator is supplied with script for migration
> to pluggable
> >> ipam (and probably from pluggable ipam),
> >> which can be executed by operator during upgrade or at any
> point after
> >> upgrade is done.
> >> I was testing this approach in [2] PS53 (have unresolved
> issues in it
> >> for now).
> >
> > If there is some risk in changing over then this should still be
> > considered.  But, the more I think about it, the more I
> think that we
> > should just make the switch seamlessly for the operator and
> be done
> > with it.  This approach puts a certain burden on the operator to
> > choose when to do the migration and go through the steps
> manually to
> > do it.  And, since our intention is to deprecate and remove the
> > non-pluggable implementation, it is inevitable that they
> will have to
> > eventually switch anyway.
> >
> > This also makes testing much more difficult.  If we go this
> route, we
> > really should be testing both equally.  Does this mean that
> we need 

[openstack-dev] [neutron] [ipam] Migration to pluggable IPAM

2016-02-04 Thread Pavel Bondar
Hi,

I am trying to bring more attention to [1] to make a final decision on
the approach to use.
There are a few points that are not 100% clear to me at this point.

1) Do we plan to switch all current clouds to the pluggable IPAM
implementation in Mitaka?

yes -->
Then the data migration can be done as an alembic migration, which is
what is currently implemented in [2] PS54.
In this case, during the upgrade from Liberty to Mitaka, all users are
unconditionally switched from the built-in IPAM implementation to the
reference IPAM driver.
If an operator wants to continue using the built-in IPAM implementation,
they can manually turn off ipam_driver in neutron.conf immediately after
the upgrade (data is not deleted from the old tables).
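For concreteness, the opt-out described in the yes-branch above would be
a single neutron.conf setting; a sketch with illustrative values:

```ini
# neutron.conf (illustrative): after the Liberty -> Mitaka upgrade the
# default becomes the reference pluggable driver.
[DEFAULT]
# reference pluggable driver (the new default):
# ipam_driver = internal
# leave unset to fall back to the built-in implementation
# (only safe while the old tables are kept):
ipam_driver =
```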

no -->
The operator is free to choose whether and when to switch to the
pluggable IPAM implementation, which means no automatic data migration.
In this case the operator is supplied with a script for migration to
pluggable IPAM (and probably from pluggable IPAM), which can be executed
during the upgrade or at any point after the upgrade is done.
I was testing this approach in [2] PS53 (it has unresolved issues for
now).

Or we could do both, i.e. migrate data from the built-in to the
pluggable IPAM implementation during the upgrade, and also supply the
operator with scripts to migrate from/to pluggable IPAM at any time
after the upgrade.

According to the current feedback in [1], it is most likely that we will
go with the script approach, so I would like to confirm whether that is
the case.

2) Do we plan to make the pluggable IPAM implementation the default in
Mitaka for greenfields?

If the answer to this question is the same as for the previous one
(yes/yes, no/no), then it doesn't introduce additional issues.
But if the answer differs from the previous one, it might complicate
things.
For example, greyfields might be migrated to pluggable IPAM manually by
the operator, or continue to work using the built-in implementation
after the upgrade to Mitaka.
But greenfields might be set to the pluggable IPAM implementation by
default.

Is that what we are going to support?

3) How should the script approach be tested?

Currently, if the pluggable implementation is set as the default, the
grenade test fails.
Data has to be migrated automatically during the upgrade to make grenade
pass.
In [2] PS53 I was using an alembic migration that internally just calls
the external migrate script.
Is that a valid approach? I expect a better way to test script execution
during upgrade might exist.

[1] https://bugs.launchpad.net/neutron/+bug/1516156
[2] https://review.openstack.org/#/c/181023

Thanks,
Pavel



Re: [openstack-dev] [Neutron] IPAM drivers for BlueCat networks and Alcatel-Lucent VitalQIP

2015-10-30 Thread Pavel Bondar
Hi Maruti,

For now, only the pluggable IPAM interface and the reference IPAM driver
have been released, and I am working on implementing an IPAM driver for
Infoblox.

I am not aware of the existence of IPAM drivers from BlueCat or Alcatel.
If you want to investigate pluggable IPAM further, you can start with
the IPAM interface and the reference IPAM driver. They can be found
under [1].

[1] https://github.com/openstack/neutron/tree/master/neutron/ipam

Thanks,
Pavel

From: Kamat, Maruti Haridas 
Sent: Thursday, October 29, 2015 8:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron] IPAM drivers for BlueCat networks and Alcatel-Lucent VitalQIP

Hi,

I am exploring the IPAM feature in OpenStack Neutron. In particular, I
wanted to look at the IPAM drivers for BlueCat Networks’ (Proteus) and
Alcatel-Lucent’s VitalQIP.

If someone can provide me with pointers, it would be of great help.

Thanks,
Maruti


Re: [openstack-dev] [Neutron][IPAM] Do we need migrate script for neutron IPAM now?

2015-05-06 Thread Pavel Bondar
Ok, sounds good to me.
I'll switch #153236 to the built-in IPAM implementation by default, and
pay additional attention to testing the pluggable IPAM routines in this
case.

- Pavel

On 06.05.2015 16:50, John Belamaric wrote:
> I agree, we should amend it to not run pluggable IPAM as the default for
> now. When we decide to make it the default, the migration scripts will
> be needed.
> 
> John
> 
> 
> On 5/5/15, 1:47 PM, "Salvatore Orlando" <sorla...@nicira.com> wrote:
> 
> Patch #153236 is introducing pluggable IPAM in the db base plugin
> class, and default to it at the same time, I believe.
> 
> If the consensus is to default to IPAM driver then in order to
> satisfy grenade requirements those migrations scripts should be run.
> There should actually be a single script to be run in a one-off
> fashion. Even better is treated as a DB migration.
> 
> However, the plan for Kilo was to not turn on pluggable IPAM for
> default. Now that we are targeting Liberty, we should have this
> discussion again, and not take for granted that we should default to
> pluggable IPAM just because a few months ago we assumed it would be
> default by Liberty.
> I suggest to not enable it by default, and then consider in L-3
> whether we should do this switch.
> For the time being, would it be possible to amend patch #153236 to
> not run pluggable IPAM by default. I appreciate this would have some
> impact on unit tests as well, which should be run both for pluggable
> and "traditional" IPAM.
> 
> Salvatore
> 
> On 4 May 2015 at 20:11, Pavel Bondar <pbon...@infoblox.com> wrote:
> 
> Hi,
> 
> During fixing failures in db_base_plugin_v2.py with new IPAM[1]
> I faced
> to check-grenade-dsvm-neutron failures[2].
> check-grenade-dsvm-neutron installs stable/kilo, creates
> networks/subnets and upgrades to patched master.
> So it validates that migrations passes fine and installation is
> works
> fine after it.
> 
> This is where failure occurs.
> Earlier there was an agreement about using pluggable IPAM only for
> greenfield installations, so the migrate script from built-in IPAM to
> pluggable IPAM was postponed.
> And check-grenade-dsvm-neutron validates the greyfield scenario.
> So do we want to update this agreement and implement migration
> scripts
> from built-in IPAM to pluggable IPAM now?
> 
> Details about failures.
> Subnets created before patch was applied does not have correspondent
> IPAM subnet,
> so observed a lot of failures like this in [2]:
> >Subnet 2c702e2a-f8c2-4ea9-a25d-924e32ef5503 could not be found
> Currently config option in patch is modified to use
> pluggable_ipam by
> default (to catch all possible UT/tempest failures).
> But before the merge patch will be switched back to non-ipam
> implementation by default.
> 
> I would prefer to implement migrate script as a separate review,
> since [1] is already quite big and hard for review.
> 
> [1] https://review.openstack.org/#/c/153236
> [2]
> 
> http://logs.openstack.org/36/153236/54/check/check-grenade-dsvm-neutron/42ab4ac/logs/grenade.sh.txt.gz
> 
> - Pavel Bondar
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> <http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 




Re: [openstack-dev] [Neutron] _get_subnet() in OpenContrail tests results in port deletion

2015-05-05 Thread Pavel Bondar
Salvatore,

Agreed. I checked with Kevin and he also prefers the first workaround.
Patch set 55 implementing this solution was uploaded to [1].
UT passed fine locally; waiting for results from Jenkins.

[1] https://review.openstack.org/#/c/153236/
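The first workaround (mocking NeutronDbSubnet's fetch_subnet so the
driver reads the subnet through the fake plugin instead of opening a
competing DB session) could look roughly like the self-contained sketch
below. The class here is a stand-in; the real import paths and method
signatures in the Neutron tree differ:

```python
# Sketch of workaround 1: patch the reference driver's subnet lookup.
from unittest import mock

class NeutronDbSubnet:
    """Stand-in for the reference driver's subnet factory."""
    @classmethod
    def fetch_subnet(cls, context, subnet_id):
        # In the real tests this path would call the core plugin and
        # lose the DB session (the problem described in this thread).
        raise RuntimeError('would hit the plugin and lose the session')

class FakeServer:
    """Stand-in for the OpenContrail tests' fake plugin."""
    def _get_subnet(self, context, subnet_id):
        return {'id': subnet_id, 'cidr': '10.0.0.0/24'}

def load_subnet(context, subnet_id):
    return NeutronDbSubnet.fetch_subnet(context, subnet_id)

fake = FakeServer()
with mock.patch.object(NeutronDbSubnet, 'fetch_subnet',
                       side_effect=fake._get_subnet):
    subnet = load_subnet(None, 'some-subnet-id')
```

In the actual test case the patch would target the real
NeutronDbSubnet class and route lookups to the FakeServer instance.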

Thanks,
Pavel

On 04.05.2015 23:58, Salvatore Orlando wrote:
> I think the first workaround is the solution we're looking for as it
> better reflects the fact that opencontrail is a db-less plugin.
> I hope it will be the easier too, but you can never be too sure with
> neutron unit tests.
> 
> Salvatore
> 
> On 4 May 2015 at 12:56, Pavel Bondar <pbon...@infoblox.com> wrote:
> 
> Hi Kevin,
> 
> Thanks for your answer, that is what I was looking for!
> I'll check with you in irc to decide which workaround is better:
> 1. Mocking NeutronDbSubnet fetch_subnet for opencontrail tests.
> 2. Using session.query() directly in NeutronDbSubnet fetch_subnet.
> 
> - Pavel Bondar
> 
> On 30.04.2015 22:46, Kevin Benton wrote:
> > The OpenContrail plugin itself doesn't even use the Neutron DB. I
> > believe what you are observing is a side effect of the fake server
> they
> > have for their tests, which does inherit the neutron DB.
> >
> > When you call a method on the core plugin in the contrail unit test
> > case, it will go through their request logic and will be piped
> into the
> > fake server. During this time, the db session that was associated with
> > the original context passed to the core plugin will be lost due to its
> > conversion to a dict.[1, 2]
> >
> > So I believe what you're seeing is this.
> >
> > 1. The FakeServer gets create_port called and starts its transactions.
> > 2. It now hits the ipam driver which calls out to the neutron
> manager to
> > get the core plugin handle, which is actually the contrail plugin and
> > not the FakeServer.
> > 3. IPAM calls _get_subnet on the contrail plugin, which serializes the
> > context[1] and sends it to the FakeServer.
> > 4. The FakeServer code receives the request and deserializes the
> > context[2], which no longer has the db session.
> > 5. The FakeServer then ends up starting a new session to read the
> > subnet, which will interfere with the transaction you created the port
> > under since they are from the same engine.
> >
> > This is why you can query the DB directly rather than calling the core
> > plugin. The good news is that you don't have to worry because the
> actual
> > contrail plugin won't be using any of this logic so you're not
> actually
> > breaking anything.
> >
> > I think what you'll want to do is add a mock.patch for the
> > NeutronDbSubnet fetch_subnet method to monkey patch in a reference to
> > their FakeServer's _get_subnet method. Ping me on IRC (kevinbenton) if
> > you need help.
> >
>     > 1.
> >
> 
> https://github.com/openstack/neutron/blob/master/neutron/plugins/opencontrail/contrail_plugin.py#L111
> > 2.
> >
> 
> https://github.com/openstack/neutron/blob/master/neutron/tests/unit/plugins/opencontrail/test_contrail_plugin.py#L121
> >
> > On Thu, Apr 30, 2015 at 6:37 AM, Pavel Bondar <pbon...@infoblox.com> wrote:
> >
> > Hi,
> >
> > I am debugging issue observed in OpenContrail tests[1] and so
> far it
> > does not look obvious.
> >
> > Issue:
> >
> > In create_port[2] new transaction is started.
> > Port gets created, but disappears right after reading subnet
> from plugin
> > in reference ipam driver[3]:
> >
> > >plugin = manager.NeutronManager.get_plugin()
> > >return plugin._get_subnet(context, id)
> >
> > Port no longer seen in transaction, like it never existed before
> > (magic?). As a result inserting IPAllocation fails with
> foreing key
> > constraint error:
> >
> > DBReferenceError: (IntegrityError) FOREIGN KEY constraint failed
> > u'INSERT INTO ipallocations (port_id, ip_address, subnet_id,
> network_id)
> > VALUES (?, ?, ?, ?)' ('aba6eaa2-2b2f-4ab9-97b0-4d8a36659363',
> > u'10.0.0.2', u'be7bb05b-d501-4cf3-a29

[openstack-dev] [Neutron][IPAM] Do we need migrate script for neutron IPAM now?

2015-05-04 Thread Pavel Bondar
Hi,

While fixing failures in db_base_plugin_v2.py with the new IPAM[1] I ran
into check-grenade-dsvm-neutron failures[2].
check-grenade-dsvm-neutron installs stable/kilo, creates
networks/subnets, and upgrades to the patched master.
So it validates that the migrations pass and the installation works
fine afterwards.

This is where the failure occurs.
Earlier there was an agreement about using pluggable IPAM only for
greenfield installations, so the migrate script from built-in IPAM to
pluggable IPAM was postponed.
And check-grenade-dsvm-neutron validates the brownfield (upgrade) scenario.
So do we want to update this agreement and implement migration scripts
from built-in IPAM to pluggable IPAM now?

Details about the failures:
Subnets created before the patch was applied do not have a corresponding
IPAM subnet, so a lot of failures like this are observed in [2]:
>Subnet 2c702e2a-f8c2-4ea9-a25d-924e32ef5503 could not be found
Currently the config option in the patch is modified to use pluggable_ipam
by default (to catch all possible UT/tempest failures).
But before the merge the patch will be switched back to the non-ipam
implementation by default.

I would prefer to implement the migrate script as a separate review,
since [1] is already quite big and hard to review.
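
For reference, the core of such a migrate script would mainly need to
backfill an IPAM subnet for every pre-existing neutron subnet and mirror
the existing IP allocations into the IPAM tables. Below is a minimal
sketch of that idea; the function, table, and column names here are
illustrative assumptions, not the actual neutron schema:

```python
import uuid

def migrate_builtin_to_pluggable(subnets, allocations):
    """Backfill IPAM records for subnets created under built-in IPAM.

    `subnets` and `allocations` stand in for rows of the existing
    neutron tables; the dict shapes are assumptions for illustration.
    """
    ipam_subnets = []
    ipam_allocations = []
    for subnet in subnets:
        ipam_subnet_id = str(uuid.uuid4())
        ipam_subnets.append({
            "id": ipam_subnet_id,
            "neutron_subnet_id": subnet["id"],
        })
        # Mirror every existing allocation so the pluggable driver
        # sees the addresses that are already in use.
        for alloc in allocations:
            if alloc["subnet_id"] == subnet["id"]:
                ipam_allocations.append({
                    "ipam_subnet_id": ipam_subnet_id,
                    "ip_address": alloc["ip_address"],
                    "status": "ALLOCATED",
                })
    return ipam_subnets, ipam_allocations
```

Run as an alembic data migration, something along these lines would
address the "Subnet ... could not be found" errors above, since every
pre-upgrade subnet would get a corresponding IPAM subnet.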

[1] https://review.openstack.org/#/c/153236
[2]
http://logs.openstack.org/36/153236/54/check/check-grenade-dsvm-neutron/42ab4ac/logs/grenade.sh.txt.gz

- Pavel Bondar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] _get_subnet() in OpenContrail tests results in port deletion

2015-05-04 Thread Pavel Bondar
Hi Kevin,

Thanks for your answer, that is what I was looking for!
I'll check with you on IRC to decide which workaround is better:
1. Mocking NeutronDbSubnet fetch_subnet for the OpenContrail tests.
2. Using session.query() directly in NeutronDbSubnet fetch_subnet.
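
For option 1, a toy sketch of the mock.patch pattern is below. The class
bodies are invented stand-ins (the real fetch_subnet goes through
plugin._get_subnet() and loses the db session); only the patching of
fetch_subnet with the FakeServer's _get_subnet is the point:

```python
from unittest import mock

class NeutronDbSubnet(object):
    @classmethod
    def fetch_subnet(cls, context, id):
        # Stand-in for the real code path that round-trips through the
        # FakeServer and interferes with the open transaction.
        raise RuntimeError("would interfere with the open transaction")

class FakeServer(object):
    def _get_subnet(self, context, id):
        # Stand-in for a read via the same db session.
        return {"id": id}

fake_server = FakeServer()
# Redirect fetch_subnet to the FakeServer's _get_subnet for the test.
with mock.patch.object(NeutronDbSubnet, "fetch_subnet",
                       side_effect=fake_server._get_subnet):
    subnet = NeutronDbSubnet.fetch_subnet("ctx", "subnet-1")
print(subnet)  # {'id': 'subnet-1'}
```

In the real test case the patch would live in the OpenContrail test
setUp so every IPAM lookup bypasses the fake HTTP round-trip.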

- Pavel Bondar

On 30.04.2015 22:46, Kevin Benton wrote:
> The OpenContrail plugin itself doesn't even use the Neutron DB. I
> believe what you are observing is a side effect of the fake server they
> have for their tests, which does inherit the neutron DB.
> 
> When you call a method on the core plugin in the contrail unit test
> case, it will go through their request logic and will be piped into the
> fake server. During this time, the db session that was associated with
> the original context passed to the core plugin will be lost due to its
> conversion to a dict.[1, 2]
> 
> So I believe what you're seeing is this. 
> 
> 1. The FakeServer gets create_port called and starts its transactions. 
> 2. It now hits the ipam driver which calls out to the neutron manager to
> get the core plugin handle, which is actually the contrail plugin and
> not the FakeServer.
> 3. IPAM calls _get_subnet on the contrail plugin, which serializes the
> context[1] and sends it to the FakeServer.
> 4. The FakeServer code receives the request and deserializes the
> context[2], which no longer has the db session.
> 5. The FakeServer then ends up starting a new session to read the
> subnet, which will interfere with the transaction you created the port
> under since they are from the same engine.
> 
> This is why you can query the DB directly rather than calling the core
> plugin. The good news is that you don't have to worry because the actual
> contrail plugin won't be using any of this logic so you're not actually
> breaking anything.
> 
> I think what you'll want to do is add a mock.patch for the
> NeutronDbSubnet fetch_subnet method to monkey patch in a reference to
> their FakeServer's _get_subnet method. Ping me on IRC (kevinbenton) if
> you need help.
> 
> 1.
> https://github.com/openstack/neutron/blob/master/neutron/plugins/opencontrail/contrail_plugin.py#L111
> 2.
> https://github.com/openstack/neutron/blob/master/neutron/tests/unit/plugins/opencontrail/test_contrail_plugin.py#L121
> 
> On Thu, Apr 30, 2015 at 6:37 AM, Pavel Bondar <pbon...@infoblox.com> wrote:
> 
> Hi,
> 
> I am debugging issue observed in OpenContrail tests[1] and so far it
> does not look obvious.
> 
> Issue:
> 
> In create_port[2] new transaction is started.
> Port gets created, but disappears right after reading subnet from plugin
> in reference ipam driver[3]:
> 
> >plugin = manager.NeutronManager.get_plugin()
> >return plugin._get_subnet(context, id)
> 
> The port is no longer seen in the transaction, as if it never existed
> (magic?). As a result inserting the IPAllocation fails with a foreign key
> constraint error:
> 
> DBReferenceError: (IntegrityError) FOREIGN KEY constraint failed
> u'INSERT INTO ipallocations (port_id, ip_address, subnet_id, network_id)
> VALUES (?, ?, ?, ?)' ('aba6eaa2-2b2f-4ab9-97b0-4d8a36659363',
> u'10.0.0.2', u'be7bb05b-d501-4cf3-a29a-3861b3b54950',
> u'169f6a61-b5d0-493a-b7fa-74fd5b445c84')
> }}}
> 
> Only the OpenContrail tests fail with that error (116 failures[1]). Tests
> for other plugins pass fine. As I see it, OpenContrail is different from
> other plugins: each call to the plugin is wrapped into an http request, so
> getting the subnet happens in another transaction. In the tests
> requests.post() is mocked and the http call gets translated into
> self.get_subnet(...). The stack trace from plugin._get_subnet() to db_base
> get_subnet() in the OpenContrail tests is shown at [4].
> 
> Also a single test failure with full db debug was uploaded for
> investigation[5]:
> - The port is inserted at 362.
> - The subnet is read by the plugin at 384.
> - The IPAllocation insert is attempted at 407.
> Between the Port and IPAllocation inserts no COMMIT/ROLLBACK or delete
> statements were issued, so I can't find an explanation for why the port
> no longer exists at the IPAllocation insert step.
> Am I missing something obvious?
> 
> For now I have several workarounds, which basically avoid using
> plugin._get_subnet(). A direct session.query() works without such side
> effects.
> But this issue bothers me a lot since I can't explain why it even happens
> in the OpenContrail tests.
> Any ideas are welcome!
> 
> My best theory for now: OpenContrail silently wipes the currently running
> transaction in the tests (in this case it doesn't sound good).

[openstack-dev] [Neutron] _get_subnet() in OpenContrail tests results in port deletion

2015-04-30 Thread Pavel Bondar
Hi,

I am debugging issue observed in OpenContrail tests[1] and so far it
does not look obvious.

Issue:

In create_port[2] new transaction is started.
Port gets created, but disappears right after reading subnet from plugin
in reference ipam driver[3]:

>plugin = manager.NeutronManager.get_plugin()
>return plugin._get_subnet(context, id)

The port is no longer seen in the transaction, as if it never existed
(magic?). As a result inserting the IPAllocation fails with a foreign key
constraint error:

DBReferenceError: (IntegrityError) FOREIGN KEY constraint failed
u'INSERT INTO ipallocations (port_id, ip_address, subnet_id, network_id)
VALUES (?, ?, ?, ?)' ('aba6eaa2-2b2f-4ab9-97b0-4d8a36659363',
u'10.0.0.2', u'be7bb05b-d501-4cf3-a29a-3861b3b54950',
u'169f6a61-b5d0-493a-b7fa-74fd5b445c84')
}}}

Only the OpenContrail tests fail with that error (116 failures[1]). Tests
for other plugins pass fine. As I see it, OpenContrail is different from
other plugins: each call to the plugin is wrapped into an http request, so
getting the subnet happens in another transaction. In the tests
requests.post() is mocked and the http call gets translated into
self.get_subnet(...). The stack trace from plugin._get_subnet() to db_base
get_subnet() in the OpenContrail tests is shown at [4].

Also a single test failure with full db debug was uploaded for
investigation[5]:
- The port is inserted at 362.
- The subnet is read by the plugin at 384.
- The IPAllocation insert is attempted at 407.
Between the Port and IPAllocation inserts no COMMIT/ROLLBACK or delete
statements were issued, so I can't find an explanation for why the port
no longer exists at the IPAllocation insert step.
Am I missing something obvious?

For now I have several workarounds, which basically avoid using
plugin._get_subnet(). A direct session.query() works without such side
effects.
But this issue bothers me a lot since I can't explain why it even happens
in the OpenContrail tests.
Any ideas are welcome!

My best theory for now: OpenContrail silently wipes the currently running
transaction in the tests (in this case it doesn't sound good).

Anyone can check out and debug patch set 50 (where the issue is observed)
from the review page[6].

Thank you in advance.

- Pavel Bondar

[1]
http://logs.openstack.org/36/153236/50/check/gate-neutron-python27/dd83d43/testr_results.html.gz
[2]
https://review.openstack.org/#/c/153236/50/neutron/db/db_base_plugin_v2.py
line 1578 / line 1857
[3]
https://review.openstack.org/#/c/153236/50/neutron/ipam/drivers/neutrondb_ipam/driver.py
line 50
[4] http://pastebin.com/n0AqhC5x
[5] http://pastebin.com/BDBAXFy9
[6] http://logs.openstack.org/36/153236/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][L3] IPAM alternate refactoring

2015-04-13 Thread Pavel Bondar
Hi,

I did some investigation on the topic[1] and see several issues along
the way.

1. The plugin's create_port() is wrapped in a top-level transaction in the
create-floating-ip case[2], so it becomes more complicated to make IPAM
calls outside the main db transaction.

- For the create-floating-ip case the transaction is initialized at the
create_floatingip level:
create_floatingip(l3_db)->create_port(plugin)->create_port(db_base)
So the IPAM call should be added into create_floatingip to stay outside
the db transaction.

- For the usual port create the transaction is initialized at the plugin's
create_port level, and John's change[1] covers this case:
create_port(plugin)->create_port(db_base)

The create-floating-ip work-flow involves calling the plugin's create_port,
so the IPAM code inside it should be executed only when it is not wrapped
in a top-level transaction.

2. Error handling is an open question.
Should we use TaskFlow to manage the IPAM calls to external systems?
Or is a simple exception-based model enough to handle rollback actions on
third-party systems in case the main db transaction fails?
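
To illustrate the exception-based model, here is a toy sketch with
invented names (FakeIpamDriver, db_write); the real code would run step 2
under the db session's transaction rather than a plain function call:

```python
class FakeIpamDriver(object):
    """Stand-in for an external IPAM system (all names invented)."""
    def __init__(self):
        self.allocated = []

    def allocate_ip(self, port):
        ip = "10.0.0.2"  # pretend the external system handed this out
        self.allocated.append(ip)
        return ip

    def deallocate_ip(self, ip):
        self.allocated.remove(ip)

def create_port(ipam, db_write, port):
    # 1. External IPAM call happens before (outside) the db transaction.
    ip = ipam.allocate_ip(port)
    try:
        # 2. Main db transaction (a session transaction in real code).
        return db_write(port, ip)
    except Exception:
        # 3. Rollback action on the third-party system, then re-raise.
        ipam.deallocate_ip(ip)
        raise

ipam = FakeIpamDriver()

def failing_db_write(port, ip):
    raise RuntimeError("main db transaction failed")

try:
    create_port(ipam, failing_db_write, {"name": "p1"})
except RuntimeError:
    pass
print(ipam.allocated)  # [] -- the external allocation was rolled back
```

TaskFlow would formalize the same allocate/revert pairing as declared
tasks, which may be worth it once more than one external call is involved.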

[1] https://review.openstack.org/#/c/172443/
[2] neutron/db/l3_db.py: line 905

Thanks,
Pavel

On 10.04.2015 21:04, openstack-dev-requ...@lists.openstack.org wrote:
> L3 Team,
> 
> I have put up a WIP [1] that provides an approach that shows the ML2 
> create_port method refactored to use the IPAM driver prior to initiating the 
> database transaction. Details are in the commit message - this is really just 
> intended to provide a strawman for discussion of the options. The actual 
> refactor here is only about 40 lines of code.
> 
> [1] https://review.openstack.org/#/c/172443/
> 
> 
> Thanks,
> John


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev