[Yahoo-eng-team] [Bug 1750735] [NEW] [OVO] UT fails when setting new_facade to True

2018-02-20 Thread Lujin Luo
Public bug reported:

How to reproduce:
1) Set new_facade = True in any OVO object (see the sketch after the
traceback below). I tried PortBinding(), Port() and Network().
2) Run python -m testtools.run neutron/tests/unit/objects/test_network.py
   or python -m testtools.run neutron/tests/unit/objects/test_port.py
3) Example of failures:
======================================================================
ERROR: neutron.tests.unit.objects.test_network.NetworkObjectIfaceTestCase.test_update_updates_from_db_object
----------------------------------------------------------------------
Traceback (most recent call last):
  File "neutron/tests/base.py", line 132, in func
    return f(self, *args, **kwargs)
  File "neutron/tests/base.py", line 132, in func
    return f(self, *args, **kwargs)
  File "/home/stack/small_port_2/neutron/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", line 1305, in patched
    return func(*args, **keywargs)
  File "neutron/tests/unit/objects/test_base.py", line 1167, in test_update_updates_from_db_object
    obj.update()
  File "neutron/objects/base.py", line 319, in decorator
    self.obj_context.session.refresh(self.db_obj)
  File "/home/stack/small_port_2/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 1498, in refresh
    self._expire_state(state, attribute_names)
  File "/home/stack/small_port_2/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 1600, in _expire_state
    self._validate_persistent(state)
  File "/home/stack/small_port_2/neutron/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2042, in _validate_persistent
    state_str(state))
sqlalchemy.exc.InvalidRequestError: Instance '' is not persistent within this Session
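For reference, a minimal sketch of step 1, assuming the new_facade class
attribute that NeutronDbObject declares for the engine facade switch
(illustrative only; flipping the attribute at runtime is equivalent to
editing the class for reproduction purposes):

    from neutron.objects import network as network_obj

    # new_facade defaults to False; setting it to True makes the
    # object's database operations go through the new engine facade.
    network_obj.Network.new_facade = True

Any UT that then goes through obj.update() ->
session.refresh(self.db_obj) on a patched-in db_obj fails as above.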

I believe something that merged after Feb. 9th broke them. As in [1],
there were no code changes after Feb. 9th, yet the patch started
failing on recheck on Feb. 20th.


[1] https://review.openstack.org/#/c/537320/

** Affects: neutron
 Importance: Undecided
 Status: New



[Yahoo-eng-team] [Bug 1750353] [NEW] _get_changed_synthetic_fields() does not guarantee returned fields to be updatable

2018-02-19 Thread Lujin Luo
Public bug reported:

While revising [1], I discovered an issue with
_get_changed_synthetic_fields(): it does not guarantee that the
returned fields are updatable.

How to reproduce:
Set a breakpoint at [2] and then run
neutron.tests.unit.objects.test_ports.DistributedPortBindingIfaceObjTestCase.test_update_updates_from_db_object;
the returned fields are:

-> return fields
(Pdb) fields
{'host': u'c2753a12ec', 'port_id': 'ae5700cd-f872-4694-bf36-92b919b0d3bf'}

where 'host' and 'port_id' are not updatable.

[1] https://review.openstack.org/#/c/544206/
[2] 
https://github.com/openstack/neutron/blob/master/neutron/objects/base.py#L696
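A hedged sketch of the kind of guard this implies, assuming the
synthetic_fields and fields_no_update attributes that NeutronDbObject
already declares (the body below is illustrative, not the actual
implementation in [2]):

    def _get_changed_synthetic_fields(self):
        fields = {f: getattr(self, f)
                  for f in self.synthetic_fields
                  if self.obj_attr_is_set(f) and
                  f in self.obj_what_changed()}
        # Proposed guard: drop any field the object declares as not
        # updatable, e.g. 'host' and 'port_id' above.
        for field in self.fields_no_update:
            fields.pop(field, None)
        return fields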

** Affects: neutron
 Importance: Undecided
 Assignee: Lujin Luo (luo-lujin)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Lujin Luo (luo-lujin)



[Yahoo-eng-team] [Bug 1744829] [NEW] Avoid mixed usage of old and new transaction styles

2018-01-22 Thread Lujin Luo
Public bug reported:

The newly merged (distributed) Port Binding OVO integration patch [1]
mixes the old nested transaction style used by OVO with the new engine
facade transactions. According to what we learnt in
https://review.openstack.org/#/c/529169 and
https://review.openstack.org/#/c/532343, this shouldn't be done.

A patch to change the new engine facade transactions back to the old
nested transaction style is required.

[1] https://review.openstack.org/#/c/407868
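For context, a sketch of the two styles in question, assuming the
helpers neutron/db/api.py exposed at the time (given a request context
`context`; illustrative only):

    from neutron.db import api as db_api

    # Old nested transaction style used by the OVO code:
    with db_api.autonested_transaction(context.session):
        pass  # database work here

    # New engine facade style:
    with db_api.context_manager.writer.using(context):
        pass  # database work here

Mixing the two inside one call path is what [1] did and what this bug
asks to undo.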

** Affects: neutron
 Importance: Undecided
 Assignee: Lujin Luo (luo-lujin)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Lujin Luo (luo-lujin)



[Yahoo-eng-team] [Bug 1687616] Re: KeyError 'options' while doing zero downtime upgrade from N to O

2017-10-30 Thread Lujin Luo
** Changed in: keystone
   Status: New => Invalid



[Yahoo-eng-team] [Bug 1724446] [NEW] object string field filtering on "LIKE" statement lacks initialization

2017-10-17 Thread Lujin Luo
Public bug reported:

In [1], we allow get_objects() to filter on LIKE statements, i.e.
starts, contains and ends.

However, the parent class lacks the necessary initialization of these
three attributes.

There is also a trivial bug in [2]: the usage of the LIKE statement
should be "obj_utils.StringContains()", not
"obj_utils.StringMatchingContains()".

[1] https://review.openstack.org/#/c/419152/
[2] 
https://github.com/openstack/neutron/blob/master/doc/source/contributor/internals/objects_usage.rst
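The corrected usage from [2], as a hedged sketch (the import paths are
my assumption):

    from neutron.objects import network
    from neutron.objects import utils as obj_utils

    # Fetch every Network whose name contains "test"; per the doc fix,
    # the filter class is StringContains, not StringMatchingContains.
    networks = network.Network.get_objects(
        context, name=obj_utils.StringContains("test"))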

** Affects: neutron
 Importance: Undecided
 Assignee: Lujin Luo (luo-lujin)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Lujin Luo (luo-lujin)



[Yahoo-eng-team] [Bug 1724177] [NEW] Sqlalchemy column in_() operator does not allow NULL element

2017-10-17 Thread Lujin Luo
Public bug reported:

I met this issue when I was integrating the Floating IP OVO object.
There is a case where we want to pass router_id=None and
fixed_port_id=None into the get_objects() method [1], which eventually
leads to this method [2].

In my case, when the key is "router_id" and the value is "[None]", the
in_() clause on line 205 will not return any matching queries, because
in_() does not support a None element.

We need to add a check in [2] for when None is contained in the value.

[1] https://review.openstack.org/#/c/396351/34..35/neutron/db/l3_db.py@1429
[2] 
https://github.com/openstack/neutron/blob/master/neutron/db/_model_query.py#L176
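A sketch of the check [2] needs, assuming standard SQLAlchemy column
operators (IN (...) never matches NULL, so the NULL test has to be
split out explicitly; the helper name is illustrative, not the actual
_model_query code):

    from sqlalchemy import or_

    def filter_in(query, column, values):
        non_null = [v for v in values if v is not None]
        if len(non_null) < len(values):
            # None was in the list: match NULL rows explicitly.
            return query.filter(
                or_(column.is_(None), column.in_(non_null)))
        return query.filter(column.in_(values))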

** Affects: neutron
 Importance: Undecided
 Assignee: Lujin Luo (luo-lujin)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Lujin Luo (luo-lujin)




[Yahoo-eng-team] [Bug 1705187] [NEW] Add specific values to specific fields in get_random_object_fields()

2017-07-19 Thread Lujin Luo
Public bug reported:

In the current
neutron.tests.unit.objects.test_router.FloatingIPDbObjectTestCase.test_update_objects
code path, all the field values are generated randomly by
get_random_object_fields(). However, in several cases an actual object
is required to fulfill a foreign-key requirement.

An example here is the FloatingIP OVO. It updates router_id and
fixed_port_id, but without actual referenced objects we will see the
following errors.

oslo_db.exception.DBReferenceError: (sqlite3.IntegrityError) FOREIGN KEY
constraint failed [SQL: u'UPDATE floatingips SET project_id=?,
floating_ip_address=?, floating_network_id=?, floating_port_id=?,
fixed_port_id=?, fixed_ip_address=?, router_id=?,
last_known_router_id=?, status=? WHERE floatingips.id = ?'] [parameters:
(u'082f1ab417', '10.81.171.5', '7290820a-186b-43d5-bdb6-b06390d62f96',
'96a2689b-f31d-4660-9253-cb27d1d44ffc',
'323da865-b33d-48e9-b5c2-650cb7ead20f', '10.186.214.129',
'0b093c10-5947-44b6-9c11-3484ef7565fc',
'fc9e158f-522f-42b2-9ce4-49a88e22d535', u'DOWN',
u'11d0ed9e-191a-41b6-839b-107257559929')]

Reference to more logs can be found in
https://review.openstack.org/#/c/481972
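A hedged sketch of the idea: let the test pin the foreign-key fields to
rows that actually exist while everything else stays random (the
_create_test_* helpers below are hypothetical names for whatever
creates the referenced rows):

    fields = self.get_random_object_fields(obj_cls=router.FloatingIP)
    # Pin FK fields to existing objects instead of random UUIDs so the
    # UPDATE does not trip the FOREIGN KEY constraint.
    fields['router_id'] = self._create_test_router_id()      # hypothetical
    fields['fixed_port_id'] = self._create_test_port_id()    # hypothetical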

** Affects: neutron
 Importance: Undecided
 Assignee: Lujin Luo (luo-lujin)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Lujin Luo (luo-lujin)




[Yahoo-eng-team] [Bug 1687616] [NEW] KeyError 'options' while doing zero downtime upgrade from N to O

2017-05-02 Thread Lujin Luo
Public bug reported:

I am trying to do a zero-downtime upgrade from the N release to the O
release following [1].

I have 3 controller nodes running behind HAProxy. Every time I upgraded
one of the keystone nodes and brought it back into the cluster, I would
hit this error [2] whenever I tried to update a created user, for about
5 minutes. After I brought back all 3 upgraded keystone nodes and
waited 5 or more minutes, the error disappeared and everything worked
fine.

I am using the same conf file for both releases as shown in [3].


[1]. 
https://docs.openstack.org/developer/keystone/upgrading.html#upgrading-without-downtime
[2]. http://paste.openstack.org/show/608557/
[3]. http://paste.openstack.org/show/608558/
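For reference, the zero-downtime procedure in [1] is built around the
expand/migrate/contract phases of keystone-manage (hedged summary from
that doc; consult [1] for the authoritative order):

$ keystone-manage db_sync --expand    # additive schema changes only
$ keystone-manage db_sync --migrate   # move data; old and new code both work
(upgrade and restart keystone on each node, one at a time)
$ keystone-manage db_sync --contract  # drop legacy schema once all nodes run O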

** Affects: keystone
 Importance: Undecided
 Status: New



[Yahoo-eng-team] [Bug 1535554] Re: Multiple dhcp agents are scheduled to host one network automatically if multiple subnets are created at the same time

2017-02-21 Thread Lujin Luo
I will fix this one. Sorry for the late notice.

** Changed in: neutron
 Assignee: (unassigned) => Lujin Luo (luo-lujin)

** Changed in: neutron
   Status: Expired => In Progress



[Yahoo-eng-team] [Bug 1642517] [NEW] several resources lack revision_on_change attribute to bump the version of their parent resources

2016-11-17 Thread Lujin Luo
Public bug reported:

I went through some code related to bumping a parent resource's
revision number when its child resources, which are not top-level
neutron objects, are updated. I found that the following child
resources lack the "revises_on_change" attribute, which is used to bump
the parent resource's revision number.

* PortBindingPort (Port)
* QosPortPolicyBinding (Port)
* SegmentHostMapping (Network Segment)
* RouterExtraAttributes (Router)
* ExternalNetwork (Network)
* QosNetworkPolicyBinding (Network)
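As a hedged sketch of the fix for one of them, following the pattern
other child models in neutron use (columns besides the FK are elided):

    import sqlalchemy as sa
    from sqlalchemy import orm

    from neutron.db import model_base
    from neutron.db import models_v2

    class QosNetworkPolicyBinding(model_base.BASEV2):
        __tablename__ = 'qos_network_policy_bindings'
        network_id = sa.Column(
            sa.String(36),
            sa.ForeignKey('networks.id', ondelete='CASCADE'),
            primary_key=True)
        # revises_on_change names the relationship(s) whose parent
        # revision_number should be bumped when this row changes.
        revises_on_change = ('network',)
        network = orm.relationship(models_v2.Network)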

** Affects: neutron
 Importance: Undecided
 Assignee: Lujin Luo (luo-lujin)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Lujin Luo (luo-lujin)



[Yahoo-eng-team] [Bug 1535557] [NEW] Multiple l3 agents are scheduled to host one newly created router if multiple interfaces are added at the same time

2016-01-19 Thread Lujin Luo
Public bug reported:

I have three all-in-one controller nodes deployed by DevStack with the
latest code. Neutron servers on these controllers are set behind
Pacemaker and HAProxy to realize active/active HA. A MariaDB Galera
cluster is used as my database backend.

In neutron.conf, I have made the following change:
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler

When we add interfaces of multiple subnets to a newly created router,
we might end up with more than one l3 agent hosting this router. This
bug is not easy to reproduce. You may need to repeat the following
steps several times.

How to reproduce:

Prerequisite
make the following changes in neutron.conf
[DEFAULT]
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler

Step 0: Confirm multiple l3 agents are running
$ neutron agent-list --agent_type='L3 agent'
my result is shown http://paste.openstack.org/show/483963/

Step 1: Create two networks
$ neutron net-create net-l3agent-test-1
$ neutron net-create net-l3agent-test-2

Step 2: Add one subnet to each of the two networks
$ neutron subnet-create --name subnet-l3agent-test-1 net-l3agent-test-1 
192.168.11.0/24
$ neutron subnet-create --name subnet-l3agent-test-2 net-l3agent-test-2 
192.168.12.0/24

Step 3: Create a router
$ neutron router-create router-l3agent-test

Step 4: Immediately after creating the router, add the two subnets as
the router's interfaces at the same time
On controller1:
$ neutron router-interface-add router-l3agent-test subnet-l3agent-test-1
On controller2:
$ neutron router-interface-add router-l3agent-test subnet-l3agent-test-2

Step 5: Check which l3 agent(s) is/are hosting the router
$ neutron l3-agent-list-hosting-router router-l3agent-test
my result is shown http://paste.openstack.org/show/483962/

If you end up with only one l3 agent, please proceed as follows
Step 6: Clear interfaces on the router
$ neutron router-interface-delete router-l3agent-test subnet-l3agent-test-1
$ neutron router-interface-delete router-l3agent-test subnet-l3agent-test-2

Step 7: Delete the router
$ neutron router-delete router-l3agent-test

Go back to Step 3-5

** Affects: neutron
 Importance: Undecided
 Assignee: Lujin Luo (luo-lujin)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Lujin Luo (luo-lujin)


[Yahoo-eng-team] [Bug 1535225] [NEW] Multiple gateway ports are created if multiple API requests update same router's gateway at the same time

2016-01-18 Thread Lujin Luo
Public bug reported:

I have three controller nodes, and the Neutron servers on these
controllers are set behind Pacemaker and HAProxy to realize
active/active HA using DevStack. A MariaDB Galera cluster is used as my
database backend. I am using the latest code.

I have one external network and one router. If I run $ neutron
router-gateway-set on three controllers at the same time, I end up with
three ports created on the external network. Although only the most
recently created port is set as the router's gateway port, the former
two ports remain in both the routerports and ports tables.

Besides, the former two ports cannot be deleted with the $ neutron
router-gateway-clear command; they can only be deleted from the
database by hand.

How to reproduce:

Step 1: Create an external network
$ neutron net-create ext-net --router:external True

Step 2: Create a subnet on the external network
$ neutron subnet-create --name ext-subnet ext-net 192.168.122.0/24

Step 3: Create a router
$ neutron router-create router-gateway-test

Step 4: Set gateway to the router on three controllers at the same time
On controller1:
$ neutron router-gateway-set router-gateway-test ext-net

On controller2:
$ neutron router-gateway-set router-gateway-test ext-net

On controller3:
$ neutron router-gateway-set router-gateway-test ext-net

The port list on the router after the above commands could be seen here
http://paste.openstack.org/show/484091/

Step 5: Clear the gateway on the router
$ neutron router-gateway-clear router-gateway-test ext-net

The port list on the router after Step 5 could be seen here
http://paste.openstack.org/show/484092/

As we can see, the two ports created earlier were not able to be cleared
through CLI.

Step 6: Try to delete the two remaining router ports
$ neutron port-delete 3095887a-cb2d-46eb-bf56-be6305596868
$ neutron port-delete 76a42eda-33ce-4048-882a-6f8cb4d7137c
where '3095887a-cb2d-46eb-bf56-be6305596868' and
'76a42eda-33ce-4048-882a-6f8cb4d7137c' are the two remaining gateway
ports on the router.
The result is shown here http://paste.openstack.org/show/484094/

The routerports and ports database info could be seen here
http://paste.openstack.org/show/484095/ , where '9a261e95-1f3b-4c8f-
91a6-098b9fab7c25' is the id of the router we created in Step 3.

** Affects: neutron
 Importance: Undecided
 Assignee: Lujin Luo (luo-lujin)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Lujin Luo (luo-lujin)


[Yahoo-eng-team] [Bug 1535226] [NEW] Subnets with duplicated CIDRs could be added to one router if multiple commands are executed at the same time

2016-01-18 Thread Lujin Luo
Public bug reported:

I have three controller nodes, and the Neutron servers on these
controllers are set behind Pacemaker and HAProxy to realize
active/active HA using DevStack. A MariaDB Galera cluster is used as my
database backend. I am using the latest code.

If two subnets whose CIDRs overlap are added to one router as
interfaces, the expected result is that the later API request fails
with an error message like this:
Bad router request: Cidr 192.166.100.0/24 of subnet
bee7663c-f0a0-4120-b556-944af7ca40cf overlaps with cidr 192.166.0.0/16
of subnet 697c82cf-82fd-4187-b460-7046c81f13dc.

But when we run the two commands at the same time, both commands
succeed and the router ends up with two ports whose subnets have
overlapping CIDRs. I have tested more than 20 times, and only once did
I receive the expected error message.

How to reproduce

Step 1: Create a router
$ neutron router-create router-subnet-test

Step 2: Create two internal networks
$ neutron net-create net1
$ neutron net-create net2

Step 3: Add one subnet to each of these two networks
$ neutron subnet-create --name subnet1 net1 192.166.100.0/24
$ neutron subnet-create --name subnet2 net2 192.166.0.0/16

Here, we are creating two subnets on different networks with duplicated
CIDRs.

Step 4: Add the two subnets as one router's interface at the same time
On controller1:
$ neutron router-interface-add router-subnet-test subnet1
On controller2:
$ neutron router-interface-add router-subnet-test subnet2

Both commands would work and we could see that the router now has two ports, 
which have duplicated CIDRs
http://paste.openstack.org/show/483838/

In [1], we do have a _check_for_dup_router_subnet method, but when two
API requests arrive at the same time, both checks pass (see the race
sketch below).

[1]
https://github.com/openstack/neutron/blob/master/neutron/db/l3_db.py#L590
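A schematic of the check-then-act race (argument names and call
signatures are approximations of neutron/db/l3_db.py, not the actual
code):

    def add_router_interface(self, context, router_id, interface_info):
        subnet = self._core_plugin.get_subnet(
            context, interface_info['subnet_id'])
        router = self._get_router(context, router_id)
        # Both concurrent requests run this check before either one has
        # committed its new interface, so both see "no overlap"...
        self._check_for_dup_router_subnet(
            context, router, subnet['network_id'], subnet)
        # ...and both then create their router port and commit.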

** Affects: neutron
 Importance: Undecided
 Assignee: Lujin Luo (luo-lujin)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Lujin Luo (luo-lujin)



[Yahoo-eng-team] [Bug 1535549] [NEW] Multiple ports which have duplicated CIDRs are added as one router's interfaces if commands are executed at the same time

2016-01-18 Thread Lujin Luo
Public bug reported:

I have three controller nodes, and the Neutron servers on these
controllers are set behind Pacemaker and HAProxy to realize
active/active HA using DevStack. A MariaDB Galera cluster is used as my
database backend. I am using the latest code.

If two ports that belong to two subnets with overlapping CIDRs are
added to one router as its interfaces, the expected result is that the
later API request fails with an error message like:
BadRequest: Bad router request: Cidr 192.166.100.0/24 of subnet
bee7663c-f0a0-4120-b556-944af7ca40cf overlaps with cidr 192.166.0.0/16
of subnet 697c82cf-82fd-4187-b460-7046c81f13dc.

But if we run the two commands at the same time, both commands succeed.
The router ends up with two ports that belong to subnets with
overlapping CIDRs. I have tested 30 times, and only three times did I
receive the expected error message.

How to reproduce:

Step 1: Create a router
$ neutron router-create router-port-test

Step 2: Create two internal networks
$ neutron net-create net1
$ neutron net-create net2

Step 3: Add one subnet to each of these two networks
$ neutron subnet-create --name subnet1 net1 192.166.100.0/24
$ neutron subnet-create --name subnet2 net2 192.166.0.0/16

Here, we are creating two subnets on different networks with DUPLICATED
CIDRs.

Step 4: Create one port on each of these two networks
$ neutron port-create --name port1 net1
$ neutron port-create --name port2 net2

Step 5: Add these two ports as the router's interface at the same time
On controller1:
$ neutron router-interface-add router-port-test port=port1
On controller2:
$ neutron router-interface-add router-port-test port=port2

Both commands would work and we can see the ports listed on the router
as http://paste.openstack.org/show/483839/

This bug is similar to [1]. We also have the
_check_for_dup_router_subnet method [2] to check whether subnets have
overlapping CIDRs, but the problem happens when multiple API requests
arrive at the same time and all the checks pass.

[1] https://bugs.launchpad.net/neutron/+bug/1535226
[2] https://github.com/openstack/neutron/blob/master/neutron/db/l3_db.py#L535

** Affects: neutron
 Importance: Undecided
 Assignee: Lujin Luo (luo-lujin)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Lujin Luo (luo-lujin)


[Yahoo-eng-team] [Bug 1535554] [NEW] Multiple dhcp agents are scheduled to host one network automatically if multiple subnets are created at the same time

2016-01-18 Thread Lujin Luo
Public bug reported:

I have three all-in-one controller nodes deployed by DevStack with the
latest code. Neutron servers on these controllers are set behind
Pacemaker and HAProxy to realize active/active HA. A MariaDB Galera
cluster is used as my database backend.

In neutron.conf, I have made the following changes:
dhcp_agents_per_network = 1
network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.ChanceScheduler

Since I only allow one dhcp agent per network and have three
controllers, there are three dhcp agents in total for a given tenant.
After I create one network within this tenant, no dhcp agent is
scheduled to host it before any subnet is added. If I run multiple
commands at the same time to add subnets to the network, we may end up
with more than one dhcp agent hosting the network.

It is not easy to reproduce the bug. You might need to repeat the
following steps multiple times.

How to reproduce:

Prerequisite
make the following changes in neutron.conf
[DEFAULT]
dhcp_agents_per_network = 1
network_scheduler_driver = 
neutron.scheduler.dhcp_agent_scheduler.ChanceScheduler

Step 1: Confirm multiple dhcp agents are running
$ neutron agent-list --agent_type='DHCP agent'
my result is shown http://paste.openstack.org/show/483956/

Step 2: Create a network
$ neutron net-create net-dhcptest

Step 3: Create multiple subnets on the network at the same time
On controller1:
$ neutron subnet-create --name subnet-dhcptest-1 net-dhcptest 192.162.101.0/24
On controller2:
$ neutron subnet-create --name subnet-dhcptest-2 net-dhcptest 192.162.102.0/24

Step 4: Check which dhcp agent(s) is/are hosting the network
$ neutron dhcp-agent-list-hosting-net net-dhcptest
my result is shown http://paste.openstack.org/show/483958/

If you end up with only one dhcp agent, please delete the subnets and
network. Then repeat Step 1-4 several times.

** Affects: neutron
 Importance: Undecided
 Assignee: Lujin Luo (luo-lujin)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Lujin Luo (luo-lujin)


[Yahoo-eng-team] [Bug 1535551] [NEW] One port can be added as multiple routers' interfaces if commands are executed at the same time

2016-01-18 Thread Lujin Luo
Public bug reported:

I have three controller nodes, and the Neutron servers on these
controllers are set behind Pacemaker and HAProxy to realize
active/active HA using DevStack. A MariaDB Galera cluster is used as my
database backend. I am using the latest code.

If one port is added as multiple routers' interfaces, the expected
result is that only one API request is executed successfully and the
port is associated with one router. The other API requests should
receive an error message like:
PortInUseClient: Unable to complete operation on port
d2c97788-61d7-489a-8b20-7a6e8e39a217 for network
496de8cf-4284-41d7-ad6b-7dd5f232dc21. Port already has an attached
device 1b316d80-f5d8-4477-88df-54b376c4c8cd.

Besides, only one record for the port is allowed to exist in the
routerports table. However, if we run two commands at the same time to
add one port as two different routers' interfaces, both commands report
success, and two records, associating the port with both routers, end
up in the routerports table.

How to reproduce

Step 1: Create two routers
$ neutron router-create router-1
$ neutron router-create router-2

Step 2: Create an internal network
$ neutron net-create net1

Step 3: Add a subnet to the network
$ neutron subnet-create --name subnet1 net1 192.166.100.0/24

Step 4: Create a port in the network
$ neutron port-create --name port1 net1

Step 5: Add this port as two routers' interfaces at the same time
On controller1:
$ neutron router-interface-add router-1 port=port1
on controller2:
$ neutron router-interface-add router-2 port=port1

Both commands would return success, as shown
http://paste.openstack.org/show/483840/

Step 6: Check port list on both routers
The result is shown http://paste.openstack.org/show/483843/

As we can see, only one router is successfully associated to the port

Step 7: Check routerports database
http://paste.openstack.org/show/483842/

where '99276755-236a-44b7-bf97-b2234d97028b' is the port_id of the port
we created in Step 4.

To sum up, we have two issues here:
a) Only one API request is executed successfully, but both commands
return success.
b) The routerports table is updated twice, and we need to delete the
stale record.

The related source code is [1].

[1]
https://github.com/openstack/neutron/blob/master/neutron/db/l3_db.py#L535

** Affects: neutron
 Importance: Undecided
 Assignee: Lujin Luo (luo-lujin)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Lujin Luo (luo-lujin)


[Yahoo-eng-team] [Bug 1534445] [NEW] Multiple floating IPs from the same external network are associated to one port when commands are executed at the same time

2016-01-14 Thread Lujin Luo
Public bug reported:

I have three controller nodes, and the Neutron servers on these
controllers are set behind Pacemaker and HAProxy to realize
active/active HA using DevStack. A MariaDB Galera cluster is used as my
database backend. I am using the latest code.

If I run multiple commands at the same time that each create a floating
IP and associate it with the same port, all of the commands return
success, and I end up with multiple floating IPs from the same external
network associated with the same port.

How to reproduce:

Step 1: Create a network
$ neutron net-create net1

Step 2: Create a subnet on the network
$ neutron subnet-create --name subnet1 net1 192.168.100.0/24

Step 3: Create a port on the network
$ neutron port-create net1

Step 4: Create a router
$ neutron router-create router-floatingip-test

Step 5: Add the subnet as its interface
$ neutron router-interface-add router-floatingip-test subnet1

Step 6: Create an external network
$ neutron net-create ext-net --router:external True

Step 7: Add a subnet on the external network
$ neutron subnet-create --name ext-subnet ext-net 192.168.122.0/24

Step 8: Set the external network as the router's default gateway
$ neutron router-gateway-set router-floatingip-test ext-net

Step 9: Run the three commands at the same time to create floating IPs
On controller1:
$ neutron floatingip-create ext-net --port-id 
b53d0826-53c4-427b-81b2-3ab6cb0f4511

On controller2:
$ neutron floatingip-create ext-net --port-id 
b53d0826-53c4-427b-81b2-3ab6cb0f4511

On controller3:
$ neutron floatingip-create ext-net --port-id 
b53d0826-53c4-427b-81b2-3ab6cb0f4511

where, port_id b53d0826-53c4-427b-81b2-3ab6cb0f4511 is the port we
created in Step 3.

The result would be three floating IPs associated to the same port, as
shown in http://paste.openstack.org/show/483691/

The expected error message (say, if we run the second command after the
first one succeeds) would be:
Cannot associate floating IP 192.168.122.20
(bd4d47a5-45c1-48e1-a48a-aef08039a955) with port
b53d0826-53c4-427b-81b2-3ab6cb0f4511 using fixed IP 192.168.100.3, as
that fixed IP already has a floating IP on external network
920ee0f3-3db8-4005-8d29-0be474947186.
since one port with one fixed IP is not allowed to have multiple
floating IPs from the same external network.

In the above procedure, I set port_id when creating these three
floating IPs. The same bug occurred when I updated three existing
floating IPs to be associated with the same port at the same time.

I assume this bug happens because multiple API requests are executed
concurrently and the validation check [1] succeeds for every one of
them.

[1]
https://github.com/openstack/neutron/blob/master/neutron/db/l3_db.py#L915

** Affects: neutron
 Importance: Undecided
 Assignee: Lujin Luo (luo-lujin)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Lujin Luo (luo-lujin)
