[Yahoo-eng-team] [Bug 1793747] Re: Fails to boot instance using Blazar flavor if compute host names are in uppercase

2018-09-26 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/604898
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=c9448cbdbf96e7436b13ac5c3a92addfc0f2f5a2
Submitter: Zuul
Branch:master

commit c9448cbdbf96e7436b13ac5c3a92addfc0f2f5a2
Author: Dan Smith 
Date:   Mon Sep 24 13:28:29 2018 -0700

Revert "Make host_aggregate_map dictionary case-insensitive"

This reverts commit 0dc0db932e3ad5ad911f2072015cb9854f6e4e23.

The original change caused our host state processing to be inconsistent
with our own hypervisors API. Automation tooling that used our API
to add hosts to aggregates would fail silently. We are reverting
this and will propose a check on the aggregate host add action
which will confirm the case-sensitive mapping of the host being
added, which is what we should have done in the first place.

Change-Id: Ibd44ba9de5680958f55f0ae6325cfc33dabadc4c
Closes-Bug: #1793747
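
For illustration, the follow-up case-sensitive check described above could look
roughly like the following sketch (the function and variable names are invented
for this example and are not nova's actual code):

    # Minimal sketch: refuse to add a host to an aggregate unless its name
    # matches a registered compute host name case-sensitively.
    def validate_aggregate_host(requested_host, registered_hosts):
        if requested_host not in registered_hosts:
            # A case-insensitive match is not good enough; failing loudly here
            # avoids silently mapping "openstack-virtualbox" to
            # "Openstack-VirtualBox" behind the caller's back.
            raise ValueError(
                'Compute host %s is not registered; host names are '
                'case-sensitive' % requested_host)

    validate_aggregate_host('Openstack-VirtualBox', {'Openstack-VirtualBox'})
    # validate_aggregate_host('openstack-virtualbox', {'Openstack-VirtualBox'})
    # would raise ValueError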


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1793747

Title:
  Fails to boot instance using Blazar flavor if compute host names are
  in uppercase

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  In Progress
Status in OpenStack Compute (nova) queens series:
  In Progress
Status in OpenStack Compute (nova) rocky series:
  In Progress

Bug description:
  Bug Description:
  ================

  Steps to reproduce:
  ===================

  $ nova hypervisor-list
  +--------------------------------------+----------------------+-------+---------+
  | ID                                   | Hypervisor hostname  | State | Status  |
  +--------------------------------------+----------------------+-------+---------+
  | e517e75b-d57c-45b2-af41-6be2fed536c6 | Openstack-VirtualBox | up    | enabled |
  +--------------------------------------+----------------------+-------+---------+

  Step 1: Add host into freepool
  $ blazar host-create Openstack-VirtualBox

  $ blazar host-list
  +----+----------------------+-------+-----------+----------+
  | id | hypervisor_hostname  | vcpus | memory_mb | local_gb |
  +----+----------------------+-------+-----------+----------+
  | 1  | Openstack-VirtualBox |     4 |     11941 |       91 |
  +----+----------------------+-------+-----------+----------+

  
  Step 2: Create a lease
  $ blazar lease-create --reservation 
resource_type=virtual:instance,vcpus=1,memory_mb=1024,disk_gb=20,amount=1,affinity=False
 --start-date "2018-08-27 12:59" --end-date "2018-08-27 13:55" lease-1

  nova_api database entries related to aggregates:

  Blazar creates aggregate with id=18
  mysql> select * from aggregates;
  +---------------------+------------+----+--------------------------------------+--------------------------------------+
  | created_at          | updated_at | id | uuid                                 | name                                 |
  +---------------------+------------+----+--------------------------------------+--------------------------------------+
  | 2018-08-13 06:49:37 | NULL       |  1 | 2a7d838f-4e42-48af-a9a9-faf3f29e3c96 | freepool                             |
  | 2018-08-29 13:43:15 | NULL       | 18 | 88c37cc5-373a-4da5-820f-508d25f00903 | 8c85522a-cc39-4a0d-a5ca-4de92e4d2c1f |
  +---------------------+------------+----+--------------------------------------+--------------------------------------+

  Blazar adds aggregate with aggregate_id=18, to host at the time of
  lease-start event.

  mysql> select * from aggregate_hosts;
  +---------------------+------------+----+----------------------+--------------+
  | created_at          | updated_at | id | host                 | aggregate_id |
  +---------------------+------------+----+----------------------+--------------+
  | 2018-08-29 13:34:46 | NULL       | 32 | Openstack-VirtualBox |            1 |
  | 2018-08-29 13:39:05 | NULL       | 34 | Openstack-VirtualBox |           18 |
  +---------------------+------------+----+----------------------+--------------+

  Step 3: Create a server: Please specify the flavor of the reservation and 
group_id as a scheduler hint.
  $ openstack server create --flavor 03067174-2a5e-43f7-baf7-037aac23b4ef 
--image cirros-0.4.0-x86_64-disk --network 42d6f419-b445-40a6-b542-e8a502c6ae64 
--hint group=09389292-6639-48d6-9709-045061f42ebf instance-1

  For more details regarding instance reservation please refer:
  https://docs.openstack.org/blazar/latest/cli/instance-reservation.html

  Logs
  ====

  Service logs of n-sch:
  ===
  Aug 27 13:00:31 Openstack-VirtualBox nova-scheduler[22177]: INFO nova.filters 
[None 

[Yahoo-eng-team] [Bug 1794647] [NEW] unnecessary inst_base was created

2018-09-26 Thread fupingxie
Public bug reported:

Description
===
When migrating an instance to another host, if inst_base is on shared storage,
nova recreates inst_base after moving it to inst_base_resize. However, the rbd
backend is treated as shared storage too, so inst_base is also recreated on the
source host. In this situation, even after the resize finishes, the inst_base
directory on the source host still exists. This will trigger errors when
live-migrating back to the source host.
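
A rough sketch of the behaviour described above (all names are illustrative
pseudocode, not the actual libvirt driver code):

    import os
    import shutil

    def move_instance_dir_for_resize(inst_base, storage_is_shared, backend_is_rbd):
        inst_base_resize = inst_base + '_resize'
        shutil.move(inst_base, inst_base_resize)
        # Today the directory is recreated whenever the storage looks shared,
        # and rbd counts as shared, which leaves an unnecessary inst_base on
        # the source host. Skipping the rbd case avoids the leftover directory.
        if storage_is_shared and not backend_is_rbd:
            os.makedirs(inst_base)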

** Affects: nova
 Importance: Undecided
 Assignee: fupingxie (fpxie)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => fupingxie (fpxie)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1794647

Title:
  unnecessary inst_base was created

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  When migrating an instance to another host, if inst_base is on shared storage,
  nova recreates inst_base after moving it to inst_base_resize. However, the rbd
  backend is treated as shared storage too, so inst_base is also recreated on the
  source host. In this situation, even after the resize finishes, the inst_base
  directory on the source host still exists. This will trigger errors when
  live-migrating back to the source host.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1794647/+subscriptions



[Yahoo-eng-team] [Bug 1794406] Re: neutron.objects lost PortForwarding in setup.cfg

2018-09-26 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/605302
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=66991f1c8be86560ef2d774ed7a7b07ff2834ab1
Submitter: Zuul
Branch:master

commit 66991f1c8be86560ef2d774ed7a7b07ff2834ab1
Author: Wenran Xiao 
Date:   Wed Sep 26 10:15:57 2018 +0800

Add PortForwarding to neutron.objects entrypoint.

Closes-bug: #1794406
Change-Id: Ifad26642d730456136dfa9177d1c9515fe5ec421


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1794406

Title:
  neutron.objects lost PortForwarding in setup.cfg

Status in neutron:
  Fix Released

Bug description:
  PortForwarding is one of neutron's objects:
  https://github.com/openstack/neutron/blob/master/neutron/objects/port_forwarding.py#L31
  But it is not found in the neutron.objects entrypoint:
  https://github.com/openstack/neutron/blob/master/setup.cfg#L156
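
  For context, one way to check whether an object is exposed through the
  neutron.objects entry-point namespace (a generic pkg_resources illustration,
  not neutron test code):

      import pkg_resources

      # Names registered under the neutron.objects entry-point group; before
      # the fix above, 'PortForwarding' was missing from this set.
      names = {ep.name for ep in pkg_resources.iter_entry_points('neutron.objects')}
      print('PortForwarding' in names)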

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1794406/+subscriptions



[Yahoo-eng-team] [Bug 1794259] Re: rocky upgrade path broken requirements pecan too low

2018-09-26 Thread Launchpad Bug Tracker
This bug was fixed in the package neutron - 2:13.0.0-0ubuntu4

---
neutron (2:13.0.0-0ubuntu4) cosmic; urgency=medium

  * d/control: Update min version of python(3)-pecan to rocky version
(LP: #1794259).

 -- Corey Bryant   Tue, 25 Sep 2018 09:12:54 -0400

** Changed in: neutron (Ubuntu Cosmic)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1794259

Title:
  rocky upgrade path broken requirements pecan too low

Status in Ubuntu Cloud Archive:
  Fix Committed
Status in Ubuntu Cloud Archive rocky series:
  Fix Committed
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Cosmic:
  Fix Released

Bug description:
  When upgrading to Rocky we noticed that the pecan requirement is:
  pecan!=1.0.2,!=1.0.3,!=1.0.4,!=1.2,>=1.1.1 # BSD

  https://github.com/openstack/neutron/blob/stable/rocky/requirements.txt#L11

  But with python2-pecan-1.1.2, which should satisfy this requirement, we get the
  traceback below. After upgrading to python2-pecan-1.3.2 the issue was solved.

  2018-09-25 11:03:37.579 416002 INFO neutron.wsgi [-] 172.20.106.11 "GET / 
HTTP/1.0" status: 500  len: 2523 time: 0.0019162
  2018-09-25 11:03:39.582 416002 INFO neutron.wsgi [-] Traceback (most recent 
call last):
File "/usr/lib/python2.7/site-packages/eventlet/wsgi.py", line 490, in 
handle_one_response
  result = self.application(self.environ, start_response)
File "/usr/lib/python2.7/site-packages/paste/urlmap.py", line 203, in 
__call__
  return app(environ, start_response)
File "/usr/lib/python2.7/site-packages/webob/dec.py", line 129, in __call__
  resp = self.call_func(req, *args, **kw)
File "/usr/lib/python2.7/site-packages/webob/dec.py", line 193, in call_func
  return self.func(req, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/oslo_middleware/base.py", line 131, 
in __call__
  response = req.get_response(self.application)
File "/usr/lib/python2.7/site-packages/webob/request.py", line 1313, in send
  application, catch_exc_info=False)
File "/usr/lib/python2.7/site-packages/webob/request.py", line 1277, in 
call_application
  app_iter = application(self.environ, start_response)
File "/usr/lib/python2.7/site-packages/webob/dec.py", line 129, in __call__
  resp = self.call_func(req, *args, **kw)
File "/usr/lib/python2.7/site-packages/webob/dec.py", line 193, in call_func
  return self.func(req, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/oslo_middleware/base.py", line 131, 
in __call__
  response = req.get_response(self.application)
File "/usr/lib/python2.7/site-packages/webob/request.py", line 1313, in send
  application, catch_exc_info=False)
File "/usr/lib/python2.7/site-packages/webob/request.py", line 1277, in 
call_application
  app_iter = application(self.environ, start_response)
File "/usr/lib/python2.7/site-packages/pecan/middleware/recursive.py", line 
56, in __call__
  return self.application(environ, start_response)
File "/usr/lib/python2.7/site-packages/pecan/core.py", line 835, in __call__
  return super(Pecan, self).__call__(environ, start_response)
File "/usr/lib/python2.7/site-packages/pecan/core.py", line 677, in __call__
  controller, args, kwargs = self.find_controller(state)
File "/usr/lib/python2.7/site-packages/pecan/core.py", line 853, in 
find_controller
  controller, args, kw = super(Pecan, self).find_controller(_state)
File "/usr/lib/python2.7/site-packages/pecan/core.py", line 480, in 
find_controller
  accept.startswith('text/html,') and
  AttributeError: 'NoneType' object has no attribute 'startswith'

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1794259/+subscriptions



[Yahoo-eng-team] [Bug 1793768] Re: VirtDriverNotReady trace in _sync_power_states periodic from ironic nova-compute

2018-09-26 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/604376
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=6eb32bc40340fed631b9fce1245326e0ebc1c540
Submitter: Zuul
Branch:master

commit 6eb32bc40340fed631b9fce1245326e0ebc1c540
Author: Matt Riedemann 
Date:   Fri Sep 21 10:44:51 2018 -0400

Ignore VirtDriverNotReady in _sync_power_states periodic task

Change Ib0ec1012b74e9a9e74c8879f3feed5f9332b711f introduced
a new VirtDriverNotReady exception which the ironic driver raises
when asked to retrieve a list of nodes and ironic-api is not
available, like if nova-compute is started before ironic-api.
This is normal and meant to be self-healing, but we can get it
in other periodic tasks besides update_available_resource which
leads to ugly exception traces on startup in the logs. This adds
handling for the exception in the _sync_power_states periodic
task.

Change-Id: Iaf29b9e7a92705ac8a2e7ef338b92f7f1203506d
Closes-Bug: #1793768
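
A minimal sketch of the handling the commit describes, assuming a driver
object whose get_num_instances() may raise VirtDriverNotReady (the structure
is simplified and is not the literal nova code):

    from nova import exception

    def sync_power_states(driver):
        try:
            num_vm_instances = driver.get_num_instances()
        except exception.VirtDriverNotReady:
            # Not an error: the ironic driver raises this until ironic-api is
            # reachable, so skip this periodic run and try again next time.
            return None
        # ... the normal power-state reconciliation would continue here
        return num_vm_instances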


** Changed in: nova
   Status: In Progress => Fix Released

** Changed in: nova/rocky
   Status: Confirmed => In Progress

** Changed in: nova/rocky
 Assignee: (unassigned) => Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1793768

Title:
  VirtDriverNotReady trace in _sync_power_states periodic from ironic
  nova-compute

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  Confirmed
Status in OpenStack Compute (nova) rocky series:
  In Progress

Bug description:
  On nova-compute startup with the ironic driver, the _sync_power_states
  periodic can fail and trace with a VirtDriverNotReady error if ironic-
  api is not yet running. This is normal, and we shouldn't trace for it.

  http://logs.openstack.org/27/602127/2/check/ironic-tempest-dsvm-ipa-
  wholedisk-bios-agent_ipmitool-
  
tinyipa/4238d0f/controller/logs/screen-n-cpu.txt.gz?level=TRACE#_Sep_20_21_52_03_587436

  Sep 20 21:52:03.587436 ubuntu-xenial-inap-mtl01-0002177186 
nova-compute[14241]: ERROR oslo_service.periodic_task [None 
req-2339900e-55df-468b-95ff-b904d73d5728 None None] Error during 
ComputeManager._sync_power_states: VirtDriverNotReady: Virt driver is not ready.
  Sep 20 21:52:03.587629 ubuntu-xenial-inap-mtl01-0002177186 
nova-compute[14241]: ERROR oslo_service.periodic_task Traceback (most recent 
call last):
  Sep 20 21:52:03.587816 ubuntu-xenial-inap-mtl01-0002177186 
nova-compute[14241]: ERROR oslo_service.periodic_task   File 
"/usr/local/lib/python2.7/dist-packages/oslo_service/periodic_task.py", line 
220, in run_periodic_tasks
  Sep 20 21:52:03.588019 ubuntu-xenial-inap-mtl01-0002177186 
nova-compute[14241]: ERROR oslo_service.periodic_task task(self, context)
  Sep 20 21:52:03.588182 ubuntu-xenial-inap-mtl01-0002177186 
nova-compute[14241]: ERROR oslo_service.periodic_task   File 
"/opt/stack/nova/nova/compute/manager.py", line 7462, in _sync_power_states
  Sep 20 21:52:03.588344 ubuntu-xenial-inap-mtl01-0002177186 
nova-compute[14241]: ERROR oslo_service.periodic_task num_vm_instances = 
self.driver.get_num_instances()
  Sep 20 21:52:03.588517 ubuntu-xenial-inap-mtl01-0002177186 
nova-compute[14241]: ERROR oslo_service.periodic_task   File 
"/opt/stack/nova/nova/virt/driver.py", line 183, in get_num_instances
  Sep 20 21:52:03.588714 ubuntu-xenial-inap-mtl01-0002177186 
nova-compute[14241]: ERROR oslo_service.periodic_task return 
len(self.list_instances())
  Sep 20 21:52:03.55 ubuntu-xenial-inap-mtl01-0002177186 
nova-compute[14241]: ERROR oslo_service.periodic_task   File 
"/opt/stack/nova/nova/virt/ironic/driver.py", line 624, in list_instances
  Sep 20 21:52:03.589071 ubuntu-xenial-inap-mtl01-0002177186 
nova-compute[14241]: ERROR oslo_service.periodic_task 
fields=['instance_uuid'], limit=0)
  Sep 20 21:52:03.589263 ubuntu-xenial-inap-mtl01-0002177186 
nova-compute[14241]: ERROR oslo_service.periodic_task   File 
"/opt/stack/nova/nova/virt/ironic/driver.py", line 611, in _get_node_list
  Sep 20 21:52:03.589447 ubuntu-xenial-inap-mtl01-0002177186 
nova-compute[14241]: ERROR oslo_service.periodic_task raise 
exception.VirtDriverNotReady()
  Sep 20 21:52:03.589602 ubuntu-xenial-inap-mtl01-0002177186 
nova-compute[14241]: ERROR oslo_service.periodic_task VirtDriverNotReady: Virt 
driver is not ready.
  Sep 20 21:52:03.589770 ubuntu-xenial-inap-mtl01-0002177186 
nova-compute[14241]: ERROR oslo_service.periodic_task

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1793768/+subscriptions



[Yahoo-eng-team] [Bug 1793347] Re: keystone upgrade fails q->r oslo.log requirement too low

2018-09-26 Thread Corey Bryant
This bug was fixed in the package keystone - 2:14.0.0-0ubuntu2~cloud0
---

 keystone (2:14.0.0-0ubuntu2~cloud0) bionic-rocky; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 keystone (2:14.0.0-0ubuntu2) cosmic; urgency=medium
 .
   * d/control: Set min python-oslo.log to rocky version (3.39.0) as
 requirements.txt min version is too low (LP: #1793347).


** Changed in: cloud-archive
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1793347

Title:
  keystone upgrade fails q->r oslo.log requirement too low

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive rocky series:
  Fix Released
Status in OpenStack Identity (keystone):
  New
Status in keystone package in Ubuntu:
  Fix Released
Status in keystone source package in Cosmic:
  Fix Released

Bug description:
  When upgrading Keystone from queens to rocky, the requirements.txt for
  rocky says oslo.log >= 3.36.0, but versionutils.deprecated.ROCKY was not
  introduced until 3.37.0.

  requirements.txt should be bumped to at least 3.37.0.
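
  As a quick illustration of why the minimum version matters (a standalone
  snippet, not keystone code): the ROCKY marker simply does not exist on older
  oslo.log releases, so the attribute lookup fails at import time.

      from oslo_log import versionutils

      # On oslo.log < 3.37.0 this prints False, and keystone's
      # deprecated_since=versionutils.deprecated.ROCKY raises AttributeError.
      print(hasattr(versionutils.deprecated, 'ROCKY'))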

  Error when running db sync:
  Traceback (most recent call last):
File "/bin/keystone-manage", line 6, in 
  from keystone.cmd.manage import main
File "/usr/lib/python2.7/site-packages/keystone/cmd/manage.py", line 19, in 

  from keystone.cmd import cli
File "/usr/lib/python2.7/site-packages/keystone/cmd/cli.py", line 29, in 

  from keystone.cmd import bootstrap
File "/usr/lib/python2.7/site-packages/keystone/cmd/bootstrap.py", line 17, 
in 
  from keystone.common import driver_hints
File "/usr/lib/python2.7/site-packages/keystone/common/driver_hints.py", 
line 18, in 
  from keystone import exception
File "/usr/lib/python2.7/site-packages/keystone/exception.py", line 20, in 

  import keystone.conf
File "/usr/lib/python2.7/site-packages/keystone/conf/__init__.py", line 27, 
in 
  from keystone.conf import default
File "/usr/lib/python2.7/site-packages/keystone/conf/default.py", line 60, 
in 
  deprecated_since=versionutils.deprecated.ROCKY,
  AttributeError: type object 'deprecated' has no attribute 'ROCKY'

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1793347/+subscriptions



[Yahoo-eng-team] [Bug 1794569] Re: DVR with static routes may cause routed traffic to be dropped

2018-09-26 Thread Nate Johnston
Marking this 'invalid' since, as you suggest, Neutron 9.4.1 (Newton)
reached end of life on 10/25/2017 and is no longer supported upstream. If
you believe this is still an issue in master, please comment again
and I will change the status appropriately.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1794569

Title:
  DVR with static routes may cause routed traffic to be dropped

Status in neutron:
  Invalid

Bug description:
  Neutron version: 9.4.1 (EOL, but bug may still be present)
  Network scenario: Openvswitch with DVR
  Openvswitch version: 2.6.1
  OpenStack installation version: Newton
  Operating system: Ubuntu 16.04.5 LTS
  Kernel: 4.4.0-135 x86_64

  Symptoms:
  Instances whose default gateway is a DVR interface (10.10.255.1 in our case) 
occasionally lose connectivity to non-local networks. Meaning, any packet that 
had to pass through the local virtual router is dropped. Sometimes this 
behavior lasts for a few milliseconds, sometimes tens of seconds. Since 
floating-ip traffic is a subset of those cases, north-south connectivity breaks 
too.

  Steps to reproduce:
  - Use DVR routing mode
  - Configure at least one static route in the virtual router, whose next hop 
is NOT an address managed by Neutron (e.g. a physical interface on a VPN 
gateway; in our case 10.2.0.0/24 with next-hop 10.10.0.254)
  - Have an instance plugged into a Flat or VLAN network, use the virtual 
router as the default gateway
  - Try to reach a host inside the statically-routed network from within the 
instance

  Possible explanation:
  Distributed routers get their ARP caches populated by neutron-l3-agent at its 
startup. The agent takes all the ports in a given subnet and fills in their 
IP-to-MAC mappings inside the qrouter- namespace, as permanent entries (meaning 
they won't expire from the cache). However, if Neutron doesn't manage an IP (as 
is the case with our static route's next-hop 10.10.0.254), a permanent record 
isn't created, naturally.

  So when we try to reach a host in the statically-routed network (e.g.
  10.2.0.10) from inside the instance, the packet goes to default
  gateway (10.10.255.1). After it arrives to the qrouter- namespace,
  there is a static route for this host pointing to 10.10.0.254 as next-
  hop. However qrouter- doesn't have its MAC address, so what it does is
  it sends out an ARP request with source MAC of the distributed
  router's qr- interface.

  And that's the problem. Since ARP requests are usually broadcasts,
  they land on pretty much every hypervisor in the network within the
  same VLAN. Combined with the fact that qr- interfaces in a given
  qrouter- namespace have the same MAC address on every host, this leads
  to a disaster: every integration bridge will receive that ARP request
  on the port that connects it to the Flat/VLAN network and learns that
  the qr- interface's MAC address is actually there - not on the qr-
  port also attached to br-int. From this moment on, packets from
  instances that need to pass via qrouter- are forwarded to the
  Flat/VLAN network interface, circumventing the qrouter- namespace.
  This is especially problematic with traffic that needs to be SNAT-ed
  on its way out.

  Workarounds:
  - The workaround that we used is creating stub Neutron ports for next-hop 
addresses, with correct MACs. After restarting neutron-l3-agents, they got 
populated into the qrouter- ARP cache as permanent entries.
  - Next option is setting the static route into the instances' routing tables 
instead of the virtual router. This way it's the instance that makes ARP 
discovery and not the qrouter- namespace.
  - Another workaround might consist of using ebtables/arptables on hypervisors 
to block incoming ARP requests from qrouters.

  Possible long-term solution:
  Maybe it would help if ancillary bridges (those connecting Flat/VLAN network 
interfaces to br-int) contained an OVS flow that drops ARP requests with source 
MAC addresses of qr- interfaces originating from the physical interface. Since 
their IPs and MACs are well defined (their device_owner is 
"network:router_interface_distributed"), it shouldn't be a problem setting 
these flows up. However I'm not sure of the shortcomings of this approach.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1794569/+subscriptions



[Yahoo-eng-team] [Bug 1788159] Re: AngularJS keypair panel is broken

2018-09-26 Thread Lars Erik Pedersen
** Changed in: horizon
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1788159

Title:
  AngularJS keypair panel is broken

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  In queens, the AngularJS panel for key pairs is enabled by default. I
  would consider it broken, because both "Create keypair" and "Import
  keypair" don't work. They throw the following JS error to the console:
  http://paste.openstack.org/show/728509/

  If I set 
  ANGULAR_FEATURES = {
  'images_panel': True,
  'key_pairs_panel': False,
  'flavors_panel': False,
  'domains_panel': False,
  'users_panel': False,
  'groups_panel': False,
  'roles_panel': True
  }

  in local_settings.py - effectively disabling the Angular panel for key
  pairs, it (obviously) works.

  Somewhat related to https://bugs.launchpad.net/ubuntu/+source
  /designate-dashboard/+bug/1659620?comments=all where they claim it's a
  problem with the openstack-dashboard, and that it's still broken in
  queens.

  Using openstack-dashboard 3:13.0.1-0ubuntu1~cloud0

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1788159/+subscriptions



[Yahoo-eng-team] [Bug 1794558] Re: Tempest test AttachInterfacesUnderV243Test results in FixedIpNotFoundForSpecificInstance traceback in n-cpu logs

2018-09-26 Thread Matt Riedemann
** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1794558

Title:
  Tempest test AttachInterfacesUnderV243Test results in
  FixedIpNotFoundForSpecificInstance traceback in n-cpu logs

Status in tempest:
  In Progress

Bug description:
  This new Tempest change was recently merged:

  https://review.openstack.org/#/c/587734/

  And results in a traceback in the n-cpu logs:

  http://logs.openstack.org/98/604898/2/check/nova-
  next/df58e8a/logs/screen-n-cpu.txt.gz?level=TRACE#_Sep_26_00_20_14_150429

  Sep 26 00:20:14.150429 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server [None req-7aa4027c-2550-461f-a293-9b89d479f760 
tempest-AttachInterfacesUnderV243Test-1609135526 
tempest-AttachInterfacesUnderV243Test-1609135526] Exception during message 
handling: FixedIpNotFoundForSpecificInstance: Instance 
609c0565-d193-445c-be8f-4667eecbf2f4 doesn't have fixed IP '10.1.0.4'.
  Sep 26 00:20:14.150678 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server Traceback (most recent call last):
  Sep 26 00:20:14.150905 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
163, in _process_incoming
  Sep 26 00:20:14.151124 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
  Sep 26 00:20:14.151352 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
265, in dispatch
  Sep 26 00:20:14.151572 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, 
ctxt, args)
  Sep 26 00:20:14.151792 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
194, in _do_dispatch
  Sep 26 00:20:14.152036 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
  Sep 26 00:20:14.152270 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/nova/nova/exception_wrapper.py", line 79, in wrapped
  Sep 26 00:20:14.152483 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server function_name, call_dict, binary, tb)
  Sep 26 00:20:14.152696 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  Sep 26 00:20:14.152943 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server self.force_reraise()
  Sep 26 00:20:14.153182 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  Sep 26 00:20:14.153408 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
  Sep 26 00:20:14.153612 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/nova/nova/exception_wrapper.py", line 69, in wrapped
  Sep 26 00:20:14.153843 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server return f(self, context, *args, **kw)
  Sep 26 00:20:14.154056 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 216, in decorated_function
  Sep 26 00:20:14.154284 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server kwargs['instance'], e, sys.exc_info())
  Sep 26 00:20:14.154518 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  Sep 26 00:20:14.154737 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server self.force_reraise()
  Sep 26 00:20:14.154975 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  Sep 26 00:20:14.155192 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
  Sep 26 00:20:14.155405 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 204, in 

[Yahoo-eng-team] [Bug 1794569] [NEW] DVR with static routes may cause routed traffic to be dropped

2018-09-26 Thread Peter Slovak
Public bug reported:

Neutron version: 9.4.1 (EOL, but bug may still be present)
Network scenario: Openvswitch with DVR
Openvswitch version: 2.6.1
OpenStack installation version: Newton
Operating system: Ubuntu 16.04.5 LTS
Kernel: 4.4.0-135 x86_64

Symptoms:
Instances whose default gateway is a DVR interface (10.10.255.1 in our case) 
occasionally lose connectivity to non-local networks. Meaning, any packet that 
had to pass through the local virtual router is dropped. Sometimes this 
behavior lasts for a few milliseconds, sometimes tens of seconds. Since 
floating-ip traffic is a subset of those cases, north-south connectivity breaks 
too.

Steps to reproduce:
- Use DVR routing mode
- Configure at least one static route in the virtual router, whose next hop is 
NOT an address managed by Neutron (e.g. a physical interface on a VPN gateway; 
in our case 10.2.0.0/24 with next-hop 10.10.0.254)
- Have an instance plugged into a Flat or VLAN network, use the virtual router 
as the default gateway
- Try to reach a host inside the statically-routed network from within the 
instance

Possible explanation:
Distributed routers get their ARP caches populated by neutron-l3-agent at its 
startup. The agent takes all the ports in a given subnet and fills in their 
IP-to-MAC mappings inside the qrouter- namespace, as permanent entries (meaning 
they won't expire from the cache). However, if Neutron doesn't manage an IP (as 
is the case with our static route's next-hop 10.10.0.254), a permanent record 
isn't created, naturally.

So when we try to reach a host in the statically-routed network (e.g.
10.2.0.10) from inside the instance, the packet goes to default gateway
(10.10.255.1). After it arrives to the qrouter- namespace, there is a
static route for this host pointing to 10.10.0.254 as next-hop. However
qrouter- doesn't have its MAC address, so what it does is it sends out
an ARP request with source MAC of the distributed router's qr-
interface.

And that's the problem. Since ARP requests are usually broadcasts, they
land on pretty much every hypervisor in the network within the same
VLAN. Combined with the fact that qr- interfaces in a given qrouter-
namespace have the same MAC address on every host, this leads to a
disaster: every integration bridge will receive that ARP request on the
port that connects it to the Flat/VLAN network and learns that the qr-
interface's MAC address is actually there - not on the qr- port also
attached to br-int. From this moment on, packets from instances that
need to pass via qrouter- are forwarded to the Flat/VLAN network
interface, circumventing the qrouter- namespace. This is especially
problematic with traffic that needs to be SNAT-ed on its way out.

Workarounds:
- The workaround that we used is creating stub Neutron ports for next-hop
addresses, with correct MACs (see the sketch after this list). After restarting
neutron-l3-agents, they got populated into the qrouter- ARP cache as permanent
entries.
- Another workaround might consist of using ebtables/arptables on hypervisors 
to block incoming ARP requests from qrouters.
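
A sketch of that stub-port workaround using openstacksdk (the cloud name, the
network and subnet names, and the MAC address below are placeholders; only the
next-hop IP 10.10.0.254 comes from this report):

    import openstack

    conn = openstack.connect(cloud='mycloud')
    network = conn.network.find_network('provider-vlan')
    subnet = conn.network.find_subnet('provider-subnet')

    # Create a port that pins the next-hop IP to its real MAC so the L3 agent
    # installs a permanent ARP entry for it in the qrouter- namespace.
    conn.network.create_port(
        network_id=network.id,
        fixed_ips=[{'subnet_id': subnet.id, 'ip_address': '10.10.0.254'}],
        mac_address='aa:bb:cc:dd:ee:ff',
        name='stub-next-hop-10.10.0.254')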

Possible long-term solution:
Maybe it would help if ancillary bridges (those connecting Flat/VLAN network 
interfaces to br-int) contained an OVS flow that drops ARP requests with source 
MAC addresses of qr- interfaces originating from the physical interface. Since 
their IPs and MACs are well defined (their device_owner is 
"network:router_interface_distributed"), it shouldn't be a problem setting 
these flows up. However I'm not sure of the shortcomings of this approach.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: drop dvr route static traffic

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1794569

Title:
  DVR with static routes may cause routed traffic to be dropped

Status in neutron:
  New

Bug description:
  Neutron version: 9.4.1 (EOL, but bug may still be present)
  Network scenario: Openvswitch with DVR
  Openvswitch version: 2.6.1
  OpenStack installation version: Newton
  Operating system: Ubuntu 16.04.5 LTS
  Kernel: 4.4.0-135 x86_64

  Symptoms:
  Instances whose default gateway is a DVR interface (10.10.255.1 in our case) 
occasionally lose connectivity to non-local networks. Meaning, any packet that 
had to pass through the local virtual router is dropped. Sometimes this 
behavior lasts for a few milliseconds, sometimes tens of seconds. Since 
floating-ip traffic is a subset of those cases, north-south connectivity breaks 
too.

  Steps to reproduce:
  - Use DVR routing mode
  - Configure at least one static route in the virtual router, whose next hop 
is NOT an address managed by Neutron (e.g. a physical interface on a VPN 
gateway; in our case 10.2.0.0/24 with next-hop 10.10.0.254)
  - Have an instance plugged into a Flat or VLAN network, use the virtual 
router as the default gateway
  - Try to reach a host inside the 

[Yahoo-eng-team] [Bug 1784155] Re: nova_placement service start not coordinated with api db sync on multiple controllers

2018-09-26 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/604693
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=601aa94a2fb9b0d1a884ddecc7a6a5e1f5f8686b
Submitter: Zuul
Branch:master

commit 601aa94a2fb9b0d1a884ddecc7a6a5e1f5f8686b
Author: Lee Yarwood 
Date:   Mon Sep 24 09:01:24 2018 +0100

placement: Always reset conf.CONF when starting the wsgi app

This ensures that options loaded during any prior run of the application
are dropped before being added again during init_application.

Change-Id: I41b5c7990d4d62a3a397f1686261f3fb7dc1a0be
Closes-bug: #1784155
(cherry picked from commit ac88b596c60f6c48c0e4c8e878a3ee70c4c2b756)
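
A minimal sketch of the approach in the commit above, assuming the usual
oslo.config global (the real change lives in nova's placement wsgi module):

    from oslo_config import cfg

    CONF = cfg.CONF

    def init_application():
        # Drop options and overrides registered by any previous initialization
        # in this process before loading them again.
        CONF.reset()
        # ... option registration and deploy.loadapp(CONF) would follow here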


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1784155

Title:
  nova_placement service start not coordinated with api db sync on
  multiple controllers

Status in OpenStack Compute (nova):
  Fix Released
Status in tripleo:
  Triaged

Bug description:
  On a loaded HA / galera environment using VMs I can fairly
  consistently reproduce a race condition where the nova_placement
  service is started on controllers where the database is not yet
  available.   The nova_placement service itself does not seem to be
  able to tolerate this condition upon startup and it then fails to
  recover.   Mitigation here can either involve synchronizing these
  conditions or getting nova-placement to be more resilient.

  The symptoms of overcloud deploy failure look like two out of three
  controllers having the nova_placement container in an unhealthy state:

  TASK [Debug output for task which failed: Check for unhealthy containers 
after step 3] ***
  Saturday 28 July 2018  10:19:29 + (0:00:00.663)   0:30:26.152 
* 
  fatal: [stack2-overcloud-controller-2]: FAILED! => {
  "failed_when_result": true, 
  
"outputs.stdout_lines|default([])|union(outputs.stderr_lines|default([]))": [
  "3597b92e9714
192.168.25.1:8787/tripleomaster/centos-binary-nova-placement-api:959e1d7f755ee681b6f23b498d262a9e4dd6326f_4cbb1814
   \"kolla_start\"   2 minutes ago   Up 2 minutes (unhealthy)   
nova_placement"
  ]
  }
  fatal: [stack2-overcloud-controller-1]: FAILED! => {
  "failed_when_result": true, 
  
"outputs.stdout_lines|default([])|union(outputs.stderr_lines|default([]))": [
  "322c5ea53895
192.168.25.1:8787/tripleomaster/centos-binary-nova-placement-api:959e1d7f755ee681b6f23b498d262a9e4dd6326f_4cbb1814
   \"kolla_start\"   2 minutes ago   Up 2 minutes (unhealthy)   
nova_placement"
  ]
  }
  ok: [stack2-overcloud-controller-0] => {
  "failed_when_result": false, 
  
"outputs.stdout_lines|default([])|union(outputs.stderr_lines|default([]))": []
  }
  ok: [stack2-overcloud-compute-0] => {
  "failed_when_result": false, 
  
"outputs.stdout_lines|default([])|union(outputs.stderr_lines|default([]))": []
  }

  NO MORE HOSTS LEFT
  *

  
  Inspecting placement_wsgi_error.log shows a stack trace indicating that the
  nova_placement database is missing the "traits" table:

  [Sat Jul 28 10:17:06.525018 2018] [:error] [pid 14] [remote 10.1.20.15:0] 
mod_wsgi (pid=14): Target WSGI script 
'/var/www/cgi-bin/nova/nova-placement-api' cannot be loaded as Python module.
  [Sat Jul 28 10:17:06.525067 2018] [:error] [pid 14] [remote 10.1.20.15:0] 
mod_wsgi (pid=14): Exception occurred processing WSGI script 
'/var/www/cgi-bin/nova/nova-placement-api'.
  [Sat Jul 28 10:17:06.525101 2018] [:error] [pid 14] [remote 10.1.20.15:0] 
Traceback (most recent call last):
  [Sat Jul 28 10:17:06.525124 2018] [:error] [pid 14] [remote 10.1.20.15:0]   
File "/var/www/cgi-bin/nova/nova-placement-api", line 54, in 
  [Sat Jul 28 10:17:06.525165 2018] [:error] [pid 14] [remote 10.1.20.15:0] 
application = init_application()
  [Sat Jul 28 10:17:06.525174 2018] [:error] [pid 14] [remote 10.1.20.15:0]   
File "/usr/lib/python2.7/site-packages/nova/api/openstack/placement/wsgi.py", 
line 88, in init_application
  [Sat Jul 28 10:17:06.525198 2018] [:error] [pid 14] [remote 10.1.20.15:0] 
return deploy.loadapp(conf.CONF)
  [Sat Jul 28 10:17:06.525205 2018] [:error] [pid 14] [remote 10.1.20.15:0]   
File "/usr/lib/python2.7/site-packages/nova/api/openstack/placement/deploy.py", 
line 111, in loadapp
  [Sat Jul 28 10:17:06.525300 2018] [:error] [pid 14] [remote 10.1.20.15:0] 
update_database()
  [Sat Jul 28 10:17:06.525310 2018] [:error] [pid 14] [remote 10.1.20.15:0]   
File "/usr/lib/python2.7/site-packages/nova/api/openstack/placement/deploy.py", 
line 92, in update_database
  [Sat Jul 28 10:17:06.525329 2018] [:error] [pid 14] [remote 10.1.20.15:0] 
resource_provider.ensure_trait_sync(ctx)
  [Sat Jul 28 

[Yahoo-eng-team] [Bug 1794564] [NEW] Apparmor denies /usr/bin/nova-compute access to /proc/loadavg on openstack hypervisor show

2018-09-26 Thread Drew Freiberger
Public bug reported:

On a Xenial-Queens cloud, I'm seeing the nova-compute 17.0.5-0ubuntu1~cloud0
package fail to run uptime due to a failure to read /proc/loadavg.

Kernel log entries:

[4726259.738185] audit: type=1400 audit(1537977315.312:59959): 
apparmor="DENIED" operation="open" profile="/usr/bin/nova-compute" 
name="/proc/loadavg" pid=1958757 comm="uptime" requested_mask="r" 
denied_mask="r" fsuid=64060 ouid=0
[4726265.862186] audit: type=1400 audit(1537977321.436:59960): 
apparmor="DENIED" operation="open" profile="/usr/bin/nova-compute" 
name="/proc/loadavg" pid=1959961 comm="uptime" requested_mask="r" 
denied_mask="r" fsuid=64060 ouid=0

This happens when running "openstack hypervisor show " with
AppArmor in enforce mode.

This read access to /proc/loadavg should be added to the AppArmor profiles
for the nova-compute package.
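
For reference, the denied access is an ordinary read of the load-average file,
which the uptime child process performs (plain illustration, not nova code):

    # This is the read that the AppArmor denial above blocks for the uptime
    # process spawned by nova-compute.
    with open('/proc/loadavg') as f:
        print(f.read().strip())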

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: canonical-bootstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1794564

Title:
  Apparmor denies /usr/bin/nova-compute access to /proc/loadavg on
  openstack hypervisor show

Status in OpenStack Compute (nova):
  New

Bug description:
  On a Xenial-Queens cloud, I'm seeing the nova-compute 17.0.5-0ubuntu1~cloud0
  package fail to run uptime due to a failure to read /proc/loadavg.

  Kernel log entries:

  [4726259.738185] audit: type=1400 audit(1537977315.312:59959): 
apparmor="DENIED" operation="open" profile="/usr/bin/nova-compute" 
name="/proc/loadavg" pid=1958757 comm="uptime" requested_mask="r" 
denied_mask="r" fsuid=64060 ouid=0
  [4726265.862186] audit: type=1400 audit(1537977321.436:59960): 
apparmor="DENIED" operation="open" profile="/usr/bin/nova-compute" 
name="/proc/loadavg" pid=1959961 comm="uptime" requested_mask="r" 
denied_mask="r" fsuid=64060 ouid=0

  This happens when running "openstack hypervisor show " with
  AppArmor in enforce mode.

  This read access to /proc/loadavg should be added to the AppArmor profiles
  for the nova-compute package.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1794564/+subscriptions



[Yahoo-eng-team] [Bug 1794558] Re: Tempest test AttachInterfacesUnderV243Test results in FixedIpNotFoundForSpecificInstance traceback in n-cpu logs

2018-09-26 Thread Matt Riedemann
This is the port delete request:

http://logs.openstack.org/98/604898/2/check/nova-
next/df58e8a/logs/screen-q-svc.txt.gz#_Sep_26_00_20_10_283226

Sep 26 00:20:10.283226 ubuntu-xenial-ovh-bhs1-0002284194 neutron-
server[24409]: DEBUG neutron.plugins.ml2.plugin [None req-8e9ab2d9-25b2
-452e-8fe1-f66a7b065773 tempest-AttachInterfacesUnderV243Test-1609135526
tempest-AttachInterfacesUnderV243Test-1609135526] Deleting port 2ac9b1d5
-ba6f-4cbb-a867-bdb61e176421 {{(pid=24781) _pre_delete_port
/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py:1535}}

I don't see req-8e9ab2d9-25b2-452e-8fe1-f66a7b065773 in the tempest logs
nor the n-cpu logs, so I'm not sure what is requesting the port to be
deleted.

Looks like it's probably nova because the instance is being deleted and
the vif is being unplugged:

http://logs.openstack.org/98/604898/2/check/nova-
next/df58e8a/logs/screen-n-cpu.txt.gz#_Sep_26_00_20_09_945815

Sep 26 00:20:09.945815 ubuntu-xenial-ovh-bhs1-0002284194 nova-
compute[521]: INFO os_vif [None req-5f7ba3e6-6d08-4457-970d-7b3ee9b4015a
tempest-AttachInterfacesUnderV243Test-1609135526 tempest-
AttachInterfacesUnderV243Test-1609135526] Successfully unplugged vif
VIFOpenVSwitch(active=False,address=fa:16:3e:ef:cf:c0,bridge_name='br-
int',has_traffic_filtering=True,id=2ac9b1d5-ba6f-
4cbb-a867-bdb61e176421,network=Network(fedfa4f2-f0a8-4649-833f-
48dbf3aa0f15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ac9b1d5-ba')

Oh, I get the failure now: tempest sends the removeFixedIp request, which is
asynchronous, but does not wait for it to complete. It then enters its
tearDown routine, which deletes the server (and port) before the removeFixedIp
request is processed by nova-compute, at which point nova-compute complains
about the fixed IP no longer being associated with the instance. So this is a
testing bug.
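
A rough sketch of the kind of wait the test needs before teardown (the helper
name is invented; servers_client stands in for tempest's compute servers
client):

    import time

    def wait_for_fixed_ip_removed(servers_client, server_id, ip, timeout=60):
        # Poll the server's addresses until the removed fixed IP disappears,
        # so teardown no longer races the asynchronous removeFixedIp action.
        deadline = time.time() + timeout
        while time.time() < deadline:
            addresses = servers_client.list_addresses(server_id)['addresses']
            ips = [a['addr'] for addrs in addresses.values() for a in addrs]
            if ip not in ips:
                return
            time.sleep(1)
        raise AssertionError('fixed IP %s still attached to server %s'
                             % (ip, server_id))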

** Also affects: tempest
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: Confirmed => Invalid

** Changed in: tempest
   Status: New => Triaged

** Changed in: tempest
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1794558

Title:
  Tempest test AttachInterfacesUnderV243Test results in
  FixedIpNotFoundForSpecificInstance traceback in n-cpu logs

Status in OpenStack Compute (nova):
  Invalid
Status in tempest:
  Triaged

Bug description:
  This new Tempest change was recently merged:

  https://review.openstack.org/#/c/587734/

  And results in a traceback in the n-cpu logs:

  http://logs.openstack.org/98/604898/2/check/nova-
  next/df58e8a/logs/screen-n-cpu.txt.gz?level=TRACE#_Sep_26_00_20_14_150429

  Sep 26 00:20:14.150429 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server [None req-7aa4027c-2550-461f-a293-9b89d479f760 
tempest-AttachInterfacesUnderV243Test-1609135526 
tempest-AttachInterfacesUnderV243Test-1609135526] Exception during message 
handling: FixedIpNotFoundForSpecificInstance: Instance 
609c0565-d193-445c-be8f-4667eecbf2f4 doesn't have fixed IP '10.1.0.4'.
  Sep 26 00:20:14.150678 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server Traceback (most recent call last):
  Sep 26 00:20:14.150905 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
163, in _process_incoming
  Sep 26 00:20:14.151124 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
  Sep 26 00:20:14.151352 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
265, in dispatch
  Sep 26 00:20:14.151572 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, 
ctxt, args)
  Sep 26 00:20:14.151792 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
194, in _do_dispatch
  Sep 26 00:20:14.152036 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
  Sep 26 00:20:14.152270 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/nova/nova/exception_wrapper.py", line 79, in wrapped
  Sep 26 00:20:14.152483 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server function_name, call_dict, binary, tb)
  Sep 26 00:20:14.152696 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", 

[Yahoo-eng-team] [Bug 1794558] [NEW] Tempest test AttachInterfacesUnderV243Test results in FixedIpNotFoundForSpecificInstance traceback in n-cpu logs

2018-09-26 Thread Matt Riedemann
Public bug reported:

This new Tempest change was recently merged:

https://review.openstack.org/#/c/587734/

And results in a traceback in the n-cpu logs:

http://logs.openstack.org/98/604898/2/check/nova-
next/df58e8a/logs/screen-n-cpu.txt.gz?level=TRACE#_Sep_26_00_20_14_150429

Sep 26 00:20:14.150429 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server [None req-7aa4027c-2550-461f-a293-9b89d479f760 
tempest-AttachInterfacesUnderV243Test-1609135526 
tempest-AttachInterfacesUnderV243Test-1609135526] Exception during message 
handling: FixedIpNotFoundForSpecificInstance: Instance 
609c0565-d193-445c-be8f-4667eecbf2f4 doesn't have fixed IP '10.1.0.4'.
Sep 26 00:20:14.150678 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Sep 26 00:20:14.150905 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
163, in _process_incoming
Sep 26 00:20:14.151124 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
Sep 26 00:20:14.151352 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
265, in dispatch
Sep 26 00:20:14.151572 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, 
ctxt, args)
Sep 26 00:20:14.151792 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
194, in _do_dispatch
Sep 26 00:20:14.152036 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
Sep 26 00:20:14.152270 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/nova/nova/exception_wrapper.py", line 79, in wrapped
Sep 26 00:20:14.152483 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server function_name, call_dict, binary, tb)
Sep 26 00:20:14.152696 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
Sep 26 00:20:14.152943 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server self.force_reraise()
Sep 26 00:20:14.153182 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
Sep 26 00:20:14.153408 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
Sep 26 00:20:14.153612 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/nova/nova/exception_wrapper.py", line 69, in wrapped
Sep 26 00:20:14.153843 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server return f(self, context, *args, **kw)
Sep 26 00:20:14.154056 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 216, in decorated_function
Sep 26 00:20:14.154284 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server kwargs['instance'], e, sys.exc_info())
Sep 26 00:20:14.154518 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
Sep 26 00:20:14.154737 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server self.force_reraise()
Sep 26 00:20:14.154975 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
Sep 26 00:20:14.155192 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
Sep 26 00:20:14.155405 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 204, in decorated_function
Sep 26 00:20:14.155635 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server return function(self, context, *args, 
**kwargs)
Sep 26 00:20:14.155860 ubuntu-xenial-ovh-bhs1-0002284194 nova-compute[521]: 
ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 4684, in 
remove_fixed_ip_from_instance
Sep 26 00:20:14.156128 

[Yahoo-eng-team] [Bug 1794552] [NEW] Flaskification broke ECP

2018-09-26 Thread Adam Young
Public bug reported:

The Federation integration (non-voting) tests for Python 3.5 are failing.


 ==
2018-09-26 06:26:21.371093 | primary | Failed 1 tests - output below:
2018-09-26 06:26:21.371172 | primary | ==
2018-09-26 06:26:21.371200 | primary |
2018-09-26 06:26:21.371360 | primary | 
keystone_tempest_plugin.tests.scenario.test_federated_authentication.TestSaml2EcpFederatedAuthentication.test_request_scoped_token
2018-09-26 06:26:21.371521 | primary | 
--
2018-09-26 06:26:21.371538 | primary |
2018-09-26 06:26:21.371576 | primary | Captured traceback:
2018-09-26 06:26:21.371614 | primary | ~~~
2018-09-26 06:26:21.371675 | primary | b'Traceback (most recent call last):'
2018-09-26 06:26:21.371900 | primary | b'  File 
"/opt/stack/new/tempest/.tox/tempest/lib/python3.5/site-packages/keystone_tempest_plugin/tests/scenario/test_federated_authentication.py",
 line 176, in test_request_scoped_token'
2018-09-26 06:26:21.371979 | primary | b"project_id=projects[0]['id'], 
token=token_id)"
2018-09-26 06:26:21.372155 | primary | b'  File 
"/opt/stack/new/tempest/tempest/lib/services/identity/v3/token_client.py", line 
140, in auth'
2018-09-26 06:26:21.372357 | primary | b'resp, body = 
self.post(self.auth_url, body=body)'
2018-09-26 06:26:21.372573 | primary | b'  File 
"/opt/stack/new/tempest/tempest/lib/common/rest_client.py", line 279, in post'
2018-09-26 06:26:21.372724 | primary | b"return self.request('POST', 
url, extra_headers, headers, body, chunked)"
2018-09-26 06:26:21.372881 | primary | b'  File 
"/opt/stack/new/tempest/tempest/lib/services/identity/v3/token_client.py", line 
172, in request'
2018-09-26 06:26:21.372961 | primary | b"'Unexpected status code 
{0}'.format(resp.status))"
2018-09-26 06:26:21.373034 | primary | 
b'tempest.lib.exceptions.IdentityError: Got identity error'
2018-09-26 06:26:21.373088 | primary | b'Details: Unexpected status code 
500'
2018-09-26 06:26:21.373108 | primary | b''


Looking in the Keystone logs shows an improper string replacement:

/OS-FEDERATION/identity_providers//protocols

See below


2018-09-26 06:26:16.800826 | primary | b'Body: b\'{"protocol": 
{"links": {"self": 
"http://149.202.181.254/identity/v3/OS-FEDERATION/identity_providers//protocols/mapped",
 "identity_provider": "http://149.202.181.254/identity/v3/testshib"}, 
"mapping_id": "608508b0cd09476289b2be05bcca98e3", "id": "mapped"}}\\n\''
2018-09-26 06:26:16.801021 | primary | b'2018-09-26 06:26:16,423 30292 INFO 
[tempest.lib.common.rest_client] Request 
(TestSaml2EcpFederatedAuthentication:test_request_scoped_token): 500 POST 
http://149.202.181.254/identity/v3/auth/tokens'
2018-09-26 06:26:16.801187 | primary | b"2018-09-26 06:26:16,424 30292 
DEBUG[tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 
'application/json', 'Accept': 'application/json'}"
2018-09-26 06:26:16.801241 | primary | b'Body: '
2018-09-26 06:26:16.801530 | primary | b"Response - Headers: 
{'connection': 'close', 'content-type': 'application/json', 'server': 
'Apache/2.4.18 (Ubuntu)', 'date': 'Wed, 26 Sep 2018 06:26:16 GMT', 
'x-openstack-request-id': 'req-2185af52-06fa-41c3-80eb-de3d5e667380', 
'content-location': 'http://149.202.181.254/identity/v3/auth/tokens', 
'content-length': '143', 'vary': 'X-Auth-Token', 'status': '500'}"
2018-09-26 06:26:16.801855 | primary | b'Body: b\'{"error": 
{"message": "An unexpected error prevented the server from fulfilling your 
request.", "title": "Internal Server Error", "code": 500}}\''

** Affects: keystone
 Importance: Undecided
 Assignee: Morgan Fainberg (mdrnstm)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Morgan Fainberg (mdrnstm)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1794552

Title:
  Flaskification broke ECP

Status in OpenStack Identity (keystone):
  New

Bug description:
  The Federation integration (non-voting) tests for Python 3.5 are failing.

  
   ==
  2018-09-26 06:26:21.371093 | primary | Failed 1 tests - output below:
  2018-09-26 06:26:21.371172 | primary | ==
  2018-09-26 06:26:21.371200 | primary |
  2018-09-26 06:26:21.371360 | primary | 
keystone_tempest_plugin.tests.scenario.test_federated_authentication.TestSaml2EcpFederatedAuthentication.test_request_scoped_token
  2018-09-26 06:26:21.371521 | primary | 
--
  2018-09-26 06:26:21.371538 | primary |
  2018-09-26 

[Yahoo-eng-team] [Bug 1794545] [NEW] PlacementAPIClient.update_resource_class wrong client call, missing argument

2018-09-26 Thread Rodolfo Alonso
Public bug reported:

"PlacementAPIClient.update_resource_class" is calling Placement client
"put" method with a missing argument, "data". In this call, "data"
should be None [1], but it's a positional argument and must be passed.

[1]
https://github.com/openstack/nova/blob/master/nova/api/openstack/placement/microversion.py#L41
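
A minimal sketch of the fix implied above (class layout and names assumed,
not the actual neutron code): the positional "data" argument has to be passed
explicitly, even when its value should be None.

    # Hypothetical sketch only.
    class PlacementAPIClient(object):
        def __init__(self, client):
            self._client = client

        def update_resource_class(self, resource_class_name):
            url = '/resource_classes/%s' % resource_class_name
            # "data" is positional on the underlying put(), so pass None explicitly.
            self._client.put(url, None)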

** Affects: neutron
 Importance: Undecided
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1794545

Title:
  PlacementAPIClient.update_resource_class wrong client call, missing
  argument

Status in neutron:
  New

Bug description:
  "PlacementAPIClient.update_resource_class" is calling Placement client
  "put" method with a missing argument, "data". In this call, "data"
  should be None [1], but it's a positional argument and must be passed.

  [1]
  
https://github.com/openstack/nova/blob/master/nova/api/openstack/placement/microversion.py#L41

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1794545/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1746709] Re: ngdetail for non-existing resource type or resource ID does not return 404

2018-09-26 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/580103
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=58af8067966539dd3417f113e7356298b48386e5
Submitter: Zuul
Branch:master

commit 58af8067966539dd3417f113e7356298b48386e5
Author: Shu Muto 
Date:   Wed Jul 4 15:53:03 2018 +0900

Move to '404' page when resource type or resource not found

When refresh or link directly to ngdetails without existing resource type
or ID for the resource, ngdetails view shows blank view.

This patch jump to 404 page in this situation.

Change-Id: Ie95132d0fdb1e7aae5e32faad752f92ff76b238a
Closes-Bug: #1746709


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1746709

Title:
  ngdetail for non-existing resource type or resource ID does not return
  404

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  After the fix for bug 1681627 was merged, opening a non-existing
  ngdetails page does not display a "not found" page.

  There are two patterns.

  (1) Non-existing resource ID for known resource type
example: /ngdetails/OS::Glance::Image/
-> Error message popup is shown but "not found" page is not displayed.

  (2) Unknown resource type
example /ngdetails/OS::UNKNOWN/
   -> No error message and no "not found" page is displayed; a blank page with
a breadcrumb menu is shown.

  In either case, showing a "not found" page would be more user-friendly, I believe.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1746709/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1794534] [NEW] first load of launch instance is missing tabs if LAUNCH_INSTANCE_DEFAULTS.enable_scheduler_hints is False

2018-09-26 Thread Albert Beauregard
Public bug reported:

If LAUNCH_INSTANCE_DEFAULTS.enable_scheduler_hints = False in
local_settings.py, the first time launch instance is used, some tabs are
missing.

This seems to be intermittent with Chrome browser, but reproducible
every time with MS Edge browser.

Closing the launch instance dialog, and opening it again almost always
results in proper display.

Full refresh of the instances page, and clicking the launch instance
dialog again reintroduces the issue.

It appears that this issue may have been introduced by the code for
"Choose a server group when booting a VM with NF Launch instance"
https://github.com/openstack/horizon/commit/cf91124d0c97ae80c565ba0b03a41aa2579b998c
#diff-925f277526c87432c7733408f990be2f

The server group code appears to share some of the scheduler hints code
and variables, but there is no LAUNCH_INSTANCE_DEFAULTS configuration
option to disable the server group tab.
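
For reference, a hedged sketch of the relevant local_settings.py snippet (key
names as used in this report; what is actually available depends on the
Horizon release):

    # local_settings.py (excerpt)
    LAUNCH_INSTANCE_DEFAULTS = {
        'enable_scheduler_hints': False,
        # Per this report there is no analogous key to disable the server
        # group tab, which shares the scheduler hints code and variables.
    }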

I've been able to reproduce this with a fresh install of RDO packages
for Queens on CentOS, with only required environment configuration
changes.

Not seeing any error messages logged in horizon logs, or javascript
errors in the browser.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "screenshot - missing tabs"
   
https://bugs.launchpad.net/bugs/1794534/+attachment/5193154/+files/missing-tabs.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1794534

Title:
  first load of launch instance is missing tabs if
  LAUNCH_INSTANCE_DEFAULTS.enable_scheduler_hints is False

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If LAUNCH_INSTANCE_DEFAULTS.enable_scheduler_hints = False in
  local_settings.py, the first time launch instance is used, some tabs
  are missing.

  This seems to be intermittent with Chrome browser, but reproducible
  every time with MS Edge browser.

  Closing the launch instance dialog, and opening it again almost always
  results in proper display.

  Full refresh of the instances page, and clicking the launch instance
  dialog again reintroduces the issue.

  It appears that this issue may have been introduced by the code for
  "Choose a server group when booting a VM with NF Launch instance"
  
https://github.com/openstack/horizon/commit/cf91124d0c97ae80c565ba0b03a41aa2579b998c
  #diff-925f277526c87432c7733408f990be2f

  The server group code appears to share some of the scheduler hints
  code and variables, but there is no LAUNCH_INSTANCE_DEFAULTS
  configuration option to disable the server group tab.

  I've been able to reproduce this with a fresh install of RDO packages
  for Queens on CentOS, with only required environment configuration
  changes.

  Not seeing any error messages logged in horizon logs, or javascript
  errors in the browser.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1794534/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1794535] [NEW] Consider all router ports for dvr arp updates

2018-09-26 Thread Christoph Manns
Public bug reported:

If you have a subnet with two routers and you create and then delete
a VM, a stale ARP entry may persist. If you then create another VM with
the same IP and the ARP update goes to the other router, that VM is
unreachable via the first router because its ARP entry is wrong.

A solution would be to update all router ports and not just one.
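
A rough sketch of that idea (hypothetical helper names, not neutron's actual
code):

    # Instead of sending the ARP update to a single router, notify every
    # router that has a port on the subnet.
    def notify_routers_arp_update(routers_on_subnet, ip_address, mac_address):
        for router in routers_on_subnet:
            # assumed per-router helper that programs the ARP entry
            router.update_arp_entry(ip_address, mac_address)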

** Affects: neutron
 Importance: Undecided
 Assignee: Christoph Manns (christoph-manns)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1794535

Title:
  Consider all router ports for dvr arp updates

Status in neutron:
  In Progress

Bug description:
  If you have a subnet with two routers and you create and then delete
  a VM, a stale ARP entry may persist. If you then create another VM with
  the same IP and the ARP update goes to the other router, that VM is
  unreachable via the first router because its ARP entry is wrong.

  A solution would be to update all router ports and not just one.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1794535/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1794527] [NEW] Allow domain creation with a specific ID

2018-09-26 Thread Adam Young
Public bug reported:

When keeping two Keystone servers in sync, but avoiding Database
replication, it is often necessary to hack the database to update the
Domain ID so that entries match.  Domain ID is then used for LDAP mapped
IDs, and if they don't match, the user IDs are different.  It should be
possible to add a domain with an explicit ID, so that the two servers
can match User IDs.

** Affects: keystone
 Importance: Wishlist
 Status: New

** Changed in: keystone
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1794527

Title:
  Allow domain creation with a specific ID

Status in OpenStack Identity (keystone):
  New

Bug description:
  When keeping two Keystone servers in sync, but avoiding Database
  replication, it is often necessary to hack the database to update the
  Domain ID so that entries match.  Domain ID is then used for LDAP
  mapped IDs, and if they don't match, the user IDs are different.  It
  should be possible to add a domain with an explicit ID, so that the
  two servers can match User IDs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1794527/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1794530] [NEW] Federation IDs hardcode UUIDs instead of configured id_generator

2018-09-26 Thread Adam Young
Public bug reported:

A Federated user gets an entry in the shadow-users table.  This entry
has a unique ID.  It is generated using a UUID.  This mirrors what we do
for LDAP, but in the LDAP case, the ID is generated from the domain ID +
the local id of the user (an attribute that uniquely identifies the user in
LDAP).  Thus, the LDAP code can be changed at config time, but the
Federated code can't.  It also means that Federated IDs cannot be kept
in sync between two keystone servers.
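
A hedged illustration of the contrast described above (plain Python, not
keystone's actual id_generator code): an ID derived from the domain ID plus
the local ID is reproducible on two servers, while a random UUID is not.

    import hashlib
    import uuid

    def ldap_style_id(domain_id, local_id):
        # deterministic: both servers derive the same public ID
        return hashlib.sha256(('%s:%s' % (domain_id, local_id)).encode('utf-8')).hexdigest()

    def federated_id():
        # random: each server mints its own UUID, so they cannot stay in sync
        return uuid.uuid4().hex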

** Affects: keystone
 Importance: Low
 Assignee: Adam Young (ayoung)
 Status: In Progress

** Changed in: keystone
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1794530

Title:
  Federation IDs hardcode UUIDs instead of configured id_generator

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  A Federated user gets an entry in the shadow-users table.  This entry
  has a unique ID.  It is generated using a UUID.  This mirrors what we
  do for LDAP, but in the LDAP case, the ID is generated from the domain
  ID + the local id of the user (an attribute that uniquely identifies the
  user in LDAP).  Thus, the LDAP code can be changed at config time, but the
  Federated code can't.  It also means that Federated IDs cannot be kept
  in sync between two keystone servers.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1794530/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1780770] Re: heat-dashboard installation guides

2018-09-26 Thread Ivan Kolodyazhny
** No longer affects: horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1780770

Title:
  heat-dashboard installation guides

Status in heat-dashboard package in Ubuntu:
  Invalid

Bug description:
  Hi all,

  I am following this documentation to install heat-dashboard
  https://docs.openstack.org/heat-dashboard/latest/install/index.html

  I'm having issues getting it to work. There are several parts of the
  document which are a little bit confusing; it would be great if
  someone could clarify:

  1. The part in the document which said

 "Configure the policy file for heat-dashboard in OpenStack Dashboard 
local_settings.py."
 Is this referring to the "local_settings.py" in /etc/openstack_dashboard/?

  2. The documentation said to execute the following commands:

  $ cd 
  $ python ./manage.py compilemessages

  $ cd 
  $ DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python manage.py 
collectstatic --noinput
  $ DJANGO_SETTINGS_MODULE=openstack_dashboard.settings python manage.py 
compress --force

  However, the "manage.py" files are missing in both .../horizon/
  (horizon-dir) and .../heat_dashboard/ (heat-dashboard-dir)

  Thanks in advance,

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/heat-dashboard/+bug/1780770/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1789512] Re: Can't refresh DNS zones pages, get HTTP 404

2018-09-26 Thread Ivan Kolodyazhny
** Project changed: horizon => designate-dashboard

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1789512

Title:
  Can't refresh DNS zones pages, get HTTP 404

Status in Designate Dashboard:
  New

Bug description:
  If I've gone to a Designate DNS zone (or any of its instances) within
Horizon and I then press F5, or send the link to someone else, all you get is an
HTTP 404.
  Example such URLs:
  * 
https://horizon.wikimedia.org/project/ngdetails/OS::Designate::Zone/7f302e0f-8e24-4378-a77b-916e5f55478f
  * 
https://horizon.wikimedia.org/project/ngdetails/OS::Designate::RecordSet/7f302e0f-8e24-4378-a77b-916e5f55478f/112128b1-a518-45de-b89f-f4e9aecdcca3

  Instead you should get the same content as you were just looking at.

  I believe that's Mitaka. I can ask for details of the installation
  from its administrators if that would be helpful.

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate-dashboard/+bug/1789512/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1768980] Re: Wrong Port in "Create OpenStack client environment scripts in keystone" document

2018-09-26 Thread Colleen Murphy
** Changed in: keystone
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1768980

Title:
  Wrong Port in "Create OpenStack client environment scripts in
  keystone" document

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [x] This doc is inaccurate in this way: __

  On the admin auth URL, it was supposed to be port 35357 instead of
  5000, as mentioned on the previous page. Even though it works on 5000 too,
  the script is not doing the same as the previous page.

  
  If you have a troubleshooting or support issue, use the following  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 13.0.1.dev8 on 2018-05-02 17:02
  SHA: 56d108858a2284516e1cba66a86883ea969755d4
  Source: 
https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/keystone-openrc-rdo.rst
  URL: 
https://docs.openstack.org/keystone/queens/install/keystone-openrc-rdo.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1768980/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750843] Re: pysaml2 version in global requirements must be updated to 4.5.0

2018-09-26 Thread Colleen Murphy
** Changed in: keystone
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1750843

Title:
  pysaml2 version in global requirements must be updated to 4.5.0

Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Global Requirements:
  Fix Released

Bug description:
  As per security vulnerability CVE-2016-10149, XML External Entity
  (XXE) vulnerability in PySAML2 4.4.0 and earlier allows remote
  attackers to read arbitrary files via a crafted SAML XML request or
  response and it has a CVSS v3 Base Score of 7.5.

  The above vulnerability has been fixed in version 4.5.0 as per
  https://github.com/rohe/pysaml2/issues/366. The latest version of
  pysaml2 (https://pypi.python.org/pypi/pysaml2/4.5.0) has this fix.
  However, the global requirements has the version set to < 4.0.3

  
https://github.com/openstack/requirements/blob/master/global-requirements.txt#L230
  pysaml2>=4.0.2,<4.0.3

  
https://github.com/openstack/requirements/blob/master/upper-constraints.txt#L347
  pysaml2===4.0.2

  The version of pysaml2 supported for OpenStack should be updated such
  that OpenStack deployments are not vulnerable to the above mentioned
  CVE.

  pysaml2 is used by OpenStack Keystone for identity Federation. This
  bug in itself is not a security vulnerability but not fixing this bug
  causes OpenStack deployments to be vulnerable.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1750843/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1794493] [NEW] The behaviors of creating and updating mapping cell0 differ

2018-09-26 Thread Scott Wulf
Public bug reported:

Description
===
The behaviors of creating and updating cell0 differ. Given that the parameters 
to nova-manage did not change, I would have expected no changes made to the 
cell_mappings table other than updating updated_at.

Steps to reproduce
==
NOTE: Be sure to replace  in 
api.conf with a valid db connection string.
OPTIONAL NOTE: In order to restart nova-api and run "nova service-list" 
successfully, replace 
"mysql+pymysql://nova:novapwd@vip/nova?ssl_ca=/etc/ssl/ca-bundle.pem" in 
api.conf with a valid db connection string.

controller1:~ # mkdir -p /etc/novabug
controller1:~ # cat << EOF > /etc/novabug/api.conf
[database]
backend = sqlalchemy
connection = mysql+pymysql://nova:novapwd@vip/nova?ssl_ca=/etc/ssl/ca-bundle.pem
[api_database]
connection = 
EOF
controller1:~ # cat << EOF > /etc/novabug/nova.conf
[DEFAULT]
transport_url = rabbit://rabbit:rabbitpwd@vip:5671/
EOF
controller1:~ # chown -R nova:nova /etc/novabug
controller1:~ # /usr/local/bin/nova-manage --config-file /etc/novabug/nova.conf 
--config-file /etc/novabug/api.conf cell_v2 map_cell0
controller1:~ # mysql -D nova_api -e "select * from cell_mappings"
+-+-++--+---++--+
| created_at  | updated_at  | id | uuid 
| name  | transport_url  | 
database_connection 
 |
+-+-++--+---++--+
| 2018-09-26 09:14:21 | NULL|  1 | 
---- | cell0 | none:/// 
  | 
mysql+pymysql://nova:novapwd@vip/nova_cell0?ssl_ca=/etc/ssl/ca-bundle.pem   
 |
+-+-++--+---++--+

controller1:~ # /usr/local/bin/nova-manage --config-file /etc/novabug/nova.conf 
--config-file /etc/novabug/api.conf cell_v2 update_cell --cell_uuid 
----
controller1:~ # mysql -D nova_api -e "select * from cell_mappings"
+-+-++--+---+++
| created_at  | updated_at  | id | uuid 
| name  | transport_url  | 
database_connection 
   |
+-+-++--+---+++
| 2018-09-26 09:14:21 | 2018-09-26 09:15:01 |  1 | 
---- | cell0 | 
rabbit://rabbit:rabbitpwd@vip:5671/| 
mysql+pymysql://nova:novapwd@vip/nova?ssl_ca=/etc/ssl/ca-bundle.pem 
   |
+-+-++--+---+++

Expected result
===
Given that the parameters to nova-manage did not change, I would have expected 
no changes made to the transport_url and database_connection values of cell0.

Actual result
=
After running "cell_v2 update_cell --cell_uuid 
----", the transport_url changed from the 
hardcoded "none:///" from CellV2Commands.map_cell0() to the actual transport 
URL from nova.conf and the database_connection value lost it's string "_cell0" 
previously added by the nested function 
CellV2Commands.map_cell0().cell0_default_connection(). These changes cause 
ripple effects like the duplication of all rows in the output of "nova 
service-list" [as well as in Horizon]. See below.

controller1:~ # service nova-api restart
controller1:~ # nova service-list
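
For clarity, a hedged sketch of the cell0 defaulting behaviour described above
(function name assumed, not nova's actual code): map_cell0 derives the cell0
database URL by appending "_cell0" to the main database name and hardcodes the
"none:///" transport, whereas update_cell re-reads both values from the
configuration, which is why they get overwritten.

    def cell0_default_connection(main_db_connection):
        # e.g. '.../nova?ssl_ca=...' -> '.../nova_cell0?ssl_ca=...'
        base, sep, query = main_db_connection.partition('?')
        return '%s_cell0%s%s' % (base, sep, query)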

[Yahoo-eng-team] [Bug 1794450] [NEW] When creating a server instance with an IPv4 and an IPv6 addresses, the IPv6 is not assigned

2018-09-26 Thread Federico Ressi
Public bug reported:

This is the expected behaviour:

Given:
  G1) A tenant network with two subnets (one for IPV4 and one for IPv6)
  G2) A port attached to the network with two fixed IPs (one for IPv4 and one 
for IPv6)
  G3) A server VM instance created with the port with its two fixed IPs

When:
  W1) Server instance is booted

Then:
  T1) The server instance receives the IPv4 address correctly from cloud-init on
the first interface and sets it up
  T2) The server instance receives the IPv6 address correctly from cloud-init on
the first interface and sets it up


The observed behavior differs in T2:
  T2) The server instance receives the wrong IPv6 address from cloud-init on the
first interface and sets up the wrong IP

These are the IPs of the server when it is created:

  "addresses": {
"tempest-loginable-619471459": [
{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:81:72:35",
 "version": 4, "addr": "10.1.0.35", "OS-EXT-IPS:type": "fixed"},
{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:81:72:35",
 "version": 6, "addr": "2003:0:0:2::2", "OS-EXT-IPS:type": "fixed"}
]
  }

These are the actual IPs assigned to the VM:

1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host 
   valid_lft forever preferred_lft forever
2: ens3:  mtu 1400 qdisc pfifo_fast state UP 
group default qlen 1000
link/ether fa:16:3e:81:72:35 brd ff:ff:ff:ff:ff:ff
inet 10.1.0.35/28 brd 10.1.0.47 scope global ens3
   valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe81:7235/64 scope link 
   valid_lft forever preferred_lft forever

These traces have been extracted from the instance log:

[  331.846900] cloud-init[1014]: Cloud-init v. 18.2 running 'init' at Tue, 25 
Sep 2018 19:13:18 +. Up 262.77 seconds.
[  331.986855] cloud-init[1014]: ci-info: 
+++Net device 
info
[  332.026098] cloud-init[1014]: ci-info: 
++--+--+-+---+---+
[  332.052076] cloud-init[1014]: ci-info: | Device |  Up  |   
Address|   Mask  | Scope | Hw-Address|
[  332.117348] cloud-init[1014]: ci-info: 
++--+--+-+---+---+
[  332.137446] cloud-init[1014]: ci-info: |  ens3  | True |  
10.1.0.35   | 255.255.255.240 |   .   | fa:16:3e:81:72:35 |
[  332.166719] cloud-init[1014]: ci-info: |  ens3  | True | 
fe80::f816:3eff:fe81:7235/64 |.|  link | fa:16:3e:81:72:35 |
[  332.197524] cloud-init[1014]: ci-info: |   lo   | True |  
127.0.0.1   |255.0.0.0|   .   | . |
[  332.223152] cloud-init[1014]: ci-info: |   lo   | True |   
::1/128|.|  host | . |
[  332.258243] cloud-init[1014]: ci-info: 
++--+--+-+---+---+
[  332.282758] cloud-init[1014]: ci-info: 
+++Route IPv4 info+++
[  332.318255] cloud-init[1014]: ci-info: 
+---+-+---+-+---+---+
[  332.343051] cloud-init[1014]: ci-info: | Route |   Destination   |  
Gateway  | Genmask | Interface | Flags |
[  332.369270] cloud-init[1014]: ci-info: 
+---+-+---+-+---+---+
[  332.394200] cloud-init[1014]: ci-info: |   0   | 0.0.0.0 | 
10.1.0.33 | 0.0.0.0 |ens3   |   UG  |
[  332.418340] cloud-init[1014]: ci-info: |   1   |10.1.0.32|  
0.0.0.0  | 255.255.255.240 |ens3   |   U   |
[  332.442581] cloud-init[1014]: ci-info: |   2   | 169.254.169.254 | 
10.1.0.33 | 255.255.255.255 |ens3   |  UGH  |
[  332.463209] cloud-init[1014]: ci-info: 
+---+-+---+-+---+---+

This bug has been seen while writing this test case for neutron-tempest-plugin:
  https://review.openstack.org/#/c/586040/21
  
https://review.openstack.org/#/c/586040/21/neutron_tempest_plugin/scenario/test_loginable.py

The logs of the problem can be found here:
  
http://logs.openstack.org/40/586040/21/check/neutron-tempest-plugin-scenario-linuxbridge/28228b0/
  
http://logs.openstack.org/40/586040/21/check/neutron-tempest-plugin-scenario-linuxbridge/28228b0/testr_results.html.gz

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, 

[Yahoo-eng-team] [Bug 1794424] [NEW] trunk: can not delete bound trunk for agent which allow create trunk on bound port

2018-09-26 Thread Le, Huifeng
Public bug reported:

High level description:
For agents such as Linux bridge, which allow creating a trunk on a bound port,
it is not possible to delete the trunk without unbinding the port (parent port)
first. This breaks the use case of keeping the parent port's communication
working while deleting the trunk.
The issue can be reproduced on the latest devstack.

Environment: configured to use the Linux bridge agent
Version: latest devstack

Steps to reproduce:
1. create network/subnet/trunk parent port/trunk sub port
openstack network create net0
openstack subnet create --network net0 --subnet-range 10.0.4.0/24 subnet0
openstack port create --network net0 trunk1_port
parent_trunk1_mac="$( openstack port show trunk1_port | awk '/ mac_address / { 
print $4 }' )"
openstack port create --network net1 --mac-address "$parent_trunk1_mac" 
trunk1_subport1
result: success

2. create VM (bound trunk parent port first)
openstack server create --flavor ds512M --image vlan-capable-image --nic 
port-id="$trunk1_port_id" --wait vm_trunk1
result: success

3. create trunk: openstack network trunk create --parent-port trunk1_port 
--subport port=trunk1_subport1,segmentation-type=vlan,segmentation-id=101 trunk1
result: success

4. delete trunk:
openstack network trunk delete trunk1
Expected output: success
Actual output: fail with message "Trunk trunk1 is currently in use"

** Affects: neutron
 Importance: Undecided
 Assignee: Le, Huifeng (hle2)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Le, Huifeng (hle2)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1794424

Title:
  trunk: can not delete bound trunk for agent which allow create trunk
  on bound port

Status in neutron:
  New

Bug description:
  High level description:
  For agents such as Linux bridge, which allow creating a trunk on a bound
port, it is not possible to delete the trunk without unbinding the port (parent
port) first. This breaks the use case of keeping the parent port's
communication working while deleting the trunk.
  The issue can be reproduced on the latest devstack.

  Environment: configured to use the Linux bridge agent
  Version: latest devstack

  Steps to reproduce:
  1. create network/subnet/trunk parent port/trunk sub port
  openstack network create net0
  openstack subnet create --network net0 --subnet-range 10.0.4.0/24 subnet0
  openstack port create --network net0 trunk1_port
  parent_trunk1_mac="$( openstack port show trunk1_port | awk '/ mac_address / 
{ print $4 }' )"
  openstack port create --network net1 --mac-address "$parent_trunk1_mac" 
trunk1_subport1
  result: success

  2. create VM (bound trunk parent port first)
  openstack server create --flavor ds512M --image vlan-capable-image --nic 
port-id="$trunk1_port_id" --wait vm_trunk1
  result: success

  3. create trunk: openstack network trunk create --parent-port trunk1_port 
--subport port=trunk1_subport1,segmentation-type=vlan,segmentation-id=101 trunk1
  result: success

  4. delete trunk:
  openstack network trunk delete trunk1
  Expected output: success
  Actual output: fail with message "Trunk trunk1 is currently in use"

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1794424/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1794421] [NEW] Add allowed address pair button is only visible to admin

2018-09-26 Thread Lars Erik Pedersen
Public bug reported:

In Queens (openstack-dashboard 3:13.0.1-0ubuntu1~cloud0), when logging
into Horizon in context of a _member_ in any project, the "Add allowed
address pair" button in "Network" -> "Networks" ->  ->
"Ports" ->  -> "Allowed address pairs" is not visible.

When accessing the same panel in context of a project where the user has
the admin role, the "add"-button is visible and functional.

I consider this to be a horizon/dashboard bug, because I am able to add
an allowed address pair to a port with the "neutron port-update" command
in context of a non-admin user.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1794421

Title:
  Add allowed address pair button is only visible to admin

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In Queens (openstack-dashboard 3:13.0.1-0ubuntu1~cloud0), when logging
  into Horizon in context of a _member_ in any project, the "Add allowed
  address pair" button in "Network" -> "Networks" -> 
  -> "Ports" ->  -> "Allowed address pairs" is not
  visible.

  When accessing the same panel in context of a project where the user
  has the admin role, the "add"-button is visible and functional.

  I consider this to be a horizon/dashboard bug, because I am able to
  add an allowed address pair to a port with the "neutron port-update"
  command in context of a non-admin user.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1794421/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp