[Yahoo-eng-team] [Bug 1759808] Re: Deprecate firewall_driver and use_neutron option
this doesn't deserve a bug report; I suggest you consider contributing to
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/delete-nova-network

** Changed in: nova
   Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1759808

Title:
  Deprecate firewall_driver and use_neutron option

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  In the Nova install doc
  https://docs.openstack.org/nova/queens/install/controller-install-ubuntu.html#install-and-configure-components
  the nova.conf [DEFAULT] section firewall_driver and use_neutron options
  have been deprecated for removal since 15.0.0, so they can be removed
  from the install doc.

  This bug tracker is for errors with the documentation; use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [ ] This doc is inaccurate in this way: __
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including
        example: input and output.

  If you have a troubleshooting or support issue, use the following
  resources:

  - Ask OpenStack: http://ask.openstack.org
  - The mailing list: http://lists.openstack.org
  - IRC: 'openstack' channel on Freenode

  ---
  Release: 17.0.2.dev33 on 2018-03-29 03:39
  SHA: aea284349f5efc64c645e4970de7774ff58cc77c
  Source: https://git.openstack.org/cgit/openstack/nova/tree/doc/source/install/controller-install-ubuntu.rst
  URL: https://docs.openstack.org/nova/queens/install/controller-install-ubuntu.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1759808/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
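For context, the deprecated options the install guide was still showing look roughly like the fragment below (a sketch based on the Queens-era guide; both settings have emitted deprecation warnings since Nova 15.0.0, which is why the doc can simply stop listing them):

```ini
[DEFAULT]
# Deprecated for removal since Nova 15.0.0 (Ocata): nova-network is
# being deleted, and neutron connectivity is configured via the
# [neutron] section instead of these flags.
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
```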
[Yahoo-eng-team] [Bug 1760017] [NEW] The 'test_supports_direct_io' method belongs to a wrong test class
Public bug reported:

The 'test_supports_direct_io' method belongs to the
'GetEndpointTestCase' class, but it should not be there because that
class is for the 'get_endpoint' method. The 'test_supports_direct_io'
method should be in a separate class.

https://github.com/openstack/nova/blob/942ed9b265b0f1fe4c237052030f2d73a3807b7a/nova/tests/unit/test_utils.py#L1337-L1436

** Affects: nova
   Importance: Undecided
   Assignee: Takashi NATSUME (natsume-takashi)
   Status: In Progress

** Tags: testing

** Changed in: nova
   Status: New => In Progress

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1760017

Title:
  The 'test_supports_direct_io' method belongs to a wrong test class

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The 'test_supports_direct_io' method belongs to the
  'GetEndpointTestCase' class, but it should not be there because that
  class is for the 'get_endpoint' method. The 'test_supports_direct_io'
  method should be in a separate class.

  https://github.com/openstack/nova/blob/942ed9b265b0f1fe4c237052030f2d73a3807b7a/nova/tests/unit/test_utils.py#L1337-L1436

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1760017/+subscriptions
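A minimal sketch of the proposed reorganization (the helper and its body are illustrative stand-ins, not the actual nova code): each TestCase class should cover exactly one unit under test, so the direct-I/O test gets its own class instead of riding along in GetEndpointTestCase.

```python
import unittest


def supports_direct_io(path):
    # Illustrative stand-in for the real nova helper under test.
    return isinstance(path, str) and path.startswith("/")


class GetEndpointTestCase(unittest.TestCase):
    """Holds only tests for get_endpoint()."""


class SupportsDirectIOTestCase(unittest.TestCase):
    """test_supports_direct_io moved out into its own class."""

    def test_supports_direct_io(self):
        self.assertTrue(supports_direct_io("/var/lib/nova"))
        self.assertFalse(supports_direct_io("relative/path"))


# Demonstrate that the relocated test still runs and passes.
result = unittest.TestResult()
SupportsDirectIOTestCase("test_supports_direct_io").run(result)
assert result.wasSuccessful()
```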
[Yahoo-eng-team] [Bug 1759935] Re: Documentation build broken with openstackdocstheme 1.20.0
Reviewed: https://review.openstack.org/557819
Committed: https://git.openstack.org/cgit/openstack/glance/commit/?id=6310052486764922149427b43f1b359ad5aab23b
Submitter: Zuul
Branch: master

commit 6310052486764922149427b43f1b359ad5aab23b
Author: Ben Nemec
Date: Thu Mar 29 19:51:39 2018 +

    Make eventlet monkey patching conform to best practices

    Per [1], eventlet monkey patching should happen as early as possible
    to avoid mismatches where a module was imported both before and
    after it was monkey patched. This is an exception to the import
    order rules that should maybe be more explicitly called out.

    In addition, partial monkey patching can be a problem, as shown in
    the discussion of the thread module from the same document. This
    seems to be contributing to the doc build problems that are
    occurring with the latest version of openstackdocstheme, so the
    partial monkey patching is also removed in favor of full patching.

    Change-Id: I0d2d9fb9f0b9d747ad1d955420f6ad129ebbfbcf
    1: http://specs.openstack.org/openstack/openstack-specs/specs/eventlet-best-practices.html#monkey-patching
    Closes-Bug: 1759935

** Changed in: glance
   Status: New => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1759935

Title:
  Documentation build broken with openstackdocstheme 1.20.0

Status in Glance:
  Fix Released

Bug description:
  The glance doc builds are all failing with the latest release of
  openstackdocstheme.

  The full traceback looks like this:

  Traceback (most recent call last):
    File "/opt/stack/glance/.tox/docs/lib/python2.7/site-packages/sphinx/setup_command.py", line 191, in run
      warningiserror=self.warning_is_error)
    File "/opt/stack/glance/.tox/docs/lib/python2.7/site-packages/sphinx/application.py", line 234, in __init__
      self._init_builder()
    File "/opt/stack/glance/.tox/docs/lib/python2.7/site-packages/sphinx/application.py", line 312, in _init_builder
      self.emit('builder-inited')
    File "/opt/stack/glance/.tox/docs/lib/python2.7/site-packages/sphinx/application.py", line 489, in emit
      return self.events.emit(event, self, *args)
    File "/opt/stack/glance/.tox/docs/lib/python2.7/site-packages/sphinx/events.py", line 79, in emit
      results.append(callback(*args))
    File "/opt/stack/glance/.tox/docs/lib/python2.7/site-packages/openstackdocstheme/ext.py", line 209, in _builder_inited
      version = packaging.get_version(project_name)
    File "/opt/stack/glance/.tox/docs/lib/python2.7/site-packages/pbr/packaging.py", line 740, in get_version
      version = _get_version_from_git(pre_version)
    File "/opt/stack/glance/.tox/docs/lib/python2.7/site-packages/pbr/packaging.py", line 665, in _get_version_from_git
      git_dir = git._run_git_functions()
    File "/opt/stack/glance/.tox/docs/lib/python2.7/site-packages/pbr/git.py", line 131, in _run_git_functions
      if _git_is_installed():
    File "/opt/stack/glance/.tox/docs/lib/python2.7/site-packages/pbr/git.py", line 83, in _git_is_installed
      _run_shell_command(['git', '--version'])
    File "/opt/stack/glance/.tox/docs/lib/python2.7/site-packages/pbr/git.py", line 50, in _run_shell_command
      out = output.communicate()
    File "/usr/lib64/python2.7/subprocess.py", line 479, in communicate
      return self._communicate(input)
    File "/usr/lib64/python2.7/subprocess.py", line 1098, in _communicate
      stdout, stderr = self._communicate_with_poll(input)
    File "/usr/lib64/python2.7/subprocess.py", line 1128, in _communicate_with_poll
      poller = select.poll()
  AttributeError: 'module' object has no attribute 'poll'

  Although you have to dig into the output file to actually find that.
  The console only shows the last frame:

  2018-03-29 18:29:54.962798 | ubuntu-xenial | Exception occurred:
  2018-03-29 18:29:54.966922 | ubuntu-xenial |   File "/usr/lib/python2.7/subprocess.py", line 1447, in _communicate_with_poll
  2018-03-29 18:29:54.967107 | ubuntu-xenial |     poller = select.poll()
  2018-03-29 18:29:54.967324 | ubuntu-xenial | AttributeError: 'module' object has no attribute 'poll'

  This seems to be related to eventlet monkey patching and was
  triggered by https://review.openstack.org/552069

  That patch added a pbr import to openstackdocstheme, which apparently
  is happening before the eventlet monkey patching occurs. Then, after
  monkey patching, pbr attempts to call the subprocess module which
  fails because poll() has been patched out from under it. At least,
  this is our working theory. Eventlet monkey patching is a complex
  beast so it's hard to say anything with absolute certainty.

  I believe the best solution is to tweak Glance's monkey patching
  method to better conform to http://specs.openstack.org/openstack
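The hazard behind this failure can be illustrated without eventlet at all (a generic sketch of the early-binding problem, not glance's actual code): a module that captured select.poll at import time keeps the original, while code that looks the name up again later sees whatever the monkey patch left behind, so patching after imports have happened produces exactly this kind of mismatch.

```python
import select

# subprocess-style early binding: grab a reference at import time.
original_poll = select.poll

# A later monkey patch (eventlet replaces/removes select.poll on
# patched platforms) changes what fresh lookups of the name resolve to.
def patched_poll():
    raise AttributeError("'module' object has no attribute 'poll'")

select.poll = patched_poll

# The early-bound reference and the current attribute now disagree,
# which is the mismatch the commit message warns about: patch before
# any other imports, or different parts of the process see different
# versions of the same API.
assert original_poll is not select.poll

# Restore so nothing else in this process is affected.
select.poll = original_poll
assert select.poll is original_poll
```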
[Yahoo-eng-team] [Bug 1760006] [NEW] Creating an image with a file makes the disk format display incorrectly
Public bug reported:

Django page: selecting an image file when creating an image makes it
impossible to select the correct disk format; one must first select a
different disk format and then select the correct one.

** Affects: horizon
   Importance: Undecided
   Assignee: Wangliangyu (wangly)
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1760006

Title:
  Creating an image with a file makes the disk format display
  incorrectly

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Django page: selecting an image file when creating an image makes it
  impossible to select the correct disk format; one must first select a
  different disk format and then select the correct one.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1760006/+subscriptions
[Yahoo-eng-team] [Bug 1759979] [NEW] xenapi: InstanceNotFound trace in detach_interface during instance delete
Public bug reported:

The xenserver CI n-cpu logs are full of InstanceNotFound tracebacks in
detach_interface during what appears to be instance delete, which
should be an OK situation for which we shouldn't log a traceback at
ERROR.

http://dd6b71949550285df7dc-dda4e480e005aaa13ec303551d2d8155.r49.cf1.rackcdn.com/37/557837/1/check/dsvm-tempest-neutron-network/86c316e/logs/screen-n-cpu.txt.gz

Mar 29 16:29:49.390491 dsvm-devstack-citrix-mia-nodepool-934537 nova-compute[18313]: ERROR nova.virt.xenapi.vmops [None req-84263977-247f-4c70-9ddf-003680e4eaf8 service nova] [instance: 4e642f5f-abf3-4db9-b630-a1bff1fd1cca] detach network interface 1de85994-b333-4ea1-946b-ce3b11c2f8c5 failed.: InstanceNotFound: Instance instance-0054 could not be found.
Mar 29 16:29:49.390900 dsvm-devstack-citrix-mia-nodepool-934537 nova-compute[18313]: ERROR nova.virt.xenapi.vmops [instance: 4e642f5f-abf3-4db9-b630-a1bff1fd1cca] Traceback (most recent call last):
Mar 29 16:29:49.391239 dsvm-devstack-citrix-mia-nodepool-934537 nova-compute[18313]: ERROR nova.virt.xenapi.vmops [instance: 4e642f5f-abf3-4db9-b630-a1bff1fd1cca]   File "/opt/stack/new/nova/nova/virt/xenapi/vmops.py", line 2696, in detach_interface
Mar 29 16:29:49.391522 dsvm-devstack-citrix-mia-nodepool-934537 nova-compute[18313]: ERROR nova.virt.xenapi.vmops [instance: 4e642f5f-abf3-4db9-b630-a1bff1fd1cca]     vm_ref = self._get_vm_opaque_ref(instance)
Mar 29 16:29:49.391802 dsvm-devstack-citrix-mia-nodepool-934537 nova-compute[18313]: ERROR nova.virt.xenapi.vmops [instance: 4e642f5f-abf3-4db9-b630-a1bff1fd1cca]   File "/opt/stack/new/nova/nova/virt/xenapi/vmops.py", line 983, in _get_vm_opaque_ref
Mar 29 16:29:49.392067 dsvm-devstack-citrix-mia-nodepool-934537 nova-compute[18313]: ERROR nova.virt.xenapi.vmops [instance: 4e642f5f-abf3-4db9-b630-a1bff1fd1cca]     raise exception.InstanceNotFound(instance_id=instance['name'])
Mar 29 16:29:49.392331 dsvm-devstack-citrix-mia-nodepool-934537 nova-compute[18313]: ERROR nova.virt.xenapi.vmops [instance: 4e642f5f-abf3-4db9-b630-a1bff1fd1cca] InstanceNotFound: Instance instance-0054 could not be found.
Mar 29 16:29:49.393133 dsvm-devstack-citrix-mia-nodepool-934537 nova-compute[18313]: ERROR nova.virt.xenapi.vmops [instance: 4e642f5f-abf3-4db9-b630-a1bff1fd1cca]

** Affects: nova
   Importance: Undecided
   Status: New

** Tags: xenserver

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1759979

Title:
  xenapi: InstanceNotFound trace in detach_interface during instance
  delete

Status in OpenStack Compute (nova):
  New

Bug description:
  The xenserver CI n-cpu logs are full of InstanceNotFound tracebacks
  in detach_interface during what appears to be instance delete, which
  should be an OK situation for which we shouldn't log a traceback at
  ERROR.

  http://dd6b71949550285df7dc-dda4e480e005aaa13ec303551d2d8155.r49.cf1.rackcdn.com/37/557837/1/check/dsvm-tempest-neutron-network/86c316e/logs/screen-n-cpu.txt.gz
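One plausible shape for a fix is sketched below (hedged: the exception class, helper, and signatures are stand-ins, not nova's actual code): treat InstanceNotFound during detach as a benign race with delete, log a warning without a traceback, and return instead of raising.

```python
import logging

LOG = logging.getLogger(__name__)


class InstanceNotFound(Exception):
    """Stand-in for nova.exception.InstanceNotFound."""


def detach_interface(get_vm_opaque_ref, instance_uuid, vif_id):
    # If the VM is already gone the instance is being deleted, so there
    # is nothing to detach: log a warning (no ERROR traceback) and bail.
    try:
        vm_ref = get_vm_opaque_ref(instance_uuid)
    except InstanceNotFound:
        LOG.warning("Instance %s is gone; skipping detach of interface %s",
                    instance_uuid, vif_id)
        return None
    # ...real code would now unplug the VIF from vm_ref...
    return vm_ref


def missing(_uuid):
    raise InstanceNotFound()


# Deleted instance: no exception escapes, nothing is detached.
assert detach_interface(missing, "4e642f5f", "1de85994") is None
# Live instance: the opaque ref is returned for the real detach work.
assert detach_interface(lambda u: "OpaqueRef:123", "4e642f5f", "vif") == "OpaqueRef:123"
```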
[Yahoo-eng-team] [Bug 1752152] Re: Attach Volume Fails with secure call to cinder
Reviewed: https://review.openstack.org/557508
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=20eaaee2334957eb8739ecca524a1c4aa9f246e9
Submitter: Zuul
Branch: master

commit 20eaaee2334957eb8739ecca524a1c4aa9f246e9
Author: Eric Fried
Date: Wed Mar 28 15:45:26 2018 -0500

    Use ksa session for cinder microversion check

    [1] added a method to validate availability of a desired version of
    the cinder API. This method called into
    cinderclient.client.get_highest_client_server_version to
    (unsurprisingly) discover the highest available version to compare
    against. However, that routine uses a raw requests.get to access
    the version document from the server. This breaks when the endpoint
    URL is using HTTPS, because nothing sets up the cert info for that
    call.

    With this change, we work around the issue by duplicating the logic
    from get_highest_client_server_version, but doing the version
    discovery via the same keystoneauth1 session that's configured for
    use with the client itself, thus inheriting any SSL configuration
    as appropriate.

    [1] https://review.openstack.org/#/c/469579/

    Change-Id: I4de355195281009a5979710d7f14ae8ea11d10e0
    Closes-Bug: #1752152

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1752152

Title:
  Attach Volume Fails with secure call to cinder

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  In Progress
Status in python-cinderclient:
  Invalid

Bug description:
  It is found that when the cinder endpoint is configured to use https,
  the attach volume flow fails with the stack trace seen below (seen in
  the nova api log) because it fails to make a secure call from nova to
  cinder. Secure calls perform certificate validation, and in this
  particular flow certificate validation is completely skipped.

    File "/usr/lib/python2.7/site-packages/nova/compute/api.py", line 3971, in attach_volume
  2018-02-27 08:16:51.338 1324 ERROR     cinder.is_microversion_supported(context, '3.44')
  2018-02-27 08:16:51.338 1324 ERROR   File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 138, in is_microversion_supported
  2018-02-27 08:16:51.338 1324 ERROR     _check_microversion(url, microversion)
  2018-02-27 08:16:51.338 1324 ERROR   File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 86, in _check_microversion
  2018-02-27 08:16:51.338 1324 ERROR     max_api_version = cinder_client.get_highest_client_server_version(url)
  2018-02-27 08:16:51.338 1324 ERROR   File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 126, in get_highest_client_server_version
  2018-02-27 08:16:51.338 1324 ERROR     min_server, max_server = get_server_version(url)
  2018-02-27 08:16:51.338 1324 ERROR   File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 109, in get_server_version
  2018-02-27 08:16:51.338 1324 ERROR     response = requests.get(version_url)
  2018-02-27 08:16:51.338 1324 ERROR   File "/usr/lib/python2.7/site-packages/requests/api.py", line 72, in get
  2018-02-27 08:16:51.338 1324 ERROR     return request('get', url, params=params, **kwargs)
  2018-02-27 08:16:51.338 1324 ERROR   File "/usr/lib/python2.7/site-packages/requests/api.py", line 58, in request
  2018-02-27 08:16:51.338 1324 ERROR     return session.request(method=method, url=url, **kwargs)
  2018-02-27 08:16:51.338 1324 ERROR   File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 502, in request
  2018-02-27 08:16:51.338 1324 ERROR     resp = self.send(prep, **send_kwargs)
  2018-02-27 08:16:51.338 1324 ERROR   File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 612, in send
  2018-02-27 08:16:51.338 1324 ERROR     r = adapter.send(request, **kwargs)
  2018-02-27 08:16:51.338 1324 ERROR   File "/usr/lib/python2.7/site-packages/requests/adapters.py", line 504, in send
  2018-02-27 08:16:51.338 1324 ERROR     raise ConnectionError(e, request=request)
  2018-02-27 08:16:51.338 1324 ERROR ConnectionError: HTTPSConnectionPool(host='ipx-x-x-x.xxx.xxx.xxx.com', port=9000): Max retries exceeded with url: / (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'SSL3_GET_SERVER_CERTIFICATE', 'certificate verify failed')],)",),))

  This is a regression bug introduced as part of changeset
  https://review.openstack.org/#/c/469579/, which was merged way back
  in June 2017. As part of this changeset, a new function, namely
  _check_microversion, was introduced, which then makes a cinderclient
  call, which finally makes a cinder https REST API call without
  passing the certificate. This leads to the problem listed above.
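The essence of the fix can be sketched generically (all class and function names below are illustrative stand-ins; the real change uses nova's configured keystoneauth1 session): route version discovery through the same session object that already carries the TLS configuration, instead of issuing a bare requests.get() that knows nothing about the CA bundle.

```python
# Hedged sketch: a session object that carries SSL configuration, and a
# version-discovery helper that goes through it rather than around it.

class ConfiguredSession:
    """Stand-in for a keystoneauth1 session holding cert settings."""

    def __init__(self, verify="/etc/ssl/certs/ca-bundle.crt"):
        self.verify = verify
        self.requests = []

    def get(self, url):
        # A real session would hand self.verify to the HTTP layer, so
        # HTTPS endpoints validate against the configured CA bundle.
        self.requests.append((url, self.verify))
        # Canned version document shaped like cinder's '/' response.
        return {"versions": [{"min_version": "3.0", "max_version": "3.44"}]}


def get_highest_server_version(session, endpoint):
    # Discovery inherits the session's cert config instead of using a
    # raw requests.get(endpoint) with no verify information.
    doc = session.get(endpoint)
    return max(v["max_version"] for v in doc["versions"])


session = ConfiguredSession()
assert get_highest_server_version(session, "https://cinder.example/") == "3.44"
# The discovery request carried the CA bundle along with it.
assert session.requests[0][1] == "/etc/ssl/certs/ca-bundle.crt"
```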
[Yahoo-eng-team] [Bug 1759971] [NEW] [dvr][fast-exit] a route to a tenant network does not get created in fip namespace if an external network is attached after a tenant network has been attached
Public bug reported:

Overall, similar scenario to
https://bugs.launchpad.net/neutron/+bug/1759956 but a different
problem.

OpenStack Queens from UCA (xenial, GA kernel, deployed via OpenStack
charms), 2 external subnets (one routed provider network), 1 tenant
subnet, all subnets in the same address scope to trigger "fast exit"
logic. Tenant subnet cidr: 192.168.100.0/24

openstack address scope create dev
openstack subnet pool create --address-scope dev --pool-prefix 10.232.40.0/21 --pool-prefix 10.232.16.0/21 dev
openstack subnet pool create --address-scope dev --pool-prefix 192.168.100.0/24 tenant
openstack network create --external --provider-physical-network physnet1 --provider-network-type flat pubnet
openstack network segment set --name segment1 d8391bfb-4466-4a45-972c-45ffcec9f6bc
openstack network segment create --physical-network physnet2 --network-type flat --network pubnet segment2
openstack subnet create --no-dhcp --subnet-pool dev --subnet-range 10.232.16.0/21 --allocation-pool start=10.232.17.0,end=10.232.17.255 --dns-nameserver 10.232.36.101 --ip-version 4 --network pubnet --network-segment segment1 pubsubnetl1
openstack subnet create --gateway 10.232.40.100 --no-dhcp --subnet-pool dev --subnet-range 10.232.40.0/21 --allocation-pool start=10.232.41.0,end=10.232.41.255 --dns-nameserver 10.232.36.101 --ip-version 4 --network pubnet --network-segment segment2 pubsubnetl2
openstack network create --internal --provider-network-type vxlan tenantnet
openstack subnet create --dhcp --ip-version 4 --subnet-range 192.168.100.0/24 --subnet-pool tenant --dns-nameserver 10.232.36.101 --network tenantnet tenantsubnet

# ---
# Works in this order, when an external network is attached first
openstack router create --disable --no-ha --distributed pubrouter
openstack router set --disable-snat --external-gateway pubnet --enable pubrouter
openstack router add subnet pubrouter tenantsubnet

2018-03-29 23:30:48.933 2050638 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'fip-d0f008fc-dc45-4237-9ce0-a9e1977735eb', 'ip', '-4', 'route', 'replace', '192.168.100.0/24', 'via', '169.254.106.114', 'dev', 'fpr-09fd1424-7'] create_process /usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py:92

# --
# Doesn't work the other way around, as a fip namespace does not get
# created before a tenant network is attached
openstack router create --disable --no-ha --distributed pubrouter
openstack router add subnet pubrouter tenantsubnet
openstack router set --disable-snat --external-gateway pubnet --enable pubrouter

# to "fix" this we need to re-trigger the right code path
openstack router remove subnet pubrouter tenantsubnet
openstack router add subnet pubrouter tenantsubnet

The right code path seems to be in dvr_local_router.py:

https://github.com/openstack/neutron/blob/stable/queens/neutron/agent/l3/dvr_local_router.py#L413
https://github.com/openstack/neutron/blob/stable/queens/neutron/agent/l3/dvr_local_router.py#L623-L632

Based on a quick grep, nothing in dvr_fip_ns.py calls
internal_network_added, so this never gets triggered.

neutron/agent/l3/dvr_edge_ha_router.py|40|    def internal_network_added(self, port):
neutron/agent/l3/dvr_edge_ha_router.py|41|        # Call RouterInfo's internal_network_added (Plugs the port, adds IP)
neutron/agent/l3/dvr_edge_ha_router.py|42|        router_info.RouterInfo.internal_network_added(self, port)
neutron/agent/l3/dvr_edge_router.py|96|    def internal_network_added(self, port):
neutron/agent/l3/dvr_edge_router.py|97|        super(DvrEdgeRouter, self).internal_network_added(port)
neutron/agent/l3/dvr_edge_router.py|110|            self._internal_network_added(
neutron/agent/l3/dvr_edge_router.py|142|        self._internal_network_added(
neutron/agent/l3/dvr_local_router.py|398|    def internal_network_added(self, port):
neutron/agent/l3/dvr_local_router.py|399|        super(DvrLocalRouter, self).internal_network_added(port)
neutron/agent/l3/ha_router.py|331|    def internal_network_added(self, port):
neutron/agent/l3/router_info.py|441|    def _internal_network_added(self, ns_name, network_id, port_id,
neutron/agent/l3/router_info.py|458|    def internal_network_added(self, port):
neutron/agent/l3/router_info.py|466|        self._internal_network_added(self.ns_name,
neutron/agent/l3/router_info.py|556|            self.internal_network_added(p)

https://github.com/openstack/neutron/blob/stable/queens/neutron/agent/l3/dvr_fip_ns.py

** Affects: neutron
   Importance: Undecided
   Status: New

** Tags: cpe-onsite

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1759971

Title:
  [dvr][fast-exit] a route to a tenant network does not get created in
  fip namespace if an external network is attached after a tenant
  network has been attached

Status in neutron:
  New

Bug description:
  Overall, similar scenario to
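The ordering problem can be reduced to a toy model (purely illustrative; the class and method names echo but do not reproduce neutron's agent code): routes are only programmed from internal_network_added(), so internal ports attached before the fip namespace exists never get a route, and one plausible remedy is replaying already-attached internal networks when the gateway arrives.

```python
class ToyDvrRouter:
    """Toy model of the route-programming order dependency."""

    def __init__(self):
        self.fip_ns = None          # fip namespace not created yet
        self.internal_cidrs = []
        self.fip_routes = set()     # stands in for 'ip route replace ...'

    def internal_network_added(self, cidr):
        self.internal_cidrs.append(cidr)
        if self.fip_ns is not None:
            # Route only gets programmed if the fip namespace exists.
            self.fip_routes.add(cidr)

    def external_gateway_added(self):
        self.fip_ns = "fip-namespace"
        # The piece missing in the buggy order: replay already-attached
        # internal networks so their routes land in the fip namespace.
        for cidr in self.internal_cidrs:
            self.fip_routes.add(cidr)


# Subnet first, gateway second - the order that fails in the report.
r = ToyDvrRouter()
r.internal_network_added("192.168.100.0/24")
r.external_gateway_added()
assert "192.168.100.0/24" in r.fip_routes
```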
[Yahoo-eng-team] [Bug 1759863] Re: placement functional tests can collide when synchronising the traits table
Reviewed: https://review.openstack.org/557722
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=bdf06d18aa4ad81e43bf0d1a22c851b43c4cd1c5
Submitter: Zuul
Branch: master

commit bdf06d18aa4ad81e43bf0d1a22c851b43c4cd1c5
Author: Chris Dent
Date: Thu Mar 29 15:10:19 2018 +0100

    [placement] Fix bad management of _TRAITS_SYNCED flag

    Placement functional tests could, depending on run order, be unable
    to synchronise the os-traits because a previous test or this test
    has failed to reset the _TRAITS_SYNCED flag. This change fixes it
    so the tests will always reset the flag at both setUp and tearDown.

    In the process a nearby misleading comment which says a database is
    not being used is corrected.

    Change-Id: I595be7bca2c1bde86651b126ce501286b301d272
    Closes-Bug: #1759863

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1759863

Title:
  placement functional tests can collide when synchronising the traits
  table

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The placement functional tests make use of the traits table. At the
  start of most requests to the objects in resource_provider.py, some
  code is run to make sure that traits in the os-traits library are
  synchronised to the table. A global flag is present which says "I've
  already synchronised". Functional tests are responsible for making
  sure this is in the right state.

  It turns out that this management was not complete, and after a
  recent move of db/test_resource_provider.py and
  db/test_allocation_candidates.py within the placement hierarchy, the
  delicate balance of how tests are split among processes by stestr was
  upset. This leads to test failures where no traits are in the traits
  table.

  The fix is to ensure that functional tests manage the related db
  flags both during setup and teardown and not rely solely on one or
  the other (as people can easily get it wrong).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1759863/+subscriptions
[Yahoo-eng-team] [Bug 1736224] Re: ram_allocation_ratio > 1 causes RAMFilter to incorrectly decide on ability to spawn instance
Sorry, the release was Mitaka, so this is a duplicate bug. I've changed
the status to Invalid.

** Changed in: nova
   Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1736224

Title:
  ram_allocation_ratio > 1 causes RAMFilter to incorrectly decide on
  ability to spawn instance

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  The problem is inside this function:
  https://github.com/openstack/nova/blob/master/nova/scheduler/filters/ram_filter.py#L33

  Probably related to https://bugs.launchpad.net/nova/+bug/1635367

  The problem is that the RAMFilter calculations do not take into
  account VM RAM subscription. This causes the scheduler to try
  spawning VMs on hosts which are fully oversubscribed while still
  having some free physical RAM - this is possible due to KSM, for
  example.

  Consider this scenario: ram_allocation_ratio = 1.5. Some compute host
  has 10GB physical RAM and 15 1GB VMs already spawned on it. At the
  same time, there is still 2GB free physical RAM on the host, as seen
  in "free -m" and in nova hypervisor-show. A new VM is scheduled and
  RAMFilter is executed:

    requested_ram = spec_obj.memory_mb = 1GB
    free_ram_mb = host_state.free_ram_mb = 2GB
    # this is the actual free RAM on the host, which does not properly
    # reflect VM subscription
    total_usable_ram_mb = host_state.total_usable_ram_mb = 10GB

  Then the main check which is performed is:

    memory_mb_limit = total_usable_ram_mb * ram_allocation_ratio = 15GB
    used_ram_mb = total_usable_ram_mb - free_ram_mb = 10 - 2 = 8GB
    usable_ram = memory_mb_limit - used_ram_mb = 15 - 8 = 7GB
    # incorrect assumption that the host has 7GB usable RAM left

  Unless I have some incorrect understanding, the logic here is broken.
  At first I tried to make up a quick fix, but then realized the VM
  subscription RAM value (the sum of RAM of all VMs scheduled on a
  host) is not present in this code, so a proper calculation cannot be
  done. It may be available inside the host_state object; I have not
  checked yet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1736224/+subscriptions
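The report's arithmetic can be checked directly. The allocation-based variant at the end is the reporter's proposed contrast, not code that exists in the filter: with the same inputs, free-physical-RAM math says 7GB is still usable, while allocation-based math correctly reports the host as full.

```python
# Numbers from the scenario in the bug description (all values in MB).
ram_allocation_ratio = 1.5
total_usable_ram_mb = 10 * 1024   # 10GB physical RAM
free_ram_mb = 2 * 1024            # 2GB still free per "free -m" (KSM)
vm_allocated_mb = 15 * 1024       # 15 VMs x 1GB already placed

# RAMFilter's calculation, based on physical free RAM:
memory_mb_limit = total_usable_ram_mb * ram_allocation_ratio  # 15GB
used_ram_mb = total_usable_ram_mb - free_ram_mb               # 8GB
usable_ram = memory_mb_limit - used_ram_mb                    # 7GB, wrong

# Allocation-based calculation the reporter argues for (hypothetical):
usable_by_allocation = memory_mb_limit - vm_allocated_mb      # 0GB, full

assert usable_ram == 7 * 1024
assert usable_by_allocation == 0
```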
[Yahoo-eng-team] [Bug 1759959] [NEW] api-ref: documentation of address scope extension is missing
Public bug reported:

There is an address scope API extension defined in neutron-lib,
https://github.com/openstack/neutron-lib/blob/master/neutron_lib/api/definitions/address_scope.py
, but its documentation seems to be missing in the API reference.

** Affects: neutron
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1759959

Title:
  api-ref: documentation of address scope extension is missing

Status in neutron:
  New

Bug description:
  There is an address scope API extension defined in neutron-lib,
  https://github.com/openstack/neutron-lib/blob/master/neutron_lib/api/definitions/address_scope.py
  , but its documentation seems to be missing in the API reference.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1759959/+subscriptions
[Yahoo-eng-team] [Bug 1759956] [NEW] [dvr][fast-exit] incorrect policy rules get deleted when a distributed router has ports on multiple tenant networks
Public bug reported:

TL;DR: `ip -4 rule del priority <prio> table <table> type unicast` will delete the first matching rule it encounters: if there are two rules with the same priority it will just kill the first one it finds.

The original setup is described here:
https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1759918

OpenStack Queens from UCA (xenial, GA kernel, deployed via OpenStack charms), 2 external subnets (one routed provider network), 2 tenant subnets all in the same address scope to trigger "fast exit". 2 tenant networks attached (subnets 192.168.100.0/24 and 192.168.200.0/24) to a DVR:

# 2 rules as expected
ip netns exec qrouter-4f9ca9ef-303b-4082-abbc-e50782d9b800 ip rule
0:      from all lookup local
32766:  from all lookup main
32767:  from all lookup default
8:      from 192.168.100.0/24 lookup 16
8:      from 192.168.200.0/24 lookup 16

# remove 192.168.200.0/24 sometimes deletes an incorrect policy rule
openstack router remove subnet pubrouter othertenantsubnet

# ip route del contains the cidr
2018-03-29 20:09:52.946 2083594 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'fip-d0f008fc-dc45-4237-9ce0-a9e1977735eb', 'ip', '-4', 'route', 'del', '192.168.200.0/24', 'via', '169.254.93.94', 'dev', 'fpr-4f9ca9ef-3'] create_process /usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py:92

# ip rule delete is not that specific
2018-03-29 20:09:53.195 2083594 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'qrouter-4f9ca9ef-303b-4082-abbc-e50782d9b800', 'ip', '-4', 'rule', 'del', 'priority', '8', 'table', '16', 'type', 'unicast'] create_process /usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py:92

2018-03-29 20:15:59.210 2083594 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'qrouter-4f9ca9ef-303b-4082-abbc-e50782d9b800', 'ip', '-4', 'rule', 'show'] create_process /usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py:92

2018-03-29 20:15:59.455 2083594 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'qrouter-4f9ca9ef-303b-4082-abbc-e50782d9b800', 'ip', '-4', 'rule', 'add', 'from', '192.168.100.0/24', 'priority', '8', 'table', '16', 'type', 'unicast'] create_process /usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py:92

ip netns exec qrouter-4f9ca9ef-303b-4082-abbc-e50782d9b800 ip rule
0:      from all lookup local
32766:  from all lookup main
32767:  from all lookup default
8:      from 192.168.100.0/24 lookup 16
8:      from 192.168.200.0/24 lookup 16

# try to delete a rule manually to see what is going on
ip netns exec qrouter-4f9ca9ef-303b-4082-abbc-e50782d9b800 ip rule ; ip netns exec qrouter-4f9ca9ef-303b-4082-abbc-e50782d9b800 ip -4 rule del priority 8 table 16 type unicast ; ip netns exec qrouter-4f9ca9ef-303b-4082-abbc-e50782d9b800 ip rule
0:      from all lookup local
32766:  from all lookup main
32767:  from all lookup default
8:      from 192.168.100.0/24 lookup 16
8:      from 192.168.200.0/24 lookup 16
0:      from all lookup local
32766:  from all lookup main
32767:  from all lookup default
8:      from 192.168.200.0/24 lookup 16
# ^^ 192.168.100.0/24 rule got deleted instead of 192.168.200.0/24

# add the rule back manually
ip netns exec qrouter-4f9ca9ef-303b-4082-abbc-e50782d9b800 ip rule add from 192.168.100.0/24 priority 8 table 16 type unicast

# different order now - 192.168.200.0/24 is first
ip netns exec qrouter-4f9ca9ef-303b-4082-abbc-e50782d9b800 ip rule
0:      from all lookup local
32766:  from all lookup main
32767:  from all lookup default
8:      from 192.168.200.0/24 lookup 16
8:      from 192.168.100.0/24 lookup 16

# now 192.168.200.0/24 got deleted because it was first to match
ip netns exec qrouter-4f9ca9ef-303b-4082-abbc-e50782d9b800 ip rule ; ip netns exec qrouter-4f9ca9ef-303b-4082-abbc-e50782d9b800 ip -4 rule del priority 8 table 16 type unicast ; ip netns exec qrouter-4f9ca9ef-303b-4082-abbc-e50782d9b800 ip rule
0:      from all lookup local
32766:  from all lookup main
32767:  from all lookup default
8:      from 192.168.200.0/24 lookup 16
8:      from 192.168.100.0/24 lookup 16
0:      from all lookup local
32766:  from all lookup main
32767:  from all lookup default
8:      from 192.168.100.0/24 lookup 16

Code:

_dvr_internal_network_removed
https://github.com/openstack/neutron/blob/stable/queens/neutron/agent/l3/dvr_local_router.py#L431-L443

_delete_interface_routing_rule_in_router_ns
https://github.com/openstack/neutron/blob/stable/queens/neutron/agent/l3/dvr_local_router.py#L642-L648

    ip_rule = ip_lib.IPRule(namespace=self.ns_name)
    for subnet in
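The first-match behaviour in the transcript above can be modelled in a few lines of Python. This is a toy simulation of the kernel's rule matching, not neutron code; it only illustrates why deleting by priority/table alone is ambiguous and why passing the source CIDR (which the `rule add` path already has) makes the deletion exact:

```python
def del_rule(rules, **criteria):
    """Delete and return the first rule matching every given criterion,
    the way the kernel handles 'ip rule del' when several rules qualify."""
    for i, rule in enumerate(rules):
        if all(rule.get(k) == v for k, v in criteria.items()):
            del rules[i]
            return rule
    raise LookupError("no matching rule")

rules = [
    {'priority': 8, 'table': 16, 'src': '192.168.100.0/24'},
    {'priority': 8, 'table': 16, 'src': '192.168.200.0/24'},
]

# Deleting by priority/table alone removes whichever rule happens to be
# first -- here the 192.168.100.0/24 rule, not the intended one.
removed = del_rule(rules, priority=8, table=16)
assert removed['src'] == '192.168.100.0/24'

# Including the source CIDR makes the deletion unambiguous.
rules = [
    {'priority': 8, 'table': 16, 'src': '192.168.100.0/24'},
    {'priority': 8, 'table': 16, 'src': '192.168.200.0/24'},
]
removed = del_rule(rules, priority=8, table=16, src='192.168.200.0/24')
assert removed['src'] == '192.168.200.0/24'
assert rules == [{'priority': 8, 'table': 16, 'src': '192.168.100.0/24'}]
```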
[Yahoo-eng-team] [Bug 1746674] Re: emulator_threads_policy=isolate doesn't work with multi numa node
** Also affects: nova/pike Importance: Undecided Status: New ** Also affects: nova/queens Importance: Undecided Status: New ** Changed in: nova/pike Status: New => In Progress ** Changed in: nova/pike Importance: Undecided => Medium ** Changed in: nova Importance: Undecided => Medium ** Changed in: nova/pike Assignee: (unassigned) => Tetsuro Nakamura (tetsuro0907) ** Changed in: nova/queens Assignee: (unassigned) => Tetsuro Nakamura (tetsuro0907) ** Changed in: nova/queens Status: New => In Progress ** Changed in: nova/queens Importance: Undecided => Medium -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1746674 Title: emulator_threads_policy=isolate doesn't work with multi numa node Status in OpenStack Compute (nova): Fix Released Status in OpenStack Compute (nova) pike series: In Progress Status in OpenStack Compute (nova) queens series: In Progress Bug description: Description === As described in test_multi_nodes_isolate() in https://github.com/openstack/nova/blob/master/nova/tests/unit/virt/test_hardware.py#L3006-L3024, numa_fit_instance_to_host() function returns None for cpuset_reserved for cells with id > 0. 
    def test_multi_nodes_isolate(self):
        host_topo = self._host_topology()
        inst_topo = objects.InstanceNUMATopology(
            emulator_threads_policy=(
                fields.CPUEmulatorThreadsPolicy.ISOLATE),
            cells=[objects.InstanceNUMACell(
                       id=0, cpuset=set([0]), memory=2048,
                       cpu_policy=fields.CPUAllocationPolicy.DEDICATED),
                   objects.InstanceNUMACell(
                       id=1, cpuset=set([1]), memory=2048,
                       cpu_policy=fields.CPUAllocationPolicy.DEDICATED)])

        inst_topo = hw.numa_fit_instance_to_host(host_topo, inst_topo)

        self.assertEqual({0: 0}, inst_topo.cells[0].cpu_pinning)
        self.assertEqual(set([1]), inst_topo.cells[0].cpuset_reserved)
        self.assertEqual({1: 2}, inst_topo.cells[1].cpu_pinning)
        self.assertIsNone(inst_topo.cells[1].cpuset_reserved)

However, we are testing the libvirt driver with a non-None cpuset_reserved value in https://github.com/openstack/nova/blob/master/nova/tests/unit/virt/libvirt/test_driver.py#L3052.

    def test_get_guest_config_numa_host_instance_isolated_emulator_threads(
            self):
        instance_topology = objects.InstanceNUMATopology(
            emulator_threads_policy=(
                fields.CPUEmulatorThreadsPolicy.ISOLATE),
            cells=[
                objects.InstanceNUMACell(
                    id=0, cpuset=set([0, 1]),
                    memory=1024, pagesize=2048,
                    cpu_policy=fields.CPUAllocationPolicy.DEDICATED,
                    cpu_pinning={0: 4, 1: 5},
                    cpuset_reserved=set([6])),
                objects.InstanceNUMACell(
                    id=1, cpuset=set([2, 3]),
                    memory=1024, pagesize=2048,
                    cpu_policy=fields.CPUAllocationPolicy.DEDICATED,
                    cpu_pinning={2: 7, 3: 8},
                    cpuset_reserved=set([]))])  # <- this should be None!!
        ...(snip)...

Actually, this causes errors when deploying VMs with multi numa nodes with `emulator_threads_policy=isolate`.

Environment & Steps to reproduce
================================
1. Use devstack to build all-in-one OpenStack with libvirt/KVM driver on a VM whose lscpu looks like this.

   CPU(s):               8
   On-line CPU(s) list:  0-7
   Thread(s) per core:   1
   Core(s) per socket:   1
   Socket(s):            8
   NUMA node(s):         2
   NUMA node0 CPU(s):    0-3
   NUMA node1 CPU(s):    4-7

2. Try to build a VM with multi numa nodes with emulator_threads_policy=isolate.
   $ openstack flavor create c2r1024d1 --id 6 --ram 1024 --disk 1 --vcpu 2
   $ openstack flavor set c2r1024d1 --property hw:cpu_policy=dedicated --property hw:emulator_threads_policy=isolate --property hw:numa_nodes=2
   $ openstack server create test1 --image cirros-0.3.5-x86_64-disk --flavor c2r1024d1 --network private

Expected & Actual result
========================
Expected: VM is built without an error
Actual: VM goes into an ERROR state with the following message.

$ openstack server show test1
...(snip)...
| fault | {u'message': u'Exceeded maximum number of retries. Exhausted all hosts available for retrying build failures for instance e8d7537f-9140-4ae6-bb3f-b98458a862ac.', u'code': 500, u'details': u' File "/opt/stack/nova/nova/conductor/manager.py", line 578, in build_instances\\n raise exception.MaxRetriesExceeded(reason=msg)\\n', u'created':
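The mismatch the report describes can be reduced to one rule: with emulator_threads_policy=isolate, exactly one NUMA cell carries the reserved emulator-thread pCPU, and every other cell reports None for cpuset_reserved. A toy sketch of those semantics (not the nova implementation; cell IDs and the reserved pCPU number are illustrative):

```python
def reserve_emulator_cpus(cell_ids, reserved_cpu):
    """Model emulator_threads_policy=isolate: the reserved pCPU lives in
    the first fitted cell; every other cell's cpuset_reserved is None,
    which callers (like the libvirt guest-config code) must tolerate."""
    return {cell_id: ({reserved_cpu} if i == 0 else None)
            for i, cell_id in enumerate(cell_ids)}

reserved = reserve_emulator_cpus([0, 1], reserved_cpu=6)
assert reserved[0] == {6}
# None, not set() -- this is exactly the value the libvirt test above
# got wrong, and what broke real multi-NUMA-node deployments.
assert reserved[1] is None
```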
[Yahoo-eng-team] [Bug 1759935] [NEW] Documentation build broken with openstackdocstheme 1.20.0
Public bug reported:

The glance doc builds are all failing with the latest release of openstackdocstheme. The full traceback looks like this:

Traceback (most recent call last):
  File "/opt/stack/glance/.tox/docs/lib/python2.7/site-packages/sphinx/setup_command.py", line 191, in run
    warningiserror=self.warning_is_error)
  File "/opt/stack/glance/.tox/docs/lib/python2.7/site-packages/sphinx/application.py", line 234, in __init__
    self._init_builder()
  File "/opt/stack/glance/.tox/docs/lib/python2.7/site-packages/sphinx/application.py", line 312, in _init_builder
    self.emit('builder-inited')
  File "/opt/stack/glance/.tox/docs/lib/python2.7/site-packages/sphinx/application.py", line 489, in emit
    return self.events.emit(event, self, *args)
  File "/opt/stack/glance/.tox/docs/lib/python2.7/site-packages/sphinx/events.py", line 79, in emit
    results.append(callback(*args))
  File "/opt/stack/glance/.tox/docs/lib/python2.7/site-packages/openstackdocstheme/ext.py", line 209, in _builder_inited
    version = packaging.get_version(project_name)
  File "/opt/stack/glance/.tox/docs/lib/python2.7/site-packages/pbr/packaging.py", line 740, in get_version
    version = _get_version_from_git(pre_version)
  File "/opt/stack/glance/.tox/docs/lib/python2.7/site-packages/pbr/packaging.py", line 665, in _get_version_from_git
    git_dir = git._run_git_functions()
  File "/opt/stack/glance/.tox/docs/lib/python2.7/site-packages/pbr/git.py", line 131, in _run_git_functions
    if _git_is_installed():
  File "/opt/stack/glance/.tox/docs/lib/python2.7/site-packages/pbr/git.py", line 83, in _git_is_installed
    _run_shell_command(['git', '--version'])
  File "/opt/stack/glance/.tox/docs/lib/python2.7/site-packages/pbr/git.py", line 50, in _run_shell_command
    out = output.communicate()
  File "/usr/lib64/python2.7/subprocess.py", line 479, in communicate
    return self._communicate(input)
  File "/usr/lib64/python2.7/subprocess.py", line 1098, in _communicate
    stdout, stderr = self._communicate_with_poll(input)
  File "/usr/lib64/python2.7/subprocess.py", line 1128, in _communicate_with_poll
    poller = select.poll()
AttributeError: 'module' object has no attribute 'poll'

Although you have to dig into the output file to actually find that. The console only shows the last frame:

2018-03-29 18:29:54.962798 | ubuntu-xenial | Exception occurred:
2018-03-29 18:29:54.966922 | ubuntu-xenial |   File "/usr/lib/python2.7/subprocess.py", line 1447, in _communicate_with_poll
2018-03-29 18:29:54.967107 | ubuntu-xenial |     poller = select.poll()
2018-03-29 18:29:54.967324 | ubuntu-xenial | AttributeError: 'module' object has no attribute 'poll'

This seems to be related to eventlet monkey patching and was triggered by https://review.openstack.org/552069

That patch added a pbr import to openstackdocstheme, which apparently is happening before the eventlet monkey patching occurs. Then, after monkey patching, pbr attempts to call the subprocess module which fails because poll() has been patched out from under it. At least, this is our working theory. Eventlet monkey patching is a complex beast so it's hard to say anything with absolute certainty.

I believe the best solution is to tweak Glance's monkey patching method to better conform to http://specs.openstack.org/openstack/openstack-specs/specs/eventlet-best-practices.html#monkey-patching

Doing so seems to make the doc build work again, and will likely avoid other problems down the line.

** Affects: glance
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1759935

Title:
  Documentation build broken with openstackdocstheme 1.20.0

Status in Glance:
  New

Bug description:
  The glance doc builds are all failing with the latest release of openstackdocstheme.
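The failure mode — a module patched out from under code that is already relying on it — can be illustrated without eventlet at all. This is a deliberately simplified model, not glance's or eventlet's actual code:

```python
import types

# A stand-in for the real 'select' module. subprocess looks up its
# attributes dynamically at call time (select.poll() inside
# _communicate_with_poll), so the lookup happens on every call.
real_select = types.SimpleNamespace(poll=lambda: 'poller')

def communicate(select_mod):
    # Mimics subprocess: attribute lookup at call time, not import time.
    return select_mod.poll()

assert communicate(real_select) == 'poller'

# Swapping in a replacement that has no poll() -- as eventlet's green
# select does -- turns the same call into the AttributeError from the
# traceback above. The cure is to monkey patch before anything else is
# imported, per the eventlet best-practices spec.
green_select = types.SimpleNamespace()  # no poll attribute
try:
    communicate(green_select)
    raised = False
except AttributeError:
    raised = True
assert raised
```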
[Yahoo-eng-team] [Bug 1759924] [NEW] Port device owner isn't updated with new host availability zone during unshelve
Public bug reported:

During an unshelve, the host for an instance, and therefore its availability zone, may change, but this does not seem to be updated in the port's device_owner, causing problems with the add-fixed-ip server action, for example. In nova/network/neutronv2/api.py, _update_port_binding_for_instance should probably update the port's device_owner the same way that _update_ports_for_instance does.

+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | UP                                   |
| allowed_address_pairs |                                      |
| binding_host_id       | r02c4b15                             |
| binding_profile       |                                      |
| binding_vif_details   | port_filter='True'                   |
| binding_vif_type      | bridge                               |
| binding_vnic_type     | normal                               |
| created_at            | 2018-03-05T13:25:48Z                 |
| data_plane_status     | None                                 |
| description           |                                      |
| device_id             | 53f04bf3-eb1f-4c64-a70f-fd16d6c1a5af |
| device_owner          | compute:zone-r7                      |
| dns_assignment        |                                      |
| dns_name              | instance-w-volume-shelving-test      |
| extra_dhcp_opts       |                                      |
| fixed_ips             |                                      |
| id                    | 327b891f-1820-4aa9-bbc3-fe9cc619eac3 |
| ip_address            | None                                 |
| mac_address           | fa:16:3e:14:21:d1                    |
| name                  |                                      |
| network_id            | e73b1699-0129-4c12-b722-e6ce52604824 |
| option_name           | None                                 |
| option_value          | None                                 |
| port_security_enabled | False                                |
| project_id            | ecf32b152563403bbde297f58f4637d4     |
| qos_policy_id         | None                                 |
| revision_number       | 19                                   |
| security_group_ids    | bb25a73a-a62e-4015-9595-16add6b7d3a0 |
| status                | ACTIVE                               |
| subnet_id             | None                                 |
| tags                  |                                      |
| trunk_details         | None                                 |
| updated_at            | 2018-03-28T20:03:23Z                 |
+-----------------------+--------------------------------------+

nova show 53f04bf3-eb1f-4c64-a70f-fd16d6c1a5af

+-------------------------------------+----------+
| Property                            | Value    |
+-------------------------------------+----------+
| OS-DCF:diskConfig                   | MANUAL   |
| OS-EXT-AZ:availability_zone         | zone-r2  |
| OS-EXT-SRV-ATTR:host                | r02c4b15 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | r02c4b15

** Affects: nova
   Importance: Medium
   Status: Triaged

** Tags: neutron shelve

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1759924 Title: Port device owner isn't updated with new host availability zone during unshelve Status in OpenStack Compute (nova): Triaged Bug description:
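The direction suggested in the report — have _update_port_binding_for_instance refresh device_owner alongside the binding host, as _update_ports_for_instance does — can be sketched as a small helper. The helper name and body are illustrative, not nova's actual code; only the `compute:<az>` device_owner convention and the field names come from the port shown above:

```python
def port_update_body(host, availability_zone):
    """Build a Neutron port update that refreshes both the binding host
    and the availability-zone-derived device_owner (hypothetical helper;
    the real change would live in nova/network/neutronv2/api.py)."""
    return {
        'port': {
            'binding:host_id': host,
            'device_owner': 'compute:%s' % availability_zone,
        }
    }

# After the unshelve in the report, host r02c4b15 sits in zone-r2, so
# the stale 'compute:zone-r7' owner should be rewritten:
body = port_update_body('r02c4b15', 'zone-r2')
assert body['port']['device_owner'] == 'compute:zone-r2'
```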
[Yahoo-eng-team] [Bug 1759609] Re: VMware: _detach_instance_volumes method fails due to wrong detach_volume call
Reviewed:  https://review.openstack.org/557377
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=ad4a26e1843bc4cefa2c5b4bb093d692cddfaa49
Submitter: Zuul
Branch:    master

commit ad4a26e1843bc4cefa2c5b4bb093d692cddfaa49
Author: Claudiu Belu
Date:   Wed Mar 28 01:36:24 2018 -0700

    vmware: Fixes _detach_instance_volumes method

    The _detach_instance_volumes method calls self.detach_volume
    with an invalid number of arguments (context missing), causing
    it to fail. This patch solves the issue.

    Change-Id: Ibb6afa883c4ed55ea544a1e9d247dab4fc657cd2
    Closes-Bug: #1759609

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1759609

Title:
  VMware: _detach_instance_volumes method fails due to wrong detach_volume call

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  In Progress

Bug description:
  The method _detach_instance_volumes [1] calls self.detach_volume [2] with an invalid number of arguments (context argument is missing), causing it to fail.

  [1] https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/driver.py#L426
  [2] https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/driver.py#L438

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1759609/+subscriptions
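The failure itself is a plain Python arity error: dropping the context argument shifts every remaining parameter and the interpreter refuses the call before any VMware code runs. A minimal model (the signature is hypothetical, shaped like the driver method, not copied from nova):

```python
class FakeDriver:
    # Hypothetical signature shaped like the virt driver's detach_volume:
    # (context, connection_info, instance, mountpoint).
    def detach_volume(self, context, connection_info, instance, mountpoint):
        return instance

driver = FakeDriver()

# The buggy call site omitted 'context', so Python reports a missing
# required positional argument.
try:
    driver.detach_volume({'driver_volume_type': 'iscsi'}, 'uuid', '/dev/sdb')
    failed = False
except TypeError:
    failed = True
assert failed

# The fix is simply to thread the context through:
assert driver.detach_volume(
    'ctxt', {'driver_volume_type': 'iscsi'}, 'uuid', '/dev/sdb') == 'uuid'
```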
[Yahoo-eng-team] [Bug 1759616] Re: [2.x] CentOS networking adds weird route
This is an expected route for RHEL/CentOS defaults. Please re-open if you think cloud-init needs to do something here.

** Changed in: cloud-init
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1759616

Title:
  [2.x] CentOS networking adds weird route

Status in cloud-init:
  Invalid
Status in MAAS:
  Incomplete

Bug description:
  MAAS CentOS deployed machine has a weird route:

    169.254.0.0/16 dev ens3 scope link metric 1002

  [centos@precise ~]$ ip route sh
  default via 192.168.122.1 dev ens3
  169.254.0.0/16 dev ens3 scope link metric 1002
  192.168.122.0/24 dev ens3 proto kernel scope link src 192.168.122.6
  192.168.133.0/24 via 192.168.122.1 dev ens3

  Deploying Ubuntu doesn't yield the same config.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1759616/+subscriptions
[Yahoo-eng-team] [Bug 1752152] Re: Attach Volume Fails with secure call to cinder
I know this is at least needed in Queens, but I'm not sure if we need this in Pike. Need to see if anything is using this code in Pike.

** Also affects: nova/queens
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1752152

Title:
  Attach Volume Fails with secure call to cinder

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) queens series:
  New
Status in python-cinderclient:
  Invalid

Bug description:
  It is found that when the cinder endpoint is configured to use https, the attach volume flow fails with the stack trace seen below (seen in the nova api log) because it fails to make a secure call from nova to cinder. Secure calls perform certificate validation, and in this particular flow certificate validation is completely skipped.

  2018-02-27 08:16:51.338 1324 ERROR   File "/usr/lib/python2.7/site-packages/nova/compute/api.py", line 3971, in attach_volume
  2018-02-27 08:16:51.338 1324 ERROR     cinder.is_microversion_supported(context, '3.44')
  2018-02-27 08:16:51.338 1324 ERROR   File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 138, in is_microversion_supported
  2018-02-27 08:16:51.338 1324 ERROR     _check_microversion(url, microversion)
  2018-02-27 08:16:51.338 1324 ERROR   File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 86, in _check_microversion
  2018-02-27 08:16:51.338 1324 ERROR     max_api_version = cinder_client.get_highest_client_server_version(url)
  2018-02-27 08:16:51.338 1324 ERROR   File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 126, in get_highest_client_server_version
  2018-02-27 08:16:51.338 1324 ERROR     min_server, max_server = get_server_version(url)
  2018-02-27 08:16:51.338 1324 ERROR   File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 109, in get_server_version
  2018-02-27 08:16:51.338 1324 ERROR     response = requests.get(version_url)
  2018-02-27 08:16:51.338 1324 ERROR   File "/usr/lib/python2.7/site-packages/requests/api.py", line 72, in get
  2018-02-27 08:16:51.338 1324 ERROR     return request('get', url, params=params, **kwargs)
  2018-02-27 08:16:51.338 1324 ERROR   File "/usr/lib/python2.7/site-packages/requests/api.py", line 58, in request
  2018-02-27 08:16:51.338 1324 ERROR     return session.request(method=method, url=url, **kwargs)
  2018-02-27 08:16:51.338 1324 ERROR   File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 502, in request
  2018-02-27 08:16:51.338 1324 ERROR     resp = self.send(prep, **send_kwargs)
  2018-02-27 08:16:51.338 1324 ERROR   File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 612, in send
  2018-02-27 08:16:51.338 1324 ERROR     r = adapter.send(request, **kwargs)
  2018-02-27 08:16:51.338 1324 ERROR   File "/usr/lib/python2.7/site-packages/requests/adapters.py", line 504, in send
  2018-02-27 08:16:51.338 1324 ERROR     raise ConnectionError(e, request=request)
  2018-02-27 08:16:51.338 1324 ERROR ConnectionError: HTTPSConnectionPool(host='ipx-x-x-x.xxx.xxx.xxx.com', port=9000): Max retries exceeded with url: / (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'SSL3_GET_SERVER_CERTIFICATE', 'certificate verify failed')],)",),))

  This is a regression introduced by changeset https://review.openstack.org/#/c/469579/, which was merged back in June 2017. As part of this changeset a new function, _check_microversion, was introduced. It makes a cinderclient call, which in turn makes a cinder https REST API call without passing the certificate. This leads to the problem listed above.

  https://github.com/openstack/nova/blob/stable/queens/nova/volume/cinder.py#L75
  https://github.com/openstack/nova/blob/stable/queens/nova/volume/cinder.py#L86
  https://github.com/openstack/python-cinderclient/blob/stable/queens/cinderclient/client.py#L126
  https://github.com/openstack/python-cinderclient/blob/stable/queens/cinderclient/client.py#L109

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1752152/+subscriptions
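The missing piece in the report above is that the bare `requests.get(version_url)` call never receives the client's certificate settings. One way the mapping could look — an illustrative helper, not the actual cinderclient API — is to translate the usual insecure/CA-bundle options into requests' `verify` keyword:

```python
def version_request_kwargs(insecure=False, cacert=None):
    """Map client SSL options onto requests' ``verify`` kwarg (sketch):
    - insecure=True  -> verify=False (skip validation entirely)
    - cacert=<path>  -> verify=<path> (validate against that CA bundle)
    - neither        -> verify=True  (system trust store)
    These would then be passed through to requests.get(version_url, **kw).
    """
    if insecure:
        return {'verify': False}
    if cacert:
        return {'verify': cacert}
    return {'verify': True}

assert version_request_kwargs() == {'verify': True}
assert version_request_kwargs(insecure=True) == {'verify': False}
assert version_request_kwargs(cacert='/etc/ssl/ca.pem') == {'verify': '/etc/ssl/ca.pem'}
```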
[Yahoo-eng-team] [Bug 1759863] [NEW] placement functional tests can collide when synchronising the traits table
Public bug reported: The placement functional tests make use of the traits table. At the start of most requests to the objects in resource_provider.py some code is run to make sure that traits in the os-traits library are synchronised to the table. A global flag is present which says "I've already synchronised". Functional tests are responsible for making sure this is in the right state. It turns out that this management was not complete, and after a recent move of db/test_resource_provider.py and db/test_allocation_candidates.py within the placement hierarchy the delicate balance of how tests are split among processes by stestr was upset. This leads to test failures where no traits are in the traits table. The fix is to ensure that functional tests manage the related db flags both during setup and teardown and not rely solely on one or the other (as people can easily get it wrong). ** Affects: nova Importance: Medium Assignee: Chris Dent (cdent) Status: Triaged ** Tags: placement testing -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1759863 Title: placement functional tests can collide when synchronising the traits table Status in OpenStack Compute (nova): Triaged Bug description: The placement functional tests make use of the traits table. At the start of most requests to the objects in resource_provider.py some code is run to make sure that traits in the os-traits library are synchronised to the table. A global flag is present which says "I've already synchronised". Functional tests are responsible for making sure this is in the right state. It turns out that this management was not complete, and after a recent move of db/test_resource_provider.py and db/test_allocation_candidates.py within the placement hierarchy the delicate balance of how tests are split among processes by stestr was upset. 
This leads to test failures where no traits are in the traits table. The fix is to ensure that functional tests manage the related db flags both during setup and teardown and not rely solely on one or the other (as people can easily get it wrong). To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1759863/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1759839] [NEW] Documentation Error for Placement API Port Number
Public bug reported: This bug tracker is for errors with the documentation, use the following as a template and remove or add fields as you see fit. Convert [ ] into [x] to check boxes: - [X] This doc is inaccurate in this way: Document instructs user to create the endpoints for Placement API to be 8780, but they should be 8778. This applies to the Red Hat and OpenSUSE documentation for Queens. Ubuntu edition appears to be correct. - [ ] This is a doc addition request. - [ ] I have a fix to the document that I can paste below including example: input and output. If you have a troubleshooting or support issue, use the following resources: - Ask OpenStack: http://ask.openstack.org - The mailing list: http://lists.openstack.org - IRC: 'openstack' channel on Freenode --- Release: 17.0.2.dev37 on 2018-03-29 10:13 SHA: 20d293588adbead40d5f4b29bc695d8dff332ba4 Source: https://git.openstack.org/cgit/openstack/nova/tree/doc/source/install/controller-install-obs.rst URL: https://docs.openstack.org/nova/queens/install/controller-install-obs.html ** Affects: nova Importance: Undecided Status: New ** Tags: connection nova placement port refused -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1759839 Title: Documentation Error for Placement API Port Number Status in OpenStack Compute (nova): New Bug description: This bug tracker is for errors with the documentation, use the following as a template and remove or add fields as you see fit. Convert [ ] into [x] to check boxes: - [X] This doc is inaccurate in this way: Document instructs user to create the endpoints for Placement API to be 8780, but they should be 8778. This applies to the Red Hat and OpenSUSE documentation for Queens. Ubuntu edition appears to be correct. - [ ] This is a doc addition request. - [ ] I have a fix to the document that I can paste below including example: input and output. 
If you have a troubleshooting or support issue, use the following resources: - Ask OpenStack: http://ask.openstack.org - The mailing list: http://lists.openstack.org - IRC: 'openstack' channel on Freenode --- Release: 17.0.2.dev37 on 2018-03-29 10:13 SHA: 20d293588adbead40d5f4b29bc695d8dff332ba4 Source: https://git.openstack.org/cgit/openstack/nova/tree/doc/source/install/controller-install-obs.rst URL: https://docs.openstack.org/nova/queens/install/controller-install-obs.html To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1759839/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1759609] Re: VMware: _detach_instance_volumes method fails due to wrong detach_volume call
** Changed in: nova Importance: Undecided => Medium ** Also affects: nova/queens Importance: Undecided Status: New ** Changed in: nova/queens Importance: Undecided => Medium -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1759609 Title: VMware: _detach_instance_volumes method fails due to wrong detach_volume call Status in OpenStack Compute (nova): In Progress Status in OpenStack Compute (nova) queens series: New Bug description: The method _detach_instance_volumes [1] calls self.detach_volume [2] with an invalid number of arguments (context argument is missing), causing it to fail. [1] https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/driver.py#L426 [2] https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/driver.py#L438 To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1759609/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1229445] Re: db type could not be determined
** Also affects: oslo.versionedobjects Importance: Undecided Status: New ** Changed in: oslo.versionedobjects Assignee: (unassigned) => Sorin Sbârnea (sorin-sbarnea) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1229445 Title: db type could not be determined Status in Ironic: Fix Released Status in Magnum: Fix Released Status in OpenStack Compute (nova): Fix Released Status in oslo.versionedobjects: New Status in Python client library for Sahara: Invalid Status in tempest: Incomplete Status in Testrepository: Triaged Status in Zun: Fix Released Bug description: In openstack/python-novaclient project, run test in py27 env, then run test in py33 env, the following error will stop test: db type could not be determined But, if you run "tox -e py33" fist, then run "tox -e py27", it will be fine, no error. workaround: remove the file in .testrepository/times.dbm, then run py33 test, it will be fine. To manage notifications about this bug go to: https://bugs.launchpad.net/ironic/+bug/1229445/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
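The "db type could not be determined" message comes straight from Python's dbm front end: when the on-disk file's format cannot be recognised (for example, a `.testrepository/times.dbm` written by a different interpreter's dbm variant), `whichdb()` returns an empty string and `dbm.open()` refuses it. A small reproduction of the symptom and the workaround from the report (junk bytes stand in for the incompatible format):

```python
import dbm
import os
import tempfile

# Any unrecognisable content reproduces the symptom of a times.dbm
# written in a dbm format the current interpreter doesn't support.
path = os.path.join(tempfile.mkdtemp(), 'times.dbm')
with open(path, 'wb') as f:
    f.write(b'not a recognisable dbm format')

# whichdb() returns '' when the file exists but the format is unknown...
assert dbm.whichdb(path) == ''

# ...and dbm.open() then fails with "db type could not be determined".
try:
    dbm.open(path, 'r')
    raised = False
except dbm.error:
    raised = True
assert raised

# The workaround from the report: remove the stale file and start over.
os.remove(path)
```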
[Yahoo-eng-team] [Bug 1759808] [NEW] Deprecate firewall_driver and use_neutron option
Public bug reported:

In the Nova install doc, https://docs.openstack.org/nova/queens/install/controller-install-ubuntu.html#install-and-configure-components, the nova.conf [DEFAULT] section firewall_driver and use_neutron options have been deprecated for removal since 15.0.0, so they can be removed from the install doc.

This bug tracker is for errors with the documentation, use the following as a template and remove or add fields as you see fit. Convert [ ] into [x] to check boxes:

- [ ] This doc is inaccurate in this way: __
- [ ] This is a doc addition request.
- [ ] I have a fix to the document that I can paste below including example: input and output.

If you have a troubleshooting or support issue, use the following resources:

- Ask OpenStack: http://ask.openstack.org
- The mailing list: http://lists.openstack.org
- IRC: 'openstack' channel on Freenode

---
Release: 17.0.2.dev33 on 2018-03-29 03:39
SHA: aea284349f5efc64c645e4970de7774ff58cc77c
Source: https://git.openstack.org/cgit/openstack/nova/tree/doc/source/install/controller-install-ubuntu.rst
URL: https://docs.openstack.org/nova/queens/install/controller-install-ubuntu.html

** Affects: nova
   Importance: Undecided
   Assignee: Chiawei Xie (dommgifer)
   Status: New

** Changed in: nova
   Assignee: (unassigned) => Chiawei Xie (dommgifer)

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1759808

Title:
  Deprecate firewall_driver and use_neutron option

Status in OpenStack Compute (nova):
  New

Bug description:
  In the Nova install doc, https://docs.openstack.org/nova/queens/install/controller-install-ubuntu.html#install-and-configure-components, the nova.conf [DEFAULT] section firewall_driver and use_neutron options have been deprecated for removal since 15.0.0, so they can be removed from the install doc.

  This bug tracker is for errors with the documentation, use the following as a template and remove or add fields as you see fit.
Convert [ ] into [x] to check boxes: - [ ] This doc is inaccurate in this way: __ - [ ] This is a doc addition request. - [ ] I have a fix to the document that I can paste below including example: input and output. If you have a troubleshooting or support issue, use the following resources: - Ask OpenStack: http://ask.openstack.org - The mailing list: http://lists.openstack.org - IRC: 'openstack' channel on Freenode --- Release: 17.0.2.dev33 on 2018-03-29 03:39 SHA: aea284349f5efc64c645e4970de7774ff58cc77c Source: https://git.openstack.org/cgit/openstack/nova/tree/doc/source/install/controller-install-ubuntu.rst URL: https://docs.openstack.org/nova/queens/install/controller-install-ubuntu.html To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1759808/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1759792] [NEW] [Testing] Compute nodes are registered to only one cell database regardless of their cells
Public bug reported:

The nova processes for functional tests are started by the
'start_service' method of the 'TestCase' class in nova/test.py. A cell
can be specified to run a process in that cell.

https://github.com/openstack/nova/blob/00b19c73cfc72a79ab5ae4830a25dd53476a3b08/nova/test.py#L409-L427

---
    def start_service(self, name, host=None, **kwargs):
        if name == 'compute' and self.USES_DB:
            # NOTE(danms): We need to create the HostMapping first, because
            # otherwise we'll fail to update the scheduler while running
            # the compute node startup routines below.
            ctxt = context.get_context()
            cell = self.cell_mappings[kwargs.pop('cell', CELL1_NAME)]    <===
            hm = objects.HostMapping(context=ctxt,
                                     host=host or name,
                                     cell_mapping=cell)                  <===
            hm.create()
            self.host_mappings[hm.host] = hm
            if host is not None:
                # Make sure that CONF.host is relevant to the right hostname
                self.useFixture(nova_fixtures.ConfPatcher(host=host))
        svc = self.useFixture(
            nova_fixtures.ServiceFixture(name, host, **kwargs))
        return svc.service
---

But multiple cells are not considered in ServiceFixture, so all compute
nodes are registered to only one cell (cell1) database regardless of
their cells. nova.context.get_admin_context() is called inside
self.service.start(); it returns a context that is not aware of its
cell, so the compute node is registered to the cell1 database regardless
of its cell.

https://github.com/openstack/nova/blob/00b19c73cfc72a79ab5ae4830a25dd53476a3b08/nova/tests/fixtures.py#L65-L82

---
class ServiceFixture(fixtures.Fixture):
    """Run a service as a test fixture."""

    def __init__(self, name, host=None, **kwargs):
        name = name
        # If not otherwise specified, the host will default to the
        # name of the service. Some things like aggregates care that
        # this is stable.
        host = host or name
        kwargs.setdefault('host', host)
        kwargs.setdefault('binary', 'nova-%s' % name)
        self.kwargs = kwargs

    def setUp(self):
        super(ServiceFixture, self).setUp()
        self.service = service.Service.create(**self.kwargs)
        self.service.start()    <===
        self.addCleanup(self.service.kill)
---

Environment
---
nova master (commit 00b19c73cfc72a79ab5ae4830a25dd53476a3b08)

** Affects: nova
   Importance: Undecided
     Assignee: Takashi NATSUME (natsume-takashi)
       Status: In Progress

** Tags: testing

** Changed in: nova
       Status: New => In Progress

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1759792

Title:
  [Testing] Compute nodes are registered to only one cell database
  regardless of their cells

Status in OpenStack Compute (nova):
  In Progress

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1759792/+subscriptions
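A minimal, self-contained sketch (hypothetical code, not real nova classes) of the cell-awareness the report asks for: the fixture should register a compute host in the database of the cell its HostMapping points at, instead of going through a context that always resolves to cell1.

```python
# Hypothetical sketch (not real nova code) of the behaviour the fixture
# needs: the cell is handed to the fixture explicitly, so each compute
# host lands in its own cell's database.

class CellDatabase:
    """Stand-in for one cell's database."""
    def __init__(self, name):
        self.name = name
        self.compute_nodes = []

class CellAwareServiceFixture:
    """Start a fake service against the database of its own cell."""
    def __init__(self, host, cell_db):
        self.host = host
        self.cell_db = cell_db  # cell-aware: the cell is passed in

    def start(self):
        # Register in the cell handed to the fixture, instead of calling
        # something like get_admin_context() that ignores the cell.
        self.cell_db.compute_nodes.append(self.host)

cell1, cell2 = CellDatabase('cell1'), CellDatabase('cell2')
CellAwareServiceFixture('host1', cell1).start()
CellAwareServiceFixture('host2', cell2).start()
print(cell1.compute_nodes)  # ['host1']
print(cell2.compute_nodes)  # ['host2'] -- not misplaced into cell1
```

The design point is simply that the cell must travel with the fixture (or with a context targeted at that cell's database) rather than being resolved implicitly at start() time.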
[Yahoo-eng-team] [Bug 1759790] [NEW] [RFE] metric for the route
Public bug reported:

Problem Description
===================

A routing metric is a quantitative value that is used to evaluate the
path cost. But neutron can't specify a different metric with the same
destination address. It is useful for realizing FRR (Fast Reroute) in
telecoms and NFV scenarios.

There is no optional argument for metric:

root@ubuntudbs:/home/dbs# neutron router-update --help
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
usage: neutron router-update [-h] [--name NAME] [--description DESCRIPTION]
                             [--admin-state-up {True,False}]
                             [--distributed {True,False}]
                             [--route destination=CIDR,nexthop=IP_ADDR | --no-routes]
                             ROUTER

API impact
==========

"router": {
    "admin_state_up": true,
    "routes": [
        {
            "destination": "179.24.1.0/24",
            "nexthop": "172.24.3.99",
            "metric": "100"
        }
    ]
}

Proposal
========

A new optional argument metric can be added to set the metric for the
route. This value can be set by the user or have a default value.

References
==========

NULL

** Affects: neutron
   Importance: Undecided
       Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1759790

Title:
  [RFE] metric for the route

Status in neutron:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1759790/+subscriptions
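To make the proposal concrete, here is a small sketch of how a client might build the router-update body with the proposed 'metric' key. The key name and the default value of 100 are assumptions from this RFE, not an accepted neutron API.

```python
import json

# Sketch of a client building the proposed request body; 'metric' and
# its default of 100 are this RFE's proposal, not an existing API.

def build_router_update(routes, default_metric=100):
    """Return a router-update body, filling in a default metric."""
    for route in routes:
        route.setdefault('metric', default_metric)
    return {'router': {'routes': routes}}

body = build_router_update([
    {'destination': '179.24.1.0/24', 'nexthop': '172.24.3.99'},
    {'destination': '179.24.1.0/24', 'nexthop': '172.24.3.100',
     'metric': 200},  # same destination, different nexthop and cost
])
print(json.dumps(body, indent=2))
```

Note that the two routes share a destination and differ only in nexthop and metric, which is exactly the case (backup paths for fast reroute) that today's extraroute extension cannot express.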
[Yahoo-eng-team] [Bug 1759773] [NEW] FWaaS: Invalid port error on associating L3 ports (Router in HA) to firewall group
Public bug reported:

From: Ignazio Cassano:

I am trying to use fwaas v2 on centos 7 openstack ocata. After creating
firewall rules and a policy, I am trying to create a firewall group. I
am able to create the firewall group, but it does not work when I try to
set the ports into it:

openstack firewall group set --port 87173e27-c2b3-4a67-83d0-d8645d9f309b prova
Failed to set firewall group 'prova': Firewall Group Port 87173e27-c2b3-4a67-83d0-d8645d9f309b is invalid
Neutron server returns request_ids: ['req-9ef8ad1e-9fad-4956-8aff-907c32d01e1f']

** Affects: neutron
   Importance: Undecided
     Assignee: Sridar Kandaswamy (skandasw)
       Status: Confirmed

** Tags: fwaas

** Changed in: neutron
       Status: New => Confirmed

** Changed in: neutron
     Assignee: (unassigned) => Sridar Kandaswamy (skandasw)

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1759773

Title:
  FWaaS: Invalid port error on associating L3 ports (Router in HA) to
  firewall group

Status in neutron:
  Confirmed

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1759773/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
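A plausible illustration of the failure (an assumption about the validation logic, not the actual neutron-fwaas code): firewall-group port validation accepts only certain device_owner values, and an HA router's interface ports carry an HA-specific device_owner that would fall outside such an allowed set.

```python
# Hypothetical sketch of FWaaS v2 port validation; the allowed set below
# is an assumption for illustration, not the actual neutron-fwaas code.
# The device_owner strings themselves are neutron's well-known values.

ALLOWED_DEVICE_OWNERS = {
    'network:router_interface',
    'network:router_interface_distributed',
}

def port_is_valid_for_fwg(device_owner):
    """Mimic a check that yields 'Firewall Group Port ... is invalid'."""
    return device_owner in ALLOWED_DEVICE_OWNERS

# A legacy router interface passes, while an HA router's replicated
# interface port would be rejected with the error seen in the report.
print(port_is_valid_for_fwg('network:router_interface'))                # True
print(port_is_valid_for_fwg('network:ha_router_replicated_interface'))  # False
```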