[Yahoo-eng-team] [Bug 1817260] [NEW] Filtering by 'Changes Since =' returns deleted instances, which then slowly disappear from the list
Public bug reported: Filtering by 'Changes Since =' returns deleted instances, which then slowly disappear from the list ** Affects: horizon Importance: Undecided Assignee: pengyuesheng (pengyuesheng) Status: In Progress ** Changed in: horizon Assignee: (unassigned) => pengyuesheng (pengyuesheng) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1817260 Title: Filtering by 'Changes Since =' returns deleted instances, which then slowly disappear from the list Status in OpenStack Dashboard (Horizon): In Progress Bug description: Filtering by 'Changes Since =' returns deleted instances, which then slowly disappear from the list To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1817260/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1817035] Re: eth0 lost carrier / down after restart and IP change on older EC2-classic instance
I am going to close out the cloud-images side of this bug as well. The daily image for a release will contain the fix as soon as it is released. ** Changed in: cloud-images Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1817035 Title: eth0 lost carrier / down after restart and IP change on older EC2-classic instance Status in cloud-images: Invalid Status in cloud-init: Invalid Bug description: I'm experiencing a consistent issue where older EC2 instance types (e.g. c3.large) launched in EC2-Classic from the bionic AMI lose network connection if they're stopped and subsequently restarted. They work fine on the first boot, but when restarted they time out both for things like SSH and also for EC2's status checks. They also appear to have no outbound connection e.g. to the metadata service etc. Rebooting does not resolve the issue, nor does stopping and starting again. On one occasion when testing, I resumed the instance very quickly and Amazon allocated it the same IP address as before - the instance booted with no problems. Normally however the instance gets a new IP address - so it appears this may be related. This is happening consistently with ami-08d658f84a6d84a80 (ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-20190212.1) and I've also reproduced with ami-0c21eb76a5574aa2f (ubuntu/images /hvm-ssd/ubuntu-bionic-18.04-amd64-server-20190210) It does not happen if launching a newer instance type into EC2-VPC. Steps to reproduce: * Launch ami-08d658f84a6d84a80 on a c3.large in EC2-Classic, with a securing group allowing port 22 from anywhere and other configuration all as AWS defaults * Wait for instance to boot, SSH to instance and observe all working normally. Wait for EC2 status checks to initialise and observe they pass. * Stop instance * Wait a minute or two - if restarted very rapidly AWS may reallocate the previous IP * Start instance and observe it has been allocated a new IP address * Wait a few minutes * Attempt to SSH to the instance and observe the connection times out * Observe that the EC2 instance reachability status check is failing * Use the EC2 console to take an instance screenshot and observe that the console is showing the login prompt By attaching the root volume from the broken instance to a new instance, I was able to capture and compare the syslog for the two boots. Both appear broadly similar at first, DHCP works as expected over eth0. In both boots, systemd-networkd then reports "eth0: lost carrier". On the successful boot, systemd-networkd almost immediately afterwards then reports "eth0: gained carrier" and "eth0: IPv6 successfully enabled". However on the failed boot these entries never appear. Shortly afterwards cloud-init runs and on the success boot shows eth0 up with both IPv4 and IPv6 addresses, and valid routing tables. On the failed boot it shows eth0 down, no IPv4 routing table and an empty IPv6 routing table. Also later on in the log from the failed boot amazon-ssm-agent.amazon- ssm-agent reports that it cannot contact the metadata service (dial tcp 169.254.169.254:80: connect: network is unreachable). One thing I did notice is that the images don't appear to have been configured to disable Predictable Network Interface Names. I've tried changing that but it didn't resolve the issue. 
On reflection I think that's perhaps unrelated, since presumably the interface names don't change between a stop and start of the same instance on the same EC2 instance type, and the first boot works happily. Also the logs are all consistently showing eth0 rather than one of the newer interface names. To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-images/+bug/1817035/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1817253] [NEW] When volume creation fails, the message shown is "Success"; changing it to an info message would be better.
Public bug reported: When volume creation fails, the message shown is "Success"; changing it to an info message would be better. ** Affects: horizon Importance: Undecided Assignee: pengyuesheng (pengyuesheng) Status: New ** Changed in: horizon Assignee: (unassigned) => pengyuesheng (pengyuesheng) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1817253 Title: When volume creation fails, the message shown is "Success"; changing it to an info message would be better. Status in OpenStack Dashboard (Horizon): New Bug description: When volume creation fails, the message shown is "Success"; changing it to an info message would be better. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1817253/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1804516] Re: Identity provider API doesn't use default roles
Reviewed: https://review.openstack.org/619373 Committed: https://git.openstack.org/cgit/openstack/keystone/commit/?id=a4c5d804395f20d0c8832ae6ed9a7594926bf981 Submitter: Zuul Branch:master commit a4c5d804395f20d0c8832ae6ed9a7594926bf981 Author: Lance Bragstad Date: Wed Nov 21 21:58:24 2018 + Update idp policies for system admin This change makes the policy definitions for admin idp operations consistent with the other idp policies. Subsequent patches will incorporate: - domain users test coverage - project users test coverage Related-Bug: 1804517 Closes-Bug: 1804516 Change-Id: I6d6a19d95d8970362993c83e70cf23c989ae45e3 ** Changed in: keystone Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1804516 Title: Identity provider API doesn't use default roles Status in OpenStack Identity (keystone): Fix Released Bug description: In Rocky, keystone implemented support to ensure at least three default roles were available [0]. The identity provider (federation) API doesn't incorporate these defaults into its default policies [1], but it should. [0] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/define-default-roles.html [1] https://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/policies/identity_provider.py?id=fb73912d87b61c419a86c0a9415ebdcf1e186927 To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1804516/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1815498] Re: Use pyroute2 to check vlan/vxlan in use
Reviewed: https://review.openstack.org/636296 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=cd31eae33d2e613c178e277af36dc6a9924d597a Submitter: Zuul Branch:master commit cd31eae33d2e613c178e277af36dc6a9924d597a Author: Rodolfo Alonso Hernandez Date: Tue Feb 12 10:05:28 2019 + Use pyroute2 to check vlan/vxlan in use Now ip_lib.get_devices_info function is implemented using pyroute2, "vlan_in_use" and "vxlan_in_use" can make use of it. Change-Id: I82a2c3ea76195b10880cf37bf2229341b995b0ae Closes-Bug: #1815498 ** Changed in: neutron Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1815498 Title: Use pyroute2 to check vlan/vxlan in use Status in neutron: Fix Released Bug description: Now ip_lib.get_devices_info function is implemented using pyroute2, "vlan_in_use" and "vxlan_in_use" can make use of it. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1815498/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1817238] [NEW] Failed to replace security group tags
Public bug reported: Branch: master Environment: devstack Error log: http://paste.openstack.org/show/745672/ ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1817238 Title: Failed to replace security group tags Status in neutron: New Bug description: Branch: master Environment: devstack Error log: http://paste.openstack.org/show/745672/ To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1817238/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1817230] [NEW] SUSE network configuration for IPv6 static addresses
Public bug reported: The sysconfig renderer uses IPV6ADDR and IPV6ADDR_SECONDARIES, but these are not part of the set of names understood on SUSE distros; it should be IPADDR6 and IPADDR6_? ** Affects: cloud-init Importance: Undecided Assignee: Robert Schweikert (rjschwei) Status: New ** Changed in: cloud-init Assignee: (unassigned) => Robert Schweikert (rjschwei) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1817230 Title: SUSE network configuration for IPv6 static addresses Status in cloud-init: New Bug description: The sysconfig renderer uses IPV6ADDR and IPV6ADDR_SECONDARIES, but these are not part of the set of names understood on SUSE distros; it should be IPADDR6 and IPADDR6_? To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1817230/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
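For illustration, a minimal Python sketch of the key-name difference the report describes; this is not cloud-init's actual sysconfig renderer, and the exact suffix scheme for secondary SUSE addresses (IPADDR6_1, IPADDR6_2, ...) is an assumption, since the report leaves it as "IPADDR6_?".

```
def render_ipv6(addresses, suse=False):
    """Return ifcfg key/value pairs for a list of IPv6 CIDR strings."""
    entries = {}
    if suse:
        # SUSE ifcfg: IPADDR6 for the first address, then IPADDR6_1, IPADDR6_2, ...
        # (the suffix scheme is assumed here; the report leaves it open).
        for i, addr in enumerate(addresses):
            key = "IPADDR6" if i == 0 else "IPADDR6_%d" % i
            entries[key] = addr
    else:
        # What the renderer emits today: IPV6ADDR plus a space-separated
        # IPV6ADDR_SECONDARIES list, which SUSE's network scripts do not understand.
        entries["IPV6ADDR"] = addresses[0]
        if len(addresses) > 1:
            entries["IPV6ADDR_SECONDARIES"] = " ".join(addresses[1:])
    return entries


print(render_ipv6(["2001:db8::10/64", "2001:db8::11/64"], suse=False))
print(render_ipv6(["2001:db8::10/64", "2001:db8::11/64"], suse=True))
```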
[Yahoo-eng-team] [Bug 1817219] [NEW] Failed to list security group tags
Public bug reported: branch: master environment: devstack error log: http://paste.openstack.org/show/745657/ ``` Feb 22 10:40:15 lingxiankong-pc neutron-server[3164]: DEBUG neutron.wsgi [-] (3195) accepted ('192.168.206.8', 41556) {{(pid=3195) server /usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py:956}} Feb 22 10:40:16 lingxiankong-pc neutron-server[3164]: ERROR neutron.pecan_wsgi.hooks.translation [None req-acd651ef-cef2-4b36-9d46-7b81e4d18793 demo demo] GET failed.: TypeError: argument of type 'NoneType' is not iterable Feb 22 10:40:16 lingxiankong-pc neutron-server[3164]: ERROR neutron.pecan_wsgi.hooks.translation Traceback (most recent call last): Feb 22 10:40:16 lingxiankong-pc neutron-server[3164]: ERROR neutron.pecan_wsgi.hooks.translation File "/usr/local/lib/python2.7/dist-packages/pecan/core.py", line 682, in __call__ Feb 22 10:40:16 lingxiankong-pc neutron-server[3164]: ERROR neutron.pecan_wsgi.hooks.translation controller, args, kwargs = self.find_controller(state) Feb 22 10:40:16 lingxiankong-pc neutron-server[3164]: ERROR neutron.pecan_wsgi.hooks.translation File "/usr/local/lib/python2.7/dist-packages/pecan/core.py", line 858, in find_controller Feb 22 10:40:16 lingxiankong-pc neutron-server[3164]: ERROR neutron.pecan_wsgi.hooks.translation controller, args, kw = super(Pecan, self).find_controller(_state) Feb 22 10:40:16 lingxiankong-pc neutron-server[3164]: ERROR neutron.pecan_wsgi.hooks.translation File "/usr/local/lib/python2.7/dist-packages/pecan/core.py", line 550, in find_controller Feb 22 10:40:16 lingxiankong-pc neutron-server[3164]: ERROR neutron.pecan_wsgi.hooks.translation self.handle_hooks(self.determine_hooks(controller), 'before', state) Feb 22 10:40:16 lingxiankong-pc neutron-server[3164]: ERROR neutron.pecan_wsgi.hooks.translation File "/usr/local/lib/python2.7/dist-packages/pecan/core.py", line 865, in handle_hooks Feb 22 10:40:16 lingxiankong-pc neutron-server[3164]: ERROR neutron.pecan_wsgi.hooks.translation return super(Pecan, self).handle_hooks(hooks, *args, **kw) Feb 22 10:40:16 lingxiankong-pc neutron-server[3164]: ERROR neutron.pecan_wsgi.hooks.translation File "/usr/local/lib/python2.7/dist-packages/pecan/core.py", line 342, in handle_hooks Feb 22 10:40:16 lingxiankong-pc neutron-server[3164]: ERROR neutron.pecan_wsgi.hooks.translation result = getattr(hook, hook_type)(*args) Feb 22 10:40:16 lingxiankong-pc neutron-server[3164]: ERROR neutron.pecan_wsgi.hooks.translation File "/opt/stack/neutron/neutron/pecan_wsgi/hooks/query_parameters.py", line 101, in before Feb 22 10:40:16 lingxiankong-pc neutron-server[3164]: ERROR neutron.pecan_wsgi.hooks.translation filters = _set_filters(state, controller) Feb 22 10:40:16 lingxiankong-pc neutron-server[3164]: ERROR neutron.pecan_wsgi.hooks.translation File "/opt/stack/neutron/neutron/pecan_wsgi/hooks/query_parameters.py", line 81, in _set_filters Feb 22 10:40:16 lingxiankong-pc neutron-server[3164]: ERROR neutron.pecan_wsgi.hooks.translation is_filter_validation_supported=controller.filter_validation) Feb 22 10:40:16 lingxiankong-pc neutron-server[3164]: ERROR neutron.pecan_wsgi.hooks.translation File "/opt/stack/neutron/neutron/api/api_common.py", line 87, in get_filters_from_dict Feb 22 10:40:16 lingxiankong-pc neutron-server[3164]: ERROR neutron.pecan_wsgi.hooks.translation attributes.populate_project_info(attr_info) Feb 22 10:40:16 lingxiankong-pc neutron-server[3164]: ERROR neutron.pecan_wsgi.hooks.translation File "/usr/local/lib/python2.7/dist-packages/neutron_lib/api/attributes.py", line 49, 
in populate_project_info Feb 22 10:40:16 lingxiankong-pc neutron-server[3164]: ERROR neutron.pecan_wsgi.hooks.translation if 'tenant_id' in attributes and 'project_id' not in attributes: Feb 22 10:40:16 lingxiankong-pc neutron-server[3164]: ERROR neutron.pecan_wsgi.hooks.translation TypeError: argument of type 'NoneType' is not iterable Feb 22 10:40:16 lingxiankong-pc neutron-server[3164]: ERROR neutron.pecan_wsgi.hooks.translation Feb 22 10:40:16 lingxiankong-pc neutron-server[3164]: INFO neutron.wsgi [None req-acd651ef-cef2-4b36-9d46-7b81e4d18793 demo demo] 192.168.206.8 "GET /v2.0/security_groups/c63a2657-300b-4ab0-9201-65b49e3ba815/tags HTTP/1.1" status: 500 len: 368 time: 0.4839101 ``` ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1817219 Title: Failed to list security group tags Status in neutron: New Bug description: branch: master environment: devstack error log: http://paste.openstack.org/show/745657/ ``` Feb 22 10:40:15 lingxiankong-pc neutron-server[3164]: DEBUG neutron.wsgi [-] (3195) accepted ('192.168.206.8', 41556) {{(pid=3195) server /usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py:956}} Feb 2
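The traceback above boils down to a membership test on a None attribute map: when the tags controller supplies no attribute info, `'tenant_id' in attributes` raises `TypeError: argument of type 'NoneType' is not iterable`. A simplified, self-contained sketch of the failure and of one possible defensive shape (an assumption, not the actual neutron-lib code or the eventual fix):

```
def populate_project_info(attributes):
    # With attributes=None the membership test below raises
    # "TypeError: argument of type 'NoneType' is not iterable",
    # which is the error in the log above.
    if 'tenant_id' in attributes and 'project_id' not in attributes:
        attributes['project_id'] = attributes['tenant_id']
    return attributes


def populate_project_info_guarded(attributes):
    # One possible defensive shape (an assumption, not the merged fix):
    # treat a missing attribute map as empty.
    attributes = attributes or {}
    if 'tenant_id' in attributes and 'project_id' not in attributes:
        attributes['project_id'] = attributes['tenant_id']
    return attributes


try:
    populate_project_info(None)
except TypeError as exc:
    print("reproduced:", exc)
print(populate_project_info_guarded(None))
```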
[Yahoo-eng-team] [Bug 1816771] Re: Creation of router fails in devstack
Reviewed: https://review.openstack.org/638380 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=d802fad8a92625005597ebda4931b0bbe13418e9 Submitter: Zuul Branch:master commit d802fad8a92625005597ebda4931b0bbe13418e9 Author: Slawek Kaplonski Date: Thu Feb 21 11:16:03 2019 +0100 Avoid loading same service plugin more than once In patch [1] requirement that only each service plugin can be loaded only once was removed. Unfortunatelly it is not possible that same service plugin will be instantiate more than once because it may reqister some callbacks or other things which can't be duplicated. So this patch adds mechanism which will ensure that each service plugin class is instantiate only once and reused if necessary. [1] https://review.openstack.org/#/c/626561/ Closes-Bug: #1816771 Change-Id: Ie6e6cc1bbbe50ff7cfad4e8033e48711569ea020 ** Changed in: neutron Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1816771 Title: Creation of router fails in devstack Status in neutron: Fix Released Bug description: Router creation failed in http://logs.openstack.org/36/638136/1/check /openstacksdk-functional-devstack/1cdc712/job- output.txt.gz#_2019-02-20_12_04_53_592956 In neutron-server log there is error like: http://logs.openstack.org/36/638136/1/check/openstacksdk-functional- devstack/1cdc712/controller/logs/screen-q-svc.txt.gz#_Feb_20_12_04_53_392346 It is possible that this could be introduced somehow by https://review.openstack.org/#/c/635671/ but it is not sure for now. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1816771/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
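A rough sketch of the "instantiate each service plugin class only once and reuse it" idea described in the commit message; the cache below is illustrative only, not the neutron implementation:

```
_plugin_instances = {}


def get_service_plugin(plugin_class):
    """Return a shared instance of plugin_class, creating it on first use."""
    if plugin_class not in _plugin_instances:
        # First load: the plugin may register callbacks or RPC endpoints
        # here, which is why instantiating it a second time must be avoided.
        _plugin_instances[plugin_class] = plugin_class()
    return _plugin_instances[plugin_class]


class RouterServicePlugin(object):
    """Hypothetical service plugin used only for the demo."""


assert get_service_plugin(RouterServicePlugin) is get_service_plugin(RouterServicePlugin)
```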
[Yahoo-eng-team] [Bug 1817169] Re: Volume Groups Under Project Panel not working
** Changed in: horizon Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1817169 Title: Volume Groups Under Project Panel not working Status in OpenStack Dashboard (Horizon): Invalid Bug description: Volume Groups under Project panel does not show any data. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1817169/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1817169] [NEW] Volume Groups Under Project Panel not working
Public bug reported: Volume Groups under Project panel does not show any data. ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1817169 Title: Volume Groups Under Project Panel not working Status in OpenStack Dashboard (Horizon): New Bug description: Volume Groups under Project panel does not show any data. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1817169/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1796074] Re: interface-attach to instance with a large number of attached interfaces fails with RequestURITooLong from neutron
** Changed in: nova/ocata Importance: Undecided => Medium ** Changed in: nova/queens Importance: Undecided => Medium ** Changed in: nova/rocky Importance: Undecided => Medium ** Changed in: nova/ocata Status: New => Confirmed ** Changed in: nova Importance: Undecided => Medium ** Changed in: nova/pike Importance: Undecided => Medium ** No longer affects: python-novaclient -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1796074 Title: interface-attach to instance with a large number of attached interfaces fails with RequestURITooLong from neutron Status in OpenStack Compute (nova): Fix Released Status in OpenStack Compute (nova) ocata series: Confirmed Status in OpenStack Compute (nova) pike series: In Progress Status in OpenStack Compute (nova) queens series: Fix Committed Status in OpenStack Compute (nova) rocky series: Fix Committed Bug description: Hello! # nova-manage --version 14.0.0 Command which produce error: nova interface-attach --net-id I got Unexpected API Error when i try nova interface-attach to instance with attached 250 network interface. And after execute nova interface-attach i can't manipulate network interface, i can't see interface inside instance, only delete port. DEBUG (session:727) GET call to compute for http://ip:8774/v2.1/b060ad44b2cd4592bdfc50948256ab02/servers/506260c2-343b-4f56-9409-5c4b5ea9d269 used request id req-34fe7aae-75ed-4a90-833d-86ef8cd3d2a4 DEBUG (client:85) GET call to compute for http://ip:8774/v2.1/b060ad44b2cd4592bdfc50948256ab02/servers/506260c2-343b-4f56-9409-5c4b5ea9d269 used request id req-34fe7aae-75ed-4a90-833d-86ef8cd3d2a4 DEBUG (session:375) REQ: curl -g -i -X POST http://ip:8774/v2.1/b060ad44b2cd4592bdfc50948256ab02/servers/506260c2-343b-4f56-9409-5c4b5ea9d269/os-interface -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "OpenStack-API-Version: compute 2.37" -H "X-OpenStack-Nova-API-Version: 2.37" -H "X-Auth-Token: {SHA1}04925ba60ec47cac9d6e099b287f94ba49e99113" -H "Content-Type: application/json" -d '{"interfaceAttachment": {"net_id": "728b6584-8f52-4613-b799-b1bff4f42f53"}}' DEBUG (connectionpool:396) http://ip:8774 "POST /v2.1/b060ad44b2cd4592bdfc50948256ab02/servers/506260c2-343b-4f56-9409-5c4b5ea9d269/os-interface HTTP/1.1" 500 211 DEBUG (session:423) RESP: [500] Openstack-Api-Version: compute 2.37 X-Openstack-Nova-Api-Version: 2.37 Vary: OpenStack-API-Version, X-OpenStack-Nova-API-Version Content-Type: application/json; charset=UTF-8 Content-Length: 211 X-Compute-Request-Id: req-0725bd5b-f86e-4194-aa35-efe229413e90 Date: Thu, 04 Oct 2018 09:12:44 GMT RESP BODY: {"computeFault": {"message": "Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.\n", "code": 500}} DEBUG (session:727) POST call to compute for http://ip:8774/v2.1/b060ad44b2cd4592bdfc50948256ab02/servers/506260c2-343b-4f56-9409-5c4b5ea9d269/os-interface used request id req-0725bd5b-f86e-4194-aa35-efe229413e90 DEBUG (client:85) POST call to compute for http://ip:8774/v2.1/b060ad44b2cd4592bdfc50948256ab02/servers/506260c2-343b-4f56-9409-5c4b5ea9d269/os-interface used request id req-0725bd5b-f86e-4194-aa35-efe22413e90 DEBUG (shell:984) Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. 
(HTTP 500) (Request-ID: req-0725bd5b-f86e-4194-aa35-efe229413e90) Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/novaclient/shell.py", line 982, in main OpenStackComputeShell().main(argv) File "/usr/lib/python2.7/dist-packages/novaclient/shell.py", line 909, in main args.func(self.cs, args) File "/usr/lib/python2.7/dist-packages/novaclient/v2/shell.py", line 5047, in do_interface_attach res = server.interface_attach(args.port_id, args.net_id, args.fixed_ip) File "/usr/lib/python2.7/dist-packages/novaclient/v2/servers.py", line 552, in interface_attach return self.manager.interface_attach(self, port_id, net_id, fixed_ip) File "/usr/lib/python2.7/dist-packages/novaclient/v2/servers.py", line 1822, in interface_attach body, 'interfaceAttachment') File "/usr/lib/python2.7/dist-packages/novaclient/base.py", line 356, in _create resp, body = self.api.client.post(url, body=body) File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 294, in post return self.request(url, 'POST', **kwargs) File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 117, in request raise exceptions.from_response(resp, body, url, method) ClientException: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (
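The failure happens because attaching an interface to a server that already has roughly 250 ports leads to a neutron query whose URI carries every port ID, exceeding the URI length limit. One way to sidestep that, sketched below under the assumption of a simple `get_json` HTTP helper (hypothetical), is to chunk the ID filter across several GET requests; this only illustrates the idea, it is not nova's actual fix:

```
def chunked(values, size):
    for i in range(0, len(values), size):
        yield values[i:i + size]


def list_ports_by_id(get_json, port_ids, chunk_size=50):
    """get_json(path) is an assumed helper that performs a GET and returns a dict."""
    ports = []
    for chunk in chunked(list(port_ids), chunk_size):
        query = "&".join("id=%s" % pid for pid in chunk)
        ports.extend(get_json("/v2.0/ports?%s" % query).get("ports", []))
    return ports


def fake_get_json(path):
    print("GET", path[:60], "...")
    return {"ports": []}


list_ports_by_id(fake_get_json, ["port-%d" % n for n in range(250)])
```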
[Yahoo-eng-team] [Bug 1815629] Re: api and rpc worker defaults are problematic
Reviewed: https://review.openstack.org/636363 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=7e09b25b964dde82caf5f5159d25810b6f8ebd3c Submitter: Zuul Branch:master commit 7e09b25b964dde82caf5f5159d25810b6f8ebd3c Author: Doug Wiegley Date: Tue Feb 12 08:47:19 2019 -0700 Modify api and rpc default number of workers - Limit number of api workers to roughly using half of system RAM. Spawning a bunch, just to have the OOM killer nuke them regularly is not useful. - Bump the rpc_workers default to half of the api_workers. A default of 1 falls behind on any reasonably sized node. Change-Id: I8b84a359f83133014b3d4414aafc10e6b7c6a876 Closes-bug: #1815629 ** Changed in: neutron Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1815629 Title: api and rpc worker defaults are problematic Status in neutron: Fix Released Bug description: We default the number of api workers to the number of cores. At approximately 2GB per neutron-server, sometimes that's more RAM than is available, and the OOM killer comes out. We default the number of rpc workers to 1, which seems to fall behind on all but the smallest deployments. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1815629/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
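A back-of-the-envelope sketch of the sizing rule the commit message describes: cap API workers so they fit in roughly half of system RAM at about 2 GB per worker, and default RPC workers to half of that. The arithmetic below is illustrative, not the neutron code:

```
import multiprocessing


def default_workers(total_ram_gb, ram_per_worker_gb=2.0):
    cpu_workers = multiprocessing.cpu_count()
    # Cap at the number of ~2 GB workers that fit in half the system RAM.
    ram_workers = int((total_ram_gb / 2.0) // ram_per_worker_gb)
    api_workers = max(1, min(cpu_workers, ram_workers))
    rpc_workers = max(1, api_workers // 2)
    return api_workers, rpc_workers


print(default_workers(total_ram_gb=8))    # small node: RAM is the limit
print(default_workers(total_ram_gb=256))  # large node: CPU count is the limit
```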
[Yahoo-eng-team] [Bug 1746561] Re: Make host_manager use scatter-gather and ignore down cells
** Also affects: nova/queens Importance: Undecided Status: New ** Changed in: nova/queens Status: New => In Progress ** Changed in: nova/queens Importance: Undecided => Medium ** Changed in: nova/queens Assignee: (unassigned) => Elod Illes (elod-illes) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1746561 Title: Make host_manager use scatter-gather and ignore down cells Status in OpenStack Compute (nova): Fix Released Status in OpenStack Compute (nova) pike series: Confirmed Status in OpenStack Compute (nova) queens series: In Progress Bug description: Currently the "_get_computes_for_cells" function in the host_manager of scheduler runs sequentially and this affects the performance in case of large deployments (running a lot of cells) : https://github.com/openstack/nova/blob/stable/pike/nova/scheduler/host_manager.py#L601 So it would be nice to use the scatter_gather_all_cells function to do this operation in parallel. Also apart from the performance scaling point of view, in case connection to a particular cell fails, it would be nice to have sentinels returned which is done by the scatter_gather_all_cells. This helps when a cell is down. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1746561/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
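Conceptually, scatter-gather means querying every cell in parallel and substituting a sentinel for any cell that errors out or times out, so a down cell no longer blocks scheduling. A simplified sketch using a plain thread pool (nova's real helper is scatter_gather_all_cells; this is not its implementation):

```
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

CELL_TIMEOUT = object()  # sentinel: cell did not answer in time
CELL_ERROR = object()    # sentinel: cell raised an exception


def scatter_gather(cells, fn, timeout=30):
    """Run fn(cell_name) for every cell in parallel; never raise."""
    results = {}
    with ThreadPoolExecutor(max_workers=len(cells) or 1) as pool:
        futures = {name: pool.submit(fn, name) for name in cells}
        for name, fut in futures.items():
            try:
                results[name] = fut.result(timeout=timeout)
            except FutureTimeout:
                results[name] = CELL_TIMEOUT
            except Exception:
                results[name] = CELL_ERROR
    return results


print(scatter_gather(["cell1", "cell2"], lambda name: {"cell": name, "hosts": []}, timeout=5))
```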
[Yahoo-eng-team] [Bug 1812177] Re: BuildRequest ovo incorrectly assumes that IncompatibleObjectVersion exception has an objver field
** Also affects: nova/ocata Importance: Undecided Status: New ** Also affects: nova/pike Importance: Undecided Status: New ** Also affects: nova/queens Importance: Undecided Status: New ** Also affects: nova/rocky Importance: Undecided Status: New ** Changed in: nova/rocky Status: New => Fix Released ** Changed in: nova/rocky Importance: Undecided => Low ** Changed in: nova/rocky Assignee: (unassigned) => Elod Illes (elod-illes) ** Changed in: nova/queens Status: New => In Progress ** Changed in: nova/pike Status: New => In Progress ** Changed in: nova/pike Status: In Progress => Confirmed ** Changed in: nova/ocata Status: New => Confirmed ** Changed in: nova/ocata Importance: Undecided => Low ** Changed in: nova/pike Importance: Undecided => Low ** Changed in: nova/queens Importance: Undecided => Low ** Changed in: nova/queens Assignee: (unassigned) => Elod Illes (elod-illes) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1812177 Title: BuildRequest ovo incorrectly assumes that IncompatibleObjectVersion exception has an objver field Status in OpenStack Compute (nova): Fix Released Status in OpenStack Compute (nova) ocata series: Confirmed Status in OpenStack Compute (nova) pike series: Confirmed Status in OpenStack Compute (nova) queens series: In Progress Status in OpenStack Compute (nova) rocky series: Fix Released Bug description: BuildRequest ovo incorrectly assumes that IncompatibleObjectVersion exception has an objver field while handling such exception. This leads to an AttributeError during exception handling in [1]. [1] https://github.com/openstack/nova/blob/e3b517b6fd7c470d5cee420fce1456b98495d310/nova/objects/build_request.py#L83 To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1812177/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
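A simplified, self-contained illustration of the failure mode: an exception handler that assumes an attribute the exception may not carry turns one error into an AttributeError. The classes below are stand-ins, not the real oslo.versionedobjects exceptions, and the getattr fallback is just one possible defensive shape:

```
class IncompatibleObjectVersion(Exception):
    """Stand-in for the real oslo.versionedobjects exception."""

    def __init__(self, message, objver=None):
        super(IncompatibleObjectVersion, self).__init__(message)
        if objver is not None:
            self.objver = objver  # only set when a version was supplied


def handle_buggy(exc):
    # Assumes exc.objver always exists -> raises AttributeError while
    # handling the original exception, as described in the report.
    return "incompatible version %s" % exc.objver


def handle_defensive(exc):
    # One safe shape (illustrative, not necessarily the merged fix).
    return "incompatible version %s" % getattr(exc, "objver", "<unknown>")


exc = IncompatibleObjectVersion("cannot backport object")
try:
    handle_buggy(exc)
except AttributeError as err:
    print("secondary failure:", err)
print(handle_defensive(exc))
```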
[Yahoo-eng-team] [Bug 1817119] [NEW] [rfe] add rbac for security groups
Public bug reported: This change started as a small performance fix, allowing hundreds of tenants to share one 3000+ rule group, instead of having hundreds of them. Adds "security_group" as a supported RBAC type: Neutron-lib: https://review.openstack.org/635313 Neutron: https://review.openstack.org/635311 Tempest tests: https://review.openstack.org/635312 Client: https://review.openstack.org/636760 ** Affects: neutron Importance: Undecided Assignee: Doug Wiegley (dougwig) Status: New ** Tags: rfe -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1817119 Title: [rfe] add rbac for security groups Status in neutron: New Bug description: This change started as a small performance fix, allowing hundreds of tenants to share one 3000+ rule group, instead of having hundreds of them. Adds "security_group" as a supported RBAC type: Neutron-lib: https://review.openstack.org/635313 Neutron: https://review.openstack.org/635311 Tempest tests: https://review.openstack.org/635312 Client: https://review.openstack.org/636760 To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1817119/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1810977] Re: Oversubscription broken for instances with NUMA topologies
** Also affects: nova/rocky Importance: Undecided Status: New ** Changed in: nova/rocky Status: New => In Progress ** Changed in: nova/rocky Importance: Undecided => Medium ** Changed in: nova/rocky Assignee: (unassigned) => Stephen Finucane (stephenfinucane) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1810977 Title: Oversubscription broken for instances with NUMA topologies Status in OpenStack Compute (nova): Fix Released Status in OpenStack Compute (nova) rocky series: In Progress Bug description: As described in [1], the fix to [2] appears to have inadvertently broken oversubscription of memory for instances with a NUMA topology but no hugepages. Steps to reproduce: 1. Create a flavor that will consume > 50% available memory for your host(s) and specify an explicit NUMA topology. For example, on my all- in-one deployment where the host has 32GB RAM, we will request a 20GB instance: $ openstack flavor create --vcpu 2 --disk 0 --ram 20480 test.numa $ openstack flavor set test.numa --property hw:numa_nodes=2 2. Boot an instance using this flavor: $ openstack server create --flavor test.numa --image cirros-0.3.6-x86_64-disk --wait test 3. Boot another instance using this flavor: $ openstack server create --flavor test.numa --image cirros-0.3.6-x86_64-disk --wait test2 # Expected result: The second instance should boot. # Actual result: The second instance fails to boot. We see the following error message in the logs. nova-scheduler[18295]: DEBUG nova.virt.hardware [None req-f7a6594b-8d25-424c-9c6e-8522f66ffd22 demo admin] No specific pagesize requested for instance, selected pagesize: 4 {{(pid=18318) _numa_fit_instance_cell /opt/stack/nova/nova/virt/hardware.py:1045}} nova-scheduler[18295]: DEBUG nova.virt.hardware [None req-f7a6594b-8d25-424c-9c6e-8522f66ffd22 demo admin] Not enough available memory to schedule instance with pagesize 4. Required: 10240, available: 5676, total: 15916. {{(pid=18318) _numa_fit_instance_cell /opt/stack/nova/nova/virt/hardware.py:1055}} If we revert the patch that addressed the bug [3] then we revert to the correct behaviour and the instance boots. With this though, we obviously lose whatever benefits that change gave us. [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001459.html [2] https://bugs.launchpad.net/nova/+bug/1734204 [3] https://review.openstack.org/#/c/532168 To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1810977/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
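As a quick arithmetic illustration of the report (numbers taken from the scheduler log lines above): the per-pagesize check compares the requested memory against memory that is actually free, whereas ordinary scheduling would allow oversubscription via ram_allocation_ratio (1.5 by default). The sketch below only restates that arithmetic; it is not the code involved:

```
# Numbers from the scheduler log above (one NUMA cell of the 20 GB flavor).
total_mb = 15916            # total memory reported for the host NUMA node
available_mb = 5676         # what is left after the first instance
used_mb = total_mb - available_mb
required_mb = 10240         # half of the 20 GB, two-node flavor
ram_allocation_ratio = 1.5  # nova's default oversubscription factor

strict_fit = required_mb <= available_mb                                        # False: rejected
oversubscribed_fit = required_mb <= total_mb * ram_allocation_ratio - used_mb   # True: expected

print(strict_fit, oversubscribed_fit)
```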
[Yahoo-eng-team] [Bug 1794773] Re: Unnecessary warning when ironic node properties are not set
The comment in the bug report description linking to [1] https://review.openstack.org/#/c/565841/ was merged in Stein, not Rocky. ** Also affects: nova/rocky Importance: Undecided Status: New ** Changed in: nova/rocky Status: New => In Progress ** Changed in: nova/rocky Assignee: (unassigned) => Lee Yarwood (lyarwood) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1794773 Title: Unnecessary warning when ironic node properties are not set Status in OpenStack Compute (nova): Fix Released Status in OpenStack Compute (nova) rocky series: In Progress Bug description: If an ironic node is registered without either of the 'memory_mb' or 'cpus' properties, the following warning messages are seen in the nova-compute logs: Warning, memory usage is 0 for on baremetal node . Warning, number of cpus is 0 for on baremetal node . As of the Rocky release [1], the standard compute resources (VCPU, MEMORY_MB, DISK_GB) are not registered with placement for ironic nodes. They were not required to be set since the Pike release, but still this warning is emitted. [1] https://review.openstack.org/#/c/565841/ To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1794773/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1817082] [NEW] [RFE] Please add encrypted_data_bag_secret to client.rb.tmpl in cc_chef
Public bug reported: This is a request to add support for the client configuration option "encrypted_data_bag_secret" in `chef_client.rb.tmpl` and the `chef` configuration block. Use Case: Enable cloud-init to manage Chef deployments where encrypted data bags are in use. The path to the secrets can be configured with Cloud init, while the secrets files themselves can be supplied via an external facility (e.g., Barbican, Vault). Example: # cloud-init chef: install_type: "packages" server_url: https://api.opscode.com/organizations/myorg environment: dev validation_name: dev-validator validation_cert: dev-validator.pem run_list: role[db] encrypted_data_bag_secret: /etc/chef/encrypted_data_bag_secret => # /etc/chef/client.rb log_level :info log_location "/var/log/chef/client.log" ssl_verify_mode :verify_none validation_client_name "dev-validator" validation_key "/etc/chef/validation.pem" client_key "/etc/chef/client.pem" chef_server_url "https://api.opscode.com/organizations/myorg" environment "dev" node_name "5a2f89c3-da3a-4c83-85d8-cbc8fa63f429" json_attribs "/etc/chef/firstboot.json" file_cache_path "/var/cache/chef" file_backup_path "/var/backups/chef" pid_file "/var/run/chef/client.pid" Chef::Log::Formatter.show_time = true encrypted_data_bag_secret "encrypted_data_bag_secret" Thanks, Eric ** Affects: cloud-init Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1817082 Title: [RFE] Please add encrypted_data_bag_secret to client.rb.tmpl in cc_chef Status in cloud-init: New Bug description: This is a request to add support for the client configuration option "encrypted_data_bag_secret" in `chef_client.rb.tmpl` and the `chef` configuration block. Use Case: Enable cloud-init to manage Chef deployments where encrypted data bags are in use. The path to the secrets can be configured with Cloud init, while the secrets files themselves can be supplied via an external facility (e.g., Barbican, Vault). Example: # cloud-init chef: install_type: "packages" server_url: https://api.opscode.com/organizations/myorg environment: dev validation_name: dev-validator validation_cert: dev-validator.pem run_list: role[db] encrypted_data_bag_secret: /etc/chef/encrypted_data_bag_secret => # /etc/chef/client.rb log_level :info log_location "/var/log/chef/client.log" ssl_verify_mode :verify_none validation_client_name "dev-validator" validation_key "/etc/chef/validation.pem" client_key "/etc/chef/client.pem" chef_server_url "https://api.opscode.com/organizations/myorg" environment "dev" node_name "5a2f89c3-da3a-4c83-85d8-cbc8fa63f429" json_attribs "/etc/chef/firstboot.json" file_cache_path "/var/cache/chef" file_backup_path "/var/backups/chef" pid_file "/var/run/chef/client.pid" Chef::Log::Formatter.show_time = true encrypted_data_bag_secret "encrypted_data_bag_secret" Thanks, Eric To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1817082/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
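A hedged sketch of the requested behaviour, not cloud-init's actual cc_chef module or template: emit an encrypted_data_bag_secret line in client.rb only when the user's chef configuration block provides a path.

```
def render_secret_line(chef_cfg):
    """Return the client.rb line for the secret path, or '' when unset."""
    secret_path = chef_cfg.get("encrypted_data_bag_secret")
    if not secret_path:
        return ""  # option absent: client.rb stays as it is today
    # client.rb expects a Ruby string literal pointing at the secret file.
    return 'encrypted_data_bag_secret "%s"\n' % secret_path


print(render_secret_line(
    {"encrypted_data_bag_secret": "/etc/chef/encrypted_data_bag_secret"}), end="")
print(repr(render_secret_line({})))
```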
[Yahoo-eng-team] [Bug 1817035] Re: eth0 lost carrier / down after restart and IP change on older EC2-classic instance
Marking this cloud-init task as Invalid in favor of tracking out SRU to each ubuntu series in LP: #1802073 ** Changed in: cloud-init Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1817035 Title: eth0 lost carrier / down after restart and IP change on older EC2-classic instance Status in cloud-images: New Status in cloud-init: Invalid Bug description: I'm experiencing a consistent issue where older EC2 instance types (e.g. c3.large) launched in EC2-Classic from the bionic AMI lose network connection if they're stopped and subsequently restarted. They work fine on the first boot, but when restarted they time out both for things like SSH and also for EC2's status checks. They also appear to have no outbound connection e.g. to the metadata service etc. Rebooting does not resolve the issue, nor does stopping and starting again. On one occasion when testing, I resumed the instance very quickly and Amazon allocated it the same IP address as before - the instance booted with no problems. Normally however the instance gets a new IP address - so it appears this may be related. This is happening consistently with ami-08d658f84a6d84a80 (ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-20190212.1) and I've also reproduced with ami-0c21eb76a5574aa2f (ubuntu/images /hvm-ssd/ubuntu-bionic-18.04-amd64-server-20190210) It does not happen if launching a newer instance type into EC2-VPC. Steps to reproduce: * Launch ami-08d658f84a6d84a80 on a c3.large in EC2-Classic, with a securing group allowing port 22 from anywhere and other configuration all as AWS defaults * Wait for instance to boot, SSH to instance and observe all working normally. Wait for EC2 status checks to initialise and observe they pass. * Stop instance * Wait a minute or two - if restarted very rapidly AWS may reallocate the previous IP * Start instance and observe it has been allocated a new IP address * Wait a few minutes * Attempt to SSH to the instance and observe the connection times out * Observe that the EC2 instance reachability status check is failing * Use the EC2 console to take an instance screenshot and observe that the console is showing the login prompt By attaching the root volume from the broken instance to a new instance, I was able to capture and compare the syslog for the two boots. Both appear broadly similar at first, DHCP works as expected over eth0. In both boots, systemd-networkd then reports "eth0: lost carrier". On the successful boot, systemd-networkd almost immediately afterwards then reports "eth0: gained carrier" and "eth0: IPv6 successfully enabled". However on the failed boot these entries never appear. Shortly afterwards cloud-init runs and on the success boot shows eth0 up with both IPv4 and IPv6 addresses, and valid routing tables. On the failed boot it shows eth0 down, no IPv4 routing table and an empty IPv6 routing table. Also later on in the log from the failed boot amazon-ssm-agent.amazon- ssm-agent reports that it cannot contact the metadata service (dial tcp 169.254.169.254:80: connect: network is unreachable). One thing I did notice is that the images don't appear to have been configured to disable Predictable Network Interface Names. I've tried changing that but it didn't resolve the issue. 
On reflection I think that's perhaps unrelated, since presumably the interface names don't change between a stop and start of the same instance on the same EC2 instance type, and the first boot works happily. Also the logs are all consistently showing eth0 rather than one of the newer interface names. To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-images/+bug/1817035/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1817035] Re: eth0 lost carrier / down after restart and IP change on older EC2-classic instance
Hello Andrew, thank you for reporting this bug. I have added the cloud- init project as the issue is specific to that code and they can look into your issue. ** Also affects: cloud-init Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1817035 Title: eth0 lost carrier / down after restart and IP change on older EC2-classic instance Status in cloud-images: New Status in cloud-init: New Bug description: I'm experiencing a consistent issue where older EC2 instance types (e.g. c3.large) launched in EC2-Classic from the bionic AMI lose network connection if they're stopped and subsequently restarted. They work fine on the first boot, but when restarted they time out both for things like SSH and also for EC2's status checks. They also appear to have no outbound connection e.g. to the metadata service etc. Rebooting does not resolve the issue, nor does stopping and starting again. On one occasion when testing, I resumed the instance very quickly and Amazon allocated it the same IP address as before - the instance booted with no problems. Normally however the instance gets a new IP address - so it appears this may be related. This is happening consistently with ami-08d658f84a6d84a80 (ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-20190212.1) and I've also reproduced with ami-0c21eb76a5574aa2f (ubuntu/images /hvm-ssd/ubuntu-bionic-18.04-amd64-server-20190210) It does not happen if launching a newer instance type into EC2-VPC. Steps to reproduce: * Launch ami-08d658f84a6d84a80 on a c3.large in EC2-Classic, with a securing group allowing port 22 from anywhere and other configuration all as AWS defaults * Wait for instance to boot, SSH to instance and observe all working normally. Wait for EC2 status checks to initialise and observe they pass. * Stop instance * Wait a minute or two - if restarted very rapidly AWS may reallocate the previous IP * Start instance and observe it has been allocated a new IP address * Wait a few minutes * Attempt to SSH to the instance and observe the connection times out * Observe that the EC2 instance reachability status check is failing * Use the EC2 console to take an instance screenshot and observe that the console is showing the login prompt By attaching the root volume from the broken instance to a new instance, I was able to capture and compare the syslog for the two boots. Both appear broadly similar at first, DHCP works as expected over eth0. In both boots, systemd-networkd then reports "eth0: lost carrier". On the successful boot, systemd-networkd almost immediately afterwards then reports "eth0: gained carrier" and "eth0: IPv6 successfully enabled". However on the failed boot these entries never appear. Shortly afterwards cloud-init runs and on the success boot shows eth0 up with both IPv4 and IPv6 addresses, and valid routing tables. On the failed boot it shows eth0 down, no IPv4 routing table and an empty IPv6 routing table. Also later on in the log from the failed boot amazon-ssm-agent.amazon- ssm-agent reports that it cannot contact the metadata service (dial tcp 169.254.169.254:80: connect: network is unreachable). One thing I did notice is that the images don't appear to have been configured to disable Predictable Network Interface Names. I've tried changing that but it didn't resolve the issue. 
On reflection I think that's perhaps unrelated, since presumably the interface names don't change between a stop and start of the same instance on the same EC2 instance type, and the first boot works happily. Also the logs are all consistently showing eth0 rather than one of the newer interface names. To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-images/+bug/1817035/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1816955] Re: [Fwaasv1][Fwaasv2]can update a firewall rule with icmp protocol when source/destination port is specified which should not be allowed
This is not a CLI bug. This should be fixed in neutron-fwaas. ** Project changed: python-neutronclient => neutron ** Tags added: fwaas ** Changed in: neutron Importance: Undecided => Medium ** Changed in: neutron Status: New => Confirmed ** Changed in: neutron Importance: Medium => Low -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1816955 Title: [Fwaasv1][Fwaasv2]can update a firewall rule with icmp protocol when source/destination port is specified which should not be allowed Status in neutron: Confirmed Bug description: firewall group rule with protocol: icmp, source/destination port, and action any it throws the following error, nicira@utu1604template:/opt/stack/neutron-fwaas/neutron_fwaas/db/firewall/v2$ openstack firewall group rule create --protocol icmp --source-port 25 --name xy Source, destination port are not allowed when protocol is set to ICMP. Neutron server returns request_ids: ['req-09cc6a16-7215-45ce-89c8-3226bfd4ca64'] but when user created a firewall group rule with protocol: tcp and --source-port:23 nnicira@utu1604template:~/devstack$ openstack firewall group rule create --protocol tcp --source-port 23 --name bg-rl ++--+ | Field | Value| ++--+ | Action | deny | | Description| | | Destination IP Address | None | | Destination Port | None | | Enabled| True | | ID | 79f8c59e-38bc-4b45-afff-fe963df4080d | | IP Version | 4| | Name | bg-rl| | Project| 7e5ec032563948eeb3f443c9ca258f71 | | Protocol | tcp | | Shared | False| | Source IP Address | None | | Source Port| 23 | | firewall_policy_id | None | | project_id | 7e5ec032563948eeb3f443c9ca258f71 | ++--+ and updated it with protocol icmp it allows. nicira@utu1604template:~/devstack$ openstack firewall group rule set --protocol icmp bg-rl nicira@utu1604template:~/devstack$ openstack firewall group rule show bg-rl ++--+ | Field | Value| ++--+ | Action | deny | | Description| | | Destination IP Address | None | | Destination Port | None | | Enabled| True | | ID | 79f8c59e-38bc-4b45-afff-fe963df4080d | | IP Version | 4| | Name | bg-rl| | Project| 7e5ec032563948eeb3f443c9ca258f71 | | Protocol | icmp | | Shared | False| | Source IP Address | None | | Source Port| 23 | | firewall_policy_id | None | | project_id | 7e5ec032563948eeb3f443c9ca258f71 | ++--+ when icmp + port is not allowed this should be validated while updating rule. There should be a validation needed while updating firewall rules to check if port is specified and the protocol is icmp. The traces are here, ^[[00;36mINFO neutron.wsgi [^[[01;36mNone req-86f01b1f-f413-4aa4-82d2-74d03ec57e85 ^[[00;36madmin admin^[[00;36m] ^[[01;35m^[[00;36m10.144.139.12 "GET /v2.0/fwaas/firewall_rules?name=bg-rl HTTP/1.1" status: 200 len: 624 time: 0.0692658^[[00m^[[00m ^[[00;32mDEBUG neutron.api.v2.base [^[[01;36mNone req-b5132d41-3e1e-47b0-8f68-fbb7cb44d578 ^[[00;36madmin admin^[[00;32m] ^[[01;35m^[[00;32mRequest body: {u'firewall_rule': {u'protocol': u'icmp'}}^[[00m ^[[00;33m{{(pid=28763) prepare_request_body /opt/stack/neutron/neutron/api/v2/base.py:716}}^[[00m^[[00m ^[[00;32mDEBUG neutron_fwaas.services.firewall.fwaas_plugin_v2 [^[[01;
[Yahoo-eng-team] [Bug 1816955] [NEW] [Fwaasv1][Fwaasv2]can update a firewall rule with icmp protocol when source/destination port is specified which should not be allowed
You have been subscribed to a public bug: firewall group rule with protocol: icmp, source/destination port, and action any it throws the following error, nicira@utu1604template:/opt/stack/neutron-fwaas/neutron_fwaas/db/firewall/v2$ openstack firewall group rule create --protocol icmp --source-port 25 --name xy Source, destination port are not allowed when protocol is set to ICMP. Neutron server returns request_ids: ['req-09cc6a16-7215-45ce-89c8-3226bfd4ca64'] but when user created a firewall group rule with protocol: tcp and --source-port:23 nnicira@utu1604template:~/devstack$ openstack firewall group rule create --protocol tcp --source-port 23 --name bg-rl ++--+ | Field | Value| ++--+ | Action | deny | | Description| | | Destination IP Address | None | | Destination Port | None | | Enabled| True | | ID | 79f8c59e-38bc-4b45-afff-fe963df4080d | | IP Version | 4| | Name | bg-rl| | Project| 7e5ec032563948eeb3f443c9ca258f71 | | Protocol | tcp | | Shared | False| | Source IP Address | None | | Source Port| 23 | | firewall_policy_id | None | | project_id | 7e5ec032563948eeb3f443c9ca258f71 | ++--+ and updated it with protocol icmp it allows. nicira@utu1604template:~/devstack$ openstack firewall group rule set --protocol icmp bg-rl nicira@utu1604template:~/devstack$ openstack firewall group rule show bg-rl ++--+ | Field | Value| ++--+ | Action | deny | | Description| | | Destination IP Address | None | | Destination Port | None | | Enabled| True | | ID | 79f8c59e-38bc-4b45-afff-fe963df4080d | | IP Version | 4| | Name | bg-rl| | Project| 7e5ec032563948eeb3f443c9ca258f71 | | Protocol | icmp | | Shared | False| | Source IP Address | None | | Source Port| 23 | | firewall_policy_id | None | | project_id | 7e5ec032563948eeb3f443c9ca258f71 | ++--+ when icmp + port is not allowed this should be validated while updating rule. There should be a validation needed while updating firewall rules to check if port is specified and the protocol is icmp. The traces are here, ^[[00;36mINFO neutron.wsgi [^[[01;36mNone req-86f01b1f-f413-4aa4-82d2-74d03ec57e85 ^[[00;36madmin admin^[[00;36m] ^[[01;35m^[[00;36m10.144.139.12 "GET /v2.0/fwaas/firewall_rules?name=bg-rl HTTP/1.1" status: 200 len: 624 time: 0.0692658^[[00m^[[00m ^[[00;32mDEBUG neutron.api.v2.base [^[[01;36mNone req-b5132d41-3e1e-47b0-8f68-fbb7cb44d578 ^[[00;36madmin admin^[[00;32m] ^[[01;35m^[[00;32mRequest body: {u'firewall_rule': {u'protocol': u'icmp'}}^[[00m ^[[00;33m{{(pid=28763) prepare_request_body /opt/stack/neutron/neutron/api/v2/base.py:716}}^[[00m^[[00m ^[[00;32mDEBUG neutron_fwaas.services.firewall.fwaas_plugin_v2 [^[[01;36mNone req-b5132d41-3e1e-47b0-8f68-fbb7cb44d578 ^[[00;36madmin admin^[[00;32m] ^[[01;35m^[[00;32mneutron_fwaas.services.firewall.fwaas_plugin_v2.FirewallPluginV2 method get_firewall_rule called with arguments (, u'79f8c59e-38bc-4b45-afff-fe963df4080d') {'fields': ['firewall_policy_id', 'id', 'shared', 'project_id', 'tenant_id']}^[[00m ^[[00;33m{{(pid=28763) wrapper /usr/local/lib/python2.7/dist-packages/oslo_log/helpers.py:66}}^[[00m^[[00m ^[[00;32mDEBUG neutron_fwaas.services.firewall.fwaas_plugin_v2 [^[[01;36mNone req-b5132d41-3e1e-47b0-8f68-fbb7cb44d578 ^[[00;36madmin admin^[[00;32m] ^[[01;35m^[[00;32mneutron_fwaas.services.firewall.fwaas_plugin_v2.FirewallPluginV2 method update_firewall_rule called with arguments (,
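The reporter's point is that the icmp-plus-port constraint is enforced on create but not on update, where the offending port value can come from the stored rule rather than the update body. A sketch of the missing merge-then-validate step, with hypothetical names (this is not the neutron-fwaas implementation):

```
class FirewallRuleConflict(Exception):
    """Stand-in for the error the API should return."""


def validate_rule_update(current_rule, updates):
    # Merge the stored rule with the update body, then re-apply the same
    # cross-field check that rule creation already enforces.
    merged = dict(current_rule)
    merged.update(updates)
    if merged.get("protocol") == "icmp" and (
            merged.get("source_port") or merged.get("destination_port")):
        raise FirewallRuleConflict(
            "Source/destination port is not allowed when protocol is ICMP")
    return merged


rule = {"protocol": "tcp", "source_port": "23", "destination_port": None}
try:
    validate_rule_update(rule, {"protocol": "icmp"})
except FirewallRuleConflict as exc:
    print("rejected:", exc)
```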
[Yahoo-eng-team] [Bug 1816684] Re: Create an Application Credential of the name has exist, the error message is not clear
Reviewed: https://review.openstack.org/638062 Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=eb6a78f5176609309ed6ef8b30ee4cf87b36e924 Submitter: Zuul Branch:master commit eb6a78f5176609309ed6ef8b30ee4cf87b36e924 Author: pengyuesheng Date: Wed Feb 20 10:13:20 2019 +0800 Throws exceptions.Conflict() in the interface application_credential_create Change-Id: I285a588acf30b5e0858f98ff3d847a4049eb6b34 Closes-Bug: #1816684 ** Changed in: horizon Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1816684 Title: Create an Application Credential of the name has exist, the error message is not clear Status in OpenStack Dashboard (Horizon): Fix Released Bug description: Create an Application Credential of the name has exist, the error message is not clear To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1816684/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1817054] [NEW] Attribute Error just after creating a volume group
Public bug reported: Just after creating a volume group, an AttributeError occurs. http://paste.openstack.org/show/745454/ ** Affects: horizon Importance: Medium Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1817054 Title: Attribute Error just after creating a volume group Status in OpenStack Dashboard (Horizon): New Bug description: Just after creating a volume group, an AttributeError occurs. http://paste.openstack.org/show/745454/ To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1817054/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1817047] [NEW] 404 requested URL not found in Keystone User Guide
Public bug reported: This bug tracker is for errors with the documentation, use the following as a template and remove or add fields as you see fit. Convert [ ] into [x] to check boxes: - [x] This doc is inaccurate in this way: Link generates 404 error. at url: https://docs.openstack.org/keystone/rocky/admin/identity-concepts.html Chapter: Identity concepts section: Service management what: the link [OpenStack Administrator Guide] generates a 404 error. --- Release: on 2019-01-07 15:31 SHA: 718f4a9c4c55f5766895eff94eda66d420451235 Source: https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/admin/identity-concepts.rst URL: https://docs.openstack.org/keystone/rocky/admin/identity-concepts.html ** Affects: keystone Importance: Undecided Status: New ** Tags: doc -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1817047 Title: 404 requested URL not found in Keystone User Guide Status in OpenStack Identity (keystone): New Bug description: This bug tracker is for errors with the documentation, use the following as a template and remove or add fields as you see fit. Convert [ ] into [x] to check boxes: - [x] This doc is inaccurate in this way: Link generates 404 error. at url: https://docs.openstack.org/keystone/rocky/admin/identity-concepts.html Chapter: Identity concepts section: Service management what: the link [OpenStack Administrator Guide] generates a 404 error. --- Release: on 2019-01-07 15:31 SHA: 718f4a9c4c55f5766895eff94eda66d420451235 Source: https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/admin/identity-concepts.rst URL: https://docs.openstack.org/keystone/rocky/admin/identity-concepts.html To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1817047/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1817045] [NEW] Port forwarding API and scenario tests are missing
Public bug reported: We don't have any API and/or scenario tests for the port forwarding feature. We should add them and enable the port_forwarding service plugin in our jobs. The devstack plugin for port_forwarding was added in https://review.openstack.org/#/c/617045/ but it isn't currently used in any of our jobs. ** Affects: neutron Importance: Medium Status: Confirmed ** Tags: l3-ha -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1817045 Title: Port forwarding API and scenario tests are missing Status in neutron: Confirmed Bug description: We don't have any API and/or scenario tests for the port forwarding feature. We should add them and enable the port_forwarding service plugin in our jobs. The devstack plugin for port_forwarding was added in https://review.openstack.org/#/c/617045/ but it isn't currently used in any of our jobs. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1817045/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
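For reference, the surface such tests would exercise is the floating IP port forwarding API. Below is a rough smoke-test sketch against that API using plain HTTP instead of the tempest clients; the endpoint, token, UUIDs, and the internal address are placeholders, and the request fields follow the port forwarding API as best understood here, so treat it as a sketch rather than a finished test.

# Smoke-test sketch for the port forwarding API using plain HTTP.
# NEUTRON_URL, TOKEN, FIP_ID and PORT_ID are placeholders to fill in.
import requests

NEUTRON_URL = 'http://controller:9696/v2.0'  # assumed endpoint
TOKEN = '<keystone-token>'
FIP_ID = '<floating-ip-uuid>'
PORT_ID = '<internal-neutron-port-uuid>'

headers = {'X-Auth-Token': TOKEN, 'Content-Type': 'application/json'}
body = {'port_forwarding': {
    'protocol': 'tcp',
    'external_port': 2222,
    'internal_port': 22,
    'internal_port_id': PORT_ID,
    'internal_ip_address': '10.0.0.10',  # fixed IP of the internal port
}}

# Create a forwarding rule on the floating IP, then list to verify it exists.
resp = requests.post(
    '%s/floatingips/%s/port_forwardings' % (NEUTRON_URL, FIP_ID),
    json=body, headers=headers)
resp.raise_for_status()
pf_id = resp.json()['port_forwarding']['id']

listing = requests.get(
    '%s/floatingips/%s/port_forwardings' % (NEUTRON_URL, FIP_ID),
    headers=headers)
assert any(pf['id'] == pf_id for pf in listing.json()['port_forwardings'])

A proper neutron-tempest-plugin test would wrap the same calls in the project's own client and cleanup machinery; a scenario test would additionally verify that traffic actually reaches the internal port through the forwarded external port.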
[Yahoo-eng-team] [Bug 1815539] Re: Self-service policies for credential APIs are broken in stable/rocky
** Also affects: keystone/rocky Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1815539 Title: Self-service policies for credential APIs are broken in stable/rocky Status in OpenStack Identity (keystone): Triaged Status in OpenStack Identity (keystone) rocky series: New Bug description: Self-service policies for credential APIs are broken in stable/rocky. More specifically, Get/Update/Delete no longer work with the following policies. "identity:get_credential": "rule:admin_required or user_id:%(target.credential.user_id)s" "identity:update_credential": "rule:admin_required or user_id:%(target.credential.user_id)s" "identity:delete_credential": "rule:admin_required or user_id:%(target.credential.user_id)s" This used to work in Pike and Queens because we passed the entity to policy enforcement via get_member_from_driver. https://github.com/openstack/keystone/blob/stable/queens/keystone/credential/controllers.py#L36 However, in stable/rocky we no longer pass the entity as part of the target. https://github.com/openstack/keystone/blob/stable/rocky/keystone/api/credentials.py#L86 Therefore, any policy rule which has target.credential.* no longer works. Stein seems to be working again as the problem was fixed as part of https://bugs.launchpad.net/keystone/+bug/1788415. We'll need to fix stable/rocky by conveying the credential entity to the target again. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1815539/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
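The mechanics are easy to reproduce outside keystone. The sketch below uses plain oslo.policy rather than keystone's own enforcer wrapper; the flattened 'target.credential.user_id' key stands in for however keystone conveys the fetched entity internally, and the rule string is taken from the report above.

# Standalone illustration with plain oslo.policy; the flattened target key
# is an assumption standing in for keystone's own target handling.
from oslo_config import cfg
from oslo_policy import policy

enforcer = policy.Enforcer(cfg.CONF)
enforcer.set_rules(policy.Rules.from_dict({
    'admin_required': 'role:admin',
    'identity:get_credential':
        'rule:admin_required or user_id:%(target.credential.user_id)s',
}))

creds = {'user_id': 'alice', 'roles': ['member']}

# stable/rocky behaviour: the credential is not conveyed in the target,
# so the self-service branch of the rule can never match.
print(enforcer.enforce('identity:get_credential', {}, creds))      # False

# Pike/Queens (and the Stein fix): the fetched credential is part of the
# target, so the owner check succeeds for a plain project user.
target = {'target.credential.user_id': 'alice'}
print(enforcer.enforce('identity:get_credential', target, creds))  # True

Without the entity in the target, only rule:admin_required can ever pass, which is exactly the regression described above.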
[Yahoo-eng-team] [Bug 1804519] Related fix merged to keystone (master)
Reviewed: https://review.openstack.org/619616 Committed: https://git.openstack.org/cgit/openstack/keystone/commit/?id=e4e258a5dccd6188564d54305ad6e3d1805c17d8 Submitter: Zuul Branch:master commit e4e258a5dccd6188564d54305ad6e3d1805c17d8 Author: Lance Bragstad Date: Thu Nov 22 16:28:53 2018 + Add tests for project users interacting with mappings This commit introduces some tests that show how project users are expected to behave with the federated mappings API. A subsequent patch will clean up the now obsolete policies in the policy.v3cloudsample.json file. Change-Id: I4c8d8dd8474a8374d68458e3903c379ee44bc731 Related-Bug: 1804519 ** Changed in: keystone Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1804519 Title: Remove obsolete mapping policies from policy.v3cloudsample.json Status in OpenStack Identity (keystone): Fix Released Bug description: Once support for scope types landed in the mapping API policies, the policies in policy.v3cloudsample.json became obsolete [0][1]. We should add formal protection for the policies with enforce_scope = True in keystone.tests.unit.protection.v3 and remove the old policies from the v3 sample policy file. This will reduce confusion by having a true default policy for mappings. [0] https://review.openstack.org/#/c/525701/ [1] https://git.openstack.org/cgit/openstack/keystone/tree/etc/policy.v3cloudsample.json?id=fb73912d87b61c419a86c0a9415ebdcf1e186927#n210 To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1804519/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1817022] [NEW] RFE: set inactivity_probe and max_backoff for OVS bridge controller
Public bug reported: It would be useful to have the option to specify inactivity_probe and max_backoff for OVS bridge controllers in the neutron config. The OVS documentation says (https://github.com/openvswitch/ovs/blob/master/ovn/TODO.rst): The default 5 seconds inactivity_probe value is not sufficient and ovsdb-server drops the client IDL connections for openstack deployments when the neutron server is heavily loaded. This can indeed happen under heavy load in neutron-ovs-agent. This was discussed in http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2017-01-27.log.html#t2017-01-27T02:46:22, and the solution was to increase inactivity_probe. An alternative is to set these settings manually after each neutron-ovs-agent restart:

ovs-vsctl set Controller br-tun inactivity_probe=3
ovs-vsctl set Controller br-int inactivity_probe=3
ovs-vsctl set Controller br-ex inactivity_probe=3
ovs-vsctl set Controller br-tun max_backoff=5000
ovs-vsctl set Controller br-int max_backoff=5000
ovs-vsctl set Controller br-ex max_backoff=5000

** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1817022 Title: RFE: set inactivity_probe and max_backoff for OVS bridge controller Status in neutron: New Bug description: It would be useful to have the option to specify inactivity_probe and max_backoff for OVS bridge controllers in the neutron config. The OVS documentation says (https://github.com/openvswitch/ovs/blob/master/ovn/TODO.rst): The default 5 seconds inactivity_probe value is not sufficient and ovsdb-server drops the client IDL connections for openstack deployments when the neutron server is heavily loaded. This can indeed happen under heavy load in neutron-ovs-agent. This was discussed in http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2017-01-27.log.html#t2017-01-27T02:46:22, and the solution was to increase inactivity_probe. An alternative is to set these settings manually after each neutron-ovs-agent restart:

ovs-vsctl set Controller br-tun inactivity_probe=3
ovs-vsctl set Controller br-int inactivity_probe=3
ovs-vsctl set Controller br-ex inactivity_probe=3
ovs-vsctl set Controller br-tun max_backoff=5000
ovs-vsctl set Controller br-int max_backoff=5000
ovs-vsctl set Controller br-ex max_backoff=5000

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1817022/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
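Until such a config option exists, the workaround has to be reapplied after every agent restart. Here is a small sketch that automates the commands above; the bridge list is deployment-specific and the timeout values (in milliseconds) are assumptions to tune, not recommendations.

# Reapply OVS controller settings after a neutron-openvswitch-agent restart.
# Bridges and values are deployment-specific assumptions.
import subprocess

BRIDGES = ('br-int', 'br-tun', 'br-ex')
SETTINGS = {
    'inactivity_probe': '30000',  # ms; assumed value, tune per deployment
    'max_backoff': '5000',        # ms
}

for bridge in BRIDGES:
    for key, value in SETTINGS.items():
        # Same effect as: ovs-vsctl set Controller <bridge> <key>=<value>
        subprocess.check_call(
            ['ovs-vsctl', 'set', 'Controller', bridge,
             '%s=%s' % (key, value)])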