[Yahoo-eng-team] [Bug 1843634] Re: cloud-init misconfigure the network on SLES
OK, thanks for the logs. Could you re-attach those running via sudo (or as root)? The default user on SLES does not have permission to read the journal.

What I see so far looks like networking did not come up after cloud-init-local.service completed and wrote out a network config:

2019-09-11 18:00:15,242 - stages.py[INFO]: Applying network configuration from ds bringup=False: {'ethernets': {'eth0': {'set-name': 'eth0', 'match': {'macaddress': u'00:0d:3a:6e:6f:8f'}, 'dhcp4': True}}, 'version': 2}

This results in the following files being written:

% cat test_azure_sles/etc/sysconfig/network/ifcfg-eth0
# Created by cloud-init on instance boot automatically, do not edit.
#
BOOTPROTO=dhcp
DEVICE=eth0
HWADDR=00:0d:3a:6e:6f:8f
NM_CONTROLLED=no
ONBOOT=yes
STARTMODE=auto
TYPE=Ethernet
USERCTL=no

Upstream cloud-init on SLES does not generate/update /etc/resolv.conf, but in the logs this cloud-init does:

2019-09-11 18:00:15,246 - util.py[DEBUG]: Writing to /etc/sysconfig/network/ifcfg-eth0 - wb: [644] 191 bytes
2019-09-11 18:00:15,247 - util.py[DEBUG]: Reading from /etc/resolv.conf (quiet=False)
2019-09-11 18:00:15,247 - util.py[DEBUG]: Read 795 bytes from /etc/resolv.conf
2019-09-11 18:00:15,247 - util.py[DEBUG]: Writing to /etc/resolv.conf - wb: [644] 866 bytes

At first, I thought maybe it was missing this commit:

% git show b74ebca563a21332b29482c8029e7908f60225a4
commit b74ebca563a21332b29482c8029e7908f60225a4
Author: Robert Schweikert
Date:   Wed Jan 23 22:35:32 2019 +

    net/sysconfig: do not write a resolv.conf file with only the header.

    Writing the file with no dns information may prevent distro tools
    from writing a resolv.conf file with dns information obtained from
    a dhcp server.
diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
index ae41f7b..fd8e501 100644
--- a/cloudinit/net/sysconfig.py
+++ b/cloudinit/net/sysconfig.py
@@ -557,6 +557,8 @@ class Renderer(renderer.Renderer):
             content.add_nameserver(nameserver)
         for searchdomain in network_state.dns_searchdomains:
             content.add_search_domain(searchdomain)
+        if not str(content):
+            return None
         header = _make_header(';')
         content_str = str(content)
         if not content_str.startswith(header):
@@ -666,7 +668,8 @@ class Renderer(renderer.Renderer):
         dns_path = util.target_path(target, self.dns_path)
         resolv_content = self._render_dns(network_state, existing_dns_path=dns_path)
-        util.write_file(dns_path, resolv_content, file_mode)
+        if resolv_content:
+            util.write_file(dns_path, resolv_content, file_mode)
         if self.networkmanager_conf_path:
             nm_conf_path = util.target_path(target, self.networkmanager_conf_path)
diff --git a/tests/unittests/test_net.py b/tests/unittests/test_net.py
index d679e92..5313d2d 100644
--- a/tests/unittests/test_net.py
+++ b/tests/unittests/test_net.py
@@ -2098,6 +2098,10 @@ TYPE=Ethernet
 USERCTL=no
 """
         self.assertEqual(expected, found[nspath + 'ifcfg-interface0'])
+        # The configuration has no nameserver information make sure we
+        # do not write the resolv.conf file
+        respath = '/etc/resolv.conf'
+        self.assertNotIn(respath, found.keys())

     def test_config_with_explicit_loopback(self):
         ns = network_state.parse_net_config_data(CONFIG_V1_EXPLICIT_LOOPBACK)
@@ -2456,6 +2460,10 @@ TYPE=Ethernet
 USERCTL=no
 """
         self.assertEqual(expected, found[nspath + 'ifcfg-interface0'])
+        # The configuration has no nameserver information make sure we
+        # do not write the resolv.conf file
+        respath = '/etc/resolv.conf'
+        self.assertNotIn(respath, found.keys())

     def test_config_with_explicit_loopback(self):
         ns = network_state.parse_net_config_data(CONFIG_V1_EXPLICIT_LOOPBACK)

But, I believe that fix is in 19.1 (or likely patched in the distro version).
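The guard added by that commit is easy to see in isolation. Here is a small standalone sketch (simplified names, not cloud-init's actual code, which uses a ResolvConf content object in cloudinit/net/sysconfig.py): build the resolv.conf body, return None when there is nothing but the header, and have the caller skip the write entirely.

```python
# Sketch of the commit's guard (simplified; hypothetical helper names).

def render_dns(nameservers, searchdomains, header="; Created by cloud-init\n"):
    """Return resolv.conf text, or None when there is no DNS information."""
    body = "".join("nameserver %s\n" % ns for ns in nameservers)
    body += "".join("search %s\n" % d for d in searchdomains)
    if not body:
        # Nothing but the header would be written -> signal "skip".
        return None
    return header + body

def maybe_write(path, content, written):
    """Second hunk of the diff: only write when there is real content."""
    if content:
        written[path] = content

files = {}
maybe_write("/etc/resolv.conf", render_dns([], []), files)
# no DNS info -> /etc/resolv.conf is never touched, leaving netconfig free
# to write it from DHCP data later
```

This matters on SLES because a header-only /etc/resolv.conf written by cloud-init blocks netconfig from populating the file with DNS servers obtained from DHCP.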
Later in the boot, we can see that networking didn't actually come up: the Azure datasource can't find a lease file and goes into a fallback mode which brings up networking (it does), but not with dhcp, which is why you're missing DNS (it's provided via an option in the DHCP response).

2019-09-11 18:00:15,946 - azure.py[DEBUG]: Unable to find endpoint in dhclient logs. Falling back to check lease files
2019-09-11 18:00:15,946 - azure.py[DEBUG]: Looking for endpoint in lease file /var/lib/dhcp/dhclient.eth0.leases
2019-09-11 18:00:15,946 - handlers.py[DEBUG]: start: azure-ds/_get_value_from_leases_file: _get_value_from_leases_file
2019-09-11 18:00:15,946 - util.py[DEBUG]: Reading from /var/lib/dhcp/dhclient.eth0.leases (quiet=False)
2019-09-11 18:00:15,947 - azure.py[ERROR]: Failed to read /var/lib/dhcp/dhclient.eth0.leases: [Errno 2] No such file or directory:
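For reference, the v2 network config in the log above is what gets rendered into the ifcfg file shown earlier. A minimal illustrative sketch of that mapping (a hypothetical helper, not cloud-init's actual renderer):

```python
# Sketch: map one netplan-v2 'ethernets' entry to sysconfig ifcfg-style
# key=value lines resembling the file cloud-init wrote above.
# (Hypothetical helper for illustration only.)

def v2_to_ifcfg(name, cfg):
    """Render one v2 ethernet entry as ifcfg-style lines."""
    lines = [
        "# Created by cloud-init on instance boot automatically, do not edit.",
        "#",
        "BOOTPROTO=dhcp" if cfg.get("dhcp4") else "BOOTPROTO=static",
        "DEVICE=%s" % name,
    ]
    mac = cfg.get("match", {}).get("macaddress")
    if mac:
        lines.append("HWADDR=%s" % mac)
    lines += ["ONBOOT=yes", "STARTMODE=auto", "TYPE=Ethernet"]
    return "\n".join(lines)

cfg = {"set-name": "eth0",
       "match": {"macaddress": "00:0d:3a:6e:6f:8f"},
       "dhcp4": True}
print(v2_to_ifcfg("eth0", cfg))
```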
[Yahoo-eng-team] [Bug 1843502] Re: Network config is incorrectly parsed when nameservers are specified
Issue is related to local changes, marking invalid.

** Changed in: cloud-init
   Status: Incomplete => Invalid

** Changed in: cloud-init (Suse)
   Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1843502

Title:
  Network config is incorrectly parsed when nameservers are specified

Status in cloud-init: Invalid
Status in cloud-init package in Suse: Invalid

Bug description:
  The issue was reproduced on Azure with cloud-init 19.1 on a SLES12 SP4 machine. Looking at the code, the same behavior could be reproduced in any other configuration where the cloud provider specifies nameservers in the network configuration. The nameservers specified in the network configuration are ignored and cloud-init raises an error.

  In network_state.py the function _v2_common builds a name_cmd dictionary which is then passed to the function handle_nameserver. handle_nameserver has a decorator that enforces that the passed-in dictionary has the key "address", but _v2_common builds a dictionary with the key "addresses" instead. That results in an error being raised.
Here's a snapshot of the cloud-init.log:

2019-09-09 16:21:29,479 - network_state.py[DEBUG]: v2(nameserver) -> v1(nameserver): {'search': 'xkf00b0rtzgejkug4xc2pcinre.xx.internal.cloudapp.net', 'type': 'nameserver', 'addresses': '168.63.129.16'}
2019-09-09 16:21:29,479 - network_state.py[WARNING]: Skipping invalid command: {'nameservers': {'search': 'xkf00b0rtzgejkug4xc2pcinre.xx.internal.cloudapp.net', 'addresses': '168.63.129.16'}, 'eth0': {'set-name': 'eth0', 'match': {'macaddress': u'00:0d:3a:6d:ca:25'}, 'dhcp4': True}}
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/cloudinit/net/network_state.py", line 321, in parse_config_v2
    self._v2_common(command)
  File "/usr/lib/python2.7/site-packages/cloudinit/net/network_state.py", line 697, in _v2_common
    self.handle_nameserver(name_cmd)
  File "/usr/lib/python2.7/site-packages/cloudinit/net/network_state.py", line 118, in decorator
    required_keys))
InvalidCommand: Command missing set(['address']) of required keys ['address']

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1843502/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
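The traceback boils down to a required-key check failing because of a one-letter key mismatch. A self-contained sketch reproducing the pattern (simplified from network_state.py; names abbreviated, and ValueError stands in for cloud-init's InvalidCommand):

```python
# Sketch of the failure mode: a decorator rejects commands missing
# required keys, and the v2 parsing path builds the dict with the wrong
# key name ('addresses' instead of 'address').

def ensure_command_keys(required_keys):
    def wrapper(func):
        def decorator(self, command):
            missing = set(required_keys) - set(command)
            if missing:
                raise ValueError("Command missing %s of required keys %s"
                                 % (missing, required_keys))
            return func(self, command)
        return decorator
    return wrapper

class State:
    @ensure_command_keys(["address"])
    def handle_nameserver(self, command):
        return command["address"]

s = State()
try:
    # what _v2_common effectively passed in:
    s.handle_nameserver({"addresses": "168.63.129.16"})
    failed = False
except ValueError:
    failed = True   # the nameserver config is dropped with a warning
```

Renaming the key (or having _v2_common emit 'address') makes the same call succeed.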
[Yahoo-eng-team] [Bug 1842130] Re: Hyper-V virtualization platform in nova doc error
Reviewed: https://review.opendev.org/679588
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=8e062b3fb46dc7612e4337f86835bd05
Submitter: Zuul
Branch: master

commit 8e062b3fb46dc7612e4337f86835bd05
Author: chenxing
Date: Mon Sep 2 11:06:21 2019 +0800

    Fix the incorrect powershell command

    Change-Id: I28fb4ddacd87b6fb98d8da6bc6a5dea69ae51431
    backport: stein rocky
    Closes-Bug: #1842130

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1842130

Title:
  Hyper-V virtualization platform in nova doc error

Status in OpenStack Compute (nova): Fix Released

Bug description:
  This bug tracker is for errors with the documentation; use the following as a template and remove or add fields as you see fit. Convert [ ] into [x] to check boxes:

  - [x] This doc is inaccurate in this way: one of the powershell commands under Configure NTP is incorrect.

    >w32tm /config /manualpeerlist:pool.ntp.org,0x8 /syncfromflags:MANUAL

    produces an error when run. The error states: The following arguments were unexpected: 8

  - [ ] This is a doc addition request.
  - [x] I have a fix to the document that I can paste below including example: input and output.
Need to add quotation marks around "pool.ntp.org,0x8".

Change from:
  >w32tm /config /manualpeerlist:pool.ntp.org,0x8 /syncfromflags:MANUAL
to:
  >w32tm /config /manualpeerlist:"pool.ntp.org,0x8" /syncfromflags:MANUAL

---
Release: 18.2.3.dev9 on 2019-08-29 19:02 SHA: 7be800d14a69225a1bbf7823bac57f318ad21412
Source: https://git.openstack.org/cgit/openstack/nova/tree/doc/source/admin/configuration/hypervisor-hyper-v.rst
URL: https://docs.openstack.org/nova/rocky/admin/configuration/hypervisor-hyper-v.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1842130/+subscriptions
[Yahoo-eng-team] [Bug 1841967] Re: ML2 mech driver sometimes receives network context without provider attributes in delete_network_postcommit
Reviewed: https://review.opendev.org/679483
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=fea2d9091f71a2ec88318121ed9a22180e1ae96f
Submitter: Zuul
Branch: master

commit fea2d9091f71a2ec88318121ed9a22180e1ae96f
Author: Mark Goddard
Date: Fri Aug 30 16:58:34 2019 +0100

    Create _mech_context before delete to avoid race

    When a network is deleted, precommit handlers are notified prior to the deletion of the network from the database. One handler exists in the ML2 plugin - _network_delete_precommit_handler. This handler queries the database for the current state of the network and uses it to create a NetworkContext which it saves under context._mech_context. When the postcommit handler _network_delete_after_delete_handler is triggered later, it passes the saved context._mech_context to mechanism drivers.

    A problem can occur with provider networks since the segments service also registers a precommit handler - _delete_segments_for_network. Both precommit handlers use the default priority, so the order in which they are called is random, and determined by dict ordering. If the segment precommit handler executes first, it will delete the segments associated with the network. When the ML2 plugin precommit handler runs it then sees no segments for the network and sets the provider attributes of the network in the NetworkContext to None.

    A mechanism driver that is passed a NetworkContext without provider attributes in its delete_network_postcommit method will not have the information to perform the necessary actions. In the case of the networking-generic-switch mechanism driver where this was observed, this resulted in the driver ignoring the event, because the network did not look like a VLAN.

    This change uses a priority of zero for the ML2 network delete precommit handler, to ensure it queries the network and stores the NetworkContext before the segments service has a chance to delete segments.
A similar change has been made for subnets, both to keep the pattern consistent and avoid any similar issues.

    Change-Id: I6482223ed2a479de4f5ef4cef056c311c0281408
    Closes-Bug: #1841967
    Depends-On: https://review.opendev.org/680001

** Changed in: neutron
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1841967

Title:
  ML2 mech driver sometimes receives network context without provider attributes in delete_network_postcommit

Status in neutron: Fix Released

Bug description:
  When a network is deleted, sometimes the delete_network_postcommit method of my ML2 mechanism driver receives a network object in the context that has the provider attributes set to None.

  I am using Rocky (13.0.4), on CentOS 7.5 + RDO, and kolla-ansible. I have three controllers running neutron-server.

  Specifically, the mechanism driver is networking-generic-switch. It needs the provider information in order to configure VLANs on physical switches, and without it I am left with stale switch configuration.

  In my testing I have found that reducing the number of neutron-server instances reduces the likelihood of seeing this issue. I did not see it with only one instance running, but only tested ~10 times.

  I have collected logs from a broken case and a working case, and one key difference I can see is that in the working case I see two of these messages, and in the broken case I see three:

  Network 3ed87da6-0b3a-455a-b813-7d069dc9e112 has no segments _extend_network_dict_provider /usr/lib/python2.7/site-packages/neutron/plugins/ml2/managers.py:168

  Indeed, _extend_network_dict_provider sets the provider attributes to None if there are no segments found in the DB. It seems to be a race condition between segment deletion and creation of the _mech_context in the network precommit.
To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1841967/+subscriptions
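The race and its fix can be illustrated generically. The sketch below is not neutron's registry API, just a simulation of priority-ordered callbacks: with equal priorities the dispatch order is effectively arbitrary (here, registration order), so the segment handler may run first and clear the data the ML2 handler needs; giving the ML2 handler priority 0 guarantees it snapshots the segments first.

```python
# Generic simulation of precommit callback ordering (assumed model, not
# neutron's callback registry).

def dispatch(callbacks, state):
    # sorted() is stable: equal priorities keep registration order, which
    # in the real bug depended on dict ordering and was effectively random.
    for priority, cb in sorted(callbacks, key=lambda c: c[0]):
        cb(state)

def ml2_save_context(state):
    # snapshot provider/segment info before anything mutates it
    state["mech_context"] = list(state["segments"])

def delete_segments(state):
    state["segments"].clear()

# Broken: both handlers at the default priority; delete_segments happened
# to run first, so the saved context has no provider information.
broken = {"segments": ["vlan:physnet1:100"]}
dispatch([(20, delete_segments), (20, ml2_save_context)], broken)

# Fixed: ML2 handler registered at priority 0 runs before the segments
# handler at the default priority.
fixed = {"segments": ["vlan:physnet1:100"]}
dispatch([(20, delete_segments), (0, ml2_save_context)], fixed)
```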
[Yahoo-eng-team] [Bug 1843643] [NEW] VM on encrypted boot volume fails to start after compute host reboot
Public bug reported:

Description
===========
Create volume from image and boot instance, all good.
https://docs.openstack.org/newton/user-guide/cli-nova-launch-instance-from-volume.html

Restart the compute host, and the VM fails to start. Manually running a nova hard reboot is able to recover it.

The root cause is that nova uses admin_context upon compute reboot to resume VMs, and admin_context does not have the information for libvirt to start the VM. A manual nova hard reboot can be used as a workaround, but auto-resume is the ideal.

Steps to reproduce
==================
1. Create volume from image and boot instance
2. Restart the compute host

Expected result
===============
VM is restarted after compute host reboot.

Actual result
=============
VM failed to restart.

Environment
===========
1. Nova context used by manual nova hard reboot

2019-09-11 17:20:52.314 9 ERROR nova.volume.cinder [req-8467f506-fbce-447a-a5a2-63e048dedf43 97b696e8d26f4cd4bc5f3352981e9987 dd838c0dfb2540e39ca34ab44ecbc58f - default default] http://172.17.1.27:9696', u'region': u'regionOne', u'internalURL': u'http://172.17.1.27:9696', u'publicURL': u'https://10.75.239.200:13696'} ], u'type': u'network', u'name': u'neutron'}, {u'endpoints': [ {u'adminURL': u'http://172.17.1.27:9292', u'region': u'regionOne', u'internalURL': u'http://172.17.1.27:9292', u'publicURL': u'https://10.75.239.200:13292'} ], u'type': u'image', u'name': u'glance'}, {u'endpoints': [ {u'adminURL': u'https://172.17.1.27:9311', u'region': u'regionOne', u'internalURL': u'https://172.17.1.27:9311', u'publicURL': u'https://172.17.1.27:9311'} ], u'type': u'key-manager', u'name': u'barbican'}, {u'endpoints': [ {u'adminURL': u'http://172.17.1.27:8778/placement', u'region': u'regionOne', u'internalURL': u'http://172.17.1.27:8778/placement', u'publicURL': u'https://10.75.239.200:13778/placement'} ], u'type': u'placement', u'name': u'placement'}, {u'endpoints': [ {u'adminURL': u'http://172.17.1.27:8776/v3/dd838c0dfb2540e39ca34ab44ecbc58f', u'region': u'regionOne', u'internalURL': u'http://172.17.1.27:8776/v3/dd838c0dfb2540e39ca34ab44ecbc58f', u'publicURL': u'https://10.75.239.200:13776/v3/dd838c0dfb2540e39ca34ab44ecbc58f'} ], u'type': u'volumev3', u'name': u'cinderv3'}], 'tenant': u'dd838c0dfb2540e39ca34ab44ecbc58f', 'read_only': False, 'project_id': u'dd838c0dfb2540e39ca34ab44ecbc58f', 'user_id': u'97b696e8d26f4cd4bc5f3352981e9987', 'show_deleted': False, 'system_scope': None, 'user_identity': u'97b696e8d26f4cd4bc5f3352981e9987 dd838c0dfb2540e39ca34ab44ecbc58f - default default', 'is_admin_project': True, 'project': u'dd838c0dfb2540e39ca34ab44ecbc58f', 'read_deleted': u'no', 'request_id': u'req-8467f506-fbce-447a-a5a2-63e048dedf43', 'roles': [u'reader', u'_member_', u'admin', u'member'], 'user_domain': u'default', 'user_name': u'admin'}>

2. Nova admin context used by nova upon compute host reboot

2019-09-11 17:20:52.315 9 ERROR nova.volume.cinder [req-8467f506-fbce-447a-a5a2-63e048dedf43 97b696e8d26f4cd4bc5f3352981e9987 dd838c0dfb2540e39ca34ab44ecbc58f - default default]

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1843643

Title:
  VM on encrypted boot volume fails to start after compute host reboot

Status in OpenStack Compute (nova): New
[Yahoo-eng-team] [Bug 1842666] Re: Bulk port creation with supplied security group also adds default security group
Reviewed: https://review.opendev.org/679852
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=88c7be55c221a87b4326a915580657f34d1ff582
Submitter: Zuul
Branch: master

commit 88c7be55c221a87b4326a915580657f34d1ff582
Author: Nate Johnston
Date: Tue Sep 3 15:56:59 2019 -0400

    Fix bulk port functioning with requested security groups

    When bulk ports are created with a security group supplied, the resulting port(s) should only have that security group assigned. But the resulting ports are getting both the requested security group as well as the tenant default security group assigned. This fixes that condition.

    In order to ensure that bulk port creation results in the proper assignment of security groups, add some testing.

    Change-Id: I65aca7cd14447cc988e4bc4ab62bc7b9279e7522
    Fixes-Bug: #1842666

** Changed in: neutron
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1842666

Title:
  Bulk port creation with supplied security group also adds default security group

Status in neutron: Fix Released

Bug description:
  When bulk ports are created with a security group supplied, the resulting port(s) should only have that security group assigned. But the resulting ports are getting both the requested security group as well as the tenant default security group assigned.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1842666/+subscriptions
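The intended behavior reduces to a small rule: apply the tenant default security group only when the request supplied none. A sketch of that rule (assumed logic for illustration, not neutron's actual code; the default group id is a placeholder):

```python
# Sketch: decide which security groups a newly created port should get.
# "default-sg-id" is a hypothetical placeholder for the tenant default
# security group id.

def groups_for_port(requested, default_sg="default-sg-id"):
    """Return the security groups a new port should be assigned."""
    if requested:
        # Bulk request supplied groups: use exactly those, do NOT add
        # the tenant default on top (this was the bug).
        return list(requested)
    return [default_sg]
```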
[Yahoo-eng-team] [Bug 1843639] [NEW] libvirt: post_live_migration failures to disconnect volumes result in the rollback of live migrations
Public bug reported:

Description
===========
At present any exceptions encountered during post_live_migration on the source after an instance has successfully migrated result in the overall failure of the migration and the instance being listed as running on the source while actually being on the destination.

Any such errors should be logged but otherwise ignored, allowing the migration to complete and the instance to continue to be tracked correctly.

Steps to reproduce
==================
- Live migrate an instance from host A to host B, ensuring post_live_migration fails.

Expected result
===============
Any failures on the source encountered by post_live_migration are logged but the overall migration still completes successfully.

Actual result
=============
The instance and overall migration are left in error states. Additionally the instance is reported as residing on the source host while actually running on the destination.

Environment
===========
1. Exact version of OpenStack you are running. See the following list for all releases: http://docs.openstack.org/releases/

   ba3147420c0a6f8b17a46b1a493b89bcd67af6f1

2. Which hypervisor did you use? (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...) What's the version of that?

   Libvirt + KVM

2. Which storage type did you use? (For example: Ceph, LVM, GPFS, ...) What's the version of that?

   N/A

3. Which networking type did you use? (For example: nova-network, Neutron with OpenVSwitch, ...)

   N/A

** Affects: nova
   Importance: Undecided
   Status: New

** Tags: libvirt live-migration volumes

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1843639

Title:
  libvirt: post_live_migration failures to disconnect volumes result in the rollback of live migrations

Status in OpenStack Compute (nova): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1843639/+subscriptions
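The proposed handling can be sketched as wrapping each source-side volume disconnect so a failure is logged rather than raised (an assumed shape for illustration, not nova's actual code):

```python
# Sketch: source-side cleanup in post_live_migration where a volume
# disconnect failure must not roll back a migration that already
# completed on the destination. (Assumed helper shape, not nova's code.)

import logging

LOG = logging.getLogger(__name__)

def post_live_migration_cleanup(volumes, disconnect):
    """Disconnect source-side volumes; log failures instead of raising."""
    errors = []
    for vol in volumes:
        try:
            disconnect(vol)
        except Exception:
            # Log and continue: the instance is already running on the
            # destination, so failing here would leave nova believing the
            # instance still resides on the source.
            LOG.exception("Failed to disconnect volume %s", vol)
            errors.append(vol)
    return errors
```

Note the loop still attempts every remaining volume after a failure, so one bad disconnect does not leave the rest connected.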
[Yahoo-eng-team] [Bug 1843634] [NEW] cloud-init misconfigure the network on SLES
Public bug reported:

I reproduced the issue on an Azure VM with SLES12 SP4 and cloud-init 19.1. The DNS is unreachable when cloud-init takes the responsibility of configuring the network. No nameservers or search domains are added to /etc/resolv.conf, as follows:

; Created by cloud-init on instance boot automatically, do not edit.
;
### /etc/resolv.conf file autogenerated by netconfig!
#
# Before you change this file manually, consider to define the
# static DNS configuration using the following variables in the
# /etc/sysconfig/network/config file:
#     NETCONFIG_DNS_STATIC_SEARCHLIST
#     NETCONFIG_DNS_STATIC_SERVERS
#     NETCONFIG_DNS_FORWARDER
# or disable DNS configuration updates via netconfig by setting:
#     NETCONFIG_DNS_POLICY=''
#
# See also the netconfig(8) manual page and other documentation.
#
# Note: Manual change of this file disables netconfig too, but
# may get lost when this file contains comments or empty lines
# only, the netconfig settings are same with settings in this
# file and in case of a "netconfig update -f" call.
#
### Please remove (at least) this line when you modify the file!

Here is also the contents of /etc/sysconfig/network/config for your reference:

## Type: integer
## Default: ""
#
# How long to wait for IPv6 autoconfig in ifup when requested with
# the auto6 or +auto6 tag in BOOTPROTO variable.
# When unset, a wicked built-in default defer time (10sec) is used.
#
AUTO6_WAIT_AT_BOOT=""

## Type: list(all,dns,none,"")
## Default: ""
#
# Whether to update system (DNS) settings from IPv6 RA when requested
# with the auto6 or +auto6 tag in BOOTPROTO variable.
# Defaults to update if autoconf sysctl (address autoconf) is enabled.
#
AUTO6_UPDATE=""

## Type: list(auto,yes,no)
## Default: "auto"
#
# Permits to specify/modify a global ifcfg default. Use with care!
#
# This settings breaks rules for many things, which require carrier
# before they can start, e.g. L2 link protocols, link authentication,
# ipv4 duplicate address detection, ipv6 duplicate detection will
# happen "post-mortem" and maybe even cause to disable ipv6 at all.
# See also "man ifcfg" for further informations.
#
LINK_REQUIRED="auto"

## Type: string
## Default: ""
#
# Allows to specify a comma separated list of debug facilities used
# by wicked. Negated facility names can be prepended by a "-", e.g.:
#   "all,-events,-socket,-objectmodel,xpath,xml,dbus"
#
# When set, wicked debug level is automatically enabled.
# For a complete list of facility names, see: "wicked --debug help".
#
WICKED_DEBUG=""

## Type: list("",error,warning,notice,info,debug,debug1,debug2,debug3)
## Default: ""
#
# Allows to specify wicked debug level. Default level is "notice".
#
WICKED_LOG_LEVEL=""

## Path: Network/General
## Description: Global network configuration
#
# Note:
# Most of the options can and should be overridden by per-interface
# settings in the ifcfg-* files.
#
# Note: The ISC dhclient started by the NetworkManager is not using any
# of these options -- NetworkManager is not using any sysconfig settings.
#

## Type: yesno
## Default: yes
# If ifup should check if an IPv4 address is already in use, set this to yes.
#
# Make sure that packet sockets (CONFIG_PACKET) are supported in the kernel,
# since this feature uses arp, which depends on that.
# Also be aware that this takes one second per interface; consider that when
# setting up a lot of interfaces.
CHECK_DUPLICATE_IP="yes"

## Type: list(auto,yes,no)
## Default: auto
# If ifup should send a gratuitous ARP to inform the receivers about its
# IPv4 addresses. Default is to send gratuitous ARP, when duplicate IPv4
# address check is enabled and the check were sucessful.
#
# Make sure that packet sockets (CONFIG_PACKET) are supported in the kernel,
# since this feature uses arp, which depends on that.
SEND_GRATUITOUS_ARP="auto"

## Type: yesno
## Default: no
# Switch on/off debug messages for all network configuration stuff. If set to no
# most scripts can enable it locally with "-o debug".
DEBUG="no"

## Type: integer
## Default: 30
#
# Some interfaces need some time to come up or come asynchronously via hotplug.
# WAIT_FOR_INTERFACES is a global wait for all mandatory interfaces in
# seconds. If empty no wait occurs.
#
WAIT_FOR_INTERFACES="30"

## Type: yesno
## Default: yes
#
# With this variable you can determine if the SuSEfirewall when enabled
# should get started when network interfaces are started.
FIREWALL="yes"

## Type: int
## Default: 30
#
# When using NetworkManager you may define a timeout to wait for NetworkManager
# to connect in NetworkManager-wait-online.service. Other network services
# may require the system to have a valid network setup in order to succeed.
#
# This variable has no effect if NetworkManager is disabled.
#
NM_ONLINE_TIMEOUT="30"

## Type: string
## Default: "dns-resolver dns-bind
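The file above is shell-style KEY="value" assignments interleaved with comment blocks. A minimal parser sketch, assuming simple double-quoting and no line continuations (a generic illustration, not what netconfig or wicked actually use):

```python
# Sketch: parse sysconfig-style KEY="value" files into a dict.
# Assumptions: one assignment per line, '#'-prefixed comments, plain
# double quotes, no escapes or continuations.

def parse_sysconfig(text):
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, val = line.partition("=")
        values[key.strip()] = val.strip().strip('"')
    return values

sample = '''
## Type: yesno
## Default: yes
CHECK_DUPLICATE_IP="yes"
NM_ONLINE_TIMEOUT="30"
'''
```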
[Yahoo-eng-team] [Bug 1843615] Re: TestInstanceNotificationSampleWithMultipleCompute.test_multiple_compute_actions intermittently failing since Sept 10, 2019
** Also affects: nova/stein
   Importance: Undecided
   Status: New

** Changed in: nova/stein
   Status: New => Confirmed

** Changed in: nova/stein
   Importance: Undecided => High

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1843615

Title:
  TestInstanceNotificationSampleWithMultipleCompute.test_multiple_compute_actions intermittently failing since Sept 10, 2019

Status in OpenStack Compute (nova): In Progress
Status in OpenStack Compute (nova) stein series: Confirmed

Bug description:
  Seen here:
  https://openstack.fortnebula.com:13808/v1/AUTH_e8fd161dc34c421a979a9e6421f823e9/zuul_opendev_logs_c4c/671072/18/gate/nova-tox-functional/c4ca604/job-output.txt

  2019-09-11 16:01:31.460243 | ubuntu-bionic | {3} nova.tests.functional.notification_sample_tests.test_instance.TestInstanceNotificationSampleWithMultipleCompute.test_multiple_compute_actions [15.126947s] ... FAILED
  2019-09-11 16:01:31.460323 | ubuntu-bionic |
  2019-09-11 16:01:31.460383 | ubuntu-bionic | Captured traceback:
  2019-09-11 16:01:31.460442 | ubuntu-bionic | ~~~
  2019-09-11 16:01:31.460525 | ubuntu-bionic | Traceback (most recent call last):
  2019-09-11 16:01:31.460714 | ubuntu-bionic |   File "nova/tests/functional/notification_sample_tests/test_instance.py", line 61, in test_multiple_compute_actions
  2019-09-11 16:01:31.460775 | ubuntu-bionic |     action(server)
  2019-09-11 16:01:31.460975 | ubuntu-bionic |   File "nova/tests/functional/notification_sample_tests/test_instance.py", line 306, in _test_live_migration_force_complete
  2019-09-11 16:01:31.461065 | ubuntu-bionic |     fake_notifier.VERSIONED_NOTIFICATIONS)
  2019-09-11 16:01:31.461297 | ubuntu-bionic |   File "/home/zuul/src/opendev.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/testtools/testcase.py", line 411, in assertEqual
  2019-09-11 16:01:31.461394 | ubuntu-bionic |     self.assertThat(observed, matcher, message)
  2019-09-11 16:01:31.461628 | ubuntu-bionic |   File "/home/zuul/src/opendev.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/testtools/testcase.py", line 498, in assertThat
  2019-09-11 16:01:31.461695 | ubuntu-bionic |     raise mismatch_error
  2019-09-11 16:01:31.484778 | ubuntu-bionic | testtools.matchers._impl.MismatchError: 6 != 7: [{'priority': 'INFO', 'payload': {'nova_object.namespace': 'nova', 'nova_object.name': 'RequestSpecPayload', 'nova_object.version': '1.1', 'nova_object.data': {'flavor': {'nova_object.namespace': 'nova', 'nova_object.name': 'FlavorPayload', 'nova_object.version': '1.4', 'nova_object.data': {'flavorid': u'a22d5517-147c-4147-a0d1-e698df5cd4e3', 'is_public': True, 'ephemeral_gb': 0, 'vcpus': 1, 'root_gb': 1, 'disabled': False, 'description': None, 'projects': None, 'vcpu_weight': 0, 'memory_mb': 512, 'name': u'test_flavor', 'rxtx_factor': 1.0, 'extra_specs': {'trait:COMPUTE_STATUS_DISABLED': u'forbidden', u'hw:watchdog_action': u'disabled'}, 'swap': 0}}, 'image': {'nova_object.namespace': 'nova', 'nova_object.name': 'ImageMetaPayload', 'nova_object.version': '1.0', 'nova_object.data': {'direct_url': None, 'container_format': u'raw', 'visibility': u'public', 'size': 25165824, 'disk_format': u'raw', 'virtual_size': None, 'protected': False, 'status': u'active', 'updated_at': '2011-01-01T01:02:03Z', 'tags': [u'tag1', u'tag2'], 'name': u'fakeimage123456', 'created_at': '2011-01-01T01:02:03Z', 'min_disk': 0, 'checksum': None, 'owner': None, 'id': u'155d900f-4e14-4e4c-a73d-069cbf4541e6', 'properties': {'nova_object.namespace': 'nova', 'nova_object.name': 'ImageMetaPropsPayload', 'nova_object.version': '1.1', 'nova_object.data': {'hw_architecture': u'x86_64'}}, 'min_ram': 0}}, 'requested_destination': {'nova_object.namespace': 'nova', 'nova_object.name': 'DestinationPayload', 'nova_object.version': '1.0', 'nova_object.data': {'host': u'host2', 'aggregates': None, 'node': u'host2', 'cell': {'nova_object.namespace': 'nova', 'nova_object.name': 'CellMappingPayload', 'nova_object.version': '2.0', 'nova_object.data': {'disabled': False, 'uuid': u'49bb4305-6acb-4b60-abff-382e2e85108a', 'name': u'cell1', 'security_groups': [u'default'], 'scheduler_hints': {}, 'project_id': u'6f70656e737461636b20342065766572', 'retry': None, 'num_instances': 1, 'instance_group': None, 'force_nodes': None, 'ignore_hosts': [u'compute'], 'force_hosts': None, 'numa_topology': None, 'instance_uuid': u'8d65a36d-36e8-4994-9bdd-89a455166ab9', 'availability_zone': None, 'user_id': u'fake', 'pci_requests': {'nova_object.namespace': 'nova', 'nova_object.name': 'InstancePCIRequestsPayload', 'nova_object.version': '1.0', 'nova_object.data': {'requests': [], 'instance_uuid': u'8d65a36d-36e8-4994-9bdd-89a455166ab9', 'publisher_id': u'nova-scheduler:host2',
[Yahoo-eng-team] [Bug 1843615] [NEW] TestInstanceNotificationSampleWithMultipleCompute.test_multiple_compute_actions intermittently failing since Sept 10, 2019
Public bug reported:

Seen here: https://openstack.fortnebula.com:13808/v1/AUTH_e8fd161dc34c421a979a9e6421f823e9/zuul_opendev_logs_c4c/671072/18/gate/nova-tox-functional/c4ca604/job-output.txt

2019-09-11 16:01:31.460243 | ubuntu-bionic | {3} nova.tests.functional.notification_sample_tests.test_instance.TestInstanceNotificationSampleWithMultipleCompute.test_multiple_compute_actions [15.126947s] ... FAILED

Captured traceback:
~~~
Traceback (most recent call last):
  File "nova/tests/functional/notification_sample_tests/test_instance.py", line 61, in test_multiple_compute_actions
    action(server)
  File "nova/tests/functional/notification_sample_tests/test_instance.py", line 306, in _test_live_migration_force_complete
    fake_notifier.VERSIONED_NOTIFICATIONS)
  File "/home/zuul/src/opendev.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/testtools/testcase.py", line 411, in assertEqual
    self.assertThat(observed, matcher, message)
  File "/home/zuul/src/opendev.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/testtools/testcase.py", line 498, in assertThat
    raise mismatch_error
testtools.matchers._impl.MismatchError: 6 != 7: [{'priority': 'INFO', 'payload': {'nova_object.namespace': 'nova', 'nova_object.name': 'RequestSpecPayload', 'nova_object.version': '1.1', 'nova_object.data': {'flavor': {'nova_object.namespace': 'nova', 'nova_object.name': 'FlavorPayload',
'nova_object.version': '1.4', 'nova_object.data': {'flavorid': u'a22d5517-147c-4147-a0d1-e698df5cd4e3', 'is_public': True, 'ephemeral_gb': 0, 'vcpus': 1, 'root_gb': 1, 'disabled': False, 'description': None, 'projects': None, 'vcpu_weight': 0, 'memory_mb': 512, 'name': u'test_flavor', 'rxtx_factor': 1.0, 'extra_specs': {'trait:COMPUTE_STATUS_DISABLED': u'forbidden', u'hw:watchdog_action': u'disabled'}, 'swap': 0}}, 'image': {'nova_object.namespace': 'nova', 'nova_object.name': 'ImageMetaPayload', 'nova_object.version': '1.0', 'nova_object.data': {'direct_url': None, 'container_format': u'raw', 'visibility': u'public', 'size': 25165824, 'disk_format': u'raw', 'virtual_size': None, 'protected': False, 'status': u'active', 'updated_at': '2011-01-01T01:02:03Z', 'tags': [u'tag1', u'tag2'], 'name': u'fakeimage123456', 'created_at': '2011-01-01T01:02:03Z', 'min_disk': 0, 'checksum': None, 'owner': None, 'id': u'155d900f-4e14-4e4c-a73d-069cbf4541e6', 'properties': {'nova_object.namespace': 'nova', 'nova_object.name': 'ImageMetaPropsPayload', 'nova_object.version': '1.1', 'nova_object.data': {'hw_architecture': u'x86_64'}}, 'min_ram': 0}}, 'requested_destination': {'nova_object.namespace': 'nova', 'nova_object.name': 'DestinationPayload', 'nova_object.version': '1.0', 'nova_object.data': {'host': u'host2', 'aggregates': None, 'node': u'host2', 'cell': {'nova_object.namespace': 'nova', 'nova_object.name': 'CellMappingPayload', 'nova_object.version': '2.0', 'nova_object.data': {'disabled': False, 'uuid': u'49bb4305-6acb-4b60-abff-382e2e85108a', 'name': u'cell1', 'security_groups': [u'default'], 'scheduler_hints': {}, 'project_id': u'6f70656e737461636b20342065766572', 'retry': None, 'num_instances': 1, 'instance_group': None, 'force_nodes': None, 'ignore_hosts': [u'compute'], 'force_hosts': None, 'numa_topology': None, 'instance_uuid': u'8d65a36d-36e8-4994-9bdd-89a455166ab9', 'availability_zone': None, 'user_id': u'fake', 'pci_requests': {'nova_object.namespace': 'nova', 
'nova_object.name': 'InstancePCIRequestsPayload', 'nova_object.version': '1.0', 'nova_object.data': {'requests': [], 'instance_uuid': u'8d65a36d-36e8-4994-9bdd-89a455166ab9', 'publisher_id': u'nova-scheduler:host2', 'event_type': u'scheduler.select_destinations.start'}, {'priority': 'INFO', 'payload': {'nova_object.namespace': 'nova', 'nova_object.name': 'RequestSpecPayload', 'nova_object.version': '1.1', 'nova_object.data': {'flavor': {'nova_object.namespace': 'nova', 'nova_object.name': 'FlavorPayload', 'nova_object.version': '1.4', 'nova_object.data': {'flavorid': u'a22d5517-147c-4147-a0d1-e698df5cd4e3', 'is_public': True, 'ephemeral_gb': 0, 'vcpus': 1, 'root_gb': 1, 'disabled': False, 'description': None, 'projects': None, 'vcpu_weight': 0, 'memory_mb': 512, 'name': u'test_flavor', 'rxtx_factor': 1.0, 'extra_specs': {'trait:COMPUTE_STATUS_DISABLED':
[Yahoo-eng-team] [Bug 1843609] [NEW] Domain-specific domain ID resolution breaks with system-scoped tokens
Public bug reported:

System-scope was introduced in Queens [0], but recently we discovered a weird case where system users aren't able to do things they should be able to with system-scoped tokens when domain-specific drivers are enabled. For example, they are unable to list groups or users, because the API logic for GET /v3/groups and GET /v3/users tries to resolve a domain ID from the request [1]. If domain-specific drivers are enabled and there isn't a domain ID associated with the request (either via a domain-scoped token or a project-scoped token), the API returns a 401, which makes no sense from the context of a system user [2].

You can recreate this locally by enabling domain-specific drivers in keystone.conf [3] and running the test_groups or test_users v3 protection tests using:

$ tox -e py37 -- keystone.tests.unit.protection.v3.test_groups

Observed failures: https://pasted.tech/pastes/b45c6b015b97c865018c4b3290f60e0456fe304a.raw

This isn't blocking the gate because domain-specific drivers are off by default and the logic short-circuits [4].

[0] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html
[1] https://opendev.org/openstack/keystone/src/branch/master/keystone/api/groups.py#L84
[2] https://opendev.org/openstack/keystone/src/branch/master/keystone/server/flask/common.py#L917-L943
[3] https://pasted.tech/pastes/e8ffce7a3377b960dd88de8c88e2ccfd173ec726.raw
[4] https://opendev.org/openstack/keystone/src/branch/master/keystone/server/flask/common.py#L924-L926

** Affects: keystone
   Importance: High
   Status: Triaged

** Changed in: keystone
   Status: New => Triaged

** Changed in: keystone
   Importance: Undecided => High

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1843609

Title: Domain-specific domain ID resolution breaks with system-scoped tokens

Status in OpenStack Identity (keystone): Triaged

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1843609/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
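The failing resolution path described above can be sketched as follows. This is an illustrative simplification, not keystone's actual code: the function name, token dict shape, and `Unauthorized` class are all stand-ins for the real flask/common.py logic linked at [2].

```python
# Hypothetical sketch of the domain ID resolution described in the report.
# When domain-specific drivers are enabled, the API derives a domain ID
# from the token scope and rejects requests that have none -- which
# wrongly 401s system-scoped tokens.

class Unauthorized(Exception):
    pass


def resolve_domain_id(token, domain_aware_drivers=True):
    """Return a domain ID filter, or None for an unfiltered query."""
    if not domain_aware_drivers:
        # Default config: the logic short-circuits, no domain filter needed.
        return None
    if token.get("system"):
        # Proposed behavior: system-scoped tokens span all domains,
        # so no single domain ID should be required.
        return None
    if token.get("domain_id"):
        return token["domain_id"]
    if token.get("project_domain_id"):
        return token["project_domain_id"]
    # Behavior described in the report: anything else gets a 401.
    raise Unauthorized("could not resolve a domain ID from the request")


# A system-scoped token should not be rejected:
assert resolve_domain_id({"system": True}) is None
# A domain-scoped token filters to its own domain:
assert resolve_domain_id({"domain_id": "d1"}) == "d1"
```

Without the `token.get("system")` branch, the first assertion would raise `Unauthorized`, which is the 401 the protection tests observe.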
[Yahoo-eng-team] [Bug 1843502] Re: Network config is incorrectly parsed when nameservers are specified
Thanks for the bug and the logs. Looking at the network-config that was generated:

>>> print(yaml.dump(nc, default_flow_style=False, indent=4))
ethernets:
    eth0:
        dhcp4: true
        match:
            macaddress: 00:0d:3a:6d:ca:25
        set-name: eth0
    nameservers:
        addresses: 168.63.129.16
        search: xkf00b0rtzgejk

The bug is that nameservers needs to be indented *under* eth0. However, cloud-init upstream does not parse or process nameservers [1] from Azure metadata, so I can't understand why you have this bug unless the cloud-init 19.1 on SLES has some downstream patches.

1. https://git.launchpad.net/cloud-init/tree/cloudinit/sources/DataSourceAzure.py#n1305

** Changed in: cloud-init
   Status: New => Incomplete

** Also affects: cloud-init (Suse)
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1843502

Title: Network config is incorrectly parsed when nameservers are specified

Status in cloud-init: Incomplete
Status in cloud-init package in Suse: New

Bug description:
The issue was reproduced on Azure with cloud-init 19.1 on a SLES12 SP4 machine. Looking at the code, the same behavior could be reproduced in any other configuration where the cloud provider specifies nameservers in the network configuration. The specified nameservers in the network configuration are ignored and cloud-init raises an error.

In network_state.py, the function _v2_common builds a name_cmd dictionary which is then passed to the function handle_nameserver. handle_nameserver has a decorator that requires the passed-in dictionary to have the key "address", but _v2_common builds a dictionary that has the key "addresses" instead. That results in an error being raised.

Here's a snapshot of the cloud-init.log:

2019-09-09 16:21:29,479 - network_state.py[DEBUG]: v2(nameserver) -> v1(nameserver): {'search': 'xkf00b0rtzgejkug4xc2pcinre.xx.internal.cloudapp.net', 'type': 'nameserver', 'addresses': '168.63.129.16'}
2019-09-09 16:21:29,479 - network_state.py[WARNING]: Skipping invalid command: {'nameservers': {'search': 'xkf00b0rtzgejkug4xc2pcinre.xx.internal.cloudapp.net', 'addresses': '168.63.129.16'}, 'eth0': {'set-name': 'eth0', 'match': {'macaddress': u'00:0d:3a:6d:ca:25'}, 'dhcp4': True}}
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/cloudinit/net/network_state.py", line 321, in parse_config_v2
    self._v2_common(command)
  File "/usr/lib/python2.7/site-packages/cloudinit/net/network_state.py", line 697, in _v2_common
    self.handle_nameserver(name_cmd)
  File "/usr/lib/python2.7/site-packages/cloudinit/net/network_state.py", line 118, in decorator
    required_keys))
InvalidCommand: Command missing set(['address']) of required keys ['address']

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1843502/+subscriptions
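The key mismatch in the traceback can be reproduced with a small self-contained sketch. The decorator and handler names mirror those in cloudinit's net/network_state.py, but the bodies here are simplified illustrations, not the actual cloud-init code:

```python
# Illustrative sketch of the mismatch described above: the v2 parser
# builds a command dict with the plural key 'addresses', while the
# handler's decorator requires the singular key 'address'.

class InvalidCommand(Exception):
    pass


def ensure_command_keys(required_keys):
    """Decorator that rejects command dicts missing any required key."""
    def decorator(func):
        def wrapper(command):
            missing = set(required_keys) - set(command)
            if missing:
                raise InvalidCommand(
                    "Command missing %s of required keys %s"
                    % (sorted(missing), required_keys))
            return func(command)
        return wrapper
    return decorator


@ensure_command_keys(["address"])
def handle_nameserver(command):
    return command["address"]


# The v2 path builds this dict -- note the plural 'addresses':
name_cmd = {"type": "nameserver", "addresses": "168.63.129.16",
            "search": "example.internal"}
try:
    handle_nameserver(name_cmd)
except InvalidCommand as e:
    print("rejected:", e)
```

Renaming the key to `address` (or teaching the decorator about the plural form) makes the command pass validation, which is the shape of the fix the report implies.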
[Yahoo-eng-team] [Bug 1843602] [NEW] cloud-init collect logs needs to defer to distro for debug data
Public bug reported:

While triaging a bug in a SLES-based distro using cloud-init, the collect-logs output had a few issues:

1) The user did not have systemd journal privileges:

Unexpected error while running command.
Command: ['journalctl', '--boot=0', '-o', 'short-precise']
Exit code: 1
Reason: -
Stdout:
Stderr: Hint: You are currently not seeing messages from other users and the system.
          Users in the 'systemd-journal' group can see all messages. Pass -q to turn off this notice.
        No journal files were opened due to insufficient permissions.

2) We tried to collect dpkg-version on a non-Debian distro:

Unexpected error while running command.
Command: ['dpkg-query', '--show', '-f=${Version}\n', 'cloud-init']
Exit code: -
Reason: [Errno 2] No such file or directory
Stdout: -
Stderr: -

3) The version file is empty.

4) dmesg as a non-root user fails:

Unexpected error while running command.
Command: ['dmesg']
Exit code: 1
Reason: -
Stdout:
Stderr: dmesg: read kernel buffer failed: Operation not permitted

** Affects: cloud-init
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1843602

Title: cloud-init collect logs needs to defer to distro for debug data

Status in cloud-init: New

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1843602/+subscriptions
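Issue 2 above is the clearest case for deferring to the distro: the version command should be picked per platform rather than hard-coded. A hedged sketch of that idea, with illustrative function names (this is not cloud-init's actual collect-logs code):

```python
# Sketch of distro-aware command selection for collect-logs: probe for
# the package manager on PATH instead of assuming dpkg, and fall back
# to cloud-init's own version reporting when neither is found.
import shutil


def version_command():
    """Return the command to query the installed cloud-init version."""
    if shutil.which("dpkg-query"):
        # Debian/Ubuntu family.
        return ["dpkg-query", "--show", "-f=${Version}\n", "cloud-init"]
    if shutil.which("rpm"):
        # SLES/openSUSE/RHEL family.
        return ["rpm", "-q", "--qf", "%{VERSION}\n", "cloud-init"]
    # Last resort: ask cloud-init itself.
    return ["cloud-init", "--version"]


print(version_command())
```

The journal and dmesg failures (issues 1 and 4) are permission problems rather than distro problems; the analogous fix there is to detect non-root invocation up front and either re-exec under sudo or note the missing data instead of emitting a scary traceback.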
[Yahoo-eng-team] [Bug 1843025] Re: FWaaS v2 fails to add ICMPv6 rules via horizon
*** This bug is a duplicate of bug 1799904 ***
    https://bugs.launchpad.net/bugs/1799904

** This bug has been marked a duplicate of bug 1799904
   ICMPv6 is not an available protocol when creating Firewall-Rule

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1843025

Title: FWaaS v2 fails to add ICMPv6 rules via horizon

Status in neutron: In Progress

Bug description:
In Rocky, FWaaS v2 fails to add the correct ip6tables rules for ICMPv6.

Steps to reproduce:
* Create a rule with Protocol ICMP, IP version 6 in horizon
* Add the rule to a policy, and make sure the firewall group with that policy is attached to a port
* Log in to the neutron network node that has the netns for your router and run ip6tables-save

Observe that your rule is added like:
-A neutron-l3-agent-iv63872a6fc -s 2001:db8:1d00:13::/64 -p icmp -j neutron-l3-agent-accepted

It should have added:
-A neutron-l3-agent-iv63872a6fc -s 2001:db8:1d00:13::/64 -p ipv6-icmp -j neutron-l3-agent-accepted

Ubuntu 18.04
neutron-l3-agent 2:13.0.4-0ubuntu1~cloud0
python-neutron-fwaas 1:13.0.2-0ubuntu1~cloud0

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1843025/+subscriptions
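The difference between the observed and expected rules comes down to one protocol-name translation: ip6tables does not accept `-p icmp` for ICMPv6 traffic, it wants `ipv6-icmp`. A minimal sketch of that mapping (the function name is illustrative, not neutron-fwaas's actual code):

```python
# Sketch of the protocol fix the report implies: when rendering a rule
# for IPv6, rewrite the generic 'icmp' protocol to 'ipv6-icmp' so that
# ip6tables matches ICMPv6 packets.

def iptables_protocol(protocol, ip_version):
    """Return the protocol token to emit in an (ip6)tables rule."""
    if ip_version == 6 and protocol == "icmp":
        return "ipv6-icmp"
    return protocol


def render_rule(chain, source, protocol, ip_version, target):
    return "-A %s -s %s -p %s -j %s" % (
        chain, source, iptables_protocol(protocol, ip_version), target)


# Reproduces the expected rule from the bug description:
print(render_rule("neutron-l3-agent-iv63872a6fc",
                  "2001:db8:1d00:13::/64", "icmp", 6,
                  "neutron-l3-agent-accepted"))
```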
[Yahoo-eng-team] [Bug 1843584] [NEW] cloudinit/net/sysconfig.py lacks support for openSUSE 15.x and Tumbleweed
Public bug reported:

On openSUSE 15.x and Tumbleweed, network config fails due to a missing network renderer. Adding 'opensuse-leap' and 'opensuse-tumbleweed' to `KNOWN_DISTROS` solves the problem. Please extend `KNOWN_DISTROS` to support newer versions of openSUSE.

** Affects: cloud-init
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1843584

Title: cloudinit/net/sysconfig.py lacks support for openSUSE 15.x and Tumbleweed

Status in cloud-init: New

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1843584/+subscriptions
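The requested change is a one-line list extension. The sketch below shows the shape of it; the list contents and the `applies_to` helper are approximations of cloudinit/net/sysconfig.py, not the exact upstream source:

```python
# Sketch of the proposed fix: the sysconfig renderer only activates for
# distros named in KNOWN_DISTROS, so the new openSUSE identifiers must
# be added for Leap 15.x and Tumbleweed to get a network renderer.
KNOWN_DISTROS = ['centos', 'fedora', 'rhel', 'suse', 'sles', 'opensuse']

# The proposed addition:
KNOWN_DISTROS += ['opensuse-leap', 'opensuse-tumbleweed']


def applies_to(distro_name):
    """Would the sysconfig renderer handle this distro?"""
    return distro_name in KNOWN_DISTROS


assert applies_to('opensuse-leap')
assert applies_to('opensuse-tumbleweed')
```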
[Yahoo-eng-team] [Bug 1843368] Re: objname error in QosPolicy
Reviewed:  https://review.opendev.org/681158
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=2c83e0750954798024212062a20bb5f90788ab2e
Submitter: Zuul
Branch:    master

commit 2c83e0750954798024212062a20bb5f90788ab2e
Author: zhanghao2
Date: Tue Aug 13 17:11:08 2019 -0400

    Fix objname error in QosPolicy

    This patch fixes objname error in class QosPolicy.

    Change-Id: Idc67b2d8f7cca19f59f39b9c884d8cec2c12e867
    Closes-Bug: #1843368

** Changed in: neutron
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1843368

Title: objname error in QosPolicy

Status in neutron: Fix Released

Bug description:

class QosPolicy(rbac_db.NeutronRbacObject):

    def obj_make_compatible(self, primitive, target_version):
        if _target_version < (1, 3):
            standard_fields = ['revision_number', 'created_at', 'updated_at']
            for f in standard_fields:
                primitive.pop(f)
            if primitive['description'] is None:
                # description was not nullable before
                raise exception.IncompatibleObjectVersion(
                    objver=target_version, objname='QoSPolicy')

objname should be QosPolicy.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1843368/+subscriptions
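For context, the fix amounts to correcting the class name string passed to the exception. The sketch below is a simplified, self-contained illustration of the corrected downgrade logic; `IncompatibleObjectVersion` here is a stand-in for neutron's real exception class, and the version handling is reduced to plain tuples:

```python
# Simplified sketch of the corrected obj_make_compatible: objname must
# match the actual class name 'QosPolicy' (the bug was the stray capital
# 'S' in 'QoSPolicy').

class IncompatibleObjectVersion(Exception):
    def __init__(self, objver, objname):
        super().__init__("%s cannot be downgraded to %s" % (objname, objver))
        self.objname = objname


class QosPolicy:
    def obj_make_compatible(self, primitive, target_version):
        if target_version < (1, 3):
            # Versions before 1.3 lacked the standard attributes.
            for f in ['revision_number', 'created_at', 'updated_at']:
                primitive.pop(f, None)
            if primitive.get('description') is None:
                # description was not nullable before 1.3
                raise IncompatibleObjectVersion(
                    objver=target_version, objname='QosPolicy')
```

Getting `objname` right matters because the string is what operators see in the error message when a rolling upgrade hits an old consumer.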
[Yahoo-eng-team] [Bug 1843574] [NEW] nova diagnostics shows cpu_utilization output as null
Public bug reported:

When executing "nova diagnostics " the output of cpu_utilization shows null.

** Affects: nova
   Importance: Undecided
   Status: New

** Description changed:

  When executing "nova diagnostics " the output of
- cpu_utilization is null.
+ cpu_utilization shows null.

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1843574

Title: nova diagnostics shows cpu_utilization output as null

Status in OpenStack Compute (nova): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1843574/+subscriptions
[Yahoo-eng-team] [Bug 1843576] [NEW] Glance metadefs is missing Image property hw_vif_multiqueue_enabled
Public bug reported:

Glance metadefs is missing Nova image property: hw_vif_multiqueue_enabled true|false

** Affects: glance
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1843576

Title: Glance metadefs is missing Image property hw_vif_multiqueue_enabled

Status in Glance: New

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1843576/+subscriptions
[Yahoo-eng-team] [Bug 1843542] [NEW] Flavor property "hw_rng:rate_period" should be milliseconds
Public bug reported:

Error in "doc/source/user/flavors.rst", in the "Random-number generator" section.

Currently reads: "RATE-PERIOD: (integer) Duration of the read period in seconds."
Should read:    "RATE-PERIOD: (integer) Duration of the read period in milliseconds."

Please see https://libvirt.org/formatdomain.html#elementsRng for reference.

Either the documentation needs to be updated, or nova needs to convert the given value into milliseconds before passing it to libvirt.

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1843542

Title: Flavor property "hw_rng:rate_period" should be milliseconds

Status in OpenStack Compute (nova): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1843542/+subscriptions
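If nova chose the second option (convert rather than re-document), the change would be a trivial unit conversion before handing the value to libvirt. The helper below is purely illustrative, not a proposed nova function:

```python
# Hypothetical conversion for the second fix option: take the flavor's
# hw_rng:rate_period value as documented (seconds) and produce the
# milliseconds that libvirt's <rate period="..."/> attribute expects.

def rate_period_to_ms(rate_period_seconds):
    """Convert a rate period given in seconds to milliseconds."""
    return int(rate_period_seconds * 1000)


assert rate_period_to_ms(2) == 2000
```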
[Yahoo-eng-team] [Bug 1843541] [NEW] Flavors in nova
Public bug reported:

This doc is inaccurate in this way: "hw_rng:rate_period" is actually in *milliseconds*. See https://libvirt.org/formatdomain.html#elementsRng

---
Release: 16.1.9.dev7 on 2019-08-23 17:35
SHA: 6e2ec5c5a397b508cddb646e865b6f4d2132f1d6
Source: https://git.openstack.org/cgit/openstack/nova/tree/doc/source/admin/flavors.rst
URL: https://docs.openstack.org/nova/pike/admin/flavors.html

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1843541

Title: Flavors in nova

Status in OpenStack Compute (nova): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1843541/+subscriptions
[Yahoo-eng-team] [Bug 1813265] Re: Documentation should use endpoints with path /identity instead of port 5000
** Also affects: ubuntu
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1813265

Title: Documentation should use endpoints with path /identity instead of port 5000

Status in OpenStack Identity (keystone): Triaged
Status in Ubuntu: New

Bug description:
In devstack we configure keystone to run on port 80/443, proxied through the /identity URL path. We semi-officially recommend doing the same in production, but all of our documentation points to using port 5000 with no path. We should update the documentation to use the recommended endpoint configuration.

Note that keystone and horizon are commonly co-located, and horizon by default runs on port 80/443 with no URL path, so the documentation will need to explain how to configure apache/nginx/haproxy such that horizon and keystone don't collide.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1813265/+subscriptions