[Yahoo-eng-team] [Bug 1864160] [NEW] mete_date shows region information
Public bug reported:

Description
===========
I want region_name to be shown in the metadata. That is, when running
'curl http://169.254.169.254/openstack/2013-04-04/mete_date.json' inside
an instance, the current instance's region (region_name) should be
included in the output. Do you have any suggestions or ideas? Thank you
very much.

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1864160

Title: mete_date shows region information

Status in OpenStack Compute (nova): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1864160/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1864159] [NEW] metedate
Public bug reported:

Description
===========
I want region_name to be shown in the metadata. That is, when running
'curl http://169.254.169.254/openstack/2013-04-04/mete_date.json' inside
an instance, the current instance's region (region_name) should be
included in the output. Do you have any suggestions or ideas? Thank you
very much.

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1864159

Title: metedate

Status in OpenStack Compute (nova): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1864159/+subscriptions
[Yahoo-eng-team] [Bug 1863021] Re: eventlet monkey patch results in assert len(_active) == 1 AssertionError
This bug was fixed in the package nova -
2:21.0.0~b2~git2020021008.1fcd74730d-0ubuntu2

---------------
nova (2:21.0.0~b2~git2020021008.1fcd74730d-0ubuntu2) focal;
urgency=medium

  * d/p/monkey-patch-original-current-thread-active.patch:
    Cherry-picked from https://review.opendev.org/#/c/707474/. This
    fixes nova service failures that autopkgtests are hitting with
    Python 3.8 (LP: #1863021).

 -- Corey Bryant  Thu, 20 Feb 2020 09:35:53 -0500

** Changed in: nova (Ubuntu)
   Status: Triaged => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1863021

Title: eventlet monkey patch results in assert len(_active) == 1
AssertionError

Status in OpenStack Compute (nova): In Progress
Status in nova package in Ubuntu: Fix Released

Bug description:
This appears to be the same issue documented here:
https://github.com/eventlet/eventlet/issues/592
However, I seem to only hit this with Python 3.8.
Basically nova services fail with:

Exception ignored in:
Traceback (most recent call last):
  File "/usr/lib/python3.8/threading.py", line 1454, in _after_fork
    assert len(_active) == 1
AssertionError:
Exception ignored in:
Traceback (most recent call last):
  File "/usr/lib/python3.8/threading.py", line 1454, in _after_fork
    assert len(_active) == 1
AssertionError:

Patching nova/monkey_patch.py with the following appears to fix this:

diff --git a/nova/monkey_patch.py b/nova/monkey_patch.py
index a07ff91dac..bb7252c643 100644
--- a/nova/monkey_patch.py
+++ b/nova/monkey_patch.py
@@ -59,6 +59,9 @@ def _monkey_patch():
     else:
         eventlet.monkey_patch()
 
+    import __original_module_threading
+    import threading
+    __original_module_threading.current_thread.__globals__['_active'] = threading._active
     # NOTE(rpodolyaka): import oslo_service first, so that it makes eventlet
     # hub use a monotonic clock to avoid issues with drifts of system time (see

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1863021/+subscriptions
[Yahoo-eng-team] [Bug 1864122] [NEW] Instances (bare metal) queue for long time when managing a large amount of Ironic nodes
Public bug reported:

Description
===========
We have two deployments, one with ~150 bare metal nodes and another with
~300. These are each managed by one nova-compute process running the
Ironic driver.

After upgrading from the Ocata release, we noticed that instance
launches would be stuck in the spawning state for a long time, up to 30
minutes to an hour in some cases. After investigation, the root cause
appeared to be contention between the update_resources periodic task and
the instance claim step. There is one semaphore, "compute_resources",
that is used to control every access within the resource_tracker.

In our case, the update_resources job, which runs every minute by
default, was constantly queuing up accesses to this semaphore, because
each hypervisor is updated independently, in series. This meant that
each Ironic node was being processed in turn and held the semaphore
during its update (which took about 2-5 seconds in practice). Multiply
this by 150 and our update task was running constantly. Because an
instance claim also needs to acquire this semaphore, instances got stuck
in the "Build" state, after scheduling, for tens of minutes on average.

There seemed to be some probabilistic effect here, which I hypothesize
is related to the locking mechanism not using a "fair" (first-come,
first-served) lock by default.

Steps to reproduce
==================
I suspect this is only visible on deployments of >100 Ironic nodes or so
(and they have to be managed by one nova-compute-ironic service). Due to
the non-deterministic nature of the lock, the behavior is sporadic, but
launching an instance is enough to observe it.

Expected result
===============
Instance proceeds to the networking phase of creation after <60 seconds.

Actual result
=============
Instance stuck in BUILD state for 30-60 minutes before proceeding to the
networking phase.

Environment
===========
1. Exact version of OpenStack you are running: Nova 20.0.1
   (see the following list for all releases:
   http://docs.openstack.org/releases/)
2. Which hypervisor did you use? Ironic
3. Which storage type did you use? N/A
4. Which networking type did you use? Neutron/OVS

Logs & Configs
==============

Links
=====
First report, on openstack-discuss:
http://lists.openstack.org/pipermail/openstack-discuss/2019-May/006192.html

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1864122

Title: Instances (bare metal) queue for long time when managing a large
amount of Ironic nodes

Status in OpenStack Compute (nova): New
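[Editor's note] The fix that eventually landed in nova used fair (FIFO) locks via oslo.concurrency, as I understand it. As an independent illustration of what "fair" means here, a first-come, first-served lock can be sketched with only the stdlib; this is a toy sketch for illustration, not nova's implementation:

```python
import collections
import threading

class FairLock:
    """A first-come, first-served lock: waiters acquire in FIFO order,
    so a steady stream of periodic-task acquisitions cannot starve an
    instance-claim thread that queued up earlier."""

    def __init__(self):
        self._mutex = threading.Lock()       # protects the waiter queue
        self._waiters = collections.deque()  # one Event per waiter, FIFO

    def acquire(self):
        ev = threading.Event()
        with self._mutex:
            self._waiters.append(ev)
            if len(self._waiters) == 1:
                ev.set()  # the lock was free; proceed immediately
        ev.wait()

    def release(self):
        with self._mutex:
            # the releaser is always the head of the queue
            self._waiters.popleft()
            if self._waiters:
                self._waiters[0].set()  # wake the next waiter, in order
```

With an ordinary (unfair) semaphore, whichever thread happens to win the wakeup race gets the lock, which matches the sporadic multi-minute waits described above; the FIFO queue removes that race.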
[Yahoo-eng-team] [Bug 1860789] Re: ssh_authkey_fingerprints must use sha256 not md5
This bug is believed to be fixed in cloud-init in version 20.1. If this
is still a problem for you, please make a comment and set the state back
to New. Thank you.

** Also affects: cloud-init
   Importance: Undecided
   Status: New

** Changed in: cloud-init
   Status: New => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1860789

Title: ssh_authkey_fingerprints must use sha256 not md5

Status in cloud-init: Fix Released
Status in cloud-init package in Ubuntu: Fix Released

Bug description:
ssh_authkey_fingerprints must use sha256sum rather than md5 on Focal and
up. Or perhaps both should be shown, because old ssh clients (like some
ssh clients on Windows, etc.) might only show md5 checksums. If you
switch to showing both, the change can then be backported to all stable
releases, as md5 is no longer secure for this purpose.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1860789/+subscriptions
[Yahoo-eng-team] [Bug 1860795] Re: cc_set_passwords is too short for RANDOM
This bug is believed to be fixed in cloud-init in version 20.1. If this
is still a problem for you, please make a comment and set the state back
to New. Thank you.

** Also affects: cloud-init
   Importance: Undecided
   Status: New

** Changed in: cloud-init
   Status: New => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1860795

Title: cc_set_passwords is too short for RANDOM

Status in cloud-init: Fix Released
Status in cloud-init package in Ubuntu: Fix Released

Bug description:

PW_SET = (''.join([x for x in ascii_letters + digits
                   if x not in 'loLOI01']))

def rand_user_password(pwlen=9):
    return util.rand_str(pwlen, select_from=PW_SET)

len(PW_SET) is 55, and log2(55^20) is 115 bits, which is above 112 and
matches the default OpenSSL SECLEVEL=2 setting in Focal Fossa. Please
bump pwlen to 20, as 9 characters is crackable (it provides 52 bits of
security, which is less than SECLEVEL 0). I'm about to use this on a
mainframe, which by definition can crack that.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1860795/+subscriptions
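[Editor's note] The entropy arithmetic in the report can be checked directly; a quick sketch, with PW_SET reconstructed from the snippet quoted above:

```python
import math
import string

# Reconstructed from the report: letters and digits minus the
# easily-confused characters 'loLOI01'.
PW_SET = ''.join(x for x in string.ascii_letters + string.digits
                 if x not in 'loLOI01')

def entropy_bits(pwlen):
    # A uniformly random pwlen-character password over an alphabet of
    # size N carries log2(N ** pwlen) = pwlen * log2(N) bits of entropy.
    return pwlen * math.log2(len(PW_SET))

# len(PW_SET) == 55; pwlen=9 gives ~52 bits, pwlen=20 gives ~115 bits,
# which is the report's case for raising the default length.
```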
[Yahoo-eng-team] [Bug 1861412] Re: cloud-init crashes with static network configuration
This bug is believed to be fixed in cloud-init in version 20.1. If this
is still a problem for you, please make a comment and set the state back
to New. Thank you.

** Also affects: cloud-init
   Importance: Undecided
   Status: New

** Changed in: cloud-init
   Status: New => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1861412

Title: cloud-init crashes with static network configuration

Status in cloud-init: Fix Released
Status in Ubuntu on IBM z Systems: Fix Released
Status in cloud-init package in Ubuntu: Fix Released

Bug description:
I am booting an s390x VM with a VLAN and static ip= configuration on the
kernel command line. It appears that cloud-init is trying to parse the
ipconfig results, and is failing.

Attaching:
- cmdline: /proc/cmdline
- net-encc000.2653.conf: the /run/net-encc000.2653.conf klibc ipconfig
  state file
- encc000.2653.yaml: /run/netplan/encc000.2653.yaml, which
  initramfs-tools generates but cloud-init is not using
- cloud-init-output.log & cloud-init.log: showing a crash traceback

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1861412/+subscriptions
[Yahoo-eng-team] [Bug 1863943] Re: do not log imdsv2 token headers
This bug is believed to be fixed in cloud-init in version 20.1. If this
is still a problem for you, please make a comment and set the state back
to New. Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1863943

Title: do not log imdsv2 token headers

Status in cloud-init: Fix Released

Bug description:
Cloud-init currently logs all headers when processing URLs. On EC2, this
includes the IMDSv2 token negotiation and all IMDS interactions. Seeing
the headers in the log is quite useful, especially for confirming
whether cloud-init is using IMDSv2 or not; however, the actual value of
the token does not aid in this effort.

Reported by: https://github.com/ishug86

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1863943/+subscriptions
[Yahoo-eng-team] [Bug 1863954] Re: Release 20.1
This bug is believed to be fixed in cloud-init in version 20.1. If this
is still a problem for you, please make a comment and set the state back
to New. Thank you.

** Changed in: cloud-init
   Status: New => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1863954

Title: Release 20.1

Status in cloud-init: Fix Released

Bug description:

== Release Notes ==

Cloud-init release 20.1 is now available

The 20.1 release:
 * spanned about 9 weeks
 * had 19 contributors from 19 domains
 * fixed 13 Launchpad issues

Highlights:
 - Python 2 support has been dropped
 - A number of FreeBSD improvements landed
 - Two (low priority) CVEs were addressed:
   - utils: use SystemRandom when generating random password
     (CVE-2020-8631)
   - cc_set_password: increase random pwlength from 9 to 20
     (CVE-2020-8632)

== Changelog ==

 - ec2: Do not log IMDSv2 token values, instead use REDACTED (#219)
   (LP: #1863943)
 - utils: use SystemRandom when generating random password. (#204)
   [Dimitri John Ledkov]
 - docs: mount_default_files is a list of 6 items, not 7 (#212)
 - azurecloud: fix issues with instances not starting (#205)
   (LP: #1861921)
 - unittest: fix stderr leak in cc_set_password random unittest
   output. (#208)
 - cc_disk_setup: add swap filesystem force flag (#207)
 - import sysvinit patches from freebsd-ports tree (#161) [Igor Galić]
 - docs: fix typo (#195) [Edwin Kofler]
 - sysconfig: distro-specific config rendering for BOOTPROTO option
   (#162) [Robert Schweikert] (LP: #1800854)
 - cloudinit: replace "from six import X" imports (except in util.py)
   (#183)
 - run-container: use 'test -n' instead of 'test ! -z' (#202)
   [Paride Legovini]
 - net/cmdline: correctly handle static ip= config (#201)
   [Dimitri John Ledkov] (LP: #1861412)
 - Replace mock library with unittest.mock (#186)
 - HACKING.rst: update CLA link (#199)
 - Scaleway: Fix DatasourceScaleway to avoid backtrace (#128)
   [Louis Bouchard]
 - cloudinit/cmd/devel/net_convert.py: add missing space (#191)
 - tools/run-container: drop support for python2 (#192)
   [Paride Legovini]
 - Print ssh key fingerprints using sha256 hash (#188) (LP: #1860789)
 - Make the RPM build use Python 3 (#190) [Paride Legovini]
 - cc_set_password: increase random pwlength from 9 to 20 (#189)
   (LP: #1860795)
 - .travis.yml: use correct Python version for xenial tests (#185)
 - cloudinit: remove ImportError handling for mock imports (#182)
 - Do not use fallocate in swap file creation on xfs. (#70)
   [Eduardo Otubo] (LP: #1781781)
 - .readthedocs.yaml: install cloud-init when building docs (#181)
   (LP: #1860450)
 - Introduce an RTD config file, and pin the Sphinx version to the RTD
   default (#180)
 - Drop most of the remaining use of six (#179)
 - Start removing dependency on six (#178)
 - Add Rootbox & HyperOne to list of cloud in README (#176)
   [Adam Dobrawy]
 - docs: add proposed SRU testing procedure (#167)
 - util: rename get_architecture to get_dpkg_architecture (#173)
 - Ensure util.get_architecture() runs only once (#172)
 - Only use gpart if it is the BSD gpart (#131) [Conrad Hoffmann]
 - freebsd: remove superflu exception mapping (#166) [Gonéri Le Bouder]
 - ssh_auth_key_fingerprints_disable test: fix capitalization (#165)
   [Paride Legovini]
 - util: move uptime's else branch into its own boottime function (#53)
   [Igor Galić] (LP: #1853160)
 - workflows: add contributor license agreement checker (#155)
 - net: fix rendering of 'static6' in network config (#77)
   (LP: #1850988)
 - Make tests work with Python 3.8 (#139) [Conrad Hoffmann]
 - fixed minor bug with mkswap in cc_disk_setup.py (#143) [andreaf74]
 - freebsd: fix create_group() cmd (#146) [Gonéri Le Bouder]
 - doc: make apt_update example consistent (#154)
 - doc: add modules page toc with links (#153) (LP: #1852456)
 - Add support for the amazon variant in cloud.cfg.tmpl (#119)
   [Frederick Lefebvre]
 - ci: remove Python 2.7 from CI runs (#137)
 - modules: drop cc_snap_config config module (#134)
 - migrate-lp-user-to-github: ensure Launchpad repo exists (#136)
 - docs: add initial troubleshooting to FAQ (#104) [Joshua Powers]
 - doc: update cc_set_hostname frequency and descrip (#109)
   [Joshua Powers] (LP: #1827021)
 - freebsd: introduce the freebsd renderer (#61) [Gonéri Le Bouder]
 - cc_snappy: remove deprecated module (#127)
 - HACKING.rst: clarify that everyone needs to do the LP->GH dance
   (#130)
 - freebsd: cloudinit service requires devd (#132) [Gonéri Le Bouder]
 - cloud-init: fix capitalisation of SSH (#126)
 - doc: update cc_ssh clarify host and auth keys [Joshua Powers]
   (LP: #1827021)
 - ci: emit names of tests run in Travis (#120)

To manage notifications about this bug go to: https:/
[Yahoo-eng-team] [Bug 1861921] Re: cloud tests ssh time out
This bug is believed to be fixed in cloud-init in version 20.1. If this
is still a problem for you, please make a comment and set the state back
to New. Thank you.

** Changed in: cloud-init
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1861921

Title: cloud tests ssh time out

Status in cloud-init: Fix Released

Bug description:
The function platforms/instance._wait_for_system causes an ssh timeout
on Azure cloud tests in instance.start after initializing the VM. Here
are the tracebacks before and after removing it.

Before (FULL):
https://paste.ubuntu.com/p/zJ3DNKxRTx/
https://paste.ubuntu.com/p/QyBnfrhVSs/

After removing (FULL):
https://paste.ubuntu.com/p/JfksGJyRYc/
https://paste.ubuntu.com/p/gxKZffv2zd/

Something in platforms/instance._wait_for_system is causing this; I
don't want to mess with it because it's also used in EC2 and other cloud
tests. I found the easiest way to reproduce this is by going idle.

-- Added verbose logging to investigate. Thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1861921/+subscriptions
[Yahoo-eng-team] [Bug 1781781] Re: /swap.img w/fallocate has holes
This bug is believed to be fixed in cloud-init in version 20.1. If this
is still a problem for you, please make a comment and set the state back
to New. Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1781781

Title: /swap.img w/fallocate has holes

Status in cloud-init: Fix Released
Status in curtin: Confirmed

Bug description:
The /swap.img file on an XFS root filesystem is not being used. dmesg
says it "has holes". From the swapon manpage:

    The swap file implementation in the kernel expects to be able to
    write to the file directly, without the assistance of the
    filesystem. This is a problem on preallocated files (e.g.
    fallocate(1)) on filesystems like XFS or ext4, and on copy-on-write
    filesystems like btrfs. It is recommended to use dd(1) and /dev/zero
    to avoid holes on XFS and ext4.

I've tracked down this commit, which seems to be a step in the right
direction:
https://github.com/CanonicalLtd/curtin/commit/a909966f9c235282267024e86adf404dab83ccfe

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1781781/+subscriptions
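[Editor's note] For reference, the dd-based procedure that swapon(8) recommends can be sketched as follows; the path and size are illustrative, and the mkswap/swapon steps require root:

```shell
# Create the swap file by actually writing zeros, so the filesystem
# allocates real blocks and the file has no holes (unlike fallocate,
# which may leave unwritten extents that the swap code cannot use).
SWAPFILE=${SWAPFILE:-/tmp/swap.img}
dd if=/dev/zero of="$SWAPFILE" bs=1M count=64 status=none
chmod 600 "$SWAPFILE"
# Then, as root:
#   mkswap "$SWAPFILE"
#   swapon "$SWAPFILE"
```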
[Yahoo-eng-team] [Bug 1850988] Re: [Cloud-init 18.5][CentOS 7 on vSphere] Crash when configuring static dual-stack (IPv4 + IPv6) networking
This bug is believed to be fixed in cloud-init in version 20.1. If this
is still a problem for you, please make a comment and set the state back
to New. Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1850988

Title: [Cloud-init 18.5][CentOS 7 on vSphere] Crash when configuring
static dual-stack (IPv4 + IPv6) networking

Status in cloud-init: Fix Released

Bug description:
Environment:
- Stock CentOS 7 image template (comes with OpenVM tools) with
  cloud-init 18.5 installed
- Single-NIC VM
- vSphere 6.5 hypervisor

Repro steps:
- Customize the VM with a vSphere customization spec that has a NIC
  setting with static IPv4 and IPv6 information
- OpenVM tools running inside the guest will delegate guest
  customization to cloud-init
- Cloud-init crashes with ValueError: Unknown subnet type 'static6'
  found for interface 'ens192'

See the following relevant excerpts and stacktrace (found in
/var/log/cloudinit.log):

[...snip...]
2019-11-01 02:23:41,899 - DataSourceOVF.py[DEBUG]: Found VMware Customization Config File at /var/run/vmware-imc/cust.cfg
2019-11-01 02:23:41,899 - config_file.py[INFO]: Parsing the config file /var/run/vmware-imc/cust.cfg.
2019-11-01 02:23:41,900 - config_file.py[DEBUG]: FOUND CATEGORY = 'NETWORK'
2019-11-01 02:23:41,900 - config_file.py[DEBUG]: ADDED KEY-VAL :: 'NETWORK|NETWORKING' = 'yes'
2019-11-01 02:23:41,900 - config_file.py[DEBUG]: ADDED KEY-VAL :: 'NETWORK|BOOTPROTO' = 'dhcp'
2019-11-01 02:23:41,900 - config_file.py[DEBUG]: ADDED KEY-VAL :: 'NETWORK|HOSTNAME' = 'pr-centos-ci'
2019-11-01 02:23:41,900 - config_file.py[DEBUG]: ADDED KEY-VAL :: 'NETWORK|DOMAINNAME' = 'gsslabs.local'
2019-11-01 02:23:41,900 - config_file.py[DEBUG]: FOUND CATEGORY = 'NIC-CONFIG'
2019-11-01 02:23:41,900 - config_file.py[DEBUG]: ADDED KEY-VAL :: 'NIC-CONFIG|NICS' = 'NIC1'
2019-11-01 02:23:41,900 - config_file.py[DEBUG]: FOUND CATEGORY = 'NIC1'
2019-11-01 02:23:41,902 - config_file.py[DEBUG]: ADDED KEY-VAL :: 'NIC1|MACADDR' = '00:50:56:89:b7:48'
2019-11-01 02:23:41,902 - config_file.py[DEBUG]: ADDED KEY-VAL :: 'NIC1|ONBOOT' = 'yes'
2019-11-01 02:23:41,902 - config_file.py[DEBUG]: ADDED KEY-VAL :: 'NIC1|IPv4_MODE' = 'BACKWARDS_COMPATIBLE'
2019-11-01 02:23:41,902 - config_file.py[DEBUG]: ADDED KEY-VAL :: 'NIC1|BOOTPROTO' = 'static'
2019-11-01 02:23:41,902 - config_file.py[DEBUG]: ADDED KEY-VAL :: 'NIC1|IPADDR' = '1.1.1.4'
2019-11-01 02:23:41,902 - config_file.py[DEBUG]: ADDED KEY-VAL :: 'NIC1|NETMASK' = '255.255.255.0'
2019-11-01 02:23:41,902 - config_file.py[DEBUG]: ADDED KEY-VAL :: 'NIC1|IPv6ADDR|1' = '2600::10'
2019-11-01 02:23:41,902 - config_file.py[DEBUG]: ADDED KEY-VAL :: 'NIC1|IPv6NETMASK|1' = '64'
2019-11-01 02:23:41,903 - config_file.py[DEBUG]: ADDED KEY-VAL :: 'NIC1|IPv6GATEWAY|1' = '2600::1'
2019-11-01 02:23:41,903 - config_file.py[DEBUG]: FOUND CATEGORY = 'DNS'
2019-11-01 02:23:41,903 - config_file.py[DEBUG]: ADDED KEY-VAL :: 'DNS|DNSFROMDHCP' = 'no'
2019-11-01 02:23:41,904 - config_file.py[DEBUG]: ADDED KEY-VAL :: 'DNS|SUFFIX|1' = 'sqa.local'
2019-11-01 02:23:41,904 - config_file.py[DEBUG]: ADDED KEY-VAL :: 'DNS|NAMESERVER|1' = '192.168.0.10'
2019-11-01 02:23:41,904 - config_file.py[DEBUG]: ADDED KEY-VAL :: 'DNS|NAMESERVER|2' = 'fc00:10:118:192:250:56ff:fe89:64a8'
2019-11-01 02:23:41,904 - config_file.py[DEBUG]: FOUND CATEGORY = 'DATETIME'
2019-11-01 02:23:41,904 - config_file.py[DEBUG]: ADDED KEY-VAL :: 'DATETIME|TIMEZONE' = 'Asia/Kolkata'
2019-11-01 02:23:41,904 - config_file.py[DEBUG]: ADDED KEY-VAL :: 'DATETIME|UTC' = 'no'
2019-11-01 02:23:41,904 - DataSourceOVF.py[DEBUG]: Preparing the Network configuration
2019-11-01 02:23:41,907 - util.py[DEBUG]: Running command ['ip', 'addr', 'show'] with allowed return codes [0] (shell=False, capture=True)
2019-11-01 02:23:41,926 - config_nic.py[INFO]: Configuring the interfaces file
2019-11-01 02:23:41,927 - config_nic.py[INFO]: Debian OS not detected. Skipping the configure step
2019-11-01 02:23:41,927 - util.py[DEBUG]: Recursively deleting /var/run/vmware-imc
[...snip...]
2019-11-01 02:23:43,225 - stages.py[INFO]: Applying network configuration from ds bringup=False: {'version': 1, 'config': [{'subnets': [{'control': 'auto', 'netmask': '255.255.255.0', 'type': 'static', 'address': '1.1.1.4'}, {'netmask': '64', 'type': 'static6', 'address': '2600::10'}], 'type': 'physical', 'name': u'ens192', 'mac_address': '00:50:56:89:b7:48'}, {'search': ['sqa.local'], 'type': 'nameserver', 'address': ['192.168.0.10', 'fc00:10:118:192:250:56ff:fe89:64a8']}]}
2019-11-01 02:23:43,226 - __init__.py[DEBUG]: Selected renderer 'sysconfig' from priority list: None
2019-11-01 02:23:43,244 - util.py[WARNING]: failed stage init-local
2019
[Yahoo-eng-team] [Bug 1800854] Re: BOOTPROTO handling between RHEL/Centos/Fedora and SUSE distros is different
This bug is believed to be fixed in cloud-init in version 20.1. If this
is still a problem for you, please make a comment and set the state back
to New. Thank you.

** Changed in: cloud-init
   Status: Triaged => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1800854

Title: BOOTPROTO handling between RHEL/Centos/Fedora and SUSE distros is
different

Status in cloud-init: Fix Released

Bug description:
It looks like we need to figure out how to do distribution-specific
handling in sysconfig.py for the file content anyway. For a static
network configuration on openSUSE and SLES, BOOTPROTO must be set to
"static", but based on the comment in sysconfig.py:

    # grep BOOTPROTO sysconfig.txt -A2 | head -3
    # BOOTPROTO=none|bootp|dhcp
    # 'bootp' or 'dhcp' cause a DHCP client
    # to run on the device. Any other
    # value causes any static configuration
    # in the file to be applied.
    # ==> the following should not be set to 'static'
    # but should remain 'none'
    # if iface_cfg['BOOTPROTO'] == 'none':
    #     iface_cfg['BOOTPROTO'] = 'static'

this might cause trouble on RHEL/CentOS/Fedora.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1800854/+subscriptions
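[Editor's note] The distro-specific behaviour this bug asks for can be illustrated with a small sketch; the function name and "distro family" argument are mine for illustration, not cloud-init's actual renderer API:

```python
def bootproto_for_static(distro_family):
    """Return the BOOTPROTO value to render for a statically
    configured interface, by distro family.

    SUSE's network scripts require BOOTPROTO=static, while on
    RHEL/CentOS/Fedora any value other than 'bootp'/'dhcp' is treated
    as static, with 'none' being the conventional choice (per the
    sysconfig.txt comment quoted in the report above).
    """
    return 'static' if distro_family == 'suse' else 'none'
```

This is the kind of per-distro branch the eventual fix (#162, "distro-specific config rendering for BOOTPROTO option") introduced.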
[Yahoo-eng-team] [Bug 1852456] Re: doc: list of modules is no longer present
This bug is believed to be fixed in cloud-init in version 20.1. If this
is still a problem for you, please make a comment and set the state back
to New. Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1852456

Title: doc: list of modules is no longer present

Status in cloud-init: Fix Released

Bug description:
The list of modules has disappeared from the documentation sidebar.
- version 19.3: https://cloudinit.readthedocs.io/en/19.3/topics/modules.html
- version 19.2: https://cloudinit.readthedocs.io/en/19.2/topics/modules.html
In 19.2, the sidebar has an entry for each module, and it's a lot easier
to find and navigate to the appropriate module.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1852456/+subscriptions
[Yahoo-eng-team] [Bug 1853160] Re: uptime code does not work on FreeBSD with python 3
** Changed in: cloud-init
   Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1853160

Title: uptime code does not work on FreeBSD with python 3

Status in cloud-init: Fix Released

Bug description:
The uptime code in cloudinit/util.py does not work for FreeBSD (any
more):
https://github.com/canonical/cloud-init/blob/3baabe76a70b28abeee2da77826a35e27cf9019a/cloudinit/util.py#L1806-L1825
(link to the code in question at the time of reporting)

root@container-host-02:~ # python3.6
Python 3.6.9 (default, Oct 24 2019, 01:18:01)
[GCC 4.2.1 Compatible FreeBSD Clang 6.0.1 (tags/RELEASE_601/final 335540)] on freebsd12
Type "help", "copyright", "credits" or "license" for more information.
>>> import ctypes
>>> libc = ctypes.CDLL('/lib/libc.so.7')
>>> import time
>>> size = ctypes.c_size_t()
>>> buf = ctypes.c_int()
>>> size.value = ctypes.sizeof(buf)
>>> libc.sysctlbyname("kern.boottime", ctypes.byref(buf), ctypes.byref(size), None, 0)
-1
>>>
root@container-host-02:~ #

And here's what happens when we ask for kern.boottime via sysctl(8):

root@container-host-02:~ # sysctl kern.boottime
kern.boottime: { sec = 1573656128, usec = 384300 } Wed Nov 13 14:42:08 2019
root@container-host-02:~ #

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1853160/+subscriptions
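[Editor's note] The -1 return in the session above is consistent with the caller's buffer being too small: the sysctl output shows kern.boottime is a struct timeval (sec + usec), not a single int. A portable sketch of the size mismatch, assuming the 64-bit field layout typical of modern FreeBSD:

```python
import ctypes

class Timeval(ctypes.Structure):
    # Assumed layout of struct timeval on a 64-bit platform; FreeBSD's
    # kern.boottime sysctl returns one of these (sec + usec), as the
    # sysctl(8) output in the report shows.
    _fields_ = [("tv_sec", ctypes.c_int64), ("tv_usec", ctypes.c_int64)]

# sysctlbyname() fails with -1 when the supplied buffer is smaller than
# the value being read; a 4-byte c_int cannot hold a 16-byte timeval,
# so the session above had to fail regardless of the Python version.
assert ctypes.sizeof(ctypes.c_int) == 4
assert ctypes.sizeof(Timeval) == 16
```

Passing a `Timeval` (and its size) instead of a `c_int` is the shape of the fix; the boottime refactor referenced in the 20.1 changelog (#53) moved this logic into its own function.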
[Yahoo-eng-team] [Bug 1862633] Re: unshelve leak allocation if update port fails
Reviewed: https://review.opendev.org/706868 Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=e65d4a131a7ebc02261f5df69fa1b394a502f268 Submitter: Zuul Branch:master commit e65d4a131a7ebc02261f5df69fa1b394a502f268 Author: Balazs Gibizer Date: Mon Feb 10 15:48:04 2020 +0100 Clean up allocation if unshelve fails due to neutron When port binding update fails during unshelve of a shelve offloaded instance compute manager has to catch the exception and clean up the destination host allocation. Change-Id: I4c3fbb213e023ac16efc0b8561f975a659311684 Closes-Bug: #1862633 ** Changed in: nova Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1862633 Title: unshelve leak allocation if update port fails Status in OpenStack Compute (nova): Fix Released Bug description: If updating the port binding during unshelve of an offloaded server fails then nova leaks the placement allocation. Steps to reproduce == 1) boot a server with a neutron port 2) shelve and offload the server 3) disable the original host of the server to force scheduling during unshelve to select a different host. This is important as it triggers a non-empty port update during unshelve 4) unshelve the server and inject a network fault in the communication between nova and neutron. You can try to simply shut down neutron-server at the right moment as well. Right means just before the target compute tries to send the port update 5) observe that the unshelve fails, the server goes back to offloaded state, but the placement allocation on the target host remains. Triage: the problem is caused by missing fault handling code in the compute manager [1]. The compute manager has proper error handling if the unshelve fails in the virt driver spawn call, but it does not handle failure if the neutron communication fails.
The compute manager method simply logs and re-raises the neutron exceptions. This means that the exception is dropped as the unshelve_instance compute RPC is a cast. [1] https://github.com/openstack/nova/blob/1fcd74730d343b7cee12a0a50ea537dc4ff87f65/nova/compute/manager.py#L6473 To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1862633/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
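The shape of the fix can be illustrated with a small stand-alone sketch (the class and method names here are placeholders, not the real nova API): the neutron call is wrapped so that a failure cleans up the destination-host allocation before the exception is re-raised, since the RPC cast means no caller will ever do it.

```python
class UnshelveManager:
    """Toy model of the error-handling pattern the fix adds."""

    def __init__(self, network_api, placement):
        self.network_api = network_api
        self.placement = placement

    def unshelve(self, instance, host):
        # Allocation is claimed on the target host before networking is set up.
        allocation = self.placement.claim(instance, host)
        try:
            # This is the call that may fail if neutron is unreachable.
            self.network_api.update_port_binding(instance, host)
        except Exception:
            # Without this cleanup the allocation on the target host would
            # leak: the RPC is a cast, so the caller never sees the error.
            self.placement.delete_allocation(allocation)
            raise
```

The same try/except-cleanup-reraise shape already existed around the virt driver spawn call; the fix extends it to the neutron communication path.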
[Yahoo-eng-team] [Bug 1863982] Re: Upgrade from Rocky to Stein, router namespace disappear
** Also affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1863982 Title: Upgrade from Rocky to Stein, router namespace disappear Status in kolla-ansible: Confirmed Status in neutron: New Bug description: Upgrade All-in-one from Rocky to Stein. The upgrade finished, but the router namespace disappears. Before: ip netns list qrouter-79658dd5-e3b4-4b13-a361-16d696ed1d1c (id: 1) qdhcp-4a183162-64f5-49f9-a615-7c0fd63cf2a8 (id: 0) After: ip netns list After about 1 minute, the dhcp ns has appeared with no error on dhcp-agent, but the qrouter ns is still missing until manually restarting the docker container l3-agent. l3-agent error after upgrade: 2020-02-20 02:57:07.306 12 INFO neutron.common.config [-] Logging enabled! 2020-02-20 02:57:07.308 12 INFO neutron.common.config [-] /var/lib/kolla/venv/bin/neutron-l3-agent version 14.0.4 2020-02-20 02:57:08.616 12 INFO neutron.agent.l3.agent [req-95654890-dab3-4106-b56d-c2685fb96f29 - - - - -] Agent HA routers count 0 2020-02-20 02:57:08.619 12 INFO neutron.agent.agent_extensions_manager [req-95654890-dab3-4106-b56d-c2685fb96f29 - - - - -] Loaded agent extensions: [] 2020-02-20 02:57:08.657 12 INFO eventlet.wsgi.server [-] (12) wsgi starting up on http:/var/lib/neutron/keepalived-state-change 2020-02-20 02:57:08.710 12 INFO neutron.agent.l3.agent [-] L3 agent started 2020-02-20 02:57:10.716 12 INFO oslo.privsep.daemon [req-681aad3f-ae14-4315-b96d-5e95225cdf92 - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpg8Ihqa/privsep.sock'] 2020-02-20 02:57:11.750 12 INFO oslo.privsep.daemon [req-681aad3f-ae14-4315-b96d-5e95225cdf92 - - - - -] Spawned new privsep daemon via rootwrap 2020-02-20 02:57:11.614 29 INFO oslo.privsep.daemon [-] privsep daemon
starting 2020-02-20 02:57:11.622 29 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0 2020-02-20 02:57:11.627 29 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN/none 2020-02-20 02:57:11.628 29 INFO oslo.privsep.daemon [-] privsep daemon running as pid 29 2020-02-20 02:57:14.449 12 INFO neutron.agent.l3.agent [-] Starting router update for 79658dd5-e3b4-4b13-a361-16d696ed1d1c, action 3, priority 2, update_id 49908db7-8a8c-410f-84a7-9e95a3dede16. Wait time elapsed: 0.000 2020-02-20 02:57:24.160 12 ERROR neutron.agent.linux.utils [-] Exit code: 4; Stdin: # Generated by iptables_manager 2020-02-20 02:57:26.388 12 ERROR neutron.agent.l3.router_info self.process_floating_ip_address_scope_rules() 2020-02-20 02:57:26.388 12 ERROR neutron.agent.l3.router_info File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__ 2020-02-20 02:57:26.388 12 ERROR neutron.agent.l3.router_info self.gen.next() 2020-02-20 02:57:26.388 12 ERROR neutron.agent.l3.router_info File "/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py", line 438, in defer_apply 2020-02-20 02:57:26.388 12 ERROR neutron.agent.l3.router_info raise l3_exc.IpTablesApplyException(msg) 2020-02-20 02:57:26.388 12 ERROR neutron.agent.l3.router_info IpTablesApplyException: Failure applying iptables rules 2020-02-20 02:57:26.388 12 ERROR neutron.agent.l3.router_info 2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent [-] Failed to process compatible router: 79658dd5-e3b4-4b13-a361-16d696ed1d1c: IpTablesApplyException: Failure applying iptables rules 2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent Traceback (most recent call last): 2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent File "/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 723, in 
_process_routers_if_compatible 2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent self._process_router_if_compatible(router) 2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent File "/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 567, in _process_router_if_compatible 2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent self._process_added_router(router) 2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent File "/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 575, in _process_added_router 2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent ri.process() 2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent File
[Yahoo-eng-team] [Bug 1492140] Re: [OSSA-2020-001] Nova can leak consoleauth token into log files (CVE-2015-9543)
** Also affects: nova/pike Importance: Undecided Status: New ** Changed in: nova/pike Importance: Undecided => Low ** Changed in: nova/pike Status: New => In Progress ** Changed in: nova/pike Assignee: (unassigned) => Balazs Gibizer (gibizer) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1492140 Title: [OSSA-2020-001] Nova can leak consoleauth token into log files (CVE-2015-9543) Status in OpenStack Compute (nova): Fix Released Status in OpenStack Compute (nova) pike series: In Progress Status in OpenStack Compute (nova) queens series: Fix Committed Status in OpenStack Compute (nova) rocky series: Fix Committed Status in OpenStack Compute (nova) stein series: Fix Released Status in OpenStack Compute (nova) train series: Fix Released Status in oslo.utils: Fix Released Status in OpenStack Security Advisory: Fix Released Bug description: When the instance console is accessed, the auth token is displayed in nova-consoleauth.log nova-consoleauth.log:874:2015-09-02 14:20:36 29941 INFO nova.consoleauth.manager [req-6bc7c116-5681-43ee-828d-4b8ff9d566d0 fe3cd6b7b56f44c9a0d3f5f2546ad4db 37b377441b174b8ba2deda6a6221e399] Received Token: f8ea537c-b924-4d92-935e-4c22ec90d5f7, {'instance_uuid': u'dd29a899-0076-4978-aa50-8fb752f0c3ed', 'access_url': u'http://192.168.245.9:6080/vnc_auto.html?token=f8ea537c-b924-4d92-935e-4c22ec90d5f7', 'token': u'f8ea537c-b924-4d92-935e-4c22ec90d5f7', 'last_activity_at': 1441203636.387588, 'internal_access_path': None, 'console_type': u'novnc', 'host': u'192.168.245.6', 'port': u'5900'} nova-consoleauth.log:881:2015-09-02 14:20:52 29941 INFO nova.consoleauth.manager [req-a29ab7d8-ab26-4ef2-b942-9bb02d5703a0 None None] Checking Token: f8ea537c-b924-4d92-935e-4c22ec90d5f7, True and nova-novncproxy.log:30:2015-09-02 14:20:52 31927 INFO nova.console.websocketproxy [req-a29ab7d8-ab26-4ef2-b942-9bb02d5703a0 None None] 3: connect info:
{u'instance_uuid': u'dd29a899-0076-4978-aa50-8fb752f0c3ed', u'internal_access_path': None, u'last_activity_at': 1441203636.387588, u'console_type': u'novnc', u'host': u'192.168.245.6', u'token': u'f8ea537c-b924-4d92-935e-4c22ec90d5f7', u'access_url': u'http://192.168.245.9:6080/vnc_auto.html?token=f8ea537c-b924-4d92-935e-4c22ec90d5f7', u'port': u'5900'} This token has a short lifetime but the exposure still represents a potential security weakness, especially as the log records in question are INFO level and thus available via centralized logging. A user with real-time access to these records could mount a denial-of-service attack by accessing the instance console and performing a Ctrl-Alt-Del to reboot it. Alternatively, data privacy could be compromised if the attacker were able to obtain user credentials. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1492140/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
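A generic way to avoid this class of leak is to redact token-like fields before the connect-info dict ever reaches a log call. The fix for this bug landed in nova and oslo.utils; the sketch below is only an illustration of the idea, and the key list is illustrative, not the one the projects actually use:

```python
# Illustrative set of fields to redact; the token also appears embedded
# in access_url, so that field must be masked as well.
SENSITIVE_KEYS = {'token', 'access_url'}


def mask_sensitive(info, mask='***'):
    """Return a copy of a connect-info dict that is safe to log at INFO."""
    masked = {}
    for key, value in info.items():
        masked[key] = mask if key in SENSITIVE_KEYS else value
    return masked
```

The logging call site then becomes something like LOG.info("Received Token: %s", mask_sensitive(connect_info)) instead of interpolating the raw dict.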
[Yahoo-eng-team] [Bug 1841700] Please test proposed package
Hello Jorge, or anyone else affected, Accepted neutron into ocata-proposed. The package will build now and be available in the Ubuntu Cloud Archive in a few hours, and then in the -proposed repository. Please help us by testing this new package. To enable the -proposed repository: sudo add-apt-repository cloud-archive:ocata-proposed sudo apt-get update Your feedback will aid us in getting this update out to other Ubuntu users. If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-ocata-needed to verification-ocata-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-ocata-failed. In either case, details of your testing will help us make a better decision. Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance! ** Changed in: cloud-archive/ocata Status: Won't Fix => Fix Committed ** Tags removed: verification-ocata-failed ** Tags added: verification-ocata-needed -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1841700 Title: instance ingress bandwidth limiting doesn't works in ocata. Status in Ubuntu Cloud Archive: Invalid Status in Ubuntu Cloud Archive ocata series: Fix Committed Status in neutron: Fix Released Status in neutron package in Ubuntu: Fix Released Status in neutron source package in Xenial: Invalid Bug description: [Environment] Xenial-Ocata deployment [Description] The instance ingress bandwidth limit implementation was targeted for Ocata [0], but the full ingress/egress implementation was done during the pike [1] cycle. However, it isn't reported or made explicit that the ingress direction isn't supported in ocata, which causes the following exception when --ingress is specified.
It would be desirable for this feature to be available on Ocata for being able to set ingress/egress bandwidth limits on the ports. [Testing] Without these patches, trying to set an ingress bandwidth-limit rule, the following exception will be raised. $ openstack network qos rule create --type bandwidth-limit --max-kbps 300 --max-burst-kbits 300 --ingress bw-limiter Failed to create Network QoS rule: BadRequestException: 400: Client Error for url: https://openstack:9696/v2.0/qos/policies//bandwidth_limit_rules, Unrecognized attribute(s) 'direction' A single policy set (without the --ingress parameter) as supported in Ocata will just create a limiter on the egress side. 1) Check the policy list $ openstack network qos policy list +--+++-+--+ | ID | Name | Shared | Default | Project | +--+++-+--+ | 2c9c85e2-4b65-4146-b7bf-47895379c938 | bw-limiter | False | None | c45b1c0a681d4d9788f911e29166056d | +--+++-+--+ 2) Check that the QoS rule is set to 300 kbps. $ openstack network qos rule list 2c9c85e2-4b65-4146-b7bf-47895379c938 | 01eb228d-5803-4095-9e8e-f13d4312b2ef | 2c9c85e2-4b65-4146-b7bf-47895379c938 | bandwidth_limit | 300 | 300 | | | | 3) Set the QoS policy on any port.
$ openstack port set 9a74b3c8-9ed8-4670-ad1f-932febfcf059 --qos-policy 2c9c85e2-4b65-4146-b7bf-47895379c938 $ openstack port show 9a74b3c8-9ed8-4670-ad1f-932febfcf059 | grep qos | qos_policy_id | 2c9c85e2-4b65-4146-b7bf-47895379c938 | 4) Check that the egress traffic rules have been applied # iperf3 -c 192.168.21.9 -t 10 Connecting to host 192.168.21.9, port 5201 [ 4] local 192.168.21.3 port 34528 connected to 192.168.21.9 port 5201 [ ID] Interval Transfer Bandwidth Retr Cwnd [ 4] 0.00-1.00 sec 121 KBytes 988 Kbits/sec 23 2.44 KBytes [ 4] 7.00-8.00 sec 40.2 KBytes 330 Kbits/sec 14 3.66 KBytes [ 4] 8.00-9.00 sec 36.6 KBytes 299 Kbits/sec 15 2.44 KBytes [ 4] 9.00-10.00 sec 39.0 KBytes 320 Kbits/sec 18 3.66 KBytes - - - - - - - - - - - - - - - - - - - - - - - - - [ ID] Interval Transfer Bandwidth Retr [ 4] 0.00-10.00 sec 435 KBytes 356 Kbits/sec 159 sender [ 4] 0.00-10.00 sec 384 KBytes 314 Kbits/sec receiver iperf Done. 5) Check that no ingress traffic limit has been applied. # iperf3 -c 192.168.21.9 -R -t 10 Connecting to host 192.168.21.9, port 5201 Reverse mode, remote host 192.168.21.9 is sending [ 4] local 192.168.21.3 port 34524 connected to 192.168.21.9 port 5201 [ ID] Interval Transfer Bandwidth [ 4] 0.00-1.00 sec 38.1 MBytes 319 Mbits/sec [ 4] 8.00-9.00 sec 74.6 MBytes 626 Mbits/sec [ 4] 9.00-10.00 sec 73.2 MBytes 614 Mbits/
[Yahoo-eng-team] [Bug 1861876] Re: [Neutron API] Neutron Floating IP not always have 'port_details'
** Changed in: neutron Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez) ** Changed in: neutron Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1861876 Title: [Neutron API] Neutron Floating IP not always have 'port_details' Status in neutron: Fix Released Status in OpenStack Compute (nova): Fix Released Bug description: 1) The Neutron Floating IP 'port_details' key is part of an extension. That means it will not always be present in the Floating IP dictionary. Since [1], [2] assumes that this key is always present; it should be checked first. 2) The same patch [1] also introduced an error when retrieving the network ID [3]. The network ID is stored in a key named "floating_network_id" [4]. n-api log example: http://paste.openstack.org/show/789108/ CI job failing (breaking Neutron CI!!!): https://0ebef4bf8afa09d1c4c9-5e4b426cf1ca8d9cb4613ee1042e28ab.ssl.cf5.rackcdn.com/704530/4/check/neutron-ovn-tempest-ovs-release/c0a29b4/testr_results.html [1] https://review.opendev.org/#/c/697153/ [2] https://review.opendev.org/#/c/697153/16/nova/api/openstack/compute/floating_ips.py@40 [3] https://review.opendev.org/#/c/697153/16/nova/api/openstack/compute/floating_ips.py@47 [4] https://github.com/openstack/neutron/blob/master/neutron/db/l3_db.py#L1030-L1037 To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1861876/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
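Both problems reduce to defensive dict handling on the nova side. A hedged sketch of the translation (the output field names are illustrative; 'port_details', 'floating_ip_address' and 'floating_network_id' are the real neutron keys):

```python
def translate_floating_ip(fip):
    """Build a nova-style view of a neutron floating IP dict.

    'port_details' comes from an optional neutron extension, so it may
    be absent entirely; the network id lives under 'floating_network_id',
    not 'network_id'.
    """
    port_details = fip.get('port_details') or {}
    return {
        'ip': fip['floating_ip_address'],
        'network_id': fip['floating_network_id'],   # bug 2: wrong key used
        'instance_id': port_details.get('device_id'),  # bug 1: may be missing
    }
```

With this shape, a cloud that does not load the port-details extension simply yields instance_id = None instead of a KeyError in n-api.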
[Yahoo-eng-team] [Bug 1816468] Re: [SRU] Acceleration cinder - glance with ceph not working
nova 2:19.0.1-0ubuntu2 has been released for a while now. ** Changed in: cloud-archive/stein Status: Fix Committed => Fix Released ** Changed in: cloud-archive Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1816468 Title: [SRU] Acceleration cinder - glance with ceph not working Status in Cinder: Fix Released Status in Ubuntu Cloud Archive: Fix Released Status in Ubuntu Cloud Archive rocky series: Fix Released Status in Ubuntu Cloud Archive stein series: Fix Released Status in Ubuntu Cloud Archive train series: Fix Released Status in OpenStack Compute (nova): Fix Released Status in cinder package in Ubuntu: Fix Released Status in nova package in Ubuntu: Fix Released Status in cinder source package in Cosmic: Won't Fix Status in nova source package in Cosmic: Won't Fix Status in cinder source package in Disco: Fix Released Status in nova source package in Disco: Fix Released Status in nova source package in Eoan: Fix Released Bug description: [Impact] For >= rocky (i.e. if using py3 packages) librados.cluster.get_fsid() is returning a binary string, which means that the fsid can't be matched against a string version of the same value from glance when deciding whether to use an image that is stored in Ceph. [Test Case] * deploy openstack rocky (using py3 packages) * deploy ceph and use for glance backend * set /etc/glance/glance-api.conf:show_multiple_locations = True /etc/glance/glance-api.conf:show_image_direct_url = True * upload image to glance * attempt to boot an instance using this image * confirm that instance booted properly and check that the image it booted from is a cow clone of the glance image by doing the following in ceph: rbd -p nova info | grep parent: * confirm that you see "parent: glance/@snap" [Regression Potential] None expected [Other Info] None expected.
When using cinder and glance with ceph, the code supports creating volumes from images INSIDE the ceph environment as copy-on-write clones. This option saves space in the ceph cluster and increases the speed of instance spawning because the volume is created directly in ceph. <= THIS IS NOT WORKING IN PY3 If this function is not enabled, the image is copied to the compute host, converted, turned into a volume, and uploaded to ceph (which is time-consuming, of course). The problem is that even if glance-cinder acceleration is turned on, the code executes as if it were disabled, so the same as above: copy image, create volume, upload to ceph... BUT it should create the copy-on-write volume inside ceph internally. <= THIS IS A BUG IN PY3 Glance config ( controller ): [DEFAULT] show_image_direct_url = true <= this has to be set to true to reproduce issue workers = 7 transport_url = rabbit://openstack:openstack@openstack-db [cors] [database] connection = mysql+pymysql://glance:Eew7shai@openstack-db:3306/glance [glance_store] stores = file,rbd default_store = rbd filesystem_store_datadir = /var/lib/glance/images rbd_store_pool = images rbd_store_user = images rbd_store_ceph_conf = /etc/ceph/ceph.conf [image_format] [keystone_authtoken] auth_url = http://openstack-ctrl:35357 project_name = service project_domain_name = default username = glance user_domain_name = default password = Eew7shai www_authenticate_uri = http://openstack-ctrl:5000 auth_uri = http://openstack-ctrl:35357 cache = swift.cache region_name = RegionOne auth_type = password [matchmaker_redis] [oslo_concurrency] lock_path = /var/lock/glance [oslo_messaging_amqp] [oslo_messaging_kafka] [oslo_messaging_notifications] [oslo_messaging_rabbit] [oslo_messaging_zmq] [oslo_middleware] [oslo_policy] [paste_deploy] flavor = keystone [store_type_location_strategy] [task] [taskflow_executor] [profiler] enabled = true trace_sqlalchemy = true hmac_keys = secret connection_string = redis://127.0.0.1:6379 trace_wsgi_transport = True
trace_message_store = True trace_management_store = True Cinder conf (controller) : root@openstack-controller:/tmp# cat /etc/cinder/cinder.conf | grep -v '^#' | awk NF [DEFAULT] my_ip = 192.168.10.15 glance_api_servers = http://openstack-ctrl:9292 auth_strategy = keystone enabled_backends = rbd osapi_volume_workers = 7 debug = true transport_url = rabbit://openstack:openstack@openstack-db [backend] [backend_defaults] rbd_pool = volumes rbd_user = volumes1 rbd_secret_uuid = b2efeb49-9844-475b-92ad-5df4a3e1300e volume_driver = cinder.volume.drivers.rbd.RBDDriver [barbican] [brcd_fabric_example] [cisco_fabric_example] [coordination] [cors] [database] connection = my
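The [Impact] section above describes a classic Python 3 pitfall: b'uuid' == 'uuid' is False, so the fsid comparison silently fails and the copy-on-write fast path is never taken. A minimal sketch of the normalization the fix needed (the function name is illustrative):

```python
def fsid_matches(cluster_fsid, image_fsid):
    """Compare a librados fsid against the fsid parsed from a glance URL.

    Under Python 3 librados may hand back bytes; decode before comparing,
    otherwise the comparison is always False and nova falls back to the
    slow download/convert/upload path.
    """
    if isinstance(cluster_fsid, bytes):
        cluster_fsid = cluster_fsid.decode('utf-8')
    if isinstance(image_fsid, bytes):
        image_fsid = image_fsid.decode('utf-8')
    return cluster_fsid == image_fsid
```

Under Python 2 both values were str, which is why the acceleration worked there and only broke with the py3 packages.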
[Yahoo-eng-team] [Bug 1841700] Re: instance ingress bandwidth limiting doesn't works in ocata.
@jorge, thanks. I've uploaded neutron 2:10.0.7-0ubuntu1~cloud3 which will revert them. ** Changed in: cloud-archive/ocata Status: Fix Committed => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1841700 Title: instance ingress bandwidth limiting doesn't works in ocata. Status in Ubuntu Cloud Archive: Invalid Status in Ubuntu Cloud Archive ocata series: Won't Fix Status in neutron: Fix Released Status in neutron package in Ubuntu: Fix Released Status in neutron source package in Xenial: Invalid Bug description: [Environment] Xenial-Ocata deployment [Description] The instance ingress bandwidth limit implementation was targeted for Ocata [0], but the full ingress/egress implementation was done during the pike [1] cycle. However, it isn't reported or made explicit that the ingress direction isn't supported in ocata, which causes the following exception when --ingress is specified. It would be desirable for this feature to be available on Ocata for being able to set ingress/egress bandwidth limits on the ports. [Testing] Without these patches, trying to set an ingress bandwidth-limit rule, the following exception will be raised. $ openstack network qos rule create --type bandwidth-limit --max-kbps 300 --max-burst-kbits 300 --ingress bw-limiter Failed to create Network QoS rule: BadRequestException: 400: Client Error for url: https://openstack:9696/v2.0/qos/policies//bandwidth_limit_rules, Unrecognized attribute(s) 'direction' A single policy set (without the --ingress parameter) as supported in Ocata will just create a limiter on the egress side. 1) Check the policy list $ openstack network qos policy list +--+++-+--+ | ID | Name | Shared | Default | Project | +--+++-+--+ | 2c9c85e2-4b65-4146-b7bf-47895379c938 | bw-limiter | False | None | c45b1c0a681d4d9788f911e29166056d | +--+++-+--+ 2) Check that the QoS rule is set to 300 kbps.
$ openstack network qos rule list 2c9c85e2-4b65-4146-b7bf-47895379c938 | 01eb228d-5803-4095-9e8e-f13d4312b2ef | 2c9c85e2-4b65-4146-b7bf-47895379c938 | bandwidth_limit | 300 | 300 | | | | 3) Set the QoS policy on any port. $ openstack port set 9a74b3c8-9ed8-4670-ad1f-932febfcf059 --qos-policy 2c9c85e2-4b65-4146-b7bf-47895379c938 $ openstack port show 9a74b3c8-9ed8-4670-ad1f-932febfcf059 | grep qos | qos_policy_id | 2c9c85e2-4b65-4146-b7bf-47895379c938 | 4) Check that the egress traffic rules have been applied # iperf3 -c 192.168.21.9 -t 10 Connecting to host 192.168.21.9, port 5201 [ 4] local 192.168.21.3 port 34528 connected to 192.168.21.9 port 5201 [ ID] Interval Transfer Bandwidth Retr Cwnd [ 4] 0.00-1.00 sec 121 KBytes 988 Kbits/sec 23 2.44 KBytes [ 4] 7.00-8.00 sec 40.2 KBytes 330 Kbits/sec 14 3.66 KBytes [ 4] 8.00-9.00 sec 36.6 KBytes 299 Kbits/sec 15 2.44 KBytes [ 4] 9.00-10.00 sec 39.0 KBytes 320 Kbits/sec 18 3.66 KBytes - - - - - - - - - - - - - - - - - - - - - - - - - [ ID] Interval Transfer Bandwidth Retr [ 4] 0.00-10.00 sec 435 KBytes 356 Kbits/sec 159 sender [ 4] 0.00-10.00 sec 384 KBytes 314 Kbits/sec receiver iperf Done. 5) Check that no ingress traffic limit has been applied.
# iperf3 -c 192.168.21.9 -R -t 10 Connecting to host 192.168.21.9, port 5201 Reverse mode, remote host 192.168.21.9 is sending [ 4] local 192.168.21.3 port 34524 connected to 192.168.21.9 port 5201 [ ID] Interval Transfer Bandwidth [ 4] 0.00-1.00 sec 38.1 MBytes 319 Mbits/sec [ 4] 8.00-9.00 sec 74.6 MBytes 626 Mbits/sec [ 4] 9.00-10.00 sec 73.2 MBytes 614 Mbits/sec - - - - - - - - - - - - - - - - - - - - - - - - - [ ID] Interval Transfer Bandwidth Retr [ 4] 0.00-10.00 sec 1.07 GBytes 918 Mbits/sec 1045 sender [ 4] 0.00-10.00 sec 1.07 GBytes 916 Mbits/sec receiver ---> 6) With the patches applied from the PPA or proposed, run the migration steps on the neutron-api node, repeat the previous steps, but make sure to specify the traffic direction with --ingress as follows: $ openstack network qos rule create --type bandwidth-limit --max-kbps 300 --ingress testing-policy ++--+ | Field | Value | ++--+ | direction | ingress | | id | 6d01cefa-0042-40cd-ae74-bcb723ca7ca4 | | max_burst_kbps | 0 | | max_kbps | 300 | | name | None | | project_id | | ++--+ 7) Set the policy into any server port. $ openstack port set 50b8f714-3ee4-4260-8359-8
[Yahoo-eng-team] [Bug 1864027] [NEW] [OVN] DHCP doesn't work while instance has disabled port security
Public bug reported: While an instance has port security disabled, it is not able to reach the DHCP service. It looks like the change [1] introduced this regression. The port has [unknown] address set: root@mjozefcz-ovn-train-lb:~# ovn-nbctl list logical_switch_port a09a1ac7-62ad-46ad-b802-c4abf65dcf70 _uuid : 32a741bc-a185-4291-8b36-dc9c387bb662 addresses : [unknown] dhcpv4_options : 7c94ec89-3144-4920-b624-193d968c637a dhcpv6_options : [] dynamic_addresses : [] enabled : true external_ids: {"neutron:cidrs"="10.2.1.134/24", "neutron:device_id"="9f4a705f-b438-4da1-975d-1a0cdf81e124", "neutron:device_owner"="compute:nova", "neutron:network_name"=neutron-cd1ee69d-06b6-4502-ba26-e1280fd66ad9, "neutron:port_fip"="172.24.4.132", "neutron:port_name"="", "neutron:project_id"="98b165bfeeca4efd84724f3118d84f6f", "neutron:revision_number"="4", "neutron:security_group_ids"=""} ha_chassis_group: [] name: "a09a1ac7-62ad-46ad-b802-c4abf65dcf70" options : {requested-chassis=mjozefcz-ovn-train-lb} parent_name : [] port_security : [] tag : [] tag_request : [] type: "" up : true ovn-controller doesn't respond to DHCP requests. It was caught by the failing OVN Provider driver tempest test: octavia_tempest_plugin.tests.scenario.v2.test_traffic_ops.TrafficOperationsScenarioTest [1] https://review.opendev.org/#/c/702249/ ** Affects: neutron Importance: Undecided Status: New ** Tags: ovn -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1864027 Title: [OVN] DHCP doesn't work while instance has disabled port security Status in neutron: New Bug description: While an instance has port security disabled, it is not able to reach the DHCP service. It looks like the change [1] introduced this regression.
The port has [unknown] address set: root@mjozefcz-ovn-train-lb:~# ovn-nbctl list logical_switch_port a09a1ac7-62ad-46ad-b802-c4abf65dcf70 _uuid : 32a741bc-a185-4291-8b36-dc9c387bb662 addresses : [unknown] dhcpv4_options : 7c94ec89-3144-4920-b624-193d968c637a dhcpv6_options : [] dynamic_addresses : [] enabled : true external_ids: {"neutron:cidrs"="10.2.1.134/24", "neutron:device_id"="9f4a705f-b438-4da1-975d-1a0cdf81e124", "neutron:device_owner"="compute:nova", "neutron:network_name"=neutron-cd1ee69d-06b6-4502-ba26-e1280fd66ad9, "neutron:port_fip"="172.24.4.132", "neutron:port_name"="", "neutron:project_id"="98b165bfeeca4efd84724f3118d84f6f", "neutron:revision_number"="4", "neutron:security_group_ids"=""} ha_chassis_group: [] name: "a09a1ac7-62ad-46ad-b802-c4abf65dcf70" options : {requested-chassis=mjozefcz-ovn-train-lb} parent_name : [] port_security : [] tag : [] tag_request : [] type: "" up : true ovn-controller doesn't respond to DHCP requests. It was caught by the failing OVN Provider driver tempest test: octavia_tempest_plugin.tests.scenario.v2.test_traffic_ops.TrafficOperationsScenarioTest [1] https://review.opendev.org/#/c/702249/ To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1864027/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
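The tell-tale sign is the addresses column containing only [unknown]: with no MAC/IP pair listed, OVN has no binding from which to answer DHCP for the port. A simplified sketch of the behaviour the neutron driver needs (this is an illustration of the expected addresses value, not the actual networking-ovn patch): keep the port's MAC/IP in addresses so DHCP still works, and append "unknown" only in addition, to relax source-address checking when port security is off.

```python
def ovn_addresses(port):
    """Compute the OVN Logical_Switch_Port 'addresses' entry for a port.

    Even with port security disabled, the MAC/IP pair must stay in
    'addresses' so ovn-controller can answer DHCP; 'unknown' is appended
    so traffic from other addresses is still allowed through.
    """
    addresses = []
    if port["fixed_ips"]:
        ips = " ".join(ip["ip_address"] for ip in port["fixed_ips"])
        addresses.append("%s %s" % (port["mac_address"], ips))
    if not port["port_security_enabled"]:
        addresses.append("unknown")
    return addresses
```

For the port quoted above, that would yield ["fa:16:3e:xx:xx:xx 10.2.1.134", "unknown"] instead of the bare [unknown] that regressed DHCP.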
[Yahoo-eng-team] [Bug 1533087] Re: there is useless 'u' in the wrong info when execute a wrong nova command
This should not be an issue with Python 3, which is all we support now. Closing as a result. ** Changed in: python-novaclient Status: In Progress => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1533087 Title: there is useless 'u' in the wrong info when execute a wrong nova command Status in OpenStack Compute (nova): Invalid Status in python-novaclient: Won't Fix Bug description: [Summary] A useless 'u' prefix appears in the error message when a wrong nova command is executed. [Topo] devstack all-in-one node [Description and expected result] No useless 'u' prefix should appear in the error message when a wrong nova command is executed. [Reproducible or not] reproducible [Recreate Steps] 1) there is a useless 'u' prefix in the error message when executing a wrong nova command: root@45-59:/opt/stack/devstack# nova wrongcmd usage: nova [--version] [--debug] [--os-cache] [--timings] [--os-region-name ] [--service-type ] [--service-name ] [--volume-service-name ] [--os-endpoint-type ] [--os-compute-api-version ] [--bypass-url ] [--insecure] [--os-cacert ] [--os-cert ] [--os-key ] [--timeout ] [--os-auth-type ] [--os-auth-url OS_AUTH_URL] [--os-domain-id OS_DOMAIN_ID] [--os-domain-name OS_DOMAIN_NAME] [--os-project-id OS_PROJECT_ID] [--os-project-name OS_PROJECT_NAME] [--os-project-domain-id OS_PROJECT_DOMAIN_ID] [--os-project-domain-name OS_PROJECT_DOMAIN_NAME] [--os-trust-id OS_TRUST_ID] [--os-default-domain-id OS_DEFAULT_DOMAIN_ID] [--os-default-domain-name OS_DEFAULT_DOMAIN_NAME] [--os-user-id OS_USER_ID] [--os-user-name OS_USERNAME] [--os-user-domain-id OS_USER_DOMAIN_ID] [--os-user-domain-name OS_USER_DOMAIN_NAME] [--os-password OS_PASSWORD] ... error: argument : invalid choice: u'wrongcmd' ISSUE Try 'nova help ' for more information.
root@45-59:/opt/stack/devstack# 2) Below is a correct example for reference: root@45-59:/opt/stack/devstack# keystone wrongcmd usage: keystone [--version] [--debug] [--os-username ] [--os-password ] [--os-tenant-name ] [--os-tenant-id ] [--os-auth-url ] [--os-region-name ] [--os-identity-api-version ] [--os-token ] [--os-endpoint ] [--os-cache] [--force-new-token] [--stale-duration ] [--insecure] [--os-cacert ] [--os-cert ] [--os-key ] [--timeout ] ... keystone: error: argument : invalid choice: 'wrongcmd' [Configuration] reproducible bug, no need [logs] reproducible bug, no need [Root cause analysis or debug info] reproducible bug [Attachment] None To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1533087/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
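The stray prefix comes from argparse interpolating the invalid choice with %r: under Python 2 a unicode string reprs as u'wrongcmd', while under Python 3 the same value is an ordinary str and reprs cleanly, which is why the bug evaporates there. A two-line demonstration:

```python
# argparse builds its "invalid choice" message with %r interpolation.
choice = u'wrongcmd'
message = "invalid choice: %r" % choice
# Python 2: "invalid choice: u'wrongcmd'"  <- the reported 'u'
# Python 3: "invalid choice: 'wrongcmd'"   <- prefix gone
```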
[Yahoo-eng-team] [Bug 1864020] [NEW] libvirt.libvirtError: Requested operation is not valid: format of backing image %s of image %s was not specified in the image metadata (See https://libvirt.org/kba
Public bug reported:

The following was discovered using Fedora 30 and a virt-preview job in the below change:

  zuul: Add the fedora-latest-virt-preview job to the experimental queue
  https://review.opendev.org/#/c/704573/

Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR nova.compute.manager [None req-7efa9e8b-3c21-4787-8b47-54cab5fe3756 tempest-AggregatesAdminTestJSON-76056319 tempest-AggregatesAdminTestJSON-76056319] [instance: 543723fb-3afc-460c-9139-809bcacd1840] Instance failed to spawn: libvirt.libvirtError: Requested operation is not valid: format of backing image '/opt/stack/data/nova/instances/_base/8e0569aaf1cbdb522514c3dc9d0fa8fad6f78c50' of image '/opt/stack/data/nova/instances/543723fb-3afc-460c-9139-809bcacd1840/disk' was not specified in the image metadata (See https://libvirt.org/kbase/backing_chains.html for troubleshooting)
Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840] Traceback (most recent call last):
Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840]   File "/opt/stack/nova/nova/compute/manager.py", line 2604, in _build_resources
Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840]     yield resources
Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840]   File "/opt/stack/nova/nova/compute/manager.py", line 2377, in _build_and_run_instance
Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840]     block_device_info=block_device_info)
Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 3399, in spawn
Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840]     power_on=power_on)
Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6193, in _create_domain_and_network
Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840]     destroy_disks_on_failure)
Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840]   File "/usr/local/lib/python3.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840]     self.force_reraise()
Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840]   File "/usr/local/lib/python3.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840]     six.reraise(self.type_, self.value, self.tb)
Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840]   File "/usr/local/lib/python3.7/site-packages/six.py", line 703, in reraise
Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840]     raise value
Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6165, in _create_domain_and_network
Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840]     post_xml_callback=post_xml_callback)
Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6106, in _create_domain
Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840]     guest.launch(pause=pause)
Feb 19 16:45:21.405351 fedora-30-rax-ord-0014691277 nova-compute[2019]: ERROR nova.compute.manager [instance: 543723fb-3afc-460c-9139-809bcacd1840]   File
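Recent libvirt no longer probes a backing file's format (probing was a security risk), so the format must already be recorded in the overlay's qcow2 header or supplied in the backing-chain metadata. One way a deployer could check whether an image header carries that record is via the JSON output of `qemu-img info`; the helper names and paths below are illustrative, not Nova code:

```python
import json
import subprocess


def backing_format(info):
    """Given parsed `qemu-img info --output=json` data, return the backing
    format recorded in the image header, or None when it was never written
    (the case newer libvirt rejects instead of probing)."""
    return info.get("backing-filename-format")


def image_info(path):
    # Illustrative: shell out to qemu-img to read the image header.
    out = subprocess.check_output(["qemu-img", "info", "--output=json", path])
    return json.loads(out)
```

An image where `backing_format(image_info(path))` returns None for an overlay would trigger exactly the libvirtError above when the guest is defined.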
[Yahoo-eng-team] [Bug 1864014] [NEW] Upgrade from Rocky to Stein, router namespace disappear
Public bug reported:

Upgraded an all-in-one deployment from Rocky to Stein. The upgrade finished, but the router namespace disappeared.

Before:
  ip netns list
  qrouter-79658dd5-e3b4-4b13-a361-16d696ed1d1c (id: 1)
  qdhcp-4a183162-64f5-49f9-a615-7c0fd63cf2a8 (id: 0)

After:
  ip netns list

About 1 minute later the dhcp namespace reappeared with no error on the dhcp-agent, but the qrouter namespace was still missing until the l3-agent docker container was restarted manually.

l3-agent error after upgrade:

2020-02-20 02:57:07.306 12 INFO neutron.common.config [-] Logging enabled!
2020-02-20 02:57:07.308 12 INFO neutron.common.config [-] /var/lib/kolla/venv/bin/neutron-l3-agent version 14.0.4
2020-02-20 02:57:08.616 12 INFO neutron.agent.l3.agent [req-95654890-dab3-4106-b56d-c2685fb96f29 - - - - -] Agent HA routers count 0
2020-02-20 02:57:08.619 12 INFO neutron.agent.agent_extensions_manager [req-95654890-dab3-4106-b56d-c2685fb96f29 - - - - -] Loaded agent extensions: []
2020-02-20 02:57:08.657 12 INFO eventlet.wsgi.server [-] (12) wsgi starting up on http:/var/lib/neutron/keepalived-state-change
2020-02-20 02:57:08.710 12 INFO neutron.agent.l3.agent [-] L3 agent started
2020-02-20 02:57:10.716 12 INFO oslo.privsep.daemon [req-681aad3f-ae14-4315-b96d-5e95225cdf92 - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpg8Ihqa/privsep.sock']
2020-02-20 02:57:11.750 12 INFO oslo.privsep.daemon [req-681aad3f-ae14-4315-b96d-5e95225cdf92 - - - - -] Spawned new privsep daemon via rootwrap
2020-02-20 02:57:11.614 29 INFO oslo.privsep.daemon [-] privsep daemon starting
2020-02-20 02:57:11.622 29 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
2020-02-20 02:57:11.627 29 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
2020-02-20 02:57:11.628 29 INFO oslo.privsep.daemon [-] privsep daemon running as pid 29
2020-02-20 02:57:14.449 12 INFO neutron.agent.l3.agent [-] Starting router update for 79658dd5-e3b4-4b13-a361-16d696ed1d1c, action 3, priority 2, update_id 49908db7-8a8c-410f-84a7-9e95a3dede16. Wait time elapsed: 0.000
2020-02-20 02:57:24.160 12 ERROR neutron.agent.linux.utils [-] Exit code: 4; Stdin: # Generated by iptables_manager
2020-02-20 02:57:26.388 12 ERROR neutron.agent.l3.router_info     self.process_floating_ip_address_scope_rules()
2020-02-20 02:57:26.388 12 ERROR neutron.agent.l3.router_info   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2020-02-20 02:57:26.388 12 ERROR neutron.agent.l3.router_info     self.gen.next()
2020-02-20 02:57:26.388 12 ERROR neutron.agent.l3.router_info   File "/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py", line 438, in defer_apply
2020-02-20 02:57:26.388 12 ERROR neutron.agent.l3.router_info     raise l3_exc.IpTablesApplyException(msg)
2020-02-20 02:57:26.388 12 ERROR neutron.agent.l3.router_info IpTablesApplyException: Failure applying iptables rules
2020-02-20 02:57:26.388 12 ERROR neutron.agent.l3.router_info
2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent [-] Failed to process compatible router: 79658dd5-e3b4-4b13-a361-16d696ed1d1c: IpTablesApplyException: Failure applying iptables rules
2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent Traceback (most recent call last):
2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent   File "/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 723, in _process_routers_if_compatible
2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent     self._process_router_if_compatible(router)
2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent   File "/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 567, in _process_router_if_compatible
2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent     self._process_added_router(router)
2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent   File "/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 575, in _process_added_router
2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent     ri.process()
2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent   File "/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/common/utils.py", line 161, in call
2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent     self.logger(e)
2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent   File "/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent     self.force_reraise()
2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent   File "/var/lib/kolla/venv/
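As a quick check during an upgrade like this, the router namespaces actually present on the node can be compared against the routers the l3-agent is expected to host by parsing `ip netns list`. A small sketch; the helper names are illustrative, not neutron code:

```python
import re
import subprocess

# qrouter-<uuid>: 32 hex digits plus 4 dashes = 36 characters.
QROUTER = re.compile(r"qrouter-([0-9a-f-]{36})")


def router_ids(netns_output):
    """Extract router UUIDs from `ip netns list` output."""
    return QROUTER.findall(netns_output)


def list_netns():
    # Requires iproute2; run on the network/all-in-one node.
    return subprocess.check_output(["ip", "netns", "list"], text=True)
```

In the failure above, `router_ids(list_netns())` would return an empty list after the upgrade even though the router still exists in the database, until the l3-agent is restarted.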
[Yahoo-eng-team] [Bug 1864015] [NEW] neutron-tempest-plugin-designate-scenario fails on stable branches with "SyntaxError: invalid syntax" installing dnspython
Public bug reported:

Most probably a follow-up of https://review.opendev.org/#/c/704084/ dropping py2.7 testing: stable branch reviews now fail the neutron-tempest-plugin-designate-scenario job with failures like:

2020-02-20 08:05:43.987705 | controller | 2020-02-20 08:05:43.987 | Collecting dnspython3!=1.13.0,!=1.14.0,>=1.12.0 (from designate-tempest-plugin==0.7.1.dev1)
2020-02-20 08:05:44.204759 | controller | 2020-02-20 08:05:44.204 |   Downloading http://mirror.ord.rax.opendev.org/pypifiles/packages/f0/bb/f41cbc8eaa807afb9d44418f092aa3e4acf0e4f42b439c49824348f1f45c/dnspython3-1.15.0.zip
2020-02-20 08:05:44.734839 | controller | 2020-02-20 08:05:44.734 |     Complete output from command python setup.py egg_info:
2020-02-20 08:05:44.734956 | controller | 2020-02-20 08:05:44.734 |     Traceback (most recent call last):
2020-02-20 08:05:44.735018 | controller | 2020-02-20 08:05:44.734 |       File "", line 1, in
2020-02-20 08:05:44.735078 | controller | 2020-02-20 08:05:44.735 |       File "/tmp/pip-build-gSLP9k/dnspython3/setup.py", line 25
2020-02-20 08:05:44.735165 | controller | 2020-02-20 08:05:44.735 |         """+"="*78, file=sys.stdout)
2020-02-20 08:05:44.735231 | controller | 2020-02-20 08:05:44.735 |         ^
2020-02-20 08:05:44.735290 | controller | 2020-02-20 08:05:44.735 |     SyntaxError: invalid syntax

Sample reviews:

  https://review.opendev.org/#/c/708576/ (queens)
  https://review.opendev.org/#/c/705421/ (rocky)
  https://review.opendev.org/#/c/708488/ (stein)

As "[ussuri][goal] Drop python 2.7 support and testing" is the only new commit since tag 0.7.0 in designate-tempest-plugin, we can try to use that tag for our py2 stable branch jobs.

** Affects: neutron
     Importance: Critical
         Status: New

** Tags: dns gate-failure

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1864015

Title:
  neutron-tempest-plugin-designate-scenario fails on stable branches with "SyntaxError: invalid syntax" installing dnspython

Status in neutron:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1864015/+subscriptions
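The failing line in dnspython3's setup.py is valid only with the Python 3 print function: without `from __future__ import print_function`, Python 2 parses `print` as a statement and rejects the `file=` keyword at compile time, which is why `pip install` dies during egg_info rather than at runtime. A reduced illustration of the construct from setup.py line 25:

```python
import io

# setup.py line 25 is essentially:
#     print("""...""" + "=" * 78, file=sys.stdout)
# Under Python 2 the `file=` keyword is a SyntaxError; under Python 3 it
# simply directs the output to the given stream.
buf = io.StringIO()
print("=" * 78, file=buf)
banner = buf.getvalue()  # 78 '=' characters plus a newline
```

This also explains the proposed workaround: pinning designate-tempest-plugin to tag 0.7.0 keeps the py2-compatible dependency set on the stable branches.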