[Yahoo-eng-team] [Bug 1423885] [NEW] nova flavor-show inconsistent with mixed case in names
Public bug reported: The nova flavor-show command accepts a flavor name as its parameter. The name search works with an exact match or an all-lowercase name, but not with any other character casing. For example, given an existing flavor named 'ASAv', 'nova flavor-show ASAv' returns the same result as 'nova flavor-show asav', but 'nova flavor-show Asav' fails. Note also that 'nova flavor-create Asav ...' rejects the creation because the flavor already exists.

# nova flavor-show asav
+----------------------------+--------------------------------------+
| Property                   | Value                                |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                    |
| disk                       | 0                                    |
| extra_specs                | {}                                   |
| id                         | a9215596-5f05-43ff-b150-7344a3112304 |
| name                       | ASAv                                 |
| os-flavor-access:is_public | True                                 |
| ram                        | 2048                                 |
| rxtx_factor                | 1.0                                  |
| swap                       |                                      |
| vcpus                      | 1                                    |
+----------------------------+--------------------------------------+

# nova flavor-show ASAv
(identical output to 'nova flavor-show asav' above)

# nova flavor-show Asav
ERROR: No flavor with a name or ID of 'Asav' exists.

# nova flavor-create Asav 10 1024 10 1
ERROR: Flavor with name Asav already exists. (HTTP 409) (Request-ID: req-c90d775c-6846-47e5-a32e-9badf568fbd1)

** Affects: nova Importance: Undecided Status: New

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1423885
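The observed asymmetry (exact match and all-lowercase work, mixed case fails) is consistent with a fallback lookup that lowercases only the stored flavor name and not the user's input. The sketch below is hypothetical, not nova's actual code; it just reproduces the reported behavior and shows the symmetric comparison that would fix it.

```python
# Hypothetical lookup reproducing the bug: the fallback lowercases only the
# stored name, so only exact or all-lowercase input matches 'ASAv'.
flavors = {"ASAv": "a9215596-5f05-43ff-b150-7344a3112304"}

def find_flavor(requested):
    for name, flavor_id in flavors.items():
        if name == requested:            # exact match
            return flavor_id
    for name, flavor_id in flavors.items():
        if name.lower() == requested:    # buggy: input not lowercased
            return flavor_id
    raise LookupError("No flavor with a name or ID of %r exists." % requested)

def find_flavor_fixed(requested):
    # Symmetric, case-insensitive fallback: lowercase both sides.
    for name, flavor_id in flavors.items():
        if name == requested or name.lower() == requested.lower():
            return flavor_id
    raise LookupError("No flavor with a name or ID of %r exists." % requested)

assert find_flavor("ASAv") == find_flavor("asav")   # both succeed
try:
    find_flavor("Asav")                              # mixed case fails
    raise AssertionError("expected LookupError")
except LookupError:
    pass
assert find_flavor_fixed("Asav") == find_flavor_fixed("asav")  # fixed
```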
[Yahoo-eng-team] [Bug 1423861] [NEW] Jenkins python26 and python27 failure for stable/juno
Public bug reported: The following tests are failing on Jenkins for python26 and python27:

glance.tests.functional.v1.test_copy_to_file.TestCopyToFile.test_copy_from_http_nonexistent
glance.tests.unit.test_store_image.TestStoreImage.test_image_change_delete_locations
glance.tests.unit.test_store_image.TestStoreImage.test_image_delete
glance.tests.unit.test_store_image.TestStoreImage.test_image_set_data_location_metadata
glance.tests.unit.test_store_image.TestStoreImage.test_image_set_data_unknown_size
glance.tests.unit.v1.test_api.TestGlanceAPI.test_add_copy_from_image_authorized_upload_image_authorized
glance.tests.unit.v2.test_image_data_resource.TestImageDataSerializer.test_download_not_found
glance.tests.unit.v2.test_images_resource.TestImagesController.test_update_add_locations_list

These tests are failing because stable/juno is using the new glance_store 0.1.11.

References:
stable/juno patch: https://review.openstack.org/#/c/157067/
https://jenkins06.openstack.org/job/gate-glance-python26/774/console
https://jenkins02.openstack.org/job/gate-glance-python27/4681/console

** Affects: glance Importance: Undecided Status: New

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1423861

To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1423861/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1423845] [NEW] In certain cases compute does not clean up neutron ports after unsuccessful vm spawn
Public bug reported: When allocating networks for an instance, compute first creates ports and then fetches them from neutron to build the network info. Under high load it is possible that neutron/keystone times out on the request to fetch the ports for the instance (traceback attached). In this case the exception is caught and _shutdown_instance() is called with try_deallocate_networks=False, on the assumption that network deallocation is already handled in this code path so it should not happen in _shutdown_instance [1]. The exception is then reraised, caught in _build_and_run_instance(), and reraised as RescheduledException [2]. RescheduledException is caught in _do_build_and_run_instance [3]. Eventually only self.network_api.cleanup_instance_network_on_host() is called [4] and instance rescheduling is initiated. cleanup_instance_network_on_host() does nothing in the neutron case, so we are left with orphaned ports.

I see two possible fixes: either do network deallocation in _shutdown_instance(), or implement cleanup_instance_network_on_host() to clean up the ports.

[1] bug 1332198 commit 5120c4f7c2670eaa71898fe6941029bbb0081949
[2] https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2233
[3] https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2089
[4] https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2113

** Affects: nova Importance: Undecided Assignee: Oleg Bondarev (obondarev) Status: New ** Tags: network

** Attachment added: traceback.txt https://bugs.launchpad.net/bugs/1423845/+attachment/4323154/+files/traceback.txt

https://bugs.launchpad.net/bugs/1423845
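The second proposed fix can be sketched as follows. This is an assumed shape, not nova's actual implementation: the idea is simply that cleanup_instance_network_on_host() lists the instance's neutron ports by device_id and deletes them. The fake client stands in for the real neutron client that nova would obtain.

```python
# Sketch of the proposed cleanup_instance_network_on_host() behavior
# (hypothetical helper; in nova the real neutron client would be used).
class FakeNeutronClient:
    def __init__(self):
        self.ports = {"p1": {"id": "p1", "device_id": "vm-1"},
                      "p2": {"id": "p2", "device_id": "vm-2"}}

    def list_ports(self, device_id):
        return {"ports": [p for p in self.ports.values()
                          if p["device_id"] == device_id]}

    def delete_port(self, port_id):
        self.ports.pop(port_id)

def cleanup_instance_network_on_host(client, instance_uuid):
    """Delete any neutron ports still owned by the instance."""
    for port in client.list_ports(device_id=instance_uuid)["ports"]:
        client.delete_port(port["id"])

client = FakeNeutronClient()
cleanup_instance_network_on_host(client, "vm-1")
assert "p1" not in client.ports   # orphaned port removed
assert "p2" in client.ports       # other instances' ports untouched
```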
[Yahoo-eng-team] [Bug 1423888] [NEW] Subnet Modals broken/way out of position
Public bug reported: The subnet modals are rendering off the bottom of the page and are not really usable. This change https://review.openstack.org/#/c/137417/ adds some CSS that breaks it.

** Affects: horizon Importance: Undecided Assignee: Rob Cresswell (robcresswell) Status: Confirmed

** Changed in: horizon Assignee: (unassigned) => Rob Cresswell (robcresswell)

https://bugs.launchpad.net/bugs/1423888
[Yahoo-eng-team] [Bug 1423900] [NEW] Don't try to query dbus if the credentials are in the environment variables
Public bug reported: I use nova with credentials as environment variables, and was surprised to get the following traceback:

# nova list
Traceback (most recent call last):
  File "/usr/bin/nova", line 6, in <module>
    from novaclient.shell import main
  File "/usr/lib/python2.7/dist-packages/novaclient/shell.py", line 38, in <module>
    import keyring
  File "/usr/lib/python2.7/dist-packages/keyring/__init__.py", line 12, in <module>
    from .core import (set_keyring, get_keyring, set_password, get_password,
  File "/usr/lib/python2.7/dist-packages/keyring/core.py", line 180, in <module>
    init_backend()
  File "/usr/lib/python2.7/dist-packages/keyring/core.py", line 59, in init_backend
    set_keyring(load_config() or _get_best_keyring())
  File "/usr/lib/python2.7/dist-packages/keyring/core.py", line 67, in _get_best_keyring
    keyrings = backend.get_all_keyring()
  File "/usr/lib/python2.7/dist-packages/keyring/util/__init__.py", line 24, in wrapper
    func.always_returns = func(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/keyring/backend.py", line 127, in get_all_keyring
    exceptions=TypeError))
  File "/usr/lib/python2.7/dist-packages/keyring/util/__init__.py", line 35, in suppress_exceptions
    for callable in callables:
  File "/usr/lib/python2.7/dist-packages/keyring/backend.py", line 119, in is_class_viable
    keyring_cls.priority
  File "/usr/lib/python2.7/dist-packages/keyring/util/properties.py", line 22, in __get__
    return self.fget.__get__(None, owner)()
  File "/usr/lib/python2.7/dist-packages/keyring/util/XDG.py", line 18, in wrapper
    return func(*args, **kwargs) * self.multiplier
  File "/usr/lib/python2.7/dist-packages/keyring/backends/SecretService.py", line 32, in priority
    list(secretstorage.get_all_collections(bus))
  File "/usr/lib/python2.7/dist-packages/secretstorage/collection.py", line 158, in get_all_collections
    service_obj = bus_get_object(bus, SECRETS, SS_PATH)
  File "/usr/lib/python2.7/dist-packages/secretstorage/util.py", line 50, in bus_get_object
    return bus.get_object(name, object_path, introspect=False)
  File "/usr/lib/python2.7/dist-packages/dbus/bus.py", line 241, in get_object
    follow_name_owner_changes=follow_name_owner_changes)
  File "/usr/lib/python2.7/dist-packages/dbus/proxies.py", line 248, in __init__
    self._named_service = conn.activate_name_owner(bus_name)
  File "/usr/lib/python2.7/dist-packages/dbus/bus.py", line 180, in activate_name_owner
    self.start_service_by_name(bus_name)
  File "/usr/lib/python2.7/dist-packages/dbus/bus.py", line 278, in start_service_by_name
    'su', (bus_name, flags)))
  File "/usr/lib/python2.7/dist-packages/dbus/connection.py", line 651, in call_blocking
    message, timeout)
dbus.exceptions.DBusException: org.freedesktop.DBus.Error.TimedOut: Activation of org.freedesktop.secrets timed out

This might be an explanation of why my nova commands have been sluggish lately. Would it be possible for nova not to try dbus if it finds the credentials in the environment?

** Affects: nova Importance: Undecided Status: New

https://bugs.launchpad.net/bugs/1423900
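The requested behavior can be sketched like this. The function name and lookup logic are assumptions for illustration, not novaclient's actual code; the point is that the keyring import (which triggers the dbus probe in the traceback above) happens lazily, only when the environment does not already supply the password.

```python
# Minimal sketch: only touch keyring/dbus when OS_PASSWORD is absent.
import os

def get_password_from_env_or_keyring(env=os.environ):
    password = env.get("OS_PASSWORD")
    if password:
        # Credentials are in the environment; never import keyring,
        # so the SecretService/dbus backend probe never runs.
        return password
    import keyring  # lazy import: dbus is only probed when actually needed
    return keyring.get_password("novaclient", env.get("OS_USERNAME", ""))

# With the password in the "environment", keyring is never imported.
assert get_password_from_env_or_keyring({"OS_PASSWORD": "s3cret"}) == "s3cret"
```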
[Yahoo-eng-team] [Bug 1362171] Re: Reuse process management classes from dnsmasq for radvd
** Changed in: neutron Status: In Progress => Fix Released

https://bugs.launchpad.net/bugs/1362171
Title: Reuse process management classes from dnsmasq for radvd
Status in OpenStack Neutron (virtual network service): Fix Released
Bug description: When reviewing/discussing the change to add functional tests for radvd [1], it was requested that radvd be managed similarly to the dnsmasq process. We should reuse the classes already existing for dnsmasq and extract the common functionality so that they work for both radvd and dnsmasq. This will allow some common functional testing too.
[1] https://review.openstack.org/109889
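The shape of the refactor can be sketched as a generic external-process manager parameterized by a command callback, so both daemons share the pid-file and liveness logic. The class and method names below are illustrative, not neutron's actual classes.

```python
# Illustrative generic process manager (assumed shape, not neutron code):
# the daemon-specific part is reduced to a command-building callback.
import os
import signal
import subprocess
import sys
import tempfile

class ProcessManager:
    """Manage a long-lived helper daemon via a pid file."""

    def __init__(self, uuid, service, cmd_callback, pid_dir=None):
        self.cmd_callback = cmd_callback
        self.pid_file = os.path.join(pid_dir or tempfile.gettempdir(),
                                     "%s-%s.pid" % (uuid, service))

    @property
    def pid(self):
        try:
            with open(self.pid_file) as f:
                return int(f.read())
        except (OSError, IOError, ValueError):
            return None

    @property
    def active(self):
        if self.pid is None:
            return False
        try:
            os.kill(self.pid, 0)  # signal 0: existence check only
            return True
        except OSError:
            return False

    def enable(self):
        proc = subprocess.Popen(self.cmd_callback(self.pid_file))
        with open(self.pid_file, "w") as f:
            f.write(str(proc.pid))

    def disable(self):
        if self.active:
            os.kill(self.pid, signal.SIGTERM)

# dnsmasq and radvd would differ only in their callbacks, e.g. (hypothetical):
#   ProcessManager(router_id, "radvd", lambda pidf: ["radvd", "-p", pidf])
mgr = ProcessManager("test-uuid", "sleeper",
                     lambda pidf: [sys.executable, "-c",
                                   "import time; time.sleep(30)"])
mgr.enable()
assert mgr.active
mgr.disable()
```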
[Yahoo-eng-team] [Bug 1423747] Re: neutron port-create error message is not helpful
** Project changed: neutron => python-neutronclient

https://bugs.launchpad.net/bugs/1423747
Title: neutron port-create error message is not helpful
Status in Python client library for Neutron: New
Bug description: nova-manage version: 2015.1

Issue: The error message for a neutron port-create failure is not helpful.

ubuntu@trusty1:~/devstack$ neutron help port-create
usage: neutron port-create [-h] [-f {html,json,shell,table,value,yaml}]
                           [-c COLUMN] [--max-width <integer>]
                           [--prefix PREFIX] [--request-format {json,xml}]
                           [--tenant-id TENANT_ID] [--name NAME]
                           [--admin-state-down] [--mac-address MAC_ADDRESS]
                           [--device-id DEVICE_ID]
                           [--fixed-ip subnet_id=SUBNET,ip_address=IP_ADDR]
                           [--security-group SECURITY_GROUP | --no-security-groups]
                           [--extra-dhcp-opt EXTRA_DHCP_OPTS]
                           NETWORK

Create a port for a given tenant.

positional arguments:
  NETWORK               Network ID or name this port belongs to.

optional arguments:
  -h, --help            show this help message and exit
  --request-format {json,xml}
                        The XML or JSON request format.
  --tenant-id TENANT_ID
                        The owner tenant ID.
  --name NAME           Name of this port.
  --admin-state-down    Set admin state up to false.
  --mac-address MAC_ADDRESS
                        MAC address of this port.
  --device-id DEVICE_ID
                        Device ID of this port.
  --fixed-ip subnet_id=SUBNET,ip_address=IP_ADDR
                        Desired IP and/or subnet for this port:
                        subnet_id=name_or_id,ip_address=ip. You can repeat
                        this option.
  --security-group SECURITY_GROUP
                        Security group associated with the port. You can
                        repeat this option.
  --no-security-groups  Associate no security groups with the port.
  --extra-dhcp-opt EXTRA_DHCP_OPTS
                        Extra dhcp options to be assigned to this port:
                        opt_name=dhcp_option_name,opt_value=value. You can
                        repeat this option.

output formatters:
  output formatter options
  -f {html,json,shell,table,value,yaml}, --format {html,json,shell,table,value,yaml}
                        the output format, defaults to table
  -c COLUMN, --column COLUMN
                        specify the column(s) to include, can be repeated

table formatter:
  --max-width <integer>
                        Maximum display width, 0 to disable

shell formatter:
  a format a UNIX shell can parse (variable=value)
  --prefix PREFIX       add a prefix to all variable names

Note the optional argument [--fixed-ip subnet_id=SUBNET,ip_address=IP_ADDR]. I accidentally omitted ip_address= when using this option:

  --fixed-ip 10.0.0.4

The correct syntax should be:

  --fixed-ip ip_address=10.0.0.4

ubuntu@trusty1:~/devstack$ neutron port-create --fixed-ip 10.0.0.4 private
dictionary update sequence element #0 has length 1; 2 is required

The ERROR message is not helpful.
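The unhelpful message is the raw Python error from building a dict out of key=value pairs when no '=' is present. The parsing sketch below is an assumption about what the client does, but it reproduces the exact message and shows the kind of validation that would produce an actionable error instead.

```python
def parse_fixed_ip(value):
    # Roughly the parsing the client appears to do (an assumption): split on
    # commas, then on '=', and feed the pairs straight into dict().
    return dict(kv.split('=') for kv in value.split(','))

try:
    parse_fixed_ip("10.0.0.4")   # user forgot the "ip_address=" key
    error = None
except ValueError as e:
    error = str(e)
# The raw dict() error is the message the user saw:
assert "has length 1; 2 is required" in error

def parse_fixed_ip_friendly(value):
    # Validate each pair first, so the user gets an actionable message.
    result = {}
    for kv in value.split(','):
        if '=' not in kv:
            raise ValueError("Invalid --fixed-ip %r: expected key=value pairs "
                             "such as ip_address=10.0.0.4" % value)
        key, _, val = kv.partition('=')
        result[key] = val
    return result

assert parse_fixed_ip_friendly("ip_address=10.0.0.4") == {"ip_address": "10.0.0.4"}
```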
[Yahoo-eng-team] [Bug 1422993] Re: cloud-init AzureDS fails on 15.04
This bug was fixed in the package cloud-init - 0.7.7~bzr1060-0ubuntu1

---
cloud-init (0.7.7~bzr1060-0ubuntu1) vivid; urgency=medium
  * New upstream snapshot.
    * Fix for ascii decode in DataSourceAzure (LP: #1422993).
 -- Scott Moser <smo...@ubuntu.com>  Fri, 20 Feb 2015 08:05:20 -0500

** Changed in: cloud-init (Ubuntu) Status: Confirmed => Fix Released

https://bugs.launchpad.net/bugs/1422993
Title: cloud-init AzureDS fails on 15.04
Status in Init scripts for use on cloud images: Fix Committed
Status in cloud-init package in Ubuntu: Fix Released
Bug description: Ubuntu 15.04 fails to provision on Windows Azure:

2015-02-17 23:07:07,824 - util.py[DEBUG]: Getting data from <class 'cloudinit.sources.DataSourceAzure.DataSourceAzureNet'> failed
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/cloudinit/sources/__init__.py", line 255, in find_source
    if s.get_data():
  File "/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceAzure.py", line 99, in get_data
    ret = util.mount_cb(cdev, load_azure_ds_dir)
  File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 1493, in mount_cb
    ret = callback(mountpoint)
  File "/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceAzure.py", line 597, in load_azure_ds_dir
    contents = fp.read()
  File "/usr/lib/python3.4/encodings/ascii.py", line 26, in decode
    return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xef in position 0: ordinal not in range(128)
2015-02-17 23:07:07,987 - util.py[WARNING]: No instance datasource found! Likely bad things to come!
2015-02-17 23:07:07,993 - util.py[DEBUG]: No instance datasource found! Likely bad things to come!
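Byte 0xef at position 0 is the first byte of a UTF-8 BOM (EF BB BF), which the ascii codec used under a C locale cannot decode. A minimal sketch of the kind of fix involved (illustrative, not cloud-init's actual patch): read the file as bytes and decode explicitly, with utf-8-sig stripping the BOM.

```python
# Reproduce the shape of the problem and the explicit-decode fix.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "ovf-env.xml")
with open(path, "wb") as f:
    # BOM-prefixed content, as the Azure-provided file can be written
    f.write(b"\xef\xbb\xbf<Environment/>")

# A plain text-mode read() under an ascii locale raises UnicodeDecodeError
# here; decoding explicitly avoids depending on the locale.
with open(path, "rb") as f:
    contents = f.read().decode("utf-8-sig")  # also strips the BOM

assert contents == "<Environment/>"
```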
[Yahoo-eng-team] [Bug 1382064] Re: Failure to allocate tunnel id when creating networks concurrently
** Changed in: neutron Status: Fix Released => In Progress
** Changed in: neutron Milestone: kilo-2 => kilo-3

https://bugs.launchpad.net/bugs/1382064
Title: Failure to allocate tunnel id when creating networks concurrently
Status in OpenStack Neutron (virtual network service): In Progress
Bug description: When multiple networks are created concurrently, the following trace is observed:

WARNING neutron.plugins.ml2.drivers.helpers [req-34103ce8-b6d0-459b-9707-a24e369cf9de None] Allocate gre segment from pool failed after 10 failed attempts
DEBUG neutron.context [req-2995f877-e3e6-4b32-bdae-da6295e492a1 None] Arguments dropped when creating context: {u'project_name': None, u'tenant': None} __init__ /usr/lib/python2.7/dist-packages/neutron/context.py:83
DEBUG neutron.plugins.ml2.drivers.helpers [req-3541998d-44df-468f-b65b-36504e893dfb None] Allocate gre segment from pool, attempt 1 failed with segment {'gre_id': 300L} allocate_partially_specified_segment /usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/helpers.py:138
DEBUG neutron.context [req-6dcfb91d-2c5b-4e4f-9d81-55ba381ad232 None] Arguments dropped when creating context: {u'project_name': None, u'tenant': None} __init__ /usr/lib/python2.7/dist-packages/neutron/context.py:83
ERROR neutron.api.v2.resource [req-34103ce8-b6d0-459b-9707-a24e369cf9de None] create failed
TRACE neutron.api.v2.resource Traceback (most recent call last):
TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 87, in resource
TRACE neutron.api.v2.resource     result = method(request=request, **args)
TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 448, in create
TRACE neutron.api.v2.resource     obj = obj_creator(request.context, **kwargs)
TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py", line 497, in create_network
TRACE neutron.api.v2.resource     tenant_id)
TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py", line 160, in create_network_segments
TRACE neutron.api.v2.resource     segment = self.allocate_tenant_segment(session)
TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py", line 189, in allocate_tenant_segment
TRACE neutron.api.v2.resource     segment = driver.obj.allocate_tenant_segment(session)
TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/type_tunnel.py", line 115, in allocate_tenant_segment
TRACE neutron.api.v2.resource     alloc = self.allocate_partially_specified_segment(session)
TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/helpers.py", line 143, in allocate_partially_specified_segment
TRACE neutron.api.v2.resource     raise exc.NoNetworkFoundInMaximumAllowedAttempts()
TRACE neutron.api.v2.resource NoNetworkFoundInMaximumAllowedAttempts: Unable to create the network. No available network found in maximum allowed attempts.

Additional conditions: multiserver deployment and mysql.
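Why concurrency exhausts the 10-attempt retry budget: if every request deterministically picks the first free tunnel id from its (stale) view of the pool, all concurrent requests collide on the same row and retry together. A toy simulation, not neutron code; randomized candidate selection is one mitigation discussed for this class of bug.

```python
# Toy model of concurrent segment allocation: each round, all still-pending
# "workers" see the same stale snapshot of free ids (like concurrent DB
# transactions), pick a candidate, and only distinct picks commit.
import random

def simulate(num_workers, pool_size, rand, attempts=10):
    taken = set()
    pending = num_workers
    for _ in range(attempts):
        if pending == 0:
            break
        snapshot = [i for i in range(pool_size) if i not in taken]
        if not snapshot:
            break
        picks = [random.choice(snapshot) if rand else snapshot[0]
                 for _ in range(pending)]
        committed = set()
        still_pending = 0
        for p in picks:
            if p in committed:        # lost the race: duplicate key, retry
                still_pending += 1
            else:
                committed.add(p)
        taken |= committed
        pending = still_pending
    return pending  # workers that never got a segment

random.seed(42)
first_free = simulate(20, 1000, rand=False)  # everyone picks the first free id
rand_pick = simulate(20, 1000, rand=True)    # candidates spread over the pool
assert first_free == 10        # only one worker per round ever succeeds
assert rand_pick < first_free  # random selection drains the contention fast
```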
[Yahoo-eng-team] [Bug 1423925] [NEW] show MTU on network details page
Public bug reported: Once this patch https://review.openstack.org/#/c/154921/ goes into neutron to enable setting the MTU for a network, we should show the MTU on the network details page, as this will be useful for debugging.

** Affects: horizon Importance: Undecided Assignee: Bradley Jones (bradjones) Status: In Progress

** Changed in: horizon Assignee: (unassigned) => Bradley Jones (bradjones)

https://bugs.launchpad.net/bugs/1423925
[Yahoo-eng-team] [Bug 1424096] [NEW] DVR routers attached to shared networks aren't being unscheduled from a compute node after deleting the VMs using the shared net
Public bug reported: As the administrator, a DVR router is created and attached to a shared network. The administrator also created the shared network. As a non-admin tenant, a VM is created with a port on the shared network. The only VM using the shared network is scheduled to a compute node. When the VM is deleted, it is expected that the qrouter namespace of the DVR router is removed, but it is not. This doesn't happen with routers attached to networks that are not shared.

The environment consists of 1 controller node and 1 compute node. Routers having the problem are created by the administrator and attached to shared networks that are also owned by the admin.

As the administrator, run the following commands on a setup having 1 compute node and 1 controller node:

1. neutron net-create shared-net -- --shared True
   The shared net's UUID is f9ccf1f9-aea9-4f72-accc-8a03170fa242.
2. neutron subnet-create --name shared-subnet shared-net 10.0.0.0/16
3. neutron router-create shared-router
   The router's UUID is ab78428a-9653-4a7b-98ec-22e1f956f44f.
4. neutron router-interface-add shared-router shared-subnet
5. neutron router-gateway-set shared-router public

As a non-admin tenant (tenant-id: 95cd5d9c61cf45c7bdd4e9ee52659d13), boot a VM using the shared-net network:

1. neutron net-show shared-net
+-----------------+--------------------------------------+
| Field           | Value                                |
+-----------------+--------------------------------------+
| admin_state_up  | True                                 |
| id              | f9ccf1f9-aea9-4f72-accc-8a03170fa242 |
| name            | shared-net                           |
| router:external | False                                |
| shared          | True                                 |
| status          | ACTIVE                               |
| subnets         | c4fd4279-81a7-40d6-a80b-01e8238c1c2d |
| tenant_id       | 2a54d6758fab47f4a2508b06284b5104     |
+-----------------+--------------------------------------+
   At this point, there are no VMs using the shared-net network running in the environment.
2. Boot a VM that uses the shared-net network:
   nova boot ... --nic net-id=f9ccf1f9-aea9-4f72-accc-8a03170fa242 ... vm_sharednet
3. Assign a floating IP to the VM vm_sharednet.
4. Delete vm_sharednet.

On the compute node, the qrouter namespace of the shared router (qrouter-ab78428a-9653-4a7b-98ec-22e1f956f44f) is left behind:

stack@DVR-CN2:~/DEVSTACK/manage$ ip netns
qrouter-ab78428a-9653-4a7b-98ec-22e1f956f44f
...

This is consistent with the output of the neutron l3-agent-list-hosting-router command, which shows the router is still being hosted on the compute node:

$ neutron l3-agent-list-hosting-router ab78428a-9653-4a7b-98ec-22e1f956f44f
+--------------------------------------+----------------+----------------+-------+
| id                                   | host           | admin_state_up | alive |
+--------------------------------------+----------------+----------------+-------+
| 42f12eb0-51bc-4861-928a-48de51ba7ae1 | DVR-Controller | True           | :-)   |
| ff869dc5-d39c-464d-86f3-112b55ec1c08 | DVR-CN2        | True           | :-)   |
+--------------------------------------+----------------+----------------+-------+

Running the neutron l3-agent-router-remove command removes the qrouter namespace from the compute node:

$ neutron l3-agent-router-remove ff869dc5-d39c-464d-86f3-112b55ec1c08 ab78428a-9653-4a7b-98ec-22e1f956f44f
Removed router ab78428a-9653-4a7b-98ec-22e1f956f44f from L3 agent

stack@DVR-CN2:~/DEVSTACK/manage$ ip netns
stack@DVR-CN2:~/DEVSTACK/manage$

This is a workaround to get the qrouter namespace deleted from the compute node. The L3 agent scheduler should have removed the router from the compute node when the VM was deleted.

** Affects: neutron Importance: Undecided Status: New

https://bugs.launchpad.net/bugs/1424096
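The missing scheduler behavior amounts to: after a VM port is deleted, check whether the host still has any compute ports that need the router, and unschedule the router from that host if not (the l3-agent-router-remove workaround, applied automatically). A toy sketch with hypothetical helper names, not neutron's scheduler code:

```python
# Hypothetical sketch of unscheduling a DVR router from a host once the
# last compute port on that host is gone.
def compute_ports_on_host(ports, host):
    return [p for p in ports
            if p["host"] == host and p["owner"].startswith("compute:")]

def maybe_unschedule(router_hosts, ports, router, host):
    # Remove the router from the L3 agent on `host` when no compute ports
    # on that host still require it.
    if not compute_ports_on_host(ports, host):
        router_hosts[router].discard(host)

router_hosts = {"shared-router": {"DVR-Controller", "DVR-CN2"}}
ports = []  # the last VM on DVR-CN2 was just deleted
maybe_unschedule(router_hosts, ports, "shared-router", "DVR-CN2")
assert router_hosts["shared-router"] == {"DVR-Controller"}
```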
[Yahoo-eng-team] [Bug 1424089] Re: Use SystemRandom rather than random
The patch says security hardening (which I think it probably is), making it class D (or maybe C1) in our incident report taxonomy. https://wiki.openstack.org/wiki/Vulnerability_Management#Incident_report_taxonomy If you agree, we should switch the bug type from public security to public (and maybe add the security bug tag instead). ** Also affects: ossa Importance: Undecided Status: New ** Changed in: ossa Status: New => Incomplete -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1424089 Title: Use SystemRandom rather than random Status in OpenStack Identity (Keystone): In Progress Status in OpenStack Security Advisories: Incomplete Bug description: SystemRandom should be preferred over direct use of random. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1424089/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1424099] [NEW] Unable to pass additional parameters to update_router tempest test case
Public bug reported: While writing a tempest test case, I encountered the following. Consider this scenario: suppose a third-party plugin has additional attributes that can be passed during router-creation and router-update. The _update_router method in our json network client does not consider these additional parameters. See the method _update_router in the json network client (tempest/services/network/json/network_client.py). ** Affects: neutron Importance: Undecided Assignee: Chirag Shahani (chirag-shahani) Status: New ** Changed in: neutron Assignee: (unassigned) => Chirag Shahani (chirag-shahani) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1424099 Title: Unable to pass additional parameters to update_router tempest test case Status in OpenStack Neutron (virtual network service): New Bug description: While writing a tempest test case, I encountered the following. Consider this scenario: suppose a third-party plugin has additional attributes that can be passed during router-creation and router-update. The _update_router method in our json network client does not consider these additional parameters. See the method _update_router in the json network client (tempest/services/network/json/network_client.py). To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1424099/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
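To illustrate the kind of change the report asks for, here is a minimal sketch. This is not the actual tempest client code; the function names and fields are simplified stand-ins showing why a fixed field list drops plugin-specific attributes while forwarding **kwargs does not.

```python
import json

# Illustrative only: a client method that builds the request body
# from a fixed field list silently drops vendor-specific attributes.
def update_router_fixed(router_id, name=None, admin_state_up=None):
    body = {"router": {}}
    if name is not None:
        body["router"]["name"] = name
    if admin_state_up is not None:
        body["router"]["admin_state_up"] = admin_state_up
    return json.dumps(body)  # plugin-specific keys cannot be expressed

# Forwarding **kwargs lets a test pass any attribute a third-party
# plugin defines, which is what the report is asking for.
def update_router_flexible(router_id, **kwargs):
    return json.dumps({"router": kwargs})

print(update_router_flexible("r1", name="r", vendor_attr="x"))
```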
[Yahoo-eng-team] [Bug 1424089] [NEW] Use SystemRandom rather than random
*** This bug is a security vulnerability *** Public security bug reported: SystemRandom should be preferred over direct use of random. ** Affects: keystone Importance: Undecided Assignee: Brant Knudson (blk-u) Status: In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1424089 Title: Use SystemRandom rather than random Status in OpenStack Identity (Keystone): In Progress Bug description: SystemRandom should be preferred over direct use of random. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1424089/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
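For context, a minimal sketch of the difference the bug is about. random's default generator is a deterministic PRNG, while random.SystemRandom is a drop-in replacement backed by the OS entropy source; the token alphabet below is just an example.

```python
import random

# The module-level random functions use a Mersenne Twister PRNG:
# seeded identically, it reproduces the same sequence, and its state
# can be recovered from observed outputs, so it is unsafe for tokens,
# session IDs, and other security-sensitive values.
assert random.Random(42).random() == random.Random(42).random()

# random.SystemRandom draws from the OS entropy pool (os.urandom)
# and offers the same API, so it is an easy substitution.
secure = random.SystemRandom()
alphabet = "abcdefghijklmnopqrstuvwxyz0123456789"
token = "".join(secure.choice(alphabet) for _ in range(32))
print(len(token))  # 32
```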
[Yahoo-eng-team] [Bug 1399998] Re: Cisco Nexus VXLAN: incomplete switch configuration when launching multiple VMs simultaneously.
** Project changed: neutron => networking-cisco ** Changed in: networking-cisco Status: Confirmed => Fix Committed -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1399998 Title: Cisco Nexus VXLAN: incomplete switch configuration when launching multiple VMs simultaneously. Status in Cisco Vendor Code for OpenStack Neutron: Fix Committed Bug description: Cisco Nexus VXLAN setup: Compute-1 and Compute-2 connect to N9K-1; Compute-3 and the Controller+Network node connect to N9K-2. Issue: when launching multiple VMs simultaneously, the Nexus switch is not configured properly: sometimes the VLAN is missing, sometimes the VNI mapping. It is not consistent; sometimes it happens on the 1st switch, sometimes on the 2nd. Steps to reproduce: 1. Freshly reboot both N9K switches. 2. From the Controller CLI, launch 10 VMs, each with 3 interfaces in different subnets. 3. Check the switches' configuration. To manage notifications about this bug go to: https://bugs.launchpad.net/networking-cisco/+bug/1399998/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1424061] [NEW] keystone server should default to localhost-only
*** This bug is a security vulnerability *** Public security bug reported: By default keystone will listen on all interfaces. Keystone should use secure defaults. In this case, listen on localhost-only by default. ** Affects: keystone Importance: Undecided Assignee: Brant Knudson (blk-u) Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1424061 Title: keystone server should default to localhost-only Status in OpenStack Identity (Keystone): New Bug description: By default keystone will listen on all interfaces. Keystone should use secure defaults. In this case, listen on localhost-only by default. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1424061/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
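The exact keystone configuration options are not named in the report, so as a minimal, library-agnostic illustration of the two defaults, here is a plain-socket sketch: binding to 0.0.0.0 accepts connections on every interface, while 127.0.0.1 (the secure default proposed here) is reachable only from the local host.

```python
import socket

# Bind a TCP socket to the given host and report the bound address.
def bind(host):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host, 0))  # port 0: let the OS pick a free ephemeral port
    addr = s.getsockname()
    s.close()
    return addr[0]

# Loopback-only: unreachable from other hosts, the proposed default.
print(bind("127.0.0.1"))
# All interfaces: the current keystone default the bug objects to.
print(bind("0.0.0.0"))
```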
[Yahoo-eng-team] [Bug 1405239] Re: ML2 Cisco Nexus Cfg not persistent after reboot
** Project changed: neutron => networking-cisco ** Changed in: networking-cisco Status: New => Fix Committed -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1405239 Title: ML2 Cisco Nexus Cfg not persistent after reboot Status in Cisco Vendor Code for OpenStack Neutron: Fix Committed Bug description: Once ML2 configurations are applied to the Nexus and the config is stable, they are lost when the Nexus reboots. To manage notifications about this bug go to: https://bugs.launchpad.net/networking-cisco/+bug/1405239/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1423972] Re: cloud-init skips cloud-config modules on 15.04 Azure
** Attachment added: Potential patch https://bugs.launchpad.net/ubuntu/+bug/1423972/+attachment/4323594/+files/lp1423972.patched ** Also affects: cloud-init Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1423972 Title: cloud-init skips cloud-config modules on 15.04 Azure Status in Init scripts for use on cloud images: New Status in Ubuntu: New Bug description: Cloud-init is skipping the init modules on Azure. The Azure Datasource is run and then the cloud-config modules are run. This prevents user creation, default SSH keys, etc.

[ORIGINAL REPORT] Cloud-init is not creating the fabric-defined user. The root cause appears to be that the users-groups cloud-config module is not being run.

Feb 20 16:35:33 utl-vivid-en-200932 kernel: [ 34.032275] init: cloud-init main process (603) terminated with status 1
Feb 20 16:35:33 utl-vivid-en-200932 kernel: [ 34.139874] init: Error while reading from descriptor: Broken pipe
Feb 20 16:35:33 utl-vivid-en-200932 kernel: [ 34.152660] init: failsafe main process (838) killed by TERM signal

Feb 20 16:12:47 utl-vivid-en-200910 [CLOUDINIT] util.py[DEBUG]: Running module ssh-authkey-fingerprints (module 'cloudinit.config.cc_ssh_authkey_fingerprints' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_ssh_authkey_fingerprints.py') failed
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 664, in _run_modules
    cc.run(run_name, mod.handle, func_args, freq=freq)
  File "/usr/lib/python3/dist-packages/cloudinit/cloud.py", line 63, in run
    return self._runners.run(name, functor, args, freq, clear_on_fail)
  File "/usr/lib/python3/dist-packages/cloudinit/helpers.py", line 198, in run
    results = functor(*args)
  File "/usr/lib/python3/dist-packages/cloudinit/config/cc_ssh_authkey_fingerprints.py", line 103, in handle
    (key_fn, key_entries) = ssh_util.extract_authorized_keys(user_name)
  File "/usr/lib/python3/dist-packages/cloudinit/ssh_util.py", line 211, in extract_authorized_keys
    (ssh_dir, pw_ent) = users_ssh_info(username)
  File "/usr/lib/python3/dist-packages/cloudinit/ssh_util.py", line 204, in users_ssh_info
    pw_ent = pwd.getpwnam(username)
KeyError: 'getpwnam(): name not found: utlemming'

Feb 20 16:12:47 utl-vivid-en-200910 [CLOUDINIT] stages.py[DEBUG]: Running module keys-to-console (module 'cloudinit.config.cc_keys_to_console' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_keys_to_console.py') with frequency once-per-instance

Snippet of the user config: {'cfg': {'_pubkeys': [{'fingerprint': '8ECDF2795861824ADBE631CF8FF016B85D2A0B04', 'path': '/home/utlemming/.ssh/authorized_keys'}], 'cloud_config_modules': [ ... ], 'disk_setup': {'ephemeral0': {'layout': [100], 'overwrite': True, 'table_type': 'gpt'}}, 'fs_setup': [{'device': 'ephemeral0.1', 'filesystem': 'ext4', 'replace_fs': 'ntfs'}], 'ssh_pwauth': False, 'system_info': {'default_user': {'lock_passwd': False, 'name': 'utlemming', 'passwd': '...'}}}

--- ApportVersion: 2.16.1-0ubuntu2 Architecture: amd64 DistroRelease: Ubuntu 15.04 Package: cloud-init 0.7.7~bzr1060-0ubuntu1 PackageArchitecture: all ProcVersionSignature: Ubuntu 3.18.0-13.14-generic 3.18.5 Tags: vivid uec-images Uname: Linux 3.18.0-13-generic x86_64 UpgradeStatus: No upgrade log present (probably fresh install) UserGroups: _MarkForUpload: True To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1423972/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1353554] Re: Neutron test suite leaks memory like a sieve (still)
[Expired for neutron because there has been no activity for 60 days.] ** Changed in: neutron Status: Incomplete => Expired -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1353554 Title: Neutron test suite leaks memory like a sieve (still) Status in OpenStack Neutron (virtual network service): Expired Bug description: Reported this originally as bug #1065276. When we try to run unit tests on neutron, we have to run them on an 8 core box, because the test suite will not pass successfully otherwise. One of our developers needed to add 100G of swap—and yes, that's gigabytes—just to get them to pass locally for him while he was trying to track down a problem. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1353554/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1385318] Re: Nova fails to add fixed IP
[Expired for neutron because there has been no activity for 60 days.] ** Changed in: neutron Status: Incomplete = Expired -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1385318 Title: Nova fails to add fixed IP Status in OpenStack Neutron (virtual network service): Expired Status in OpenStack Compute (Nova): Incomplete Bug description: I created instance with one NIC attached. Then I try to attach another NIC: nova add-fixed-ip ServerId NetworkId Nova compute raises exception: 2014-10-24 15:40:33.925 31955 ERROR oslo.messaging.rpc.dispatcher [req-43570a05-937a-4ddf-a0e9-e05d42660817 ] Exception during message handling: Network could not be found for instance 09b6e137-37d6-475d-992c-bdcb7d3cb841. 2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last): 2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher File /home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py, line 134, in _dispatch_and_reply 2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher incoming.message)) 2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher File /home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py, line 177, in _dispatch 2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher return self._do_dispatch(endpoint, method, ctxt, args) 2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher File /home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py, line 123, in _do_dispatch 2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher result = getattr(endpoint, method)(ctxt, **new_args) 2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher File /home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py, line 414, in 
decorated_function 2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher return function(self, context, *args, **kwargs) 2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher File /home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/exception.py, line 88, in wrapped 2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher payload) 2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher File /home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py, line 82, in __exit__ 2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb) 2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher File /home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/exception.py, line 71, in wrapped 2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher return f(self, context, *args, **kw) 2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher File /home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py, line 326, in decorated_function 2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher kwargs['instance'], e, sys.exc_info()) 2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher File /home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py, line 82, in __exit__ 2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb) 2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher File /home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py, line 314, in decorated_function 2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher return function(self, context, *args, **kwargs) 2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher File /home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py, 
line 3915, in add_fixed_ip_to_instance 2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher network_id) 2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher File /home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/network/base_api.py, line 61, in wrapper 2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher res = f(self, context, *args, **kwargs) 2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher File /home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/network/neutronv2/api.py, line 684, in add_fixed_ip_to_instance 2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher instance_id=instance['uuid']) 2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher NetworkNotFoundForInstance: Network could
[Yahoo-eng-team] [Bug 1424113] [NEW] Add missing db model changes related to nuage plugin
Public bug reported: Due to the decomposition of plugin work initiated during the K release, the nuage plugin has gone through many changes without them being incorporated upstream. But as per the decomposition spec guideline, we need to keep the migration and model definitions upstream. So we need to add the new schema along with a migration script to the K release. ** Affects: neutron Importance: Undecided Assignee: Ronak Shah (ronak-malav-shah) Status: New ** Changed in: neutron Assignee: (unassigned) => Ronak Shah (ronak-malav-shah) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1424113 Title: Add missing db model changes related to nuage plugin Status in OpenStack Neutron (virtual network service): New Bug description: Due to the decomposition of plugin work initiated during the K release, the nuage plugin has gone through many changes without them being incorporated upstream. But as per the decomposition spec guideline, we need to keep the migration and model definitions upstream. So we need to add the new schema along with a migration script to the K release. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1424113/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1379682] Re: instance_template_name does not work for vmware driver
Reviewed: https://review.openstack.org/156924 Committed: https://git.openstack.org/cgit/openstack/openstack-manuals/commit/?id=bf8afba46c78f545f5e2590b4f0e0e36f842b067 Submitter: Jenkins Branch: master commit bf8afba46c78f545f5e2590b4f0e0e36f842b067 Author: Hiroki Aramaki h-aram...@netone.co.jp Date: Wed Feb 18 17:51:10 2015 +0900 Add vCenter driver instance name section Add instance name section to VMware configuration reference. KVM uses the instance name template but the VMware vCenter driver uses the instance ID. Change-Id: I325b5282dfdbe1b71657c4043644b5641cb7320f Closes-Bug: #1379682 ** Changed in: openstack-manuals Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1379682 Title: instance_template_name does not work for vmware driver Status in OpenStack Compute (Nova): Confirmed Status in OpenStack Manuals: Fix Released Bug description: Currently the VMware driver adopts the UUID for instance names. This leads to two problems: 1) The instance name template does not apply for the VMware driver, but the nova show command still displays an instance name, which is misleading.
[root@cmwo cmwo]# nova show temp-vm-host1-99
+-------------------------------------+----------------------------------------------------------+
| Property                            | Value                                                    |
+-------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                                   |
| OS-EXT-AZ:availability_zone         | cluster01                                                |
| OS-EXT-SRV-ATTR:host                | vcenter-cluster01                                        |
| OS-EXT-SRV-ATTR:hypervisor_hostname | domain-c129(cluster01)                                   |
| OS-EXT-SRV-ATTR:instance_name       | instance-00ec                                            |
| OS-EXT-STS:power_state              | 1                                                        |
| OS-EXT-STS:task_state               | -                                                        |
| OS-EXT-STS:vm_state                 | active                                                   |
| OS-SRV-USG:launched_at              | 2014-10-09T10:02:15.00                                   |
| OS-SRV-USG:terminated_at            | -                                                        |
| accessIPv4                          |                                                          |
| accessIPv6                          |                                                          |
| config_drive                        |                                                          |
| created                             | 2014-10-09T09:59:15Z                                     |
| demovlan network                    | 25.0.0.21                                                |
| flavor                              | m1.tiny (1)                                              |
| hostId                              | 0cfba386e1e5cad832d2fbc316c33ca6a124c3d0b386127a55707070 |
| id                                  | 6d3e0f11-0a4c-46eb-a3a6-4e397d917228                     |
2) This UUID is not user-friendly for VM names. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1379682/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
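For reference, the nova option the bug and the manuals change revolve around is, to my understanding, `instance_name_template` (the bug title's "instance_template_name" appears to be a transposition). A hedged nova.conf fragment; the template value shown is the common default:

```ini
# nova.conf: controls the name given to guests on the hypervisor.
# The libvirt/KVM driver honors this template; per this bug, the
# VMware vCenter driver names VMs by instance UUID instead, so the
# setting has no effect there.
[DEFAULT]
instance_name_template = instance-%08x
```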
[Yahoo-eng-team] [Bug 1423952] [NEW] It is impossible to delete an instance that has failed due to neutron/nova notification problems
Public bug reported: If you attempt to boot a nova instance without Neutron properly configured for neutron/nova notifications, the instance will eventually fail to spawn: [-] [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] Instance failed to spawn [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] Traceback (most recent call last): [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] File /usr/lib/python2.7/site-packages/nova/compute/manager.py, line 2243, in _build_resources [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] yield resources [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] File /usr/lib/python2.7/site-packages/nova/compute/manager.py, line 2113, in _build_and_run_instance [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] block_device_info=block_device_info) [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] File /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 2622, in spawn [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] block_device_info, disk_info=disk_info) [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] File /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 4439, in _create_domain_and_network [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] raise exception.VirtualInterfaceCreateException() [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] VirtualInterfaceCreateException: Virtual Interface creation failed If you try to delete this instance, the delete operation will fail. In the logs, you see: AUDIT nova.compute.manager [req-a4b30d0b-e6d3-429f-8f7a-b7788b79c86c None] [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] Terminating instance WARNING nova.virt.libvirt.driver [-] [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] During wait destroy, instance disappeared. 
INFO nova.virt.libvirt.driver [req-a4b30d0b-e6d3-429f-8f7a-b7788b79c86c None] [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] Deletion of /var/lib/nova/instances/1541a197-9f80-4ee5-a7d6-08e591aa83fd_del complete INFO nova.compute.manager [req-a4b30d0b-e6d3-429f-8f7a-b7788b79c86c None] [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] Instance disappeared during terminate At this point, `nova list` will show: | 1541a197-9f80-4ee5-a7d6-08e591aa83fd | test0| ERROR | deleting | NOSTATE | | And it appears to be impossible to delete this instance. Running nova reset-state instance has no effect (with or without --active), nor does correctly configuring neutron. The only way to get rid of this instance appears to be directly editing the database. ** Affects: nova Importance: Medium Status: Triaged -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1423952 Title: It is impossible to delete an instance that has failed due to neutron/nova notification problems Status in OpenStack Compute (Nova): Triaged Bug description: If you attempt to boot a nova instance without Neutron properly configured for neutron/nova notifications, the instance will eventually fail to spawn: [-] [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] Instance failed to spawn [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] Traceback (most recent call last): [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] File /usr/lib/python2.7/site-packages/nova/compute/manager.py, line 2243, in _build_resources [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] yield resources [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] File /usr/lib/python2.7/site-packages/nova/compute/manager.py, line 2113, in _build_and_run_instance [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] block_device_info=block_device_info) [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 2622, in spawn [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] block_device_info, disk_info=disk_info) [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] File /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 4439, in _create_domain_and_network [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] raise exception.VirtualInterfaceCreateException() [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] VirtualInterfaceCreateException: Virtual Interface creation failed If you try to delete this instance, the delete operation will fail. In the logs, you see: AUDIT nova.compute.manager [req-a4b30d0b-e6d3-429f-8f7a-b7788b79c86c None] [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] Terminating instance WARNING nova.virt.libvirt.driver [-] [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] During wait destroy, instance disappeared. INFO nova.virt.libvirt.driver [req-a4b30d0b-e6d3-429f-8f7a-b7788b79c86c None] [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd]
[Yahoo-eng-team] [Bug 1415336] Re: 'type' parameter should be replaced by 'healthmonitor_type' in loadbalancer
** Changed in: neutron Milestone: None => kilo-3 ** Changed in: neutron Importance: Undecided => Low ** Also affects: python-neutronclient Importance: Undecided Status: New ** Changed in: python-neutronclient Importance: Undecided => Low ** Changed in: python-neutronclient Milestone: None => 2.3.12 ** Changed in: python-neutronclient Assignee: (unassigned) => Amandeep (amandeep-m) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1415336 Title: 'type' parameter should be replaced by 'healthmonitor_type' in loadbalancer Status in OpenStack Neutron (virtual network service): New Status in Python client library for Neutron: New Bug description: As per the v2 API specification, the load balancer health monitor has a parameter named 'type' which cannot be parsed by the JSON parser, so it must be replaced by 'healthmonitor_type', as per the OpenDaylight bug (https://bugs.opendaylight.org/show_bug.cgi?id=1674). Further information related to the lbaas health monitor can be found here: http://docs.openstack.org/api/openstack-network/2.0/content/POST_createHealthMonitor__v2.0_healthmonitors_lbaas_ext_ops_health_monitor.html To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1415336/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
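To make the proposed rename concrete, here is an illustrative sketch. Only the 'type'/'healthmonitor_type' key names come from the report; the other fields (delay, timeout, max_retries) are typical health monitor attributes included as assumptions for completeness.

```python
import json

# Current payload shape, with the problematic attribute name 'type':
current = {"healthmonitor": {"type": "HTTP", "delay": 5,
                             "timeout": 3, "max_retries": 2}}

# Proposed shape: the key is renamed so consumers that treat 'type'
# specially (e.g. the OpenDaylight parser cited in the report) can
# handle the payload.
proposed = {"healthmonitor": {"healthmonitor_type": "HTTP", "delay": 5,
                              "timeout": 3, "max_retries": 2}}

print(json.dumps(proposed, sort_keys=True))
```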
[Yahoo-eng-team] [Bug 1424139] [NEW] Misspelled words in neutron code
Public bug reported: Fixing misspelled words in neutron code. ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1424139 Title: Misspelled words in neutron code Status in OpenStack Neutron (virtual network service): New Bug description: Fixing misspelled words in neutron code. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1424139/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1423885] Re: nova flavor-show inconsistent with mixed case in names
** Project changed: nova => python-novaclient -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1423885 Title: nova flavor-show inconsistent with mixed case in names Status in Python client library for Nova: New Bug description: The nova flavor-show command accepts a flavor name as its parameter. The search by name works with an exact match and with all-lowercase, but not with any other character casing. E.g. given an existing flavor named 'ASAv', 'nova flavor-show ASAv' returns the same result as 'nova flavor-show asav', but 'nova flavor-show Asav' returns a failure. Also note that 'nova flavor-create Asav ...' rejects flavor creation because the flavor already exists.

# nova flavor-show asav
+----------------------------+--------------------------------------+
| Property                   | Value                                |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                    |
| disk                       | 0                                    |
| extra_specs                | {}                                   |
| id                         | a9215596-5f05-43ff-b150-7344a3112304 |
| name                       | ASAv                                 |
| os-flavor-access:is_public | True                                 |
| ram                        | 2048                                 |
| rxtx_factor                | 1.0                                  |
| swap                       |                                      |
| vcpus                      | 1                                    |
+----------------------------+--------------------------------------+

# nova flavor-show ASAv
+----------------------------+--------------------------------------+
| Property                   | Value                                |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                    |
| disk                       | 0                                    |
| extra_specs                | {}                                   |
| id                         | a9215596-5f05-43ff-b150-7344a3112304 |
| name                       | ASAv                                 |
| os-flavor-access:is_public | True                                 |
| ram                        | 2048                                 |
| rxtx_factor                | 1.0                                  |
| swap                       |                                      |
| vcpus                      | 1                                    |
+----------------------------+--------------------------------------+

# nova flavor-show Asav
ERROR: No flavor with a name or ID of 'Asav' exists.

# nova flavor-create Asav 10 1024 10 1
ERROR: Flavor with name Asav already exists. (HTTP 409) (Request-ID: req-c90d775c-6846-47e5-a32e-9badf568fbd1)

To manage notifications about this bug go to: https://bugs.launchpad.net/python-novaclient/+bug/1423885/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
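A hypothetical sketch of what consistent resolution could look like (this is not the actual novaclient code; the lookup structure and helper name are illustrative): try an exact ID match first, then compare names case-insensitively so 'ASAv', 'asav', and 'Asav' all resolve to the same flavor.

```python
# Toy flavor catalog keyed by ID, mirroring the flavor in the report.
flavors = {"a9215596-5f05-43ff-b150-7344a3112304": "ASAv"}

def find_flavor(name_or_id):
    # Exact ID match takes priority over name matching.
    if name_or_id in flavors:
        return name_or_id
    # Case-insensitive name match, applied uniformly to every casing.
    wanted = name_or_id.lower()
    matches = [fid for fid, name in flavors.items()
               if name.lower() == wanted]
    if len(matches) == 1:
        return matches[0]
    raise LookupError("No flavor with a name or ID of %r" % name_or_id)

for variant in ("ASAv", "asav", "Asav"):
    print(find_flavor(variant))  # same ID for every casing
```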
[Yahoo-eng-team] [Bug 1407987] Re: ML2 Cisco replay Cfg on loss of Nexus connect
** Project changed: neutron => networking-cisco ** Changed in: networking-cisco Status: New => Fix Committed -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1407987 Title: ML2 Cisco replay Cfg on loss of Nexus connect Status in Cisco Vendor Code for OpenStack Neutron: Fix Committed Bug description: When the connection to the Nexus is lost, replay occurs only when a transaction is incomplete. Performing 'copy run start' is not an optimum solution. Instead we need to detect when the connection is lost even if there is no transaction in flight, then tickle the replay code when that occurs. To manage notifications about this bug go to: https://bugs.launchpad.net/networking-cisco/+bug/1407987/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1422738] Re: Suppress exception when nexus mech replay enabled
** Project changed: neutron => networking-cisco ** Changed in: networking-cisco Status: New => In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1422738 Title: Suppress exception when nexus mech replay enabled Status in Cisco Vendor Code for OpenStack Neutron: In Progress Bug description: When 'switch_heartbeat_timeout' is configured as zero in local.conf beneath [ml2_cisco], Cisco Nexus mechanism driver replay is enabled. When this occurs, disable upper-layer replay by not reporting exceptions from the nexus driver code. To manage notifications about this bug go to: https://bugs.launchpad.net/networking-cisco/+bug/1422738/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1423973] [NEW] Use choices from oslo_config
Public bug reported:

Support went into oslo_config recently that will allow us to use the choices keyword argument from argparse [1]. We should look at leveraging this in Keystone.

[1] https://github.com/openstack/oslo.config/blame/578f9f4e60f58c210a9e1cb455925b9f310fe10e/oslo_config/cfg.py#L932

** Affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1423973

Title:
  Use choices from oslo_config

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Support went into oslo_config recently that will allow us to use the choices keyword argument from argparse [1]. We should look at leveraging this in Keystone.
  [1] https://github.com/openstack/oslo.config/blame/578f9f4e60f58c210a9e1cb455925b9f310fe10e/oslo_config/cfg.py#L932

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1423973/+subscriptions
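The underlying argparse feature that oslo_config's `choices` keyword exposes can be shown with stdlib argparse alone. The option name and values below are made up for illustration; in oslo_config the same constraint would be declared on the option definition (e.g. `cfg.StrOpt(..., choices=[...])`):

```python
import argparse

# Sketch of argparse `choices` validation: an invalid value is rejected
# at parse time, instead of needing hand-written validation in Keystone.
parser = argparse.ArgumentParser()
parser.add_argument("--token-provider", choices=["uuid", "pki"])

args = parser.parse_args(["--token-provider", "pki"])
```

An out-of-set value (e.g. `--token-provider bogus`) makes argparse print a usage error and exit, which is the early-failure behaviour the bug wants Keystone's options to inherit.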
[Yahoo-eng-team] [Bug 1423644] Affects OpenStack/Nova
Davanum, I am running OpenStack Juno. It's running on three pieces of bare metal. 'nova --version' returns 2.19.0 on host compute1. This is running over Ubuntu Server 14.04 LTS. The systems are up to date; I run 'apt-get update && apt-get dist-upgrade' weekly. Please let me know if you need any more information. Thanks!

Affects OpenStack/Nova

Kind Regards,
William

-----Original Message-----
From: boun...@canonical.com [mailto:boun...@canonical.com] On Behalf Of Davanum Srinivas (DIMS)
Sent: Friday, February 20, 2015 11:31 AM
To: greek...@gmail.com
Subject: [Bug 1423644] Re: CPU/MEM usage not accurate. Hits quotas while resources are available

William, what version of OpenStack/Nova? Any specific distribution?

** Changed in: nova
   Status: New => Incomplete

-- 
You received this bug notification because you are subscribed to the bug report.
https://bugs.launchpad.net/bugs/1423644

Title:
  CPU/MEM usage not accurate. Hits quotas while resources are available

Status in OpenStack Compute (Nova):
  Incomplete
Status in Ubuntu:
  New

Bug description:
  I tried to set my quotas to exactly the hardware specs of my compute node for accurate reporting. My compute node runs on bare metal. After I did this I got "quota exceeded, unable to launch instance". I checked the hypervisor from the web interface and CLI: 5 of 12 VCPUs used and 6 GB of 16 GB used. When you try to change the quotas to match the hardware it errors. From the CLI it reports that the quota can't be set to 12 VCPUs because 17 are in use. Same with memory: it says 17 GB are in use. But they clearly aren't in use. So the band-aid is to set the quotas really high; then it ignores them and allows you to create instances against the actual nova usage stats. This also causes some "failure to launch instance" errors (no host is available), meaning there aren't enough resources even though there are. I'm running this as a production environment for my domain, so I have spent hundreds of hours chasing my tail. Hope this is helpful for debugging the issue.
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1423644/+subscriptions

** Also affects: juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1423644

Title:
  CPU/MEM usage not accurate. Hits quotas while resources are available

Status in Juno | Third-party Jupiter Broadcasting client for Ubuntu Touch:
  New
Status in OpenStack Compute (Nova):
  Incomplete
Status in Ubuntu:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/juno/+bug/1423644/+subscriptions
[Yahoo-eng-team] [Bug 1422735] Re: Unknown host exception results in other except
** Project changed: neutron => networking-cisco

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1422735

Title:
  Unknown host exception results in other except

Status in Cisco Vendor Code for OpenStack Neutron:
  New

Bug description:
  When connecting to a Nexus switch for the first time, one must first ssh to the switch; otherwise an exception is raised. When this exception is raised, a couple of other exceptions result. This bug aims to resolve those subsequent exceptions so the correct 'Reason' code is printed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1422735/+subscriptions
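The fix pattern this report describes, surfacing the root cause instead of a secondary exception, can be sketched with explicit exception chaining. `NexusConnectFailed`, `connect`, and `open_session` are hypothetical names for illustration, not the driver's real API (which, in 2015, would also have been Python 2):

```python
class NexusConnectFailed(Exception):
    """Hypothetical exception type carrying an explicit reason string."""

    def __init__(self, reason):
        super().__init__(reason)
        self.reason = reason


def connect(open_session):
    # Wrap the low-level SSH failure exactly once, chaining it with
    # `from`, so the 'Reason' shown to the operator is the root cause
    # (e.g. unknown host) rather than a later exception raised while
    # handling the first one.
    try:
        return open_session()
    except OSError as exc:
        raise NexusConnectFailed(str(exc)) from exc
```

Chaining preserves the original traceback on `__cause__`, so nothing is lost by converting the exception at the driver boundary.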