[Yahoo-eng-team] [Bug 1271838] [NEW] neutron should provide available advanced services API
Public bug reported: I think it would be meaningful to expose the available advanced services to users. For example:

$ neutron adv-service-list

The command would show all available services in detail (service_type, service_provider, status, etc.), including FW/VPN/LB. Users would then have a clear picture of all active services and could make use of every Neutron service. ** Affects: neutron Importance: Undecided Status: New ** Tags: neutron-core -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1271838 Title: neutron should provide available advanced services API Status in OpenStack Neutron (virtual network service): New To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1271838/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1214947] Re: chown in postinst fails on netapp storage
** Project changed: glance => ubuntu ** Changed in: ubuntu Status: Invalid => Confirmed ** Package changed: ubuntu => glance (Ubuntu) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1214947 Title: chown in postinst fails on netapp storage Status in “glance” package in Ubuntu: Confirmed Bug description: We have /var/lib/glance/images on an NFS share served from a NetApp filer. NetApp exports contain .snapshot directories that are read-only, and the chown calls in glance-common's postinst fail on them. I suggest changing chown -R to find -xdev ... Cheers, To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/glance/+bug/1214947/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
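The reporter's find -xdev suggestion could be shaped roughly like the sketch below (a hypothetical replacement, assuming the usual glance:glance ownership; the function name is illustrative). -prune keeps find out of the read-only .snapshot trees and -xdev keeps it on a single filesystem:

```shell
# Hypothetical replacement for the postinst's "chown -R" call.
fix_ownership() {
    # $1 = image directory, $2 = owner spec (e.g. glance:glance).
    # Skip the filer's read-only .snapshot directories entirely and
    # never cross filesystem boundaries (-xdev).
    find "$1" -xdev -name .snapshot -prune -o -exec chown "$2" {} +
}

# As the postinst would call it:
# fix_ownership /var/lib/glance/images glance:glance
```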
[Yahoo-eng-team] [Bug 1268631] Re: Unit tests failing with raise UnknownMethodCallError('management_url')
** Also affects: horizon/grizzly Importance: Undecided Status: New ** Changed in: horizon/grizzly Importance: Undecided => High ** Changed in: horizon/grizzly Status: New => Confirmed ** Changed in: horizon/grizzly Assignee: (unassigned) => Julie Pichon (jpichon) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1268631 Title: Unit tests failing with raise UnknownMethodCallError('management_url') Status in OpenStack Dashboard (Horizon): Fix Released Status in OpenStack Dashboard (Horizon) grizzly series: Confirmed Status in OpenStack Dashboard (Horizon) havana series: In Progress Bug description: A number of unit tests are failing for every review, likely related to the release of keystoneclient 0.4.2:

<fungi> i think python-keystoneclient==0.4.2 may have just broken horizon
<fungi> looks like all python unit test runs for horizon are now failing on keystone-specific tests as of the last few minutes, and the only change in the pip freeze output for the tests is python-keystoneclient==0.4.2 instead of 0.4.1
<bknudson> fungi: UnknownMethodCallError: Method called is not a member of the object: management_url ?
<fungi> horizon will presumably need patching to work around that
<bknudson> Looks like the horizon test is trying to create a mock keystoneclient and creating the mock fails for some reason.
======================================================================
FAIL: test_get_default_role (openstack_dashboard.test.api_tests.keystone_tests.RoleAPITests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/test/api_tests/keystone_tests.py", line 77, in test_get_default_role
    keystoneclient = self.stub_keystoneclient()
  File "/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/test/helpers.py", line 306, in stub_keystoneclient
    self.keystoneclient = self.mox.CreateMock(keystone_client.Client)
  File "/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py", line 258, in CreateMock
    new_mock = MockObject(class_to_mock, attrs=attrs)
  File "/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py", line 556, in __init__
    attr = getattr(class_to_mock, method)
  File "/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py", line 608, in __getattr__
    raise UnknownMethodCallError(name)
UnknownMethodCallError: Method called is not a member of the object: management_url

======================================================================
FAIL: Tests api.keystone.remove_tenant_user
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/test/api_tests/keystone_tests.py", line 61, in test_remove_tenant_user
    keystoneclient = self.stub_keystoneclient()
  File "/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/test/helpers.py", line 306, in stub_keystoneclient
    self.keystoneclient = self.mox.CreateMock(keystone_client.Client)
  File "/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py", line 258, in CreateMock
    new_mock = MockObject(class_to_mock, attrs=attrs)
  File "/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py", line 556, in __init__
    attr = getattr(class_to_mock, method)
  File "/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py", line 608, in __getattr__
    raise UnknownMethodCallError(name)
UnknownMethodCallError: Method called is not a member of the object: management_url
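The mechanism behind the failure can be reproduced without mox. mox builds a mock by iterating dir() of the class to mock and calling getattr(class_to_mock, name) for every name; if dir() advertises a name the class itself cannot resolve (here management_url), that loop raises. The names FakeClient, Meta and create_mock below are illustrative, not keystoneclient or mox code:

```python
# Illustrative reconstruction of the mox failure mode (not the actual
# keystoneclient change): dir() reports an attribute that only real
# instances carry, so getattr() on the class blows up during mock setup.

class Meta(type):
    def __dir__(cls):
        # Advertise an attribute that only instances would have.
        return list(super().__dir__()) + ["management_url"]

class FakeClient(metaclass=Meta):
    def __init__(self):
        self.management_url = "http://keystone:35357/v2.0"

def create_mock(class_to_mock):
    # Simplified version of mox.MockObject.__init__'s attribute scan.
    attrs = {}
    for name in dir(class_to_mock):
        attrs[name] = getattr(class_to_mock, name)  # raises for management_url
    return attrs

try:
    create_mock(FakeClient)
except AttributeError as exc:
    print("mock creation failed:", exc)
```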
[Yahoo-eng-team] [Bug 1271945] [NEW] unit tests shouldn't import NeutronManager class
Public bug reported: Some unit tests import the NeutronManager class directly instead of importing its module and using a module-level reference. ** Affects: neutron Importance: Undecided Assignee: Sylvain Afchain (sylvain-afchain) Status: New ** Changed in: neutron Assignee: (unassigned) => Sylvain Afchain (sylvain-afchain) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1271945 Title: unit tests shouldn't import NeutronManager class Status in OpenStack Neutron (virtual network service): New To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1271945/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
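The reason module-level imports matter in tests can be shown in a few lines (the module and class names below are stand-ins, not Neutron code): "from pkg import Thing" copies the class object into the importer's namespace, so a later monkeypatch of pkg.Thing, as test fixtures commonly do, is invisible to code that bound the name directly.

```python
# Stale-binding demo with a synthetic module standing in for the real one.
import sys
import types

fake = types.ModuleType("fake_manager")

class NeutronManager:               # stand-in for the real manager class
    pass

fake.NeutronManager = NeutronManager
sys.modules["fake_manager"] = fake

from fake_manager import NeutronManager as DirectRef  # class-level import
import fake_manager                                   # module-level import

class PatchedManager:               # what a test fixture would substitute
    pass

fake_manager.NeutronManager = PatchedManager          # the monkeypatch

print(DirectRef is PatchedManager)                    # stale binding
print(fake_manager.NeutronManager is PatchedManager)  # sees the patch
```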
[Yahoo-eng-team] [Bug 1271948] [NEW] Unexpected response in agent-list command with --field param
Public bug reported: On Neutron Havana (server and client). The documentation for the agent-list command reads:

# neutron help agent-list
...
-F FIELD, --field FIELD  specify the field(s) to be returned by server, can be repeated
...

There seems to be a problem with the --field FIELD parameter. Running the command gives an unexpected response:

# neutron agent-list --field agent_type
'alive'

'alive' is NOT the expected response for that request. I expected to obtain something like:

# neutron agent-list --field agent_type
+--------------------+
| agent_type         |
+--------------------+
| DHCP agent         |
| Open vSwitch agent |
+--------------------+

However, requesting the alive field returns the expected result:

# neutron agent-list --field alive
+-------+
| alive |
+-------+
| :-)   |
| :-)   |
+-------+

** Affects: neutron Importance: Undecided Assignee: Marcos Lobo (marcos-fermin-lobo) Status: New ** Changed in: neutron Assignee: (unassigned) => Marcos Lobo (marcos-fermin-lobo) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1271948 Title: Unexpected response in agent-list command with --field param Status in OpenStack Neutron (virtual network service): New To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1271948/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1271958] [NEW] nova compute fail to remove instance with port if network is broken
Public bug reported: If a user manages to create a broken network configuration, the instance becomes undeletable. Why a user can create broken networking is under investigation (current hypothesis: if the network (neutron) is created in one tenant and the instance in another, and the user is admin in both tenants, the configuration ends up broken). Deleting such an instance causes a traceback in nova-compute:

Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 461, in _process_data
    **args)
  File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py", line 172, in dispatch
    result = getattr(proxyobj, method)(ctxt, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 353, in decorated_function
    return function(self, context, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 90, in wrapped
    payload)
  File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 73, in wrapped
    return f(self, context, *args, **kw)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 243, in decorated_function
    pass
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 229, in decorated_function
    return function(self, context, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 294, in decorated_function
    function(self, context, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 271, in decorated_function
    e, sys.exc_info())
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 258, in decorated_function
    return function(self, context, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1792, in terminate_instance
    do_terminate_instance(instance, bdms)
  File "/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", line 246, in inner
    return f(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1784, in do_terminate_instance
    reservations=reservations)
  File "/usr/lib/python2.7/dist-packages/nova/hooks.py", line 105, in inner
    rv = f(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1757, in _delete_instance
    user_id=user_id)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1729, in _delete_instance
    self._shutdown_instance(context, db_inst, bdms)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1639, in _shutdown_instance
    network_info = self._get_instance_nw_info(context, instance)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 876, in _get_instance_nw_info
    instance)
  File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 455, in get_instance_nw_info
    result = self._get_instance_nw_info(context, instance, networks)
  File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 463, in _get_instance_nw_info
    nw_info = self._build_network_info_model(context, instance, networks)
  File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 1009, in _build_network_info_model
    subnets)
  File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 962, in _nw_info_build_network
    label=network_name,
UnboundLocalError: local variable 'network_name' referenced before assignment

The cause is the following code:

    def _nw_info_build_network(self, port, networks, subnets):
        # NOTE(danms): This loop can't fail to find a network since we
        # filtered ports to only the ones matching networks in our parent
        for net in networks:
            if port['network_id'] == net['id']:
                network_name = net['name']
                break

If no matching net is found, network_name is never assigned.
The following patch should allow instance deletion in case of networking problems:

diff --git a/nova/network/neutronv2/api.py b/nova/network/neutronv2/api.py
index a41924d..8a44f99 100644
--- a/nova/network/neutronv2/api.py
+++ b/nova/network/neutronv2/api.py
@@ -939,6 +939,8 @@ class API(base.Base):
             if port['network_id'] == net['id']:
                 network_name = net['name']
                 break
+        else:
+            network_name = None

         bridge = None
         ovs_interfaceid = None

** Affects: nova Importance: Undecided Status: New ** Attachment added: Fix deletion of instances with broken networking https://bugs.launchpad.net/bugs/1271958/+attachment/3955123/+files/nova-compute-fix-broken-net-instance-deletion.patch
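The fix relies on Python's for/else: the else suite runs only when the loop completes without hitting break, which is exactly the "no matching network" case. A standalone sketch with made-up data (this is not the Nova function, just its shape):

```python
# for/else demo mirroring the proposed patch (illustrative data).
def nw_info_build_network(port, networks):
    for net in networks:
        if port['network_id'] == net['id']:
            network_name = net['name']
            break
    else:
        # Loop fell through without break: broken config, port
        # references a network we were not given.
        network_name = None
    return network_name

nets = [{'id': 'a', 'name': 'private'}]
print(nw_info_build_network({'network_id': 'a'}, nets))   # private
print(nw_info_build_network({'network_id': 'zz'}, nets))  # None
```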
[Yahoo-eng-team] [Bug 1271966] [NEW] Not possible to spawn vmware instance with multiple disks
Public bug reported: The behaviour of spawn() in the vmwareapi driver with respect to images and block device mappings is currently as follows:

- If there are any block device mappings, images are ignored.
- If there are any block device mappings, the last becomes the root device and all others are ignored.

This means that, for example, the following scenarios are not possible:

1. Spawn an instance with a root device from an image, and a secondary volume
2. Spawn an instance with a volume as a root device, and a secondary volume

The behaviour of the libvirt driver is as follows:

- If there is an image, it will be the root device unless there is also a block device mapping for the root device.
- All block device mappings are used.
- If there are multiple block device mappings for the same device, the last one is used.

The vmwareapi driver's behaviour is surprising, and should be modified to follow the libvirt driver. ** Affects: nova Importance: Undecided Status: New ** Tags: vmware -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1271966 Title: Not possible to spawn vmware instance with multiple disks Status in OpenStack Compute (Nova): New To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1271966/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
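The libvirt-style rules requested above can be sketched in a few lines (a hedged sketch, not Nova code; select_disks, the device names and the dict layout are all illustrative): the image backs the root device unless a mapping also targets it, every mapping is honoured, and for duplicate device names the last mapping wins.

```python
# Sketch of libvirt-style disk selection under the assumptions above.
def select_disks(image, bdms, root_dev='vda'):
    disks = {}
    if image is not None:
        disks[root_dev] = ('image', image)       # image backs root by default
    for bdm in bdms:                             # later entries overwrite
        disks[bdm['device']] = ('volume', bdm['volume_id'])
    return disks

bdms = [{'device': 'vdb', 'volume_id': 'vol-1'},
        {'device': 'vdb', 'volume_id': 'vol-2'}]  # duplicate device: last wins
print(select_disks('img-1', bdms))
# {'vda': ('image', 'img-1'), 'vdb': ('volume', 'vol-2')}
```

With a mapping for root_dev in bdms, the image entry is overwritten, giving scenario 2 (volume as root plus a secondary volume) for free.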
[Yahoo-eng-team] [Bug 1263122] Re: services should not restart on SIGHUP when running in the foreground
** Changed in: oslo Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1263122 Title: services should not restart on SIGHUP when running in the foreground Status in devstack - openstack dev environments: Confirmed Status in OpenStack Compute (Nova): Fix Released Status in Oslo - a Library of Common OpenStack Code: Fix Released Bug description: As reported on the mailing list (http://lists.openstack.org/pipermail/openstack-dev/2013-December/022796.html) the behavior of the ServiceLauncher has changed in a way that breaks devstack. The work for blueprint https://blueprints.launchpad.net/oslo/+spec/cfg-reload-config-files introduced changes to have the process restart on SIGHUP, but screen under devstack also uses that signal to kill the services. That means lots of developers are having to manually kill services to avoid having multiple copies running. To fix the problem we should only restart on SIGHUP when not running in the foreground. There are a few suggestions for detecting foreground operation on http://stackoverflow.com/questions/2425005/how-do-i-know-if-an-c-programs-executable-is-run-in-foreground-or-background To manage notifications about this bug go to: https://bugs.launchpad.net/devstack/+bug/1263122/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
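One common foreground test from the approaches discussed on the linked Stack Overflow thread (illustrative, not the oslo implementation): a process is in the foreground when its process group matches the controlling terminal's foreground process group.

```python
# POSIX foreground check; function name is our own, not oslo's.
import os
import sys

def running_in_foreground():
    try:
        return os.getpgrp() == os.tcgetpgrp(sys.stdin.fileno())
    except (OSError, ValueError):
        # No controlling terminal: daemon, pipe, or CI job.
        return False
```

A launcher could then register the SIGHUP restart handler only when not in the foreground, e.g. `if not running_in_foreground(): signal.signal(signal.SIGHUP, restart_handler)`, so screen's SIGHUP still terminates the service.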
[Yahoo-eng-team] [Bug 1270212] Re: regression: multiple calls to Message.__mod__ trigger exceptions
** Changed in: oslo Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1270212 Title: regression: multiple calls to Message.__mod__ trigger exceptions Status in OpenStack Neutron (virtual network service): New Status in Oslo - a Library of Common OpenStack Code: Fix Released Bug description: http://logs.openstack.org/58/63558/6/check/gate-neutron-python27/ec233e7/console.html To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1270212/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1247217] Re: Sanitize passwords when logging payload in wsgi for API Extensions
** Changed in: oslo Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1247217 Title: Sanitize passwords when logging payload in wsgi for API Extensions Status in OpenStack Compute (Nova): Fix Released Status in Oslo - a Library of Common OpenStack Code: Fix Released Bug description: The fix for bug 1231263 ( https://bugs.launchpad.net/nova/+bug/1231263 ) addressed not logging the clear-text password in the nova wsgi.py module for the adminPass attribute of the Server Change Password REST API, but only that specific attribute. Since Nova supports REST API Extensions (in the contrib directory), there could be any number of other password-related attributes in the request/response bodies of those additional extensions. Although it is not possible to know all of the sensitive attributes these APIs pass in requests and responses (the only way to totally eliminate the exposure would be to not log the request/response at all, which is useful for debugging), I would like to propose a change similar to the one made in keystone (under https://bugs.launchpad.net/keystone/+bug/1166697) to mask the password in the log statement for any attribute whose name contains the password sub-string. The change would in essence update the _SANITIZE_KEYS / _SANITIZE_PATTERNS lists in the nova/api/openstack/wsgi.py module to include a pattern for the password sub-string. Also, for a slight performance benefit, it may be useful to check whether debug logging is enabled before the debug statement that does the sanitize call, since request/response bodies can be fairly large and we would not want to pay for the pattern matches when debug is off. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1247217/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
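A hedged sketch of the proposed masking, modeled on keystone's fix for bug 1166697 (the pattern and function name are illustrative, not the actual nova/keystone code): mask the value of any JSON key whose name contains "pass" before the payload reaches the log.

```python
import re

# Illustrative pattern list: any quoted JSON key containing "pass"
# (adminPass, password, new_pass, ...) gets its value masked.
_SANITIZE_PATTERNS = [
    re.compile(r'("[^"]*pass[^"]*"\s*:\s*")[^"]*(")', re.IGNORECASE),
]

def sanitize(payload):
    for pattern in _SANITIZE_PATTERNS:
        payload = pattern.sub(r'\1***\2', payload)
    return payload

print(sanitize('{"server": {"adminPass": "s3cr3t"}}'))
# {"server": {"adminPass": "***"}}
```

Per the performance note above, the caller would guard the whole thing with something like `if LOG.isEnabledFor(logging.DEBUG): LOG.debug(sanitize(body))` so the regex work is skipped when debug logging is off.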
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1247217/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1245629] Re: keystone is ignoring debug=True
** Changed in: oslo Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1245629 Title: keystone is ignoring debug=True Status in OpenStack Identity (Keystone): In Progress Status in Oslo - a Library of Common OpenStack Code: Fix Released Bug description: even after setting debug = True and verbose = True in keystone.conf, default_log_levels[keystone] stays on INFO, preventing (for example) the identity drivers from producing any debug output, thus making it impossible to track problems with them. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1245629/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1095346] Re: Excessive CPU usage in ProcessLauncher()'s wait loop
** Changed in: oslo Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1095346 Title: Excessive CPU usage in ProcessLauncher()'s wait loop Status in OpenStack Neutron (virtual network service): In Progress Status in Oslo - a Library of Common OpenStack Code: Fix Released Bug description: See https://review.openstack.org/18689 for some background We can't use os.wait() to block until a child exited so, instead, we're busy-looping We should be able to come up with another way of doing this - e.g. using pipes provided to each child to give us a selectable handle we can block on To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1095346/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
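The pipe-per-child idea mentioned above can be sketched as follows (a minimal POSIX illustration, not the oslo code): each child inherits the write end of a pipe and never writes to it; when the child exits, the kernel closes that end, the read end hits EOF, and the parent's select() wakes up, so no polling loop is needed.

```python
import os
import select

# Parent creates a pipe and forks. The child's exit closes its inherited
# write end, which makes the parent's read end selectable.
read_fd, write_fd = os.pipe()
pid = os.fork()
if pid == 0:                    # child
    os.close(read_fd)
    os._exit(0)                 # exiting closes the inherited write end
os.close(write_fd)              # parent keeps only the read end
select.select([read_fd], [], [])        # blocks until the child exits
reaped_pid, status = os.waitpid(pid, 0)  # now returns promptly
os.close(read_fd)
print("reaped child", reaped_pid)
```

With several children, the parent would select() over all the read ends at once and reap whichever fired, replacing the busy loop entirely.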
[Yahoo-eng-team] [Bug 1266962] Re: Remove set_time_override in timeutils
** Changed in: oslo.messaging Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1266962 Title: Remove set_time_override in timeutils Status in OpenStack Telemetry (Ceilometer): In Progress Status in Cinder: Invalid Status in Gantt: New Status in OpenStack Image Registry and Delivery Service (Glance): Triaged Status in Ironic (Bare Metal Provisioning): In Progress Status in OpenStack Identity (Keystone): In Progress Status in Manila: New Status in OpenStack Message Queuing Service (Marconi): In Progress Status in OpenStack Compute (Nova): Triaged Status in Oslo - a Library of Common OpenStack Code: Triaged Status in Messaging API for OpenStack: Fix Released Status in Python client library for Keystone: Fix Committed Status in Python client library for Nova: Fix Committed Status in Tuskar: Fix Released Bug description: set_time_override was written as a helper function to mock utcnow in unittests. However we now use mock or fixture to mock our objects, so set_time_override has become obsolete. We should first remove all usage of set_time_override from downstream projects before deleting it from oslo. To manage notifications about this bug go to: https://bugs.launchpad.net/ceilometer/+bug/1266962/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
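The mock-based replacement mentioned in the description looks roughly like this (Clock and make_timestamp are illustrative stand-ins for code that calls timeutils.utcnow, not project code): instead of set_time_override, the test pins the time source with unittest.mock.

```python
import datetime
from unittest import mock

class Clock:
    """Stand-in for a module-level utcnow() time source."""
    @staticmethod
    def utcnow():
        return datetime.datetime.utcnow()

def make_timestamp(clock=Clock):
    return clock.utcnow().isoformat()

# In a test, pin the clock with mock.patch instead of set_time_override:
fixed = datetime.datetime(2014, 1, 1, 12, 0, 0)
with mock.patch.object(Clock, "utcnow", return_value=fixed):
    print(make_timestamp())   # 2014-01-01T12:00:00
```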
[Yahoo-eng-team] [Bug 1257295] Re: openstack is full of misspelled words
** Changed in: oslo Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1257295 Title: openstack is full of misspelled words Status in OpenStack Neutron (virtual network service): In Progress Status in OpenStack Compute (Nova): In Progress Status in Oslo - a Library of Common OpenStack Code: Fix Released Status in Python Build Reasonableness: Fix Committed Status in Python client library for Nova: Fix Committed Bug description: List of known misspellings http://paste.openstack.org/show/54354 Generated with:

pip install misspellings
git ls-files | grep -v locale | misspellings -f -

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1257295/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1251700] Re: migration error: invalid version number '0.7.3.dev'
** Changed in: oslo Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1251700 Title: migration error: invalid version number '0.7.3.dev' Status in Ironic (Bare Metal Provisioning): Fix Released Status in OpenStack Compute (Nova): Fix Released Status in Oslo - a Library of Common OpenStack Code: Fix Released Status in Tuskar: Fix Released Bug description: Using a tripleO seed VM I hit this issue today when trying to run the nova db migrations:

(nova)[root@localhost migrate]# /opt/stack/venvs/nova/bin/nova-manage --debug --verbose db sync
Command failed, please check log for more info
2013-11-15 16:53:18,579 9082 CRITICAL nova [-] invalid version number '0.7.3.dev'
2013-11-15 16:53:18,579 9082 TRACE nova Traceback (most recent call last):
  File "/opt/stack/venvs/nova/bin/nova-manage", line 10, in <module>
    sys.exit(main())
  File "/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/cmd/manage.py", line 1378, in main
    ret = fn(*fn_args, **fn_kwargs)
  File "/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/cmd/manage.py", line 886, in sync
    return migration.db_sync(version)
  File "/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/db/migration.py", line 31, in db_sync
    return IMPL.db_sync(version=version)
  File "/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/utils.py", line 438, in __getattr__
    backend = self.__get_backend()
  File "/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/utils.py", line 434, in __get_backend
    self.__backend = __import__(name, None, None, fromlist)
  File "/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/db/sqlalchemy/migration.py", line 52, in <module>
    dist_version.StrictVersion(migrate.__version__) < MIN_PKG_VERSION):
  File "/usr/lib64/python2.7/distutils/version.py", line 40, in __init__
    self.parse(vstring)
  File "/usr/lib64/python2.7/distutils/version.py", line 107, in parse
    raise ValueError, "invalid version number '%s'" % vstring
ValueError: invalid version number '0.7.3.dev'

To manage notifications about this bug go to: https://bugs.launchpad.net/ironic/+bug/1251700/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
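The rejection is inherent to distutils' StrictVersion grammar, which only accepts N.N[.N] with an optional aN/bN pre-release tag and has no room for a .dev suffix. The regex below reproduces that shape for illustration (it mirrors, but is not, the pattern in distutils.version):

```python
import re

# Shape accepted by distutils' StrictVersion: N.N[.N][a|b N]
STRICT = re.compile(r'^(\d+)\.(\d+)(\.(\d+))?([ab](\d+))?$')

for version in ('0.7.3', '1.0a1', '0.7.3.dev'):
    print(version, '->', 'ok' if STRICT.match(version) else 'invalid')
```

This is why the eventual fixes moved such checks away from StrictVersion, which cannot parse the '.dev' versions produced by development builds of sqlalchemy-migrate.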
[Yahoo-eng-team] [Bug 1265561] Re: Log message printed for unhandled exception is not very helpful.
** Changed in: oslo Status: Fix Committed => Fix Released ** Changed in: oslo Milestone: None => icehouse-2 -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1265561 Title: Log message printed for unhandled exception is not very helpful. Status in OpenStack Image Registry and Delivery Service (Glance): In Progress Status in OpenStack Compute (Nova): In Progress Status in Oslo - a Library of Common OpenStack Code: Fix Released Bug description: Currently, on an unhandled exception, the error message logged through sys.excepthook is not very helpful: it prints only the exception_value. https://github.com/openstack/oslo-incubator/blob/master/openstack/common/log.py#L396 Fix: Make the log message print both the exception_type and exception_value of the unhandled exception. PS: Currently the traceback is printed only when VERBOSE is ON. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1265561/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
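A sketch of the proposed fix (illustrative, not the oslo-incubator code; format_unhandled and logging_excepthook are our own names): the hook logs the exception type name alongside the value, and emits the traceback only at debug level, matching the VERBOSE behaviour noted above.

```python
import logging
import sys
import traceback

LOG = logging.getLogger('example')

def format_unhandled(exc_type, value):
    # Proposed message: include the exception type, not only its value.
    return 'Unhandled exception: %s: %s' % (exc_type.__name__, value)

def logging_excepthook(exc_type, value, tb):
    LOG.critical(format_unhandled(exc_type, value))
    if LOG.isEnabledFor(logging.DEBUG):   # traceback only when verbose
        LOG.debug(''.join(traceback.format_exception(exc_type, value, tb)))

sys.excepthook = logging_excepthook
```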
[Yahoo-eng-team] [Bug 1262148] Re: duplicated definition of config option memcached_servers
** Changed in: oslo Status: Fix Committed = Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1262148 Title: duplicated definition of config option memcached_servers Status in OpenStack Compute (Nova): Invalid Status in Oslo - a Library of Common OpenStack Code: Fix Released Status in Python client library for Keystone: Invalid Bug description: When using the latest config generator from oslo-incubator, see https://review.openstack.org/62815, and trying to generate the nova.conf.sample by using the following commands: NOVA_CONFIG_GENERATOR_EXTRA_MODULES=keystoneclient.middleware.auth_token tools/config/generate_sample.sh -p nova 2013-12-18 18:22:03.187 30506 CRITICAL nova [-] Unable to find group for option memcached_servers, maybe it's defined twice in the same group? This is because the config option memcached_servers is defined both in the python module keystoneclient.middleware.auth_token and nova.openstack.common.memorycache. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1262148/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
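The generator's failure mode can be modelled with a toy merge of option names from two modules; this helper is hypothetical, not the real oslo config-generator code:

```python
def merge_option_names(*sources):
    """Merge (module, [option names]) pairs, failing when a name is
    registered by two modules -- the collision the generator hits."""
    owners = {}
    for module, names in sources:
        for name in names:
            if name in owners:
                raise ValueError('option %r is defined in both %s and %s'
                                 % (name, owners[name], module))
            owners[name] = module
    return owners
```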
[Yahoo-eng-team] [Bug 1257829] Re: Misspelled encryption field in QemuImgInfo
** Changed in: oslo Status: Fix Committed = Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1257829 Title: Misspelled encryption field in QemuImgInfo Status in Cinder: In Progress Status in OpenStack Compute (Nova): In Progress Status in Oslo - a Library of Common OpenStack Code: Fix Released Bug description: Location: openstack.common.imageutils.QemuImgInfo Method: __init__ Error: line 45, self.encryption = details.get('encryption') The parsing of the encryption field for qemu-img commands does not work. The key used to index the details dictionary for encryption information, 'encryption', does not match the key generated by qemu-img, 'encrypted'. As a result, the encryption field is always 'None', regardless of the image's encryption status. Example call to 'qemu-img info':
$ qemu-img info encrypted_disk.qcow2
Disk image 'encrypted_disk.qcow2' is encrypted.
password:
image: encrypted_disk.qcow2
file format: qcow2
virtual size: 16G (17179869184 bytes)
disk size: 136K
encrypted: yes
cluster_size: 65536
backing file: debian_squeeze_i386_standard.qcow2 (actual path: debian_squeeze_i386_standard.qcow2)
Proposed Fix: Simply change the key used to index the encryption information: self.encrypted = details.get('encrypted') Since the fields in __init__ seem to be named to match the keys used to index the corresponding information, I would also propose changing the attribute from self.encryption to self.encrypted, and updating any references to it wherever appropriate. To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1257829/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
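The mismatch is visible with a much-simplified version of the 'key: value' parsing imageutils performs (the real QemuImgInfo handles sizes and backing files specially; this sketch only shows the lookup bug):

```python
def parse_qemu_img_info(output):
    """Collect 'key: value' lines from qemu-img info output (simplified)."""
    details = {}
    for line in output.splitlines():
        key, sep, value = line.partition(':')
        if sep and value.strip():
            details[key.strip()] = value.strip()
    return details

SAMPLE = """image: encrypted_disk.qcow2
file format: qcow2
encrypted: yes
cluster_size: 65536"""
```

Against this sample, details.get('encryption') is always None while details.get('encrypted') returns 'yes', which is exactly the proposed one-line fix.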
[Yahoo-eng-team] [Bug 1259229] Re: Undesired migrate repository path caching.
** Changed in: oslo Status: Fix Committed = Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1259229 Title: Undesired migrate repository path caching. Status in OpenStack Identity (Keystone): Triaged Status in Oslo - a Library of Common OpenStack Code: Fix Released Bug description: Oslo.db migration.py uses singleton to store defined `migrate.versioning.repository.Repository` instance containing migration repository path. Most of plugins have its own migrate repo. Usage of singleton in point of Repository instances causes implicit and unexpected behavior therefore as current Repository instance may contain inappropriate path to migrate repo. Solution: do not use singleton. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1259229/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
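The caching problem is easy to model with a toy stand-in (names hypothetical; the real code caches a migrate Repository instance at module level):

```python
class _MigrationModule:
    """Toy model of the bug: a module-level cache means the first
    migrate-repo path wins for every later caller."""
    _REPOSITORY = None

def get_repository_cached(path):
    if _MigrationModule._REPOSITORY is None:
        _MigrationModule._REPOSITORY = path   # first plugin's repo is cached...
    return _MigrationModule._REPOSITORY       # ...and handed to every plugin

def get_repository_fixed(path):
    # The proposed fix: build a Repository per requested path, no singleton.
    return path
```

With two plugins, the second one silently migrates against the first plugin's repository, which is the "implicit and unexpected behavior" the report describes.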
[Yahoo-eng-team] [Bug 1257293] Re: [messaging] QPID broadcast RPC requests to all servers for a given topic
** Changed in: oslo Status: Fix Committed = Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1257293 Title: [messaging] QPID broadcast RPC requests to all servers for a given topic Status in OpenStack Telemetry (Ceilometer): Fix Released Status in Ceilometer havana series: Fix Released Status in Cinder: Fix Released Status in Cinder havana series: Fix Released Status in Orchestration API (Heat): Fix Released Status in heat havana series: Fix Released Status in OpenStack Identity (Keystone): Fix Released Status in Keystone havana series: In Progress Status in OpenStack Neutron (virtual network service): Fix Released Status in neutron havana series: Fix Released Status in OpenStack Compute (Nova): Fix Released Status in OpenStack Compute (nova) havana series: Fix Released Status in Oslo - a Library of Common OpenStack Code: Fix Released Status in oslo havana series: Fix Committed Status in Messaging API for OpenStack: Fix Committed Bug description: According to the oslo.messaging documentation, when a RPC request is made to a given topic, and there are multiple servers for that topic, only _one_ server should service that RPC request. See http://docs.openstack.org/developer/oslo.messaging/target.html topic (str) – A name which identifies the set of interfaces exposed by a server. Multiple servers may listen on a topic and messages will be dispatched to one of the servers in a round-robin fashion. In the case of a QPID-based deployment using topology version 2, this is not the case. Instead, each listening server gets a copy of the RPC and will process it. 
For more detail, see https://bugs.launchpad.net/oslo/+bug/1178375/comments/26 To manage notifications about this bug go to: https://bugs.launchpad.net/ceilometer/+bug/1257293/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1229324] Re: extraneous vim editor configuration comments
** Changed in: oslo Status: Fix Committed = Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1229324 Title: extraneous vim editor configuration comments Status in OpenStack Telemetry (Ceilometer): Fix Released Status in Cinder: Fix Released Status in OpenStack Image Registry and Delivery Service (Glance): Fix Released Status in Orchestration API (Heat): In Progress Status in OpenStack Neutron (virtual network service): New Status in OpenStack Compute (Nova): New Status in Oslo - a Library of Common OpenStack Code: Fix Released Status in Python client library for Ceilometer: In Progress Status in Python client library for Glance: Fix Committed Status in Python client library for heat: Fix Committed Status in Python client library for Neutron: New Status in OpenStack Object Storage (Swift): New Status in Tempest: Fix Released Bug description: Many of the source code files have a beginning line # vim: tabstop=4 shiftwidth=4 softtabstop=4 This should be deleted. Many of these lines are in the ceilometer/openstack/common directory. To manage notifications about this bug go to: https://bugs.launchpad.net/ceilometer/+bug/1229324/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
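The cleanup itself is mechanical; a sketch of the transformation the patches apply by hand:

```python
import re

# Matches the '# vim: tabstop=4 ...' editor-configuration comment.
_MODELINE = re.compile(r'^#\s*vim:')

def strip_vim_modelines(source):
    """Drop '# vim: ...' lines from a source file's text."""
    kept = [line for line in source.splitlines() if not _MODELINE.match(line)]
    return '\n'.join(kept)
```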
[Yahoo-eng-team] [Bug 1215627] Re: Keystone Should re-use non-expired tokens, instead of generating new tokens.
As Dolph pointed out this is not the direction we are going with tokens. This issue will be handled by ephemeral tokens. ** Changed in: keystone Status: Confirmed = Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1215627 Title: Keystone Should re-use non-expired tokens, instead of generating new tokens. Status in OpenStack Identity (Keystone): Invalid Bug description: Keystone should re-use user non-expired tokens on create requests. If user requests a new token, then Keystone should authenticate and instead of generating a new token, reuse an existing token. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1215627/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1262424] Re: Files without code should not contain copyright notices
** Changed in: oslo Status: Fix Committed = Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1262424 Title: Files without code should not contain copyright notices Status in OpenStack Telemetry (Ceilometer): Fix Released Status in Cinder: Fix Released Status in Orchestration API (Heat): Fix Released Status in OpenStack Dashboard (Horizon): Fix Released Status in Ironic (Bare Metal Provisioning): Fix Released Status in OpenStack Message Queuing Service (Marconi): Triaged Status in OpenStack Neutron (virtual network service): In Progress Status in OpenStack Compute (Nova): New Status in Oslo - a Library of Common OpenStack Code: Fix Released Status in Python client library for Cinder: Fix Committed Status in Python client library for Neutron: In Progress Status in OpenStack Command Line Client: In Progress Status in Trove client binding: In Progress Status in Tempest: In Progress Status in Trove - Database as a Service: In Progress Bug description: Due to a recent policy change in HACKING (http://docs.openstack.org/developer/hacking/#openstack-licensing), empty files should no longer contain copyright notices. To manage notifications about this bug go to: https://bugs.launchpad.net/ceilometer/+bug/1262424/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1271331] Re: unit test failure in gate nova.tests.db.test_sqlite.TestSqlite.test_big_int_mapping
** Changed in: oslo Status: Fix Committed = Fix Released ** Changed in: oslo Milestone: None = icehouse-2 -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1271331 Title: unit test failure in gate nova.tests.db.test_sqlite.TestSqlite.test_big_int_mapping Status in OpenStack Compute (Nova): In Progress Status in Oslo - a Library of Common OpenStack Code: Fix Released Bug description: We are occasionally seeing the test nova.tests.db.test_sqlite.TestSqlite.test_big_int_mapping fail in the gate due to
Traceback (most recent call last):
  File nova/tests/db/test_sqlite.py, line 53, in test_big_int_mapping
    output, _ = utils.execute(get_schema_cmd, shell=True)
  File nova/utils.py, line 166, in execute
    return processutils.execute(*cmd, **kwargs)
  File nova/openstack/common/processutils.py, line 168, in execute
    result = obj.communicate()
  File /usr/lib/python2.7/subprocess.py, line 754, in communicate
    return self._communicate(input)
  File /usr/lib/python2.7/subprocess.py, line 1314, in _communicate
    stdout, stderr = self._communicate_with_select(input)
  File /usr/lib/python2.7/subprocess.py, line 1438, in _communicate_with_select
    data = os.read(self.stdout.fileno(), 1024)
OSError: [Errno 11] Resource temporarily unavailable
logstash query: message:FAIL: nova.tests.db.test_sqlite.TestSqlite.test_big_int_mapping http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRkFJTDogbm92YS50ZXN0cy5kYi50ZXN0X3NxbGl0ZS5UZXN0U3FsaXRlLnRlc3RfYmlnX2ludF9tYXBwaW5nXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImFsbCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTAzMzk1MTU1NDcsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0= To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1271331/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : 
https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
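Errno 11 is EAGAIN: the os.read in the traceback fails because the child's stdout descriptor has been switched to non-blocking mode (eventlet's monkey patching is one way that happens under test runners). A minimal POSIX reproduction of the same error, assuming nothing about the nova test itself:

```python
import fcntl
import os

# Reading a non-blocking pipe with no data raises EAGAIN
# (BlockingIOError in Python 3, OSError errno 11 in Python 2).
r, w = os.pipe()
flags = fcntl.fcntl(r, fcntl.F_GETFL)
fcntl.fcntl(r, fcntl.F_SETFL, flags | os.O_NONBLOCK)
try:
    os.read(r, 1024)
    raised = False
except BlockingIOError:
    raised = True
finally:
    os.close(r)
    os.close(w)
```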
[Yahoo-eng-team] [Bug 1253497] Re: Replace uuidutils.generate_uuid() with str(uuid.uuid4())
** Changed in: oslo Status: Fix Committed = Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1253497 Title: Replace uuidutils.generate_uuid() with str(uuid.uuid4()) Status in OpenStack Image Registry and Delivery Service (Glance): Fix Released Status in Orchestration API (Heat): Fix Released Status in Ironic (Bare Metal Provisioning): Fix Released Status in OpenStack Neutron (virtual network service): In Progress Status in OpenStack Compute (Nova): In Progress Status in Oslo - a Library of Common OpenStack Code: Fix Released Status in OpenStack Data Processing (Savanna): Fix Released Status in Trove - Database as a Service: Fix Released Bug description: http://lists.openstack.org/pipermail/openstack-dev/2013-November/018980.html Hi all, We had a discussion of the modules that are incubated in Oslo. https://etherpad.openstack.org/p/icehouse-oslo-status One of the conclusions we came to was to deprecate/remove uuidutils in this cycle. The first step into this change should be to remove generate_uuid() from uuidutils. The reason is that 1) generating the UUID string seems trivial enough to not need a function and 2) string representation of uuid4 is not what we want in all projects. To address this, a patch is now on gerrit. https://review.openstack.org/#/c/56152/ Each project should directly use the standard uuid module or implement its own helper function to generate uuids if this patch gets in. Any thoughts on this change? Thanks. Unfortunately it looks like that change went through before I caught up on email. Shouldn't we have removed its use in the downstream projects (at least integrated projects) before removing it from Oslo? 
Doug To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1253497/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
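For reference, the helper being removed was essentially a one-line wrapper over the standard library, which is the thread's point 1):

```python
import uuid

# uuidutils.generate_uuid() amounted to this, so callers can simply
# inline str(uuid.uuid4()) or keep a local helper like it:
def generate_uuid():
    return str(uuid.uuid4())
```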
[Yahoo-eng-team] [Bug 1266590] Re: db connection string is cleartext in debug log
** Changed in: oslo Status: Fix Committed = Fix Released ** Changed in: oslo Milestone: None = icehouse-2 -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1266590 Title: db connection string is cleartext in debug log Status in OpenStack Image Registry and Delivery Service (Glance): Triaged Status in OpenStack Identity (Keystone): In Progress Status in Oslo - a Library of Common OpenStack Code: Fix Released Bug description: When I start up keystone-all with --debug it logs the config settings. The config setting for the database connection string is printed out: (keystone-all): 2014-01-06 16:32:56,983 DEBUG cfg log_opt_values database.connection= mysql://root:rootpwd@127.0.0.1/keystone?charset=utf8 The database connection string will typically contain the user password, so this value should be masked (like admin_token). This is a regression from Havana, which masked the db connection string. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1266590/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
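A sketch of the masking the report asks for; the regex-based helper here is illustrative, not the actual oslo code that restored the Havana behavior:

```python
import re

def mask_db_password(connection):
    """Mask the password in an RFC 1738-style DB URL before logging it."""
    return re.sub(r'://([^:/]+):[^@]+@', r'://\1:***@', connection)
```

URLs without credentials (for example sqlite paths) pass through unchanged.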
[Yahoo-eng-team] [Bug 1214850] Re: vmware driver does not work with more than one datacenter in vC
** Also affects: nova/havana Importance: Undecided Status: New ** Changed in: nova/havana Status: New = In Progress ** Changed in: nova/havana Importance: Undecided = High ** Changed in: nova/havana Assignee: (unassigned) = Gary Kotton (garyk) ** Tags removed: havana-backport-potential -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1214850 Title: vmware driver does not work with more than one datacenter in vC Status in OpenStack Compute (Nova): Fix Released Status in OpenStack Compute (nova) havana series: In Progress Bug description: CreateVM_Task, vm_folder_ref, config=config_spec, pool=res_pool_ref) specifies a vm_folder_ref that has no relationship to the datastore. This may lead to VM construction and placement errors. NOTE: code selects the 0th datacenter To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1214850/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1180044] Re: nova failures when vCenter has multiple datacenters
** Also affects: nova/havana Importance: Undecided Status: New ** Changed in: nova/havana Status: New = In Progress ** Changed in: nova/havana Importance: Undecided = High ** Changed in: nova/havana Assignee: (unassigned) = Gary Kotton (garyk) ** Tags removed: havana-backport-potential -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1180044 Title: nova failures when vCenter has multiple datacenters Status in OpenStack Compute (Nova): Fix Released Status in OpenStack Compute (nova) havana series: In Progress Status in The OpenStack VMwareAPI subTeam: Fix Committed Bug description: The method at vmops.py _get_datacenter_ref_and_name does not calculate the datacenter properly:

    def _get_datacenter_ref_and_name(self):
        """Get the datacenter name and the reference."""
        dc_obj = self._session._call_method(vim_util, "get_objects",
                                            "Datacenter", ["name"])
        vm_util._cancel_retrieve_if_necessary(self._session, dc_obj)
        return dc_obj.objects[0].obj, dc_obj.objects[0].propSet[0].val

This will not be correct on systems with more than one datacenter. 
Stack trace from logs: ERROR nova.compute.manager [req-9395fe41-cf04-4434-bd77-663e93de1d4a foo bar] [instance: 484a42a2-642e-4594-93fe-4f72ddad361f] Error: ['Traceback (most recent call last):\n', ' File /opt/stack/nova/nova/compute/manager.py, line 942, in _build_instance\nset_access_ip=set_access_ip)\n', ' File /opt/stack/nova/nova/compute/manager.py, line 1204, in _spawn\n LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n', ' File /usr/lib/python2.7/contextlib.py, line 24, in __exit__\n self.gen.next()\n', ' File /opt/stack/nova/nova/compute/manager.py, line 1200, in _spawn\nblock_device_info)\n', ' File /opt/stack/nova/nova/virt/vmwareapi/driver.py, line 176, in spawn\n block_device_info)\n', ' File /opt/stack/nova/nova/virt/vmwareapi/vmops.py, line 208, in spawn\n _execute_create_vm()\n', ' File /opt/stack/nova/nova/virt/vmwareapi/vmops.py, line 204, in _execute_create_vm\n self._session._wait_for_task(instance[\'uuid\'], vm_create_task)\n', ' File /opt/stack/nova/nova/virt/vmwareapi/driver.py, line 559, in _wait_for_task\nret_val = done.wait()\n', ' File /usr/local/lib/python2.7/dist-packages/eventlet/event.py, line 116, in wait\nreturn hubs.get_hub().switch()\n', ' File /usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 187, in switch\nreturn self.greenlet.switch()\n', 'NovaException: A specified parameter was not correct. \nspec.location.folder\n'] vCenter error is: A specified parameter was not correct. 
spec.location.folder Work around: use only one datacenter, use only one cluster, turn on DRS Additional failures: 2013-07-18 10:59:12.788 DEBUG nova.virt.vmwareapi.vmware_images [req-e8306ffe-c6c7-4d0f-a466-fb532375cbd3 7799f10ca7da47f3b2660feb363b370b 0e1771f8db984a3599596fae62609d9a] [instance: 5b3961b6-38d9-409c-881e-fe50f67b1539] Got image size of 687865856 for the image cde14862-60b8-4360-a145-06585b06577c get_vmdk_size_and_properties /usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmware_images.py:156 2013-07-18 10:59:12.963 WARNING nova.virt.vmwareapi.network_util [req-e8306ffe-c6c7-4d0f-a466-fb532375cbd3 7799f10ca7da47f3b2660feb363b370b 0e1771f8db984a3599596fae62609d9a] [(ManagedObjectReference){ value = network-1501 _type = Network }, (ManagedObjectReference){ value = network-1458 _type = Network }, (ManagedObjectReference){ value = network-2085 _type = Network }, (ManagedObjectReference){ value = network-1143 _type = Network }] 2013-07-18 10:59:13.326 DEBUG nova.virt.vmwareapi.vmops [req-e8306ffe-c6c7-4d0f-a466-fb532375cbd3 7799f10ca7da47f3b2660feb363b370b 0e1771f8db984a3599596fae62609d9a] [instance: 5b3961b6-38d9-409c-881e-fe50f67b1539] Creating VM on the ESX host _execute_create_vm /usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py:207 2013-07-18 10:59:14.258 3145 DEBUG nova.openstack.common.rpc.amqp [-] Making synchronous call on conductor ... multicall /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:583 2013-07-18 10:59:14.259 3145 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is 8ef36d061a9341a09d3a5451df798673 multicall /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:586 2013-07-18 10:59:14.259 3145 DEBUG nova.openstack.common.rpc.amqp [-] UNIQUE_ID is 680b790574c64a9783fd2138c43f5f6d. 
_add_unique_id /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:337 2013-07-18 10:59:18.757 3145 WARNING nova.virt.vmwareapi.driver [-] Task [CreateVM_Task] (returnval){ value = task-33558 _type = Task } status: error The input arguments had entities that did
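The root cause in the snippet above is that the helper always returns datacenters[0]. A fix would select the datacenter that actually owns the target datastore; a minimal sketch with hypothetical dict structures (not the real vSphere managed-object API):

```python
def pick_datacenter(datacenters, datastore_ref):
    """Choose the datacenter containing the target datastore instead of
    blindly taking the first one returned by the inventory query."""
    for dc in datacenters:
        if datastore_ref in dc['datastores']:
            return dc
    raise LookupError('datastore %s is not in any datacenter' % datastore_ref)
```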
[Yahoo-eng-team] [Bug 1272072] [NEW] nova losing images without advise
Public bug reported: Hi I've just found a strange behavior. When booting from a volume I have found that nova sometimes will lose the attached volume. My investigations led me to the point that the instance is too small for the image. But the first time I boot the image from a volume (new) I can do it. After the first reboot it loses the ability to boot. 2014-01-23 21:59:35.547 2398 TRACE nova.openstack.common.rpc.amqp raise exception.InstanceTypeDiskTooSmall() 2014-01-23 21:59:35.547 2398 TRACE nova.openstack.common.rpc.amqp InstanceTypeDiskTooSmall: Instance type's disk is too small for requested image. 2014-01-23 21:59:35.547 2398 TRACE nova.openstack.common.rpc.amqp I also found that the disk on the VM should be something like this: /dev/disk/by-path/ip-172.16.0.119:3260-iscsi-iqn.2010-10.org.openstack:volume-137bc77b-c9e6-47ba-b2f5-c83f440a988b-lun-1 And when the image is lost I found something like OSError: [Errno 2] No such file or directory: '/var/lib/nova/instances/29c7b639-cb6e-4ea3-b913-2a84d518d1ed/disk' That seems curious to me since the only thing that I see in horizon is that the image cannot be launched. I can reproduce this error in havana (ubuntu saucy) 100% of the time. Best regards, ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1272072 Title: nova losing images without advise Status in OpenStack Compute (Nova): New Bug description: Hi I've just found a strange behavior. When booting from a volume I have found that nova sometimes will lose the attached volume. My investigations led me to the point that the instance is too small for the image. But the first time I boot the image from a volume (new) I can do it. After the first reboot it loses the ability to boot. 
2014-01-23 21:59:35.547 2398 TRACE nova.openstack.common.rpc.amqp raise exception.InstanceTypeDiskTooSmall() 2014-01-23 21:59:35.547 2398 TRACE nova.openstack.common.rpc.amqp InstanceTypeDiskTooSmall: Instance type's disk is too small for requested image. 2014-01-23 21:59:35.547 2398 TRACE nova.openstack.common.rpc.amqp I also found that the disk on the VM should be something like this: /dev/disk/by-path/ip-172.16.0.119:3260-iscsi-iqn.2010-10.org.openstack:volume-137bc77b-c9e6-47ba-b2f5-c83f440a988b-lun-1 And when the image is lost I found something like OSError: [Errno 2] No such file or directory: '/var/lib/nova/instances/29c7b639-cb6e-4ea3-b913-2a84d518d1ed/disk' That seems curious to me since the only thing that I see in horizon is that the image cannot be launched. I can reproduce this error in havana (ubuntu saucy) 100% of the time. Best regards, To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1272072/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1272076] [NEW] VolumeNotCreated - Instance failed, cinder too slow
Public bug reported: Hi, I've found that under certain circumstances cinder does not create volumes fast enough. I can launch an image from a new volume from image with 4GB. I use LVM to allocate space. After a while I found that the instance didn't work. Looking at the logs I can find: 2014-01-23 21:44:15.337 2398 TRACE nova.compute.manager [instance: a0e35767-424e-434d-99b4-35e19422054f] attempts=attempts) 2014-01-23 21:44:15.337 2398 TRACE nova.compute.manager [instance: a0e35767-424e-434d-99b4-35e19422054f] VolumeNotCreated: Volume 137bc77b-c9e6-47ba-b2f5-c83f440a988b did not finish being created even after we waited 66 seconds or 60 attempts. I was looking around and cinder was downloading. I think it was taking the image from the image server and building the volume. I don't know why it took so long since the installation is gigabit ethernet and, even more, the image is in an instance launched on the cinder hardware machine. So it does not even need any networking; all resolves internally. The image is saucy (about 300MB). The problem is that after a while volume creation finished and the instance failed. So I recreated the instance and made it work from the volume with no problems. How should I track where the processing slows down? I know that iscsi attachment is slow. One possible point of failure is when you have an iscsi target that is in a machine that's not reachable. This slows down the rest of the processing but I'm not sure if this is the point here. Anyway, I'm sure the hardware is not the best but pretty decent: a Raid1 array with WD black label disks, a good sata controller and Intel gigabit network cards. So disk should not be the problem. I'm thinking about a networking/config related problem. But I'm lost on this. Any help is appreciated. ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). 
https://bugs.launchpad.net/bugs/1272076 Title: VolumeNotCreated - Instance failed, cinder too slow Status in OpenStack Compute (Nova): New Bug description: Hi, I've found that under certain circumstances cinder does not create volumes fast enough. I can launch an image from a new volume from image with 4GB. I use LVM to allocate space. After a while I found that the instance didn't work. Looking at the logs I can find: 2014-01-23 21:44:15.337 2398 TRACE nova.compute.manager [instance: a0e35767-424e-434d-99b4-35e19422054f] attempts=attempts) 2014-01-23 21:44:15.337 2398 TRACE nova.compute.manager [instance: a0e35767-424e-434d-99b4-35e19422054f] VolumeNotCreated: Volume 137bc77b-c9e6-47ba-b2f5-c83f440a988b did not finish being created even after we waited 66 seconds or 60 attempts. I was looking around and cinder was downloading. I think it was taking the image from the image server and building the volume. I don't know why it took so long since the installation is gigabit ethernet and, even more, the image is in an instance launched on the cinder hardware machine. So it does not even need any networking; all resolves internally. The image is saucy (about 300MB). The problem is that after a while volume creation finished and the instance failed. So I recreated the instance and made it work from the volume with no problems. How should I track where the processing slows down? I know that iscsi attachment is slow. One possible point of failure is when you have an iscsi target that is in a machine that's not reachable. This slows down the rest of the processing but I'm not sure if this is the point here. Anyway, I'm sure the hardware is not the best but pretty decent: a Raid1 array with WD black label disks, a good sata controller and Intel gigabit network cards. So disk should not be the problem. I'm thinking about a networking/config related problem. But I'm lost on this. Any help is appreciated. 
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1272076/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1272086] [NEW] endpoint schema needs deleted, deleted_at columns
Public bug reported: Simple enough. If you delete your endpoints, you should be able to recover them via the database. ** Affects: keystone Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1272086 Title: endpoint schema needs deleted, deleted_at columns Status in OpenStack Identity (Keystone): New Bug description: Simple enough. If you delete your endpoints, you should be able to recover them via the database. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1272086/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
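The requested change boils down to soft deletion: keep the row and flag it instead of removing it. The real change would be a schema migration adding deleted/deleted_at columns; this stand-alone sketch (hypothetical model, not keystone's actual code) just shows the intent:

```python
import datetime

class Endpoint:
    """Model object with the soft-delete columns the report asks for."""
    def __init__(self, url):
        self.url = url
        self.deleted = False
        self.deleted_at = None

    def soft_delete(self):
        # Mark the row deleted instead of removing it...
        self.deleted = True
        self.deleted_at = datetime.datetime.now(datetime.timezone.utc)

    def restore(self):
        # ...so an accidental delete can be undone from the database.
        self.deleted = False
        self.deleted_at = None
```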
[Yahoo-eng-team] [Bug 1272103] [NEW] v3/endpoints should use relative URLs
Public bug reported: Example: http://paste.openstack.org/show/61794/ In this case the $.endpoints[0].links.self entry should be a relative URL. Either that or the URL should always use the same prefix as the endpoint from which the request came in on (so basically the same thing). Otherwise these URLs are probably incorrect since most production deployments run keystone behind load balancers and with multiple disconnected IP networks. ** Affects: keystone Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1272103 Title: v3/endpoints should use relative URLs Status in OpenStack Identity (Keystone): New Bug description: Example: http://paste.openstack.org/show/61794/ In this case the $.endpoints[0].links.self entry should be a relative URL. Either that or the URL should always use the same prefix as the endpoint from which the request came in on (so basically the same thing). Otherwise these URLs are probably incorrect since most production deployments run keystone behind load balancers and with multiple disconnected IP networks. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1272103/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
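Making a self link relative is a matter of dropping the scheme and host; a sketch of the behaviour the report asks for (not keystone's actual link-building code):

```python
from urllib.parse import urlsplit

def to_relative(url):
    """Strip scheme and host from a self link so it stays valid no matter
    which load-balancer address the request arrived on."""
    parts = urlsplit(url)
    return parts.path + ('?' + parts.query if parts.query else '')
```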
[Yahoo-eng-team] [Bug 1270355] Re: ERROR nova.virt.libvirt.driver virNetMessageFree:XX msg=0xnumber found in error logs of check job
00:25 jog0 sdague: not sure how to treat this bug https://bugs.launchpad.net/nova/+bug/1270355
00:25 jog0 we turned off fail on stacktrace right?
00:25 jog0 this looks like it's related to debug for libvirt
00:29 sdague jog0: looking
00:30 sdague jog0: so we've turned enforcement back off
00:30 jog0 sdague: and the libvirt debug logs?
00:30 sdague that's in gate
00:30 jog0 looks like an enforcement glitch
00:30 sdague yeh
00:30 jog0 so move to tempest? I don't think this is a nova bug
00:31 sdague yeh
00:31 sdague assign to dkranz
00:31 sdague he'll need to throw it on the whitelist

** Also affects: tempest Importance: Undecided Status: New ** Changed in: tempest Assignee: (unassigned) => David Kranz (david-kranz) ** Changed in: nova Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1270355 Title: ERROR nova.virt.libvirt.driver virNetMessageFree:XX msg=0xnumber found in error logs of check job Status in OpenStack Compute (Nova): Invalid Status in Tempest: New Bug description:

2014-01-18 02:05:31.528 | Checking logs...
2014-01-18 02:05:32.316 | Log File: n-cpu
2014-01-18 02:05:32.317 | 2014-01-18 01:45:00.948 26765 ERROR nova.virt.libvirt.driver [-] [instance: 5fcba897-e4df-4e75-8a63-d08f136a5e0a]2014-01-18 01:45:00.948+: 29523: debug : virNetMessageFree:75 : msg=0x7f398c001690 nfds=0 cb=(nil)
2014-01-18 02:05:32.317 |
2014-01-18 02:05:35.968 | Logs have errors
2014-01-18 02:05:35.968 | FAILED

See: http://logs.openstack.org/47/65347/6/check/check-tempest-dsvm-full/8f71a0b/console.html To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1270355/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1272128] [NEW] Support Port Binding Extension in Cisco N1kv plugin
Public bug reported: Plugins using the libvirt_ovs_bridge config are affected due to changes in nova's VIF plugging code. Fix port CRUD in the Cisco N1kv Neutron plugin by extending the Port Bindings Extension. ** Affects: neutron Importance: Undecided Assignee: Abhishek Raut (abhraut) Status: New ** Tags: cisco ** Changed in: neutron Assignee: (unassigned) => Abhishek Raut (abhraut) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1272128 Title: Support Port Binding Extension in Cisco N1kv plugin Status in OpenStack Neutron (virtual network service): New Bug description: Plugins using the libvirt_ovs_bridge config are affected due to changes in nova's VIF plugging code. Fix port CRUD in the Cisco N1kv Neutron plugin by extending the Port Bindings Extension. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1272128/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1215711] Re: update of server without deserializer for v3
We are going to drop XML support, so this bug is invalid now. ** Changed in: nova Status: In Progress => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1215711 Title: update of server without deserializer for v3 Status in OpenStack Compute (Nova): Won't Fix Bug description: This makes the format of the update request different from create, resize, and rebuild. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1215711/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1272136] [NEW] Case test_get_index_sort_updated_at_desc failed sometimes
Public bug reported: I have seen this failure several times. It doesn't happen every time, so it looks like a race condition. So far I have only seen it on py27, so I'm not sure whether it also exists on py26.

2014-01-23 15:12:33.910 | FAIL: glance.tests.unit.v2.test_registry_client.TestRegistryV2Client.test_get_index_sort_updated_at_desc
2014-01-23 15:12:33.910 | --
2014-01-23 15:12:33.911 | _StringException: Traceback (most recent call last):
2014-01-23 15:12:33.911 |   File "/home/jenkins/workspace/gate-glance-python27/glance/tests/unit/v2/test_registry_client.py", line 268, in test_get_index_sort_updated_at_desc
2014-01-23 15:12:33.911 |     unjsonify=False)
2014-01-23 15:12:33.911 |   File "/home/jenkins/workspace/gate-glance-python27/glance/tests/utils.py", line 472, in assertEqualImages
2014-01-23 15:12:33.911 |     self.assertEqual(images[i]['id'], value)
2014-01-23 15:12:33.911 |   File "/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 324, in assertEqual
2014-01-23 15:12:33.911 |     self.assertThat(observed, matcher, message)
2014-01-23 15:12:33.911 |   File "/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 414, in assertThat
2014-01-23 15:12:33.911 |     raise MismatchError(matchee, matcher, mismatch, verbose)
2014-01-23 15:12:33.911 | MismatchError: !=:
2014-01-23 15:12:33.911 | reference = u'db4ddeb5-edd6-4557-b635-6ecb4e5265a0'
2014-01-23 15:12:33.912 | actual    = '406c995a-70e0-4010-a6eb-9dff61a2d2a7'

http://logs.openstack.org/19/67019/2/check/gate-glance-python27/783d216/console.html ** Affects: glance Importance: Low Status: Triaged ** Changed in: glance Status: New => Triaged ** Changed in: glance Importance: Undecided => Low -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1272136 Title: Case test_get_index_sort_updated_at_desc failed sometimes Status in OpenStack Image Registry and Delivery Service (Glance): Triaged Bug description: I have seen this failure several times. It doesn't happen every time, so it looks like a race condition. So far I have only seen it on py27, so I'm not sure whether it also exists on py26.

2014-01-23 15:12:33.910 | FAIL: glance.tests.unit.v2.test_registry_client.TestRegistryV2Client.test_get_index_sort_updated_at_desc
2014-01-23 15:12:33.910 | --
2014-01-23 15:12:33.911 | _StringException: Traceback (most recent call last):
2014-01-23 15:12:33.911 |   File "/home/jenkins/workspace/gate-glance-python27/glance/tests/unit/v2/test_registry_client.py", line 268, in test_get_index_sort_updated_at_desc
2014-01-23 15:12:33.911 |     unjsonify=False)
2014-01-23 15:12:33.911 |   File "/home/jenkins/workspace/gate-glance-python27/glance/tests/utils.py", line 472, in assertEqualImages
2014-01-23 15:12:33.911 |     self.assertEqual(images[i]['id'], value)
2014-01-23 15:12:33.911 |   File "/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 324, in assertEqual
2014-01-23 15:12:33.911 |     self.assertThat(observed, matcher, message)
2014-01-23 15:12:33.911 |   File "/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 414, in assertThat
2014-01-23 15:12:33.911 |     raise MismatchError(matchee, matcher, mismatch, verbose)
2014-01-23 15:12:33.911 | MismatchError: !=:
2014-01-23 15:12:33.911 | reference = u'db4ddeb5-edd6-4557-b635-6ecb4e5265a0'
2014-01-23 15:12:33.912 | actual    = '406c995a-70e0-4010-a6eb-9dff61a2d2a7'

http://logs.openstack.org/19/67019/2/check/gate-glance-python27/783d216/console.html To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1272136/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : 
yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
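The failure pattern is consistent with a sort-key tie: two test images end up with identical updated_at timestamps, so sorting by updated_at alone leaves their relative order up to the database. A small illustration of why the asserted order can flip, reusing the two IDs from the traceback; the secondary-key fix shown at the end is one possible remedy, not necessarily what Glance chose:

```python
from datetime import datetime

# Both images touched within the same timestamp tick: the sort key ties.
ts = datetime(2014, 1, 23, 15, 12, 33)
images = [
    {"id": "db4ddeb5-edd6-4557-b635-6ecb4e5265a0", "updated_at": ts},
    {"id": "406c995a-70e0-4010-a6eb-9dff61a2d2a7", "updated_at": ts},
]

# Sorting by updated_at alone cannot order tied rows; the database behind
# the registry makes no guarantee, so either image may come back first.
desc = sorted(images, key=lambda i: i["updated_at"], reverse=True)

# A deterministic fix is a secondary tie-breaker key, e.g. the id:
stable = sorted(images, key=lambda i: (i["updated_at"], i["id"]), reverse=True)
print([i["id"] for i in stable])
```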
[Yahoo-eng-team] [Bug 1272141] [NEW] term consistency
Public bug reported:

1. Admin Users: Under action, it says 'Edit', but when you click on 'Edit', the modal window says 'Update User.' => change to 'Edit User'
2. Project Overview: Under 'Usage Summary' it says 'Active RAM: 0Bytes', but under Admin Overview, it says 'Active RAM: 0 bytes' => change to Bytes too.
3. Project Volume: Under the 'More' action, it says 'Edit Attachments', but if you click on it, the modal window says 'Manage Volume Attachment' => change the modal to 'Edit Volume Attachments'
4. Project Instances: If you click on 'Launch Instance', click the 'Access and Security' tab on the modal. It says Admin Pass and Confirm Admin Pass, it should say Password
5. Project Instances: If you look at the Instance Details, and go to the bottom, it says: 'Volumes Attached Volume No volumes attached.' Now if you go to Project Volumes and look at Volume Details, it says: 'Attachments Attached To Not attached' => please change for volumes to match that for Instances.

** Affects: horizon Importance: Undecided Assignee: Cindy Lu (clu-m) Status: New

** Description changed:

1. Admin Users:
- Under action, it says 'Edit', but when you click on 'Edit', the modal window says 'Update User.'
+ Under action, it says 'Edit', but when you click on 'Edit', the modal window says 'Update User.' => change to 'Edit User'
2. Project Overview: Under 'Usage Summary' it says 'Active RAM: 0Bytes', but under Admin Overview, it says 'Active RAM: 0 bytes' => change to Bytes too.
3. Project Volume:
- Under the 'More' action, it says 'Edit Attachment', but if you click on it, the modal window says 'Manage Volume Attachment'
+ Under the 'More' action, it says 'Edit Attachments', but if you click on it, the modal window says 'Manage Volume Attachment' => change the modal to 'Edit Volume Attachments'
4. Project Instances: If you click on 'Launch Instance', click the 'Access and Security' tab on the modal. It says Admin Pass and Confirm Admin Pass, it should say Password
5. 
Project Instances: If you look at the Instance Details, and go to the bottom, it says: 'Volumes Attached Volume No volumes attached.' Now if you go to Project Volumes and look at Volume Details, it says: 'Attachments Attached To Not attached' => please change for volumes to match that for Instances.

** Changed in: horizon Assignee: (unassigned) => Cindy Lu (clu-m)

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1272141 Title: term consistency Status in OpenStack Dashboard (Horizon): New Bug description:

1. Admin Users: Under action, it says 'Edit', but when you click on 'Edit', the modal window says 'Update User.' => change to 'Edit User'
2. Project Overview: Under 'Usage Summary' it says 'Active RAM: 0Bytes', but under Admin Overview, it says 'Active RAM: 0 bytes' => change to Bytes too.
3. Project Volume: Under the 'More' action, it says 'Edit Attachments', but if you click on it, the modal window says 'Manage Volume Attachment' => change the modal to 'Edit Volume Attachments'
4. Project Instances: If you click on 'Launch Instance', click the 'Access and Security' tab on the modal. It says Admin Pass and Confirm Admin Pass, it should say Password
5. Project Instances: If you look at the Instance Details, and go to the bottom, it says: 'Volumes Attached Volume No volumes attached.' Now if you go to Project Volumes and look at Volume Details, it says: 'Attachments Attached To Not attached' => please change for volumes to match that for Instances.

To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1272141/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1264076] Re: Should use a new xml namespace in v3 api
We will drop the XML support in V3. ** Changed in: nova Status: In Progress => Invalid ** Changed in: nova Assignee: Shuangtai Tian (shuangtai-tian) => (unassigned) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1264076 Title: Should use a new xml namespace in v3 api Status in OpenStack Compute (Nova): Invalid Bug description: The V3 APIs now also use XMLNS_V11: XMLNS_V11 = 'http://docs.openstack.org/compute/api/v1.1' For example: https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/plugins/v3/flavors.py#L45 I think we should add a new namespace for V3. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1264076/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1272169] [NEW] Volume Name field got overwritten in Create Volume
Public bug reported: The Volume Name field will get automatically overwritten in Create Volume when choosing Image as the source. As shown in the following screenshot: I specified the Volume Name as Test, once I selected the Image, the Volume Name field was automatically replaced with the image name. If this is done by design, i.e. we intend to use the image name as the volume name in this case, it might be better to have some mechanism to let the user know about the name replacement. ** Affects: horizon Importance: Undecided Status: New ** Attachment added: Create Volume Name Field.png https://bugs.launchpad.net/bugs/1272169/+attachment/3955529/+files/Create%20Volume%20Name%20Field.png -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1272169 Title: Volume Name field got overwritten in Create Volume Status in OpenStack Dashboard (Horizon): New Bug description: The Volume Name field will get automatically overwritten in Create Volume when choosing Image as the source. As shown in the following screenshot: I specified the Volume Name as Test, once I selected the Image, the Volume Name field was automatically replaced with the image name. If this is done by design, i.e. we intend to use the image name as the volume name in this case, it might be better to have some mechanism to let the user know about the name replacement. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1272169/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1272172] [NEW] nicira: any string is accepted at interface-name option of net-gateway-create
Public bug reported: [Issue] I attempted net-gateway-create with an interface-name that is a meaningless string. I expected an error with code 400 to be returned, but the command succeeds.

[Reproduce]

openstack@devstack:~/devstack$ neutron net-gateway-create --device \
  id=c4369a1c-3fb2-4f45-8ac7-17d15b20508e,interface_name=foobar \
  NetworkgatewayName
+-----------+----------------------------------------------------------------------------+
| Field     | Value                                                                      |
+-----------+----------------------------------------------------------------------------+
| default   | False                                                                      |
| devices   | {"interface_name": "foobar", "id": "c4369a1c-3fb2-4f45-8ac7-17d15b20508e"} |
| id        | 213640a8-7f65-4e72-bdc3-91ce00bd527d                                       |
| name      | NetworkgatewayName                                                         |
| ports     |                                                                            |
| tenant_id | ec2918c3e7514158987c8f04c64d7521                                           |
+-----------+----------------------------------------------------------------------------+

** Affects: neutron Importance: Undecided Status: New ** Tags: nicira -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1272172 Title: nicira: any string is accepted at interface-name option of net-gateway-create Status in OpenStack Neutron (virtual network service): New Bug description: [Issue] I attempted net-gateway-create with an interface-name that is a meaningless string. I expected an error with code 400 to be returned, but the command succeeds.

[Reproduce]

openstack@devstack:~/devstack$ neutron net-gateway-create --device \
  id=c4369a1c-3fb2-4f45-8ac7-17d15b20508e,interface_name=foobar \
  NetworkgatewayName
+-----------+----------------------------------------------------------------------------+
| Field     | Value                                                                      |
+-----------+----------------------------------------------------------------------------+
| default   | False                                                                      |
| devices   | {"interface_name": "foobar", "id": "c4369a1c-3fb2-4f45-8ac7-17d15b20508e"} |
| id        | 213640a8-7f65-4e72-bdc3-91ce00bd527d                                       |
| name      | NetworkgatewayName                                                         |
| ports     |                                                                            |
| tenant_id | ec2918c3e7514158987c8f04c64d7521                                           |
+-----------+----------------------------------------------------------------------------+

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1272172/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
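One way the plugin could produce the expected 400: validate interface_name against the interfaces the backend actually reports for that gateway device, instead of accepting an arbitrary string. This is a hypothetical sketch, not real plugin code; known_interfaces_for() stands in for whatever backend query would be appropriate:

```python
def validate_device(device, known_interfaces_for):
    """Raise ValueError (which the API layer would map to an HTTP 400)
    when interface_name does not exist on the gateway device."""
    name = device.get("interface_name")
    if name not in known_interfaces_for(device["id"]):
        raise ValueError(
            "unknown interface_name %r for device %s" % (name, device["id"]))
    return device

# With a fake backend that only knows 'breth0':
lookup = lambda device_id: {"breth0"}
ok = validate_device(
    {"id": "c4369a1c-3fb2-4f45-8ac7-17d15b20508e",
     "interface_name": "breth0"}, lookup)           # accepted
try:
    validate_device(
        {"id": "c4369a1c-3fb2-4f45-8ac7-17d15b20508e",
         "interface_name": "foobar"}, lookup)       # rejected -> would be 400
except ValueError as exc:
    print(exc)
```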
[Yahoo-eng-team] [Bug 1220131] Re: glance_api_servers list in nova.conf requires endpoints to be using the standard URI format seems lacking
** Changed in: nova Status: Invalid => New ** Changed in: nova Assignee: Yang Yu (yuyangbj) => Ya Hong Du (yahongdu) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1220131 Title: glance_api_servers list in nova.conf requires endpoints to be using the standard URI format seems lacking Status in OpenStack Compute (Nova): New Bug description: The fact that nova.conf's glance_api_servers list requires those endpoints to use the standard URI format seems lacking. For example, there is nothing stopping users from running glance under some arbitrary URI (ex: http://host:port/image-service/zone1/base); however, the current nova code does not allow for this flexibility but rather makes assumptions about the URI structure of the glance endpoint(s). Ideally the glance URL should be able to be specified with any host/port/URI scheme and nova should work fine -- not limit the glance URI to some predefined format.

# URL for connecting to neutron (string value)
neutron_url=https://9.123.106.99:9973/0bcdb4dcd6d14ed7a3dc39b1d141d1dc/public - virtual url for neutron
# auth url for connecting to neutron in admin context (string value)
neutron_admin_auth_url=https://9.123.106.99:9973/aeb337113f264e13984fda81dc165d21/admin/v2.0 - virtual url for keystone
glance_api_servers = http://9.123.106.99:9973/0bcdb4dcd6d14ed7a3dc39b1d141d1dc/public/

The error is in the nova glance.py code here: https://github.com/openstack/nova/blob/master/nova/image/glance.py#L135

endpoint = '%s://%s:%s' % (scheme, host, port)
return glanceclient.Client(str(version), endpoint, **params)

From a consumer perspective, could we configure all component URLs in the same way? For the neutron/keystone URLs we can point at a custom URL, but for the glance URL nova has limited it to one format. 
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1220131/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
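The quoted line endpoint = '%s://%s:%s' % (scheme, host, port) is exactly where the path prefix gets dropped. A hedged sketch of the fix the reporter is asking for: parse the configured URL and keep its path component. make_endpoint is an illustrative name, not nova's actual function:

```python
from urllib.parse import urlparse

def make_endpoint(api_server):
    # Preserve any path prefix instead of collapsing the configured URL
    # down to scheme://host:port.
    parts = urlparse(api_server)
    return "%s://%s%s" % (parts.scheme, parts.netloc, parts.path.rstrip("/"))

print(make_endpoint(
    "http://9.123.106.99:9973/0bcdb4dcd6d14ed7a3dc39b1d141d1dc/public/"))
# http://9.123.106.99:9973/0bcdb4dcd6d14ed7a3dc39b1d141d1dc/public
```

A URL with no path prefix, e.g. "http://host:9292", passes through unchanged, so the existing standard-format configurations would keep working.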
[Yahoo-eng-team] [Bug 1272195] [NEW] Previous ip show twice after 'interface-attach'
Public bug reported: Hi team, I encountered this problem in the following situation: OS: Ubuntu, Version: Icehouse

1. Normally boot a VM

xianghui@xianghui:/opt/stack/nova$ nova list
+--------------------------------------+----------+--------+------------+-------------+------------------+
| ID                                   | Name     | Status | Task State | Power State | Networks         |
+--------------------------------------+----------+--------+------------+-------------+------------------+
| 35cc6c7c-31b1-491a-8e0d-766a098fb8d9 | cirros_3 | ACTIVE | None       | Running     | private=10.0.0.5 |
+--------------------------------------+----------+--------+------------+-------------+------------------+
xianghui@xianghui:/opt/stack/nova$ neutron net-list
+--------------------------------------+---------+--------------------------------------------------+
| id                                   | name    | subnets                                          |
+--------------------------------------+---------+--------------------------------------------------+
| 2d6842e2-b82c-4d5c-8601-7928ab85a8fd | private | 44d8d50a-197d-4f52-90c4-487495fdb8b5 10.0.0.0/24 |
+--------------------------------------+---------+--------------------------------------------------+

2. Attach the instance with an interface

xianghui@xianghui:/opt/stack/nova$ nova interface-attach cirros_3 --net-id=2d6842e2-b82c-4d5c-8601-7928ab85a8fd
xianghui@xianghui:/opt/stack/nova$ nova list
+--------------------------------------+----------+---------+------------+-------------+--------------------------------------+
| ID                                   | Name     | Status  | Task State | Power State | Networks                             |
+--------------------------------------+----------+---------+------------+-------------+--------------------------------------+
| 35cc6c7c-31b1-491a-8e0d-766a098fb8d9 | cirros_3 | ACTIVE  | None       | Running     | private=10.0.0.5, 10.0.0.5, 10.0.0.6 |
| fa79e7a9-a838-484b-b8a2-4447d4f5d6a0 | fedora-1 | SHUTOFF | None       | Shutdown    | private=10.0.0.3, 172.24.0.2         |
| 97a3758f-9777-44cc-9035-ac95e57f8304 | fedora-2 | SHUTOFF | None       | Shutdown    | private=10.0.0.4                     |
+--------------------------------------+----------+---------+------------+-------------+--------------------------------------+

3. The output above shows the previous IP twice, until the next update_info_cache() happens.

** Affects: nova Importance: Undecided Assignee: Xiang Hui (xianghui) Status: New ** Changed in: nova Assignee: (unassigned) => Xiang Hui (xianghui) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1272195 Title: Previous ip show twice after 'interface-attach' Status in OpenStack Compute (Nova): New Bug description: Hi team, I encountered this problem in the following situation: OS: Ubuntu, Version: Icehouse 1. 
Normally boot a VM

xianghui@xianghui:/opt/stack/nova$ nova list
+--------------------------------------+----------+--------+------------+-------------+------------------+
| ID                                   | Name     | Status | Task State | Power State | Networks         |
+--------------------------------------+----------+--------+------------+-------------+------------------+
| 35cc6c7c-31b1-491a-8e0d-766a098fb8d9 | cirros_3 | ACTIVE | None       | Running     | private=10.0.0.5 |
+--------------------------------------+----------+--------+------------+-------------+------------------+
xianghui@xianghui:/opt/stack/nova$ neutron net-list
+--------------------------------------+---------+--------------------------------------------------+
| id                                   | name    | subnets                                          |
+--------------------------------------+---------+--------------------------------------------------+
| 2d6842e2-b82c-4d5c-8601-7928ab85a8fd | private | 44d8d50a-197d-4f52-90c4-487495fdb8b5 10.0.0.0/24 |
+--------------------------------------+---------+--------------------------------------------------+

2. Attach the instance with an interface

xianghui@xianghui:/opt/stack/nova$ nova interface-attach cirros_3 --net-id=2d6842e2-b82c-4d5c-8601-7928ab85a8fd
xianghui@xianghui:/opt/stack/nova$ nova list
+--------------------------------------+----------+---------+------------+-------------+--------------------------------------+
| ID                                   | Name     | Status  | Task State | Power State | Networks                             |
+--------------------------------------+----------+---------+------------+-------------+--------------------------------------+
|
[Yahoo-eng-team] [Bug 1212555] Re: remove schedule_network in nicira plugin
I reproduced this in 2014.1.b1. The code mentioned in comment #3 still seems to exist in the master branch of Neutron. In master, schedule_network will be called via handle_network_dhcp_access in create_network. Has it really been fixed? ** Changed in: neutron Status: Invalid => Incomplete -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1212555 Title: remove schedule_network in nicira plugin Status in OpenStack Neutron (virtual network service): Incomplete Bug description: In neutron/api/rpc/agentnotifiers/dhcp_rpc_agent_api.py, DhcpAgentNotifyAPI._notification() will call schedule_network() when the method is port_create_end, and the comment says: # we don't schedule when we create network # because we want to give admin a chance to # schedule network manually by API which I think is reasonable. However, in neutron/plugins/nicira/NeutronPlugin.py, NvpPluginV2.create_network() and NvpPluginV2.create_port() both call schedule_network(). I think a freshly created network should not call schedule_network(), and create_port() should not either, because DhcpAgentNotifyAPI will call it again. Running neutron.tests.unit.nicira.test_agent_scheduler.NVPDhcpAgentNotifierTestCase.test_network_port_create_notification, we can see the log warning WARNING [neutron.db.agentschedulers_db] Fail scheduling network because the default value of 'dhcp_agents_per_network' is 1, and all the duplicated calls to schedule_network() will cause log.warn() from neutron/db/agentschedulers_db.py. So I think it would be better to remove the schedule_network() calls from the nicira plugin and leave them to dhcp_rpc_agent_api, or to remove schedule_network() from dhcp_rpc_agent and leave it to each plugin. 
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1212555/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1272200] [NEW] image-create failed, but not return correct messages
Public bug reported: When glance image-create is used with --location set to a local image file path, the command returns data. The user will be confused about what that data means; the returned message should make clear that --location is not a file path but a URL. ** Affects: glance Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1272200 Title: image-create failed, but not return correct messages Status in OpenStack Image Registry and Delivery Service (Glance): New Bug description: When glance image-create is used with --location set to a local image file path, the command returns data. The user will be confused about what that data means; the returned message should make clear that --location is not a file path but a URL. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1272200/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
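A sketch of the suggested behaviour: check that --location parses as a URL with a scheme before proceeding, and say so explicitly when it does not. The set of accepted schemes here is an assumption for illustration, not glance's actual list:

```python
from urllib.parse import urlparse

ACCEPTED_SCHEMES = ("http", "https", "swift")  # illustrative, not exhaustive

def check_location(location):
    # A bare path like "/tmp/cirros.img" parses with an empty scheme,
    # which is how we can tell the user passed a file path instead of a URL.
    if urlparse(location).scheme not in ACCEPTED_SCHEMES:
        raise ValueError(
            "--location must be a URL (e.g. http://host/image.img), "
            "not a local file path: %r" % location)
    return location

print(check_location("http://example.com/cirros.img"))  # passes through
```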