[Yahoo-eng-team] [Bug 1311470] [NEW] Disabling an ML2 type driver can leave orphaned DB records
Public bug reported:

If an ML2 type driver is disabled after segments have been allocated
using that type driver, subsequent network deletions will not remove the
DB records allocated by that type driver, since the driver is no longer
there to release the segment [1]. These orphaned segments will then be
unavailable for use if the type driver is re-enabled later.

1. https://github.com/openstack/neutron/blob/af89d74d2961db6a04572375150ad908c9e72e78/neutron/plugins/ml2/managers.py#L103

** Affects: neutron
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1311470

Title: Disabling an ML2 type driver can leave orphaned DB records
Status in OpenStack Neutron (virtual network service): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1311470/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
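[Editor's sketch] The failure mode above can be illustrated in isolation. This is a toy, not Neutron's actual TypeManager; all names here (SketchTypeManager, allocate_segment, the allocations set standing in for DB rows) are hypothetical:

```python
# Minimal sketch of the orphaning described in the report; NOT the real
# Neutron ML2 code. The allocations set stands in for segment DB rows.

class SketchTypeManager:
    def __init__(self, drivers):
        self.drivers = dict(drivers)    # network_type -> driver object
        self.allocations = set()        # stands in for segment DB rows

    def allocate_segment(self, network_type, segment_id):
        self.allocations.add((network_type, segment_id))

    def release_segment(self, network_type, segment_id):
        driver = self.drivers.get(network_type)
        if driver is None:
            # Type driver disabled: nothing deletes the row -> orphan,
            # mirroring the managers.py check linked in the report,
            # which logs an error and returns.
            return False
        self.allocations.discard((network_type, segment_id))
        return True

mgr = SketchTypeManager({"vxlan": object()})
mgr.allocate_segment("vxlan", 1001)
del mgr.drivers["vxlan"]                 # operator disables the driver
released = mgr.release_segment("vxlan", 1001)
# released is False and the allocation row is still present (orphaned)
```

With the driver removed, release has no one to delegate to, so the row survives and stays unavailable even if the driver is later re-enabled.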
[Yahoo-eng-team] [Bug 1311463] [NEW] disk-setup unable to partition disks
Public bug reported:

The problem is with is_disk_used in cc_disk_setup.py: use_count is a
list, which doesn't have a splitlines attribute. This is broken on
Ubuntu precise 12.04 with the latest updates.

    def is_disk_used(device):
        """
        Check if the device is currently used. Returns true if the device
        has either a file system or a partition entry
        is no filesystem found on the disk.
        """
        # If the child count is higher 1, then there are child nodes
        # such as partition or device mapper nodes
        use_count = [x for x in enumerate_disk(device)]
        if len(use_count.splitlines()) > 1:
            return True

        # If we see a file system, then its used
        _, check_fstype, _ = check_fs(device)
        if check_fstype:
            return True

        return False

** Affects: cloud-init
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1311463

Title: disk-setup unable to partition disks
Status in Init scripts for use on cloud images: New

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1311463/+subscriptions
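[Editor's sketch] A minimal sketch of the obvious fix: drop the splitlines() call, since use_count is already a list. The enumerate_disk/check_fs stubs below (and their DISK_NODES data) are illustrative stand-ins for the real cc_disk_setup.py helpers so the snippet runs standalone:

```python
# Illustrative stand-ins for cloud-init's helpers, so this runs alone.
DISK_NODES = {
    "/dev/sdb": ["/dev/sdb", "/dev/sdb1"],   # partitioned disk
    "/dev/sdc": ["/dev/sdc"],                # bare, unused disk
}

def enumerate_disk(device):
    # Stub: the real helper yields one entry per device node
    # (the disk itself plus any partitions).
    return iter(DISK_NODES[device])

def check_fs(device):
    # Stub: the real helper returns (out, fs_type, err).
    return ("", None, "")

def is_disk_used(device):
    """Return True if the device has partitions or a filesystem."""
    # More than one enumerated node means there are child nodes
    # such as partitions or device-mapper nodes.
    use_count = [x for x in enumerate_disk(device)]
    if len(use_count) > 1:       # was: len(use_count.splitlines()) > 1
        return True

    # If we see a file system, then it's used.
    _, check_fstype, _ = check_fs(device)
    if check_fstype:
        return True

    return False
```

Only the len() line changes relative to the quoted code; the rest of the logic is untouched.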
[Yahoo-eng-team] [Bug 1281483] Re: i18n info lost in fedora's rpm spec cause build failed
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1281483

Title: i18n info lost in fedora's rpm spec cause build failed
Status in OpenStack Dashboard (Horizon): Expired

Bug description:

This might not be the right place to report this, but I couldn't find
Fedora's bug tracker. The rpm spec file is in
http://pkgs.fedoraproject.org/cgit/python-django-horizon.git/ (el6-havana).

When building the rpm packages, it throws:

    error: can't copy 'horizon/locale/en/LC_MESSAGES/django.po': doesn't exist or not a regular file
    error: Bad exit status from /var/tmp/rpm-tmp.UMUzma (%build)

That's because in OpenStack we have MANIFEST.in, which defines

    recursive-include openstack_dashboard

However, Fedora's
http://pkgs.fedoraproject.org/cgit/python-django-horizon.git/tree/python-django-horizon.spec?h=el6-havana
contains at line 167:

    # remove unnecessary .po files
    find . -name "django*.po" -exec rm -f '{}' \;

which deletes the i18n files. As a result, when we build the rpm package
using Fedora's spec file, it follows MANIFEST.in to copy files into the
package, but those files were already deleted by the earlier script.

Knowing the reason, it's easy to fix. Two solutions:

1. Delete the "remove" script and add the lines below to %files:

       %{python_sitelib}/horizon/locale/??/LC_MESSAGES/
       %{python_sitelib}/horizon/locale/??_??/LC_MESSAGES/

2. Apply a patch at package build time that edits the MANIFEST.in file.

Please confirm, folks from Red Hat.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1281483/+subscriptions
[Yahoo-eng-team] [Bug 1265459] Re: Set Qos failed to vm instance which booted from volume
Please set the extra specs on the nova flavor first:

    nova flavor-key m1.small set quota:disk_read_bytes_sec=1024
    nova flavor-key m1.small set quota:disk_write_bytes_sec=1024

You can look at this: https://wiki.openstack.org/wiki/InstanceResourceQuota

** Changed in: nova
   Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1265459

Title: Set Qos failed to vm instance which booted from volume
Status in OpenStack Compute (Nova): Invalid

Bug description:

First, I created a volume type named "test":

    cinder type-create test

Then I created a cinder QoS spec and associated it with the volume type
named "test":

    cinder qos-create disk total_bytes_sec=1000 total_iops_sec=100
    cinder qos-associate 1e17e6da-aa7e-4695-8d80-5ada0eaa09eb 6ff55fa4-e54c-443e-8902-637b915e372

Next, I created a volume with the params above:

    cinder create --display-name cirros --volume-type test --image-id 9345c318-3ce0-48ba-b26b-f49947ccacf2

Finally, I created an instance with this volume:

    nova boot cirros --flavor m1.tiny --boot-volume cirros

However, after starting the VM instance, I found no iotune item in the
instance's XML:

    08243f23-7488-4668-ad3e-46dc26d9a09f

I use OpenStack Nova Havana.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1265459/+subscriptions
[Yahoo-eng-team] [Bug 1310507] Re: Duplicated image names.
** Changed in: glance
   Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1310507

Title: Duplicated image names.
Status in OpenStack Image Registry and Delivery Service (Glance): Invalid

Bug description:

When I try to create a new image with the command "glance image-create",
I found that the name of an already existing image can be used to name
the new one. For example:

+--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+
| ID                                   | Name                            | Disk Format | Container Format | Size      | Status |
+--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+
| a52c2602-ff9f-471e-b414-52d2c967e728 | cirros-0.3.1-x86_64-uec         | ami         | ami              | 25165824  | active |
| 3c5974ba-5ff9-4d93-9e1f-e0489f1ca85f | cirros-0.3.1-x86_64-uec-kernel  | aki         | aki              | 4955792   | active |
| 793c880f-aeba-419a-b603-cc59959d4c4f | cirros-0.3.1-x86_64-uec-ramdisk | ari         | ari              | 3714968   | active |
| 366bd9e4-f696-46cd-85c4-5abbed492450 | F17-x86_64-cfntools             | qcow2       | bare             | 476704768 | active |
| 1a6588c1-282c-4b26-a5fb-82ccb12bf943 | precise-server-img              | qcow2       | bare             | 258277888 | active |
| 2320f9d3-74c3-410c-a80a-19278f67d67a | precise-server-img              | qcow2       | bare             | 258277888 | active |
| 3f3f7ca7-999d-497b-a97b-55b5d1d21103 | precise-server-img              | qcow2       | bare             | 258277888 | active |
+--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+

I have seen a similar report at
https://bugs.launchpad.net/glance/+bug/687949. The answer was "This is
by design. We can't expect users to avoid other user's image names...",
which I totally agree with: users should have their freedom choosing
names. However, I think we can make it better, since this has some
negative impacts on other actions. For instance:

1. "heat stack-create" requires the name of an image, and duplicated
   image name(s) would cause an error.
2. Inconvenience is also found when deleting such image(s), since one
   would have to use "ID" instead of "Name".
3. Moreover, when a user creates an image without checking if there are
   any duplicates, he may run into trouble afterwards, since it requires
   some extra effort to distinguish "his image" from the duplicated
   one(s).

So, I think we should at least warn users when they try to create an
image with an existing image name and (maybe) give them a chance to
rename the image before further action is actually carried out. Using a
duplicated name would still be allowed.

What do you think? This is my first time proposing anything here. Any
suggestion is welcome. Thank you. Best regards.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1310507/+subscriptions
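[Editor's sketch] The warning the reporter proposes boils down to a name lookup over the existing image list before the create proceeds. A minimal sketch with hypothetical names (find_name_clashes is not glance's code; the dicts mirror the shape of the listing above):

```python
def find_name_clashes(existing_images, new_name):
    """Return the IDs of existing images already using new_name.

    existing_images has the shape of a 'glance image-list' result:
    a list of dicts with at least 'id' and 'name' keys. A non-empty
    result lets the caller warn the user and offer a rename before
    the (still permitted) duplicate create goes ahead.
    """
    return [img["id"] for img in existing_images if img["name"] == new_name]

images = [
    {"id": "1a6588c1", "name": "precise-server-img"},
    {"id": "2320f9d3", "name": "precise-server-img"},
    {"id": "a52c2602", "name": "cirros-0.3.1-x86_64-uec"},
]
clashes = find_name_clashes(images, "precise-server-img")
# clashes == ["1a6588c1", "2320f9d3"]
```

Note this only implements the "warn" half of the proposal; duplicates remain allowed, exactly as the report asks.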
[Yahoo-eng-team] [Bug 1311419] [NEW] An unexpected error has occurred when opening the pseudo-folder under a container
Public bug reported:

[Wed Apr 23 01:43:14.077580 2014] [authz_core:debug] [pid 14043:tid 139916978800384] mod_authz_core.c(802): [client 192.168.122.1:60003] AH01626: authorization result of Require all granted: granted
[Wed Apr 23 01:43:14.077651 2014] [authz_core:debug] [pid 14043:tid 139916978800384] mod_authz_core.c(802): [client 192.168.122.1:60003] AH01626: authorization result of : granted
[Wed Apr 23 01:43:14.077741 2014] [authz_core:debug] [pid 14043:tid 139916978800384] mod_authz_core.c(802): [client 192.168.122.1:60003] AH01626: authorization result of Require all granted: granted
[Wed Apr 23 01:43:14.077757 2014] [authz_core:debug] [pid 14043:tid 139916978800384] mod_authz_core.c(802): [client 192.168.122.1:60003] AH01626: authorization result of : granted
[Wed Apr 23 01:43:14.833565 2014] [authz_core:debug] [pid 14043:tid 139916970399488] mod_authz_core.c(802): [client 192.168.122.1:60003] AH01626: authorization result of Require all granted: granted
[Wed Apr 23 01:43:14.833648 2014] [authz_core:debug] [pid 14043:tid 139916970399488] mod_authz_core.c(802): [client 192.168.122.1:60003] AH01626: authorization result of : granted
[Wed Apr 23 01:43:14.833721 2014] [authz_core:debug] [pid 14043:tid 139916970399488] mod_authz_core.c(802): [client 192.168.122.1:60003] AH01626: authorization result of Require all granted: granted
[Wed Apr 23 01:43:14.833735 2014] [authz_core:debug] [pid 14043:tid 139916970399488] mod_authz_core.c(802): [client 192.168.122.1:60003] AH01626: authorization result of : granted
[Wed Apr 23 01:43:17.165668 2014] [authz_core:debug] [pid 14043:tid 139916881164032] mod_authz_core.c(802): [client 192.168.122.1:60003] AH01626: authorization result of Require all granted: granted, referer: http://10.22.157.37/project/containers/my-container-1/
[Wed Apr 23 01:43:17.165707 2014] [authz_core:debug] [pid 14043:tid 139916881164032] mod_authz_core.c(802): [client 192.168.122.1:60003] AH01626: authorization result of : granted, referer: http://10.22.157.37/project/containers/my-container-1/
[Wed Apr 23 01:43:17.165755 2014] [authz_core:debug] [pid 14043:tid 139916881164032] mod_authz_core.c(802): [client 192.168.122.1:60003] AH01626: authorization result of Require all granted: granted, referer: http://10.22.157.37/project/containers/my-container-1/
[Wed Apr 23 01:43:17.165764 2014] [authz_core:debug] [pid 14043:tid 139916881164032] mod_authz_core.c(802): [client 192.168.122.1:60003] AH01626: authorization result of : granted, referer: http://10.22.157.37/project/containers/my-container-1/

** Affects: horizon
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1311419

Title: An unexpected error has occurred when opening the pseudo-folder
under a container
Status in OpenStack Dashboard (Horizon): New
[Yahoo-eng-team] [Bug 1311412] [NEW] Nicira test_looping_calls periodic failures in stable/havana
Public bug reported:

Hit this 2/3 times while trying to get a trivial version change through
the review checks at https://review.openstack.org/#/c/89441/.

ft1.5395: neutron.tests.unit.nicira.test_nvp_sync.SyncLoopingCallTestCase.test_looping_calls_StringException: Empty attachments:
  pythonlogging:''
  stderr
  stdout

Traceback (most recent call last):
  File "neutron/tests/unit/nicira/test_nvp_sync.py", line 254, in test_looping_calls
    5, synchronizer._synchronize_state.call_count)
  File "/home/jenkins/workspace/gate-neutron-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py", line 321, in assertEqual
    self.assertThat(observed, matcher, message)
  File "/home/jenkins/workspace/gate-neutron-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py", line 406, in assertThat
    raise mismatch_error
MismatchError: 5 != 2

** Affects: neutron
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1311412

Title: Nicira test_looping_calls periodic failures in stable/havana
Status in OpenStack Neutron (virtual network service): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1311412/+subscriptions
[Yahoo-eng-team] [Bug 1311357] [NEW] Swift.py does not respect the OPENSTACK_SSL_NO_VERIFY setting for use with self signed certs
Public bug reported:

The swift api client connection does not respect the
OPENSTACK_SSL_NO_VERIFY setting in local_settings.py. This results in a
test deployment with self-signed certificates not being able to use the
Horizon project containers web interface, failing with an SSL
verification error.

A patch would be something like this:

diff -Naur openstack-dashboard/openstack_dashboard/api/swift.py openstack-dashboard.new/openstack_dashboard/api/swift.py
--- openstack-dashboard/openstack_dashboard/api/swift.py	2014-04-22 21:44:37.293082690 +
+++ openstack-dashboard.new/openstack_dashboard/api/swift.py	2014-04-22 21:47:57.541082727 +
@@ -108,6 +108,7 @@
 def swift_api(request):
     endpoint = base.url_for(request, 'object-store')
+    insecure = getattr(settings, 'OPENSTACK_SSL_NO_VERIFY', False)
     cacert = getattr(settings, 'OPENSTACK_SSL_CACERT', None)
     LOG.debug('Swift connection created using token "%s" and url "%s"'
               % (request.user.token.id, endpoint))
@@ -117,6 +118,7 @@
         preauthtoken=request.user.token.id,
         preauthurl=endpoint,
         cacert=cacert,
+        insecure=insecure,
         auth_version="2.0")

** Affects: horizon
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1311357

Title: Swift.py does not respect the OPENSTACK_SSL_NO_VERIFY setting for
use with self signed certs
Status in OpenStack Dashboard (Horizon): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1311357/+subscriptions
[Yahoo-eng-team] [Bug 1311324] [NEW] documentation does not specify that [auth] drivers only work with v3 API
Public bug reported:

The documentation on auth plugins
(http://docs.openstack.org/developer/keystone/configuration.html#how-to-implement-an-authentication-plugin)
does not state that it's a V3 feature. I did a bunch of tests today and
found that it's being ignored. You can set the config to complete
garbage values and it is ignored. I also found that calls to get a token
skip the auth drivers and talk right to the identity ones.

morganfainberg: perhaps you can comment on a mystery, when I use password auth and request a token, is it supposed to go through the auth modules?
mfisch, v2.0 or v3?
mfisch, v3 is where the auth plugins/modules are used vs. the logic in the token auth controller
morganfainberg: v2
morganfainberg: I did see the token driver just calling right to the identity driver
morganfainberg: ugh, so whats the point of an auth module in v2?
mfisch, https://github.com/openstack/keystone/blob/master/keystone/token/controllers.py#L60
mfisch, this is one of the benefits of using V3 (yes, I know, not supported everywhere yet)
morganfainberg: yeah, thats the code I was looking at earlier, authenticate_local calls direct to ident
mfisch, yep
mfisch, v2.0 doesn't have the auth plugin mechanisms
mfisch, it wasn't really designed with that in mind.
morganfainberg: so the docs for it are really designed for v3
mfisch, if we weren't clear on the auth plugins being a v3 thing we should get the docs updated
mfisch, but yes, v3 is where auth plugin logic is used
morganfainberg: I dont see it called out here: http://docs.openstack.org/developer/keystone/configuration.html#how-to-implement-an-authentication-plugin
mfisch, yep, don't see it either. file a bug on this if you don't mind (feel free to fix it too if you're so inclined)
mfisch, good catch. not sure if happy to be right or sad that it doesn't work
mfisch, well, help us get everyone moved to v3 :) then it'll work like you expect!
mfisch (shameless plug for help to get OpenStack on keystone V3)
I'm on board

** Affects: keystone
   Importance: Low
   Status: Triaged

** Tags: documentation low-hanging-fruit

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1311324

Title: documentation does not specify that [auth] drivers only work with
v3 API
Status in OpenStack Identity (Keystone): Triaged

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1311324/+subscriptions
[Yahoo-eng-team] [Bug 1297642] Re: Evacuate fails during rebuild of the VM on the target host with RPC timeout
Closing this as Invalid, as this appears to be an incorrect use of the
multi host mode in nova-network. That is, the network has been
configured as multi host, but the deployment isn't running nova-network
on every compute host.

The newest documentation (Grizzly) about the nova-network multi host
feature I could find says: "The multi_host option must be in place when
you create the network and nova-network must be run on every compute
host. These created multi hosts networks will send all network related
commands to the host that the specific VM is on." [1]

[1] http://docs.openstack.org/grizzly/openstack-compute/admin/content/existing-ha-networking-options.html#d6e9503

Please re-open if needed.

** Changed in: nova
   Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1297642

Title: Evacuate fails during rebuild of the VM on the target host with
RPC timeout
Status in OpenStack Compute (Nova): Invalid

Bug description:

When using 'nova evacuate' to evacuate a VM with no shared storage to a
target host, the command fails during the rebuild step, leaving the VM
in the rebuilding state on the target host. The VM is evacuated from the
failed host but fails with an RPC timeout error during the rebuild on
the target host.

Here are steps to recreate the issue:

1) Create a VM on a host:

    nova boot --flavor m1.small --image my_image test-vm

2) Disable the compute host of the VM and stop the nova-compute process
   on it.

3) Evacuate:

    nova evacuate test-vm target-host

   The VM is evacuated from the failed host and starts rebuilding on the
   target host.

5) Check test-vm:

    nova show test-vm

   Server error 500 with RPC timeout, and the VM is stuck in the
   rebuilding state on the target host.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1297642/+subscriptions
[Yahoo-eng-team] [Bug 1311300] [NEW] NSX: raises 404 error on update port if lswitch not found in nvp
Public bug reported:

In Havana, if one calls update_port on a port whose lswitch is not in
NSX, we raise a 404 error. Instead, we should just update the DB and
return the port in the error state.

** Affects: neutron
   Importance: Undecided
   Assignee: Aaron Rosen (arosen)
   Status: New

** Changed in: neutron
   Assignee: (unassigned) => Aaron Rosen (arosen)

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1311300

Title: NSX: raises 404 error on update port if lswitch not found in nvp
Status in OpenStack Neutron (virtual network service): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1311300/+subscriptions
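[Editor's sketch] The behaviour the report asks for can be sketched as follows. All names here (NsxNotFound, backend_update_port, the dict-backed db) are illustrative, not the real plugin API:

```python
class NsxNotFound(Exception):
    """Stands in for the backend's 404 on a missing lswitch."""

def backend_update_port(port_id, attrs):
    # Hypothetical NSX call; here it always reports the lswitch missing,
    # simulating the condition described in the bug.
    raise NsxNotFound(port_id)

def update_port(db, port_id, attrs):
    """Update the DB even when the backend object is gone, flagging the
    port as ERROR instead of propagating a 404 to the caller."""
    try:
        backend_update_port(port_id, attrs)
        status = "ACTIVE"
    except NsxNotFound:
        status = "ERROR"
    db[port_id].update(attrs, status=status)
    return db[port_id]

db = {"p1": {"status": "ACTIVE", "name": "old"}}
port = update_port(db, "p1", {"name": "new"})
# port == {"status": "ERROR", "name": "new"}
```

The point is simply that the backend miss is swallowed and surfaced as port state rather than as an API error.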
[Yahoo-eng-team] [Bug 1311291] [NEW] NSX: fake_api_client does not raise not found
Public bug reported:

If one queries NSX with GET /ws.v1/lswitch/LS_UUID/lport/tag=blah, and
LS_UUID is a UUID and not *, NSX returns 404 instead of a result list.

** Affects: neutron
   Importance: Undecided
   Assignee: Aaron Rosen (arosen)
   Status: New

** Changed in: neutron
   Assignee: (unassigned) => Aaron Rosen (arosen)

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1311291

Title: NSX: fake_api_client does not raise not found
Status in OpenStack Neutron (virtual network service): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1311291/+subscriptions
[Yahoo-eng-team] [Bug 1285306] Re: N1kv plugin string split on None exception
** Changed in: neutron
   Status: In Progress => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1285306

Title: N1kv plugin string split on None exception
Status in OpenStack Neutron (virtual network service): Invalid

Bug description:

In the function _initialize_network_ranges() in n1kv_neutron_plugin.py,
_get_segment_range() is called regardless of network type. For trunk and
multi-segment networks, which don't have a segment range, this results
in an exception. The fix is to check the network type before the
_get_segment_range() operation. A patch for this will be submitted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1285306/+subscriptions
[Yahoo-eng-team] [Bug 1311260] [NEW] SDN-VE controller expects information not in Neutron request
Public bug reported:

The SDN-VE controller expects the port-id information for update_router.
Furthermore, the controller requires the use of the string 'null' for
null values present in Neutron requests. The controller cannot accept
":" as part of the incoming requests.

** Affects: neutron
   Importance: Undecided
   Assignee: Mohammad Banikazemi (mb-s)
   Status: In Progress

** Changed in: neutron
   Assignee: (unassigned) => Mohammad Banikazemi (mb-s)

** Changed in: neutron
   Status: New => In Progress

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1311260

Title: SDN-VE controller expects information not in Neutron request
Status in OpenStack Neutron (virtual network service): In Progress

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1311260/+subscriptions
[Yahoo-eng-team] [Bug 1311232] [NEW] LBaaSAgentSchedulerTestCaseXML.test_schedule_pool_with_down_agen broken
Public bug reported: ft1.6695: neutron.tests.unit.services.loadbalancer.test_agent_scheduler.LBaaSAgentSchedulerTestCaseXML.test_schedule_pool_with_down_agent_StringException: Empty attachments: stderr stdout pythonlogging:'': {{{ 2014-04-22 01:44:50,176 INFO [neutron.manager] Loading Plugin: neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2 2014-04-22 01:44:50,178ERROR [neutron.services.loadbalancer.drivers.haproxy.agent_manager] Failed reporting state! Traceback (most recent call last): File "neutron/services/loadbalancer/drivers/haproxy/agent_manager.py", line 178, in _report_state self.agent_state) File "neutron/agent/rpc.py", line 74, in report_state return self.cast(context, msg, topic=self.topic) File "neutron/openstack/common/rpc/proxy.py", line 171, in cast rpc.cast(context, self._get_topic(topic), msg) File "neutron/openstack/common/rpc/__init__.py", line 158, in cast return _get_impl().cast(CONF, context, topic, msg) File "neutron/openstack/common/rpc/impl_fake.py", line 166, in cast check_serialize(msg) File "neutron/openstack/common/rpc/impl_fake.py", line 131, in check_serialize json.dumps(msg) File "/usr/lib/python2.7/json/__init__.py", line 231, in dumps return _default_encoder.encode(obj) File "/usr/lib/python2.7/json/encoder.py", line 201, in encode chunks = self.iterencode(o, _one_shot=True) File "/usr/lib/python2.7/json/encoder.py", line 264, in iterencode return _iterencode(o, 0) File "/usr/lib/python2.7/json/encoder.py", line 178, in default raise TypeError(repr(o) + " is not JSON serializable") TypeError: is not JSON serializable 2014-04-22 01:44:50,180ERROR [neutron.openstack.common.loopingcall] in fixed duration looping call Traceback (most recent call last): File "neutron/openstack/common/loopingcall.py", line 82, in _inner delay = interval - timeutils.delta_seconds(start, end) TypeError: unsupported operand type(s) for -: 'Mock' and 'float' 2014-04-22 01:44:50,180ERROR 
[neutron.services.loadbalancer.drivers.haproxy.agent_manager] Failed reporting state! Traceback (most recent call last): File "neutron/services/loadbalancer/drivers/haproxy/agent_manager.py", line 178, in _report_state self.agent_state) File "neutron/agent/rpc.py", line 74, in report_state return self.cast(context, msg, topic=self.topic) File "neutron/openstack/common/rpc/proxy.py", line 171, in cast rpc.cast(context, self._get_topic(topic), msg) File "neutron/openstack/common/rpc/__init__.py", line 158, in cast return _get_impl().cast(CONF, context, topic, msg) File "neutron/openstack/common/rpc/impl_fake.py", line 166, in cast check_serialize(msg) File "neutron/openstack/common/rpc/impl_fake.py", line 131, in check_serialize json.dumps(msg) File "/usr/lib/python2.7/json/__init__.py", line 231, in dumps return _default_encoder.encode(obj) File "/usr/lib/python2.7/json/encoder.py", line 201, in encode chunks = self.iterencode(o, _one_shot=True) File "/usr/lib/python2.7/json/encoder.py", line 264, in iterencode return _iterencode(o, 0) File "/usr/lib/python2.7/json/encoder.py", line 178, in default raise TypeError(repr(o) + " is not JSON serializable") TypeError: is not JSON serializable 2014-04-22 01:44:50,180ERROR [neutron.openstack.common.loopingcall] in fixed duration looping call Traceback (most recent call last): File "neutron/openstack/common/loopingcall.py", line 82, in _inner delay = interval - timeutils.delta_seconds(start, end) TypeError: unsupported operand type(s) for -: 'Mock' and 'float' 2014-04-22 01:44:50,181ERROR [neutron.services.loadbalancer.drivers.haproxy.agent_manager] Failed reporting state! 
Traceback (most recent call last): File "neutron/services/loadbalancer/drivers/haproxy/agent_manager.py", line 178, in _report_state self.agent_state) File "neutron/agent/rpc.py", line 74, in report_state return self.cast(context, msg, topic=self.topic) File "neutron/openstack/common/rpc/proxy.py", line 171, in cast rpc.cast(context, self._get_topic(topic), msg) File "neutron/openstack/common/rpc/__init__.py", line 158, in cast return _get_impl().cast(CONF, context, topic, msg) File "neutron/openstack/common/rpc/impl_fake.py", line 166, in cast check_serialize(msg) File "neutron/openstack/common/rpc/impl_fake.py", line 131, in check_serialize json.dumps(msg) File "/usr/lib/python2.7/json/__init__.py", line 231, in dumps return _default_encoder.encode(obj) File "/usr/lib/python2.7/json/encoder.py", line 201, in encode chunks = self.iterencode(o, _one_shot=True) File "/usr/lib/python2.7/json/encoder.py", line 264, in iterencode return _iterencode(o, 0) File "/usr/lib/python2.7/json/encoder.py", line 178, in default raise TypeError(repr(o) + " is not JSON serializable") TypeError: is not JSON serializable 2014-04-22 01:44:50,181ERROR [neutron.op
[Yahoo-eng-team] [Bug 1276221] Re: Keystone returns HTTP 400 as SQLAlchemy raises None exceptions
It doesn't sound like there's anything to fix in keystone then, if this is due to dependency version(s) already documented as unsupported. ** Changed in: keystone Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1276221 Title: Keystone returns HTTP 400 as SQLAlchemy raises None exceptions Status in OpenStack Identity (Keystone): Invalid Bug description: With RDO-Icehouse (m2 testday packages) on RHEL-6.5, negative Tempest identity tests fails as Keystone responds with HTTP 400. For example test tempest.api.identity.admin.v3.test_projects.ProjectsTestJSON.test_project_create_duplicate gives following output: > Request: POST http://192.168.1.16:35357/v3/projects > Request Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': ''} > Request Body: {"project": {"enabled": true, "description": null, "name": "project-dup--1737299968", "domain_id": "default"}} > Response Status: 400 > Response Headers: {'content-length': '143', 'date': 'Tue, 04 Feb 2014 07:47:53 GMT', 'content-type': 'application/json', 'vary': 'X-Auth-Token', 'connection': 'close'} > Response Body: {"error": {"message": "exceptions must be old-style classes or derived from BaseException, not NoneType", "code": 400, "title": "Bad Request"}} In keystone.log the exception can be seen as > keystone.common.wsgi Traceback (most recent call last): > keystone.common.wsgi File "/usr/lib/python2.6/site-packages/keystone/common/wsgi.py", line 214, in __call__ > keystone.common.wsgi result = method(context, **params) > keystone.common.wsgi File "/usr/lib/python2.6/site-packages/keystone/common/controller.py", line 174, in inner > keystone.common.wsgi return f(self, context, *args, **kwargs) > keystone.common.wsgi File "/usr/lib/python2.6/site-packages/keystone/assignment/controllers.py", line 390, in create_project > keystone.common.wsgi ref = 
self.assignment_api.create_project(ref['id'], ref) > keystone.common.wsgi File "/usr/lib/python2.6/site-packages/keystone/notifications.py", line 53, in wrapper > keystone.common.wsgi result = f(*args, **kwargs) > keystone.common.wsgi File "/usr/lib/python2.6/site-packages/keystone/assignment/core.py", line 72, in create_project > keystone.common.wsgi ret = self.driver.create_project(tenant_id, tenant) > keystone.common.wsgi File "/usr/lib/python2.6/site-packages/keystone/common/sql/core.py", line 165, in wrapper > keystone.common.wsgi return method(*args, **kwargs) > keystone.common.wsgi File "/usr/lib/python2.6/site-packages/keystone/assignment/backends/sql.py", line 411, in create_project > keystone.common.wsgi return tenant_ref.to_dict() > keystone.common.wsgi File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__ > keystone.common.wsgi self.gen.next() > keystone.common.wsgi File "/usr/lib/python2.6/site-packages/keystone/common/sql/core.py", line 156, in transaction > keystone.common.wsgi yield session > keystone.common.wsgi File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/session.py", line 405, in __exit__ > keystone.common.wsgi raise > keystone.common.wsgi TypeError: exceptions must be old-style classes or derived from BaseException, not NoneType > keystone.common.wsgi Deployed with packstack, no related configuration changes. 
Installed packages: > openstack-keystone.noarch 2014.1-0.3.b2.el6 > python-keystone.noarch 2014.1-0.3.b2.el6 > python-keystoneclient.noarch 1:0.4.1-3.el6 > python-sqlalchemy0.7.x86_64 0.7.8-1.el6 > python-eventlet.noarch 0.9.17-2.el6 > python-libs.x86_64 2.6.6-51.el6 This is NOT reproducible on Fedora20 with versions: > openstack-keystone.noarch 2014.1-0.3.b2.fc21 > python-keystone.noarch 2014.1-0.3.b2.fc21 > python-keystoneclient.noarch 1:0.4.1-3.fc20 > python-sqlalchemy.x86_64 0.8.4-1.fc20 > python-eventlet.noarch 0.12.0-2.fc20 > python-libs.x86_64 2.7.5-9.fc20 Nor did it happen on Fedora 19/20 or RHEL-6.5 with devstack/tempest master branches. This seems related to eventlet/tpool issues, here with SQLAlchemy. For example like https://bitbucket.org/eventlet/eventlet/issue/118/exceptions-are-cleared-during To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1276221/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1308984] Re: Floating IP addresses ordered in a weird way
** No longer affects: openstack-community -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1308984 Title: Floating IP addresses ordered in a weird way Status in OpenStack Dashboard (Horizon): New Bug description: The floating ip:s are ordered according to UUID instead of IP, more information in the patch. --- commit 83a10bf02a5079513741039860208e277e1d12e4 Author: Ian Kumlien Date: Thu Apr 17 13:49:32 2014 +0200 Sorting floating IP:s according to IP. While using alot of manually allocated floating ip:s we wondered why the IP list wasn't sorted. While looking at it we found that the UI actually does sort the IP but according to the UUID instead of the actual IP address. This change fixes this so that it's sorted according to IP. Found-By: Marko Bocevski diff --git a/openstack_dashboard/dashboards/project/access_and_security/floating_ips/workflows.py b/openstack_dashboard/dashboards/project/access_and_security/floating_ips/workflows.py index c4ebbd1..d884dee 100644 --- a/openstack_dashboard/dashboards/project/access_and_security/floating_ips/workflows.py +++ b/openstack_dashboard/dashboards/project/access_and_security/floating_ips/workflows.py @@ -69,7 +69,7 @@ class AssociateIPAction(workflows.Action): exceptions.handle(self.request, _('Unable to retrieve floating IP addresses.'), redirect=redirect) -options = sorted([(ip.id, ip.ip) for ip in ips if not ip.port_id]) +options = sorted([(ip.ip, ip.ip) for ip in ips if not ip.port_id]) if options: options.insert(0, ("", _("Select an IP address"))) else: To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1308984/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
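One caveat worth noting about the patch quoted above (an editorial observation, not part of the bug report): sorting dotted-quad strings lexicographically still mis-orders addresses once an octet crosses a digit-count boundary. Python's standard `ipaddress` module provides a numeric sort key:

```python
import ipaddress

ips = ["10.0.0.9", "10.0.0.10", "10.0.0.2"]

# Plain string sort puts "10.0.0.10" before "10.0.0.2" and "10.0.0.9".
assert sorted(ips) == ["10.0.0.10", "10.0.0.2", "10.0.0.9"]

# Sorting with ipaddress.ip_address as the key compares numerically.
assert sorted(ips, key=ipaddress.ip_address) == ["10.0.0.2", "10.0.0.9", "10.0.0.10"]
```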
[Yahoo-eng-team] [Bug 1311197] [NEW] xenapi: refactor _find_vdi_refs and lookup_vm_vdis
Public bug reported: These functions look to do pretty much the same thing. It seems like we should be able to combine the functions into a single implementation. ** Affects: nova Importance: Undecided Assignee: Christopher Lefelhocz (christopher-lefelhoc) Status: In Progress ** Tags: xenserver ** Changed in: nova Status: New => In Progress ** Changed in: nova Assignee: (unassigned) => Christopher Lefelhocz (christopher-lefelhoc) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1311197 Title: xenapi: refactor _find_vdi_refs and lookup_vm_vdis Status in OpenStack Compute (Nova): In Progress Bug description: These functions look to do pretty much the same thing. It seems like we should be able to combine the functions into a single implementation. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1311197/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1289627] Re: VMware NoPermission faults do not log what permission was missing
** Also affects: oslo.vmware Importance: Undecided Status: New ** Changed in: oslo.vmware Status: New => In Progress ** Changed in: oslo.vmware Assignee: (unassigned) => Eric Brown (ericwb) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1289627 Title: VMware NoPermission faults do not log what permission was missing Status in OpenStack Compute (Nova): In Progress Status in Oslo VMware library for OpenStack projects: In Progress Bug description: NoPermission object has a privilegeId that tells us which permission the user did not have. Presently the VMware nova driver does not log this data. This is very useful for debugging user permissions problems on vCenter or ESX. http://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.wssdk.apiref.doc/vim.fault.NoPermission.html To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1289627/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1311194] [NEW] Missing semi-colons in less files
Public bug reported: In horizon.less and horizon_charts.less there are missing semi-colons. I know they're not necessary but, IMHO, omitting them is not good practice. ** Affects: horizon Importance: Undecided Assignee: Robert Mizielski (miziel-r) Status: New ** Tags: low-hanging-fruit ** Changed in: horizon Assignee: (unassigned) => Robert Mizielski (miziel-r) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1311194 Title: Missing semi-colons in less files Status in OpenStack Dashboard (Horizon): New Bug description: In horizon.less and horizon_charts.less there are missing semi-colons. I know they're not necessary but, IMHO, omitting them is not good practice. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1311194/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1311193] [NEW] Evacuate with a volume attached
Public bug reported: After evacuating my instance, I can see that my volume is still attached, but in the Horizon volume list the volume appears as available. The volume is stored on NFS on a FreeNAS box. ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1311193 Title: Evacuate with a volume attached Status in OpenStack Dashboard (Horizon): New Bug description: After evacuating my instance, I can see that my volume is still attached, but in the Horizon volume list the volume appears as available. The volume is stored on NFS on a FreeNAS box. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1311193/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1308413] Re: Missing x-tenant-id header to registry will return list of all images while using v2 api with registry
** Information type changed from Private Security to Public ** Changed in: ossa Status: Incomplete => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1308413 Title: Missing x-tenant-id header to registry will return list of all images while using v2 api with registry Status in OpenStack Image Registry and Delivery Service (Glance): Incomplete Status in OpenStack Security Advisories: Won't Fix Bug description: $ ./run_tests.sh --subunit glance.tests.functional.v2.test_images.TestImages.test_permissions Running `tools/with_venv.sh python -m glance.openstack.common.lockutils python setup.py testr --testr-args='--subunit --concurrency 1 --subunit glance.tests.functional.v2.test_images.TestImages.test_permissions'` glance.tests.functional.v2.test_images.TestImages test_permissions FAIL Slowest 1 tests took 12.91 secs: glance.tests.functional.v2.test_images.TestImages test_permissions 12.91 == FAIL: glance.tests.functional.v2.test_images.TestImages.test_permissions -- Traceback (most recent call last): _StringException: Traceback (most recent call last): File "/home/ubuntu/glance/glance/tests/functional/v2/test_images.py", line 488, in test_permissions self.assertEqual(0, len(images)) File "/home/ubuntu/glance/.venv/local/lib/python2.7/site-packages/testtools/testcase.py", line 321, in assertEqual self.assertThat(observed, matcher, message) File "/home/ubuntu/glance/.venv/local/lib/python2.7/site-packages/testtools/testcase.py", line 406, in assertThat raise mismatch_error MismatchError: 0 != 1 Ran 2 tests in 26.407s FAILED (failures=1) 482 # TENANT2 should not see the image in their list 483 path = self._url('/v2/images') 484 headers = self._headers({'X-Tenant-Id': TENANT2}) 485 response = requests.get(path, headers=headers) 486 self.assertEqual(200, response.status_code) 487 images = jsonutils.loads(response.text)['images'] 488 self.assertEqual(0, 
len(images)) The reason only one image seen by wrong tenant is purely because this test has populated glance only with one image. Missing x-tenant-id header in the GET request made to registry server listing images will return all images. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1308413/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
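The fail-closed behavior the report argues for can be sketched as below. This is a simplified illustration, not glance's actual registry code; the function, image fields, and use of `PermissionError` are assumptions:

```python
def list_images(headers, all_images):
    """Return only images visible to the tenant named in X-Tenant-Id."""
    tenant = headers.get('X-Tenant-Id')
    if tenant is None:
        # Fail closed: without a tenant id the query cannot be scoped,
        # so refuse instead of falling through to an all-images listing.
        raise PermissionError("missing X-Tenant-Id header")
    return [img for img in all_images
            if img['owner'] == tenant or img.get('visibility') == 'public']
```

The point is simply that an absent scoping header should be an error, not an implicit "no filter".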
[Yahoo-eng-team] [Bug 1302080] Re: Host is accessible from instance using Linux bridge IPv6 address
this bug should be fixed openly as a strengthening measure. ** Information type changed from Private Security to Public ** Changed in: ossa Status: Incomplete => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1302080 Title: Host is accessible from instance using Linux bridge IPv6 address Status in OpenStack Neutron (virtual network service): New Status in OpenStack Security Advisories: Won't Fix Bug description: Opening this as a security bug just in case - but I doubt it is. If a compute node has enabled auto configuration for IPv6 addresses on all interfaces, then the Linux bridges used for connecting instances will get IPv6 addresses too. Then an instance can reach the host using the address of the bridge it is connected to. Eg with the ovs-agent and hybrid VIF driver after booting an instance in devstack connected to the "private" network: vagrant@devstack:~$ brctl show bridge name bridge id STP enabled interfaces br-ex .9619b7f0614b no qg-97601dc1-77 br-int.cad7ebe11e46 no qr-edf68f52-f9 qvoe8eabd6a-46 tap09437e57-45 qbre8eabd6a-468000.0e8e27c7cdfa no qvbe8eabd6a-46 tape8eabd6a-46 vagrant@devstack:~$ ip address show dev qbre8eabd6a-46 15: qbre8eabd6a-46: mtu 1500 qdisc noqueue state UP link/ether 0e:8e:27:c7:cd:fa brd ff:ff:ff:ff:ff:ff inet6 fe80::dcc6:30ff:fe27:37a1/64 scope link valid_lft forever preferred_lft forever Note: the address fe80::dcc6:30ff:fe27:37a1 and login to instance: $ ssh ubuntu@172.24.4.3 ubuntu@vm1:~$ ip address show dev eth0 2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether fa:16:3e:d1:7e:fe brd ff:ff:ff:ff:ff:ff inet 10.0.0.9/24 brd 10.0.0.255 scope global eth0 inet6 fe80::f816:3eff:fed1:7efe/64 scope link valid_lft forever preferred_lft forever ubuntu@vm1:~$ ping6 -c4 -I eth0 ff02::1 PING ff02::1(ff02::1) from fe80::f816:3eff:fed1:7efe eth0: 56 data bytes 64 bytes from fe80::f816:3eff:fed1:7efe: 
icmp_seq=1 ttl=64 time=16.9 ms 64 bytes from fe80::dcc6:30ff:fe27:37a1: icmp_seq=1 ttl=64 time=38.4 ms (DUP!) 64 bytes from fe80::f816:3eff:fed1:7efe: icmp_seq=2 ttl=64 time=1.44 ms 64 bytes from fe80::dcc6:30ff:fe27:37a1: icmp_seq=2 ttl=64 time=3.88 ms (DUP!) 64 bytes from fe80::f816:3eff:fed1:7efe: icmp_seq=3 ttl=64 time=8.63 ms 64 bytes from fe80::dcc6:30ff:fe27:37a1: icmp_seq=3 ttl=64 time=14.0 ms (DUP!) 64 bytes from fe80::f816:3eff:fed1:7efe: icmp_seq=4 ttl=64 time=0.476 ms ubuntu@vm1:~$ ping6 -c1 -I eth0 fe80::dcc6:30ff:fe27:37a1 PING fe80::dcc6:30ff:fe27:37a1(fe80::dcc6:30ff:fe27:37a1) from fe80::f816:3eff:fed1:7efe eth0: 56 data bytes 64 bytes from fe80::dcc6:30ff:fe27:37a1: icmp_seq=1 ttl=64 time=2.86 ms ubuntu@vm1:~$ ssh fe80::dcc6:30ff:fe27:37a1%eth0 The authenticity of host 'fe80::dcc6:30ff:fe27:37a1%eth0 (fe80::dcc6:30ff:fe27:37a1%eth0)' can't be established. ECDSA key fingerprint is 11:5d:55:29:8a:77:d8:08:b4:00:9b:a3:61:93:fe:e5. Are you sure you want to continue connecting (yes/no)? I thought the anti-spoof rules should block packets from the fe80 address, but looking at the ip6tables-save (attached) the spoof chain and its default DROP rule are missing. That must be because there is no IPv6 subnet on the "private" network - maybe that's another problem. I inserted them manually, but that did not work because these packets hit the host's INPUT chain and the security group filters are on the FORWARD chain. So maybe all that is needed is a note in the doc to say that auto config should be disabled by default and selectively enabled on interfaces if needed. E.g.: net.ipv6.conf.all.autoconf=0 net.ipv6.conf.all.disable_ipv6=1 net.ipv6.conf.default.disable_ipv6=1 # enable on lo and eth1 net.ipv6.conf.lo.disable_ipv6=0 net.ipv6.conf.eth1.disable_ipv6=0 Or maybe the VIF drivers should disable IPv6 on the bridge when creating it. 
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1302080/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1311142] [NEW] Cache records for get_*_by_name are not invalidated on entity rename
Public bug reported: I have noticed in keystone code, that update_domain and update_project methods in assignment_api Manager invalidate cache for get_*_by_name() using new name, not the old one. For example in update_domain() if you are changing domain name from 'OldName' to 'NewName', get_domain_by_name.invalidate() is called with 'NewName' as argument. See: https://github.com/openstack/keystone/blob/1e948043fe2456bd91b398317c71c665d69e9935/keystone/assignment/core.py#L320 As a result the old name can be used in some requests until cache record is expired. For example if you rename a domain, old name can still be used for the authentication (note, caching should be enabled in keystone configuration): 1. Define domain by its name during login: curl -X POST -H 'Content-type: application/json' -d '{"auth":{"identity":{"methods":["password"], "password":{"user":{"name":"Alice","domain":{"name": "OldName"}, "password":"A12345678"}' -v http://192.168.56.101:5000/v3/auth/tokens 2. Change domain name: curl -X PATCH -H 'Content-type: application/json' -H 'X-Auth-Token: indigitus' -d '{"domain":{"name":"NewName"}}' http://192.168.56.101:5000/v3/domains/7e0629d4e31b4c5591a4a10d0b8931df 3. Login using old domain name (copy command from step 1). As a result Alice will be logged in, even though domain name specified is not available anymore. ** Affects: keystone Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1311142 Title: Cache records for get_*_by_name are not invalidated on entity rename Status in OpenStack Identity (Keystone): New Bug description: I have noticed in keystone code, that update_domain and update_project methods in assignment_api Manager invalidate cache for get_*_by_name() using new name, not the old one. 
For example in update_domain() if you are changing domain name from 'OldName' to 'NewName', get_domain_by_name.invalidate() is called with 'NewName' as argument. See: https://github.com/openstack/keystone/blob/1e948043fe2456bd91b398317c71c665d69e9935/keystone/assignment/core.py#L320 As a result the old name can be used in some requests until cache record is expired. For example if you rename a domain, old name can still be used for the authentication (note, caching should be enabled in keystone configuration): 1. Define domain by its name during login: curl -X POST -H 'Content-type: application/json' -d '{"auth":{"identity":{"methods":["password"], "password":{"user":{"name":"Alice","domain":{"name": "OldName"}, "password":"A12345678"}' -v http://192.168.56.101:5000/v3/auth/tokens 2. Change domain name: curl -X PATCH -H 'Content-type: application/json' -H 'X-Auth-Token: indigitus' -d '{"domain":{"name":"NewName"}}' http://192.168.56.101:5000/v3/domains/7e0629d4e31b4c5591a4a10d0b8931df 3. Login using old domain name (copy command from step 1). As a result Alice will be logged in, even though domain name specified is not available anymore. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1311142/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
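The stale-entry behavior is easy to reproduce with a toy name cache. This is a minimal, self-contained sketch (not keystone's dogpile-based cache): the entry that must be invalidated is keyed by the *old* name, so the old name has to be captured before the update is applied:

```python
class DomainManager:
    """Toy name -> record cache illustrating the stale-entry bug."""

    def __init__(self):
        self._domains = {}     # id -> {'id': ..., 'name': ...}
        self._name_cache = {}  # name -> domain record

    def create_domain(self, domain):
        self._domains[domain['id']] = domain

    def get_domain_by_name(self, name):
        # Cache-aside lookup, analogous to keystone's @cache decorator.
        if name not in self._name_cache:
            match = [d for d in self._domains.values() if d['name'] == name]
            if not match:
                raise KeyError(name)
            self._name_cache[name] = match[0]
        return self._name_cache[name]

    def update_domain(self, domain_id, updates):
        # Capture the old name *before* applying the update. Invalidating
        # updates['name'] (the new name, as the bug describes keystone
        # doing) would leave the old-name entry alive until it expires.
        old_name = self._domains[domain_id]['name']
        self._domains[domain_id].update(updates)
        self._name_cache.pop(old_name, None)
```

After a rename with the old-name invalidation in place, looking up the old name fails immediately instead of succeeding until the cache TTL expires.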
[Yahoo-eng-team] [Bug 1304320] Re: neutron port-update takes unavailable and invalid device-ids
** Information type changed from Private Security to Public ** Changed in: ossa Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1304320 Title: neutron port-update takes unavailable and invalid device-ids Status in OpenStack Neutron (virtual network service): Invalid Status in OpenStack Security Advisories: Invalid Bug description: TenantA can port-update device-ids belonging to tenantB or even other tenants $neutron port-update --device_owner=compute:az1 --device_id= Updated port: Expected Behavior: tenant should not be able to update unavailable or invalid device-ids using neutron port-update. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1304320/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1269418] Re: nova rescue doesn't put VM into RESCUE status on vmware (CVE-2014-2573)
Anyone up for a stable/havana backport ? ** Also affects: nova/icehouse Importance: Undecided Status: New ** Changed in: nova/icehouse Status: New => In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1269418 Title: nova rescue doesn't put VM into RESCUE status on vmware (CVE-2014-2573) Status in OpenStack Compute (Nova): Fix Committed Status in OpenStack Compute (nova) havana series: New Status in OpenStack Compute (nova) icehouse series: In Progress Status in The OpenStack VMwareAPI subTeam: In Progress Status in OpenStack Security Advisories: In Progress Bug description: nova rescue of VM on vmWare will create a additional VM ($ORIGINAL_ID- rescue), but after that, the original VM has status ACTIVE. This leads to [root@jhenner-node ~(keystone_admin)]# nova unrescue foo ERROR: Cannot 'unrescue' while instance is in vm_state stopped (HTTP 409) (Request-ID: req-792cabb2-2102-47c5-9b15-96c74a9a4819) the original can be deleted, which then causes leaking of the -rescue VM. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1269418/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1311137] [NEW] GlanceImageService client needs to handle image_id equal to none
Public bug reported: When we use methods "show", "download", "details", "delete" we need to check the value of image_id to avoid an unnecessary call to the api. in: nova/image/glance.py ** Affects: nova Importance: Low Assignee: sahid (sahid-ferdjaoui) Status: New ** Description changed: When we use methods "show", "download", "details", "delete" we need to check the value of image_id to avoid an unnecessary call to the api. in: - nova/glance.py + nova/image/glance.py -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1311137 Title: GlanceImageService client needs to handle image_id equal to none Status in OpenStack Compute (Nova): New Bug description: When we use methods "show", "download", "details", "delete" we need to check the value of image_id to avoid an unnecessary call to the api. in: nova/image/glance.py To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1311137/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
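A hedged sketch of the guard being proposed: the exception name mirrors nova's `ImageNotFound`, but the helper and its wiring here are illustrative assumptions, not the actual nova/image/glance.py code:

```python
class ImageNotFound(Exception):
    pass

def _fetch_from_glance(image_id):
    # Stand-in for the real glanceclient call.
    return {'id': image_id, 'status': 'active'}

def show(image_id):
    # Reject a falsy id up front instead of issuing a doomed API call;
    # the same guard would apply to download, details, and delete.
    if not image_id:
        raise ImageNotFound(image_id)
    return _fetch_from_glance(image_id)
```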
[Yahoo-eng-team] [Bug 1308984] Re: Floating IP addresses ordered in a weird way
** Also affects: openstack-community Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1308984 Title: Floating IP addresses ordered in a weird way Status in OpenStack Dashboard (Horizon): New Status in OpenStack Community Project: New Bug description: The floating ip:s are ordered according to UUID instead of IP, more information in the patch. --- commit 83a10bf02a5079513741039860208e277e1d12e4 Author: Ian Kumlien Date: Thu Apr 17 13:49:32 2014 +0200 Sorting floating IP:s according to IP. While using alot of manually allocated floating ip:s we wondered why the IP list wasn't sorted. While looking at it we found that the UI actually does sort the IP but according to the UUID instead of the actual IP address. This change fixes this so that it's sorted according to IP. Found-By: Marko Bocevski diff --git a/openstack_dashboard/dashboards/project/access_and_security/floating_ips/workflows.py b/openstack_dashboard/dashboards/project/access_and_security/floating_ips/workflows.py index c4ebbd1..d884dee 100644 --- a/openstack_dashboard/dashboards/project/access_and_security/floating_ips/workflows.py +++ b/openstack_dashboard/dashboards/project/access_and_security/floating_ips/workflows.py @@ -69,7 +69,7 @@ class AssociateIPAction(workflows.Action): exceptions.handle(self.request, _('Unable to retrieve floating IP addresses.'), redirect=redirect) -options = sorted([(ip.id, ip.ip) for ip in ips if not ip.port_id]) +options = sorted([(ip.ip, ip.ip) for ip in ips if not ip.port_id]) if options: options.insert(0, ("", _("Select an IP address"))) else: To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1308984/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : 
https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1311107] [NEW] fails to fetch meta-data from cloudstack
Public bug reported: Downloaded the latest ubuntu 14.04 cloud-image from cloud- images.ubuntu.com with cloud-init 0.7.5. cloud-init tries to get the meta-data from CloudStack via http://10.1.209.130/latest/meta-data this url doesn't exist, but the "http://10.1.209.130/latest/meta-data/"; does, so changing the code in this file fixes the error: /usr/lib/python2.7/dist-packages/cloudinit/ec2_utils.py 165 def get_instance_metadata(api_version='latest', 166 metadata_address='http://169.254.169.254', 167 ssl_details=None, timeout=5, retries=5): 168 md_url = url_helper.combine_url(metadata_address, api_version) 169 md_url = url_helper.combine_url(md_url, 'meta-data/') 170 caller = functools.partial(util.read_file_or_url, 171ssl_details=ssl_details, timeout=timeout, 172retries=retries) on line 169, the trailing slash in the meta-data string makes this work. entries from logfile before changing ec2_utils.py: 2014-04-22 08:24:24,781 - importer.py[DEBUG]: Looking for modules ['ubuntu', 'cloudinit.distros.ubuntu'] that have attributes ['Distro'] 2014-04-22 08:24:24,781 - importer.py[DEBUG]: Failed at attempted import of 'ubuntu' due to: No module named ubuntu 2014-04-22 08:24:24,782 - importer.py[DEBUG]: Found ubuntu with attributes ['Distro'] in ['cloudinit.distros.ubuntu'] 2014-04-22 08:24:24,782 - stages.py[DEBUG]: Using distro class 2014-04-22 08:24:24,782 - __init__.py[DEBUG]: Looking for for data source in: ['CloudStack', 'NoCloud'], via packages ['', 'cloudinit.sources'] that matches dependencies ['FILESYSTEM', 'NETWORK'] 2014-04-22 08:24:24,782 - importer.py[DEBUG]: Looking for modules ['DataSourceCloudStack', 'cloudinit.sources.DataSourceCloudStack'] that have attributes ['get_datasource_list'] 2014-04-22 08:24:24,783 - importer.py[DEBUG]: Failed at attempted import of 'DataSourceCloudStack' due to: No module named DataSourceCloudStack 2014-04-22 08:24:24,783 - importer.py[DEBUG]: Found DataSourceCloudStack with attributes ['get_datasource_list'] in 
['cloudinit.sources.DataSourceCloudStack'] 2014-04-22 08:24:24,784 - importer.py[DEBUG]: Looking for modules ['DataSourceNoCloud', 'cloudinit.sources.DataSourceNoCloud'] that have attributes ['get_datasource_list'] 2014-04-22 08:24:24,784 - importer.py[DEBUG]: Failed at attempted import of 'DataSourceNoCloud' due to: No module named DataSourceNoCloud 2014-04-22 08:24:24,784 - importer.py[DEBUG]: Found DataSourceNoCloud with attributes ['get_datasource_list'] in ['cloudinit.sources.DataSourceNoCloud'] 2014-04-22 08:24:24,784 - __init__.py[DEBUG]: Searching for data source in: ['DataSourceCloudStack', 'DataSourceNoCloudNet'] 2014-04-22 08:24:24,784 - __init__.py[DEBUG]: Seeing if we can get any data from 2014-04-22 08:24:24,785 - DataSourceCloudStack.py[DEBUG]: Using /var/lib/dhcp lease directory 2014-04-22 08:24:24,785 - DataSourceCloudStack.py[DEBUG]: Found DHCP identifier 10.1.209.130 2014-04-22 08:24:24,785 - DataSourceCloudStack.py[DEBUG]: Found DHCP identifier 10.1.209.130 2014-04-22 08:24:24,785 - util.py[DEBUG]: Reading from /var/lib/cloud/seed/cs/meta-data (quiet=False) 2014-04-22 08:24:24,785 - url_helper.py[DEBUG]: [0/1] open 'http://10.1.209.130//latest/meta-data/instance-id' with {'url': 'http://10.1.209.130//latest/meta-data/instance-id', 'headers': {'User-Agent': 'Cloud-Init/0.7.5'}, 'allow_redirects': True, 'method': 'GET', 'timeout': 50.0} configuration 2014-04-22 08:24:24,849 - url_helper.py[DEBUG]: Read from http://10.1.209.130//latest/meta-data/instance-id (200, 36b) after 1 attempts 2014-04-22 08:24:24,849 - DataSourceCloudStack.py[DEBUG]: Using metadata source: 'http://10.1.209.130//latest/meta-data/instance-id' 2014-04-22 08:24:24,850 - url_helper.py[DEBUG]: [0/6] open 'http://10.1.209.130/latest/user-data' with {'url': 'http://10.1.209.130/latest/user-data', 'headers': {'User-Agent': 'Cloud-Init/0.7.5'}, 'allow_redirects': True, 'method': 'GET', 'timeout': 5.0} configuration 2014-04-22 08:24:24,853 - url_helper.py[DEBUG]: Read from 
http://10.1.209.130/latest/user-data (200, 0b) after 1 attempts 2014-04-22 08:24:24,854 - url_helper.py[DEBUG]: [0/6] open 'http://10.1.209.130/latest/meta-data' with {'url': 'http://10.1.209.130/latest/meta-data', 'headers': {'User-Agent': 'Cloud-Init/0.7.5'}, 'allow_redirects': True, 'method': 'GET', 'timeout': 5.0} configuration 2014-04-22 08:24:24,857 - url_helper.py[DEBUG]: Please wait 1 seconds while we wait to try again 2014-04-22 08:24:25,858 - url_helper.py[DEBUG]: [1/6] open 'http://10.1.209.130/latest/meta-data' with {'url': 'http://10.1.209.130/latest/meta-data', 'headers': {'User-Agent': 'Cloud-Init/0.7.5'}, 'allow_redirects': True, 'method': 'GET', 'timeout': 5.0} configuration 2014-04-22 08:24:25,861 - url_helper.py[DEBUG]: Please wait 1 seconds while we wait to try again 2014-04-22 08:24:26,863 - url_helper.py[DEBUG]: [2/6
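The fix hinges on how the metadata URL is assembled: the CloudStack server only answers the directory-style path ending in '/'. A minimal sketch of why the trailing slash survives the join (a toy join function, not cloud-init's actual url_helper implementation):

```python
def combine_url(base, add_on):
    # naive join mirroring the idea of url_helper.combine_url: normalize
    # the seam but keep any trailing slash on the added component
    return base.rstrip('/') + '/' + add_on.lstrip('/')

md_url = combine_url('http://169.254.169.254', 'latest')
# without the trailing slash (the failing case in the report):
print(combine_url(md_url, 'meta-data'))   # http://169.254.169.254/latest/meta-data
# with the trailing slash (the fix on line 169):
print(combine_url(md_url, 'meta-data/'))  # http://169.254.169.254/latest/meta-data/
```

The two results differ only in the final '/', which is exactly the difference between the 404 and the working request in the log above.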
[Yahoo-eng-team] [Bug 1311097] [NEW] can't set MTU size for router interface
Public bug reported: When working with jumbo frames you can set the MTU in nova.conf for the instances' tap devices, and you can set it in neutron's plugin.ini for br-int. This allows you to work with jumbo frames inside a network. However, if you want to work with jumbo frames across networks, there is no way to configure this on a router/l3-agent, and you have to manually set the MTU on the NICs inside the namespaces. ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1311097 Title: can't set MTU size for router interface Status in OpenStack Neutron (virtual network service): New Bug description: When working with jumbo frames you can set the MTU in nova.conf for the instances' tap devices, and you can set it in neutron's plugin.ini for br-int. This allows you to work with jumbo frames inside a network. However, if you want to work with jumbo frames across networks, there is no way to configure this on a router/l3-agent, and you have to manually set the MTU on the NICs inside the namespaces. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1311097/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
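The manual workaround the report describes looks roughly like this (requires root on the network node; the `<uuid>` and port names are illustrative placeholders, not values from this report):

```shell
# Find the router's network namespace
ip netns list                                        # e.g. qrouter-<uuid>

# Inspect the interfaces inside that namespace (qr-* / qg-* ports)
ip netns exec qrouter-<uuid> ip link

# Manually raise the MTU on a router interface for jumbo frames
ip netns exec qrouter-<uuid> ip link set qr-<port-id> mtu 9000
```

This has to be repeated per router and does not survive agent restarts, which is why a configurable option on the l3-agent is being requested.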
[Yahoo-eng-team] [Bug 1311082] [NEW] The details of libvirt events are not outputted to the log when nova-compute handles libvirt events.
Public bug reported: Currently, the details of libvirt events are not written to the log when nova-compute handles libvirt events. The details of libvirt events should be written to the log for troubleshooting. Through the following blueprint, nova has been able to catch libvirt events and synchronize the VM state with the 'actual' state immediately: Compute Driver Events https://blueprints.launchpad.net/nova/+spec/compute-driver-events When nova-compute handles libvirt events, it logs which libvirt events (EVENT_LIFECYCLE_STARTED, EVENT_LIFECYCLE_STOPPED, EVENT_LIFECYCLE_PAUSED, EVENT_LIFECYCLE_RESUMED) are received, but the details (cause/reason) of the libvirt events are not logged. For example, in the case of VIR_DOMAIN_EVENT_STOPPED (= EVENT_LIFECYCLE_STOPPED), the details are as follows(*1):
VIR_DOMAIN_EVENT_STOPPED_SHUTDOWN = 0 (Normal shutdown)
VIR_DOMAIN_EVENT_STOPPED_DESTROYED = 1 (Forced poweroff from host)
VIR_DOMAIN_EVENT_STOPPED_CRASHED = 2 (Guest crashed)
VIR_DOMAIN_EVENT_STOPPED_MIGRATED = 3 (Migrated off to another host)
VIR_DOMAIN_EVENT_STOPPED_SAVED = 4 (Saved to a state file)
VIR_DOMAIN_EVENT_STOPPED_FAILED = 5 (Host emulator/mgmt failed)
VIR_DOMAIN_EVENT_STOPPED_FROM_SNAPSHOT = 6 (Offline snapshot loaded)
VIR_DOMAIN_EVENT_STOPPED_LAST = 7
*1: http://libvirt.org/html/libvirt-libvirt.html
If the details of libvirt events are written to the log, this will be useful for troubleshooting. ** Affects: nova Importance: Undecided Assignee: Takashi NATSUME (natsume-takashi) Status: New ** Changed in: nova Assignee: (unassigned) => Takashi NATSUME (natsume-takashi) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1311082 Title: The details of libvirt events are not outputted to the log when nova-compute handles libvirt events.
Status in OpenStack Compute (Nova): New Bug description: Currently, the details of libvirt events are not written to the log when nova-compute handles libvirt events. The details of libvirt events should be written to the log for troubleshooting. Through the following blueprint, nova has been able to catch libvirt events and synchronize the VM state with the 'actual' state immediately: Compute Driver Events https://blueprints.launchpad.net/nova/+spec/compute-driver-events When nova-compute handles libvirt events, it logs which libvirt events (EVENT_LIFECYCLE_STARTED, EVENT_LIFECYCLE_STOPPED, EVENT_LIFECYCLE_PAUSED, EVENT_LIFECYCLE_RESUMED) are received, but the details (cause/reason) of the libvirt events are not logged. For example, in the case of VIR_DOMAIN_EVENT_STOPPED (= EVENT_LIFECYCLE_STOPPED), the details are as follows(*1):
VIR_DOMAIN_EVENT_STOPPED_SHUTDOWN = 0 (Normal shutdown)
VIR_DOMAIN_EVENT_STOPPED_DESTROYED = 1 (Forced poweroff from host)
VIR_DOMAIN_EVENT_STOPPED_CRASHED = 2 (Guest crashed)
VIR_DOMAIN_EVENT_STOPPED_MIGRATED = 3 (Migrated off to another host)
VIR_DOMAIN_EVENT_STOPPED_SAVED = 4 (Saved to a state file)
VIR_DOMAIN_EVENT_STOPPED_FAILED = 5 (Host emulator/mgmt failed)
VIR_DOMAIN_EVENT_STOPPED_FROM_SNAPSHOT = 6 (Offline snapshot loaded)
VIR_DOMAIN_EVENT_STOPPED_LAST = 7
*1: http://libvirt.org/html/libvirt-libvirt.html
If the details of libvirt events are written to the log, this will be useful for troubleshooting. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1311082/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
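A sketch of what the requested logging could look like (the helper name is hypothetical, not nova's code; the detail-to-reason mapping is the libvirt documentation quoted above):

```python
# Map the VIR_DOMAIN_EVENT_STOPPED "detail" code to a human-readable
# reason suitable for a log line.
STOPPED_DETAILS = {
    0: 'SHUTDOWN (normal shutdown)',
    1: 'DESTROYED (forced poweroff from host)',
    2: 'CRASHED (guest crashed)',
    3: 'MIGRATED (migrated off to another host)',
    4: 'SAVED (saved to a state file)',
    5: 'FAILED (host emulator/mgmt failed)',
    6: 'FROM_SNAPSHOT (offline snapshot loaded)',
}

def describe_stopped_detail(detail):
    # fall back gracefully for codes added by newer libvirt versions
    return STOPPED_DETAILS.get(detail, 'UNKNOWN (%d)' % detail)

# e.g. LOG.debug("EVENT_LIFECYCLE_STOPPED: %s", describe_stopped_detail(detail))
```

With such a mapping, the log would distinguish a guest crash (detail 2) from a normal shutdown (detail 0), which is exactly the troubleshooting gap the report describes.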
[Yahoo-eng-team] [Bug 1311047] [NEW] Non-escaped string in JS breaks "launch instance" form translated in French
Public bug reported: I noticed this in Icehouse with Horizon translated in French: the "Launch instance" form is broken due to a syntax error in the quota handling javascript when strings contain a single quote. Patch on the way. ** Affects: horizon Importance: Undecided Assignee: Adrien Cunin (adri2000) Status: In Progress ** Changed in: horizon Assignee: (unassigned) => Adrien Cunin (adri2000) ** Changed in: horizon Status: New => In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1311047 Title: Non-escaped string in JS breaks "launch instance" form translated in French Status in OpenStack Dashboard (Horizon): In Progress Bug description: I noticed this in Icehouse with Horizon translated in French: the "Launch instance" form is broken due to a syntax error in the quota handling javascript when strings contain a single quote. Patch on the way. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1311047/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
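The failure mode can be illustrated with a few lines (the helper below is a hypothetical sketch, not Horizon's actual patch): a French translation such as "Taille de l'instance" contains a single quote, which terminates a single-quoted JavaScript string literal early unless it is escaped before being embedded in the template.

```python
def escapejs(s):
    # minimal sketch: escape backslashes first, then both quote characters,
    # so the string is safe inside a quoted JS literal
    return s.replace('\\', '\\\\').replace("'", "\\'").replace('"', '\\"')

broken = "var label = '%s';" % "Taille de l'instance"   # SyntaxError in the browser
fixed = "var label = '%s';" % escapejs("Taille de l'instance")
```

Django ships an `escapejs` template filter for exactly this purpose; the point here is only why the unescaped quote breaks the quota-handling script.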
[Yahoo-eng-team] [Bug 1311040] [NEW] Subnet option to disable dns server
Public bug reported: Multiple NIC/subnet may attach to a VM instance, if both subnets provide dns servers, only the last one will be used (overrides the previous one), so we need a method to disable dns server on some subnets. ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1311040 Title: Subnet option to disable dns server Status in OpenStack Neutron (virtual network service): New Bug description: Multiple NIC/subnet may attach to a VM instance, if both subnets provide dns servers, only the last one will be used (overrides the previous one), so we need a method to disable dns server on some subnets. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1311040/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1310815] Re: bad django conf example
Also affects horizon. openstack/horizon doc/source/topics/deployment.rst ** Also affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1310815 Title: bad django conf example Status in OpenStack Dashboard (Horizon): New Status in OpenStack Manuals: Triaged Bug description: With Django 1.6, the setting is wrong. You need to use SESSION_ENGINE = 'django.contrib.sessions.backends.db' as described on: https://docs.djangoproject.com/en/1.6/ref/settings/#std:setting- SESSION_ENGINE If i use the one described on this page (SESSION_ENGINE = 'django.core.cache.backends.db.DatabaseCache'), i got a 500 error and i have this in logs: File ".../django-1.6/django/core/handlers/base.py", line 90, in get_response response = middleware_method(request) File ".../django-1.6/django/contrib/sessions/middleware.py", line 10, in process_request engine = import_module(settings.SESSION_ENGINE) File ".../django-1.6/django/utils/importlib.py", line 40, in import_module __import__(name) ImportError: No module named DatabaseCache greetings, Thomas --- Built: 2014-04-07T07:45:00 00:00 git SHA: b7557a0bb682410c86f8022eb07980840d82c8cf URL: http://docs.openstack.org/havana/install-guide/install/apt/content/dashboard-session-database.html source File: file:/home/jenkins/workspace/openstack-install-deploy-guide-ubuntu/doc/common/section_dashboard_sessions.xml xml:id: dashboard-session-database To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1310815/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
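For reference, the working configuration quoted in the report, as the relevant lines would appear in Horizon's settings file (a fragment only; `SESSION_ENGINE` must name a session engine module, not a cache backend class):

```python
# local_settings.py — database-backed sessions on Django 1.6
# Correct: a session engine module
SESSION_ENGINE = 'django.contrib.sessions.backends.db'

# Wrong: a cache backend class, not a session engine — importing it as a
# module fails with "ImportError: No module named DatabaseCache"
# SESSION_ENGINE = 'django.core.cache.backends.db.DatabaseCache'
```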
[Yahoo-eng-team] [Bug 1207433] Re: When launching a new instance, flavor details are not changed when navigating the list with arrow keys
Marcos, did you look at comment #3? For now this is the way Firefox works. ** Changed in: horizon Status: Confirmed => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1207433 Title: When launching a new instance, flavor details are not changed when navigating the list with arrow keys Status in OpenStack Dashboard (Horizon): Invalid Bug description: Information in the flavor details pane is not refreshed when navigating the flavor drop-down box with arrow keys instead of the mouse. To reproduce: 1. Attempt to launch an instance 2. Navigate to the flavor drop-down box with the tab key, and change flavors with the arrow keys Expected results: 3. Information in the flavor details pane should be updated to reflect the currently selected flavor Actual results: 3. The flavor details pane retains the information from the default flavor (m1.tiny, on my system) rather than the flavor in the drop-down, until you tab or click out of the flavor drop-down. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1207433/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1225662] Re: Log in page: focus should be given to user name field
It looks like the fix in the django openstack auth project was sufficient to handle this. I'm going to close the Horizon task. ** Changed in: horizon Status: Confirmed => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1225662 Title: Log in page: focus should be given to user name field Status in Django OpenStack Auth: Fix Committed Status in OpenStack Dashboard (Horizon): Invalid Bug description: When the log in is diplayed the "user name" field should get the focus automatically. To manage notifications about this bug go to: https://bugs.launchpad.net/django-openstack-auth/+bug/1225662/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1311019] [NEW] l2-population : linuxbridge agent should use IpNeighCommand
Public bug reported: ip_lib is now able to add neighboring entries. L2population should use this new feature once this patch is merged: https://review.openstack.org/#/c/89522/ ** Affects: neutron Importance: Undecided Assignee: Mathieu Rohon (mathieu-rohon) Status: New ** Tags: l2-pop ml2 ** Changed in: neutron Assignee: (unassigned) => Mathieu Rohon (mathieu-rohon) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1311019 Title: l2-population : linuxbridge agent should use IpNeighCommand Status in OpenStack Neutron (virtual network service): New Bug description: ip_lib is now able to add neighboring entries. L2population should use this new feature once this patch is merged: https://review.openstack.org/#/c/89522/ To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1311019/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1311010] [NEW] Horizon does not include Swift admin info
Public bug reported: When I log into Horizon as the admin user on our pure-upstream system, I can see data on the number of instances, networks and so on. However there is no data on Swift. On this particular system I know that Swift is full, however there's no sign of that via the console. ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1311010 Title: Horizon does not include Swift admin info Status in OpenStack Dashboard (Horizon): New Bug description: When I log into Horizon as the admin user on our pure-upstream system, I can see data on the number of instances, networks and so on. However there is no data on Swift. On this particular system I know that Swift is full, however there's no sign of that via the console. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1311010/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1311002] [NEW] ip-lib : "ip neigh" command should use "replace" instead of "add"
Public bug reported: With l2-population and the linuxbridge agent, we use the "ip neigh" command to add ARP responder entries; this command has been generalized by this patch: https://review.openstack.org/#/c/88442/3 But using "ip neigh add" leads to a bug when restoring a previous entry: https://bugs.launchpad.net/neutron/+bug/1282662 We should use "ip neigh replace" instead of "ip neigh add". ** Affects: neutron Importance: Undecided Assignee: Mathieu Rohon (mathieu-rohon) Status: New ** Changed in: neutron Assignee: (unassigned) => Mathieu Rohon (mathieu-rohon) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1311002 Title: ip-lib : "ip neigh" command should use "replace" instead of "add" Status in OpenStack Neutron (virtual network service): New Bug description: With l2-population and the linuxbridge agent, we use the "ip neigh" command to add ARP responder entries; this command has been generalized by this patch: https://review.openstack.org/#/c/88442/3 But using "ip neigh add" leads to a bug when restoring a previous entry: https://bugs.launchpad.net/neutron/+bug/1282662 We should use "ip neigh replace" instead of "ip neigh add". To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1311002/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
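The difference can be sketched with a tiny command builder (the helper name is hypothetical, not neutron's ip_lib API): "ip neigh replace" creates the entry if absent and updates it if present, so it is idempotent, whereas "ip neigh add" fails with EEXIST when the entry already exists, which is what breaks restoring a previously-programmed entry.

```python
def neigh_cmd(ip, lladdr, dev, action='replace'):
    # build the argv for programming a static neighbor (ARP) entry;
    # defaulting to 'replace' makes re-programming an existing entry safe
    return ['ip', 'neigh', action, ip, 'lladdr', lladdr, 'dev', dev]

# e.g. neigh_cmd('10.0.0.5', 'fa:16:3e:aa:bb:cc', 'vxlan-100')
```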
[Yahoo-eng-team] [Bug 1163569] Re: security groups don't work with vip and ovs plugin
** Also affects: ossa Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1163569 Title: security groups don't work with vip and ovs plugin Status in OpenStack Neutron (virtual network service): Triaged Status in OpenStack Security Advisories: New Bug description: http://codepad.org/xU8G4s00 I pinged nachi and he suggested to try using: interface_driver = quantum.agent.linux.interface.BridgeInterfaceDriver But after setting this it seems like the vip does not work at all. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1163569/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1310966] [NEW] nova network floating IP has direct DB access
Public bug reported: nova-network with floating IPs is currently broken due to a direct DB access 2014-04-22 08:04:32.395 ESC[01;31mERROR oslo.messaging.rpc.dispatcher [ESC[01;36mreq-8fa8da3a-2e61-47e9-a6c6-68f598e979ad ESC[00;36mTestServerAdvancedOps-228418898 TestServerAdvancedOps-892915532ESC[01;31m] ESC[01;35mESC[01;31mException during message handling: nova-computeESC[00m Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply incoming.message)) File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch return self._do_dispatch(endpoint, method, ctxt, args) File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch result = getattr(endpoint, method)(ctxt, **new_args) File "/opt/stack/nova/nova/exception.py", line 88, in wrapped payload) File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__ six.reraise(self.type_, self.value, self.tb) File "/opt/stack/nova/nova/exception.py", line 71, in wrapped return f(self, context, *args, **kw) File "/opt/stack/nova/nova/compute/manager.py", line 276, in decorated_function pass File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__ six.reraise(self.type_, self.value, self.tb) File "/opt/stack/nova/nova/compute/manager.py", line 262, in decorated_function return function(self, context, *args, **kwargs) File "/opt/stack/nova/nova/compute/manager.py", line 329, in decorated_function function(self, context, *args, **kwargs) File "/opt/stack/nova/nova/compute/manager.py", line 250, in decorated_function migration.instance_uuid, exc_info=True) File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__ six.reraise(self.type_, self.value, self.tb) File "/opt/stack/nova/nova/compute/manager.py", line 237, in decorated_function return function(self, context, *args, **kwargs) File 
"/opt/stack/nova/nova/compute/manager.py", line 305, in decorated_function e, sys.exc_info()) File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__ six.reraise(self.type_, self.value, self.tb) File "/opt/stack/nova/nova/compute/manager.py", line 292, in decorated_function return function(self, context, *args, **kwargs) File "/opt/stack/nova/nova/compute/manager.py", line 3472, in resize_instance migration_p) File "/opt/stack/nova/nova/network/api.py", line 45, in wrapped return func(self, context, *args, **kwargs) File "/opt/stack/nova/nova/network/api.py", line 492, in migrate_instance_start self._get_floating_ip_addresses(context, instance) File "/opt/stack/nova/nova/network/api.py", line 474, in _get_floating_ip_addresses instance['uuid']) File "/opt/stack/nova/nova/db/api.py", line 700, in instance_floating_address_get_all return IMPL.instance_floating_address_get_all(context, instance_uuid) File "/opt/stack/nova/nova/cmd/compute.py", line 52, in __call__ raise exception.DBNotAllowed('nova-compute') DBNotAllowed: nova-compute ** Affects: nova Importance: High Status: New ** Changed in: nova Importance: Undecided => High ** Description changed: + nova-network with floating IPs is currently broken due to a direct DB + access + 2014-04-22 08:04:32.395 ESC[01;31mERROR oslo.messaging.rpc.dispatcher [ESC[01;36mreq-8fa8da3a-2e61-47e9-a6c6-68f598e979ad ESC[00;36mTestServerAdvancedOps-228418898 TestServerAdvancedOps-892915532ESC[01;31m] ESC[01;35mESC[01;31mException during message handling: nova-computeESC[00m Traceback (most recent call last): - File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply - incoming.message)) - File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch - return self._do_dispatch(endpoint, method, ctxt, args) - File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch - 
result = getattr(endpoint, method)(ctxt, **new_args) - File "/opt/stack/nova/nova/exception.py", line 88, in wrapped - payload) - File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__ - six.reraise(self.type_, self.value, self.tb) - File "/opt/stack/nova/nova/exception.py", line 71, in wrapped - return f(self, context, *args, **kw) - File "/opt/stack/nova/nova/compute/manager.py", line 276, in decorated_function - pass - File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__ - six.reraise(self.type_, self.value, self.tb) - File "/opt/stack/nova/nova/compute/manager.py", line 262, in decorated_function - return function(self, context, *args, **kwargs) - File "/opt/stack/nova/nova/compute/manager.py",
[Yahoo-eng-team] [Bug 1310952] [NEW] Not assignment variable 'image' in api.v2.image_data
Public bug reported: Raise UnboundLocalError in 'try expect' block while catching exception using a var 'image'. File "/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 393, in assertRaises self.assertThat(our_callable, matcher) File "/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 404, in assertThat mismatch_error = self._matchHelper(matchee, matcher, message, verbose) File "/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 454, in _matchHelper mismatch = matcher.match(matchee) File "/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py", line 108, in match mismatch = self.exception_matcher.match(exc_info) File "/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_higherorder.py", line 62, in match mismatch = matcher.match(matchee) File "/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 385, in match reraise(*matchee) File "/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py", line 101, in match result = matchee() File "/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 902, in __call__ return self._callable_object(*self._args, **self._kwargs) File "glance/common/utils.py", line 437, in wrapped return func(self, req, *args, **kwargs) File "glance/api/v2/image_data.py", line 120, in upload self._restore(image_repo, image) UnboundLocalError: local variable 'image' referenced before assignment ** Affects: glance Importance: Undecided Assignee: Sergey Nikitin (snikitin) Status: In Progress ** Changed in: glance Assignee: (unassigned) => Sergey Nikitin (snikitin) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. 
https://bugs.launchpad.net/bugs/1310952 Title: Not assignment variable 'image' in api.v2.image_data Status in OpenStack Image Registry and Delivery Service (Glance): In Progress Bug description: Raise UnboundLocalError in 'try expect' block while catching exception using a var 'image'. File "/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 393, in assertRaises self.assertThat(our_callable, matcher) File "/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 404, in assertThat mismatch_error = self._matchHelper(matchee, matcher, message, verbose) File "/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 454, in _matchHelper mismatch = matcher.match(matchee) File "/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py", line 108, in match mismatch = self.exception_matcher.match(exc_info) File "/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_higherorder.py", line 62, in match mismatch = matcher.match(matchee) File "/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 385, in match reraise(*matchee) File "/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py", line 101, in match result = matchee() File "/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 902, in __call__ return self._callable_object(*self._args, **self._kwargs) File "glance/common/utils.py", line 437, in wrapped return func(self, req, *args, **kwargs) File "glance/api/v2/image_data.py", line 120, in upload self._restore(image_repo, image) UnboundLocalError: local variable 'image' referenced before assignment To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1310952/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : 
yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
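The failure mode behind the traceback can be reproduced in a few lines (names are illustrative, not Glance's actual code): when the exception is raised before "image" is assigned, the except block's reference to "image" raises UnboundLocalError instead of handling the original error.

```python
def upload(fail_early=True):
    try:
        if fail_early:
            raise ValueError('boom before assignment')
        image = object()
        raise ValueError('boom after assignment')
    except ValueError:
        # UnboundLocalError here when the error occurred before
        # "image" was bound
        return image

# The usual fix is to bind the variable before the try block
# (e.g. image = None) and guard the handler accordingly.
```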