[Bug 1198251] [NEW] zookeeper package should include logrotate config for /var/log/zookeeper
Public bug reported: Currently zookeeper's log files grow without limit, which can cause a machine to eventually run out of disk space. The zookeeper package should include a logrotate script or configure log4j (whichever is most appropriate) to rotate the log files. We currently use the following logrotate config:

/var/log/zookeeper/*.log {
    weekly
    rotate 52
    copytruncate
    delaycompress
    compress
    notifempty
    missingok
}

** Affects: zookeeper (Ubuntu)
     Importance: Undecided
         Status: New

-- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to zookeeper in Ubuntu. https://bugs.launchpad.net/bugs/1198251 Title: zookeeper package should include logrotate config for /var/log/zookeeper To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/zookeeper/+bug/1198251/+subscriptions -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 1185441] [NEW] cups-browsed should enable 'BrowseRemoteProtocols = dnssd cups'
Public bug reported: Currently /etc/cups/cups-browsed.conf does not include 'cups' as a BrowseRemoteProtocols option. Since some older CUPS servers do not support dnssd, enabling cups as well seems to be a good default. I do not believe that there are any additional security implications. See attached patch.

** Affects: cups-filters (Ubuntu)
     Importance: Undecided
         Status: New

** Patch added: "Enable CUPS and DNSSD browser protocols" https://bugs.launchpad.net/bugs/1185441/+attachment/3689913/+files/cups-filters.patch

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1185441 Title: cups-browsed should enable 'BrowseRemoteProtocols = dnssd cups' To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/cups-filters/+bug/1185441/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
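As a sketch, the change described above amounts to one directive in /etc/cups/cups-browsed.conf (the surrounding file contents are an assumption; the directive name and values are taken from the report):

```
# Hypothetical excerpt of /etc/cups/cups-browsed.conf after the proposed change:
# browse print queues advertised via both DNS-SD and the legacy CUPS protocol.
BrowseRemoteProtocols dnssd cups
```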
[Bug 1161338] Re: python-nova should depend on ebtables
** Tags added: canonistack -- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to nova in Ubuntu. https://bugs.launchpad.net/bugs/1161338 Title: python-nova should depend on ebtables To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1161338/+subscriptions -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 1104584] [NEW] cinderclient fails when there are more than one volume endpoint
Public bug reported: After upgrading my installation from Folsom to Grizzly I received the error message below when attempting to create a volume using the EC2 API (I did not verify with the Nova API). I have configured two regions, each with its own cinder endpoint. This currently works for all other services except cinder. The endpoints are seen as ambiguous because the cinder client (or nova's use of the client) does not take the region into consideration: the client uses a servicetype:servicename:endpointtype tuple to determine whether an endpoint is unique.

/var/log/nova/nova-api.log:

2013-01-24 22:48:26 23673 TRACE nova.api.ec2 Traceback (most recent call last):
2013-01-24 22:48:26 23673 TRACE nova.api.ec2   File "/usr/lib/python2.7/dist-packages/nova/api/ec2/__init__.py", line 486, in __call__
2013-01-24 22:48:26 23673 TRACE nova.api.ec2     result = api_request.invoke(context)
2013-01-24 22:48:26 23673 TRACE nova.api.ec2   File "/usr/lib/python2.7/dist-packages/nova/api/ec2/apirequest.py", line 79, in invoke
2013-01-24 22:48:26 23673 TRACE nova.api.ec2     result = method(context, **args)
2013-01-24 22:48:26 23673 TRACE nova.api.ec2   File "/usr/lib/python2.7/dist-packages/nova/api/ec2/cloud.py", line 790, in create_volume
2013-01-24 22:48:26 23673 TRACE nova.api.ec2     **create_kwargs)
2013-01-24 22:48:26 23673 TRACE nova.api.ec2   File "/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 217, in create
2013-01-24 22:48:26 23673 TRACE nova.api.ec2     item = cinderclient(context).volumes.create(size, **kwargs)
2013-01-24 22:48:26 23673 TRACE nova.api.ec2   File "/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 65, in cinderclient
2013-01-24 22:48:26 23673 TRACE nova.api.ec2     endpoint_type=endpoint_type)
2013-01-24 22:48:26 23673 TRACE nova.api.ec2   File "/usr/lib/python2.7/dist-packages/cinderclient/service_catalog.py", line 75, in url_for
2013-01-24 22:48:26 23673 TRACE nova.api.ec2     endpoints=matching_endpoints)
2013-01-24 22:48:26 23673 TRACE nova.api.ec2 
AmbiguousEndpoints: AmbiguousEndpoints: [{u'adminURL': u'http://10.55.58.1:8776/v1/df473f958e4f47949282696966e58f49', u'region': u'lcy01', u'id': u'a60223bd41df4cdf8fb28dcdffe5adad', 'serviceName': u'cinder', u'internalURL': u'http://10.55.58.1:8776/v1/df473f958e4f47949282696966e58f49', u'publicURL': u'https://cinder-lcy01.internal/v1/df473f958e4f47949282696966e58f49'}, {u'adminURL': u'http://10.55.62.1:8776/v1/df473f958e4f47949282696966e58f49', u'region': u'lcy02', u'id': u'14de3dace2284d4eaf95576ae7e2e40d', 'serviceName': u'cinder', u'internalURL': u'http://10.55.62.1:8776/v1/df473f958e4f47949282696966e58f49', u'publicURL': u'https://cinder-lcy02.internal/v1/df473f958e4f47949282696966e58f49'}]

The endpoints prettyprinted:

[
 {'adminURL': 'http://10.55.58.1:8776/v1/df473f958e4f47949282696966e58f49',
  'region': 'lcy01',
  'id': 'a60223bd41df4cdf8fb28dcdffe5adad',
  'serviceName': 'cinder',
  'internalURL': 'http://10.55.58.1:8776/v1/df473f958e4f47949282696966e58f49',
  'publicURL': 'https://cinder-lcy01.internal/v1/df473f958e4f47949282696966e58f49'},
 {'adminURL': 'http://10.55.62.1:8776/v1/df473f958e4f47949282696966e58f49',
  'region': 'lcy02',
  'id': '14de3dace2284d4eaf95576ae7e2e40d',
  'serviceName': 'cinder',
  'internalURL': 'http://10.55.62.1:8776/v1/df473f958e4f47949282696966e58f49',
  'publicURL': 'https://cinder-lcy02.internal/v1/df473f958e4f47949282696966e58f49'}
]

Workaround:
1. I created another service name and service type named 'fakecinder' in keystone.
2. I removed the second region's 'volume' endpoint.
3. I then added the following to the second region's nova.conf: 'cinder_catalog_info=fakecinder:fakecinder:publicURL'
4. Restarted nova-api.
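The failure mode can be sketched with illustrative data: a lookup keyed only on the service name (ignoring the region) finds both regional endpoints and must give up. This is a simplified, hypothetical sketch, not the actual cinderclient code:

```python
# Illustrative only: why a region-blind endpoint lookup is ambiguous.
# Both regions expose 'cinder', so filtering without the region finds two.
endpoints = [
    {"region": "lcy01", "serviceName": "cinder",
     "publicURL": "https://cinder-lcy01.internal/v1/TENANT"},
    {"region": "lcy02", "serviceName": "cinder",
     "publicURL": "https://cinder-lcy02.internal/v1/TENANT"},
]

def url_for(catalog, service_name, region=None):
    """Return the single matching publicURL; region=None mimics the bug."""
    matches = [e for e in catalog
               if e["serviceName"] == service_name
               and (region is None or e["region"] == region)]
    if len(matches) > 1:
        # Mirrors the AmbiguousEndpoints error in the traceback above.
        raise RuntimeError("AmbiguousEndpoints: %r" % matches)
    return matches[0]["publicURL"]
```

Including the region in the match makes each lookup unique again, which is what the workaround's per-region catalog entry achieves indirectly.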
System Information:

# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=12.04
DISTRIB_CODENAME=precise
DISTRIB_DESCRIPTION="Ubuntu 12.04.1 LTS"
# dpkg-query --show nova-api
nova-api	2013.1~g1-0ubuntu1~cloud0

** Affects: nova (Ubuntu)
     Importance: Undecided
         Status: New

** Tags: canonistack

-- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to nova in Ubuntu. https://bugs.launchpad.net/bugs/1104584 Title: cinderclient fails when there are more than one volume endpoint To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1104584/+subscriptions -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 1104584] Re: cinderclient fails when there are more than one volume endpoint
*** This bug is a duplicate of bug 1087735 *** https://bugs.launchpad.net/bugs/1087735 ** This bug has been marked a duplicate of bug 1087735 Attaching volume fails if keystone has multiple endpoints of Cinder -- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to nova in Ubuntu. https://bugs.launchpad.net/bugs/1104584 Title: cinderclient fails when there are more than one volume endpoint To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1104584/+subscriptions -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 1098262] Re: Precise crashes hard when HP array rebuilds
Another data point: We have another machine with the same identical hardware, firmware and OS, but running an earlier kernel and it is not suffering from the same problem. Of course we haven't had a disk failure on the hardware so cannot say for sure that the bug does not exist in the earlier kernel version. We cannot simulate a failure on the sister hardware as the machine is in use with production work loads. -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1098262 Title: Precise crashes hard when HP array rebuilds To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1098262/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1084261] [NEW] 'nova-manage project quota' command fails with 'nova-manage: error: no such option: --project'
Public bug reported: When attempting to change a quota for one of my customers with the command below, I receive an error message instead of the command succeeding.

What I see:

$ sudo nova-manage project quota --project= --key=instances --value=15
nova-manage: error: no such option: --project

What I expect to see:

$ sudo nova-manage project quota --project= --key=instances --value=15
metadata_items: 128
injected_file_content_bytes: 10240
volumes: 5
gigabytes: 1000
ram: 51200
floating_ips: 3
security_group_rules: 20
instances: 15
key_pairs: 100
injected_files: 5
cores: 20
injected_file_path_bytes: 255
security_groups: 50

Debugging: Looking at the `nova-manage` command I notice that the @args decorators are missing from the `quota` function. Replacing the decorators fixes the problem (see attached patch). I have looked at the current source from GitHub and the most recent packages from the cloud-archive, and these decorators are missing from there as well.

System Information:

$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=12.04
DISTRIB_CODENAME=precise
DISTRIB_DESCRIPTION="Ubuntu 12.04.1 LTS"
$ dpkg -S $(which nova-manage)
nova-common: /usr/bin/nova-manage
$ dpkg-query --show nova-common
nova-common	2012.2-0ubuntu3~cloud0

Please let me know if you need any further information.

** Affects: nova (Ubuntu)
     Importance: Undecided
         Status: New

** Tags: canonistack

** Patch added: "nova-manage.patch" https://bugs.launchpad.net/bugs/1084261/+attachment/3446275/+files/nova-manage.patch

-- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to nova in Ubuntu. 
https://bugs.launchpad.net/bugs/1084261 Title: 'nova-manage project quota' command fails with 'nova-manage: error: no such option: --project' To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1084261/+subscriptions -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
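The decorator pattern behind the report above works roughly like this: each decorator attaches an option spec to the function, and the CLI layer later feeds those specs to its option parser, so deleting the decorators silently unregisters options like --project. This is a simplified, hypothetical sketch, not nova-manage's actual implementation:

```python
# Hypothetical sketch of decorator-based CLI option registration
# (simplified; not nova's real code).
def args(*posargs, **kwargs):
    """Attach an option spec to the decorated function without changing it."""
    def decorator(func):
        func.__dict__.setdefault("options", []).insert(0, (posargs, kwargs))
        return func
    return decorator

@args("--project", dest="project_id", help="Project Id")
@args("--key", dest="key", help="Key")
@args("--value", dest="value", help="Value")
def quota(project_id, key=None, value=None):
    # The real command queries and updates quotas; here we just echo.
    return {"project_id": project_id, key: value}

# The CLI layer builds its parser from quota.options; with the decorators
# deleted, --project/--key/--value are never registered and the parser
# rejects them -- the "no such option" error seen in the report.
```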
[Bug 1078444] [NEW] python-boto should verify SSL certificates and should use the system's certificate repository
Public bug reported: Currently python-boto does not verify SSL certificates by default. This is unacceptable as it exposes users to man-in-the-middle attacks. This can be worked around by the user (see below). Unfortunately, after enabling verification, python-boto uses its own cacerts.txt file to verify certificates and does not use the system-provided certificates. If a valid certificate is not included in the cacerts.txt file shipped with python-boto and certificate validation is turned on, then verification will fail. I presume that this behaviour exists to enable cross-platform compatibility. python-boto should enable SSL certificate verification by default and use the system-installed certificates (perhaps falling back to its shipped certs file if necessary). The method to override verification should be included in the package documentation (or a README).

= Workaround to enable verification =

Create a ~/.boto file with the following:

[Boto]
https_validate_certificates = true

= System Information =

$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=12.10
DISTRIB_CODENAME=quantal
DISTRIB_DESCRIPTION="Ubuntu 12.10"
$ dpkg-query --show python-boto ca-certificates
ca-certificates	20120623
python-boto	2.3.0-1

** Affects: python-boto (Ubuntu)
     Importance: Undecided
         Status: Confirmed

-- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to python-boto in Ubuntu. https://bugs.launchpad.net/bugs/1078444 Title: python-boto should verify SSL certificates and should use the system's certificate repository To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/python-boto/+bug/1078444/+subscriptions -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
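For comparison, Python's standard library shows what "verify by default, using the system store" looks like: ssl.create_default_context() requires valid certificates and loads the platform's default CA bundle. This is a generic stdlib sketch of the requested behaviour, not boto's code path:

```python
import ssl

# create_default_context() turns verification on (CERT_REQUIRED plus
# hostname checking) and loads the system's default CA certificates --
# the defaults this report asks python-boto to adopt.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # verification is mandatory
print(ctx.check_hostname)                     # hostnames are checked too
```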
[Bug 1064835] Re: keystoneclient fails on SSL certificates that work for other services
** Tags added: canonistack -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1064835 Title: keystoneclient fails on SSL certificates that work for other services To manage notifications about this bug go to: https://bugs.launchpad.net/python-keystoneclient/+bug/1064835/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1071032] [NEW] python-keystoneclient package missing dependency on python-pkg-resources
Public bug reported: I have recently installed python-keystoneclient within a debootstrapped chroot and found what I believe to be a dependency problem with the package. After installing python-pkg-resources, python-keystoneclient runs as expected.

Testing:

(keystone)agy@bricked:~$ keystone catalog
Traceback (most recent call last):
  File "/usr/bin/keystone", line 5, in <module>
    from pkg_resources import load_entry_point
ImportError: No module named pkg_resources
(keystone)agy@bricked:~$ sudo apt-get install python-pkg-resources
(keystone)agy@bricked:~$ keystone catalog
[... catalog is returned ...]

System Information:

(keystone)agy@bricked:~$ dpkg-query --show '*keystone*'
python-keystoneclient	1:0.1.3-0ubuntu1
(keystone)agy@bricked:~$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=12.10
DISTRIB_CODENAME=quantal
DISTRIB_DESCRIPTION="Ubuntu 12.10"

** Affects: python-keystoneclient (Ubuntu)
     Importance: Undecided
         Status: New

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1071032 Title: python-keystoneclient package missing dependency on python-pkg-resources To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/python-keystoneclient/+bug/1071032/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
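Console scripts generated by setuptools begin with `from pkg_resources import load_entry_point`, which is why the keystone entry point dies at its very first import when python-pkg-resources is absent. A generic way to express that importability check (illustrative, not the packaged script):

```python
import importlib.util

def has_module(name):
    """True if 'name' is importable in this environment, without importing it."""
    return importlib.util.find_spec(name) is not None

# In the reporter's debootstrapped chroot, has_module("pkg_resources")
# was False until python-pkg-resources was installed, so the setuptools
# stub in /usr/bin/keystone failed before reaching any keystone code.
```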
[Bug 1062277] Re: 092_add_instance_system_metadata migration fails when upgrading
@Dan The default_character_set_name is set to 'latin1'. This system has been upgraded through various lifecycles of OpenStack, which may explain it (a more recent installation that I looked at is set to 'utf8'). I would argue that setting the character set explicitly on table creation is correct and prevents these sorts of issues. -- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to nova in Ubuntu. https://bugs.launchpad.net/bugs/1062277 Title: 092_add_instance_system_metadata migration fails when upgrading To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1062277/+subscriptions -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 1064759] [NEW] Samsung 900x4C will not POST after applying Quantal updates
Public bug reported: What follows is my recollection of the events resulting in my Samsung 900x4C no longer POSTing after updates. I was running an outdated Quantal install (a few weeks old) and recently applied Quantal updates to bring my laptop up to date. One of the updates included the linux-image-3.5.0-17 kernel. After updating and rebooting, my laptop would kernel panic shortly after the grub menu displayed. I attempted to use any of the previous kernels listed and similar kernel panics occurred. I managed to boot into a USB installation environment, mounted my laptop's primary partitions, chrooted in and reinstalled the linux-image-3.5.0-17 kernel package. The laptop would then boot; however, I noticed that wifi was not working. I then discovered that the linux-image-extra-3.5.0-17-generic package had been removed, taking the wifi drivers along with it. I plugged in an ethernet cable and reinstalled the linux-image-extra-3.5.0-17-generic package. After rebooting to ensure that the wifi drivers would load correctly and that the kernel would no longer panic, I found that the machine would no longer POST at all. Pressing the power button turns on the blue power LED, but nothing further happens. It seems that the laptop has been bricked. I cannot remove the laptop's battery, so I am currently running the laptop without power attached to attempt to run down the battery. I had UEFI enabled in the BIOS and the last kernel which seemed to work for me was linux-image-3.5.0-15 (although I cannot confirm this). This seems very similar to bug 1040557, which also bricked other models of Samsung laptops. For obvious reasons, I cannot reproduce this nor provide logs. If there is any additional information I can provide, please let me know.

** Affects: linux (Ubuntu)
     Importance: Undecided
         Status: New

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. 
https://bugs.launchpad.net/bugs/1064759 Title: Samsung 900x4C will not POST after applying Quantal updates To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1064759/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1064759] Re: Samsung 900x4C will not POST after applying Quantal updates
** Tags added: rls-q-incoming -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1064759 Title: Samsung 900x4C will not POST after applying Quantal updates To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1064759/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1062277] [NEW] 092_add_instance_system_metadata migration fails when upgrading
Public bug reported: The upgrade from Precise+Essex to Precise+Folsom using the ubuntu-cloud archive fails while running the 092_add_instance_system_metadata migration. The migration fails while creating the `instance_system_metadata` table (see below). The '1005' MySQL error indicates that a foreign key is failing to apply. After changing the table definition in the migration to include `mysql_charset='utf8'`, the migration succeeds (see attached patch). This is similar to a bug that the `dns_domains` table had previously.

Error message received during package upgrade:

2012-10-04 10:59:55 INFO sqlalchemy.engine.base.Engine [-] SHOW CREATE TABLE `instances`
2012-10-04 10:59:55 INFO sqlalchemy.engine.base.Engine [-] ()
2012-10-04 10:59:55 DEBUG sqlalchemy.engine.base.Engine [-] Col ('Table', 'Create Table') from (pid=24755) __init__ /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py:2789
2012-10-04 10:59:55 DEBUG sqlalchemy.engine.base.Engine [-] Row ('instances', 'CREATE TABLE `instances` (\n `created_at` datetime DEFAULT NULL,\n `updated_at` datetime DEFAULT NULL,\n `deleted_at` datetime DEFAULT NULL,\n `deleted` tinyint(1) DEFAULT NULL,\n `id` int(11) NOT NULL AUTO_INCREMENT,\n `internal_id` int(11) DEFAULT NULL,\n `user_id` varchar(255) DEFAULT NULL,\n `project_id` varchar(255) DEFAULT NULL,\n `image_ref` varchar(255) DEFAULT NULL,\n `kernel_id` varchar(255) DEFAULT NULL,\n `ramdisk_id` varchar(255) DEFAULT NULL,\n `server_name` varchar(255) DEFAULT NULL,\n `launch_index` int(11) DEFAULT NULL,\n `key_name` varchar(255) DEFAULT NULL,\n `key_data` mediumtext,\n `power_state` int(11) DEFAULT NULL,\n `vm_state` varchar(255) DEFAULT NULL,\n `memory_mb` int(11) DEFAULT NULL,\n `vcpus` int(11) DEFAULT NULL,\n `hostname` varchar(255) DEFAULT NULL,\n `host` varchar(255) DEFAULT NULL,\n `user_data` mediumtext,\n `reservation_id` varchar(255) DEFAULT NULL,\n `scheduled_at` datetime DEFAULT NULL,\n `launched_at` datetime DEFAULT NULL,\n `terminated_at` datetime DEFAULT NULL,\n 
`display_name` varchar(255) DEFAULT NULL,\n `display_description` varchar(255) DEFAULT NULL,\n `availability_zone` varchar(255) DEFAULT NULL,\n `locked` tinyint(1) DEFAULT NULL,\n `os_type` varchar(255) DEFAULT NULL,\n `launched_on` mediumtext,\n `instance_type_id` int(11) DEFAULT NULL,\n `vm_mode` varchar(255) DEFAULT NULL,\n `uuid` varchar(36) DEFAULT NULL,\n `architecture` varchar(255) DEFAULT NULL,\n `root_device_name` varchar(255) DEFAULT NULL,\n `access_ip_v4` varchar(255) DEFAULT NULL,\n `access_ip_v6` varchar(255) DEFAULT NULL,\n `config_drive` varchar(255) DEFAULT NULL,\n `task_state` varchar(255) DEFAULT NULL,\n `default_ephemeral_device` varchar(255) DEFAULT NULL,\n `default_swap_device` varchar(255) DEFAULT NULL,\n `progress` int(11) DEFAULT NULL,\n `auto_disk_config` tinyint(1) DEFAULT NULL,\n `shutdown_terminate` tinyint(1) DEFAULT NULL,\n `disable_terminate` tinyint(1) DEFAULT NULL,\n `root_gb` int(11) DEFAULT NULL,\n `ephemeral_gb` int(11) DEFAULT NULL,\n `cell_name` varchar(255) DEFAULT NULL,\n PRIMARY KEY (`id`),\n UNIQUE KEY `uuid` (`uuid`),\n KEY `project_id` (`project_id`)\n) ENGINE=InnoDB AUTO_INCREMENT=17623 DEFAULT CHARSET=utf8') from (pid=24755) process_rows /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py:3194 2012-10-04 10:59:55 DEBUG sqlalchemy.pool.QueuePool [-] Connection _mysql.connection open to '10.55.58.1' at 28b0ed0 being returned to pool from (pid=24755) _finalize_fairy /usr/lib/python2.7/dist-packages/sqlalchemy/pool.py:364 2012-10-04 10:59:55 DEBUG sqlalchemy.pool.QueuePool [-] Connection _mysql.connection open to '10.55.58.1' at 28b0ed0 checked out from pool from (pid=24755) __init__ /usr/lib/python2.7/dist-packages/sqlalchemy/pool.py:401 2012-10-04 10:59:55 INFO sqlalchemy.engine.base.Engine [-] CREATE TABLE instance_system_metadata ( created_at DATETIME, updated_at DATETIME, deleted_at DATETIME, deleted BOOL, id INTEGER NOT NULL AUTO_INCREMENT, instance_uuid VARCHAR(36) NOT NULL, `key` VARCHAR(255) NOT NULL, value 
VARCHAR(255), PRIMARY KEY (id), CHECK (deleted IN (0, 1)), FOREIGN KEY(instance_uuid) REFERENCES instances (uuid) )ENGINE=InnoDB 2012-10-04 10:59:55 INFO sqlalchemy.engine.base.Engine [-] () 2012-10-04 10:59:55 INFO sqlalchemy.engine.base.Engine [-] ROLLBACK 2012-10-04 10:59:55 DEBUG sqlalchemy.pool.QueuePool [-] Connection _mysql.connection open to '10.55.58.1' at 28b0ed0 being returned to pool from (pid=24755) _finalize_fairy /usr/lib/python2.7/dist-packages/sqlalchemy/pool.py:364 No handlers could be found for logger 092_add_instance_system_metadata Command failed, please check log for more info 2012-10-04 10:59:55 CRITICAL nova [-] (OperationalError) (1005, Can't create table 'nova.instance_system_metadata' (errno: 150)) '\nCREATE TABLE instance_system_metadata (\n\tcreated_at DATETIME,
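For context, errno 150 on table creation means InnoDB could not apply the foreign key; one common cause is a character-set mismatch between the new table and the referenced `instances` table (which is `DEFAULT CHARSET=utf8`). A hedged raw-SQL equivalent of the fixed migration, reconstructed from the log above:

```sql
-- Reconstructed from the failing CREATE TABLE in the log above; the only
-- change is the explicit DEFAULT CHARSET=utf8 so the charset matches
-- `instances` and the foreign key can apply.
CREATE TABLE instance_system_metadata (
    created_at DATETIME,
    updated_at DATETIME,
    deleted_at DATETIME,
    deleted BOOL,
    id INTEGER NOT NULL AUTO_INCREMENT,
    instance_uuid VARCHAR(36) NOT NULL,
    `key` VARCHAR(255) NOT NULL,
    value VARCHAR(255),
    PRIMARY KEY (id),
    CHECK (deleted IN (0, 1)),
    FOREIGN KEY (instance_uuid) REFERENCES instances (uuid)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
```

The attached patch achieves the same effect at the SQLAlchemy level via `mysql_charset='utf8'` in the table definition.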
[Bug 1062277] Re: 092_add_instance_system_metadata migration fails when upgrading
** Patch added: 092_add_instance_system_metadata.patch https://bugs.launchpad.net/bugs/1062277/+attachment/3375835/+files/092_add_instance_system_metadata.patch -- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to nova in Ubuntu. https://bugs.launchpad.net/bugs/1062277 Title: 092_add_instance_system_metadata migration fails when upgrading To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1062277/+subscriptions -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 1062160] [NEW] python-swiftclient fails install if the swift package is installed
Public bug reported: I was upgrading a Precise + Essex install, using the ubuntu-cloud archive, to Precise + Folsom. During the upgrade I received an error during the package installation of python-swiftclient (see below). The workaround was to remove the 'swift' package before installing 'python-swiftclient'. Is this package missing a Conflicts (or Replaces) declaration? Error message: [...] Selecting previously unselected package python-swiftclient. Unpacking python-swiftclient (from .../python-swiftclient_1%3a1.2.0-0ubuntu2~cloud0_all.deb) ... dpkg: error processing /var/cache/apt/archives/python-swiftclient_1%3a1.2.0-0ubuntu2~cloud0_all.deb (--unpack): trying to overwrite '/usr/bin/swift', which is also in package swift 1.4.8-0ubuntu2 [...] Current System information: $ cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=12.04 DISTRIB_CODENAME=precise DISTRIB_DESCRIPTION=Ubuntu 12.04.1 LTS $ dpkg -l | grep swift ii python-swift 1.7.4-0ubuntu1~cloud0 distributed virtual object store - Python libraries ii python-swiftclient 1:1.2.0-0ubuntu2~cloud0 Client libary for Openstack Swift API. ** Affects: python-swiftclient (Ubuntu) Importance: Undecided Status: New ** Tags: canonistack -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1062160 Title: python-swiftclient fails install if the swift package is installed To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/python-swiftclient/+bug/1062160/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
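For illustration, the usual packaging fix when one package takes over a file from another is a versioned Replaces plus Breaks (or Conflicts) relationship. A hedged sketch of the relevant debian/control stanza; the version bound is illustrative, not taken from the real package:

```
Package: python-swiftclient
Replaces: swift (<< 1.7.4)
Breaks: swift (<< 1.7.4)
```

With such a relationship declared, dpkg would allow python-swiftclient to take over /usr/bin/swift from an older swift package instead of aborting the unpack.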
[Bug 1062165] [NEW] Ubuntu theme looks incorrect in Folsom
Public bug reported: I have recently upgraded from Precise + Essex to Precise + Folsom using the ubuntu-cloud archive. The openstack-dashboard with the ubuntu theme looks incorrect/broken. The functionality of the UI looks to be correct. Unfortunately, I cannot provide much more information (apart from the attached screenshot). Current system information: $ cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=12.04 DISTRIB_CODENAME=precise DISTRIB_DESCRIPTION=Ubuntu 12.04.1 LTS $ dpkg -l | grep openstack-dashboard ii openstack-dashboard 2012.2-0ubuntu1~cloud0 django web interface to Openstack ii openstack-dashboard-ubuntu-theme 2012.2-0ubuntu1~cloud0 Ubuntu theme for the Openstack dashboard ** Affects: openstack-dashboard (Ubuntu) Importance: Undecided Status: New ** Tags: canonistack -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1062165 Title: Ubuntu theme looks incorrect in Folsom To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/openstack-dashboard/+bug/1062165/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1062165] Re: Ubuntu theme looks incorrect in Folsom
** Attachment added: dashboard-problems.png https://bugs.launchpad.net/bugs/1062165/+attachment/3375287/+files/dashboard-problems.png -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1062165 Title: Ubuntu theme looks incorrect in Folsom To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/openstack-dashboard/+bug/1062165/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1062165] Re: Ubuntu theme looks incorrect in Folsom
Added screenshot after removing the theme, reloading apache and refreshing the browser. ** Attachment added: dashboard-wo-ubuntu-theme.png https://bugs.launchpad.net/ubuntu/+source/openstack-dashboard/+bug/1062165/+attachment/3375296/+files/dashboard-wo-ubuntu-theme.png -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1062165 Title: Ubuntu theme looks incorrect in Folsom To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/openstack-dashboard/+bug/1062165/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1062171] [NEW] Dashboard should not add WSGI config in /etc/apache2/conf.d
Public bug reported: The dashboard should not add WSGI config in /etc/apache2/conf.d, since that applies a _global_ configuration change. This breaks all the other VirtualHosts on the Apache server. I had a previously defined VirtualHost which served the dashboard. The package installation failed during upgrade because Apache's config became invalid. Error during upgrade: Setting up openstack-dashboard (2012.2-0ubuntu1~cloud0) ... Syntax error on line 28 of /etc/apache2/sites-enabled/dashboard.canonistack.canonical.com: Name duplicates previous WSGI daemon definition. Action 'configtest' failed. The Apache error log may have more information. ...fail! dpkg: error processing openstack-dashboard (--configure): subprocess installed post-installation script returned error exit status 1 System Information: $ cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=12.04 DISTRIB_CODENAME=precise DISTRIB_DESCRIPTION=Ubuntu 12.04.1 LTS $ dpkg -l | grep openstack-dashboard ii openstack-dashboard 2012.2-0ubuntu1~cloud0 django web interface to Openstack ii openstack-dashboard-ubuntu-theme 2012.2-0ubuntu1~cloud0 Ubuntu theme for the Openstack dashboard ** Affects: openstack-dashboard (Ubuntu) Importance: Undecided Status: New ** Tags: canonistack -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1062171 Title: Dashboard should not add WSGI config in /etc/apache2/conf.d To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/openstack-dashboard/+bug/1062171/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
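A hedged sketch of the scoped alternative: ship the WSGI directives in a disableable site file (a VirtualHost under sites-available) instead of the global conf.d, so an admin who already defines their own dashboard vhost can disable the packaged one. Hostname, paths, and the daemon name below are illustrative:

```apache
<VirtualHost *:80>
    ServerName dashboard.example.com
    # mod_wsgi rejects duplicate daemon names server-wide, so the packaged
    # daemon gets a distinct name and lives in a site file the admin can
    # a2dissite rather than a conf.d fragment that is always active.
    WSGIDaemonProcess openstack-dashboard user=www-data group=www-data processes=3
    WSGIProcessGroup openstack-dashboard
    WSGIScriptAlias / /usr/share/openstack-dashboard/wsgi.py
</VirtualHost>
```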
[Bug 1061166] Re: ec2 instance IDs are broken after folsom upgrade
I upgraded today and found that I had this issue. Not all of my instances had duplicate entries in the `instance_id_mappings` table; however, I did have more entries in `instance_id_mappings` than in the `instances` table. It seems that the `id` column in the `instances` table is still being used somewhere. To work around the problem I needed to set the auto_increment counter to be the same for both tables. Example:

-- grab the auto_increment counter for the `instances` table
SELECT Auto_increment FROM information_schema.tables WHERE table_name='instances' AND table_schema='nova';

-- grab the auto_increment counter for the `instance_id_mappings` table
SELECT Auto_increment FROM information_schema.tables WHERE table_name='instance_id_mappings' AND table_schema='nova';

-- raise the lower of the two values to match the higher one on the relevant table.
ALTER TABLE instances AUTO_INCREMENT = 1769801923;

-- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to nova in Ubuntu. https://bugs.launchpad.net/bugs/1061166 Title: ec2 instance IDs are broken after folsom upgrade To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1061166/+subscriptions -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 1061166] Re: ec2 instance IDs are broken after folsom upgrade
The loose relationship between two auto_incrementing `id` columns of different tables is insanely brittle. I am not convinced that the intention was for the relation to work this way. It may simply be a problem with part of the code referencing `id` instead of `uuid`. The `uuid` column in `instance_id_mappings` should, at a minimum, have a unique constraint. -- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to nova in Ubuntu. https://bugs.launchpad.net/bugs/1061166 Title: ec2 instance IDs are broken after folsom upgrade To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1061166/+subscriptions -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
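A hedged sketch of that minimal hardening, using the table and column names from the comments above (MySQL syntax; it assumes any existing duplicate rows are cleaned up first, otherwise the ALTER will fail):

```sql
-- Reject duplicate uuid rows in instance_id_mappings at the database level.
ALTER TABLE instance_id_mappings
    ADD CONSTRAINT uniq_instance_id_mappings_uuid UNIQUE (uuid);
```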
[Bug 1056462] [NEW] The '--print' option to ceph-authtool is incorrect
Public bug reported: Running `ceph-authtool keyringfile --print` throws an error claiming that it does not recognize the option; however, the help output indicates that this is the valid long option for '-p'. Looking at the source code, the command expects '--print-key', and using that works as expected. Either the help is incorrect or the '--print-key' option is incorrect. How to reproduce: $ ceph-authtool --help no command specified usage: ceph-authtool keyringfile [OPTIONS]... where the options are: -l, --list will list all keys and capabilities present in the keyring -p, --print will print an encoded key for the specified entity name. This is suitable for the 'mount -o secret=..' argument [...] $ ceph-authtool /etc/ceph/keyring --print ceph-authtool: unexpected '--print' [...] $ ceph-authtool /etc/ceph/keyring -p XX== $ ceph-authtool /etc/ceph/keyring --print-key XX== See the trivial patch attached. ** Affects: ceph (Ubuntu) Importance: Undecided Status: New -- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to ceph in Ubuntu. https://bugs.launchpad.net/bugs/1056462 Title: The '--print' option to ceph-authtool is incorrect To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1056462/+subscriptions -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 1056462] Re: The '--print' option to ceph-authtool is incorrect
** Patch added: Correct help message to provide correct long option '--print-key' https://bugs.launchpad.net/bugs/1056462/+attachment/3343155/+files/ceph_authtool.patch -- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to ceph in Ubuntu. https://bugs.launchpad.net/bugs/1056462 Title: The '--print' option to ceph-authtool is incorrect To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1056462/+subscriptions -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 1055771] [NEW] Samsung 900X4C laptop boots with display backlight off
Public bug reported: After upgrading to 3.5.0-15 on Quantal my Samsung 900X4C boots with its display backlight off. The backlight is turned off immediately after grub and before the kernel messages start. The display remains off and the keyboard keys assigned to adjust the brightness do not change the display. The previous kernel (3.5.0-14) does not display these symptoms. Workarounds: 1) Create a script which runs at boot time to turn the display on. I do this by running a reboot cronjob to set the display brightness. /etc/cron.d/fix-backlight: @reboot root echo 2040 > /sys/class/backlight/intel_backlight/brightness An init script would be able to do the job as well, or better. 2) Use an earlier kernel (3.5.0-14) ProblemType: Bug DistroRelease: Ubuntu 12.10 Package: linux-image-3.5.0-15-generic 3.5.0-15.22 ProcVersionSignature: Ubuntu 3.5.0-15.22-generic 3.5.4 Uname: Linux 3.5.0-15-generic x86_64 ApportVersion: 2.5.2-0ubuntu4 Architecture: amd64 AudioDevicesInUse: USER PID ACCESS COMMAND /dev/snd/controlC0: agy 2182 F pulseaudio Date: Mon Sep 24 16:45:57 2012 EcryptfsInUse: Yes HibernationDevice: RESUME=UUID=e58f26d0-910f-4f85-a240-cd9d404f38ca InstallationMedia: Ubuntu 12.04 LTS Precise Pangolin - Release amd64 (20120425) MachineType: SAMSUNG ELECTRONICS CO., LTD. 900X3C/900X4C/900X4D ProcEnviron: TERM=screen PATH=(custom, no user) LANG=en_US.UTF-8 SHELL=/bin/bash ProcFB: 0 inteldrmfb ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-3.5.0-15-generic root=UUID=b281cccd-7e4d-4f82-9888-61890b3bafe4 ro quiet splash vt.handoff=7 RelatedPackageVersions: linux-restricted-modules-3.5.0-15-generic N/A linux-backports-modules-3.5.0-15-generic N/A linux-firmware 1.93 SourcePackage: linux UpgradeStatus: Upgraded to quantal on 2012-09-14 (10 days ago) dmi.bios.date: 04/26/2012 dmi.bios.vendor: Phoenix Technologies Ltd. dmi.bios.version: P01AAC dmi.board.asset.tag: Base Board Asset Tag dmi.board.name: SAMSUNG_NP1234567890 dmi.board.vendor: SAMSUNG ELECTRONICS CO., LTD. 
dmi.board.version: FAB1 dmi.chassis.asset.tag: No Asset Tag dmi.chassis.type: 9 dmi.chassis.vendor: SAMSUNG ELECTRONICS CO., LTD. dmi.chassis.version: 0.1 dmi.modalias: dmi:bvnPhoenixTechnologiesLtd.:bvrP01AAC:bd04/26/2012:svnSAMSUNGELECTRONICSCO.,LTD.:pn900X3C/900X4C/900X4D:pvr0.1:rvnSAMSUNGELECTRONICSCO.,LTD.:rnSAMSUNG_NP1234567890:rvrFAB1:cvnSAMSUNGELECTRONICSCO.,LTD.:ct9:cvr0.1: dmi.product.name: 900X3C/900X4C/900X4D dmi.product.version: 0.1 dmi.sys.vendor: SAMSUNG ELECTRONICS CO., LTD. ** Affects: linux (Ubuntu) Importance: Undecided Status: New ** Tags: amd64 apport-bug quantal running-unity -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1055771 Title: Samsung 900X4C laptop boots with display backlight off To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1055771/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1055770] [NEW] Samsung 900X4C laptop boots with display backlight off
*** This bug is a duplicate of bug 1055771 *** https://bugs.launchpad.net/bugs/1055771 Public bug reported: After upgrading to 3.5.0-15 on Quantal my Samsung 900X4C boots with its display backlight off. The backlight is turned off immediately after grub and before the kernel messages start. The display remains off and the keyboard keys assigned to adjust the brightness do not change the display. The previous kernel (3.5.0-14) does not display these symptoms. Workarounds: 1) Create a script which runs at boot time to turn the display on. I do this by running a reboot cronjob to set the display brightness. /etc/cron.d/fix-backlight: @reboot root echo 2040 > /sys/class/backlight/intel_backlight/brightness An init script would be able to do the job as well, or better. 2) Use an earlier kernel (3.5.0-14) ** Affects: linux (Ubuntu) Importance: Undecided Status: New ** This bug has been marked a duplicate of bug 1055771 Samsung 900X4C laptop boots with display backlight off -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1055770 Title: Samsung 900X4C laptop boots with display backlight off To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1055770/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1055770] Re: Samsung 900X4C laptop boots with display backlight off
*** This bug is a duplicate of bug 1055771 *** https://bugs.launchpad.net/bugs/1055771 This is the same bug report as bug#1055771. They are identical and are not separate bugs. -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1055770 Title: Samsung 900X4C laptop boots with display backlight off To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1055770/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 985489] Re: nova-compute stops processing compute.$HOSTNAME occasionally
@Rafi, Thanks for the suggestion. What you describe used to happen to me when we were using Oneiric's libvirtd (LP#903212). Since upgrading to Precise I haven't experienced the exact problem you're having. My issue is that libvirtd seems to temporarily stop responding, enough to block nova-compute, but it then responds to external commands later when I probe it. Looking through my monitoring logs, I can see that libvirtd occasionally stops responding, but not for long enough to generate an alert. I speculate that this is a timing issue, where libvirtd has stopped responding for a very limited period and during that moment nova-compute attempts to poll libvirtd? -- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to nova in Ubuntu. https://bugs.launchpad.net/bugs/985489 Title: nova-compute stops processing compute.$HOSTNAME occasionally To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/985489/+subscriptions -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 985489] Re: nova-compute stops processing compute.$HOSTNAME occasionally
@Serge, As mentioned in #1, the cluster is running Precise + Essex. Unfortunately I cannot reproduce the problem consistently; however, it has been recurring approximately once per month.

As mentioned in #5, I think nova-compute may be querying libvirtd while libvirtd is not responding. Since the query does not appear to time out, nova-compute hangs waiting for a response that will never come. I can only examine the system after the problem occurs, so I have only the current state and log files to try to infer the cause from.

-- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to nova in Ubuntu. https://bugs.launchpad.net/bugs/985489 Title: nova-compute stops processing compute.$HOSTNAME occasionally To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/985489/+subscriptions
[Bug 1036918] Re: Switching between regions causes login form to appear at the bottom of the page
*** This bug is a duplicate of bug 1033934 *** https://bugs.launchpad.net/bugs/1033934 ** This bug has been marked a duplicate of bug 1033934 Attempting to change regions in the dashboard does not display correctly -- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to horizon in Ubuntu. https://bugs.launchpad.net/bugs/1036918 Title: Switching between regions causes login form to appear at the bottom of the page To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/horizon/+bug/1036918/+subscriptions
[Bug 1036585] Re: Horizon fails to provide Juju credentials with Internal Server Error upon clicking the dowload link.
*** This bug is a duplicate of bug 1033920 *** https://bugs.launchpad.net/bugs/1033920 ** This bug has been marked a duplicate of bug 1033920 Dashboard raises a ServiceCatalogException when attempting to download juju settings -- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to horizon in Ubuntu. https://bugs.launchpad.net/bugs/1036585 Title: Horizon fails to provide Juju credentials with Internal Server Error upon clicking the dowload link. To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/horizon/+bug/1036585/+subscriptions
[Bug 1026621] Re: nova-network gets release_fixed_ip events from someplace, but the database still keeps them associated with instances
@Adam: Bug #1006898 seems unlikely IMHO. We are not running with 'multi_host' nor with VLANs, so there is one dnsmasq process.

The following is just conjecture, as the event has rotated out of the logs and I have not done any further testing. These events happen rather infrequently, but are a significant enough annoyance to be noticed.

The dnsmasq(8) man page implies that dnsmasq will block while running the dhcp-script. Ordinarily this would not be a problem for individual leases; however, if the daemon receives a HUP signal it will parse the current lease database and run the dhcp-script with the 'old' action for each lease.

Assuming the above is correct, with a dnsmasq lease file of 200 records and a runtime of 0.15s per invocation¹ of dhcp-script, the daemon could block for up to 30 seconds (or longer). If a host was renewing an expiring lease at the time, it is conceivable that the lease may expire. The default lease length is 120s, which seems to correlate with the logs provided above. I am not sure if any of this helps.

[1]: Non-scientific mean run time determined by running 'time /usr/bin/nova-dhcpbridge' ten times.

-- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to nova in Ubuntu. https://bugs.launchpad.net/bugs/1026621 Title: nova-network gets release_fixed_ip events from someplace, but the database still keeps them associated with instances To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1026621/+subscriptions
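The back-of-the-envelope arithmetic above can be sketched as follows. All numbers are the assumptions stated in the comment (a 200-record lease file, ~0.15s per dhcp-script run), not measurements:

```python
# Estimate how long dnsmasq might block while replaying its lease database
# through dhcp-script after a HUP, assuming the invocations run serially.
# All constants are the assumptions from the comment above.

def estimated_block_seconds(num_leases, script_runtime_s):
    """Total time spent if dnsmasq runs dhcp-script once per lease, serially."""
    return num_leases * script_runtime_s

LEASES = 200             # assumed lease-file size
SCRIPT_RUNTIME_S = 0.15  # assumed mean runtime of one nova-dhcpbridge run
DEFAULT_LEASE_S = 120    # dnsmasq default lease length

blocked = estimated_block_seconds(LEASES, SCRIPT_RUNTIME_S)
print("dnsmasq could block for ~%.0fs; default lease length is %ds"
      % (blocked, DEFAULT_LEASE_S))
```

A 30-second stall is well within a 120-second lease, which is why a host renewing near the end of its lease could plausibly lose it.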
[Bug 1033920] [NEW] Dashboard raises a ServiceCatalogException when attempting to download juju settings
Public bug reported:

When attempting to download the juju settings file from the dashboard I receive an exception¹ which displays an ugly, blank 500 error message. It seems that the juju module is expecting an S3 endpoint in my service catalog, is not finding it, and throws an exception which is not properly caught. This means that I need to reload the page to remove the blank page and continue to use the dashboard.

What I expect: I expect the exception to be thrown (as I do not have the required endpoint), an explanation that there was an error with the endpoint, and for the user to be able to use the dashboard without having to reload the page.

Exception:

[1]: After setting DEBUG=True I see the following ServiceCatalogException from Django:

Environment:
Request Method: POST
Request URL: http://localhost:8000/settings/juju/
Django Version: 1.3.1
Python Version: 2.7.3
Installed Applications: ['openstack_dashboard', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'django_nose', 'horizon', 'horizon.dashboards.nova', 'horizon.dashboards.syspanel', 'horizon.dashboards.settings']
Installed Middleware: ('django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'openstack_dashboard.middleware.DashboardLogUnhandledExceptionsMiddleware', 'horizon.middleware.HorizonMiddleware', 'django.middleware.doc.XViewMiddleware', 'django.middleware.locale.LocaleMiddleware')

Traceback:
File /usr/lib/python2.7/dist-packages/django/core/handlers/base.py in get_response
  111. response = callback(request, *callback_args, **callback_kwargs)
File /usr/lib/python2.7/dist-packages/horizon/decorators.py in dec
  40. return view_func(request, *args, **kwargs)
File /usr/lib/python2.7/dist-packages/horizon/decorators.py in dec
  55. return view_func(request, *args, **kwargs)
File /usr/lib/python2.7/dist-packages/horizon/decorators.py in dec
  40. return view_func(request, *args, **kwargs)
File /usr/lib/python2.7/dist-packages/django/views/generic/base.py in view
  47. return self.dispatch(request, *args, **kwargs)
File /usr/lib/python2.7/dist-packages/django/views/generic/base.py in dispatch
  68. return handler(request, *args, **kwargs)
File /usr/lib/python2.7/dist-packages/horizon/forms/views.py in post
  84. return self.get(self, request, *args, **kwargs)
File /usr/lib/python2.7/dist-packages/horizon/forms/views.py in get
  64. form, handled = self.maybe_handle()
File /usr/lib/python2.7/dist-packages/horizon/forms/views.py in maybe_handle
  59. self.form, self.handled = form.maybe_handle(self.request, **kwargs)
File /usr/lib/python2.7/dist-packages/horizon/forms/base.py in maybe_handle
  101. exceptions.handle(request)
File /usr/lib/python2.7/dist-packages/horizon/forms/base.py in maybe_handle
  99. return form, form.handle(request, form.cleaned_data)
File /usr/lib/python2.7/dist-packages/horizon/dashboards/settings/juju/forms.py in handle
  88. redirect=request.build_absolute_uri())
File /usr/lib/python2.7/dist-packages/horizon/dashboards/settings/juju/forms.py in handle
  81. 's3_url': api.url_for(request, 's3'),
File /usr/lib/python2.7/dist-packages/horizon/api/base.py in url_for
  112. raise exceptions.ServiceCatalogException(service_type)

Exception Type: ServiceCatalogException at /settings/juju/
Exception Value: Invalid service catalog service: s3

System settings:

$ dpkg-query --show *dashboard*
openstack-dashboard	2012.1-0ubuntu8.1
openstack-dashboard-ubuntu-theme	2012.1-0ubuntu8.1

$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=12.04
DISTRIB_CODENAME=precise
DISTRIB_DESCRIPTION=Ubuntu 12.04 LTS

** Affects: horizon (Ubuntu)
   Importance: Undecided
       Status: New

** Tags: canonistack

-- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to horizon in Ubuntu. https://bugs.launchpad.net/bugs/1033920 Title: Dashboard raises a ServiceCatalogException when attempting to download juju settings To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/horizon/+bug/1033920/+subscriptions
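A self-contained sketch of the behaviour I would expect instead: treat a missing catalog entry as an expected condition and surface a message in-page, rather than letting the exception reach Django's 500 handler. `ServiceCatalogException` and `url_for` below are minimal stand-ins for the Horizon names in the traceback, not the real implementation:

```python
# Minimal stand-ins for horizon.api.base.url_for and ServiceCatalogException,
# illustrating catching the exception instead of letting it become a 500.

class ServiceCatalogException(Exception):
    pass

# A catalog with no "s3" entry, mirroring the deployment described above.
SERVICE_CATALOG = {"identity": "http://localhost:5000/v2.0"}

def url_for(service_type):
    """Look up an endpoint by service type; raise if it is not in the catalog."""
    if service_type not in SERVICE_CATALOG:
        raise ServiceCatalogException(
            "Invalid service catalog service: %s" % service_type)
    return SERVICE_CATALOG[service_type]

def juju_settings_context():
    """Build the settings-form context; degrade gracefully if S3 is absent."""
    try:
        return {"s3_url": url_for("s3")}
    except ServiceCatalogException as exc:
        # Caught here: the caller can render an in-page error message
        # and the rest of the dashboard keeps working.
        return {"error": str(exc)}

print(juju_settings_context())
```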
[Bug 1033934] [NEW] Attempting to change regions in the dashboard does not display correctly
Public bug reported:

After logging in to the dashboard in one region I attempted to change regions by selecting the other region in the top right corner. After doing this, login credential form elements appear on the bottom left-hand side and do not look correct (I have attached screenshots to illustrate what I see). While I am not a UX expert, I do not believe that it is intended to display this way. I have tested in both Firefox and Chromium and experienced the same thing.

How to reproduce (see attached images):
1. Login to the dashboard, selecting any one of the regions (image 01).
2. Select a region from the dropdown in the top right corner and click on the region name (image 02).
3. Notice the form elements appear in the bottom left corner (image 03).
4. (And again) Scroll to the top of the page, select a region from the dropdown in the top right corner again and notice additional form elements appear (image 04).

What I expect (suggestion): I expect the form elements to display correctly. Perhaps as a rendered login dialog overlaid on the current page?

System(s) Information:

Client system:

$ dpkg-query --show 'firefox' 'chromium-browser'
chromium-browser	18.0.1025.168~r134367-0ubuntu0.12.04.1
firefox	14.0.1+build1-0ubuntu0.12.04.1

$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=12.04
DISTRIB_CODENAME=precise
DISTRIB_DESCRIPTION=Ubuntu 12.04 LTS

Server system:

$ dpkg-query --show *dashboard*
openstack-dashboard	2012.1-0ubuntu8.1
openstack-dashboard-ubuntu-theme	2012.1-0ubuntu8.1

$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=12.04
DISTRIB_CODENAME=precise
DISTRIB_DESCRIPTION=Ubuntu 12.04 LTS

** Affects: horizon (Ubuntu)
   Importance: Undecided
       Status: New

** Tags: canonistack

-- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to horizon in Ubuntu. https://bugs.launchpad.net/bugs/1033934 Title: Attempting to change regions in the dashboard does not display correctly To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/horizon/+bug/1033934/+subscriptions
[Bug 1033934] Re: Attempting to change regions in the dashboard does not display correctly
** Attachment added: "Tarball containing images illustrating the display problem."
   https://bugs.launchpad.net/bugs/1033934/+attachment/3251203/+files/region-displays-incorrectly.tar.gz

** Description changed:

  After logging in to the dashboard in one region I attempted to change regions by selecting the other region in the top right corner. After doing this, login credentials form elements appear on the bottom left-hand side and does not look correct (I have attached screenshots to illustrate what I see). While I am not a UX expert, I do not believe that it is intended to display this way. I have tested in both Firefox and Chromium and experienced the same thing.

- How to reproduce (see attached images):
- 1. Login to the dashboard, selecting any one of the regions (image 01).
- 2. Select a region from the dropdown in the top right corner and click on the region name (image 02).
- 3. Notice the form elements appear in the bottom left corner (image 03).
- 4. (And again) Scroll to the top of the page, select a region from the dropdown in the top right corner again and notice additional form elements appear (image 04).
-
+ 1. Login to the dashboard, selecting any one of the regions (image 01).
+ 2. Select a region from the dropdown in the top right corner and click on the region name (image 02).
+ 3. Notice the form elements appear in the bottom left corner (image 03).
+ 4. (And again) Scroll to the top of the page, select a region from the dropdown in the top right corner again and notice additional form elements appear (image 04).

  What I expect (suggestion): I expect the form elements to display correctly. Perhaps as a rendered login dialog overlayed on the current page?

- System(s) Information: Client system:
- $ dpkg-query --show 'firefox' 'chromium-browser'
- chromium-browser 18.0.1025.168~r134367-0ubuntu0.12.04.1
- firefox 14.0.1+build1-0ubuntu0.12.04.1
+ $ dpkg-query --show 'firefox' 'chromium-browser'
+ chromium-browser 18.0.1025.168~r134367-0ubuntu0.12.04.1
+ firefox 14.0.1+build1-0ubuntu0.12.04.1

- $ cat /etc/lsb-release
- DISTRIB_ID=Ubuntu
- DISTRIB_RELEASE=12.04
- DISTRIB_CODENAME=precise
- DISTRIB_DESCRIPTION=Ubuntu 12.04 LTS
+ $ cat /etc/lsb-release
+ DISTRIB_ID=Ubuntu
+ DISTRIB_RELEASE=12.04
+ DISTRIB_CODENAME=precise
+ DISTRIB_DESCRIPTION=Ubuntu 12.04 LTS

  Server system:

- $ dpkg-query --show *dashboard*
- openstack-dashboard 2012.1-0ubuntu8.1
- openstack-dashboard-ubuntu-theme 2012.1-0ubuntu8.1
+ $ dpkg-query --show *dashboard*
+ openstack-dashboard 2012.1-0ubuntu8.1
+ openstack-dashboard-ubuntu-theme 2012.1-0ubuntu8.1

- $ cat /etc/lsb-release
- DISTRIB_ID=Ubuntu
- DISTRIB_RELEASE=12.04
- DISTRIB_CODENAME=precise
- DISTRIB_DESCRIPTION=Ubuntu 12.04 LTS
+ $ cat /etc/lsb-release
+ DISTRIB_ID=Ubuntu
+ DISTRIB_RELEASE=12.04
+ DISTRIB_CODENAME=precise
+ DISTRIB_DESCRIPTION=Ubuntu 12.04 LTS

** Description changed:

  After logging in to the dashboard in one region I attempted to change regions by selecting the other region in the top right corner. After doing this, login credentials form elements appear on the bottom left-hand side and does not look correct (I have attached screenshots to illustrate what I see). While I am not a UX expert, I do not believe that it is intended to display this way. I have tested in both Firefox and Chromium and experienced the same thing.

  How to reproduce (see attached images):
- 1. Login to the dashboard, selecting any one of the regions (image 01).
- 2. Select a region from the dropdown in the top right corner and click on the region name (image 02).
- 3. Notice the form elements appear in the bottom left corner (image 03).
- 4. (And again) Scroll to the top of the page, select a region from the dropdown in the top right corner again and notice additional form elements appear (image 04).
+ 1. Login to the dashboard, selecting any one of the regions (image 01).
+ 2. Select a region from the dropdown in the top right corner and click on the region name (image 02).
+ 3. Notice the form elements appear in the bottom left corner (image 03).
+ 4. (And again) Scroll to the top of the page, select a region from the dropdown in the top right corner again and notice additional form elements appear (image 04).

  What I expect (suggestion): I expect the form elements to display correctly. Perhaps as a rendered login dialog overlayed on the current page?

  System(s) Information: Client system: $ dpkg-query --show 'firefox' 'chromium-browser' chromium-browser 18.0.1025.168~r134367-0ubuntu0.12.04.1 firefox 14.0.1+build1-0ubuntu0.12.04.1 $ cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=12.04
[Bug 1032633] [NEW] Keystone's token table grows unconditionally.
Public bug reported: Keystone's `token` table grows unconditionally with expired tokens. Keystone should provide a backend-agnostic method to find and delete these tokens. This could be run via a periodic task or supplied as a script to run as a cron job. An example SQL statement (if you're using a SQL backend) to work around this problem:

  DELETE FROM token WHERE expired <= NOW();

It may be ideal to allow a date smear to allow older tokens to persist if needed. ** Affects: keystone (Ubuntu) Importance: Undecided Status: New ** Tags: canonistack -- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to keystone in Ubuntu. https://bugs.launchpad.net/bugs/1032633 Title: Keystone's token table grows unconditionally. To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/keystone/+bug/1032633/+subscriptions -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 1032633] Re: Keystone's token table grows unconditionally.
** Description changed: Keystone's `token` table grows unconditionally with expired tokens. Keystone should provide a backend-agnostic method to find and delete these tokens. This could be run via a periodic task or supplied as a script to run as a cron job. An example SQL statement (if you're using a SQL backend) to work around this problem: DELETE FROM token WHERE expired <= NOW(); It may be ideal to allow a date smear to allow older tokens to persist if needed. + + System Information: + + $ dpkg-query --show keystone + keystone 2012.1+stable~20120608-aff45d6-0ubuntu1 + + $ cat /etc/lsb-release + DISTRIB_ID=Ubuntu + DISTRIB_RELEASE=12.04 + DISTRIB_CODENAME=precise + DISTRIB_DESCRIPTION=Ubuntu 12.04 LTS ** Summary changed: - Keystone's token table grows unconditionally. + Keystone's token table grows unconditionally when using SQL backend. ** Description changed: - Keystone's `token` table grows unconditionally with expired tokens. + Keystone's `token` table grows unconditionally with expired tokens when + using the SQL backend. Keystone should provide a backend-agnostic method to find and delete these tokens. This could be run via a periodic task or supplied as a script to run as a cron job. An example SQL statement (if you're using a SQL backend) to work around this problem: - DELETE FROM token WHERE expired <= NOW(); + DELETE FROM token WHERE expired <= NOW(); It may be ideal to allow a date smear to allow older tokens to persist if needed. + + Choosing the `memcache` backend may work around this issue, but SQL is + the package default. System Information: $ dpkg-query --show keystone keystone 2012.1+stable~20120608-aff45d6-0ubuntu1 - $ cat /etc/lsb-release + $ cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=12.04 DISTRIB_CODENAME=precise DISTRIB_DESCRIPTION=Ubuntu 12.04 LTS -- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to keystone in Ubuntu. 
https://bugs.launchpad.net/bugs/1032633 Title: Keystone's token table grows unconditionally when using SQL backend. To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/keystone/+bug/1032633/+subscriptions -- Ubuntu-server-bugs mailing list Ubuntu-server-bugs@lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
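The cron-driven cleanup the report asks for can be sketched as a short script. This is a minimal sketch, not Keystone's actual tooling: it uses an in-memory SQLite database to stand in for whatever SQL backend is configured, the `token`/`expired` names follow the bug text, and the `grace` parameter implements the "date smear" idea of letting recently expired tokens persist a little longer.

```python
import sqlite3
from datetime import datetime, timedelta

def purge_expired_tokens(conn, grace=timedelta(days=1)):
    """Delete tokens that expired more than `grace` ago.

    `grace` is the "date smear" from the bug report: tokens that
    expired only recently are kept around a little longer.
    """
    cutoff = (datetime.utcnow() - grace).isoformat(" ")
    cur = conn.execute("DELETE FROM token WHERE expired <= ?", (cutoff,))
    conn.commit()
    return cur.rowcount

# Demo against an in-memory database shaped like the bug's `token` table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE token (id TEXT, expired TEXT)")
now = datetime.utcnow()
conn.executemany("INSERT INTO token VALUES (?, ?)", [
    ("old",    (now - timedelta(days=7)).isoformat(" ")),   # past grace: purged
    ("recent", (now - timedelta(hours=1)).isoformat(" ")),  # inside grace: kept
    ("live",   (now + timedelta(hours=1)).isoformat(" ")),  # not yet expired: kept
])
deleted = purge_expired_tokens(conn)
print(deleted)  # → 1
```

Run from cron (e.g. hourly), this keeps the table bounded without touching tokens that clients might still be presenting.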
[Bug 1028509] Re: 'nova rescue' fails if an instance image does not have a kernel_id
** Description changed: A client of mine attempted to rescue an instance by using the `nova - rescuecl` command which failed putting the instance in the `ERROR` - state. + rescue` command which failed putting the instance in the `ERROR` state. I am running Openstack Essex on Ubuntu 12.04 with KVM as my hypervisor and I do not have `rescue_image_id`, `rescue_kernel_id` nor `rescue_ramdisk_id` defined in my nova.conf file. The log (included below) indicates that the rescue failed to complete as the rescue kernel does not exist. This is expected as the config does not include the `rescue_*_id` variables and the instance was started with an image which does not include a ramdisk or kernel image. What happens: - The user is informed that an error occurred and the instance is set to the `error` state. What I expect: -- If `rescue_kernel_id` is not defined in nova.conf and the instance does not have a valid kernel_id, then the user should be informed that the instance cannot be rescued rather than the instance going into the error state and becoming inaccessible. Log file: - 2012-07-24 14:20:08 ERROR nova.compute.manager [req-2bdf6c17-1733-475f-b036-0e66a0bff266 7739264c2246454f9bdbc8a24ad30a63 cf5a8bcc6652400593b472ba82c2c2b5] unable to set user and group to '107:116' on '/srv/nova/instances/instance-367b/kernel.rescue': No such file or directory. 
Setting instance vm_state to ERROR 2012-07-24 14:20:08 ERROR nova.rpc.amqp [req-2bdf6c17-1733-475f-b036-0e66a0bff266 7739264c2246454f9bdbc8a24ad30a63 cf5a8bcc6652400593b472ba82c2c2b5] Exception during message handling 2012-07-24 14:20:08 TRACE nova.rpc.amqp Traceback (most recent call last): 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py, line 253, in _process_data 2012-07-24 14:20:08 TRACE nova.rpc.amqp rval = node_func(context=ctxt, **node_args) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/exception.py, line 114, in wrapped 2012-07-24 14:20:08 TRACE nova.rpc.amqp return f(*args, **kw) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 159, in decorated_function 2012-07-24 14:20:08 TRACE nova.rpc.amqp function(self, context, instance_uuid, *args, **kwargs) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 183, in decorated_function 2012-07-24 14:20:08 TRACE nova.rpc.amqp sys.exc_info()) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/contextlib.py, line 24, in __exit__ 2012-07-24 14:20:08 TRACE nova.rpc.amqp self.gen.next() 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 177, in decorated_function 2012-07-24 14:20:08 TRACE nova.rpc.amqp return function(self, context, instance_uuid, *args, **kwargs) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1133, in rescue_instance 2012-07-24 14:20:08 TRACE nova.rpc.amqp self._legacy_nw_info(network_info), image_meta) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/exception.py, line 114, in wrapped 2012-07-24 14:20:08 TRACE nova.rpc.amqp return f(*args, **kw) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py, line 853, in rescue 2012-07-24 14:20:08 TRACE nova.rpc.amqp self._hard_reboot(instance, network_info, xml=xml) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py, line 783, in _hard_reboot 2012-07-24 14:20:08 TRACE nova.rpc.amqp self._create_new_domain(xml) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py, line 1589, in _create_new_domain 2012-07-24 14:20:08 TRACE nova.rpc.amqp domain.createWithFlags(launch_flags) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/libvirt.py, line 581, in createWithFlags 2012-07-24 14:20:08 TRACE nova.rpc.amqp if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self) 2012-07-24 14:20:08 TRACE nova.rpc.amqp libvirtError: unable to set user and group to '107:116' on '/srv/nova/instances/instance-367b/kernel.rescue': No such file or directory

SQL showing the relevant image fields for the instance: ---

mysql> select image_ref, kernel_id, ramdisk_id from instances where id = '13947';
+--------------------------------------+-----------+------------+
| image_ref                            | kernel_id | ramdisk_id |
+--------------------------------------+-----------+------------+
| 3400daaa-fbea-407b-b92c-5b66c6f168cf |           |            |
+--------------------------------------+-----------+------------+

System information: ---

$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
[Bug 1028509] [NEW] 'nova rescue' fails if an instance image does not have a kernel_id
Public bug reported: A client of mine attempted to rescue an instance by using the `nova rescuecl` command which failed putting the instance in the `ERROR` state. I am running Openstack Essex on Ubuntu 12.04 with KVM as my hypervisor and I do not have `rescue_image_id`, `rescue_kernel_id` nor `rescue_ramdisk_id` defined in my nova.conf file. The log (included below) indicates that the rescue failed to complete as the rescue kernel does not exist. This is expected as the config does not include the `rescue_*_id` variables and the instance was started with an image which does not include a ramdisk or kernel image. What happens: - The user is informed that an error occurred and the instance is set to the `error` state. What I expect: -- If `rescue_kernel_id` is not defined and the instance does not have a valid kernel_id, then the user should be informed that the instance cannot be rescued rather than the instance going into the error state and becoming inaccessible. Log file: - 2012-07-24 14:20:08 ERROR nova.compute.manager [req-2bdf6c17-1733-475f-b036-0e66a0bff266 7739264c2246454f9bdbc8a24ad30a63 cf5a8bcc6652400593b472ba82c2c2b5] unable to set user and group to '107:116' on '/srv/nova/instances/instance-367b/kernel.rescue': No such file or directory. 
Setting instance vm_state to ERROR 2012-07-24 14:20:08 ERROR nova.rpc.amqp [req-2bdf6c17-1733-475f-b036-0e66a0bff266 7739264c2246454f9bdbc8a24ad30a63 cf5a8bcc6652400593b472ba82c2c2b5] Exception during message handling 2012-07-24 14:20:08 TRACE nova.rpc.amqp Traceback (most recent call last): 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py, line 253, in _process_data 2012-07-24 14:20:08 TRACE nova.rpc.amqp rval = node_func(context=ctxt, **node_args) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/exception.py, line 114, in wrapped 2012-07-24 14:20:08 TRACE nova.rpc.amqp return f(*args, **kw) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 159, in decorated_function 2012-07-24 14:20:08 TRACE nova.rpc.amqp function(self, context, instance_uuid, *args, **kwargs) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 183, in decorated_function 2012-07-24 14:20:08 TRACE nova.rpc.amqp sys.exc_info()) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/contextlib.py, line 24, in __exit__ 2012-07-24 14:20:08 TRACE nova.rpc.amqp self.gen.next() 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 177, in decorated_function 2012-07-24 14:20:08 TRACE nova.rpc.amqp return function(self, context, instance_uuid, *args, **kwargs) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1133, in rescue_instance 2012-07-24 14:20:08 TRACE nova.rpc.amqp self._legacy_nw_info(network_info), image_meta) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/exception.py, line 114, in wrapped 2012-07-24 14:20:08 TRACE nova.rpc.amqp return f(*args, **kw) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py, line 853, in rescue 2012-07-24 14:20:08 TRACE nova.rpc.amqp self._hard_reboot(instance, network_info, xml=xml) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py, line 783, in _hard_reboot 2012-07-24 14:20:08 TRACE nova.rpc.amqp self._create_new_domain(xml) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py, line 1589, in _create_new_domain 2012-07-24 14:20:08 TRACE nova.rpc.amqp domain.createWithFlags(launch_flags) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/libvirt.py, line 581, in createWithFlags 2012-07-24 14:20:08 TRACE nova.rpc.amqp if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self) 2012-07-24 14:20:08 TRACE nova.rpc.amqp libvirtError: unable to set user and group to '107:116' on '/srv/nova/instances/instance-367b/kernel.rescue': No such file or directory

SQL showing the relevant image fields for the instance: ---

mysql> select image_ref, kernel_id, ramdisk_id from instances where id = '13947';
+--------------------------------------+-----------+------------+
| image_ref                            | kernel_id | ramdisk_id |
+--------------------------------------+-----------+------------+
| 3400daaa-fbea-407b-b92c-5b66c6f168cf |           |            |
+--------------------------------------+-----------+------------+

System information: ---

$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
[Bug 1028509] Re: 'nova rescue' fails if an instance image does not have a kernel_id
** Description changed: A client of mine attempted to rescue an instance by using the `nova rescuecl` command which failed putting the instance in the `ERROR` state. I am running Openstack Essex on Ubuntu 12.04 with KVM as my hypervisor and I do not have `rescue_image_id`, `rescue_kernel_id` nor `rescue_ramdisk_id` defined in my nova.conf file. The log (included below) indicates that the rescue failed to complete as the rescue kernel does not exist. This is expected as the config does not include the `rescue_*_id` variables and the instance was started with an image which does not include a ramdisk or kernel image. - What happens: - The user is informed that an error occurred and the instance is set to the `error` state. - What I expect: -- - If `rescue_kernel_id` is not defined and the instance does not have a - valid kernel_id, then the user should be informed that the instance - cannot be rescued rather than the instance going into the error state - and becoming inaccessible. - + If `rescue_kernel_id` is not defined in nova.conf and the instance does + not have a valid kernel_id, then the user should be informed that the + instance cannot be rescued rather than the instance going into the error + state and becoming inaccessible. Log file: - 2012-07-24 14:20:08 ERROR nova.compute.manager [req-2bdf6c17-1733-475f-b036-0e66a0bff266 7739264c2246454f9bdbc8a24ad30a63 cf5a8bcc6652400593b472ba82c2c2b5] unable to set user and group to '107:116' on '/srv/nova/instances/instance-367b/kernel.rescue': No such file or directory. 
Setting instance vm_state to ERROR 2012-07-24 14:20:08 ERROR nova.rpc.amqp [req-2bdf6c17-1733-475f-b036-0e66a0bff266 7739264c2246454f9bdbc8a24ad30a63 cf5a8bcc6652400593b472ba82c2c2b5] Exception during message handling 2012-07-24 14:20:08 TRACE nova.rpc.amqp Traceback (most recent call last): 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py, line 253, in _process_data 2012-07-24 14:20:08 TRACE nova.rpc.amqp rval = node_func(context=ctxt, **node_args) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/exception.py, line 114, in wrapped 2012-07-24 14:20:08 TRACE nova.rpc.amqp return f(*args, **kw) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 159, in decorated_function 2012-07-24 14:20:08 TRACE nova.rpc.amqp function(self, context, instance_uuid, *args, **kwargs) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 183, in decorated_function 2012-07-24 14:20:08 TRACE nova.rpc.amqp sys.exc_info()) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/contextlib.py, line 24, in __exit__ 2012-07-24 14:20:08 TRACE nova.rpc.amqp self.gen.next() 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 177, in decorated_function 2012-07-24 14:20:08 TRACE nova.rpc.amqp return function(self, context, instance_uuid, *args, **kwargs) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1133, in rescue_instance 2012-07-24 14:20:08 TRACE nova.rpc.amqp self._legacy_nw_info(network_info), image_meta) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/exception.py, line 114, in wrapped 2012-07-24 14:20:08 TRACE nova.rpc.amqp return f(*args, **kw) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py, line 853, in rescue 2012-07-24 14:20:08 TRACE nova.rpc.amqp self._hard_reboot(instance, network_info, xml=xml) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py, line 783, in _hard_reboot 2012-07-24 14:20:08 TRACE nova.rpc.amqp self._create_new_domain(xml) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py, line 1589, in _create_new_domain 2012-07-24 14:20:08 TRACE nova.rpc.amqp domain.createWithFlags(launch_flags) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/libvirt.py, line 581, in createWithFlags 2012-07-24 14:20:08 TRACE nova.rpc.amqp if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self) 2012-07-24 14:20:08 TRACE nova.rpc.amqp libvirtError: unable to set user and group to '107:116' on '/srv/nova/instances/instance-367b/kernel.rescue': No such file or directory

SQL showing the relevant image fields for the instance: ---

mysql> select image_ref, kernel_id, ramdisk_id from instances where id = '13947';
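The behaviour the reporter expects — reject the rescue up front instead of letting the instance drop into the ERROR state mid-way — amounts to a pre-flight check. A minimal sketch follows; all names (`check_rescue_possible`, the dict-shaped `instance` and `config`) are hypothetical illustrations, not nova's actual API:

```python
class CannotRescueError(Exception):
    """Raised up front instead of letting the instance fall into ERROR state."""

def check_rescue_possible(instance, config):
    """Pre-flight check before attempting a rescue.

    Hypothetical sketch: `instance` and `config` are plain dicts standing
    in for nova's real objects. A rescue kernel must come either from the
    rescue_kernel_id config option or from the instance's own image.
    """
    if config.get("rescue_kernel_id"):
        return  # operator configured a dedicated rescue kernel
    if instance.get("kernel_id"):
        return  # fall back to the kernel the instance booted with
    raise CannotRescueError(
        "instance %s has no kernel_id and rescue_kernel_id is unset"
        % instance.get("uuid"))
```

With a guard like this, the failure would surface as a clear error before the hypervisor is touched, rather than as the libvirtError seen in the log above.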
[Bug 1028509] [NEW] 'nova rescue' fails if an instance image does not have a kernel_id
Public bug reported: A client of mine attempted to rescue an instance by using the `nova rescuecl` command which failed putting the instance in the `ERROR` state. I am running Openstack Essex on Ubuntu 12.04 with KVM as my hypervisor and I do not have `rescue_image_id`, `rescue_kernel_id` nor `rescue_ramdisk_id` defined in my nova.conf file. The log (included below) indicates that the rescue failed to complete as the rescue kernel does not exist. This is expected as the config does not include the `rescue_*_id` variables and the instance was started with an image which does not include a ramdisk or kernel image. What happens: - The user is informed that an error occurred and the instance is set to the `error` state. What I expect: -- If `rescue_kernel_id` is not defined and the instance does not have a valid kernel_id, then the user should be informed that the instance cannot be rescued rather than the instance going into the error state and becoming inaccessible. Log file: - 2012-07-24 14:20:08 ERROR nova.compute.manager [req-2bdf6c17-1733-475f-b036-0e66a0bff266 7739264c2246454f9bdbc8a24ad30a63 cf5a8bcc6652400593b472ba82c2c2b5] unable to set user and group to '107:116' on '/srv/nova/instances/instance-367b/kernel.rescue': No such file or directory. 
Setting instance vm_state to ERROR 2012-07-24 14:20:08 ERROR nova.rpc.amqp [req-2bdf6c17-1733-475f-b036-0e66a0bff266 7739264c2246454f9bdbc8a24ad30a63 cf5a8bcc6652400593b472ba82c2c2b5] Exception during message handling 2012-07-24 14:20:08 TRACE nova.rpc.amqp Traceback (most recent call last): 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py, line 253, in _process_data 2012-07-24 14:20:08 TRACE nova.rpc.amqp rval = node_func(context=ctxt, **node_args) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/exception.py, line 114, in wrapped 2012-07-24 14:20:08 TRACE nova.rpc.amqp return f(*args, **kw) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 159, in decorated_function 2012-07-24 14:20:08 TRACE nova.rpc.amqp function(self, context, instance_uuid, *args, **kwargs) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 183, in decorated_function 2012-07-24 14:20:08 TRACE nova.rpc.amqp sys.exc_info()) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/contextlib.py, line 24, in __exit__ 2012-07-24 14:20:08 TRACE nova.rpc.amqp self.gen.next() 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 177, in decorated_function 2012-07-24 14:20:08 TRACE nova.rpc.amqp return function(self, context, instance_uuid, *args, **kwargs) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1133, in rescue_instance 2012-07-24 14:20:08 TRACE nova.rpc.amqp self._legacy_nw_info(network_info), image_meta) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/exception.py, line 114, in wrapped 2012-07-24 14:20:08 TRACE nova.rpc.amqp return f(*args, **kw) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py, line 853, in rescue 2012-07-24 14:20:08 TRACE nova.rpc.amqp self._hard_reboot(instance, network_info, xml=xml) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py, line 783, in _hard_reboot 2012-07-24 14:20:08 TRACE nova.rpc.amqp self._create_new_domain(xml) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py, line 1589, in _create_new_domain 2012-07-24 14:20:08 TRACE nova.rpc.amqp domain.createWithFlags(launch_flags) 2012-07-24 14:20:08 TRACE nova.rpc.amqp File /usr/lib/python2.7/dist-packages/libvirt.py, line 581, in createWithFlags 2012-07-24 14:20:08 TRACE nova.rpc.amqp if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self) 2012-07-24 14:20:08 TRACE nova.rpc.amqp libvirtError: unable to set user and group to '107:116' on '/srv/nova/instances/instance-367b/kernel.rescue': No such file or directory SQL showing the relevant image fields for the instance: --- mysql select image_ref, kernel_id, ramdisk_id from instances where id = '13947'; +--+---++ | image_ref| kernel_id | ramdisk_id | +--+---++ | 3400daaa-fbea-407b-b92c-5b66c6f168cf | || +--+---++ System information: --- $ cat /etc/lsb-release DISTRIB_ID=Ubuntu
[Bug 1028509] Re: 'nova rescue' fails if an instance image does not have a kernel_id
** Description changed:

  A client of mine attempted to rescue an instance by using the `nova
  rescuecl` command which failed putting the instance in the `ERROR`
  state. I am running Openstack Essex on Ubuntu 12.04 with KVM as my
  hypervisor and I do not have `rescue_image_id`, `rescue_kernel_id` nor
  `rescue_ramdisk_id` defined in my nova.conf file. The log (included
  below) indicates that the rescue failed to complete as the rescue
  kernel does not exist. This is expected as the config does not include
  the `rescue_*_id` variables and the instance was started with an image
  which does not include a ramdisk or kernel image.

- What happens:
- The user is informed that an error occurred and the instance is set to the `error` state.
- What I expect:
-
- If `rescue_kernel_id` is not defined and the instance does not have a
- valid kernel_id, then the user should be informed that the instance
- cannot be rescued rather than the instance going into the error state
- and becoming inaccessible.
-
+ If `rescue_kernel_id` is not defined in nova.conf and the instance does
+ not have a valid kernel_id, then the user should be informed that the
+ instance cannot be rescued rather than the instance going into the error
+ state and becoming inaccessible.

  Log file:

  2012-07-24 14:20:08 ERROR nova.compute.manager [req-2bdf6c17-1733-475f-b036-0e66a0bff266 7739264c2246454f9bdbc8a24ad30a63 cf5a8bcc6652400593b472ba82c2c2b5] unable to set user and group to '107:116' on '/srv/nova/instances/instance-367b/kernel.rescue': No such file or directory. Setting instance vm_state to ERROR
  2012-07-24 14:20:08 ERROR nova.rpc.amqp [req-2bdf6c17-1733-475f-b036-0e66a0bff266 7739264c2246454f9bdbc8a24ad30a63 cf5a8bcc6652400593b472ba82c2c2b5] Exception during message handling
  2012-07-24 14:20:08 TRACE nova.rpc.amqp Traceback (most recent call last):
  2012-07-24 14:20:08 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py", line 253, in _process_data
  2012-07-24 14:20:08 TRACE nova.rpc.amqp     rval = node_func(context=ctxt, **node_args)
  2012-07-24 14:20:08 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
  2012-07-24 14:20:08 TRACE nova.rpc.amqp     return f(*args, **kw)
  2012-07-24 14:20:08 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 159, in decorated_function
  2012-07-24 14:20:08 TRACE nova.rpc.amqp     function(self, context, instance_uuid, *args, **kwargs)
  2012-07-24 14:20:08 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 183, in decorated_function
  2012-07-24 14:20:08 TRACE nova.rpc.amqp     sys.exc_info())
  2012-07-24 14:20:08 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
  2012-07-24 14:20:08 TRACE nova.rpc.amqp     self.gen.next()
  2012-07-24 14:20:08 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 177, in decorated_function
  2012-07-24 14:20:08 TRACE nova.rpc.amqp     return function(self, context, instance_uuid, *args, **kwargs)
  2012-07-24 14:20:08 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1133, in rescue_instance
  2012-07-24 14:20:08 TRACE nova.rpc.amqp     self._legacy_nw_info(network_info), image_meta)
  2012-07-24 14:20:08 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
  2012-07-24 14:20:08 TRACE nova.rpc.amqp     return f(*args, **kw)
  2012-07-24 14:20:08 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 853, in rescue
  2012-07-24 14:20:08 TRACE nova.rpc.amqp     self._hard_reboot(instance, network_info, xml=xml)
  2012-07-24 14:20:08 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 783, in _hard_reboot
  2012-07-24 14:20:08 TRACE nova.rpc.amqp     self._create_new_domain(xml)
  2012-07-24 14:20:08 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 1589, in _create_new_domain
  2012-07-24 14:20:08 TRACE nova.rpc.amqp     domain.createWithFlags(launch_flags)
  2012-07-24 14:20:08 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/libvirt.py", line 581, in createWithFlags
  2012-07-24 14:20:08 TRACE nova.rpc.amqp     if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
  2012-07-24 14:20:08 TRACE nova.rpc.amqp libvirtError: unable to set user and group to '107:116' on '/srv/nova/instances/instance-367b/kernel.rescue': No such file or directory

  SQL showing the relevant image fields for the instance:

  ---
  mysql select image_ref, kernel_id, ramdisk_id from instances where id = '13947';
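The behaviour the description asks for amounts to a pre-flight check before any rescue work begins. A minimal sketch of that logic, using hypothetical helper names rather than nova's actual internals (`rescue_kernel_id` standing in for the nova.conf value, `instance_kernel_id` for the instance's own kernel):

```python
def can_rescue(rescue_kernel_id, instance_kernel_id):
    """Return True only when some usable rescue kernel is available.

    Hypothetical helper: a rescue needs either a cluster-wide
    rescue_kernel_id from nova.conf or a kernel_id on the instance
    itself. With neither, the request should be refused up front.
    """
    return bool(rescue_kernel_id or instance_kernel_id)


def rescue_or_reject(rescue_kernel_id, instance_kernel_id):
    # Reject early with a clear message instead of failing mid-rebuild
    # and dropping the instance into the ERROR state.
    if not can_rescue(rescue_kernel_id, instance_kernel_id):
        raise ValueError("instance cannot be rescued: no rescue kernel "
                         "configured and the instance has no kernel_id")
    return "rescuing"
```

The point is only that the failure becomes a user-facing refusal rather than a libvirt error on a file that was never created.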
[Bug 1026621] Re: nova-network gets release_fixed_ip events from someplace, but the database still keeps them associated with instances
We're running nova with nova.network.manager.FlatDHCPManager. Grepping the nova-network machine's syslog shows that the last DHCP lease request was on Jul 16 14:04:29.

Sample from the log file:

Jul 16 14:04:29 dziban dnsmasq-dhcp[30249]: DHCPREQUEST(br100) 10.55.60.141 fa:16:3e:11:c5:37
Jul 16 14:04:29 dziban dnsmasq-dhcp[30249]: DHCPACK(br100) 10.55.60.141 fa:16:3e:11:c5:37 server-13282

-- 
You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1026621

Title:
  nova-network gets release_fixed_ip events from someplace, but the database still keeps them associated with instances

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1026621/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
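The syslog check described above is easy to script when chasing lease activity for a given MAC. A small sketch (the sample lines are taken from the report; the helper name is ours, and in practice the text would come from /var/log/syslog):

```python
SAMPLE_SYSLOG = """\
Jul 16 14:04:29 dziban dnsmasq-dhcp[30249]: DHCPREQUEST(br100) 10.55.60.141 fa:16:3e:11:c5:37
Jul 16 14:04:29 dziban dnsmasq-dhcp[30249]: DHCPACK(br100) 10.55.60.141 fa:16:3e:11:c5:37 server-13282
"""

def last_ack(syslog_text, mac):
    """Return the most recent dnsmasq DHCPACK line for a MAC, or None."""
    acks = [line for line in syslog_text.splitlines()
            if "DHCPACK" in line and mac in line]
    return acks[-1] if acks else None
```

Comparing the timestamp of the last DHCPACK against the time the fixed IP was released helps establish whether the release event came from dnsmasq at all.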
[Bug 1019913] Re: Lazy load of attribute fails for instance_type.rxtx_factor
** Description changed:

- Running proposed on one of our clusters, I see the following with
- instances started via juju. I have been unable to re-create the problem
- with raw ec2 commands.
+ Running Precise proposed on one of our clusters, I see the following
+ with instances started via juju. I have been unable to re-create the
+ problem with raw ec2 commands.

  [instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0] Ensuring static filters
  [instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0] Instance failed to spawn
  [instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0] Traceback (most recent call last):
  [instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 598, in _spawn
  [instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]     self._legacy_nw_info(network_info), block_device_info)
  [instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
  [instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]     return f(*args, **kw)
  [instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 921, in spawn
  [instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]     self.firewall_driver.prepare_instance_filter(instance, network_info)
  [instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]   File "/usr/lib/python2.7/dist-packages/nova/virt/firewall.py", line 136, in prepare_instance_filter
  [instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]     self.add_filters_for_instance(instance)
  [instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]   File "/usr/lib/python2.7/dist-packages/nova/virt/firewall.py", line 178, in add_filters_for_instance
  [instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]     ipv4_rules, ipv6_rules = self.instance_rules(instance, network_info)
  [instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]   File "/usr/lib/python2.7/dist-packages/nova/virt/firewall.py", line 335, in instance_rules
  [instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]     instance)
  [instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]   File "/usr/lib/python2.7/dist-packages/nova/network/api.py", line 213, in get_instance_nw_info
  [instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]     'rxtx_factor': instance['instance_type']['rxtx_factor'],
  [instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]   File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/models.py", line 75, in __getitem__
  [instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]     return getattr(self, key)
  [instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py", line 168, in __get__
  [instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]     return self.impl.get(instance_state(instance),dict_)
  [instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py", line 453, in get
  [instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]     value = self.callable_(state, passive)
  [instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/strategies.py", line 485, in _load_for_state
  [instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]     (mapperutil.state_str(state), key)
  [instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0] DetachedInstanceError: Parent instance <Instance at 0x45f6350> is not bound to a Session; lazy load operation of attribute 'instance_type' cannot proceed
  [instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0]

-- 
You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1019913

Title:
  Lazy load of attribute fails for instance_type.rxtx_factor

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1019913/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
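The DetachedInstanceError in this traceback is the generic ORM failure mode: a relationship attribute is loaded lazily on first access, so touching it after the object has left its session blows up. A toy model of the mechanism, purely illustrative (this is neither nova's nor SQLAlchemy's actual code):

```python
class DetachedInstanceError(Exception):
    pass


class LazyLoaded:
    """Descriptor that mimics an ORM lazy-loaded relationship:
    it can only fetch its value while the owner is attached to a
    session, just like the 'instance_type' relationship above."""

    def __init__(self, loader):
        self.loader = loader

    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, obj, owner=None):
        if obj is None:
            return self
        if obj.session is None:  # object detached from its session
            raise DetachedInstanceError(
                "lazy load operation of attribute %r cannot proceed"
                % self.name)
        return self.loader(obj)


class Instance:
    # Hypothetical stand-in for the ORM-mapped relationship.
    instance_type = LazyLoaded(lambda self: {"rxtx_factor": 1.0})

    def __init__(self, session):
        self.session = session
```

The usual fixes follow from the model: load the relationship eagerly (e.g. a joined load) before the object is detached, or keep the session open for the code path that needs the attribute.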
[Bug 1016111] [NEW] Passing an incorrect zone name to euca-create-volume results in a volume stuck in the creating state
Public bug reported:

What happens:

I have two regions and one availability zone (AZ) for my Essex cluster. When creating a volume I accidentally used the region name instead of the AZ name. This resulted in a volume being stuck in the creating state which I cannot delete.

What I expect:

I expect two things:
1. The volume creation should fail (it currently does)
2. The volume to go into an error state (it does not currently do this)

Reasoning:

Having the volume go into an error state informs the user that there was a problem and does not leave them wondering if they should wait longer in case the command hasn't completed.

Steps to reproduce:

1. user@laptop:~$ euca-create-volume -s 2  # command requires a zone
   These required options are missing: zone

2. user@laptop:~$ euca-create-volume -s 2 -z lcy-2  # accidentally pass a non-existent zone name
   VOLUME  vol-0001  2  lcy-2  creating  2012-06-21T13:50:10.074Z

3. user@laptop:~$ euca-describe-volumes vol-0001
   VOLUME  vol-0001  2  lcy-2  creating  2012-06-21T13:50:10.000Z

4. The nova-scheduler log file shows the following:

2012-06-21 13:58:41 WARNING nova.scheduler.manager [req-63af9b42-502d-4ad4-9ae1-e981d053f9fc a9d62e6e73294368b79d21ea2a2e2d86 df473f958e4f47949282696966e58f49] Failed to schedule_create_volume: No valid host was found. Is the appropriate service running?
2012-06-21 13:58:41 ERROR nova.rpc.amqp [req-63af9b42-502d-4ad4-9ae1-e981d053f9fc a9d62e6e73294368b79d21ea2a2e2d86 df473f958e4f47949282696966e58f49] Exception during message handling
2012-06-21 13:58:41 TRACE nova.rpc.amqp Traceback (most recent call last):
2012-06-21 13:58:41 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py", line 252, in _process_data
2012-06-21 13:58:41 TRACE nova.rpc.amqp     rval = node_func(context=ctxt, **node_args)
2012-06-21 13:58:41 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 97, in _schedule
2012-06-21 13:58:41 TRACE nova.rpc.amqp     context, ex, *args, **kwargs)
2012-06-21 13:58:41 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2012-06-21 13:58:41 TRACE nova.rpc.amqp     self.gen.next()
2012-06-21 13:58:41 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 92, in _schedule
2012-06-21 13:58:41 TRACE nova.rpc.amqp     return driver_method(*args, **kwargs)
2012-06-21 13:58:41 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/scheduler/simple.py", line 144, in schedule_create_volume
2012-06-21 13:58:41 TRACE nova.rpc.amqp     raise exception.NoValidHost(reason=msg)
2012-06-21 13:58:41 TRACE nova.rpc.amqp NoValidHost: No valid host was found. Is the appropriate service running?

Successful volume creation:

1. user@laptop:~$ euca-describe-availability-zones  # find the AZ
   AVAILABILITYZONE  nova  available

2. user@laptop:~$ euca-create-volume -s 2 -z nova  # create the volume passing in the correct AZ name
   VOLUME  vol-0003  2  nova  creating  2012-06-21T14:02:07.842Z

3. agy@agy-laptop:~$ euca-describe-volumes
   VOLUME  vol-0001  2  lcy-2  creating  2012-06-21T13:50:10.000Z
   VOLUME  vol-0002  2  lcy-2  creating  2012-06-21T13:58:41.000Z
   VOLUME  vol-0003  2  nova   available  2012-06-21T14:02:07.000Z

Note the two volumes that have failed and will not complete and the correctly created volume.

Ideally, I would like the API server to reject the command and return an informative error message to the user before attempting to create the volume.

Operating System Information (all machines):

$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=12.04
DISTRIB_CODENAME=precise
DISTRIB_DESCRIPTION="Ubuntu 12.04 LTS"

Package Information (API/Controller):

$ dpkg-query --show nova-*
nova-api          2012.1-0ubuntu2.1
nova-cert         2012.1-0ubuntu2.1
nova-common       2012.1-0ubuntu2.1
nova-doc          2012.1-0ubuntu2.1
nova-network      2012.1-0ubuntu2.1
nova-objectstore  2012.1-0ubuntu2.1
nova-scheduler    2012.1-0ubuntu2.1

Package Information (Volume Node):

$ dpkg-query --show nova-*
nova-common  2012.1-0ubuntu2.3
nova-volume  2012.1-0ubuntu2.3

** Affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

** Tags: canonistack

-- 
You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1016111

Title:
  Passing an incorrect zone name to euca-create-volume results in a volume stuck in the creating state

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1016111/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
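The API-side rejection the reporter asks for is essentially a membership check against the known availability zones, done before any volume record is created. A hedged sketch of that idea (the names are illustrative, not nova's actual API):

```python
def validate_availability_zone(requested, available_zones):
    """Reject unknown zone names before any volume record is created.

    Hypothetical pre-check: failing here returns an immediate,
    informative error to the user instead of leaving a volume row
    stuck in the 'creating' state with no host to schedule it on.
    """
    if requested not in available_zones:
        raise ValueError(
            "availability zone %r does not exist; valid zones: %s"
            % (requested, ", ".join(sorted(available_zones))))
```

With a check like this, step 2 above (`-z lcy-2`, a region name) would fail at the API with a usable message rather than producing an undeletable volume.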
[Bug 985489] Re: nova-compute stops processing compute.$HOSTNAME occasionally
Fortunately (or not) this has just recently occurred. We do not have debug symbols installed. It looks to me to be stuck on virDomainGetInfo(). Interestingly, libvirtd seems to be responding when I query it. Perhaps there is a missing timeout somewhere?

Backtrace from python:

(gdb) bt
#0  0x7f57e2424b03 in poll () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x7f57df7e757c in ?? () from /usr/lib/libvirt.so.0
#2  0x7f57df7e81d1 in ?? () from /usr/lib/libvirt.so.0
#3  0x7f57df7e9400 in ?? () from /usr/lib/libvirt.so.0
#4  0x7f57df7e9af7 in ?? () from /usr/lib/libvirt.so.0
#5  0x7f57df7cdaf0 in ?? () from /usr/lib/libvirt.so.0
#6  0x7f57df7cdc44 in ?? () from /usr/lib/libvirt.so.0
#7  0x7f57df7d47f3 in ?? () from /usr/lib/libvirt.so.0
#8  0x7f57df7a4b7a in virDomainGetInfo () from /usr/lib/libvirt.so.0
#9  0x7f57dfabb355 in ?? () from /usr/lib/python2.7/dist-packages/libvirtmod.so
#10 0x00566df4 in ?? ()
#11 0x7fff7952f7c0 in ?? ()
#12 0x7fff7952f6f0 in ?? ()
#13 0x02daa3c0 in ?? ()
#14 0x021610a0 in ?? ()
#15 0x04f342d8 in ?? ()
#16 0x7f8579e6e212c67e in ?? ()
#17 0x7f57e3a29e10 in ?? ()
#18 0x039c1ab8 in ?? ()
#19 0x021610a0 in ?? ()
#20 0x02da89f0 in ?? ()
#21 0x in ?? ()

-- 
You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/985489

Title:
  nova-compute stops processing compute.$HOSTNAME occasionally

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/985489/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
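If the hang really is a missing timeout around a blocking libvirt call, one generic mitigation is to wait on the call from a worker thread and give up after a deadline. A sketch under that assumption, not a statement about how nova or libvirt actually handle this; note it only stops the caller from wedging forever, it cannot cancel the stuck C call itself:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as CallTimedOut


def call_with_timeout(fn, timeout, *args, **kwargs):
    """Run fn(*args, **kwargs) in a worker thread, waiting at most
    `timeout` seconds for the result.

    The worker thread is NOT killed on timeout -- the caller merely
    stops waiting, so the service loop can keep processing messages.
    """
    ex = ThreadPoolExecutor(max_workers=1)
    try:
        return ex.submit(fn, *args, **kwargs).result(timeout=timeout)
    finally:
        ex.shutdown(wait=False)
```

A wedged `virDomainGetInfo()` wrapped this way would surface as a timeout error to be logged and retried, instead of silently stalling the compute.$HOSTNAME consumer.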
[Bug 824874] Re: nova-objectstore goes into a tight loop and becomes unresponsive
I do not believe that we've seen this issue in the later releases of Diablo and in Essex. I think that we can close this bug unless someone objects or still experiences this issue.

-- 
You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/824874

Title:
  nova-objectstore goes into a tight loop and becomes unresponsive

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/824874/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 994034] Re: Cannot deallocate floating addresses as cloudadmin
*** This bug is a duplicate of bug 897140 ***
    https://bugs.launchpad.net/bugs/897140

** This bug has been marked a duplicate of bug 897140
   unassociated floating IPs not visible to admin

-- 
You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/994034

Title:
  Cannot deallocate floating addresses as cloudadmin

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/994034/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 994034] [NEW] Cannot deallocate floating addresses as cloudadmin
Public bug reported:

Occasionally I need to deallocate floating IP addresses that my clients have allocated but are not using. Attempting to do this throws a NotAuthorized exception even if my user has the global role of cloudadmin.

Unfortunately the 'nova-manage' commands do not seem to provide a means of releasing floating IP addresses and so I am attempting to use the euca2ools instead. The 'nova' tools will not seem to allow me to deallocate/release addresses either.

Command (as a cloudadmin user):

$ euca-release-address 172.16.93.110

Exception:

2012-05-03 14:30:42 DEBUG nova.rpc.amqp [-] received {u'_context_roles': [u'cloudadmin', u'projectmanager', u'admin'], u'_context_request_id': u'req-ff455811-461c-4c80-a7ad-d64e8311c142', u'_context_read_deleted': u'no', u'args': {u'affect_auto_assigned': False, u'address': u'172.16.93.110'}, u'_context_auth_token': 'SANITIZED', u'_context_is_admin': True, u'_context_project_id': u'canonistack_project', u'_context_timestamp': u'2012-05-03T14:30:42.506510', u'_context_user_id': u'admin', u'method': u'deallocate_floating_ip', u'_context_remote_address': u'172.16.93.65'} from (pid=18920) _safe_log /usr/lib/python2.7/dist-packages/nova/rpc/common.py:160
2012-05-03 14:30:42 DEBUG nova.rpc.amqp [req-ff455811-461c-4c80-a7ad-d64e8311c142 admin canonistack_project] unpacked context: {'user_id': u'admin', 'roles': [u'cloudadmin', u'projectmanager', u'admin'], 'timestamp': '2012-05-03T14:30:42.506510', 'auth_token': 'SANITIZED', 'remote_address': u'172.16.93.65', 'is_admin': True, 'request_id': u'req-ff455811-461c-4c80-a7ad-d64e8311c142', 'project_id': u'canonistack_project', 'read_deleted': u'no'} from (pid=18920) _safe_log /usr/lib/python2.7/dist-packages/nova/rpc/common.py:160
2012-05-03 14:30:42 WARNING nova.network.manager [req-ff455811-461c-4c80-a7ad-d64e8311c142 admin canonistack_project] Address |172.16.93.110| is not allocated to your project |canonistack_project|
2012-05-03 14:30:42 ERROR nova.rpc.amqp [req-ff455811-461c-4c80-a7ad-d64e8311c142 admin canonistack_project] Exception during message handling
2012-05-03 14:30:42 TRACE nova.rpc.amqp Traceback (most recent call last):
2012-05-03 14:30:42 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py", line 252, in _process_data
2012-05-03 14:30:42 TRACE nova.rpc.amqp     rval = node_func(context=ctxt, **node_args)
2012-05-03 14:30:42 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/network/manager.py", line 258, in wrapped
2012-05-03 14:30:42 TRACE nova.rpc.amqp     return func(self, context, *args, **kwargs)
2012-05-03 14:30:42 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/network/manager.py", line 421, in deallocate_floating_ip
2012-05-03 14:30:42 TRACE nova.rpc.amqp     self._floating_ip_owned_by_project(context, floating_ip)
2012-05-03 14:30:42 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/network/manager.py", line 394, in _floating_ip_owned_by_project
2012-05-03 14:30:42 TRACE nova.rpc.amqp     raise exception.NotAuthorized()
2012-05-03 14:30:42 TRACE nova.rpc.amqp NotAuthorized: Not authorised.
2012-05-03 14:30:42 TRACE nova.rpc.amqp

** Affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

** Tags: canonistack

** Tags added: canonistack

-- 
You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/994034

Title:
  Cannot deallocate floating addresses as cloudadmin

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/994034/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
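The traceback shows `_floating_ip_owned_by_project` raising unconditionally when the address belongs to another project, with no carve-out for an admin context, even though the logged context has `is_admin: True`. A toy version of the check and the kind of fix being asked for (illustrative only, not nova's code):

```python
class NotAuthorized(Exception):
    pass


def check_floating_ip_owner(context, ip_project_id):
    """Hypothetical ownership check: admin contexts may act on any
    project's floating IPs; everyone else only on their own."""
    if context.get("is_admin"):
        return  # cloudadmin/admin bypasses the per-project test
    if context.get("project_id") != ip_project_id:
        raise NotAuthorized("address is not allocated to your project")
```

With a bypass like this, the euca-release-address call above would succeed for the cloudadmin user instead of failing in `deallocate_floating_ip`.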
[Bug 994034] Re: Cannot deallocate floating addresses as cloudadmin
I ran the following SQL query to work around the issue:

UPDATE floating_ips
SET deleted_at = NULL,
    fixed_ip_id = NULL,
    project_id = NULL,
    host = NULL
WHERE floating_ips.deleted = 0
  AND floating_ips.auto_assigned = 0
  AND host IS NULL;

-- 
You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/994034

Title:
  Cannot deallocate floating addresses as cloudadmin

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/994034/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 994034] [NEW] Cannot deallocate floating addresses as cloudadmin
Public bug reported:

Occasionally I need to deallocate floating IP addresses that my clients have allocated but are not using. Attempting to do this throws a NotAuthorized exception even if my user has the global role of cloudadmin. Unfortunately the 'nova-manage' commands do not seem to provide a means of releasing floating IP addresses, so I am attempting to use the euca2ools instead. The 'nova' tools do not seem to allow me to deallocate/release addresses either.

Command (as a cloudadmin user):

  $ euca-release-address 172.16.93.110

Exception:

  2012-05-03 14:30:42 DEBUG nova.rpc.amqp [-] received {u'_context_roles': [u'cloudadmin', u'projectmanager', u'admin'], u'_context_request_id': u'req-ff455811-461c-4c80-a7ad-d64e8311c142', u'_context_read_deleted': u'no', u'args': {u'affect_auto_assigned': False, u'address': u'172.16.93.110'}, u'_context_auth_token': 'SANITIZED', u'_context_is_admin': True, u'_context_project_id': u'canonistack_project', u'_context_timestamp': u'2012-05-03T14:30:42.506510', u'_context_user_id': u'admin', u'method': u'deallocate_floating_ip', u'_context_remote_address': u'172.16.93.65'} from (pid=18920) _safe_log /usr/lib/python2.7/dist-packages/nova/rpc/common.py:160
  2012-05-03 14:30:42 DEBUG nova.rpc.amqp [req-ff455811-461c-4c80-a7ad-d64e8311c142 admin canonistack_project] unpacked context: {'user_id': u'admin', 'roles': [u'cloudadmin', u'projectmanager', u'admin'], 'timestamp': '2012-05-03T14:30:42.506510', 'auth_token': 'SANITIZED', 'remote_address': u'172.16.93.65', 'is_admin': True, 'request_id': u'req-ff455811-461c-4c80-a7ad-d64e8311c142', 'project_id': u'canonistack_project', 'read_deleted': u'no'} from (pid=18920) _safe_log /usr/lib/python2.7/dist-packages/nova/rpc/common.py:160
  2012-05-03 14:30:42 WARNING nova.network.manager [req-ff455811-461c-4c80-a7ad-d64e8311c142 admin canonistack_project] Address |172.16.93.110| is not allocated to your project |canonistack_project|
  2012-05-03 14:30:42 ERROR nova.rpc.amqp [req-ff455811-461c-4c80-a7ad-d64e8311c142 admin canonistack_project] Exception during message handling
  2012-05-03 14:30:42 TRACE nova.rpc.amqp Traceback (most recent call last):
  2012-05-03 14:30:42 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py", line 252, in _process_data
  2012-05-03 14:30:42 TRACE nova.rpc.amqp     rval = node_func(context=ctxt, **node_args)
  2012-05-03 14:30:42 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/network/manager.py", line 258, in wrapped
  2012-05-03 14:30:42 TRACE nova.rpc.amqp     return func(self, context, *args, **kwargs)
  2012-05-03 14:30:42 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/network/manager.py", line 421, in deallocate_floating_ip
  2012-05-03 14:30:42 TRACE nova.rpc.amqp     self._floating_ip_owned_by_project(context, floating_ip)
  2012-05-03 14:30:42 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/network/manager.py", line 394, in _floating_ip_owned_by_project
  2012-05-03 14:30:42 TRACE nova.rpc.amqp     raise exception.NotAuthorized()
  2012-05-03 14:30:42 TRACE nova.rpc.amqp NotAuthorized: Not authorised.
  2012-05-03 14:30:42 TRACE nova.rpc.amqp

** Affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

** Tags: canonistack

** Tags added: canonistack

--
You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/994034

Title:
  Cannot deallocate floating addresses as cloudadmin

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/994034/+subscriptions

--
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
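[Editorial note] The traceback above ends inside `_floating_ip_owned_by_project`. As an illustration of the failure mode only (a simplified, hypothetical sketch, not the actual nova source), an ownership check that compares project IDs and never consults the admin flag carried in the request context will reject a cloudadmin request exactly as seen in the log:

```python
# Hypothetical simplification of the ownership check named in the
# traceback. The real nova code differs; this only illustrates why
# an is_admin=True context can still be refused.

class NotAuthorized(Exception):
    pass

def floating_ip_owned_by_project(context, floating_ip):
    # Only the project id is compared; context["is_admin"] is never
    # consulted, so even a cloudadmin request is rejected.
    if floating_ip["project_id"] != context["project_id"]:
        raise NotAuthorized()

ctx = {"project_id": "canonistack_project", "is_admin": True}
ip = {"address": "172.16.93.110", "project_id": "another_project"}

try:
    floating_ip_owned_by_project(ctx, ip)
    result = "allowed"
except NotAuthorized:
    result = "denied"
```

Under this reading, a fix would short-circuit the check when the context is an admin one, which matches the behaviour the reporter expected from the cloudadmin role.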
[Bug 994034] Re: Cannot deallocate floating addresses as cloudadmin
I ran the following SQL query to work around the issue:

  UPDATE floating_ips
     SET deleted_at = NULL,
         fixed_ip_id = NULL,
         project_id = NULL,
         host = NULL
   WHERE floating_ips.deleted = 0
     AND floating_ips.auto_assigned = 0
     AND host IS NULL;

--
https://bugs.launchpad.net/bugs/994034

Title:
  Cannot deallocate floating addresses as cloudadmin
[Bug 980930] Re: nova client does not respect regions for a subset of commands
@James: It looks like the --help output doesn't list --region, and lists --os_region_name and --region_name instead. That said, my tests seem to show that the --region option still works (for now).

Retesting:

  $ for opt in region region_name os_region_name; do
      for region in doesnotexist regionOne regionTwo; do
        echo opt: ${opt}, region: ${region}
        nova --${opt} ${region} endpoints | \
          awk '$2 ~ /region/ { print $4 }' | sort -u
      done
    done
  opt: region, region: doesnotexist
  ERROR:
  opt: region, region: regionOne
  regionOne
  opt: region, region: regionTwo
  regionOne
  opt: region_name, region: doesnotexist
  ERROR:
  opt: region_name, region: regionOne
  regionOne
  opt: region_name, region: regionTwo
  regionOne
  opt: os_region_name, region: doesnotexist
  ERROR:
  opt: os_region_name, region: regionOne
  regionOne
  opt: os_region_name, region: regionTwo
  regionOne

My results seem to be the same.

--
You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to python-novaclient in Ubuntu.
https://bugs.launchpad.net/bugs/980930

Title:
  nova client does not respect regions for a subset of commands

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/python-novaclient/+bug/980930/+subscriptions

--
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 978970] Re: hud fails to accept keyboard input after pressing TAB
@Sebastien: Works for Me™

--
https://bugs.launchpad.net/bugs/978970

Title:
  hud fails to accept keyboard input after pressing TAB
[Bug 985489] Re: nova-compute stops processing compute.$HOSTNAME occasionally
The symptoms are similar to what we experienced in LP#903212; however, I can confirm that libvirtd seems to be responding correctly in Precise. Is there further information that we can provide?

  $ dpkg-query --show nova-*
  nova-api                  2012.1~e4~20120210.12574-0ubuntu1
  nova-common               2012.1-0ubuntu2
  nova-compute              2012.1-0ubuntu2
  nova-compute-hypervisor
  nova-compute-kvm          2012.1-0ubuntu2

  $ cat /etc/lsb-release
  DISTRIB_ID=Ubuntu
  DISTRIB_RELEASE=12.04
  DISTRIB_CODENAME=precise
  DISTRIB_DESCRIPTION="Ubuntu precise (development branch)"

--
https://bugs.launchpad.net/bugs/985489

Title:
  nova-compute stops processing compute.$HOSTNAME occasionally
[Bug 985489] Re: nova-compute stops processing compute.$HOSTNAME occasionally
This has happened again.

Process listing:

  $ ps auxwwwf | grep [n]ova-compute
  nova     25735  0.0  0.0   48040     4 ?  Ss  Apr16  0:00 su -s /bin/sh -c exec nova-compute --flagfile=/etc/nova/nova.conf --flagfile=/etc/nova/nova-compute.conf nova
  nova     25746  1.2  0.1 1725088 32604 ?  Sl  Apr16 53:22  \_ /usr/bin/python /usr/bin/nova-compute --flagfile=/etc/nova/nova.conf --flagfile=/etc/nova/nova-compute.conf

Strace'ing the parent process:

  $ sudo strace -p 25735
  Process 25735 attached - interrupt to quit
  wait4(-1, ^C

Strace'ing the child process:

  $ sudo strace -p 25746
  Process 25746 attached - interrupt to quit
  restart_syscall(... resuming interrupted call ... ^C

Checking libvirtd:

  $ time sudo virsh list | wc -l
  33

  real	0m0.170s
  user	0m0.020s
  sys	0m0.012s

Last few lines from /var/log/nova/nova-compute.log:

  2012-04-19 07:04:28 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_bandwidth_usage from (pid=25746) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
  2012-04-19 07:04:28 INFO nova.compute.manager [-] Updating bandwidth usage cache
  2012-04-19 07:04:28 DEBUG nova.manager [-] Running periodic task ComputeManager.update_available_resource from (pid=25746) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
  --- restart happens now ---
  2012-04-19 13:54:25 DEBUG nova.service [-] Full set of FLAGS: from (pid=1012) wait /usr/lib/python2.7/dist-packages/nova/service.py:402
  2012-04-19 13:54:25 DEBUG nova.service [-] default_floating_pool : nova from (pid=1012) wait /usr/lib/python2.7/dist-packages/nova/service.py:411

Conclusion: libvirtd is responding, but nova-compute is not.

--
https://bugs.launchpad.net/bugs/985489

Title:
  nova-compute stops processing compute.$HOSTNAME occasionally
[Bug 966105] Re: indicator-messages hijacks Alt+F10 keystrokes
  $ dpkg-query --show unity indicator-messages
  indicator-messages        0.6.0-0ubuntu1
  unity                     5.10.0-0ubuntu3

  $ cat /etc/lsb-release
  DISTRIB_ID=Ubuntu
  DISTRIB_RELEASE=12.04
  DISTRIB_CODENAME=precise
  DISTRIB_DESCRIPTION="Ubuntu precise (development branch)"

--
https://bugs.launchpad.net/bugs/966105

Title:
  indicator-messages hijacks Alt+F10 keystrokes
[Bug 980930] [NEW] nova client does not respect regions
Public bug reported:

I am performing some tests with Keystone and multiple regions and have discovered that the nova client does not respect the --region argument correctly.

Setup:
======

I have defined services within two regions:
 - regionOne
 - regionTwo

Tests:
======

  # Ask for all endpoints
  $ nova endpoints
  Found more than one valid endpoint. Use a more restrictive filter
  ERROR: AmbiguousEndpoints: [{u'adminURL': u'http://localhost:8774/v1.1/defc27df9a0b4ef8a9d727f4277dbdeb', u'region': u'regionOne', 'serviceName': u'nova', u'internalURL': u'http://localhost:8774/v1.1/defc27df9a0b4ef8a9d727f4277dbdeb', u'publicURL': u'http://localhost:8774/v1.1/defc27df9a0b4ef8a9d727f4277dbdeb'}, {u'adminURL': u'http://localhost:8774/v1.1/defc27df9a0b4ef8a9d727f4277dbdeb', u'region': u'regionTwo', 'serviceName': u'nova', u'internalURL': u'http://localhost:8774/v1.1/defc27df9a0b4ef8a9d727f4277dbdeb', u'publicURL': u'http://localhost:8774/v1.1/defc27df9a0b4ef8a9d727f4277dbdeb'}]
  # I expect this as there are multiple regions

  # Ask for all endpoints in a region that does not exist
  $ nova --region doesnotexist endpoints
  Could not find any suitable endpoint. Correct region?
  ERROR:
  # Good. This is correct.

  # Ask for a region that does exist.
  $ nova --region regionOne endpoints
  +-------------+-------------------------------------------------------------+
  | nova        | Value                                                       |
  +-------------+-------------------------------------------------------------+
  | adminURL    | http://localhost:8774/v1.1/defc27df9a0b4ef8a9d727f4277dbdeb |
  | internalURL | http://localhost:8774/v1.1/defc27df9a0b4ef8a9d727f4277dbdeb |
  | publicURL   | http://localhost:8774/v1.1/defc27df9a0b4ef8a9d727f4277dbdeb |
  | region      | regionOne                                                   |
  | serviceName | nova                                                        |
  +-------------+-------------------------------------------------------------+
  [...]
  # Good. This looks correct.

  # Ask for endpoints in the second region.
  $ nova --region regionTwo endpoints
  +-------------+-------------------------------------------------------------+
  | nova        | Value                                                       |
  +-------------+-------------------------------------------------------------+
  | adminURL    | http://localhost:8774/v1.1/defc27df9a0b4ef8a9d727f4277dbdeb |
  | internalURL | http://localhost:8774/v1.1/defc27df9a0b4ef8a9d727f4277dbdeb |
  | publicURL   | http://localhost:8774/v1.1/defc27df9a0b4ef8a9d727f4277dbdeb |
  | region      | regionOne                                                   |
  +-------------+-------------------------------------------------------------+
  [...]
  # This is _not_ correct! I asked for regionTwo and received regionOne instead!

System Setup:
=============

  $ dpkg-query --show python-novaclient keystone
  keystone                  2012.1-0ubuntu1
  python-novaclient         2012.1-0ubuntu1

  $ cat /etc/lsb-release
  DISTRIB_ID=Ubuntu
  DISTRIB_RELEASE=12.04
  DISTRIB_CODENAME=precise
  DISTRIB_DESCRIPTION="Ubuntu precise (development branch)"

** Affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

** Tags: canonistack
[Bug 980930] Re: nova client does not respect regions
A little more clarification: when setting NOVACLIENT_DEBUG in my shell's environment, I can confirm that Keystone returns the full catalog for all regions to the nova client. Assuming this is the correct behavior for Keystone, this is a client-side filtering bug.

** Tags added: canonistack

--
https://bugs.launchpad.net/bugs/980930

Title:
  nova client does not respect regions
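[Editorial note] To make the suspected client-side bug concrete, here is a minimal sketch of the filtering the client would need to do, assuming (as the debug output above suggests) that Keystone hands back the full multi-region catalog. The function name and catalog entries below are illustrative only, not novaclient code:

```python
# Illustrative client-side region filter over a service catalog.
# A correct client keeps only entries whose 'region' matches the
# requested one; returning the first entry unconditionally would
# reproduce the reported bug (regionTwo -> regionOne).

def pick_endpoint(catalog, region):
    matches = [entry for entry in catalog if entry["region"] == region]
    if not matches:
        raise LookupError("Could not find any suitable endpoint. Correct region?")
    return matches[0]

catalog = [
    {"region": "regionOne", "publicURL": "http://one.example/v1.1/tenant"},
    {"region": "regionTwo", "publicURL": "http://two.example/v1.1/tenant"},
]

chosen = pick_endpoint(catalog, "regionTwo")
```

With filtering in place, asking for regionTwo yields the regionTwo entry; the buggy commands behave as if the filter step were skipped for part of the catalog.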
[Bug 980940] [NEW] euca2ools does not respect the --region option correctly
Public bug reported:

The euca2ools commands do not support the --region flag for anything other than EC2. I would like to define multiple regions and use these commands against Eucalyptus and OpenStack clouds (among others). Currently the endpoint URL is hardcoded (see the function below).

/usr/lib/pyshared/euca2ools/commands/eucacommand.py:

    def get_endpoint_url(self, region_name):
        """
        Get the URL needed to reach a region with a given name.
        This currently only works with EC2. In the future it may
        use other means to also work with Eucalyptus.
        """
        endpoint_template = 'https://ec2.%s.amazonaws.com/'
        endpoint_url = endpoint_template % region_name
        endpoint_dnsname = urlparse.urlparse(endpoint_url).hostname
        try:
            socket.getaddrinfo(endpoint_dnsname, None)
        except socket.gaierror:
            raise KeyError('Cannot resolve endpoint %s' % endpoint_dnsname)
        return endpoint_url

System information:

  $ cat /etc/lsb-release
  DISTRIB_ID=Ubuntu
  DISTRIB_RELEASE=12.04
  DISTRIB_CODENAME=precise
  DISTRIB_DESCRIPTION="Ubuntu precise (development branch)"

  $ dpkg-query --show euca2ools
  euca2ools                 2.0.0~bzr516-0ubuntu3

** Affects: euca2ools (Ubuntu)
   Importance: Undecided
   Status: New

** Tags: canonistack

** Tags added: canonistack

--
https://bugs.launchpad.net/bugs/980940

Title:
  euca2ools does not respect the --region option correctly
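[Editorial note] One possible direction for a fix, sketched below under the assumption that non-EC2 endpoints would come from a user-supplied region mapping. The helper, its `extra_regions` parameter, and the localhost URL are hypothetical, and the DNS resolution check from the original function is omitted for brevity:

```python
# Hypothetical sketch: consult a user-defined region -> endpoint URL
# mapping first, and fall back to the hardcoded EC2 pattern only when
# the region is not found there. Not actual euca2ools code.

EC2_TEMPLATE = 'https://ec2.%s.amazonaws.com/'

def get_endpoint_url(region_name, extra_regions=None):
    """Return the endpoint URL for a region, preferring user-defined regions."""
    extra_regions = extra_regions or {}
    if region_name in extra_regions:
        return extra_regions[region_name]
    # Fall back to the EC2 naming convention.
    return EC2_TEMPLATE % region_name

# A private cloud region defined by the user wins over the EC2 pattern:
url = get_endpoint_url('mycloud',
                       extra_regions={'mycloud': 'http://localhost:8773/services/Cloud'})
# An unknown region still resolves via the EC2 template:
fallback = get_endpoint_url('us-east-1')
```

The mapping could be populated from a config file or environment variables, which would let --region address Eucalyptus and OpenStack endpoints without changing the EC2 behaviour.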
[Bug 980930] Re: nova client does not respect regions
Further testing shows that nova list commands respect --region correctly, so this may only affect a subset of commands. I've updated the bug description to reflect this.

** Summary changed:

- nova client does not respect regions
+ nova client does not respect regions for a subset of commands

--
https://bugs.launchpad.net/bugs/980930

Title:
  nova client does not respect regions for a subset of commands
[Bug 978970] [NEW] hud fails to accept keyboard input after pressing TAB
Public bug reported:

While Alt+TABing between windows I accidentally summoned the HUD and triggered a case where all my keyboard input was ignored (including ESC, Backspace, and Return, but _not_ Ctrl+Alt+Fx).

How to reproduce:
 1. Hit the Alt key.
 2. Once the dash appears,
 3. start typing something (example: abcdef).
 4. Hit the TAB key.
 5. Notice that keyboard input is ignored in the HUD (for example, pressing other random keys, including ESC, does nothing).

Workaround: clicking the mouse outside of the HUD makes the HUD go away, and keyboard input returns.

What I expect:
 1. The HUD not to appear unless specifically summoned (not relevant for this bug).
 2. The HUD to respond to keystrokes once summoned.

Misc:
 1. I can successfully summon the HUD and dismiss it by pressing the ESC key as expected if I haven't pressed the TAB key.
 2. This bug seems to occur regardless of which application I am currently using. Tested Firefox, Terminator and Evince.
 3. I am running Unity-3D.

System details:

  $ cat /etc/lsb-release
  DISTRIB_ID=Ubuntu
  DISTRIB_RELEASE=12.04
  DISTRIB_CODENAME=precise
  DISTRIB_DESCRIPTION="Ubuntu precise (development branch)"

  $ dpkg-query --show compiz unity
  compiz                    1:0.9.7.4-0ubuntu3
  unity                     5.8.0-0ubuntu2

** Affects: unity (Ubuntu)
   Importance: Undecided
   Status: New

--
https://bugs.launchpad.net/bugs/978970

Title:
  hud fails to accept keyboard input after pressing TAB