[Yahoo-eng-team] [Bug 1754253] [NEW] make integration tests work in other OS
Public bug reported:

Most of the integration tests are written for Ubuntu and have hardcoded paths, service names or package names that may be different on other operating systems. It would be good to have the possibility to run the integration tests on an OS other than Ubuntu. As far as I can tell this would require a lot of tests to be rewritten, but first there must be an easy way to get OS-specific variables for a certain module into the test class. For example, how could the SaltConstant class be accessed from within salt_minion.yaml so we don't have to hardcode paths etc. in the collection scripts?

** Affects: cloud-init
     Importance: Undecided
         Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1754253

Title:
  make integration tests work in other OS

Status in cloud-init:
  New

Bug description:
  Most of the integration tests are written for Ubuntu and have
  hardcoded paths, service names or package names that may be different
  on other operating systems. It would be good to have the possibility
  to run the integration tests on an OS other than Ubuntu. As far as I
  can tell this would require a lot of tests to be rewritten, but first
  there must be an easy way to get OS-specific variables for a certain
  module into the test class. For example, how could the SaltConstant
  class be accessed from within salt_minion.yaml so we don't have to
  hardcode paths etc. in the collection scripts?

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1754253/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
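A sketch of what the requested lookup could look like. The names here (`DistroConstants`, `get_constants`) and the non-Ubuntu values are purely illustrative assumptions, not existing cloud-init test APIs:

```python
# Hypothetical per-distro constants registry for integration tests.
# Class and function names, and the FreeBSD values, are illustrative only.
class DistroConstants:
    def __init__(self, conf_path, service_name, package_name):
        self.conf_path = conf_path
        self.service_name = service_name
        self.package_name = package_name

_REGISTRY = {
    "ubuntu": DistroConstants("/etc/salt/minion.d/", "salt-minion", "salt-minion"),
    # FreeBSD, for example, keeps configuration under /usr/local (values assumed):
    "freebsd": DistroConstants("/usr/local/etc/salt/minion.d/", "salt_minion", "salt"),
}

def get_constants(distro, default="ubuntu"):
    """Return constants for ``distro``, falling back to the default distro."""
    return _REGISTRY.get(distro, _REGISTRY[default])
```

A collection script could then ask `get_constants(distro).conf_path` instead of hardcoding the Ubuntu path.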
[Yahoo-eng-team] [Bug 1753584] Re: incorrect ImportError message raised
Reviewed:  https://review.openstack.org/549870
Committed: https://git.openstack.org/cgit/openstack/keystone/commit/?id=852bd45c94df4a5c5ccb1e42077ec4f2d1a57272
Submitter: Zuul
Branch:    master

commit 852bd45c94df4a5c5ccb1e42077ec4f2d1a57272
Author: Mark Hamzy
Date:   Mon Mar 5 15:18:31 2018 -0600

    Fix formatting of ImportError

    Fix formatting of ImportError when using a driver not found in the
    list of token providers.

    Change-Id: I0ac8ac199aeebd20960ad0654461f1f81c4d7da0
    Closes-bug: 1753584

** Changed in: keystone
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1753584

Title:
  incorrect ImportError message raised

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Logs show:

  2018-03-05 20:50:01.665 35 WARNING stevedore.named [-] Could not load uuid
  2018-03-05 20:50:01.666 35 CRITICAL keystone [-] Unhandled error: ImportError: (u'Unable to find %(name)r driver in %(namespace)r.', {'namespace': 'keystone.token.provider', 'name': 'uuid'})
  2018-03-05 20:50:01.666 35 ERROR keystone Traceback (most recent call last):
  2018-03-05 20:50:01.666 35 ERROR keystone   File "/var/lib/kolla/venv/bin/keystone-manage", line 10, in <module>
  2018-03-05 20:50:01.666 35 ERROR keystone     sys.exit(main())
  2018-03-05 20:50:01.666 35 ERROR keystone   File "/var/lib/kolla/venv/lib/python2.7/site-packages/keystone/cmd/manage.py", line 45, in main
  2018-03-05 20:50:01.666 35 ERROR keystone     cli.main(argv=sys.argv, config_files=config_files)
  2018-03-05 20:50:01.666 35 ERROR keystone   File "/var/lib/kolla/venv/lib/python2.7/site-packages/keystone/cmd/cli.py", line 1349, in main
  2018-03-05 20:50:01.666 35 ERROR keystone     CONF.command.cmd_class.main()
  2018-03-05 20:50:01.666 35 ERROR keystone   File "/var/lib/kolla/venv/lib/python2.7/site-packages/keystone/cmd/cli.py", line 397, in main
  2018-03-05 20:50:01.666 35 ERROR keystone     klass = cls()
  2018-03-05 20:50:01.666 35 ERROR keystone   File "/var/lib/kolla/venv/lib/python2.7/site-packages/keystone/cmd/cli.py", line 66, in __init__
  2018-03-05 20:50:01.666 35 ERROR keystone     self.load_backends()
  2018-03-05 20:50:01.666 35 ERROR keystone   File "/var/lib/kolla/venv/lib/python2.7/site-packages/keystone/cmd/cli.py", line 129, in load_backends
  2018-03-05 20:50:01.666 35 ERROR keystone     drivers = backends.load_backends()
  2018-03-05 20:50:01.666 35 ERROR keystone   File "/var/lib/kolla/venv/lib/python2.7/site-packages/keystone/server/backends.py", line 53, in load_backends
  2018-03-05 20:50:01.666 35 ERROR keystone     drivers = {d._provides_api: d() for d in managers}
  2018-03-05 20:50:01.666 35 ERROR keystone   File "/var/lib/kolla/venv/lib/python2.7/site-packages/keystone/server/backends.py", line 53, in
  2018-03-05 20:50:01.666 35 ERROR keystone     drivers = {d._provides_api: d() for d in managers}
  2018-03-05 20:50:01.666 35 ERROR keystone   File "/var/lib/kolla/venv/lib/python2.7/site-packages/keystone/token/provider.py", line 65, in __init__
  2018-03-05 20:50:01.666 35 ERROR keystone     super(Manager, self).__init__(CONF.token.provider)
  2018-03-05 20:50:01.666 35 ERROR keystone   File "/var/lib/kolla/venv/lib/python2.7/site-packages/keystone/common/manager.py", line 181, in __init__
  2018-03-05 20:50:01.666 35 ERROR keystone     self.driver = load_driver(self.driver_namespace, driver_name)
  2018-03-05 20:50:01.666 35 ERROR keystone   File "/var/lib/kolla/venv/lib/python2.7/site-packages/keystone/common/manager.py", line 81, in load_driver
  2018-03-05 20:50:01.666 35 ERROR keystone     raise ImportError(msg, {'name': driver_name, 'namespace': namespace})
  2018-03-05 20:50:01.666 35 ERROR keystone ImportError: (u'Unable to find %(name)r driver in %(namespace)r.', {'namespace': 'keystone.token.provider', 'name': 'uuid'})

  which is misleading. The correct error should be:

  2018-03-05 20:50:25.517 47 ERROR keystone ImportError: Unable to find 'uuid' driver in 'keystone.token.provider'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1753584/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
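The Python behaviour behind the bug above can be reproduced outside keystone. This minimal example (not keystone's actual code) shows why passing the parameters as a second exception argument produces the misleading message, and why interpolating before raising fixes it:

```python
# Minimal reproduction of the formatting bug (illustrative, not keystone code).
msg = "Unable to find %(name)r driver in %(namespace)r."
params = {'name': 'uuid', 'namespace': 'keystone.token.provider'}

# Broken: the dict becomes a second exception argument, so str() shows the
# raw template and the dict as a tuple instead of the formatted message.
broken = ImportError(msg, params)

# Fixed: interpolate before constructing the exception.
fixed = ImportError(msg % params)
print(fixed)  # Unable to find 'uuid' driver in 'keystone.token.provider'.
```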
[Yahoo-eng-team] [Bug 1754185] [NEW] Limit resources should support descriptions
Public bug reported:

The experimental unified limits API was discussed during the Rocky PTG. One of the things people wanted to see supported was an optional `description` attribute for limits. Supporting this will give users an opportunity to describe a limit, much like they can describe other things in keystone using a `description` field.

** Affects: keystone
     Importance: Medium
         Status: Triaged

** Tags: limits

** Changed in: keystone
       Status: New => Triaged

** Changed in: keystone
   Importance: Undecided => Medium

** Tags added: limits

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1754185

Title:
  Limit resources should support descriptions

Status in OpenStack Identity (keystone):
  Triaged

Bug description:
  The experimental unified limits API was discussed during the Rocky
  PTG. One of the things people wanted to see supported was an optional
  `description` attribute for limits. Supporting this will give users an
  opportunity to describe a limit, much like they can describe other
  things in keystone using a `description` field.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1754185/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
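As an illustration of what "optional" could mean at the resource layer for the bug above, here is a hedged sketch; `create_limit` is a made-up helper, not a keystone API:

```python
# Illustrative only: an optional description is stored verbatim when given
# and simply omitted otherwise, mirroring other keystone resources.
def create_limit(resource_name, default_limit, description=None):
    limit = {"resource_name": resource_name, "default_limit": default_limit}
    if description is not None:
        limit["description"] = description
    return limit
```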
[Yahoo-eng-team] [Bug 1754184] [NEW] Unified limits API shouldn't return a list of all limits
Public bug reported:

During the Rocky PTG, we reviewed the unified limit API as a group. One of the things that became apparent during the discussion was that the API shouldn't return a list of all limits when updating limits or creating new limits.

Originally, the API was designed this way so that an operator, or user, could double check their work after making a change. Where things get a bit complicated is if you attempt to delegate limit management to other users. For example, say a system administrator creates a new domain for a customer and sets some limits on that domain. Let's also assume the customer has the ability to create projects within their domain and manage their limits with respect to the limits the system administrator set on the domain. If the customer makes a change to a limit within their domain, they will get a response that contains limit information for all projects, essentially leaking project information to someone who isn't authorized to see that information.

We should change the unified limit API to account for this by not returning a list of all limits on POST and PUT operations. This will be a backwards incompatible change, but we should be able to make it because the API is still marked as experimental.

** Affects: keystone
     Importance: Medium
         Status: Triaged

** Tags: limits

** Changed in: keystone
       Status: New => Triaged

** Changed in: keystone
   Importance: Undecided => Medium

** Tags added: limits

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1754184

Title:
  Unified limits API shouldn't return a list of all limits

Status in OpenStack Identity (keystone):
  Triaged

Bug description:
  During the Rocky PTG, we reviewed the unified limit API as a group.
  One of the things that became apparent during the discussion was that
  the API shouldn't return a list of all limits when updating limits or
  creating new limits.

  Originally, the API was designed this way so that an operator, or
  user, could double check their work after making a change. Where
  things get a bit complicated is if you attempt to delegate limit
  management to other users. For example, say a system administrator
  creates a new domain for a customer and sets some limits on that
  domain. Let's also assume the customer has the ability to create
  projects within their domain and manage their limits with respect to
  the limits the system administrator set on the domain. If the customer
  makes a change to a limit within their domain, they will get a
  response that contains limit information for all projects, essentially
  leaking project information to someone who isn't authorized to see
  that information.

  We should change the unified limit API to account for this by not
  returning a list of all limits on POST and PUT operations. This will
  be a backwards incompatible change, but we should be able to make it
  because the API is still marked as experimental.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1754184/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
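The behaviour requested above can be sketched in a few lines; the names and data are illustrative, not keystone's implementation:

```python
# Sketch of the access-control concern: POST/PUT responses should echo only
# the limits touched by the request, never every limit in the deployment.
ALL_LIMITS = [
    {"id": 1, "project_id": "customer-a", "resource": "cores", "limit": 20},
    {"id": 2, "project_id": "customer-b", "resource": "cores", "limit": 40},
]

def update_limit_response(updated_ids):
    """Return only the limits the caller just changed."""
    return [l for l in ALL_LIMITS if l["id"] in updated_ids]
```

With this shape, customer A updating their own limit never sees customer B's rows in the response.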
[Yahoo-eng-team] [Bug 1749667] Re: neutron doesn't correctly handle unknown protocols and should whitelist known and handled protocols
Reviewed:  https://review.openstack.org/545091
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=b564871bb759a38cf96527f94e7c7d4cc760b1c9
Submitter: Zuul
Branch:    master

commit b564871bb759a38cf96527f94e7c7d4cc760b1c9
Author: Brian Haley
Date:   Thu Feb 15 13:57:32 2018 -0500

    Only allow SG port ranges for whitelisted protocols

    Iptables only supports port-ranges for certain protocols, others
    will generate failures, possibly leaving the agent looping trying
    to apply rules. Change to not allow port ranges outside of the
    list of known good protocols.

    Change-Id: I5867f77fc5aedc169b42f50def0424ff209c164c
    Closes-bug: #1749667

** Changed in: neutron
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1749667

Title:
  neutron doesn't correctly handle unknown protocols and should
  whitelist known and handled protocols

Status in neutron:
  Fix Released

Bug description:
  We have had problems with the openvswitch agent continuously
  restarting and never actually completing setup because of this:

  # Completed by iptables_manager
  ; Stdout: ; Stderr: iptables-restore v1.4.21: multiport only works with TCP, UDP, UDPLITE, SCTP and DCCP
  Error occurred at line: 83
  Try `iptables-restore -h' or 'iptables-restore --help' for more information.
  83. -I neutron-openvswi- 69 -s -p 112 -m multiport --dports 1:65535 -j RETURN

  ---

  Someone has managed to inject a rule that is, effectively, a DoS.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1749667/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
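The idea behind the fix above can be sketched as a validation step. The protocol list follows the iptables error message in the bug report; the function name is illustrative rather than neutron's actual code:

```python
# Sketch: iptables' multiport match only supports a handful of protocols,
# so reject security-group rules that combine a port range with anything
# else (e.g. protocol 112/VRRP as in the bug report).
PORT_RANGE_PROTOCOLS = {"tcp", "udp", "udplite", "sctp", "dccp"}

def validate_port_range(protocol, port_range_min, port_range_max):
    """Raise ValueError if a port range is given for an unsupported protocol."""
    has_range = port_range_min is not None or port_range_max is not None
    if has_range and protocol not in PORT_RANGE_PROTOCOLS:
        raise ValueError(
            "Port ranges are not supported for protocol %r" % protocol)
```

Rejecting the rule at the API layer keeps the agent from looping on an `iptables-restore` that can never succeed.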
[Yahoo-eng-team] [Bug 1750618] Re: rebuild to same host with a different image results in erroneously doing a Claim
Reviewed:  https://review.openstack.org/546268
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=a39029076c7997236a7f999682fb1e998c474204
Submitter: Zuul
Branch:    master

commit a39029076c7997236a7f999682fb1e998c474204
Author: Matt Riedemann
Date:   Tue Feb 20 13:48:12 2018 -0500

    Only attempt a rebuild claim for an evacuation to a new host

    Change I11746d1ea996a0f18b7c54b4c9c21df58cc4714b changed the behavior
    of the API and conductor when rebuilding an instance with a new image
    such that the image is run through the scheduler filters again to see
    if it will work on the existing host that the instance is running on.

    As a result, conductor started passing 'scheduled_node' to the compute
    which was using it for logic to tell if a claim should be attempted.
    We don't need to do a claim for a rebuild since we're on the same host.

    This removes the scheduled_node logic from the claim code, as we
    should only ever attempt a claim if we're evacuating, which we can
    determine based on the 'recreate' parameter.

    Change-Id: I7fde8ce9dea16679e76b0cb2db1427aeeec0c222
    Closes-Bug: #1750618

** Changed in: nova
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1750618

Title:
  rebuild to same host with a different image results in erroneously
  doing a Claim

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  In Progress
Status in OpenStack Compute (nova) pike series:
  In Progress
Status in OpenStack Compute (nova) queens series:
  In Progress

Bug description:
  As of stable/pike, if we do a rebuild-to-same-node with a new image,
  it results in ComputeManager.rebuild_instance() being called with
  "scheduled_node=" and "recreate=False". This results in a new Claim,
  which seems wrong since we're not changing the flavor, and that claim
  could fail if the compute node is already full.

  The comments in ComputeManager.rebuild_instance() make it appear that
  it expects both "recreate" and "scheduled_node" to be None for the
  rebuild-to-same-host case, otherwise it will do a Claim. However, if
  we rebuild to a different image it ends up going through the
  scheduler, which means that "scheduled_node" is not None.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1750618/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
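The decision logic described in the bug above can be illustrated with simplified stand-ins (not nova's actual code):

```python
# Before the fix, the claim decision was keyed off scheduled_node, which is
# also set for a same-host rebuild with a new image (it went through the
# scheduler). The fix keys it off 'recreate', which is only True for an
# evacuation to a new host.
def should_claim_before_fix(scheduled_node, recreate):
    return scheduled_node is not None

def should_claim_after_fix(scheduled_node, recreate):
    return bool(recreate)  # only evacuations need a new resource claim
```

The buggy case is exactly `scheduled_node="somehost", recreate=False`: the old logic claims, the new logic does not.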
[Yahoo-eng-team] [Bug 1750777] Re: openvswitch agent eating CPU, time spent in ip_conntrack.py
Reviewed:  https://review.openstack.org/548976
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=4c8b97eca32c9c2beadf95fef14ed5b7d8981c5a
Submitter: Zuul
Branch:    master

commit 4c8b97eca32c9c2beadf95fef14ed5b7d8981c5a
Author: Brian Haley
Date:   Thu Mar 1 15:42:59 2018 +

    Do not start conntrack worker thread from __init__

    Instead, start it when the first entry is being added to the queue.
    Also, log any exceptions just in case get() throws something so we
    can do further debugging.

    Changed class from Queue to LightQueue was done after going through
    the eventlet.queue code looking at usage, since it's a little
    smaller and should be faster.

    Change-Id: Ie84be88382f327ebe312bf17ec2dc5c80a8de35f
    Closes-bug: 1750777

** Changed in: neutron
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1750777

Title:
  openvswitch agent eating CPU, time spent in ip_conntrack.py

Status in neutron:
  Fix Released

Bug description:
  We just ran into a case where the openvswitch agent (local devstack,
  current master branch) eats 100% of CPU time. Pyflame profiling shows
  the time being largely spent in neutron.agent.linux.ip_conntrack,
  line 95.

  https://github.com/openstack/neutron/blob/master/neutron/agent/linux/ip_conntrack.py#L95

  The code around this line is:

      while True:
          pool.spawn_n(self._process_queue)

  The documentation of eventlet.spawn_n says: "The same as spawn(), but
  it’s not possible to know how the function terminated (i.e. no return
  value or exceptions). This makes execution faster. See spawn_n for
  more details."

  I suspect that GreenPool.spawn_n may behave similarly. It seems
  plausible that spawn_n is returning very quickly because of some
  error, and then all time is quickly spent in a short-circuited while
  loop.
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1750777/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
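The shape of the committed fix above, starting the worker lazily on the first enqueue instead of spawning from `__init__`, can be sketched with the stdlib standing in for eventlet (class and method names are illustrative, not neutron's):

```python
import threading
import queue

class ConntrackWorker:
    """Sketch of the fix: the worker thread is started on first submit()
    rather than in __init__ (threading/queue stand in for eventlet here)."""

    def __init__(self):
        self._queue = queue.Queue()
        self._started = False
        self._lock = threading.Lock()

    def _ensure_worker(self):
        with self._lock:
            if not self._started:
                t = threading.Thread(target=self._process_queue, daemon=True)
                t.start()
                self._started = True

    def submit(self, entry):
        self._ensure_worker()   # lazy start: nothing spins at init time
        self._queue.put(entry)

    def _process_queue(self):
        while True:
            entry = self._queue.get()  # blocks instead of busy-looping
            # ... apply the conntrack update for `entry` here ...
```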
[Yahoo-eng-team] [Bug 1752289] Re: ServiceCatalog does not contain "network" service
Reviewed:  https://review.openstack.org/548572
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=3a3b0f09db318faf1a1ea711a73bb365cab8b233
Submitter: Zuul
Branch:    master

commit 3a3b0f09db318faf1a1ea711a73bb365cab8b233
Author: Thomas Bechtold
Date:   Wed Feb 28 11:45:01 2018 +0100

    Allow 'network' in RequestContext service_catalog

    When booting instances, nova might create neutron resources. For
    that, the network service endpoint needs to be available. Otherwise
    we run into:

    EndpointNotFound: ['internal', 'public'] endpoint for network service \
    not found

    Change-Id: Iaed84826b76ab976ffdd1c93106b7bae700a64a9
    Closes-Bug: #1752289

** Changed in: nova
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1752289

Title:
  ServiceCatalog does not contain "network" service

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  In Progress

Bug description:
  On SLE12SP3, openstack-nova 17.0.0.0~xrc2~dev160-1.1, I try to boot a
  cirros instance and get:

  2018-02-28 10:24:07.553 10768 ERROR nova.compute.manager [req-53fa6935-f60d-4e07-bc75-14b6a5336330 f80483de8573468b869e64262780a903 d9685d1130d74a73af6ee213c421d9de - default default] [instance: 0ae671f6-5241-486f-9054-1100b124f704] Instance failed to spawn: EndpointNotFound: ['internal', 'public'] endpoint for network service not found
  2018-02-28 10:24:07.553 10768 ERROR nova.compute.manager [instance: 0ae671f6-5241-486f-9054-1100b124f704] Traceback (most recent call last):
  2018-02-28 10:24:07.553 10768 ERROR nova.compute.manager [instance: 0ae671f6-5241-486f-9054-1100b124f704]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2236, in _build_resources
  2018-02-28 10:24:07.553 10768 ERROR nova.compute.manager [instance: 0ae671f6-5241-486f-9054-1100b124f704]     yield resources
  2018-02-28 10:24:07.553 10768 ERROR nova.compute.manager [instance: 0ae671f6-5241-486f-9054-1100b124f704]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2019, in _build_and_run_instance
  2018-02-28 10:24:07.553 10768 ERROR nova.compute.manager [instance: 0ae671f6-5241-486f-9054-1100b124f704]     block_device_info=block_device_info)
  2018-02-28 10:24:07.553 10768 ERROR nova.compute.manager [instance: 0ae671f6-5241-486f-9054-1100b124f704]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3011, in spawn
  2018-02-28 10:24:07.553 10768 ERROR nova.compute.manager [instance: 0ae671f6-5241-486f-9054-1100b124f704]     mdevs=mdevs)
  2018-02-28 10:24:07.553 10768 ERROR nova.compute.manager [instance: 0ae671f6-5241-486f-9054-1100b124f704]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5256, in _get_guest_xml
  2018-02-28 10:24:07.553 10768 ERROR nova.compute.manager [instance: 0ae671f6-5241-486f-9054-1100b124f704]     network_info_str = str(network_info)
  2018-02-28 10:24:07.553 10768 ERROR nova.compute.manager [instance: 0ae671f6-5241-486f-9054-1100b124f704]   File "/usr/lib/python2.7/site-packages/nova/network/model.py", line 568, in __str__
  2018-02-28 10:24:07.553 10768 ERROR nova.compute.manager [instance: 0ae671f6-5241-486f-9054-1100b124f704]     return self._sync_wrapper(fn, *args, **kwargs)
  2018-02-28 10:24:07.553 10768 ERROR nova.compute.manager [instance: 0ae671f6-5241-486f-9054-1100b124f704]   File "/usr/lib/python2.7/site-packages/nova/network/model.py", line 551, in _sync_wrapper
  2018-02-28 10:24:07.553 10768 ERROR nova.compute.manager [instance: 0ae671f6-5241-486f-9054-1100b124f704]     self.wait()
  2018-02-28 10:24:07.553 10768 ERROR nova.compute.manager [instance: 0ae671f6-5241-486f-9054-1100b124f704]   File "/usr/lib/python2.7/site-packages/nova/network/model.py", line 583, in wait
  2018-02-28 10:24:07.553 10768 ERROR nova.compute.manager [instance: 0ae671f6-5241-486f-9054-1100b124f704]     self[:] = self._gt.wait()
  2018-02-28 10:24:07.553 10768 ERROR nova.compute.manager [instance: 0ae671f6-5241-486f-9054-1100b124f704]   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 175, in wait
  2018-02-28 10:24:07.553 10768 ERROR nova.compute.manager [instance: 0ae671f6-5241-486f-9054-1100b124f704]     return self._exit_event.wait()
  2018-02-28 10:24:07.553 10768 ERROR nova.compute.manager [instance: 0ae671f6-5241-486f-9054-1100b124f704]   File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 125, in wait
  2018-02-28 10:24:07.553 10768 ERROR nova.compute.manager [instance: 0ae671f6-5241-486f-9054-1100b124f704]     current.throw(*self._exc)
  2018-02-28 10:24:07.553 10768 ERROR nova.compute.manager [instance: 0ae671f6-5241-486f-9054-1100b124f704]   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 214,
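The committed change above amounts to adding 'network' to the whitelist of service types nova keeps from the service catalog on its RequestContext. A sketch of that filter; the exact list nova uses may differ, so treat the types below as illustrative:

```python
# Illustrative whitelist filter over a keystone-style service catalog.
# 'network' must be in the list for neutron endpoints to stay resolvable
# when nova later creates network resources during boot.
ALLOWED_SERVICE_TYPES = ('volume', 'volumev2', 'volumev3', 'key-manager',
                         'placement', 'image', 'network')

def filter_service_catalog(catalog):
    """Keep only catalog entries whose type nova needs later."""
    return [s for s in catalog if s.get('type') in ALLOWED_SERVICE_TYPES]
```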
[Yahoo-eng-team] [Bug 1753540] Re: When isolated metadata is enabled, metadata proxy doesn't get automatically started/stopped when needed
Reviewed:  https://review.openstack.org/549822
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=9362d4f1f21df2a27c818bb0c2918241eb67e3d0
Submitter: Zuul
Branch:    master

commit 9362d4f1f21df2a27c818bb0c2918241eb67e3d0
Author: Daniel Alvarez
Date:   Mon Mar 5 18:05:54 2018 +0100

    Spawn/stop metadata proxies upon adding/deleting router interfaces

    When a network becomes isolated and isolated_metadata_enabled=True,
    the DHCP agent won't spawn the required metadata proxy instance
    unless the agent gets restarted. Similarly, it won't stop them when
    the network is no longer isolated.

    This patch fixes it by updating the isolated metadata proxy on
    port_update_end and port_delete_end methods which are invoked every
    time a router interface port is added, updated or deleted.

    Change-Id: I5c197a5755135357c6465dfe4803019a2ad52c14
    Closes-Bug: #1753540
    Signed-off-by: Daniel Alvarez

** Changed in: neutron
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1753540

Title:
  When isolated metadata is enabled, metadata proxy doesn't get
  automatically started/stopped when needed

Status in neutron:
  Fix Released

Bug description:
  When the enable_isolated_metadata option is set to True in the DHCP
  agent configuration, metadata proxy instances won't get started
  dynamically when the network becomes isolated. Similarly, when a
  subnet is added to the router, they don't get stopped if they were
  already running.

  100% reproducible with enable_isolated_metadata=True:

  1. Create a network, a subnet and a router.
  2. Check that there's a proxy instance running in the DHCP namespace
     for this network:
     neutron 89 1 0 17:01 ? 00:00:00 haproxy -f /var/lib/neutron/ns-metadata-proxy/9d1c7905-a887-419a-a885-9b07c20c2012.conf
  3. Attach the subnet to the router.
  4. Verify that the proxy instance is still running.
  5. Restart the DHCP agent.
  6. Verify that the proxy instance went away (since the network is not
     isolated).
  7. Remove the subnet from the router.
  8. Verify that the proxy instance has not been spawned.

  At this point, booting any VM on the network will fail since it won't
  be able to fetch metadata. However, any update on the network/subnet
  will trigger the agent to refresh the status of the isolated metadata
  proxy. For example, `openstack network set --name foo` would trigger
  the DHCP agent to spawn the proxy for that network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1753540/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
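The approach of the fix above, re-evaluating isolation whenever a router interface port changes, can be sketched like this (names and data shapes are illustrative, not the DHCP agent's actual code):

```python
# Sketch: on every router-interface port add/update/delete notification,
# recompute whether the network is isolated (no router interface on it)
# and start or stop its metadata proxy to match.
def sync_isolated_metadata_proxy(network, proxy):
    has_router_port = any(
        p["device_owner"].startswith("network:router_interface")
        for p in network["ports"])
    isolated = not has_router_port
    if isolated and not proxy.running:
        proxy.start()      # network just became isolated: spawn proxy
    elif not isolated and proxy.running:
        proxy.stop()       # a router now serves metadata: stop proxy
```

Calling this from the port notification handlers removes the need for an agent restart or an unrelated network update to fix the proxy state.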
[Yahoo-eng-team] [Bug 1754133] [NEW] login page exception - hz-login-finder doesn't function because of horizon.app loading failure
Public bug reported:

This happens on master and stable/pike; I didn't check other versions.

When accessing the horizon login page in Chrome, press F12 and you will see the following exceptions in the console. With the default keystone credentials this doesn't hurt; once logged in, the exceptions are gone. However, when using the WEBSSO feature, where angular invokes the hz-login-finder directive to hide the username/password inputs... it doesn't function. It looks like this is caused by a loading problem of the horizon.app module.

Exceptions:

Uncaught SyntaxError: Unexpected token <
c575dddbc1e4.js:325 Uncaught ReferenceError: gettext is not defined
    at c575dddbc1e4.js:325
    at c575dddbc1e4.js:325
(anonymous) @ c575dddbc1e4.js:325
(anonymous) @ c575dddbc1e4.js:325
732ce617825a.js:699 Uncaught Error: [$injector:nomod] Module 'horizon.app' is not available! You either misspelled the module name or forgot to load it. If registering a module ensure that you specify the dependencies as the second argument.
http://errors.angularjs.org/1.5.8/$injector/nomod?p0=horizon.app
    at 732ce617825a.js:699
    at 732ce617825a.js:818
    at ensure (732ce617825a.js:816)
    at Object.module (732ce617825a.js:818)
    at 680be8487836.js:1
(anonymous) @ 732ce617825a.js:699
(anonymous) @ 732ce617825a.js:818
ensure @ 732ce617825a.js:816
module @ 732ce617825a.js:818
(anonymous) @ 680be8487836.js:1
732ce617825a.js:699 Uncaught Error: [$injector:modulerr] Failed to instantiate module horizon.app due to:
Error: [$injector:nomod] Module 'horizon.app' is not available! You either misspelled the module name or forgot to load it. If registering a module ensure that you specify the dependencies as the second argument.
http://errors.angularjs.org/1.5.8/$injector/nomod?p0=horizon.app at https://myhelion.test/static/dashboard/js/732ce617825a.js:699:8 at https://myhelion.test/static/dashboard/js/732ce617825a.js:818:59 at ensure (https://myhelion.test/static/dashboard/js/732ce617825a.js:816:320) at module (https://myhelion.test/static/dashboard/js/732ce617825a.js:818:8) at https://myhelion.test/static/dashboard/js/732ce617825a.js:925:35 at forEach (https://myhelion.test/static/dashboard/js/732ce617825a.js:703:400) at loadModules (https://myhelion.test/static/dashboard/js/732ce617825a.js:924:156) at createInjector (https://myhelion.test/static/dashboard/js/732ce617825a.js:913:464) at doBootstrap (https://myhelion.test/static/dashboard/js/732ce617825a.js:792:36) at bootstrap (https://myhelion.test/static/dashboard/js/732ce617825a.js:793:58) http://errors.angularjs.org/1.5.8/$injector/modulerr?p0=horizon.app&p1=Error%3A%20%5B%24injector%3Anomod%5D%20Module%20'horizon.app'%20is%20not%20available!%20You%20either%20misspelled%20the%20module%20name%20or%20forgot%20to%20load%20it.%20If%20registering%20a%20module%20ensure%20that%20you%20specify%20the%20dependencies%20as%20the%20second%20argument.%0Ahttp%3A%2F%2Ferrors.angularjs.org%2F1.5.8%2F%24injector%2Fnomod%3Fp0%3Dhorizon.app%0A%20%20%20%20at%20https%3A%2F%2Fmyhelion.test%2Fstatic%2Fdashboard%2Fjs%2F732ce617825a.js%3A699%3A8%0A%20%20%20%20at%20https%3A%2F%2Fmyhelion.test%2Fstatic%2Fdashboard%2Fjs%2F732ce617825a.js%3A818%3A59%0A%20%20%20%20at%20ensure%20(https%3A%2F%2Fmyhelion.test%2Fstatic%2Fdashboard%2Fjs%2F732ce617825a.js%3A816%3A320)%0A%20%20%20%20at%20module%20(https%3A%2F%2Fmyhelion.test%2Fstatic%2Fdashboard%2Fjs%2F732ce617825a.js%3A818%3A8)%0A%20%20%20%20at%20https%3A%2F%2Fmyhelion.test%2Fstatic%2Fdashboard%2Fjs%2F732ce617825a.js%3A925%3A35%0A%20%20%20%20at%20forEach%20(https%3A%2F%2Fmyhelion.test%2Fstatic%2Fdashboard%2Fjs%2F732ce617825a.js%3A703%3A400)%0A%20%20%20%20at%20loadModules%20(https%3A%2F%2Fmyhelion.test%2Fstatic%2Fdashboar
d%2Fjs%2F732ce617825a.js%3A924%3A156)%0A%20%20%20%20at%20createInjector%20(https%3A%2F%2Fmyhelion.test%2Fstatic%2Fdashboard%2Fjs%2F732ce617825a.js%3A913%3A464)%0A%20%20%20%20at%20doBootstrap%20(https%3A%2F%2Fmyhelion.test%2Fstatic%2Fdashboard%2Fjs%2F732ce617825a.js%3A792%3A36)%0A%20%20%20%20at%20bootstrap%20(https%3A%2F%2Fmyhelion.test%2Fstatic%2Fdashboard%2Fjs%2F732ce617825a.js%3A793%3A58) at https://myhelion.test/static/dashboard/js/732ce617825a.js:699:8 at https://myhelion.test/static/dashboard/js/732ce617825a.js:818:59 at ensure (https://myhelion.test/static/dashboard/js/732ce617825a.js:816:320) at module (https://myhelion.test/static/dashboard/js/732ce617825a.js:818:8) at https://myhelion.test/static/dashboard/js/732ce617825a.js:925:35 at forEach (https://myhelion.test/static/dashboard/js/732ce617825a.js:703:400) at loadModules (https://myhelion.test/static/dashboard/js/732ce617825a.js:924:156) at createInjector (https://myhelion.test/static/dashboard/js/732ce617825a.js:913:464) at doBootstrap (https://myhelion.test/static/dashboard/js/732ce617825a.js:792:36) at bootstrap (https://myhelion.test/static/dashboard/js/732ce617825a.js:793:58) http://errors.angularjs.org/1.5.8/$inj
[Yahoo-eng-team] [Bug 1754123] [NEW] Support filter with floating IP address substring
Public bug reported:

This report proposes to introduce a new filter for filtering the floating IP list result by a substring of the IP address. For example:

  GET /v2.0/floatingips?floating_ip_address_substr=172.24.4.

This allows users/admins to efficiently retrieve a list of floating IP addresses within a network, which is a common usage pattern in real production scenarios. A use case is that a cloud admin finds some suspicious traffic from a known floating IP CIDR and wants to locate the targets (i.e. the VMs). Retrieving a filtered list of floating IP addresses would be the first step for them.

** Affects: neutron
     Importance: Undecided
     Assignee: Hongbin Lu (hongbin.lu)
         Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1754123

Title:
  Support filter with floating IP address substring

Status in neutron:
  New

Bug description:
  This report proposes to introduce a new filter for filtering the
  floating IP list result by a substring of the IP address. For example:

    GET /v2.0/floatingips?floating_ip_address_substr=172.24.4.

  This allows users/admins to efficiently retrieve a list of floating
  IP addresses within a network, which is a common usage pattern in
  real production scenarios. A use case is that a cloud admin finds
  some suspicious traffic from a known floating IP CIDR and wants to
  locate the targets (i.e. the VMs). Retrieving a filtered list of
  floating IP addresses would be the first step for them.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1754123/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
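The proposed filter's semantics amount to a substring match on the `floating_ip_address` field. A plain in-memory illustration follows; the parameter name matches the proposal, but a real implementation would filter at the database layer rather than in Python:

```python
# Illustrative substring filter matching the proposed API parameter.
def filter_floatingips(floatingips, floating_ip_address_substr=None):
    if floating_ip_address_substr is None:
        return list(floatingips)
    return [fip for fip in floatingips
            if floating_ip_address_substr in fip["floating_ip_address"]]
```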
[Yahoo-eng-team] [Bug 1722571] Re: NotImplementedError(_('direct_snapshot() is not implemented')) stacktraces in n-cpu logs
Reviewed: https://review.openstack.org/511074 Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=9ef56fa86639f97f63d853dbb2213415dcd5691b Submitter: Zuul Branch:master commit 9ef56fa86639f97f63d853dbb2213415dcd5691b Author: Hongbin Lu Date: Tue Oct 10 23:05:14 2017 + Handle not found error on taking snapshot If there is a request to create a snapshot of an instance and another request to delete the instance at the same time, the snapshot task might fail with a libvirt error that is not handled correctly by the compute manager. As a result, a traceback was printed in the compute log. This patch fixes it by handling the libvirt exception during live snapshot and raising an instance-not-found exception if the libvirt exception is raised due to the domain not being found. Change-Id: I585b7b03753ed1d28a313ce443e6918687d76a8b Closes-Bug: #1722571 ** Changed in: nova Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). 
https://bugs.launchpad.net/bugs/1722571 Title: NotImplementedError(_('direct_snapshot() is not implemented')) stacktraces in n-cpu logs Status in OpenStack Compute (nova): Fix Released Bug description: After we enabled live snapshot by default for the libvirt driver, we get these stacktraces all over the n-cpu logs anytime we create a snapshot image: http://logs.openstack.org/65/509465/18/check/gate-tempest-dsvm-py35-ubuntu-xenial/0e88c0a/logs/screen-n-cpu.txt#_Oct_10_13_48_24_125578

Oct 10 13:48:24.125578 ubuntu-xenial-infracloud-chocolate-11309928 nova-compute[26979]: ERROR nova.compute.manager [instance: 8cd13eb3-54cc-4ca3-9bfc-689efd768baf] Traceback (most recent call last):
Oct 10 13:48:24.125728 ubuntu-xenial-infracloud-chocolate-11309928 nova-compute[26979]: ERROR nova.compute.manager [instance: 8cd13eb3-54cc-4ca3-9bfc-689efd768baf] File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 1697, in snapshot
Oct 10 13:48:24.125890 ubuntu-xenial-infracloud-chocolate-11309928 nova-compute[26979]: ERROR nova.compute.manager [instance: 8cd13eb3-54cc-4ca3-9bfc-689efd768baf] instance.image_ref)
Oct 10 13:48:24.126025 ubuntu-xenial-infracloud-chocolate-11309928 nova-compute[26979]: ERROR nova.compute.manager [instance: 8cd13eb3-54cc-4ca3-9bfc-689efd768baf] File "/opt/stack/new/nova/nova/virt/libvirt/imagebackend.py", line 412, in direct_snapshot
Oct 10 13:48:24.126158 ubuntu-xenial-infracloud-chocolate-11309928 nova-compute[26979]: ERROR nova.compute.manager [instance: 8cd13eb3-54cc-4ca3-9bfc-689efd768baf] raise NotImplementedError(_('direct_snapshot() is not implemented'))
Oct 10 13:48:24.126326 ubuntu-xenial-infracloud-chocolate-11309928 nova-compute[26979]: ERROR nova.compute.manager [instance: 8cd13eb3-54cc-4ca3-9bfc-689efd768baf] NotImplementedError: direct_snapshot() is not implemented
Oct 10 13:48:24.126485 ubuntu-xenial-infracloud-chocolate-11309928 nova-compute[26979]: ERROR nova.compute.manager [instance: 8cd13eb3-54cc-4ca3-9bfc-689efd768baf]

We shouldn't be stacktracing on that NotImplementedError since it's an image backend-specific method implementation for handling snapshots, and only the Rbd image backend implements direct_snapshot(). To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1722571/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
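The fix's intent can be sketched as treating `NotImplementedError` as an expected capability probe rather than an error. This is an illustrative stand-in, not nova's actual driver code:

```python
import logging

log = logging.getLogger(__name__)

class ImageBackend:
    """Illustrative stand-in: only some backends (e.g. Rbd) implement
    direct_snapshot(); the base class raises NotImplementedError."""
    def direct_snapshot(self, name):
        raise NotImplementedError("direct_snapshot() is not implemented")

def snapshot(backend, name):
    # Treat NotImplementedError as an expected condition and fall back to
    # the generic snapshot path, instead of letting the exception escape
    # and be logged with a full stack trace.
    try:
        return backend.direct_snapshot(name)
    except NotImplementedError:
        log.debug("direct snapshot not supported, using generic path")
        return "generic-snapshot:%s" % name

print(snapshot(ImageBackend(), "img1"))
```

Logging at debug level (or not at all) for the fallback keeps the n-cpu logs clean while preserving the fast path for backends that do implement it.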
[Yahoo-eng-team] [Bug 1753507] Re: FWaaS V2: Upgrade Pike->Queen causes error
Reviewed: https://review.openstack.org/550140 Committed: https://git.openstack.org/cgit/openstack/neutron-fwaas/commit/?id=9b89d4802c113f3eab9114129a4c14175948d2ed Submitter: Zuul Branch:master commit 9b89d4802c113f3eab9114129a4c14175948d2ed Author: Chandan Dutta Chowdhury Date: Tue Mar 6 08:47:35 2018 + Skip unknown protocols while deleting conntrack This patch updates the legacy conntrack driver to skip any conntrack entries in the virtual router with an unknown protocol. The conntrack driver currently handles sessions for TCP/UDP/ICMP/ICMP6 protocols only Change-Id: Ic2572086a13ea9c3acc3aee1350b569740aa0d8f Closes-Bug: #1753507 ** Changed in: neutron Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1753507 Title: FWaaS V2: Upgrade Pike->Queen causes error Status in neutron: Fix Released Bug description: From our chat: Jon Davis Hello - I just upgraded to Queens and fwaas_v2 is throwing error: http://paste.openstack.org/raw/68/ 6:46 PM J Jon Davis Everything was working fine in Pike 6:46 PM for attr, position in ATTR_POSITIONS[protocol]: KeyError: 'unknown' 6:47 PM Ideas on where to look? To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1753507/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
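The `KeyError: 'unknown'` arises from indexing a per-protocol attribute table with a protocol it does not contain. A minimal sketch of the fix's idea, with a reduced, illustrative `ATTR_POSITIONS` table (not the driver's real one):

```python
# Sketch: skip conntrack entries whose protocol has no attribute-position
# table, instead of raising KeyError: 'unknown'. Table values illustrative.

ATTR_POSITIONS = {
    "tcp": [("sport", 4), ("dport", 5)],
    "udp": [("sport", 3), ("dport", 4)],
    "icmp": [("type", 3)],
}

def parse_entries(entries):
    parsed = []
    for entry in entries:
        protocol = entry.split()[0]
        if protocol not in ATTR_POSITIONS:
            # Unknown protocol (e.g. gre shows up as 'unknown'): skip it
            # rather than crash the whole conntrack cleanup.
            continue
        parsed.append(protocol)
    return parsed

print(parse_entries(["tcp 6 src=10.0.0.1", "unknown 47 src=10.0.0.2", "udp 17 src=10.0.0.3"]))
```

Skipping rather than raising means a single exotic session in the router namespace can no longer abort deletion of all the other conntrack entries.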
[Yahoo-eng-team] [Bug 1754104] [NEW] Install and configure (Ubuntu) in glance
Public bug reported: This bug tracker is for errors with the documentation; use the following as a template and remove or add fields as you see fit. Convert [ ] into [x] to check boxes: - [X] This doc is inaccurate in this way: The code shown for keystone_authtoken needs an update regarding the ports for Queens. Following the guides, keystone only listens on port 5000 instead of 5000 & 35357 - [ ] This is a doc addition request. - [x] I have a fix to the document that I can paste below, including example input and output.

Input:

[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS

Output:

[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS

If you have a troubleshooting or support issue, use the following resources: - Ask OpenStack: http://ask.openstack.org - The mailing list: http://lists.openstack.org - IRC: 'openstack' channel on Freenode --- Release: 16.0.1.dev1 on 'Thu Mar 1 07:26:57 2018, commit 968f4ae' SHA: 968f4ae9ce244d9372cb3e8f45acea9d557f317d Source: https://git.openstack.org/cgit/openstack/glance/tree/doc/source/install/install-ubuntu.rst URL: https://docs.openstack.org/glance/queens/install/install-ubuntu.html ** Affects: glance Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. 
https://bugs.launchpad.net/bugs/1754104 Title: Install and configure (Ubuntu) in glance Status in Glance: New Bug description: This bug tracker is for errors with the documentation; use the following as a template and remove or add fields as you see fit. Convert [ ] into [x] to check boxes: - [X] This doc is inaccurate in this way: The code shown for keystone_authtoken needs an update regarding the ports for Queens. Following the guides, keystone only listens on port 5000 instead of 5000 & 35357 - [ ] This is a doc addition request. - [x] I have a fix to the document that I can paste below, including example input and output.

Input:

[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS

Output:

[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS

If you have a troubleshooting or support issue, use the following resources: - Ask OpenStack: http://ask.openstack.org - The mailing list: http://lists.openstack.org - IRC: 'openstack' channel on Freenode --- Release: 16.0.1.dev1 on 'Thu Mar 1 07:26:57 2018, commit 968f4ae' SHA: 968f4ae9ce244d9372cb3e8f45acea9d557f317d Source: https://git.openstack.org/cgit/openstack/glance/tree/doc/source/install/install-ubuntu.rst URL: https://docs.openstack.org/glance/queens/install/install-ubuntu.html To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1754104/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : 
https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1753443] Re: os_nova: upgrade_levels/compute=auto failure on master
Reviewed: https://review.openstack.org/549737 Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=b81a66b3b0b1ce9f56280dda40ac871132cb29f2 Submitter: Zuul Branch:master commit b81a66b3b0b1ce9f56280dda40ac871132cb29f2 Author: git-harry Date: Mon Mar 5 10:48:28 2018 + Fix version cap when no nova-compute started When a zero service version is returned, it means that we have no services running for the requested binary. In that case, we should assume the latest version available until told otherwise. This usually happens in first-start cases, where everything is likely to be up to date anyway. This change addresses an issue where the version returned had been hard-coded to 4.11 (mitaka). Change-Id: I696a8ea8adbe9481e11407ecafd5e47b2bd29804 Closes-bug: 1753443 ** Changed in: nova Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1753443 Title: os_nova: upgrade_levels/compute=auto failure on master Status in OpenStack Compute (nova): Fix Released Status in OpenStack Compute (nova) queens series: Confirmed Status in openstack-ansible: Fix Released Bug description: It looks like a recent change [1] in nova, to remove RPC 4.x support, has exposed a bug when using upgrade_levels/compute=auto on a new deployment. This is blocking the openstack-ansible-os_nova master gate. Tempest tests are failing, the following in nova-conductor.log shows the failure:

```
2018-03-03 05:13:23.679 9771 ERROR oslo_messaging.rpc.server [req-9c3f2dd3-81dd-4275-9a61-a3a859dde29d 3639ea84ebcf4c858de98eeede6789a9 3b9624e03ed740f483c64301d0d11372 - default default] Exception during message handling: RPCVersionCapError: Requested message version, 5.0 is incompatible. It needs to be equal in major version and less than or equal in minor version as the specified version cap 4.11.
2018-03-03 05:13:23.679 9771 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2018-03-03 05:13:23.679 9771 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-testing/local/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 163, in _process_incoming
2018-03-03 05:13:23.679 9771 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
2018-03-03 05:13:23.679 9771 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-testing/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 220, in dispatch
2018-03-03 05:13:23.679 9771 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args)
2018-03-03 05:13:23.679 9771 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-testing/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 190, in _do_dispatch
2018-03-03 05:13:23.679 9771 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
2018-03-03 05:13:23.679 9771 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-testing/local/lib/python2.7/site-packages/nova/conductor/manager.py", line 1265, in schedule_and_build_instances
2018-03-03 05:13:23.679 9771 ERROR oslo_messaging.rpc.server limits=host.limits, host_list=host_list)
2018-03-03 05:13:23.679 9771 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-testing/local/lib/python2.7/site-packages/nova/compute/rpcapi.py", line 1030, in build_and_run_instance
2018-03-03 05:13:23.679 9771 ERROR oslo_messaging.rpc.server cctxt.cast(ctxt, 'build_and_run_instance', **kwargs)
2018-03-03 05:13:23.679 9771 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-testing/local/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 149, in cast
2018-03-03 05:13:23.679 9771 ERROR oslo_messaging.rpc.server self._check_version_cap(msg.get('version'))
2018-03-03 05:13:23.679 9771 ERROR oslo_messaging.rpc.server File "/openstack/venvs/nova-testing/local/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 126, in _check_version_cap
2018-03-03 05:13:23.679 9771 ERROR oslo_messaging.rpc.server version_cap=self.version_cap)
2018-03-03 05:13:23.679 9771 ERROR oslo_messaging.rpc.server RPCVersionCapError: Requested message version, 5.0 is incompatible. It needs to be equal in major version and less than or equal in minor version as the specified version cap 4.11.
2018-03-03 05:13:23.679 9771 ERROR oslo_messaging.rpc.server
```

When openstack-ansible-os_nova is used for a new deployment, the following appears in the logs:

```
2018-03-02 17:25:55.954 19495 DEBUG nova.compute.rpcapi [req-97c173ed-052e-4ce7-8314-d220dfdab8e7 - - - - -] Not caching compute RPC version_cap, because min service_version is 0. Please ensure a nova-compute service has been started. Defaulting to Mitaka
[Yahoo-eng-team] [Bug 1744032] Re: Hyper-V: log warning on PortBindingFailed exception
** Also affects: nova/queens Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1744032 Title: Hyper-V: log warning on PortBindingFailed exception Status in OpenStack Compute (nova): Fix Released Status in OpenStack Compute (nova) queens series: In Progress Bug description: Description === When spawning an Hyper-V instance with NICs having the vif_type "hyperv", neutron will fail to bind the port to the Hyper-V host if the neutron server doesn't have the "hyperv" mechanism driver installed and configured, resulting in a PortBindingFailed exception on the nova-compute side. When this exception is encountered, the logs will say to check the neutron-server logs, but the problem and its solution are not obvious or clear, resulting in plenty of questions / reports, all having the same solution: install networking-hyperv and configure neutron-server to use the "hyperv" mechanism_driver. Steps to reproduce == 1. Do not configure neutron-server with a "hyperv" mechanism_driver. 2. Spawn an instance having NICs with the vif_type "hyperv". Expected result === PortBindingFailed, and a clear explanation and / or solution for it. After the execution of the steps above, what should have happened if the issue wasn't present? Actual result = PortBindingFailed, telling users to check the neutron-server logs, which doesn't contain the obvious problem / solution. Environment === Hyper-V compute nodes, with neutron-hyperv-agent agent. Any OpenStack version. Logs & Configs == Logs: http://paste.openstack.org/show/646888/ To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1744032/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1744032] Re: Hyper-V: log warning on PortBindingFailed exception
Reviewed: https://review.openstack.org/539584 Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=b80c245ba529ab603910b5c0e2fa466bf0b6a146 Submitter: Zuul Branch:master commit b80c245ba529ab603910b5c0e2fa466bf0b6a146 Author: Claudiu Belu Date: Sat Jan 13 17:58:56 2018 -0800 hyper-v: Logs tips on PortBindingFailed When spawning an Hyper-V instance with NICs having the vif_type "hyperv", neutron will fail to bind the port to the Hyper-V host if the neutron server doesn't have the "hyperv" mechanism driver installed and configured, resulting in a PortBindingFailed exception on the nova-compute side. When this exception is encountered, the logs will say to check the neutron-server logs, but the problem and its solution are not obvious or clear, resulting in plenty of questions / reports, all having the same solution: is there an L2 agent on the host alive and reporting to neutron, and if neutron Hyper-V agent is used, make sure to install networking-hyperv and configure neutron-server to use the "hyperv" mechanism_driver. Change-Id: Idceeb08e1452413e3b10ecd0a65f71d4d82866e0 Closes-Bug: #1744032 ** Changed in: nova Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1744032 Title: Hyper-V: log warning on PortBindingFailed exception Status in OpenStack Compute (nova): Fix Released Bug description: Description === When spawning an Hyper-V instance with NICs having the vif_type "hyperv", neutron will fail to bind the port to the Hyper-V host if the neutron server doesn't have the "hyperv" mechanism driver installed and configured, resulting in a PortBindingFailed exception on the nova-compute side. 
When this exception is encountered, the logs will say to check the neutron-server logs, but the problem and its solution are not obvious or clear, resulting in plenty of questions / reports, all having the same solution: install networking-hyperv and configure neutron-server to use the "hyperv" mechanism_driver. Steps to reproduce == 1. Do not configure neutron-server with a "hyperv" mechanism_driver. 2. Spawn an instance having NICs with the vif_type "hyperv". Expected result === PortBindingFailed, and a clear explanation and / or solution for it. After the execution of the steps above, what should have happened if the issue wasn't present? Actual result = PortBindingFailed, telling users to check the neutron-server logs, which doesn't contain the obvious problem / solution. Environment === Hyper-V compute nodes, with neutron-hyperv-agent agent. Any OpenStack version. Logs & Configs == Logs: http://paste.openstack.org/show/646888/ To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1744032/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
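The shape of the committed change is to catch the binding failure where the vif is plugged and log concrete, operator-actionable hints before re-raising. This is an illustrative sketch, not nova's Hyper-V driver code; the class and function names are assumptions:

```python
import logging

log = logging.getLogger(__name__)

class PortBindingFailed(Exception):
    """Stand-in for nova.exception.PortBindingFailed."""

def plug_vifs(instance_name, vifs):
    # Illustrative: detect an unbound port and log what to actually check,
    # instead of the unhelpful "see the neutron-server logs".
    try:
        for vif in vifs:
            if vif.get("vif_type") == "binding_failed":
                raise PortBindingFailed(vif["id"])
    except PortBindingFailed:
        log.warning(
            "Neutron failed to bind a port on host for instance %s. Check "
            "that an L2 agent is alive and reporting to neutron, and if the "
            "neutron Hyper-V agent is used, that networking-hyperv is "
            "installed and the 'hyperv' mechanism driver is enabled in "
            "neutron-server.", instance_name)
        raise  # still fail the spawn; only the diagnostics improve
```

Re-raising keeps the failure semantics unchanged; the patch is purely about making the root cause discoverable from the compute log.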
[Yahoo-eng-team] [Bug 1752301] Re: Project tags treats entire collection as a single tag
Reviewed: https://review.openstack.org/548399 Committed: https://git.openstack.org/cgit/openstack/keystone/commit/?id=94ef9458858506766149761b8f2a9961d6c9def6 Submitter: Zuul Branch:master commit 94ef9458858506766149761b8f2a9961d6c9def6 Author: Gage Hugo Date: Tue Feb 27 19:55:22 2018 + Remove @expression from tags This change makes tags a property of Project instead of a hybrid_property since we will always have a Project contain some list of tags. Change-Id: I1033321132cb3ec71bf94b8293cef91dfc6b8272 Co-Authored-By: Morgan Fainberg Closes-Bug: #1752301 ** Changed in: keystone Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1752301 Title: Project tags treats entire collection as a single tag Status in OpenStack Identity (keystone): Fix Released Bug description: When backporting Debian stable/queens from release Sid to Stretch, an issue where 3 unit tests were failing due to the entire project tags collection being treated as a single tag was encountered[0]. It was later determined that the hybrid_property.expression implementation was causing this issue[1]. When a quick change was pushed up and tested, the issue appeared to be fixed[2]. The fix for this issue is to drop @hybrid_property usage for @property, which removes the use of tags.expression. Project should always have tags instantiated, so there is not a behavior difference, which is a better fit for @property. 
[0] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-02-27.log.html#t2018-02-27T11:23:30 [1] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-02-27.log.html#t2018-02-27T19:32:42 [2] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-02-27.log.html#t2018-02-27T21:24:34 To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1752301/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
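The fix swaps SQLAlchemy's `hybrid_property` (whose class-level `@expression` was coercing the whole tag collection into a single value) for a plain `@property`, which always goes through the instance accessor and returns the full list. An illustrative contrast, with a simplified `Project` stand-in rather than keystone's actual model:

```python
# Sketch: with a plain @property there is no separate class-level SQL
# "expression" path, so .tags is always the whole list of tag names,
# never a single coerced value. Project here is a simplified stand-in.

class Project:
    def __init__(self, tag_rows):
        self._tag_rows = tag_rows  # e.g. rows from a project_tag table

    @property
    def tags(self):
        # Always a list, even when the project has no tags.
        return [row["name"] for row in self._tag_rows]

p = Project([{"name": "prod"}, {"name": "web"}])
print(p.tags)
print(Project([]).tags)
```

Since a project always has some (possibly empty) list of tags, there is no behavioral need for a class-level expression, which is why `@property` is the better fit.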
[Yahoo-eng-team] [Bug 1754071] [NEW] image not found warning in logs when instance is deleted during snapshot
Public bug reported: Related to bug 1722571 (and bug 1737024), when an instance is deleted while nova is creating an image of it, there is some cleanup code in the compute manager that tries to delete the image, which might not exist, and we log a warning: http://logs.openstack.org/74/511074/6/check/tempest-full/39df584/controller/logs/screen-n-cpu.txt#_Mar_06_23_34_23_166597

Mar 06 23:34:23.058201 ubuntu-xenial-rax-dfw-0002817722 nova-compute[12626]: INFO nova.virt.libvirt.driver [None req-fcf19cc8-3566-4f62-9d8a-6e9733fd0bef tempest-ImagesTestJSON-1528672258 tempest-ImagesTestJSON-1528672258] [instance: 5946fa5a-f91f-4878-8cc1-fc6e248ef38b] Deletion of /opt/stack/data/nova/instances/5946fa5a-f91f-4878-8cc1-fc6e248ef38b_del complete
Mar 06 23:34:23.116179 ubuntu-xenial-rax-dfw-0002817722 nova-compute[12626]: INFO nova.virt.libvirt.driver [None req-fcf19cc8-3566-4f62-9d8a-6e9733fd0bef tempest-ImagesTestJSON-1528672258 tempest-ImagesTestJSON-1528672258] [instance: 5946fa5a-f91f-4878-8cc1-fc6e248ef38b] Instance instance-0022 disappeared while taking snapshot of it: [Error Code 42] Domain not found: no domain with matching uuid '5946fa5a-f91f-4878-8cc1-fc6e248ef38b' (instance-0022)
Mar 06 23:34:23.116411 ubuntu-xenial-rax-dfw-0002817722 nova-compute[12626]: DEBUG nova.compute.manager [None req-fcf19cc8-3566-4f62-9d8a-6e9733fd0bef tempest-ImagesTestJSON-1528672258 tempest-ImagesTestJSON-1528672258] [instance: 5946fa5a-f91f-4878-8cc1-fc6e248ef38b] Instance disappeared during snapshot {{(pid=12626) _snapshot_instance /opt/stack/nova/nova/compute/manager.py:3372}}
Mar 06 23:34:23.166597 ubuntu-xenial-rax-dfw-0002817722 nova-compute[12626]: WARNING nova.compute.manager [None req-fcf19cc8-3566-4f62-9d8a-6e9733fd0bef tempest-ImagesTestJSON-1528672258 tempest-ImagesTestJSON-1528672258] [instance: 5946fa5a-f91f-4878-8cc1-fc6e248ef38b] Error while trying to clean up image 263b517a-3fc0-4486-a292-9cf8a4865282: ImageNotFound: Image 263b517a-3fc0-4486-a292-9cf8a4865282 could not be found.

That warning comes from this code: https://github.com/openstack/nova/blob/489a8f5bf3e50944ced253283c15e77310a56e40/nova/compute/manager.py#L3378 We should be able to handle an ImageNotFound exception specifically in that try/except block and not log a warning for it. ** Affects: nova Importance: Low Status: Triaged ** Tags: compute snapshot -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1754071 Title: image not found warning in logs when instance is deleted during snapshot Status in OpenStack Compute (nova): Triaged Bug description: Related to bug 1722571 (and bug 1737024), when an instance is deleted while nova is creating an image of it, there is some cleanup code in the compute manager that tries to delete the image, which might not exist, and we log a warning: http://logs.openstack.org/74/511074/6/check/tempest-full/39df584/controller/logs/screen-n-cpu.txt#_Mar_06_23_34_23_166597

Mar 06 23:34:23.058201 ubuntu-xenial-rax-dfw-0002817722 nova-compute[12626]: INFO nova.virt.libvirt.driver [None req-fcf19cc8-3566-4f62-9d8a-6e9733fd0bef tempest-ImagesTestJSON-1528672258 tempest-ImagesTestJSON-1528672258] [instance: 5946fa5a-f91f-4878-8cc1-fc6e248ef38b] Deletion of /opt/stack/data/nova/instances/5946fa5a-f91f-4878-8cc1-fc6e248ef38b_del complete
Mar 06 23:34:23.116179 ubuntu-xenial-rax-dfw-0002817722 nova-compute[12626]: INFO nova.virt.libvirt.driver [None req-fcf19cc8-3566-4f62-9d8a-6e9733fd0bef tempest-ImagesTestJSON-1528672258 tempest-ImagesTestJSON-1528672258] [instance: 5946fa5a-f91f-4878-8cc1-fc6e248ef38b] Instance instance-0022 disappeared while taking snapshot of it: [Error Code 42] Domain not found: no domain with matching uuid '5946fa5a-f91f-4878-8cc1-fc6e248ef38b' (instance-0022)
Mar 06 23:34:23.116411 ubuntu-xenial-rax-dfw-0002817722 nova-compute[12626]: DEBUG nova.compute.manager [None req-fcf19cc8-3566-4f62-9d8a-6e9733fd0bef tempest-ImagesTestJSON-1528672258 tempest-ImagesTestJSON-1528672258] [instance: 5946fa5a-f91f-4878-8cc1-fc6e248ef38b] Instance disappeared during snapshot {{(pid=12626) _snapshot_instance /opt/stack/nova/nova/compute/manager.py:3372}}
Mar 06 23:34:23.166597 ubuntu-xenial-rax-dfw-0002817722 nova-compute[12626]: WARNING nova.compute.manager [None req-fcf19cc8-3566-4f62-9d8a-6e9733fd0bef tempest-ImagesTestJSON-1528672258 tempest-ImagesTestJSON-1528672258] [instance: 5946fa5a-f91f-4878-8cc1-fc6e248ef38b] Error while trying to clean up image 263b517a-3fc0-4486-a292-9cf8a4865282: ImageNotFound: Image 263b517a-3fc0-4486-a292-9cf8a4865282 could not be found.

That warning comes from this code: https://github.com/openstack/nova/blob/489a8f5bf3e50944ced253283c15e77310a56e40/nova/compute/manager.py#L3378 We should be able to handle an ImageNotFound exception specifically in th
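The proposed handling can be sketched as adding a specific `except ImageNotFound` branch in the cleanup path, so the expected race is logged at debug level and only unexpected failures warn. The classes here are illustrative stand-ins for nova's image API plumbing:

```python
import logging

log = logging.getLogger(__name__)

class ImageNotFound(Exception):
    """Stand-in for nova.exception.ImageNotFound."""

class FakeImageAPI:
    """Stand-in for the glance image API: the image is already gone."""
    def delete(self, context, image_id):
        raise ImageNotFound(image_id)

def cleanup_image(image_api, context, image_id):
    # Sketch of the proposed fix: an already-deleted image is an expected
    # outcome of the delete-during-snapshot race, so don't warn for it.
    try:
        image_api.delete(context, image_id)
    except ImageNotFound:
        log.debug("Image %s already deleted during snapshot cleanup",
                  image_id)
        return "already-deleted"
    except Exception:
        log.warning("Error while trying to clean up image %s", image_id)
        return "error"
    return "deleted"

print(cleanup_image(FakeImageAPI(), None, "263b517a"))
```

The return values exist only to make the branches observable in this sketch; the real cleanup code would simply not log the warning.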
[Yahoo-eng-team] [Bug 1754062] [NEW] openstack client does not pass prefixlen when creating subnet
Public bug reported: Version: Pike OpenStack Client: 3.12.0 When testing Subnet Pool functionality, I found that the behavior between the openstack and neutron clients is different. Subnet pool:

root@controller01:~# openstack subnet pool show MySubnetPool
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| address_scope_id  | None                                 |
| created_at        | 2018-03-07T13:18:22Z                 |
| default_prefixlen | 8                                    |
| default_quota     | None                                 |
| description       |                                      |
| id                | e49703d8-27f4-4a16-9bf4-91a6cf00fff3 |
| ip_version        | 4                                    |
| is_default        | False                                |
| max_prefixlen     | 32                                   |
| min_prefixlen     | 8                                    |
| name              | MySubnetPool                         |
| prefixes          | 172.31.0.0/16                        |
| project_id        | 9233b6b4f6a54386af63c0a7b8f043c2     |
| revision_number   | 0                                    |
| shared            | False                                |
| tags              |                                      |
| updated_at        | 2018-03-07T13:18:22Z                 |
+-------------------+--------------------------------------+

When attempting to create a /28 subnet from that pool with the openstack client, the following error is observed:

root@controller01:~# openstack subnet create \
> --subnet-pool MySubnetPool \
> --prefix-length 28 \
> --network MyVLANNetwork2 \
> MyFlatSubnetFromPool
HttpException: Internal Server Error (HTTP 500) (Request-ID: req-61b3f00a-9764-4bcb-899d-e85d66f54e5a), Failed to allocate subnet: Insufficient prefix space to allocate subnet size /8.

However, the same request is successful with the neutron client:

root@controller01:~# neutron subnet-create --subnetpool MySubnetPool --prefixlen 28 --name MySubnetFromPool MyVLANNetwork2
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Created a new subnet:
+-------------------+-----------------------------------------------+
| Field             | Value                                         |
+-------------------+-----------------------------------------------+
| allocation_pools  | {"start": "172.31.0.2", "end": "172.31.0.14"} |
| cidr              | 172.31.0.0/28                                 |
| created_at        | 2018-03-07T13:35:35Z                          |
| description       |                                               |
| dns_nameservers   |                                               |
| enable_dhcp       | True                                          |
| gateway_ip        | 172.31.0.1                                    |
| host_routes       |                                               |
| id                | 43cb9dda-1b7e-436d-9dc1-5312866a1b63          |
| ip_version        | 4                                             |
| ipv6_address_mode |                                               |
| ipv6_ra_mode      |                                               |
| name              | MySubnetFromPool                              |
| network_id        | e01ca743-607c-4a94-9176-b572a46fba84          |
| project_id        | 9233b6b4f6a54386af63c0a7b8f043c2              |
| revision_number   | 0                                             |
| service_types     |                                               |
| subnetpool_id     | e49703d8-27f4-4a16-9bf4-91a6cf00fff3          |
| tags              |                                               |
| tenant_id         | 9233b6b4f6a54386af63c0a7b8f043c2              |
| updated_at        | 2018-03-07T13:35:35Z                          |
+-------------------+-----------------------------------------------+

The payload is different between these clients - the openstack client fails to send the prefixlen key.

openstack client:

REQ: curl -g -i -X POST http://controller01:9696/v2.0/subnets -H "User-Agent: openstacksdk/0.9.17 keystoneauth1/3.1.0 python-requests/2.18.1 CPython/2.7.12" -H "Content-Type: application/json" -H "X-Auth-Token: {SHA1}ec04a71699eee2c70dc4abb35037de272523fef0" -d '{"subnet": {"network_id": "e01ca743-607c-4a94-9176-b572a46fba84", "ip_version": 4, "name": "MyFlatSubnetFromPool", "subnetpool_id": "e49703d8-27f4-4a16-9bf4-91a6cf00fff3"}}'
http://controller01:9696 "POST /v2.0/subnets HTTP/1.1" 500 160

neutron client:

REQ: curl -g -i -X POST http://controller01:9696/v2.0/subnets -H "User-Agent: python-neutronclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}b3b6f0fa14c2b28c5c9784f857ee753455c1d375" -d '{"subnet": {"network_id": "e01ca743-607c-4a94-9176-b572a46fba84",
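The payload difference above can be sketched as follows: including `prefixlen` in the subnet body lets neutron allocate the requested size, while omitting it makes the server fall back to the pool's default_prefixlen (/8 here), which exceeds the /16 pool and fails. This is an illustrative request builder, not the clients' actual code, with truncated IDs:

```python
# Sketch: the neutron client sends "prefixlen" in the subnet body; the
# failing openstack client build omitted it. IDs are truncated examples.

def build_subnet_request(network_id, subnetpool_id, prefixlen=None):
    subnet = {
        "network_id": network_id,
        "ip_version": 4,
        "subnetpool_id": subnetpool_id,
    }
    if prefixlen is not None:
        subnet["prefixlen"] = prefixlen  # the key the bug is about
    return {"subnet": subnet}

working = build_subnet_request("e01ca743", "e49703d8", prefixlen=28)
broken = build_subnet_request("e01ca743", "e49703d8")  # no prefixlen

print("prefixlen" in working["subnet"], "prefixlen" in broken["subnet"])
```

Comparing the two bodies reproduces exactly the difference visible in the captured curl commands.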
[Yahoo-eng-team] [Bug 1753656] Re: Cannot update ssl certificate when update listener
Is this report about neutron-lbaas-dashboard or octavia-dashboard? It is not clear. In either case, both neutron-lbaas-dashboard and octavia-dashboard are maintained by "octavia" project. Could you file a bug on "octavia" storyboard https://storyboard.openstack.org/#!/project_group/70? (Note that "Create Story" in the storyboard corresponds to "Report a bug" in Launchpad.) ** Changed in: horizon Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1753656 Title: Cannot update ssl certificate when update listener Status in OpenStack Dashboard (Horizon): Invalid Bug description: Description === Cannot update ssl certificate when update listener Steps to reproduce == 1. Create one load balancer, choose "TERMINATED_HTTPS" as listener protocol and select a certificate 2. Edit the existed listener with another certificate Expected result === Listener should be updated successfully Actual result = Update listener success, but only the name and description of the listener has been updated. Certificate remains same as old one. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1753656/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1754048] [NEW] Federated domain is reported when validating a federated token
Public bug reported: Prior to introducing per-idp domains, all federated users lived in the Federated domain. That is not the case anymore, but Keystone keeps reporting that federated users are part of that domain rather than their per-idp domains. Token validation: http://paste.openstack.org/show/693652/ ** Affects: keystone Importance: Undecided Status: New ** Tags: federation -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1754048 Title: Federated domain is reported when validating a federated token Status in OpenStack Identity (keystone): New Bug description: Prior to introducing per-idp domains, all federated users lived in the Federated domain. That is not the case anymore, but Keystone keeps reporting that federated users are part of that domain rather than their per-idp domains. Token validation: http://paste.openstack.org/show/693652/ To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1754048/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1643623] Re: Instance stuck in 'migrating' status due to invalid host
Reviewed: https://review.openstack.org/447355 Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=fb68fd12e2fd6e9686ad45c9875508bd9fa0df91 Submitter: Zuul Branch: master commit fb68fd12e2fd6e9686ad45c9875508bd9fa0df91 Author: Sivasathurappan Radhakrishnan Date: Mon Mar 20 03:13:13 2017 + Return 400 when compute host is not found Previously the user got a 500 error code for ComputeHostNotFound when using the latest microversion, which does live migration asynchronously. This patch changes the response to 400, since a 500 internal server error should not be returned for failures caused by user error that can be fixed by changing the request on the client side. Change-Id: I7a9de211ecfaa7f2816fbf8bcd73ebbdd990643c closes-bug:1643623 ** Changed in: nova Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1643623 Title: Instance stuck in 'migrating' status due to invalid host Status in OpenStack Compute (nova): Fix Released Bug description: Tried to live migrate an instance to an invalid destination host. Got an error message saying the host was not available. Did a 'nova list' and found that the status and task state were stuck in 'migrating' forever. Couldn't see the instance in 'nova migration-list' and was not able to abort the migration using 'nova live-migration-abort', as the operation was aborted well before the migration id could be set. Steps to reproduce: 1) Create an instance test_1 2) Live migrate the instance using 'nova live-migration test_1 ' 3) Check the status of the instance using 'nova show test_1' or 'nova list'. Expected result: The instance should have returned to Active status, as the live migration failed with an invalid host name. Actual result: The instance is stuck in 'migrating' status forever.
Environment: Multinode devstack environment with 2 compute nodes, although it can also be reproduced in a single-node environment, since the validation of the host name happens before the live migration; a multinode environment is not really required to reproduce the above scenario. 1) Current master 2) Networking: neutron 3) Hypervisor: Libvirt-KVM To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1643623/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
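The behavioural change described in the commit message can be sketched in isolation. This is an illustrative simplification, not the actual nova code; the names ComputeHostNotFound, KNOWN_HOSTS, live_migrate, and handle_live_migrate are hypothetical stand-ins for the nova internals:

```python
class ComputeHostNotFound(Exception):
    """Raised when the requested migration target host does not exist."""

    def __init__(self, host):
        self.host = host
        super().__init__("Compute host %s could not be found." % host)


# Hypothetical set of registered compute hosts.
KNOWN_HOSTS = {"compute1", "compute2"}


def live_migrate(server_id, host):
    """Conductor-side sketch: validate the target host before migrating."""
    if host not in KNOWN_HOSTS:
        raise ComputeHostNotFound(host)
    return {"status": 202}  # accepted: migration started asynchronously


def handle_live_migrate(server_id, host):
    """API-layer sketch: translate a user-correctable error into a 400
    Bad Request instead of letting it bubble up as a 500."""
    try:
        return live_migrate(server_id, host)
    except ComputeHostNotFound as exc:
        # Before the fix this propagated as HTTP 500; a bad host name is
        # client input, so 400 is the appropriate response.
        return {"status": 400, "message": str(exc)}


print(handle_live_migrate("vm-1", "no-such-host")["status"])  # 400
print(handle_live_migrate("vm-1", "compute1")["status"])      # 202
```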
[Yahoo-eng-team] [Bug 1753982] [NEW] Install and configure (Ubuntu) in glance
Public bug reported: This bug tracker is for errors with the documentation; use the following as a template and remove or add fields as you see fit. Convert [ ] into [x] to check boxes: - [x] This doc is inaccurate in this way: __ Shouldn't the line auth_url = http://controller:35357 point to auth_url = http://controller:5000? This needs to be changed in both [keystone_authtoken] sections (in glance-api.conf and glance-registry.conf) Thx and BR --- Release: 16.0.1.dev1 on 'Thu Mar 1 07:26:57 2018, commit 968f4ae' SHA: 968f4ae9ce244d9372cb3e8f45acea9d557f317d Source: https://git.openstack.org/cgit/openstack/glance/tree/doc/source/install/install-ubuntu.rst URL: https://docs.openstack.org/glance/queens/install/install-ubuntu.html ** Affects: glance Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1753982 Title: Install and configure (Ubuntu) in glance Status in Glance: New Bug description: This bug tracker is for errors with the documentation; use the following as a template and remove or add fields as you see fit. Convert [ ] into [x] to check boxes: - [x] This doc is inaccurate in this way: __ Shouldn't the line auth_url = http://controller:35357 point to auth_url = http://controller:5000?
This needs to be changed in both [keystone_authtoken] sections (in glance-api.conf and glance-registry.conf). Thx and BR --- Release: 16.0.1.dev1 on 'Thu Mar 1 07:26:57 2018, commit 968f4ae' SHA: 968f4ae9ce244d9372cb3e8f45acea9d557f317d Source: https://git.openstack.org/cgit/openstack/glance/tree/doc/source/install/install-ubuntu.rst URL: https://docs.openstack.org/glance/queens/install/install-ubuntu.html To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1753982/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
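For reference, the change the reporter proposes would look like the following fragment in each of the two files (illustrative; all other options omitted). Port 5000 serves the main Identity API, while 35357 was the legacy admin port, deprecated since Keystone consolidated onto a single port:

```ini
# In both glance-api.conf and glance-registry.conf
[keystone_authtoken]
# Use the main Identity API port rather than the deprecated
# admin port 35357:
auth_url = http://controller:5000
```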
[Yahoo-eng-team] [Bug 1753964] [NEW] Image remains in queued state for web-download if node_staging_uri is not set
Public bug reported: If the operator does not set 'node_staging_uri' in glance-api.conf, then an image imported using web-download remains in the queued state. Steps to reproduce: 1. Ensure glance-api is running under mod_wsgi (add WSGI_MODE=mod_wsgi in local.conf and run stack.sh) 2. Do not set node_staging_uri in glance-api.conf 3. Create an image using the curl command below: curl -i -X POST -H "x-auth-token: " http://192.168.0.13:9292/v2/images -d '{"container_format":"bare","disk_format":"raw","name":"Import web-download"}' 4. Import the image using the curl command below: curl -i -X POST -H "Content-type: application/json" -H "x-auth-token: " http://192.168.0.13:9292/v2/images//import -d '{"method":{"name":"web-download","uri":"https://www.openstack.org/assets/openstack-logo/2016R/OpenStack-Logo-Horizontal.eps.zip"}}' Expected result: The image should reach the active state. Actual result: The image remains in the queued state. API Logs:
Mar 07 09:26:07 ubuntu-16 glance-api[3499]: DEBUG glance_store.backend [-] Attempting to import store file {{(pid=3506) _load_store /usr/local/lib/python2.7/dist-packages/glance_store/backend.py:231}}
Mar 07 09:26:07 ubuntu-16 glance-api[3499]: DEBUG glance_store.capabilities [-] Store glance_store._drivers.filesystem.Store doesn't support updating dynamic storage capabilities. Please overwrite 'update_capabilities' method of the store to implement updating logics if needed.
{{(pid=3506) update_capabilities /usr/local/lib/python2.7/dist-packages/glance_store/capabilities.py:97}}
Mar 07 09:26:07 ubuntu-16 glance-api[3499]: Traceback (most recent call last):
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File "/usr/local/lib/python2.7/dist-packages/eventlet/greenpool.py", line 82, in _spawn_n_impl
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:     func(*args, **kwargs)
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File "/opt/stack/glance/glance/domain/proxy.py", line 238, in run
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:     self.base.run(executor)
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File "/opt/stack/glance/glance/notifier.py", line 581, in run
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:     super(TaskProxy, self).run(executor)
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File "/opt/stack/glance/glance/domain/proxy.py", line 238, in run
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:     self.base.run(executor)
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File "/opt/stack/glance/glance/domain/proxy.py", line 238, in run
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:     self.base.run(executor)
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File "/opt/stack/glance/glance/domain/__init__.py", line 438, in run
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:     executor.begin_processing(self.task_id)
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File "/opt/stack/glance/glance/async/taskflow_executor.py", line 144, in begin_processing
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:     super(TaskExecutor, self).begin_processing(task_id)
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File "/opt/stack/glance/glance/async/__init__.py", line 63, in begin_processing
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:     self._run(task_id, task.type)
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File "/opt/stack/glance/glance/async/taskflow_executor.py", line 165, in _run
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:     flow = self._get_flow(task)
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File "/opt/stack/glance/glance/async/taskflow_executor.py", line 134, in _get_flow
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:     invoke_kwds=kwds).driver
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File "/usr/local/lib/python2.7/dist-packages/stevedore/driver.py", line 61, in __init__
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:     warn_on_missing_entrypoint=warn_on_missing_entrypoint
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File "/usr/local/lib/python2.7/dist-packages/stevedore/named.py", line 81, in __init__
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:     verify_requirements)
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File "/usr/local/lib/python2.7/dist-packages/stevedore/extension.py", line 203, in _load_plugins
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:     self._on_load_failure_callback(self, ep, err)
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File "/usr/local/lib/python2.7/dist-packages/stevedore/extension.py", line 195, in _load_plugins
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:     verify_requirements,
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File "/usr/local/lib/python2.7/dist-packages/stevedore/named.py", line 158, in _load_one_plugin
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:     verify_requirements,
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File "/usr/local/lib/python2.7/dist-packages/stevedore/extension.py", line 227, in _load_one_plugin
Mar 07 09:26:07 ubuntu-16 glance-api[3499]:     obj = plugin(*invoke_args, **invoke_kwds)
Mar 07 09:26:07 ubuntu-16 glance-a
[Yahoo-eng-team] [Bug 1751051] Re: UnicodeEncodeError when creating user with non-ascii chars
We need to make sure that the default locale when booting a subiquity image is C.UTF-8, not C. This probably needs fixing in livecd-rootfs, and I don't think there are any code changes needed for subiquity. ** Project changed: subiquity => livecd-rootfs ** Project changed: livecd-rootfs => livecd-rootfs (Ubuntu) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1751051 Title: UnicodeEncodeError when creating user with non-ascii chars Status in cloud-init: Confirmed Status in cloud-init package in Ubuntu: Fix Released Status in livecd-rootfs package in Ubuntu: New Bug description: I was testing subiquity, and at the user creation prompt typed in "André D'Silva" for the username, and just "andre" for the login. The installer finished fine, but upon first login I couldn't log in. Booting into rescue mode showed me that the user had not been created. Checking the cloud-init logs, I found the UnicodeEncodeError.
2018-02-22 12:44:01,386 - __init__.py[DEBUG]: Adding user andre
2018-02-22 12:44:01,387 - util.py[WARNING]: Failed to create user andre
2018-02-22 12:44:01,387 - util.py[DEBUG]: Failed to create user andre
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/cloudinit/distros/__init__.py", line 463, in add_user
    util.subp(adduser_cmd, logstring=log_adduser_cmd)
  File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 1871, in subp
    env=env, shell=shell)
  File "/usr/lib/python3.6/subprocess.py", line 709, in __init__
    restore_signals, start_new_session)
  File "/usr/lib/python3.6/subprocess.py", line 1275, in _execute_child
    restore_signals, start_new_session, preexec_fn)
UnicodeEncodeError: 'ascii' codec can't encode character '\xe9' in position 4: ordinal not in range(128)

user-data contains this:

#cloud-config
hostname: sbqt
users:
- gecos: "Andr\xE9 D'Silva"
  groups: [adm, cdrom, dip, lpadmin, plugdev, sambashare, debian-tor, libvirtd, lxd, sudo]
  lock-passwd: false
  name: andre
  passwd: $6$UaxxahbQam4Ko1g7$WB5tNuCR84DvWwI7ovxDiofIdLP47pG2USPel2iIQV/qzzT3pAb1VtlbelCR2iCNRxCoJgsVafcNtqdfz1/IL1
  shell: /bin/bash
  ssh_import_id: ['lp:ahasenack']

cloud-init is 17.2-34-g644048e3-0ubuntu1 from bionic/main.

To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1751051/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
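The failure is reproducible outside cloud-init: under a plain C (ASCII) locale, Python 3.6 encodes subprocess arguments with the ASCII codec, which cannot represent the 'é' (U+00E9) in the gecos field. A minimal sketch of the same encoding step:

```python
gecos = "André D'Silva"

# This mirrors what happens when subprocess encodes the adduser
# arguments under an ASCII locale: U+00E9 is not representable,
# so encoding raises UnicodeEncodeError (the 'é' sits at index 4,
# matching "position 4" in the traceback above).
try:
    gecos.encode("ascii")
except UnicodeEncodeError as exc:
    print(exc)

# With a UTF-8 locale (e.g. C.UTF-8, as suggested above), the same
# string encodes without error:
encoded = gecos.encode("utf-8")
print(encoded)
```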