[Yahoo-eng-team] [Bug 1598370] [NEW] Got AttributeError when launching instance in Aarch64
Public bug reported:

Description
===========
Using nova to create an instance on AArch64 fails with AttributeError "'NoneType' object has no attribute 'parse_dom'" after "_get_guest_xml".

1. Use devstack to deploy openstack, with the default local.conf.
2. Upload the aarch64 image with glance:
   $ source ~/devstack/openrc admin admin
   $ glance image-create --name image-arm64.img --disk-format qcow2 --container-format bare --visibility public --file images/image-arm64-wily.qcow2 --progress
   $ glance image-create --name image-arm64.vmlinuz --disk-format aki --container-format aki --visibility public --file images/image-arm64-wily.vmlinuz --progress
   $ glance image-create --name image-arm64.initrd --disk-format ari --container-format ari --visibility public --file images/image-arm64-wily.initrd --progress
   $ IMAGE_UUID=$(glance image-list | grep image-arm64.img | awk '{ print $2 }')
   $ IMAGE_KERNEL_UUID=$(glance image-list | grep image-arm64.vmlinuz | awk '{ print $2 }')
   $ IMAGE_INITRD_UUID=$(glance image-list | grep image-arm64.initrd | awk '{ print $2 }')
   $ glance image-update --kernel-id ${IMAGE_KERNEL_UUID} --ramdisk-id ${IMAGE_INITRD_UUID} ${IMAGE_UUID}
3. Set the scsi model:
   $ glance image-update --property hw_disk_bus --property hw_scsi_model=virtio-scsi ${IMAGE_UUID}
4. Add a nova keypair:
   $ nova keypair-add default --pub-key ~/.ssh/id_rsa.pub
5. Launch the instance:
   $ image=$(nova image-list | egrep "image-arm64.img"'[^-]' | awk '{ print $2 }')
   $ nova boot --flavor m1.small --image ${image} --key-name default test-arm64
6. See the n-cpu log for the error information.

Expected result
===============
The guest spawns successfully.
Actual result
=============
Got the error log information as below:

2016-07-02 06:57:08.645 ERROR nova.compute.manager [req-c8805971-7d8a-4775-ae95-7ac62b284487 admin admin] [instance: c8ea40f1-2877-45d7-af4c-2cc6e8b966bd] Instance failed to spawn
2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: c8ea40f1-2877-45d7-af4c-2cc6e8b966bd] Traceback (most recent call last):
2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: c8ea40f1-2877-45d7-af4c-2cc6e8b966bd]   File "/opt/stack/nova/nova/compute/manager.py", line 2063, in _build_resources
2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: c8ea40f1-2877-45d7-af4c-2cc6e8b966bd]     yield resources
2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: c8ea40f1-2877-45d7-af4c-2cc6e8b966bd]   File "/opt/stack/nova/nova/compute/manager.py", line 1907, in _build_and_run_instance
2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: c8ea40f1-2877-45d7-af4c-2cc6e8b966bd]     block_device_info=block_device_info)
2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: c8ea40f1-2877-45d7-af4c-2cc6e8b966bd]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2665, in spawn
2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: c8ea40f1-2877-45d7-af4c-2cc6e8b966bd]     post_xml_callback=gen_confdrive)
2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: c8ea40f1-2877-45d7-af4c-2cc6e8b966bd]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4860, in _create_domain_and_network
2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: c8ea40f1-2877-45d7-af4c-2cc6e8b966bd]     post_xml_callback=post_xml_callback)
2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: c8ea40f1-2877-45d7-af4c-2cc6e8b966bd]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4784, in _create_domain
2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: c8ea40f1-2877-45d7-af4c-2cc6e8b966bd]     post_xml_callback()
2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: c8ea40f1-2877-45d7-af4c-2cc6e8b966bd]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 3137, in _create_configdrive
2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: c8ea40f1-2877-45d7-af4c-2cc6e8b966bd]     instance)
2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: c8ea40f1-2877-45d7-af4c-2cc6e8b966bd]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 7547, in _build_device_metadata
2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: c8ea40f1-2877-45d7-af4c-2cc6e8b966bd]     guest_config.parse_dom(xml_dom)
2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: c8ea40f1-2877-45d7-af4c-2cc6e8b966bd]   File "/opt/stack/nova/nova/virt/libvirt/config.py", line 2193, in parse_dom
2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: c8ea40f1-2877-45d7-af4c-2cc6e8b966bd]     obj.parse_dom(d)
2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: c8ea40f1-2877-45d7-af4c-2cc6e8b966bd]   File "/opt/stack/nova/nova/virt/libvirt/config.py", line 1402, in parse_dom
2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: c8ea40f1-2877-45d7-af4c-2cc6e8b966bd]     obj.parse_dom(c)
2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: c8ea40f1-2877-45d
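The trace above ends in "'NoneType' object has no attribute 'parse_dom'", i.e. while walking the generated guest XML, a device element produced no config object and None.parse_dom(...) was called. The following is a minimal, self-contained sketch of that failure mode and the obvious guard against it — the classes and factory here are illustrative stand-ins, not nova's actual libvirt config code:

```python
import xml.etree.ElementTree as etree

class Disk(object):
    """Stand-in for a libvirt device config class that knows parse_dom()."""
    def parse_dom(self, node):
        self.target_dev = node.find('target').get('dev')

def device_factory(node):
    # Only one device type is modelled here; anything unrecognized
    # (e.g. an AArch64-specific device) yields None -- the bug's trigger.
    known = {'disk': Disk}
    cls = known.get(node.tag)
    return cls() if cls else None

def parse_devices(xml):
    devices = []
    for node in etree.fromstring(xml).find('devices'):
        obj = device_factory(node)
        if obj is None:
            # Without this guard, obj.parse_dom(node) raises the
            # AttributeError seen in the n-cpu log above.
            continue
        obj.parse_dom(node)
        devices.append(obj)
    return devices

xml = ("<domain><devices>"
       "<disk><target dev='sda'/></disk>"
       "<panic/>"  # unrecognized by this sketch's factory
       "</devices></domain>")
print([d.target_dev for d in parse_devices(xml)])  # ['sda']
```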
[Yahoo-eng-team] [Bug 1259292] Re: Some tests use assertEqual(observed, expected) , the argument order is wrong
** Also affects: python-neutronclient
   Importance: Undecided
       Status: New

** Also affects: neutron
   Importance: Undecided
       Status: New

** Changed in: neutron
     Assignee: (unassigned) => Yan Songming (songmingyan)

** Changed in: python-neutronclient
     Assignee: (unassigned) => Yan Songming (songmingyan)

** No longer affects: neutron

** Also affects: networking-sfc
   Importance: Undecided
       Status: New

** Changed in: networking-sfc
     Assignee: (unassigned) => Yan Songming (songmingyan)

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259292

Title:
  Some tests use assertEqual(observed, expected), the argument order is wrong

Status in Astara: In Progress
Status in Barbican: In Progress
Status in Blazar: New
Status in Ceilometer: Invalid
Status in Cinder: Fix Released
Status in congress: Fix Released
Status in daisycloud-core: New
Status in Designate: Fix Released
Status in Freezer: In Progress
Status in Glance: Fix Released
Status in glance_store: Fix Released
Status in Higgins: New
Status in OpenStack Dashboard (Horizon): In Progress
Status in OpenStack Identity (keystone): Fix Released
Status in Magnum: Fix Released
Status in Manila: Fix Released
Status in Mistral: Fix Released
Status in Murano: Fix Released
Status in networking-sfc: New
Status in OpenStack Compute (nova): Won't Fix
Status in os-brick: In Progress
Status in python-ceilometerclient: Invalid
Status in python-cinderclient: Fix Released
Status in python-designateclient: Fix Committed
Status in python-glanceclient: Fix Released
Status in python-mistralclient: Fix Released
Status in python-neutronclient: New
Status in python-solumclient: Fix Released
Status in Python client library for Zaqar: Fix Released
Status in Sahara: Fix Released
Status in sqlalchemy-migrate: New
Status in SWIFT: New
Status in tacker: New
Status in tempest: New
Status in zaqar: Fix Released

Bug description:
  The test cases
will produce a confusing error message if the tests ever fail, so this is worth fixing. To manage notifications about this bug go to: https://bugs.launchpad.net/astara/+bug/1259292/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1589750] Re: Request stable/liberty 1.0.3 release for networking-ofagent
I pushed the tag. I also +1'd the releases patch:
https://review.openstack.org/334687

From the Neutron perspective, all done.

** Changed in: neutron
       Status: New => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1589750

Title:
  Request stable/liberty 1.0.3 release for networking-ofagent

Status in networking-ofagent: New
Status in neutron: Fix Released

Bug description:
  Please release a new version of stable/liberty for networking-ofagent.

  tag: 1.0.3
  commit id:
  commit dc9a94137c140ea3f8ef0d6c890f3469d8978fa0
  Author: OpenStack Proposal Bot
  Date: Wed May 18 14:01:19 2016 +

      Updated from global requirements

      Change-Id: I443f7de70d853e4a30873513e9f5b14eaf8b2c8e

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-ofagent/+bug/1589750/+subscriptions
[Yahoo-eng-team] [Bug 1589502] Re: Request Mitaka release for networking-bagpipe
** Changed in: neutron
       Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1589502

Title:
  Request Mitaka release for networking-bagpipe

Status in BaGPipe: New
Status in neutron: Fix Released

Bug description:
  Can you please do a release of networking-bagpipe from the master branch?

  Commit: 870d281eeb707fbb6c4de431d764cebb586f872e
  Version: 4.0.0 (first release, but number chosen to be in sync with networking-bgpvpn)

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-bagpipe/+bug/1589502/+subscriptions
[Yahoo-eng-team] [Bug 1586009] Re: Requesting stable/mitaka release for networking-onos
** Changed in: neutron
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1586009

Title:
  Requesting stable/mitaka release for networking-onos

Status in networking-onos: New
Status in neutron: Fix Released

Bug description:
  Could you please tag and release networking-onos, as it currently stands at:
  https://git.openstack.org/cgit/openstack/networking-onos

  root1@openstack:~/mydata/projects/networking-onos$ git log -1
  commit 871248650a8823d12a7be1e25d4803b298a8e81f
  Author: OpenStack Proposal Bot
  Date: Sat Apr 30 18:04:56 2016 +

      Updated from global requirements

      Change-Id: If2dd10f2451f4b52b4e419427ff6deb58961cb68

  This will be the second release of networking-onos, so I guess it will be 2.0.0.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-onos/+bug/1586009/+subscriptions
[Yahoo-eng-team] [Bug 1589501] Re: Request Mitaka maintenance release for networking-bgpvpn
** Changed in: neutron
       Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1589501

Title:
  Request Mitaka maintenance release for networking-bgpvpn

Status in networking-bgpvpn: Fix Released
Status in neutron: Fix Released

Bug description:
  Please release a maintenance version of networking-bgpvpn from our
  stable/mitaka branch, tag abe26499aae8b4b875789c20a9825cd96b3e4b52,
  with release number 4.0.1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/bgpvpn/+bug/1589501/+subscriptions
[Yahoo-eng-team] [Bug 1598373] [NEW] the result of "nova hypervisor-servers hypervisor-name" is error
Public bug reported:

version: master

problem: the output of "nova hypervisor-servers <hypervisor-name>" is wrong. For example, the output of "nova hypervisor-servers dell-nova-1" will also include the servers on dell-nova-11 when both dell-nova-1 and dell-nova-11 hypervisor nodes exist.

** Affects: nova
   Importance: Undecided
     Assignee: liuxiuli (liu-lixiu)
       Status: New

** Changed in: nova
     Assignee: (unassigned) => liuxiuli (liu-lixiu)

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1598373

Title:
  the result of "nova hypervisor-servers hypervisor-name" is error

Status in OpenStack Compute (nova): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1598373/+subscriptions
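The report above describes a classic substring-match pitfall. A minimal sketch of the behaviour — the hypervisor names and data are illustrative, not nova's code, which does this lookup against the database:

```python
# A toy mapping of hypervisor hostname -> servers it hosts.
hypervisors = {
    'dell-nova-1': ['vm-a'],
    'dell-nova-11': ['vm-b', 'vm-c'],
}

def servers_substring(name):
    # Buggy behaviour: a "hostname contains <name>" match, which is
    # what a SQL LIKE '%name%' style lookup amounts to.
    return [s for host, vms in hypervisors.items() if name in host for s in vms]

def servers_exact(name):
    # Fixed behaviour: exact hostname comparison.
    return hypervisors.get(name, [])

print(sorted(servers_substring('dell-nova-1')))  # ['vm-a', 'vm-b', 'vm-c']
print(servers_exact('dell-nova-1'))              # ['vm-a']
```

The substring variant returns the servers of dell-nova-11 when asked about dell-nova-1, exactly as reported.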
[Yahoo-eng-team] [Bug 1259292] Re: Some tests use assertEqual(observed, expected) , the argument order is wrong
** No longer affects: python-neutronclient

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259292

To manage notifications about this bug go to:
https://bugs.launchpad.net/astara/+bug/1259292/+subscriptions
[Yahoo-eng-team] [Bug 1598374] [NEW] select an not right host when resizing an instance with hw:numa_nodes=X in flavor.
Public bug reported:

version: master

problem: First, I create an instance with hw:numa_nodes=2 in the flavor. Then I resize it to a flavor with hw:numa_nodes=1, but NUMATopologyFilter still requires a host with two available NUMA nodes. I think this is wrong.

** Affects: nova
   Importance: Undecided
     Assignee: liuxiuli (liu-lixiu)
       Status: New

** Changed in: nova
     Assignee: (unassigned) => liuxiuli (liu-lixiu)

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1598374

Title:
  select an not right host when resizing an instance with hw:numa_nodes=X in flavor.

Status in OpenStack Compute (nova): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1598374/+subscriptions
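The gist of the report is which flavor the filter reads its NUMA requirement from during a resize. A tiny illustrative sketch (names are hypothetical, not nova's scheduler code) of the fix direction — derive the requirement from the target flavor when one is present:

```python
def required_numa_nodes(current_flavor, target_flavor=None):
    """Return how many NUMA nodes the instance needs on the host.

    During a resize, the requirement should come from the flavor being
    resized *to*, not the one the instance was originally booted with.
    """
    flavor = target_flavor if target_flavor is not None else current_flavor
    return int(flavor.get('hw:numa_nodes', 1))

boot_flavor = {'hw:numa_nodes': '2'}
resize_flavor = {'hw:numa_nodes': '1'}

print(required_numa_nodes(boot_flavor))                 # boot: 2 nodes needed
print(required_numa_nodes(boot_flavor, resize_flavor))  # resize: 1 node needed
```

If the filter instead keeps using the boot flavor's value, it demands two free NUMA nodes for a resize that only needs one, which matches the reported behaviour.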
[Yahoo-eng-team] [Bug 1598062] Re: Unit test fails on python3.5
Reviewed:  https://review.openstack.org/336443
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=b7033277dc19493b4a53705b5ecae7c3c77da999
Submitter: Jenkins
Branch:    master

commit b7033277dc19493b4a53705b5ecae7c3c77da999
Author: Jens Rosenboom
Date: Fri Jul 1 10:58:12 2016 +0200

    Fix api_validation for Python 3

    * Convert the argument to base64.decodestring() to bytes for PY3.
    * Fix an issue with python3.5 where the format of an internal error
      message changed.

    Change-Id: If8184c190e76d8cefb5b097f8fa8cb7564207103
    Closes-Bug: 1598062

** Changed in: nova
       Status: Confirmed => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1598062

Title:
  Unit test fails on python3.5

Status in OpenStack Compute (nova): Fix Released

Bug description:
  This is similar to https://launchpad.net/bugs/1559191 but in this case
  it looks like the embedded error message comes from the jsonschema library:

  ==============================
  Failed 1 tests - output below:
  ==============================

  nova.tests.unit.test_api_validation.PatternPropertiesTestCase.test_validate_patternProperties_fails

  Captured traceback:
  ~~~~~~~~~~~~~~~~~~~
  b'Traceback (most recent call last):'
  b' File "/home/ubuntu/src/nova/nova/api/validation/validators.py", line 258, in validate'
  b'self.validator.validate(*args, **kwargs)'
  b' File "/home/ubuntu/src/nova/.tox/py35/lib/python3.5/site-packages/jsonschema/validators.py", line 122, in validate'
  b'for error in self.iter_errors(*args, **kwargs):'
  b' File "/home/ubuntu/src/nova/.tox/py35/lib/python3.5/site-packages/jsonschema/validators.py", line 98, in iter_errors'
  b'for error in errors:'
  b' File "/home/ubuntu/src/nova/.tox/py35/lib/python3.5/site-packages/jsonschema/_validators.py", line 25, in additionalProperties'
  b'extras = set(_utils.find_additional_properties(instance, schema))'
  b' File "/home/ubuntu/src/nova/.tox/py35/lib/python3.5/site-packages/jsonschema/_utils.py", line 100, in find_additional_properties'
  b'if patterns and re.search(patterns, property):'
  b' File "/home/ubuntu/src/nova/.tox/py35/lib/python3.5/re.py", line 173, in search'
  b'return _compile(pattern, flags).search(string)'
  b'TypeError: expected string or bytes-like object'
  b''
  b'During handling of the above exception, another exception occurred:'
  b''
  b'Traceback (most recent call last):'
  b' File "/home/ubuntu/src/nova/nova/tests/unit/test_api_validation.py", line 101, in check_validation_error'
  b'method(body=body, req=req,)'
  b' File "/home/ubuntu/src/nova/nova/api/validation/__init__.py", line 71, in wrapper'
  b"schema_validator.validate(kwargs['body'])"
  b' File "/home/ubuntu/src/nova/nova/api/validation/validators.py", line 277, in validate'
  b'raise exception.ValidationError(detail=detail)'
  b'nova.exception.ValidationError: expected string or bytes-like object'
  b''
  b'During handling of the above exception, another exception occurred:'
  b''
  b'Traceback (most recent call last):'
  b' File "/home/ubuntu/src/nova/nova/tests/unit/test_api_validation.py", line 359, in test_validate_patternProperties_fails'
  b'expected_detail=detail)'
  b' File "/home/ubuntu/src/nova/nova/tests/unit/test_api_validation.py", line 106, in check_validation_error'
  b"'Exception details did not match expected')"
  b' File "/home/ubuntu/src/nova/.tox/py35/lib/python3.5/site-packages/testtools/testcase.py", line 411, in assertEqual'
  b'self.assertThat(observed, matcher, message)'
  b' File "/home/ubuntu/src/nova/.tox/py35/lib/python3.5/site-packages/testtools/testcase.py", line 498, in assertThat'
  b'raise mismatch_error'
  b"testtools.matchers._impl.MismatchError: 'expected string or buffer' != 'expected string or bytes-like object': Exception details did not match expected"
  b''

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1598062/+subscriptions
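The first bullet of the commit message — converting the base64 argument to bytes for Python 3 — reflects a general PY2/PY3 difference: on Python 3, base64 decoding requires a bytes-like object. A small sketch of the portable pattern (the helper name is hypothetical; nova's actual fix lives in its validation code):

```python
import base64

def decode_b64(data):
    """Decode base64 input that may arrive as either str or bytes.

    On Python 3, base64 decoding rejects str, so encode it to bytes
    first; bytes input passes straight through.
    """
    if isinstance(data, str):
        data = data.encode('utf-8')
    return base64.b64decode(data)

print(decode_b64('aGVsbG8='))   # b'hello' -- str input now works on PY3
print(decode_b64(b'aGVsbG8='))  # b'hello' -- bytes input unchanged
```

(`base64.decodestring()`, which the commit mentions, is a deprecated alias; `b64decode()`/`decodebytes()` are the modern spellings, but the bytes requirement is the same.)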
[Yahoo-eng-team] [Bug 1598422] [NEW] HA router is scheduled to imcompatible l3 agent
Public bug reported:

When router_auto_schedule is set to true and the l3 agent count for an HA router hasn't reached max_l3_agents_per_router, auto_schedule_routers will schedule the HA router to an incompatible l3 agent. This happens because L3Scheduler doesn't check whether an l3 agent is compatible with the router being scheduled, as it does in get_l3_agent_candidates; see https://github.com/openstack/neutron/blob/master/neutron/scheduler/l3_agent_scheduler.py#L313.

How to reproduce:

Scenario 1
- legacy mode, three network nodes: host_1, host_2, host_3. Configure host_3 with handle_internal_only_routers = false.
- create an HA router router-A; ensure that router-A is hosted only by host_1 and host_2.
- restart the l3 agent on host_3; router-A is then hosted by three agents, including the agent on host_3.

Scenario 2
- dvr mode, two network nodes: host_1 and host_2 are configured as dvr_snat; one compute node: host_3 is configured as dvr.
- create an HA router router-A; ensure that router-A is hosted only by host_1 and host_2.
- restart the l3 agent on host_3; router-A is then hosted by three agents, including the agent on host_3.

Expected behavior:
- auto_schedule_routers() and schedule() in L3Scheduler should choose candidates by consistent standards. An HA router shouldn't be scheduled to an incompatible agent by auto_schedule_routers.

Affected versions:
- I saw this issue in Kilo and reproduced it on the master branch. I guess liberty and mitaka are affected as well.

** Affects: neutron
   Importance: Undecided
     Assignee: Q.Tian (tianquan23)
       Status: New

** Tags: l3-ha

** Tags added: l3-ha

** Changed in: neutron
     Assignee: (unassigned) => Q.Tian (tianquan23)

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1598422

Title:
  HA router is scheduled to imcompatible l3 agent

Status in neutron: New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1598422/+subscriptions
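The expected behavior above — applying the same compatibility test in auto-scheduling as in get_l3_agent_candidates — can be sketched as a predicate applied before binding. This is an illustrative toy, not neutron's actual scheduler code; the fields and their meanings are simplified assumptions based on the scenarios in the report:

```python
def agent_compatible(agent, router):
    """Sketch of a candidate check shared by schedule() and auto-schedule."""
    conf = agent['configurations']
    if agent['agent_mode'] == 'dvr':
        # dvr-only compute-node agents don't host routers centrally
        # (scenario 2's host_3).
        return False
    if not router['external_gateway'] and not conf['handle_internal_only_routers']:
        # scenario 1's host_3: refuses internal-only routers.
        return False
    return True

agents = [
    {'host': 'host_1', 'agent_mode': 'dvr_snat',
     'configurations': {'handle_internal_only_routers': True}},
    {'host': 'host_3', 'agent_mode': 'dvr',
     'configurations': {'handle_internal_only_routers': True}},
]
router_a = {'ha': True, 'external_gateway': True}

candidates = [a['host'] for a in agents if agent_compatible(a, router_a)]
print(candidates)  # ['host_1'] -- host_3 is filtered out, unlike the buggy path
```

The bug is precisely that the auto-schedule path skips a check like this and binds router-A to host_3 when its agent restarts.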
[Yahoo-eng-team] [Bug 1597613] Re: OVS firewall fails if of_interface=native and ovsdb_interface=native
Reviewed:  https://review.openstack.org/335800
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=ce644b821ad1b86120f47633816eaef8c2c1235e
Submitter: Jenkins
Branch:    master

commit ce644b821ad1b86120f47633816eaef8c2c1235e
Author: IWAMOTO Toshihiro
Date: Thu Jun 30 14:51:01 2016 +0900

    Fix OVSBridge.set_protocols arg

    For native ovsdb_interface compatibility, use a list instead of a
    comma separated string for the set_protocols argument. The native
    interface failed silently.

    Change-Id: Idc6fce9f943b2fe64f668bcfaf9ed40fcf47034c
    Closes-Bug: 1597613

** Changed in: neutron
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1597613

Title:
  OVS firewall fails if of_interface=native and ovsdb_interface=native

Status in neutron: Fix Released

Bug description:
  OVSFirewallDriver fails to run with the following errors. A fix is to follow.

  2016-06-30 13:14:32.721 DEBUG neutron.agent.linux.utils [req-e89302ff-35eb-4bd8-bb6a-9e705fb1cdc2 None None] Running command (rootwrap daemon): ['ovs-ofctl', 'add-flows', 'br-int', '-'] from (pid=26921) execute_rootwrap_daemon /opt/stack/neutron/neutron/agent/linux/utils.py:98
  2016-06-30 13:14:32.725 ERROR neutron.agent.linux.utils [req-e89302ff-35eb-4bd8-bb6a-9e705fb1cdc2 None None] Exit code: 1; Stdin: hard_timeout=0,idle_timeout=0,priority=0,table=71,cookie=13680950857646023732,actions=drop; Stdout: ; Stderr: 2016-06-30T04:14:32Z|1|vconn|WARN|unix:/var/run/openvswitch/br-int.mgmt: version negotiation failed (we support version 0x01, peer supports version 0x04)
  ovs-ofctl: br-int: failed to connect to socket (Broken pipe)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1597613/+subscriptions
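The commit message's "use a list instead of a comma separated string" is a type-contract mismatch between two backends of the same API. A hypothetical sketch (this wrapper is not neutron's OVSDB code) of making the contract explicit so the string form fails loudly instead of silently:

```python
def set_protocols_native(protocols):
    """Accept the protocol list the native OVSDB interface expects.

    The vsctl backend happened to tolerate 'OpenFlow10,OpenFlow13' as a
    single string; the native backend needs ['OpenFlow10', 'OpenFlow13'].
    Rejecting strings here surfaces the bug immediately rather than
    leaving the bridge negotiating only OpenFlow 1.0, as in the log above.
    """
    if isinstance(protocols, str) or not isinstance(protocols, (list, tuple)):
        raise TypeError('expected a list of protocol names, got %r' % (protocols,))
    return list(protocols)

print(set_protocols_native(['OpenFlow10', 'OpenFlow13']))
# set_protocols_native('OpenFlow10,OpenFlow13') would raise TypeError
```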
[Yahoo-eng-team] [Bug 1598466] [NEW] Neutron VPNaas gate functional tests failing on race condition
Public bug reported:

gate-neutron-vpnaas-dsvm-functional-sswan and gate-neutron-vpnaas-dsvm-functional are failing on a race condition in test_ipsec_site_connections_with_l3ha_routers:

ft1.4: neutron_vpnaas.tests.functional.common.test_scenario.TestIPSecScenario.test_ipsec_site_connections_with_l3ha_routers_StringException: Empty attachments:
  pythonlogging:''
  stderr
  stdout

Traceback (most recent call last):
  File "neutron_vpnaas/tests/functional/common/test_scenario.py", line 667, in test_ipsec_site_connections_with_l3ha_routers
    self.check_ping(site1, site2, 0)
  File "neutron_vpnaas/tests/functional/common/test_scenario.py", line 519, in check_ping
    timeout=8, count=4)
  File "/opt/stack/new/neutron/neutron/tests/common/net_helpers.py", line 110, in assert_ping
    dst_ip])
  File "/opt/stack/new/neutron/neutron/agent/linux/ip_lib.py", line 876, in execute
    log_fail_as_error=log_fail_as_error, **kwargs)
  File "/opt/stack/new/neutron/neutron/agent/linux/utils.py", line 138, in execute
    raise RuntimeError(msg)
RuntimeError: Exit code: 1; Stdin: ; Stdout: PING 35.4.2.5 (35.4.2.5) 56(84) bytes of data.

--- 35.4.2.5 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
; Stderr:

** Affects: neutron
   Importance: High
       Status: New

** Tags: vpnaas

** Changed in: neutron
   Importance: Critical => High

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1598466

Title:
  Neutron VPNaas gate functional tests failing on race condition

Status in neutron: New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1598466/+subscriptions
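A common way to harden a check like the failing `check_ping` against establishment races is to retry the probe for a bounded window rather than asserting on the first ping. This helper is a hypothetical sketch, not the vpnaas test code; `probe` stands in for a wrapper around something like `net_helpers.assert_ping`:

```python
import time

def wait_for_ping(probe, timeout=30, interval=2):
    """Retry probe() until it returns True or timeout seconds elapse."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if probe():
            return True
        time.sleep(interval)
    return False

# Simulated flaky connectivity: the IPsec tunnel comes up on the third try.
attempts = iter([False, False, True])
assert wait_for_ping(lambda: next(attempts), timeout=10, interval=0)
print('connectivity confirmed within the retry window')
```

Whether a retry is the right fix or merely masks a real ordering bug in the l3-ha/IPsec setup is exactly what a bug like this has to determine.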
[Yahoo-eng-team] [Bug 1586931] Re: TestServerBasicOps: Test fails when deleting server and floating ip almost at the same time
I hit this. I can reproduce it almost every time on my env using linuxbridge+vxlan. The nova trace is:

2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions [req-bf41dac1-8fc0-4fd6-9a35-d754cea79057 9a0be6e4b8bf4cadb4a43401696fec19 48935f9a5ed84703973c70dd70859b7f - - -] Unexpected exception in API method
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions Traceback (most recent call last):
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 478, in wrapped
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions     return f(*args, **kwargs)
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/openstack/compute/floating_ips.py", line 173, in delete
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions     context, instance, floating_ip)
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1527, in disassociate_and_release_floating_ip
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions     raise_if_associated=False)
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1536, in _release_floating_ip
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions     client.delete_floatingip(fip['id'])
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 102, in with_params
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions     ret = self.function(instance, *args, **kwargs)
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 751, in delete_floatingip
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions     return self.delete(self.floatingip_path % (floatingip))
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 289, in delete
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions     headers=headers, params=params)
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 270, in retry_request
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions     headers=headers, params=params)
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 211, in do_request
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions     self._handle_fault_response(status_code, replybody)
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 185, in _handle_fault_response
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions     exception_handler_v20(status_code, des_error_body)
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 70, in exception_handler_v20
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions     status_code=status_code)
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions PortNotFoundClient: Port f4f11381-dc3b-41b2-94ca-4a9f494c0372 could not be found.
2016-07-02 16:20:52.670 9993 ERROR nova.api.openstack.extensions

Two operations occur almost at the same time: the VM deletion (which triggers the Neutron port deletion) and the floating IP deletion. Nova sends a request to Neutron to delete the floating IP. When a floating IP is deleted, Neutron fetches the port associated with the floating IP in order to send a network change event notification to Nova. The port lookup fails with PortNotFound because, in the meantime, the Neutron port that the VM was using has been deleted. The floating IP request then fails because Neutron sends a PortNotFound error back to Nova.

** Also affects: neutron
   Importance: Undecided
       Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1586931

Title:
  TestServerBasicOps: Test fails when deleting server and floating IP almost at the same time

Status in neutron: New
Status in OpenStack Compute (nova): Incomplete
Status in tempest: In Progress

Bug description:
  In tempest.scenario.test_server_basic_ops.TestServerBasicOps.test_server_basic_ops, after the last step:

    self.servers_client.delete_server(self.instance['id'])

  it doesn't wait for the server to be deleted, and then deletes the floating IP immediately in the cleanup; this causes a failure. Here is the partial
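The race described above suggests the test-side fix: wait until the server is actually gone before releasing its floating IP. Below is a minimal sketch of such a polling waiter; it assumes nothing about tempest's real helpers, and the names wait_for_deletion and server_gone are purely illustrative.

```python
import time


def wait_for_deletion(is_gone, timeout=60.0, interval=1.0):
    """Poll is_gone() until it returns True or `timeout` seconds elapse.

    Returns True if the resource disappeared in time, False otherwise.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if is_gone():
            return True
        time.sleep(interval)
    return is_gone()  # one last check at the deadline


# Usage sketch: a fake "server is gone" probe that succeeds on the 3rd poll,
# standing in for e.g. a GET /servers/{id} that eventually returns 404.
state = {'polls': 0}

def server_gone():
    state['polls'] += 1
    return state['polls'] >= 3

deleted = wait_for_deletion(server_gone, timeout=5.0, interval=0.01)
print(deleted)  # True
```

With such a waiter in the cleanup path, the floating IP is only deleted after the port teardown has finished, so Neutron no longer races against the port removal.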
[Yahoo-eng-team] [Bug 1593719] Re: StaleDataError: DELETE statement on table 'standardattributes' expected to delete 1 row(s); 0 were matched
Reviewed:  https://review.openstack.org/331137
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=f3816cb8bd68f406d7f8c5b80fbd3352b493ca70
Submitter: Jenkins
Branch:    master

commit f3816cb8bd68f406d7f8c5b80fbd3352b493ca70
Author: Ihar Hrachyshka
Date:   Wed Jun 29 15:35:44 2016 +0200

    ml2: postpone exception logs to when retry mechanism fails to recover

    Since Ia2d911a6a90b1baba1d5cc36f7c625e156a2bc33, we use the version_id_col SQLAlchemy feature to routinely bump revision numbers for resources. By doing so, SQLAlchemy also enforces the expected row count on any UPDATE and DELETE, which may no longer be valid when the transaction is actually applied. In that case, the library raises StaleDataError: http://docs.sqlalchemy.org/en/latest/orm/exceptions.html#sqlalchemy.orm.exc.StaleDataError

    The exception is then logged by ml2 and bubbles up to the API layer, where the retry mechanism correctly catches it and issues another attempt. If the API layer does not retry on an exception, it already logs the error, including the traceback. In ml2, it's too early to decide whether an exception is worth logging; the plugin should instead silently allow all unexpected exceptions to bubble up and be dealt with by the API layer.

    At the same time, some details are known only at the plugin level and are not easily deducible from the API request details. That's why we save details about the error on the exception object that bubbles up into the API layer, where we are ready to decide whether those details are worth logging.

    Change-Id: I848df0aef5381e50dfb58e46d7a652113ac27a49
    Closes-Bug: #1593719

** Changed in: neutron
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1593719

Title:
  StaleDataError: DELETE statement on table 'standardattributes' expected to delete 1 row(s); 0 were matched

Status in neutron: Fix Released

Bug description:
  This error started to show up in neutron-server logs in the gate after https://review.openstack.org/#/c/328185/5 landed. The reason is that using version_id_col makes UPDATE and DELETE filter by the revision number and raise StaleDataError on a mismatch. That's documented in: http://docs.sqlalchemy.org/en/latest/orm/exceptions.html#sqlalchemy.orm.exc.StaleDataError

  Once the exception is raised, it's correctly caught by the retry mechanism. We should consider StaleDataErrors a usual operation mode and hence avoid logging the exceptions in the ml2 plugin. Instead, we should bubble exceptions up to the retry layer and allow it to determine whether to log them.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1593719/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
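The mechanics behind this bug can be reproduced outside Neutron with a small, self-contained SQLAlchemy sketch. This is not Neutron code: the Port model, table, and column names below are illustrative stand-ins.

```python
# Minimal standalone sketch of the failure mode described above: with
# version_id_col, SQLAlchemy appends the revision number to the WHERE clause
# of UPDATE/DELETE and raises StaleDataError when 0 rows match.
from sqlalchemy import Column, Integer, String, create_engine, text
from sqlalchemy.orm import Session, declarative_base
from sqlalchemy.orm.exc import StaleDataError

Base = declarative_base()


class Port(Base):
    __tablename__ = 'ports'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    revision_number = Column(Integer, nullable=False)
    __mapper_args__ = {'version_id_col': revision_number}


engine = create_engine('sqlite://')  # in-memory DB, single-threaded demo
Base.metadata.create_all(engine)

session = Session(engine, expire_on_commit=False)
session.add(Port(id=1, name='p1'))
session.commit()  # SQLAlchemy sets revision_number = 1

port = session.get(Port, 1)  # instance still carries revision_number == 1
session.commit()             # close the read transaction

# Simulate a concurrent writer bumping the revision behind our back.
with engine.begin() as conn:
    conn.execute(text(
        'UPDATE ports SET revision_number = revision_number + 1 '
        'WHERE id = 1'))

# Our DELETE is emitted as:
#   DELETE FROM ports WHERE id = 1 AND revision_number = 1
# which now matches 0 rows, so the ORM raises StaleDataError.
caught = None
session.delete(port)
try:
    session.commit()
except StaleDataError as exc:
    caught = exc
    session.rollback()

print(caught)
```

This is exactly why the fix treats StaleDataError as a normal outcome: the API-layer retry reloads the row (with its fresh revision number) and reissues the statement, so logging it as an error inside ml2 was just noise.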
[Yahoo-eng-team] [Bug 1598527] [NEW] next() is incompatible in test_network_ip_availability.py
Public bug reported:

In test_network_ip_availability.py:89, I think

    subnet_cidr = cidr.subnet(mask_bits).next()

should be

    subnet_cidr = next(cidr.subnet(mask_bits))

** Affects: neutron
   Importance: Undecided
     Assignee: QunyingRan (ran-qunying)
       Status: New

** Changed in: neutron
     Assignee: (unassigned) => QunyingRan (ran-qunying)

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1598527

Title:
  next() is incompatible in test_network_ip_availability.py

Status in neutron: New

Bug description:
  In test_network_ip_availability.py:89, I think 'subnet_cidr = cidr.subnet(mask_bits).next()' should be 'subnet_cidr = next(cidr.subnet(mask_bits))'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1598527/+subscriptions
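For background, the incompatibility is a Python 3 portability issue: generators expose a .next() method on Python 2 only; Python 3 renamed it to __next__(), and the built-in next() dispatches correctly on both. A tiny sketch illustrates this; subnets() below is an illustrative stand-in for netaddr's cidr.subnet(mask_bits) generator, so the example has no third-party dependency.

```python
def subnets():
    """Illustrative stand-in for netaddr's cidr.subnet(mask_bits) generator."""
    for i in range(4):
        yield '192.168.%d.0/24' % i


gen = subnets()

# Portable spelling: the built-in next() calls gen.__next__() on Python 3
# (and gen.next() on Python 2).
first = next(gen)
print(first)  # 192.168.0.0/24

# The old spelling fails on Python 3, which is exactly the reported bug:
try:
    gen.next()
except AttributeError as exc:
    print('Python 3:', exc)
```

So the suggested one-line change makes the test work identically on both interpreter versions.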
[Yahoo-eng-team] [Bug 1422674] Re: Glance can't share images via Horizon dashboard but can show shared images.
[Expired for OpenStack Dashboard (Horizon) because there has been no activity for 60 days.]

** Changed in: horizon
       Status: Incomplete => Expired

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1422674

Title:
  Glance can't share images via Horizon dashboard but can show shared images.

Status in OpenStack Dashboard (Horizon): Expired

Bug description:
  Glance has functions for sharing uploaded images with other tenants, but this functionality is unavailable via the Horizon Dashboard. In Project > Compute > Images we can see a category named 'Shared with Me', but a user can't share an image. From the CLI this works correctly.

  Steps:
  1. Deploy OS with the Horizon Dashboard.
  2. Upload an image to Glance from the admin tenant and don't set Public = True in the image options.
  3. SSH to the controller node, use '. openrc'.
  4. Execute 'glance member-create' to share the image with another non-admin tenant.
  5. Log into Horizon as the user with whom we shared the image.
  6. Navigate to Project > Compute > Images.

  Actual result: the image shared with this tenant appeared in the 'Shared With Me' category.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1422674/+subscriptions