[Yahoo-eng-team] [Bug 1614831] Re: gate-networking-vsphere-python27-ubuntu-xenial fails for networking vsphere patches
** No longer affects: neutron

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1614831

Title:
  gate-networking-vsphere-python27-ubuntu-xenial fails for networking
  vsphere patches

Status in networking-vsphere:
  New

Bug description:
  I have uploaded test case patches for ovsvapp in the networking-vsphere
  repo. They fail gate-networking-vsphere-python27-ubuntu-xenial. For
  reference, my review is https://review.openstack.org/#/c/357266/

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-vsphere/+bug/1614831/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1614831] Re: gate-networking-vsphere-python27-ubuntu-xenial fails for networking vsphere patches
** Also affects: networking-vsphere
   Importance: Undecided
       Status: New

** Changed in: neutron
       Status: New => Incomplete

https://bugs.launchpad.net/bugs/1614831

Status in networking-vsphere:
  New
Status in neutron:
  Incomplete

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-vsphere/+bug/1614831/+subscriptions
[Yahoo-eng-team] [Bug 1614831] [NEW] gate-networking-vsphere-python27-ubuntu-xenial fails for networking vsphere patches
Public bug reported:

I have uploaded test case patches for ovsvapp in the networking-vsphere
repo. They fail gate-networking-vsphere-python27-ubuntu-xenial. For
reference, my review is https://review.openstack.org/#/c/357266/

** Affects: networking-vsphere
   Importance: Undecided
       Status: New

** Affects: neutron
   Importance: Undecided
       Status: New

** Attachment added: "error.txt"
   https://bugs.launchpad.net/bugs/1614831/+attachment/4723858/+files/error.txt

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-vsphere/+bug/1614831/+subscriptions
[Yahoo-eng-team] [Bug 1614827] [NEW] "NoSuchOptError: no such option disable_ssl_certificate_validation in group [identity]" in vpnaas tempest plugin
Public bug reported:

The vpnaas tempest plugin is broken after
I296f1080ce89f0cdceae1c476866b215393b2605.

e.g. http://logs.openstack.org/52/357552/1/check/gate-tempest-dsvm-networking-midonet-v2/c7b538f/console.html

2016-08-19 01:20:14.212494 | Failed to import test module: neutron_vpnaas.tests.tempest.api.test_vpnaas
2016-08-19 01:20:14.212517 | Traceback (most recent call last):
2016-08-19 01:20:14.212555 |   File "/usr/local/lib/python2.7/dist-packages/unittest2/loader.py", line 456, in _find_test_path
2016-08-19 01:20:14.212578 |     module = self._get_module_from_name(name)
2016-08-19 01:20:14.212615 |   File "/usr/local/lib/python2.7/dist-packages/unittest2/loader.py", line 395, in _get_module_from_name
2016-08-19 01:20:14.212631 |     __import__(name)
2016-08-19 01:20:14.212685 |   File "/opt/stack/new/neutron-vpnaas/neutron_vpnaas/tests/tempest/api/test_vpnaas.py", line 24, in <module>
2016-08-19 01:20:14.212715 |     from neutron_vpnaas.tests.tempest.api import base
2016-08-19 01:20:14.212757 |   File "/opt/stack/new/neutron-vpnaas/neutron_vpnaas/tests/tempest/api/base.py", line 24, in <module>
2016-08-19 01:20:14.212783 |     from neutron_vpnaas.tests.tempest.api import clients
2016-08-19 01:20:14.212821 |   File "/opt/stack/new/neutron-vpnaas/neutron_vpnaas/tests/tempest/api/clients.py", line 63, in <module>
2016-08-19 01:20:14.212841 |     class Manager(manager.Manager):
2016-08-19 01:20:14.212878 |   File "/opt/stack/new/neutron-vpnaas/neutron_vpnaas/tests/tempest/api/clients.py", line 71, in Manager
2016-08-19 01:20:14.212902 |     CONF.identity.disable_ssl_certificate_validation,
2016-08-19 01:20:14.212945 |   File "/opt/stack/new/tempest/.tox/all-plugin/local/lib/python2.7/site-packages/oslo_config/cfg.py", line 3057, in __getattr__
2016-08-19 01:20:14.212968 |     return self._conf._get(name, self._group)
2016-08-19 01:20:14.213009 |   File "/opt/stack/new/tempest/.tox/all-plugin/local/lib/python2.7/site-packages/oslo_config/cfg.py", line 2668, in _get
2016-08-19 01:20:14.213032 |     value = self._do_get(name, group, namespace)
2016-08-19 01:20:14.213074 |   File "/opt/stack/new/tempest/.tox/all-plugin/local/lib/python2.7/site-packages/oslo_config/cfg.py", line 2685, in _do_get
2016-08-19 01:20:14.213095 |     info = self._get_opt_info(name, group)
2016-08-19 01:20:14.213139 |   File "/opt/stack/new/tempest/.tox/all-plugin/local/lib/python2.7/site-packages/oslo_config/cfg.py", line 2824, in _get_opt_info
2016-08-19 01:20:14.213161 |     raise NoSuchOptError(opt_name, group)
2016-08-19 01:20:14.213193 | NoSuchOptError: no such option disable_ssl_certificate_validation in group [identity]

** Affects: neutron
   Importance: Undecided
     Assignee: YAMAMOTO Takashi (yamamoto)
       Status: In Progress

** Tags: gate-failure vpnaas

** Tags added: gate-failure vpnaas

** Changed in: neutron
     Assignee: (unassigned) => YAMAMOTO Takashi (yamamoto)

https://bugs.launchpad.net/bugs/1614827

Status in neutron:
  In Progress

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1614827/+subscriptions
[Yahoo-eng-team] [Bug 1614822] [NEW] api-ref: security-group-rules api missing request parameters table.
Public bug reported:

The security-group-rule API is missing the request parameters table in
http://developer.openstack.org/api-ref/networking/v2/index.html#security-group-rules-security-group-rules

** Affects: neutron
   Importance: Undecided
       Status: New

https://bugs.launchpad.net/bugs/1614822

Status in neutron:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1614822/+subscriptions
[Yahoo-eng-team] [Bug 1614817] [NEW] api-ref: Floating IPs has wrong "in" values for some parameters
Public bug reported:

Current Floating IPs doc:
http://developer.openstack.org/api-ref/networking/v2/?expanded=list-floating-ips-detail,create-floating-ip-detail,show-floating-ip-details-detail,update-floating-ip-detail#floating-ips-floatingips

Some parameters, including router_id and port_id, have the wrong value
in the "in" column. For example:

+ The "in" of the "router_id" parameter is 'path'. It should be changed
  to 'body'.
+ The "in" of the "port_id" parameter is 'path'. It should be changed
  to 'body'.

** Affects: neutron
   Importance: Undecided
     Assignee: Nam (namnh)
       Status: New

** Changed in: neutron
     Assignee: (unassigned) => Nam (namnh)

https://bugs.launchpad.net/bugs/1614817

Status in neutron:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1614817/+subscriptions
[Yahoo-eng-team] [Bug 1614815] [NEW] api-ref: security-group API shows wrong description of security_group_id
Public bug reported:

The security-groups API shows the wrong description of
security_group_id and the wrong information in the 'in' attribute. [1]

[1] http://developer.openstack.org/api-ref/networking/v2/index.html?expanded=show-security-group-detail,update-security-group-detail,delete-security-group-detail#show-security-group

** Affects: neutron
   Importance: Undecided
       Status: New

https://bugs.launchpad.net/bugs/1614815

Status in neutron:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1614815/+subscriptions
[Yahoo-eng-team] [Bug 1501860] [Review update] master
Review in progress for https://review.opencontrail.org/23433
Submitter: Vinay Vithal Mahuli (vmah...@juniper.net)

** Also affects: juniperopenstack/trunk
   Importance: Undecided
       Status: New

** Also affects: juniperopenstack/trunk
   Importance: Low
     Assignee: Rajat Vig (rajatv)
       Status: In Progress

https://bugs.launchpad.net/bugs/1501860

Title:
  OpenStack Services functions should have their names with the first
  letter lowercased

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Juniper Openstack:
  In Progress
Status in Juniper Openstack trunk series:
  In Progress

Bug description:
  OpenStack Services functions are not constructor functions and
  therefore should not have names with the first letter capitalized;
  they should be lower-cased.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1501860/+subscriptions
[Yahoo-eng-team] [Bug 1581667] Re: AttributeError - 'NoneType' object has no attribute 'lower'
Reviewed: https://review.openstack.org/316847
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=da2e93578f1f2b364ddb4d66e9311d225ab629d5
Submitter: Jenkins
Branch: master

commit da2e93578f1f2b364ddb4d66e9311d225ab629d5
Author: Yosef Hoffman
Date:   Mon May 16 09:02:17 2016 -0400

    Fix AttributeError in context_selection.py

    When creating a service with region = None:

      File "/openstack_dashboard/templatetags/context_selection.py", line 100, in
      AttributeError: 'NoneType' object has no attribute 'lower'

    The problem code:

      sorted(request.user.available_services_regions,
             key=lambda x: x.lower())

    To fix this, if region is NoneType, use '' as the key to put it
    first in the list.

    Change-Id: Ide8ea634bf1933ef263d3b27204c7711d167
    Closes-Bug: #1581667

** Changed in: horizon
       Status: In Progress => Fix Released

https://bugs.launchpad.net/bugs/1581667

Title:
  AttributeError - 'NoneType' object has no attribute 'lower'

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When creating a service with region = "None", listing services fails
  with an error in Horizon.

  How to test:
  Create any service with region = None. Example:

    | ID                               | Region    | Service Name | Service Type   | Enabled | Interface | URL                                      |
    | 01089a905fec48ef957e77281c9aec92 | RegionOne | nova_legacy  | compute_legacy | True    | public    | http://10.0.99.85:8774/v2/$(project_id)s |
    | 1238174be563469fa1fd34fec099bcf8 | RegionOne | cinder       | volume         | True    | admin     | http://10.0.99.85:8776/v1/$(project_id)s |
    | 22303c8d5b0340cda534f0fd33fd2ac3 | None      | nova         | image          | True    | public    | http://10.0.99.85:500/v1/$(project_id)s  |

  Then go to http://localhost/dashboard/admin/info/ and Horizon errors:

    File "/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/templatetags/context_selection.py", line 100, in
    AttributeError: 'NoneType' object has no attribute 'lower'
    2016-05-11 16:59:02.554680     key=lambda x: x.lower()),

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1581667/+subscriptions
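The failing expression and a None-safe variant can be sketched in plain Python. This is an illustrative sketch, not the committed patch; the region names are made up, but the approach (map None to '' so it sorts first) matches the fix described in the commit message above.

```python
# Region lists coming back from the service catalog may contain None.
regions = ["RegionTwo", None, "RegionOne"]

# The original code raised AttributeError on the None entry:
#   sorted(regions, key=lambda x: x.lower())

# Mapping None to '' keeps the sort working and puts None first.
sorted_regions = sorted(regions, key=lambda x: (x or "").lower())
print(sorted_regions)  # → [None, 'RegionOne', 'RegionTwo']
```

The same one-line key change fixes both the admin info panel and any other consumer of `available_services_regions` that assumes every region name is a string.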
[Yahoo-eng-team] [Bug 1614801] [NEW] Set wrong serial when swapping volumes
Public bug reported:

In the current master branch:
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L4850

It sets the serial of the new volume to the volume_id of the old
volume.

** Affects: nova
   Importance: Undecided
     Assignee: Lisa Li (lisali)
       Status: In Progress

https://bugs.launchpad.net/bugs/1614801

Status in OpenStack Compute (nova):
  In Progress

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1614801/+subscriptions
[Yahoo-eng-team] [Bug 1614799] [NEW] NoSuchOptError: in group [identity] failure for API job
Public bug reported:

An error instance:
http://logs.openstack.org/34/356134/4/check/gate-neutron-dsvm-api/07351c0/console.html

The potential culprit:
https://review.openstack.org/#/c/349749/

** Affects: neutron
   Importance: Critical
     Assignee: Armando Migliaccio (armando-migliaccio)
       Status: In Progress

** Tags: gate-failure

** Changed in: neutron
       Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => Critical

** Tags added: gte
** Tags removed: gte
** Tags added: gate-failure

https://bugs.launchpad.net/bugs/1614799

Status in neutron:
  In Progress

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1614799/+subscriptions
[Yahoo-eng-team] [Bug 1614792] [NEW] Strongswan: symbol for auth algorithm sha256 is not sha2_256
Public bug reported:

The symbol for the auth algorithm sha256 differs between
openswan/libreswan and strongswan: for openswan/libreswan it should be
sha2_256, and for strongswan it should be sha256.

** Affects: neutron
   Importance: Undecided
     Assignee: Yi Jing Zhu (nick-zhuyj)
       Status: New

** Tags: vpnaas

** Changed in: neutron
     Assignee: (unassigned) => Yi Jing Zhu (nick-zhuyj)

https://bugs.launchpad.net/bugs/1614792

Status in neutron:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1614792/+subscriptions
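One plausible shape for such a fix is a per-driver translation table for algorithm names. The sketch below is purely illustrative (the map, function name, and fallback behavior are assumptions, not the actual patch); it only encodes the one fact stated in the report, that sha256 maps to sha2_256 for openswan/libreswan and stays sha256 for strongswan.

```python
# Hypothetical per-driver translation of Neutron's auth algorithm name
# to the symbol each IPsec implementation expects.
AUTH_ALGORITHM_MAP = {
    "openswan": {"sha256": "sha2_256"},
    "libreswan": {"sha256": "sha2_256"},
    "strongswan": {"sha256": "sha256"},
}

def driver_auth_symbol(driver, algorithm):
    """Return the driver-specific symbol, falling back to the raw name."""
    return AUTH_ALGORITHM_MAP.get(driver, {}).get(algorithm, algorithm)

print(driver_auth_symbol("openswan", "sha256"))    # → sha2_256
print(driver_auth_symbol("strongswan", "sha256"))  # → sha256
```

Falling back to the raw name keeps algorithms that need no translation (e.g. sha1) working without an explicit entry per driver.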
[Yahoo-eng-team] [Bug 1613199] Re: nova does not accept ssh certificate authorities (regression)
This seems to me to be more of a feature request than an actual bug. If
you'd like Nova to support prepended comments in public keys, feel free
to propose it per our blueprint process:
https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
       Status: Incomplete => Invalid

** Changed in: nova
     Assignee: Augustina Ragwitz (auggy) => (unassigned)

https://bugs.launchpad.net/bugs/1613199

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Prior to commit 3f3f9bf22efd2fb209d2a2fe0246f4857cd2d21a,
  generate_fingerprint in nova/crypto.py used `ssh-keygen -q -l -f` to
  generate fingerprints. `ssh-keygen -qlf` is quite happy to process
  public key matter of the form:

    cert-authority ssh-rsa B3NzaC1yc2EDAQABAAABAQCfHlWGrnpirvqvUTySnoQK6ze5oIXz7cYIT+XCBeBCahlK05O38g0erBGrNWFozZwbIXnysVCibaUJqtH0JrYqmcr2NnYA0PoiTeranvaJI7pQsga1gBxfK/D4UItw5yI6V7w9efMT0zpIP8WEubQz6GFtkyiNVgFCHj3+VhLs3RslvYzb35SFcLXEDsGVQM5NdWBUgRaNRqpTPvuMcxTyPvy32wW72kwaYRQioDJFcE2WJ240M2oSsx+dhTWvI8sW1sEUI1qIDfyBPsOgsLofuSpt4ZNgJqBUTp/hW85wVpNzud6A4YJWHpZXSDMtUMYE9QL+x2fw/b26yck9ZPE/ hines@tun

  The issue is the string "cert-authority" at the beginning of the
  public key matter. This form can appear in authorized_keys to enable
  multiple users on a project to have individual keys, certified by a
  central certifying authority, providing access to a single
  administrative account.

  The use of ssh certificates is documented here:
  https://www.digitalocean.com/community/tutorials/how-to-create-an-ssh-ca-to-validate-hosts-and-clients-with-ubuntu

  Steps to reproduce:
  1) Place the key string above (cert-authority ssh-rsa ... hines@tun)
     in a file
  2) Run nova keypair-add --pub-key

  Expected result: `nova keypair-list` should now list the key

  Actual result:
    ERROR (BadRequest): Keypair data is invalid: failed to generate fingerprint (HTTP 400)

  Environment: OpenStack Liberty release (the bug is not present on
  Kilo)

  Logs: Sorry, not available (I'm only a user, not an admin)

  Suggested fix, either:
  1) Revert generate_fingerprint to exec'ing ssh-keygen, or
  2) generate_fingerprint should strip the string "cert-authority" from
     the beginning of the public key matter (if present) before
     attempting to generate the fingerprint.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1613199/+subscriptions
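The reporter's suggested fix #2 can be sketched in plain Python. This is not Nova's actual generate_fingerprint; the function names and the MD5 colon-pair fingerprint format are assumptions for illustration, and the key body below is synthetic, not real key material.

```python
import base64
import hashlib

# Options that may legitimately precede the key type in an
# authorized_keys-style entry; "cert-authority" is the one from the bug.
_KEY_OPTIONS = {"cert-authority"}

def normalize_public_key(pubkey):
    """Drop a leading option such as 'cert-authority' (suggested fix #2)."""
    parts = pubkey.strip().split(None, 1)
    if parts and parts[0] in _KEY_OPTIONS and len(parts) > 1:
        return parts[1]
    return pubkey.strip()

def fingerprint(pubkey):
    """Colon-separated MD5 of the base64 key body, ssh-keygen style."""
    body = normalize_public_key(pubkey).split()[1]
    digest = hashlib.md5(base64.b64decode(body)).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Synthetic key body for illustration only.
body = base64.b64encode(b"not a real key").decode()
print(fingerprint("cert-authority ssh-rsa %s hines@tun" % body))
```

With this normalization, a `cert-authority` entry fingerprints identically to the same key without the prefix, which is the behavior the old `ssh-keygen -qlf` path gave for free.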
[Yahoo-eng-team] [Bug 1614766] [NEW] ovsdb managers shouldn't be erased/overwritten
Public bug reported:

A Neutron (OVS or DHCP) agent using the native ovsdb library sets its
own OVSDB managers. In some use cases the cloud admin sets OVSDB
managers too, e.g. for monitoring/debugging purposes or an SDN
controller. Neutron agents shouldn't erase those settings.

** Affects: neutron
   Importance: Undecided
     Assignee: Isaku Yamahata (yamahata)
       Status: In Progress

https://bugs.launchpad.net/bugs/1614766

Status in neutron:
  In Progress

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1614766/+subscriptions
[Yahoo-eng-team] [Bug 1614728] [NEW] qos: rule list in policy is too difficult to use
Public bug reported:

The format of a qos policy should be something like:

  {"policy": {"rules": {"bandwidth_limit": {...}, "dscp_marking": {...}}}}

The patch https://review.openstack.org/#/c/207043/ changed the format
of a qos policy to something like this:

  {"rules": [{"type": "bandwidth_limit", ...}, {"type": "dscp_marking", ...}]}

with the rationale that there should be zero-or-one rule per rule-type.
But this format has the following issues:

- There is no guarantee that there is at most one rule per rule-type;
  to verify that, the list has to be scanned.
- When an agent (or backend) programs a switch for a specific
  rule-type, it has to scan the list to find the rule of the given
  rule-type.
- The format is unfriendly to parsers and external tools (in my case,
  Java JAXB). The actual variable type of a rule needs to be determined
  by reading ahead to "type"; then the rule needs to be parsed again.

So the following format is better:

  {"rules": {"bandwidth_limit": {...}, "dscp_marking": {...}}}

- It is guaranteed that there is zero-or-one rule per rule-type.
- It is easy to find the rule of a given rule-type.
- A parser can easily determine the variable type.

** Affects: neutron
   Importance: Undecided
       Status: New

https://bugs.launchpad.net/bugs/1614728

Status in neutron:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1614728/+subscriptions
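The transformation the report argues for can be sketched as a small converter from the list-of-rules wire format to a dict keyed by rule type. This is an illustrative sketch, not Neutron code; the rule field names other than "type" are invented for the example.

```python
def rules_by_type(rules):
    """Convert a list of {"type": ..., ...} rules into a dict keyed by
    type, enforcing the zero-or-one-rule-per-type invariant."""
    result = {}
    for rule in rules:
        rule = dict(rule)              # don't mutate the caller's data
        rule_type = rule.pop("type")
        if rule_type in result:
            raise ValueError("duplicate rule for type %r" % rule_type)
        result[rule_type] = rule
    return result

policy = {"rules": [{"type": "bandwidth_limit", "max_kbps": 1000},
                    {"type": "dscp_marking", "dscp_mark": 26}]}
print(rules_by_type(policy["rules"]))
# → {'bandwidth_limit': {'max_kbps': 1000}, 'dscp_marking': {'dscp_mark': 26}}
```

With the dict form, an agent looks up `rules["dscp_marking"]` directly instead of scanning the list, and the duplicate check the report asks for becomes a single membership test.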
[Yahoo-eng-team] [Bug 1611321] Re: HyperV: shelve vm deadlock
Reviewed: https://review.openstack.org/352837
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=c7af24ca8279226adc5cd8fa0984c6fd79e26d67
Submitter: Jenkins
Branch: master

commit c7af24ca8279226adc5cd8fa0984c6fd79e26d67
Author: Lucian Petrut
Date:   Tue Aug 9 13:21:48 2016 +0300

    HyperV: remove instance snapshot lock

    At the moment, the instance snapshot operation is synchronized using
    the instance uuid. This was added some time ago, as the instance
    destroy operation was failing when an instance snapshot was in
    progress.

    This is now causing a deadlock, as a similar lock was recently
    introduced in the manager for the shelve operation by this change:
    Id36b3b9516d72d28519c18c38d98b646b47d288d

    We can safely remove the lock from the HyperV driver as we now stop
    pending jobs when destroying instances.

    Closes-Bug: #1611321
    Change-Id: I1c2ca0d24c195ebaba442bbb7091dcecc0a7e781

** Changed in: nova
       Status: In Progress => Fix Released

https://bugs.launchpad.net/bugs/1611321

Status in compute-hyperv:
  New
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1611321/+subscriptions
[Yahoo-eng-team] [Bug 1614000] Re: Orchestration Resource Types names restriction is incorrect
Reviewed: https://review.openstack.org/356625
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=a86c7d9be7b7399b117b1289d6548f50b657efe6
Submitter: Jenkins
Branch: master

commit a86c7d9be7b7399b117b1289d6548f50b657efe6
Author: Tatiana Ovchinnikova
Date:   Wed Aug 17 20:21:19 2016 +0300

    Remove Orchestration Resource Types names restriction

    The additional columns "Implementation", "Component" and "Resource"
    are representative for a limited resource type group only. A
    resource type name can have fewer or more than three words, and
    Heat even allows specifying a URL as a resource type. Horizon
    should not use these columns at all: the "Type" column and filter
    will do just the same trick.

    Change-Id: I38a671490b90122e2d75e6aa11d3de0fa12817c9
    Closes-Bug: #1614000

** Changed in: horizon
       Status: In Progress => Fix Released

https://bugs.launchpad.net/bugs/1614000

Status in OpenStack Dashboard (Horizon):
  Fix Released

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1614000/+subscriptions
[Yahoo-eng-team] [Bug 1614133] Re: limits API isn't filtering security groups and floating IPs from 2.36 response
Reviewed: https://review.openstack.org/356694 Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=4461cdf4c464ecf5c8d86544453f30de1fd4 Submitter: Jenkins Branch:master commit 4461cdf4c464ecf5c8d86544453f30de1fd4 Author: Sean Dague Date: Wed Aug 17 16:20:45 2016 -0400 don't report network limits after 2.35 We correctly stopped reporting the limits for things like security groups and floating ips after mv 2.35. We completely missed that limits are modified by the used_limits extension, and hilarity ensued. We were reporting no maxSecurityGroups over the wire, but we were reporting totalSecurityGroups through the magic of extensions. Change-Id: I85b2b41d919ed6987d4c9288905ccce49c10c81f Closes-Bug: #1614133 ** Changed in: nova Status: Triaged => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1614133 Title: limits API isn't filtering security groups and floating IPs from 2.36 response Status in OpenStack Compute (nova): Fix Released Bug description: The limits API isn't filtering out network resources like security groups and floating IPs from the 2.36 response (like it does for quota sets): https://github.com/openstack/nova/blob/955c921b33103e6e03a665f1e7bf705f5c661c68/nova/api/openstack/compute/used_limits.py#L44 I found this when testing some changes in novaclient for 2.36. 
DEBUG (session:337) REQ: curl -g -i -X GET http://9.5.125.222:8774/v2.1/limits -H "OpenStack-API-Version: compute 2.36" -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-OpenStack-Nova-API-Version: 2.36" -H "X-Auth-Token: {SHA1}eeec7f2d075d62c93954f4f0619d78ac07017379" DEBUG (connectionpool:401) "GET /v2.1/limits HTTP/1.1" 200 430 DEBUG (session:366) RESP: [200] Content-Length: 430 Content-Type: application/json Openstack-Api-Version: compute 2.36 X-Openstack-Nova-Api-Version: 2.36 Vary: OpenStack-API-Version, X-OpenStack-Nova-API-Version X-Compute-Request-Id: req-189b16e7-fabb-4f5a-a0ab-e9c90ede95a7 Date: Wed, 17 Aug 2016 14:51:45 GMT Connection: keep-alive RESP BODY: {"limits": {"rate": [], "absolute": {"maxServerMeta": 128, "maxPersonality": 5, "totalServerGroupsUsed": 0, "maxImageMeta": 128, "maxPersonalitySize": 10240, "maxTotalKeypairs": 100, "totalCoresUsed": 0, "maxServerGroups": 10, "totalRAMUsed": 0, "totalInstancesUsed": 0, "totalFloatingIpsUsed": 0, "maxTotalCores": 20, "maxServerGroupMembers": 10, "totalSecurityGroupsUsed": 0, "maxTotalInstances": 10, "maxTotalRAMSize": 51200}}} To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1614133/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
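The fix referenced above removes network-related entries from both the max* limits and the total*Used counters once the requested microversion passes 2.35. A minimal sketch of that filtering idea (key names are taken from the response body above; the helper and the key list are illustrative, not nova's actual code):

```python
# Hypothetical helper, not nova's implementation: drop limit keys that
# proxy to neutron once the client asks for microversion >= 2.36.
FILTERED_LIMITS_2_36 = (
    'maxSecurityGroups', 'maxSecurityGroupRules', 'maxTotalFloatingIps',
    'totalSecurityGroupsUsed', 'totalFloatingIpsUsed',
)

def filter_network_limits(absolute_limits, microversion):
    """Return a copy of the absolute limits without network proxy entries."""
    if microversion < (2, 36):
        return dict(absolute_limits)
    return {k: v for k, v in absolute_limits.items()
            if k not in FILTERED_LIMITS_2_36}
```

The bug was that the filtering was applied to the max* values but not to the used-limits extension's total*Used values, so a 2.36 response still leaked totalSecurityGroupsUsed and totalFloatingIpsUsed, as visible in the RESP BODY above.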
[Yahoo-eng-team] [Bug 1613542] Re: tempest.conf doesn't contain $project in [service_available] section
Reviewed: https://review.openstack.org/355544 Committed: https://git.openstack.org/cgit/openstack/aodh/commit/?id=5a3e03ca6ce3e1a8ca80af09b59606a450b53916 Submitter: Jenkins Branch:master commit 5a3e03ca6ce3e1a8ca80af09b59606a450b53916 Author: Thomas Bechtold Date: Mon Aug 15 17:52:51 2016 +0200 Fix tempest.conf generation [service_available] isn't being generated. This patch fixes it. Closes-Bug: #1613542 Change-Id: I9a30b2b77ee863053c1ddf6e813a5d93fb71caf3 ** Changed in: aodh Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1613542 Title: tempest.conf doesn't contain $project in [service_available] section Status in Aodh: Fix Released Status in Ceilometer: In Progress Status in OpenStack Identity (keystone): In Progress Status in Magnum: Fix Released Bug description: When generating the tempest conf, the tempest plugins need to register the config options. But for the [service_available] section, ceilometer (and the other mentioned projects) doesn't register any value, so it's missing in the tempest sample config. Steps to reproduce: $ tox -egenconfig $ source .tox/genconfig/bin/activate $ oslo-config-generator --config-file .tox/genconfig/lib/python2.7/site-packages/tempest/cmd/config-generator.tempest.conf --output-file tempest.conf.sample Now check the [service_available] section of tempest.conf.sample To manage notifications about this bug go to: https://bugs.launchpad.net/aodh/+bug/1613542/+subscriptions
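The underlying fix is for each project's tempest plugin to expose its [service_available] option through the list_opts hook that oslo-config-generator discovers. A rough sketch of that shape, using a namedtuple as a stand-in for oslo_config.cfg.BoolOpt so the snippet runs without oslo.config installed:

```python
from collections import namedtuple

# Stand-in for oslo_config.cfg.BoolOpt; a real tempest plugin would use
# cfg.BoolOpt from oslo.config instead.
BoolOpt = namedtuple('BoolOpt', ['name', 'default', 'help'])

service_option = BoolOpt(
    name='aodh',
    default=True,
    help='Whether or not Aodh is expected to be available')

def list_opts():
    """Hook used by oslo-config-generator to discover a plugin's options.

    Returning the option under the 'service_available' group is what
    makes it appear in the generated tempest.conf sample.
    """
    return [('service_available', [service_option])]
```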
[Yahoo-eng-team] [Bug 1614680] [NEW] In FWaaS v2 cross-tenant assignment of policies is inconsistent
Public bug reported: In the unit tests associated with the FWaaS v2 DB (neutron_fwaas/tests/unit/db/firewall/v2/test_firewall_db_v2.py), there are two that demonstrate improper handling of cross-tenant firewall policy assignment. First, the logic tested in test_update_firewall_rule_associated_with_other_tenant_policy succeeds, but it should not. Second, the logic tested in test_update_firewall_group_with_public_fwp fails, but it should succeed. ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1614680 Title: In FWaaS v2 cross-tenant assignment of policies is inconsistent Status in neutron: New Bug description: In the unit tests associated with the FWaaS v2 DB (neutron_fwaas/tests/unit/db/firewall/v2/test_firewall_db_v2.py), there are two that demonstrate improper handling of cross-tenant firewall policy assignment. First, the logic tested in test_update_firewall_rule_associated_with_other_tenant_policy succeeds, but it should not. Second, the logic tested in test_update_firewall_group_with_public_fwp fails, but it should succeed. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1614680/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1614673] [NEW] [FWaaS] Rule position testing is insufficient
Public bug reported: The FWaaS unit tests around rule position nesting are not working with FWaaS v2, and need to be fixed up. The specific tests that need to be fixed are: test_show_firewall_rule_with_fw_policy_associated test_delete_firewall_policy_with_rule test_update_firewall_policy_reorder_rules in neutron_fwaas/tests/unit/db/firewall/v2/test_firewall_db_v2.py. ** Affects: neutron Importance: Wishlist Status: Confirmed ** Tags: fwaas unittest ** Changed in: neutron Importance: Undecided => Wishlist ** Tags added: fwaas unittest ** Changed in: neutron Status: New => Confirmed -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1614673 Title: [FWaaS] Rule position testing is insufficient Status in neutron: Confirmed Bug description: The FWaaS unit tests around rule position nesting are not working with FWaaS v2, and need to be fixed up. The specific tests that need to be fixed are: test_show_firewall_rule_with_fw_policy_associated test_delete_firewall_policy_with_rule test_update_firewall_policy_reorder_rules in neutron_fwaas/tests/unit/db/firewall/v2/test_firewall_db_v2.py. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1614673/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1592169] Re: cached tokens break Liberty to Mitaka upgrade
Reviewed: https://review.openstack.org/347543 Committed: https://git.openstack.org/cgit/openstack/keystone/commit/?id=bc99dc76775d22eca01b818f37de35a76ece9d72 Submitter: Jenkins Branch:master commit bc99dc76775d22eca01b818f37de35a76ece9d72 Author: Colleen Murphy Date: Tue Jul 26 13:02:42 2016 -0700 Add dummy domain_id column to cached role When token caching is turned on, upgrading from stable/liberty to stable/mitaka or master causes tokens to fail to be issued for the time-to-live of the cache. This is because as part of the token issuance the token's role is looked up, and the cached version of the role immediately after upgrade does not have a domain_id field, even though that column was successfully added to the role database. This patch hacks around that by artificially adding a null domain_id value to the role reference. This must be done in the manager, as opposed to the driver, because it is the manager that is caching the value and so modifying the value returned by the driver has no effect. Change-Id: I55c791486f2a26ae995f693370b016895176a16f Closes-bug: #1592169 ** Changed in: keystone Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1592169 Title: cached tokens break Liberty to Mitaka upgrade Status in OpenStack Identity (keystone): Fix Released Status in OpenStack Identity (keystone) mitaka series: In Progress Status in OpenStack Identity (keystone) newton series: Fix Released Bug description: Sequence of events: - Fernet tokens (didn't test with UUID) - Running cluster with Liberty from about 6 weeks ago, so close to stable - Upgrade Keystone to Mitaka (automated) - Tokens fail to issue for about 5 minutes; after this time, all the cached tokens are gone - Everything works after that. See also work-around at bottom. Annotated logs: Token call works to this point.
db_sync is running here, but code is still Liberty, DB now Mitaka: An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-04dcb954-ae4e-41fa-b235-aa0b05ac8b44) An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-d27eee3a-723a-412e-a7b0-37ffd511c221) An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-265b6261-bcac-44f1-a806-8696b455ff5a) Puppet bounces Keystone, the restarted code is Mitaka: Discovering versions from the identity service failed when creating the password plugin. Attempting to determine version from URL. Tokens fail to generate here due to the caching format changing. This will continue for about 5 minutes or so, I suspect it depends on whats in the cache and timeouts. An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-8b835f67-4a21-42d3-9030-b4dbfd820238) An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-b92bcd56-87da-4977-b82e-c717c7120f4f) An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-a787163f-20c1-493f-9b34-82708dea4191) An unexpected error prevented the server from fulfilling your request. 
(HTTP 500) (Request-ID: req-e2ab7bf1-3483-438e-8425-06e5cfbf2e37) Keystone log is full of this: 2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi Traceback (most recent call last): 2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi File "/venv/local/lib/python2.7/site-packages/keystone/common/wsgi.py", line 249, in __call__ 2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi result = method(context, **params) 2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi File "/venv/local/lib/python2.7/site-packages/oslo_log/versionutils.py", line 165, in wrapped 2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi return func_or_cls(*args, **kwargs) 2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi File "/venv/local/lib/python2.7/site-packages/keystone/token/controllers.py", line 100, in authenticate 2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi context, auth) 2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi File "/venv/local/lib/python2.7/site-packages/keystone/token/controllers.py", line 310, in _authenticate_local 2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi user_id, tenant_id) 2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi File "/venv/local/lib/python2.7/site-packages/keystone/token/controllers.py", line 391, in _get_project_roles_and_ref 2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi user_id, tenant_id) 2016-06-13 21:37:58.947 35 ERROR keystone.common.wsgi File "/venv/local/lib/python2.7/site-packages/keyston
[Yahoo-eng-team] [Bug 1607039] Re: KVS _update_user_token_list can be more efficient
Thanks Billy! I'll mark this as WONTFIX since it doesn't align with project plans. ** Changed in: keystone Status: Confirmed => Won't Fix ** Changed in: keystone Assignee: Billy Olsen (billy-olsen) => (unassigned) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1607039 Title: KVS _update_user_token_list can be more efficient Status in OpenStack Identity (keystone): Won't Fix Bug description: Maintaining the user token list and the revocation list in the memcached persistence backend (kvs) is inefficient for larger amounts of tokens due to the use of a linear algorithm for token list maintenance. Since the list is unordered, each token within the list must be checked first to ensure whether it has expired or not, secondly to determine if it has been revoked or not. By changing to an ordered list and using a binary search, expired tokens can be found with less computational overhead. The current algorithm means that the insertion of a new token into the list is O(n) since token expiration validity is done when the list is updated. By using an ordered list, the insertion and validation of the expiration can be reduced to O(log n). To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1607039/+subscriptions
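The ordered-list idea proposed in the bug can be sketched with the stdlib bisect module: keep the per-user token list sorted by expiry so that inserting stays O(log n) and all expired entries can be dropped with a single slice instead of a full scan. The data layout and function names below are hypothetical, not keystone's actual KVS code:

```python
import bisect

# Tokens are (expires_at, token_id) tuples kept sorted by expiry.

def insert_token(token_list, expires_at, token_id):
    """Insert while keeping the list sorted by expiry (O(log n) search)."""
    bisect.insort(token_list, (expires_at, token_id))

def prune_expired(token_list, now):
    """Drop every token with expires_at <= now in one slice.

    The sentinel second element sorts after any string token_id, so
    tokens expiring exactly at `now` are included in the cut.
    """
    cut = bisect.bisect_right(token_list, (now, chr(0x10FFFF)))
    del token_list[:cut]
```

This is the O(log n) insertion/lookup the report asks for; actually adopting it would also require the revocation check to be restructured, which is part of why it was marked Won't Fix against the deprecated KVS backend.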
[Yahoo-eng-team] [Bug 1614630] [NEW] hz-field-directive If a property is null or undefined and does not have filters transforming, exception is logged
Public bug reported: hz-field-directive was developed using hz images panel where every single property had at least the no value filter applied. However, when using it with properties that don't have the no value filter applied and there is no value, we'll see the following in console.log angular.js:12783 TypeError: Cannot read property 'then' of undefined at link (http://127.0.0.1:8005/static/framework/widgets/property/hz-field.directive.js:115:17) at http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:9073:44 at invokeLinkFn (http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:9079:9) at nodeLinkFn (http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:8566:11) at compositeLinkFn (http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:7965:13) at compositeLinkFn (http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:7968:13) at nodeLinkFn (http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:8561:24) at delayedNodeLinkFn (http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:8828:11) at compositeLinkFn (http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:7965:13) at compositeLinkFn (http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:7968:13) (anonymous function) @ angular.js:12783(anonymous function) @ angular.js:9535invokeLinkFn @ angular.js:9081nodeLinkFn @ angular.js:8566compositeLinkFn @ angular.js:7965compositeLinkFn @ angular.js:7968nodeLinkFn @ angular.js:8561delayedNodeLinkFn @ angular.js:8828compositeLinkFn @ angular.js:7965compositeLinkFn @ angular.js:7968publicLinkFn @ angular.js:7845boundTranscludeFn @ angular.js:7983controllersBoundTransclude @ angular.js:8593ngRepeatAction @ angular.js:28080$watchCollectionAction @ angular.js:16066$digest @ angular.js:16203$apply @ angular.js:16467done @ angular.js:10852completeRequest @ angular.js:11050requestLoaded @ angular.js:10991 4angular.js:12783 TypeError: Cannot read property 'then' of null at link 
(http://127.0.0.1:8005/static/framework/widgets/property/hz-field.directive.js:115:17) at http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:9073:44 at invokeLinkFn (http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:9079:9) at nodeLinkFn (http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:8566:11) at compositeLinkFn (http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:7965:13) at compositeLinkFn (http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:7968:13) at nodeLinkFn (http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:8561:24) at delayedNodeLinkFn (http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:8828:11) at compositeLinkFn (http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:7965:13) at compositeLinkFn (http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:7968:13) ** Affects: horizon Importance: Undecided Assignee: Travis Tripp (travis-tripp) Status: In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1614630 Title: hz-field-directive If a property is null or undefined and does not have filters transforming, exception is logged Status in OpenStack Dashboard (Horizon): In Progress Bug description: hz-field-directive was developed using hz images panel where every single property had at least the no value filter applied. 
However, when using it with properties that don't have the no value filter applied and there is no value, we'll see the following in console.log angular.js:12783 TypeError: Cannot read property 'then' of undefined at link (http://127.0.0.1:8005/static/framework/widgets/property/hz-field.directive.js:115:17) at http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:9073:44 at invokeLinkFn (http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:9079:9) at nodeLinkFn (http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:8566:11) at compositeLinkFn (http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:7965:13) at compositeLinkFn (http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:7968:13) at nodeLinkFn (http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:8561:24) at delayedNodeLinkFn (http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:8828:11) at compositeLinkFn (http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:7965:13) at compositeLinkFn (http://127.0.0.1:8005/static/horizon/lib/angular/angular.js:7968:13) (anonymous function) @ angular.js:12783(anonymous function) @ angular.js:9535invokeLinkFn @ angular.js:9081nodeLinkFn @ angular.js:8566compositeLinkFn @ angular.js:7965compositeLinkFn @ angular.js:7968nodeLinkFn @ angular.js:8561delayedNodeLinkFn @ angular.js:8828compositeLinkFn
[Yahoo-eng-team] [Bug 1613703] Re: Types of form values are lost when transferred using multipart/form-data
Reviewed: https://review.openstack.org/353987 Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=f85d2fdbb75618e13d424649aa16231bd9c12ebc Submitter: Jenkins Branch:master commit f85d2fdbb75618e13d424649aa16231bd9c12ebc Author: Timur Sufiev Date: Thu Aug 11 13:40:36 2016 +0300 Fix the loss of JSON types when using multipart/form-data To pass a binary blob in a POST request, the browser sets the header 'Content-Type: multipart/form-data', which in turn causes all form fields' values to be passed as strings. Circumvent this by storing original field values as a JSON string on the client-side and decoding it on the server-side. As a result the setting HORIZON_IMAGES_UPLOAD_MODE = 'legacy' will start working together with Glance V2. Closes-Bug: #1613703 Change-Id: I53a8fbba15e4c3c6c17d6ef1ffe701634efda149 ** Changed in: horizon Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1613703 Title: Types of form values are lost when transferred using multipart/form-data Status in OpenStack Dashboard (Horizon): Fix Released Bug description: Django REST wrappers defined on Horizon's server-side to receive data from Angular modal forms rely on field values being transferred along with their type using JSON format, i.e. number widgets produce '{number_field: 42}', boolean fields produce '{boolean_field: true}' etc. This assumption becomes wrong when a FileField is present in such a form, because to transfer it the browser has to use the 'Content-Type: multipart/form-data' header, which forces every field to pass its value as a string. This becomes a real problem as soon as the Glance V2 API is fully supported by Horizon, since Glance V2 requires that image property types obey the types defined in the schema.
To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1613703/+subscriptions
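The server-side half of the fix described above (client JSON-encodes every non-file field; the server decodes each string back into a typed value) can be sketched like this. The helper is hypothetical, not Horizon's actual code:

```python
import json

def decode_form_data(form):
    """Restore JSON-typed values from a multipart/form-data POST.

    Each non-file field is expected to arrive as a JSON-encoded string
    (e.g. '42', 'true', '"name"'); anything that fails to parse is kept
    as the raw string it came in as.
    """
    decoded = {}
    for key, raw in form.items():
        try:
            decoded[key] = json.loads(raw)
        except (TypeError, ValueError):
            decoded[key] = raw  # not JSON-wrapped; keep as plain string
    return decoded
```

For example, the string values '42' and 'true' come back out as the integer 42 and the boolean True, which is what Glance V2's schema validation requires.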
[Yahoo-eng-team] [Bug 1614594] [NEW] neutron-lib public APIs missing docstring
Public bug reported: As per our neutron-lib guidelines [1], we'd like to have all public APIs documented via docstrings. Today we have a handful that are missing docs. This is just a tracking bug to get our existing neutron-lib public APIs docstringed. [1] https://github.com/openstack/neutron-lib/blob/master/doc/source/review-guidelines.rst ** Affects: neutron Importance: Undecided Assignee: Boden R (boden) Status: New ** Tags: lib ** Changed in: neutron Assignee: (unassigned) => Boden R (boden) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1614594 Title: neutron-lib public APIs missing docstring Status in neutron: New Bug description: As per our neutron-lib guidelines [1], we'd like to have all public APIs documented via docstrings. Today we have a handful that are missing docs. This is just a tracking bug to get our existing neutron- lib public APIs docstringed. [1] https://github.com/openstack/neutron-lib/blob/master/doc/source/review-guidelines.rst To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1614594/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
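For reference, the docstring shape the review guidelines ask for on a public API might look like the following; the function and the port field it inspects are illustrative, not actual neutron-lib code:

```python
def is_port_bound(port):
    """Determine whether a port is bound to a host.

    :param port: The port dict to inspect.
    :returns: True if the port has a binding host set, False otherwise.
    """
    # An empty or missing binding host means the port is unbound.
    return bool(port.get('binding:host_id'))
```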
[Yahoo-eng-team] [Bug 1611321] Re: HyperV: shelve vm deadlock
** Also affects: nova/mitaka Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1611321 Title: HyperV: shelve vm deadlock Status in compute-hyperv: New Status in OpenStack Compute (nova): In Progress Status in OpenStack Compute (nova) mitaka series: New Bug description: At the moment, the instance snapshot operation is synchronized using the instance uuid. This was added some time ago, as the instance destroy operation was failing when an instance snapshot was in progress. This is now causing a deadlock, as a similar lock was recently introduced in the manager for the shelve operation by this change: Id36b3b9516d72d28519c18c38d98b646b47d288d We can safely remove the lock from the HyperV driver as we now stop pending jobs when destroying instances. To manage notifications about this bug go to: https://bugs.launchpad.net/compute-hyperv/+bug/1611321/+subscriptions
[Yahoo-eng-team] [Bug 1614591] [NEW] registry_client_opts are not in glance-cache.conf.sample
Public bug reported: The glance-cache service also uses the registry client but some configuration options are not available in the sample config file. Steps to reproduce: 1) tox -egenconfig 2) then registry_client_opts are not in the sample config file ** Affects: glance Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1614591 Title: registry_client_opts are not in glance-cache.conf.sample Status in Glance: New Bug description: The glance-cache service also uses the registry client but some configuration options are not available in the sample config file. Steps to reproduce: 1) tox -egenconfig 2) then registry_client_opts are not in the sample config file To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1614591/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1614578] [NEW] Image Metadata API wasn't deprecated after 2.35
Public bug reported: The image API was already deprecated after microversion 2.35, because it is a proxy API to Glance, but the image-metadata sub-resource of images was overlooked and never deprecated. ** Affects: nova Importance: Undecided Assignee: Alex Xu (xuhj) Status: New ** Changed in: nova Assignee: (unassigned) => Alex Xu (xuhj) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1614578 Title: Image Metadata API wasn't deprecated after 2.35 Status in OpenStack Compute (nova): New Bug description: The image API was already deprecated after microversion 2.35, because it is a proxy API to Glance, but the image-metadata sub-resource of images was overlooked and never deprecated. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1614578/+subscriptions
[Yahoo-eng-team] [Bug 1614564] [NEW] SRIOV not working with OvS on same host on vlan network
Public bug reported: Hi,

[Garbled ASCII topology diagram: VMs A and B attach to an OvS bridge whose external bridge (br-ex) connects to the Intel NIC's PF; VM C attaches directly to a VF of the same NIC.]

Issue: on Liberty OpenStack with the FDB patch, the setup above works with the FDB extension for an OpenStack flat network, but not for an OpenStack VLAN network.

Precondition: https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking

Scenario: Create a VLAN network. Create normal instances VM A and VM B; the OVS external bridge (br-ex) is connected to the PF. They do not get an IP from the external DHCP server, although I can see the fdb entries for the VMs. Create SRIOV instance VM C on the same VLAN network on the same compute host. VM C gets an IP from the external server and can ping the gateway via the VF. VM A and VM B work if I clear the VFs on the same compute host, or if I create the OvS VMs after booting the SRIOV VMs (i.e., after MACs and VLANs are assigned to the VFs). But VM A/VM B and VM C still cannot ping each other. A TCP dump shows the DHCP reply leaving the DHCP server, but it never hits the PF; the ARP is received at the VF and reaches the SRIOV instance. The PF receives no incoming packets from the external server or from the SRIOV instance.

Versions: OpenStack Liberty with fdb extension; ixgbe 4.3.13; ixgbevf 3.1.2. Please help me if I am doing something wrong. Regards, MK

** Affects: neutron Importance: Undecided Status: New ** Tags: ovs sriov-pci-pt
[Yahoo-eng-team] [Bug 1614561] [NEW] db.bw_usage_update can update multiple db records
Public bug reported: The current code in the db.bw_usage_update() function uses .first(), which is not correct because no order_by() is applied to the SQL query, so the returned "first record" is indeterminate. We should remove the misleading note about a possible race and exception, and add order_by() to ensure that the same record is updated every time. Ideally we should add a UniqueConstraint to the BandwidthUsage model to prevent multiple bw usage records existing for the same date range and UUID. With that fix in place, we should be able to replace the .first() call with .one(). ** Affects: nova Importance: Undecided Assignee: Pavel Kholkin (pkholkin) Status: In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1614561 Title: db.bw_usage_update can update multiple db records Status in OpenStack Compute (nova): In Progress Bug description: The current code in the db.bw_usage_update() function uses .first(), which is not correct because no order_by() is applied to the SQL query, so the returned "first record" is indeterminate. We should remove the misleading note about a possible race and exception, and add order_by() to ensure that the same record is updated every time. Ideally we should add a UniqueConstraint to the BandwidthUsage model to prevent multiple bw usage records existing for the same date range and UUID. With that fix in place, we should be able to replace the .first() call with .one(). To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1614561/+subscriptions
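The determinism problem the bug describes can be shown with plain SQL: without an ORDER BY, "the first matching row" is whatever the database happens to return. A stdlib sqlite3 sketch of the proposed fix (table and column names are illustrative, not nova's schema):

```python
import sqlite3

def pick_usage_row(conn, uuid, start_period):
    """Return the id of the row to update, deterministically.

    The ORDER BY is the point: with duplicate (uuid, start_period)
    rows, it guarantees the same row is chosen on every call.
    """
    cur = conn.execute(
        "SELECT id FROM bw_usage_cache"
        " WHERE uuid = ? AND start_period = ?"
        " ORDER BY id LIMIT 1",
        (uuid, start_period))
    row = cur.fetchone()
    return row[0] if row else None
```

With a UniqueConstraint on (uuid, start_period) the duplicates could not exist in the first place, and the query could demand exactly one row instead of picking among several.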
[Yahoo-eng-team] [Bug 1614556] [NEW] Resize does not work in mitaka
Public bug reported: Deployed cluster with Fuel 9.0. Created and launched an instance with resources of 2 VCPUs, 8GB of RAM, 100GB ephemeral storage. Ephemeral storage is not backed by Ceph. I was able to successfully use Horizon to resize the instance (CentOS) from the running resources above to a larger flavor with 4 VCPUs, 16GB of RAM and 200GB of ephemeral storage. However, when I went to go back down to 2 VCPUs, 8GB RAM, 100GB ephemeral, the instance won't resize. In Horizon: 1. click the "Resize" option 2. confirm the resize 3. Horizon displays a "Success" message (but the operation actually failed in the background). I checked the instance's "Action Log" in Horizon and saw that it errored. Checking the logs on the controller shows: WARNING nova.scheduler.host_manager [req-d46ad3a1-be18-464b-a3b9-11123e481fcc bdb162ee567d4230a988895f2a000a8b9 84ec9bb0ccc34eea84fbf49b557c4a66] Host has more disk space than database expected (43gb > 34gb) Searching the internet I found https://ask.openstack.org/en/question/43359/resize-openstack-icehouse-instance-at-same-node-bug/ and it isn't clear whether this is a bug. Some people report that it works in older versions but not in newer ones; others report that resizing down is expected to fail and should just show an error message. I need clarification on whether this is a bug or whether erroring on a resize down is intended (though either way it should not report success and then fail). ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1614556 Title: Resize does not work in mitaka Status in OpenStack Compute (nova): New Bug description: Deployed cluster with Fuel 9.0. Created and launched an instance with resources of 2 VCPUs, 8GB of RAM, 100GB ephemeral storage. Ephemeral storage is not backed by Ceph. I was able to successfully use Horizon to resize the instance (CentOS) from the running resources above to a larger flavor with 4 VCPUs, 16GB of RAM and 200GB of ephemeral storage. However, when I went to go back down to 2 VCPUs, 8GB RAM, 100GB ephemeral, the instance won't resize. In Horizon: 1. click the "Resize" option 2. confirm the resize 3. Horizon displays a "Success" message (but the operation actually failed in the background). I checked the instance's "Action Log" in Horizon and saw that it errored. Checking the logs on the controller shows: WARNING nova.scheduler.host_manager [req-d46ad3a1-be18-464b-a3b9-11123e481fcc bdb162ee567d4230a988895f2a000a8b9 84ec9bb0ccc34eea84fbf49b557c4a66] Host has more disk space than database expected (43gb > 34gb) Searching the internet I found https://ask.openstack.org/en/question/43359/resize-openstack-icehouse-instance-at-same-node-bug/ and it isn't clear whether this is a bug. Some people report that it works in older versions but not in newer ones; others report that resizing down is expected to fail and should just show an error message. I need clarification on whether this is a bug or whether erroring on a resize down is intended (though either way it should not report success and then fail). To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1614556/+subscriptions
[Yahoo-eng-team] [Bug 1259292] Re: Some tests use assertEqual(observed, expected), the argument order is wrong
** Also affects: bandit Importance: Undecided Status: New ** Changed in: bandit Assignee: (unassigned) => Jiong Liu (liujiong) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1259292 Title: Some tests use assertEqual(observed, expected) , the argument order is wrong Status in Astara: Fix Released Status in Bandit: New Status in Barbican: In Progress Status in Blazar: New Status in Ceilometer: Invalid Status in Cinder: Fix Released Status in congress: Fix Released Status in daisycloud-core: New Status in Designate: Fix Released Status in Freezer: In Progress Status in Glance: Fix Released Status in glance_store: Fix Released Status in Higgins: New Status in OpenStack Dashboard (Horizon): In Progress Status in OpenStack Identity (keystone): Fix Released Status in Magnum: Fix Released Status in Manila: Fix Released Status in Mistral: Fix Released Status in Murano: Fix Released Status in networking-calico: New Status in networking-infoblox: In Progress Status in networking-l2gw: In Progress Status in networking-sfc: Fix Released Status in OpenStack Compute (nova): Won't Fix Status in os-brick: Fix Released Status in PBR: Fix Released Status in pycadf: New Status in python-barbicanclient: In Progress Status in python-ceilometerclient: Invalid Status in python-cinderclient: Fix Released Status in python-designateclient: Fix Committed Status in python-glanceclient: Fix Released Status in python-mistralclient: Fix Released Status in python-solumclient: Fix Released Status in Python client library for Zaqar: Fix Released Status in Rally: In Progress Status in Sahara: Fix Released Status in Solum: Fix Released Status in sqlalchemy-migrate: New Status in SWIFT: New Status in tacker: In Progress Status in tempest: Invalid Status in zaqar: Fix Released Bug description: The test cases will produce a confusing error message if the tests ever fail, so this is 
worth fixing. To manage notifications about this bug go to: https://bugs.launchpad.net/astara/+bug/1259292/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
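The convention the affected projects follow is assertEqual(expected, observed); with the arguments swapped, a failing test labels the two values backwards in its error message. A minimal Python illustration of the correct order (the function under test here is a hypothetical stand-in, not code from any of the listed projects):

```python
import unittest

def compute_answer():
    # Hypothetical stand-in for the code under test.
    return 42

class TestArgumentOrder(unittest.TestCase):
    def test_expected_first(self):
        # Correct order: expected value first, observed value second.
        # With the arguments reversed, a failure message would report
        # the observed value as "expected" and vice versa, which is the
        # confusing output this bug asks projects to fix.
        self.assertEqual(42, compute_answer())
```
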
[Yahoo-eng-team] [Bug 1614538] [NEW] neutron: instance.info_cache isn't refreshed after deleting associated floating IP
Public bug reported: Shown in a tempest test here: http://logs.openstack.org/95/356095/2/check/gate-tempest-dsvm-neutron- full-ubuntu-xenial/8d3cbb2/console.html#_2016-08-18_03_18_38_290951 You can see from this patch that we refresh the instance's network info_cache (server.addresses) when deleting a floating IP associated with that instance but only when using nova-network, we don't do it for neutron. This is related to bug 1586931 and investigation that happened in https://review.openstack.org/#/c/351960/. Basically the problem is that this method isn't decorated with the refresh_cache decorator: https://github.com/openstack/nova/blob/d14fc79f65e04cc39a3988783344aecd84621291/nova/network/neutronv2/api.py#L1826 But notice that this is: https://github.com/openstack/nova/blob/d14fc79f65e04cc39a3988783344aecd84621291/nova/network/neutronv2/api.py#L1845 That's the method that's called from the REST API when disassociating, but not deleting, a floating IP from a server. ** Affects: nova Importance: Medium Assignee: MJWurtz (michael-wurtz) Status: Triaged ** Affects: nova/liberty Importance: Medium Status: Confirmed ** Affects: nova/mitaka Importance: Medium Status: Confirmed ** Tags: low-hanging-fruit neutron ** Also affects: nova/mitaka Importance: Undecided Status: New ** Also affects: nova/liberty Importance: Undecided Status: New ** Changed in: nova/mitaka Status: New => Confirmed ** Changed in: nova/liberty Status: New => Confirmed -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). 
https://bugs.launchpad.net/bugs/1614538 Title: neutron: instance.info_cache isn't refreshed after deleting associated floating IP Status in OpenStack Compute (nova): Triaged Status in OpenStack Compute (nova) liberty series: Confirmed Status in OpenStack Compute (nova) mitaka series: Confirmed To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1614538/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
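The fix amounts to decorating the floating-IP deletion path the same way the disassociation path already is, so the instance's cached network info is rebuilt after the operation. A rough, self-contained sketch of the decorator pattern involved (simplified names of ours, not nova's actual refresh_cache implementation):

```python
import functools

def refresh_cache(fn):
    """Sketch of the refresh_cache idea: after the wrapped network
    operation completes, rebuild and store the instance's network
    info cache so server.addresses reflects the change."""
    @functools.wraps(fn)
    def wrapper(self, context, instance, *args, **kwargs):
        result = fn(self, context, instance, *args, **kwargs)
        # The real decorator would fetch fresh network info from neutron
        # and persist it; here we just recompute a toy cache.
        instance['info_cache'] = self.build_nw_info(instance)
        return result
    return wrapper

class FakeNetworkAPI:
    def build_nw_info(self, instance):
        return {'addresses': list(instance.get('addresses', []))}

    @refresh_cache
    def disassociate_floating_ip(self, context, instance, address):
        # Without @refresh_cache, 'info_cache' would keep the stale address.
        instance.get('addresses', []).remove(address)
```
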
[Yahoo-eng-team] [Bug 1614537] [NEW] Neutron Objects do not override the UUIDField to actually validate UUIDs
Public bug reported: Oslo Versioned Objects' implementation of the UUID field does not actually validate anything; it is a wrapper around a string type. Projects are advised that, to actually validate UUIDs, they need to override the field in their custom fields. [1] Leaving it non-validating can cause issues when the field later becomes validating, or when there is an assumption that it is already being validated. 1 - http://docs.openstack.org/developer/oslo.versionedobjects/api/fields.html#oslo_versionedobjects.fields.UUIDField ** Affects: neutron Importance: Undecided Status: New ** Tags: oslo -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1614537 Title: Neutron Objects do not override the UUIDField to actually validate UUIDs Status in neutron: New To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1614537/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
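The kind of override the linked documentation suggests can be sketched with the standard library alone (this is an illustrative field type of ours, not Neutron's actual code; a real override would subclass oslo.versionedobjects' field classes):

```python
import uuid

class ValidatingUUID:
    """Sketch of a coerce() that actually validates, unlike the stock
    oslo.versionedobjects UUIDField, which accepts any string."""

    @staticmethod
    def coerce(obj, attr, value):
        try:
            uuid.UUID(str(value))
        except (TypeError, ValueError, AttributeError):
            raise ValueError("%r is not a valid UUID for field %s"
                             % (value, attr))
        return str(value)
```
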
[Yahoo-eng-team] [Bug 1601822] Re: remotefs.RsyncDriver() should use utils.safe_ip_format()
Reviewed: https://review.openstack.org/340386 Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=270be6906c13bc621a7ad507b8ae729a940609d2 Submitter: Jenkins Branch:master commit 270be6906c13bc621a7ad507b8ae729a940609d2 Author: Alexey I. Froloff Date: Mon Jul 11 16:31:09 2016 +0300 Properly quote IPv6 address in RsyncDriver When IPv6 address literal is used as host in rsync call, it should be enclosed in square brackets. This is already done for copy_file method outside of driver in changeset Ia5f28673e79158d948980f2b3ce496c6a56882af Create helper function format_remote_path(host, path) and use where appropriate. Closes-Bug: 1601822 Change-Id: Ifc386539f33684fb764f5f638a7ee0a10b1ef534 ** Changed in: nova Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1601822 Title: remotefs.RsyncDriver() should use utils.safe_ip_format() Status in OpenStack Compute (nova): Fix Released Bug description: IPv6 address literal should be wrapped in square brackets when calling rsync: Resize error: not able to execute ssh command: Unexpected error while running command. Command: rsync --archive --relative --no-implied-dirs /tmp/tmpo_wpSz/./var/lib/nova/instances/fd7c6610-cf13-42e0-826c-3b4eb2494465 fd4b:cafe:dead:beef::bad:f00d:/ Exit code: 255 Stdout: u'' Stderr: u'ssh: Could not resolve hostname fd4b: Name or service not known\r\nrsync: connection unexpectedly closed (0 bytes received so far) [sender]\nrsync error: unexplained error (code 255) at io.c(226) [sender=3.1.0]\n' To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1601822/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
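The behavior of the helper the commit introduces can be sketched as follows (a simplified stand-in of ours, not nova's actual format_remote_path implementation):

```python
def format_remote_path(host, path):
    """Return 'host:path' for rsync/ssh, bracketing IPv6 literals so the
    colons inside the address are not mistaken for the host:path
    separator (the failure shown in the bug's rsync error)."""
    if ':' in host and not host.startswith('['):
        host = '[%s]' % host
    return '%s:%s' % (host, path)
```

Without the brackets, rsync parses `fd4b:cafe:dead:beef::bad:f00d:/` as host `fd4b` plus a path, which is exactly the "Could not resolve hostname fd4b" error in the report.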
[Yahoo-eng-team] [Bug 1614519] [NEW] Error on booting instance with ephemeral while using Flat/Raw images
Public bug reported:

Description
===========
When using the Flat/Raw image type in nova with flavors that have a non-empty ephemeral disk, the instance fails to boot.

Steps to reproduce
==================
1. Create a flavor with a non-empty ephemeral disk.
2. Boot an instance with the flavor from step 1.

Expected result
===============
Instance becomes active.

Actual result
=============
Instance moves to ERROR state.

Environment
===========
1. stack@node1:/opt/stack/nova$ git log -1 commit d23fb5ff9f10559681adc04b5b4116cfb0ede9df Merge: 6a5e36f 630eed5 Author: Jenkins Date: Wed Aug 17 14:47:19 2016 + Merge "Make simple_cell_setup work when multiple nodes are present"
2. Libvirt 1.3.1
3. No shared storage

Logs & Configs
==============
nova.conf: use_cow_images = False on that compute node.

2016-08-18 11:46:15.451 ERROR nova.compute.manager [req-8b08cc99-5b5c-4a18-8306-142a38673dce admin admin] [instance: 4317b4fe-bba2-4b92-bbe9-73506229bb22] Instance failed to spawn
2016-08-18 11:46:15.451 TRACE nova.compute.manager [instance: 4317b4fe-bba2-4b92-bbe9-73506229bb22] Traceback (most recent call last):
2016-08-18 11:46:15.451 TRACE nova.compute.manager [instance: 4317b4fe-bba2-4b92-bbe9-73506229bb22] File "/opt/stack/nova/nova/compute/manager.py", line 2075, in _build_resources
2016-08-18 11:46:15.451 TRACE nova.compute.manager [instance: 4317b4fe-bba2-4b92-bbe9-73506229bb22] yield resources
2016-08-18 11:46:15.451 TRACE nova.compute.manager [instance: 4317b4fe-bba2-4b92-bbe9-73506229bb22] File "/opt/stack/nova/nova/compute/manager.py", line 1919, in _build_and_run_instance
2016-08-18 11:46:15.451 TRACE nova.compute.manager [instance: 4317b4fe-bba2-4b92-bbe9-73506229bb22] block_device_info=block_device_info)
2016-08-18 11:46:15.451 TRACE nova.compute.manager [instance: 4317b4fe-bba2-4b92-bbe9-73506229bb22] File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2650, in spawn
2016-08-18 11:46:15.451 TRACE nova.compute.manager [instance: 4317b4fe-bba2-4b92-bbe9-73506229bb22] admin_pass=admin_password)
2016-08-18 11:46:15.451 TRACE nova.compute.manager [instance:
4317b4fe-bba2-4b92-bbe9-73506229bb22] File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 3131, in _create_image
2016-08-18 11:46:15.451 TRACE nova.compute.manager [instance: 4317b4fe-bba2-4b92-bbe9-73506229bb22] specified_fs=specified_fs)
2016-08-18 11:46:15.451 TRACE nova.compute.manager [instance: 4317b4fe-bba2-4b92-bbe9-73506229bb22] File "/opt/stack/nova/nova/virt/libvirt/imagebackend.py", line 221, in cache
2016-08-18 11:46:15.451 TRACE nova.compute.manager [instance: 4317b4fe-bba2-4b92-bbe9-73506229bb22] if size > self.get_disk_size(base):
2016-08-18 11:46:15.451 TRACE nova.compute.manager [instance: 4317b4fe-bba2-4b92-bbe9-73506229bb22] File "/opt/stack/nova/nova/virt/libvirt/imagebackend.py", line 275, in get_disk_size
2016-08-18 11:46:15.451 TRACE nova.compute.manager [instance: 4317b4fe-bba2-4b92-bbe9-73506229bb22] return disk.get_disk_size(name)
2016-08-18 11:46:15.451 TRACE nova.compute.manager [instance: 4317b4fe-bba2-4b92-bbe9-73506229bb22] File "/opt/stack/nova/nova/virt/disk/api.py", line 148, in get_disk_size
2016-08-18 11:46:15.451 TRACE nova.compute.manager [instance: 4317b4fe-bba2-4b92-bbe9-73506229bb22] return images.qemu_img_info(path).virtual_size
2016-08-18 11:46:15.451 TRACE nova.compute.manager [instance: 4317b4fe-bba2-4b92-bbe9-73506229bb22] File "/opt/stack/nova/nova/virt/images.py", line 51, in qemu_img_info
2016-08-18 11:46:15.451 TRACE nova.compute.manager [instance: 4317b4fe-bba2-4b92-bbe9-73506229bb22] raise exception.DiskNotFound(location=path)
2016-08-18 11:46:15.451 TRACE nova.compute.manager [instance: 4317b4fe-bba2-4b92-bbe9-73506229bb22] DiskNotFound: No disk at /opt/stack/data/nova/instances/_base/ephemeral_3_40d1d2c
** Affects: nova Importance: High Status: Confirmed
** Changed in: nova Status: New => Confirmed
** Changed in: nova Importance: Undecided => High
-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1614519 Title: Error on booting instance with ephemeral while using Flat/Raw images Status in OpenStack Compute (nova): Confirmed
[Yahoo-eng-team] [Bug 1613256] Re: 'No data to report.' error in coverage report
Reviewed: https://review.openstack.org/355716 Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=d177395e75faaf0fb2af679567532b8466c965c8 Submitter: Jenkins Branch:master commit d177395e75faaf0fb2af679567532b8466c965c8 Author: Takashi NATSUME Date: Tue Aug 16 13:23:18 2016 +0900 Fix 'No data to report' error Remove 2 unnecessary commands in 'testenv:cover' of tox.ini. HTML coverage reports can be generated in 'cover' directory by 'python setup.py testr --coverage' only without the removed commands. Change-Id: Ib9539f845aad29269a9cb07db719b22e35f7bbeb Closes-Bug: #1613256 ** Changed in: nova Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1613256 Title: 'No data to report.' error in coverage report Status in OpenStack Compute (nova): Fix Released Bug description: When executing 'tox -e cover', a 'No data to report.' error occurs. But coverage report (HTML) is generated in 'cover' directory. 'python setup.py testr --coverage --testr-args=' generates coverage report (HTML). Then 'coverage combine' command truncates the contents of '.coverage' file. So 'coverage html --include=nova/* -d covhtml -i' fails with 'No data to report.' 
stack@devstack-master:/tmp/nova$ tox -e cover cover develop-inst-noop: /tmp/nova cover installed: alembic==0.8.7,amqp==1.4.9,anyjson==0.3.3,appdirs==1.4.0,Babel==2.3.4,bandit==1.0.1,boto==2.42.0,cachetools==1.1.6,castellan==0.4.0,cffi==1.7.0,cliff==2.1.0,cmd2==0.6.8,colorama==0.3.7,contextlib2==0.5.4,coverage==4.2,cryptography==1.4,debtcollector==1.8.0,decorator==4.0.10,docutils==0.12,dogpile.cache==0.6.1,enum34==1.1.6,eventlet==0.19.0,extras==1.0.0,fasteners==0.14.1,fixtures==3.0.0,flake8==2.2.4,funcsigs==1.0.2,functools32==3.2.3.post2,futures==3.0.5,futurist==0.17.0,gabbi==1.24.0,gitdb==0.6.4,GitPython==2.0.8,greenlet==0.4.10,hacking==0.10.2,httplib2==0.9.2,idna==2.1,ipaddress==1.0.16,iso8601==0.1.11,Jinja2==2.8,jsonpatch==1.14,jsonpath-rw==1.4.0,jsonpath-rw-ext==1.0.0,jsonpointer==1.10,jsonschema==2.5.1,keystoneauth1==2.11.1,keystonemiddleware==4.8.0,kombu==3.0.35,linecache2==1.0.0,lxml==3.6.1,Mako==1.0.4,MarkupSafe==0.23,mccabe==0.2.1,microversion-parse==0.1.4,mock==2.0.0,monotonic==1.2,mox3==0.18.0,msgpack-python==0.4.8,netaddr==0.7.18,netifaces==0.10.4,-e 
git+https://git.openstack.org/openstack/nova.git@15e536518ae1a366c8a8b15d9183072050e4b6f2#egg=nova,numpy==1.11.1,openstackdocstheme==1.4.0,openstacksdk==0.9.1,os-api-ref==0.4.0,os-brick==1.5.0,os-client-config==1.18.0,os-testr==0.7.0,os-vif==1.1.0,os-win==1.1.0,osc-lib==1.0.0,oslo.cache==1.12.0,oslo.concurrency==3.13.0,oslo.config==3.15.0,oslo.context==2.8.0,oslo.db==4.11.0,oslo.i18n==3.8.0,oslo.log==3.14.0,oslo.messaging==5.7.0,oslo.middleware==3.17.0,oslo.policy==1.14.0,oslo.privsep==1.11.0,oslo.reports==1.14.0,oslo.rootwrap==5.1.0,oslo.serialization==2.13.0,oslo.service==1.14.0,oslo.utils==3.16.0,oslo.versionedobjects==1.15.0,oslo.vmware==2.13.0,oslosphinx==4.7.0,oslotest==2.8.0,paramiko==2.0.2,Paste==2.0.3,PasteDeploy==1.5.2,pbr==1.10.0,pep8==1.5.7,pika==0.10.0,pika-pool==0.1.3,ply==3.8,positional==1.1.1,prettytable==0.7.2,psutil==1.2.1,psycopg2==2.6.2,py==1.4.31,pyasn1==0.1.9,pycadf==2.3.0,pycparser==2.14,pyflakes==0.8.1,Pygments==2.1.3,pyinotify==0.9.6,PyMySQL==0.7.6,pyparsin g==2.1.6,pytest==2.9.2,python-barbicanclient==4.0.1,python-cinderclient==1.8.0,python-dateutil==2.5.3,python-editor==1.0.1,python-glanceclient==2.3.0,python-ironicclient==1.6.0,python-keystoneclient==3.4.0,python-mimeparse==1.5.2,python-neutronclient==5.0.0,python-novaclient==5.0.0,python-openstackclient==2.6.0,python-subunit==1.2.0,pytz==2016.6.1,PyYAML==3.11,reno==1.8.0,repoze.lru==0.6,requests==2.11.0,requests-mock==1.0.0,requestsexceptions==1.1.3,retrying==1.3.3,rfc3986==0.3.1,Routes==2.3.1,simplejson==3.8.2,six==1.10.0,smmap==0.9.0,Sphinx==1.2.3,SQLAlchemy==1.0.14,sqlalchemy-migrate==0.10.0,sqlparse==0.2.0,stevedore==1.17.0,suds-jurko==0.6,tempest-lib==1.0.0,Tempita==0.5.2,testrepository==0.0.20,testresources==2.0.1,testscenarios==0.5.0,testtools==2.2.0,traceback2==1.4.0,unicodecsv==0.14.1,unittest2==1.1.0,urllib3==1.16,warlock==1.2.0,WebOb==1.6.1,websockify==0.8.0,wrapt==1.10.8,wsgi-intercept==1.3.1 cover runtests: PYTHONHASHSEED='119558979' cover runtests: commands[0] | coverage 
erase cover runtests: commands[1] | python setup.py testr --coverage --testr-args= running testr running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \ OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \ OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \ ${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./nova/tests} --list running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \ OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \ OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \ ${PYTHON:-python} -m subu
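Based on the commit message, the corrected tox environment likely reduces to something like the following sketch (the exact tox.ini contents are not shown in the bug report, so the surrounding settings here are assumptions):

```ini
[testenv:cover]
# 'python setup.py testr --coverage' already writes the HTML report into
# the 'cover' directory on its own; the extra 'coverage combine' /
# 'coverage html' commands that truncated .coverage and caused
# "No data to report." have been removed.
commands = python setup.py testr --coverage --testr-args='{posargs}'
```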
[Yahoo-eng-team] [Bug 1614493] [NEW] openstack endpoint delete failing with error not found
Public bug reported: Using devstack mitaka and trying to delete endpoint 28d778c42e7644e1932023ce5807b306 created by stack.sh, but it fails with a "not found" error even though the endpoint exists.

$ openstack endpoint list
+----------------------------------+-----------+--------------+----------------+
| ID                               | Region    | Service Name | Service Type   |
+----------------------------------+-----------+--------------+----------------+
| 28d778c42e7644e1932023ce5807b306 | RegionOne | neutron      | network        |
| 4afd25b537b749c28344832b442d9045 | RegionOne | heat-cfn     | cloudformation |
| 4afdf9c49d534b66a2a69fef22841241 | RegionOne | heat         | orchestration  |
| 07684056f1784ac6a1bdd68156ffb3fc | RegionOne | neutron      | network        |
| ed180651cb1748d0bcece2a955649be4 | RegionOne | glance       | image          |
| b943463cd9be4d08a729f53321f7aef6 | RegionOne | nova         | compute        |
| ae99b760e89b414992795447ca9dd709 | RegionOne | nova_legacy  | compute_legacy |
| 282c0c95bc2f45a59dab9ebf4e7be84f | RegionOne | keystone     | identity       |
+----------------------------------+-----------+--------------+----------------+

$ openstack endpoint show 28d778c42e7644e1932023ce5807b306
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| adminurl     | http://192.168.2.141:9696/       |
| enabled      | True                             |
| id           | 28d778c42e7644e1932023ce5807b306 |
| internalurl  | http://192.168.2.141:9696/       |
| publicurl    | http://192.168.2.141:9696/       |
| region       | RegionOne                        |
| service_id   | fc163cb88651409c9d01ceb130369cdc |
| service_name | neutron                          |
| service_type | network                          |
+--------------+----------------------------------+

$ openstack endpoint delete 28d778c42e7644e1932023ce5807b306
Could not find endpoint: 28d778c42e7644e1932023ce5807b306 (HTTP 404) (Request-ID: req-646a9c87-2b51-4648-9875-175098e9a23b)

2016-08-18 16:52:08.195 27228 DEBUG keystone.middleware.auth [req-646a9c87-2b51-4648-9875-175098e9a23b 2efffe9905c14da9a730c31e9e80427c 632a22881fea44e2b99adac47f43d115 - default default] RBAC: auth_context: {'is_delegated_auth': False, 'access_token_id': None, 'user_id': u'2efffe9905c14da9a730c31e9e80427c', 'roles': [u'admin'], 'user_domain_id': 'default', 'trustee_id': None, 'trustor_id': None, 'consumer_id': None, 'token': , 'project_id': u'632a22881fea44e2b99adac47f43d115', 'trust_id': None, 'project_domain_id': 'default'} process_request /opt/stack/keystone/keystone/middleware/auth.py:221
2016-08-18 16:52:08.198 27228 INFO keystone.common.wsgi [req-646a9c87-2b51-4648-9875-175098e9a23b 2efffe9905c14da9a730c31e9e80427c 632a22881fea44e2b99adac47f43d115 - default default] DELETE http://192.168.2.141:35357/v2.0/endpoints/28d778c42e7644e1932023ce5807b306
2016-08-18 16:52:08.199 27228 DEBUG keystone.policy.backends.rules [req-646a9c87-2b51-4648-9875-175098e9a23b 2efffe9905c14da9a730c31e9e80427c 632a22881fea44e2b99adac47f43d115 - default default] enforce admin_required: {'user_id': u'2efffe9905c14da9a730c31e9e80427c', 'is_admin': 0, 'roles': [u'admin'], 'tenant_id': u'632a22881fea44e2b99adac47f43d115'} enforce /opt/stack/keystone/keystone/policy/backends/rules.py:76
2016-08-18 16:52:08.206 27228 WARNING keystone.common.wsgi [req-646a9c87-2b51-4648-9875-175098e9a23b 2efffe9905c14da9a730c31e9e80427c 632a22881fea44e2b99adac47f43d115 - default default] Could not find endpoint: 28d778c42e7644e1932023ce5807b306

** Affects: keystone Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1614493 Title: openstack endpoint delete failing with error not found Status in OpenStack Identity (keystone): New
[Yahoo-eng-team] [Bug 1478103] Re: need support for configuring syslog
** Changed in: maas Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1478103 Title: need support for configuring syslog Status in cloud-init: Fix Released Status in MAAS: Fix Released Bug description: In order to instruct a host to easily log syslog information to another system, we need to add a cloud-config format for this. The format to use looks like this:

## The syslog module allows you to configure the system's syslog.
## Configuration of syslog is under the top-level cloud-config
## entry 'syslog'.
##
## "remotes"
## remotes is a dictionary; items are of the form 'name: remote_info'.
## name is simply a name (example 'maas'). It has no importance other than
## for cloud-init merging configs.
##
## remote_info is of the format
##   * optional filter for log messages
##     default if not present: *.*
##   * optional leading '@' or '@@' (indicates udp or tcp)
##     default if not present (udp): @
##     This is rsyslog format for that. If not present, '@' (udp) is used.
##   * ipv4 or ipv6 address or hostname
##     ipv6 addresses must be encoded in [::1] format. example: @[fd00::1]:514
##   * optional port
##     port defaults to 514
##
## Example:
#cloud-config
rsyslog:
  remotes:
    # udp to host 'maas.mydomain' port 514
    maashost: maas.mydomain
    # udp to ipv4 host on port 514
    maas: "@[10.5.1.56]:514"
    # tcp to ipv6 host on port 555
    maasipv6: "*.* @@[FE80::0202:B3FF:FE1E:8329]:555"

To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1478103/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1614477] [NEW] [RFE] NAT64 support with neutron
Public bug reported: In some deployment scenarios, it is likely that new clients will be IPv6-only and will want to connect to existing IPv4-only servers. For all of these devices to be able to communicate, they all need to speak IPv6 or have some sort of translator involved. Translation requires a technology such as NAT64. NAT64 allows IPv6 hosts to communicate with IPv4 servers by creating a NAT mapping between the IPv6 and the IPv4 address. While supporting IPv4/IPv6 translation means providing separate IPv4 and IPv6 connectivity, and thus incurring additional complexity as well as additional operational and administrative costs, it is sometimes a necessary step in the transition to pure IPv6 networks. We would like to propose NAT64 support following a method similar to FIP allocation for a fixed IPv4 address, but this time assigning an IPv6 address. Consider a topology like the following diagram: allow associating an IPv6 floating IP allocated on the "external network" to a fixed IP on the "private network".

+------------------+
|     external     |
|     network      |
|                  |
| IPv6 floating-ip |
+--------+---------+
         |
         | router gateway port
    +----+-----+
    |  router  |
    +----+-----+
         | router
         | interface
         |
    +----+-----+
    | private  |
    | network  |
    |          |
    | fixed-ip |
    +----------+

For the API, the following changes are necessary:
* Add an extension "nat64" for feature discovery. The extension does not add any resources or attributes to the REST API.
* Allow IPv6 floating IP association via a router gateway interface.
* The existing l3 create-floating-IP logic should be updated to allow an IPv6 external subnet for the floating IP allocation.

** Affects: neutron Importance: Undecided Status: New

** Description changed: (whitespace-only reformatting of the bug description's ASCII diagram and list indentation)
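NAT64 deployments commonly use the RFC 6052 well-known prefix 64:ff9b::/96, in which the translator embeds the IPv4 server address. This is independent of the Neutron API changes proposed above, but it illustrates the address mapping a NAT64 translator maintains (a standalone sketch, not part of the proposal):

```python
import ipaddress

# RFC 6052 well-known NAT64 prefix (64:ff9b::/96).
NAT64_PREFIX = ipaddress.IPv6Address('64:ff9b::')

def nat64_embed(ipv4_literal):
    """Embed an IPv4 address into the low 32 bits of the NAT64
    well-known prefix, yielding the IPv6 address an IPv6-only
    client would use to reach that IPv4 server."""
    v4 = int(ipaddress.IPv4Address(ipv4_literal))
    return str(ipaddress.IPv6Address(int(NAT64_PREFIX) + v4))
```

For example, nat64_embed('192.0.2.33') embeds 0xc0000221 into the prefix, giving 64:ff9b::c000:221.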
[Yahoo-eng-team] [Bug 1614478] [NEW] Synthetic_fields can contain any string
Public bug reported: The objects/base NeutronDbObject doesn't check synthetic_fields for validity, so typos and errors pass silently. ** Affects: neutron Importance: Low Assignee: John Perkins (john-d-perkins) Status: New ** Changed in: neutron Importance: Undecided => Low ** Changed in: neutron Assignee: (unassigned) => John Perkins (john-d-perkins) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1614478 Title: Synthetic_fields can contain any string Status in neutron: New To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1614478/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
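The kind of check the bug asks for can be sketched as follows (invented class and field names, not the actual NeutronDbObject code): every entry in synthetic_fields should correspond to a declared field, so a typo fails loudly instead of passing silently.

```python
class ObjectWithCheckedSyntheticFields:
    """Sketch of a versioned-object base that validates synthetic_fields."""
    fields = {'id': str, 'name': str, 'ports': list}
    synthetic_fields = ['ports']

    @classmethod
    def validate_synthetic_fields(cls):
        # A synthetic field that is not a declared field is almost
        # certainly a typo; raise instead of letting it pass silently.
        unknown = [f for f in cls.synthetic_fields if f not in cls.fields]
        if unknown:
            raise TypeError('%s: synthetic_fields not declared in fields: %s'
                            % (cls.__name__, unknown))
```
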
[Yahoo-eng-team] [Bug 1214176] Re: Fix copyright headers to be compliant with Foundation policies
Reviewed: https://review.openstack.org/347299
Committed: https://git.openstack.org/cgit/openstack-dev/pbr/commit/?id=a95982f9a061fa29fd98a87ffd6f9fe7043d5e1f
Submitter: Jenkins
Branch: master

commit a95982f9a061fa29fd98a87ffd6f9fe7043d5e1f
Author: dineshbhor
Date: Tue Jul 26 16:30:10 2016 +0530

    Replace OpenStack LLC with OpenStack Foundation

    Change-Id: I03fac862d7346bdac83503afb5f26119d0ea300d
    Closes-Bug: #1214176

** Changed in: pbr
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1214176

Title:
  Fix copyright headers to be compliant with Foundation policies

Status in Ceilometer: Fix Released
Status in Cinder: Fix Released
Status in devstack: Fix Released
Status in Glance: Fix Released
Status in heat: Fix Released
Status in OpenStack Dashboard (Horizon): Fix Released
Status in OpenStack Identity (keystone): Fix Released
Status in Murano: Fix Released
Status in neutron: Fix Released
Status in OpenStack Compute (nova): Fix Released
Status in PBR: Fix Released
Status in python-ceilometerclient: Fix Released
Status in python-cinderclient: Fix Released
Status in python-glanceclient: Fix Released
Status in python-heatclient: Fix Released
Status in python-keystoneclient: Fix Released
Status in python-manilaclient: Fix Released
Status in python-neutronclient: Fix Released
Status in python-troveclient: Fix Released
Status in OpenStack Object Storage (swift): Fix Released
Status in tempest: Fix Released
Status in OpenStack DBaaS (Trove): Fix Released

Bug description:
  Correct the copyright headers to be consistent with the policies outlined by the OpenStack Foundation at http://www.openstack.org/brand/openstack-trademark-policy/

  Remove references to OpenStack LLC, replace with OpenStack Foundation

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1214176/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1614443] Re: LBaaSv1: HAproxy scenario tests cleanup fail with a StaleDataError
This is a duplicate of bug 1613251.

** Changed in: neutron
   Status: In Progress => Invalid

** Changed in: neutron
   Assignee: Nir Magnezi (nmagnezi) => (unassigned)

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1614443

Title:
  LBaaSv1: HAproxy scenario tests cleanup fail with a StaleDataError

Status in neutron:
  Invalid

Bug description:
  Found this while working on bug 1613251.
  An example of that error: http://logs.openstack.org/90/351490/10/gate/gate-neutron-lbaasv1-dsvm-api/fa4d806/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1614443/+subscriptions
[Yahoo-eng-team] [Bug 1614452] [NEW] Port create time grows at scale due to dvr arp update
Public bug reported:

Scale tests show that VMs sometimes fail to spawn because of timeouts on port creation. Neutron server logs show that port creation time grows because dvr arp table updates are sent to each l3 dvr agent hosting the router one by one; this takes > 90% of the time: http://paste.openstack.org/show/560761/

** Affects: neutron
   Importance: High
   Assignee: Oleg Bondarev (obondarev)
   Status: Confirmed

** Tags: l3-dvr-backlog loadimpact

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1614452

Title:
  Port create time grows at scale due to dvr arp update

Status in neutron:
  Confirmed

Bug description:
  Scale tests show that VMs sometimes fail to spawn because of timeouts on port creation. Neutron server logs show that port creation time grows because dvr arp table updates are sent to each l3 dvr agent hosting the router one by one; this takes > 90% of the time: http://paste.openstack.org/show/560761/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1614452/+subscriptions
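The cost pattern described above can be sketched as follows. This is illustrative only, not Neutron's actual code: a fake RPC client counts round trips to show why serial per-agent ARP updates dominate port-create time, and how a single fanout-style cast scales better.

```python
# Illustrative sketch (not Neutron's actual code): serial per-agent ARP
# updates cost O(n) RPC round trips, a fanout cast costs O(1).

class FakeRPCClient:
    """Stands in for an oslo.messaging RPC client; counts messages sent."""
    def __init__(self):
        self.casts = 0

    def cast_to_agent(self, host, arp_entry):
        self.casts += 1  # one message per agent, sent one by one

    def fanout_cast(self, arp_entry):
        self.casts += 1  # one message delivered to all agents at once

def update_arp_serial(client, hosts, arp_entry):
    # What the bug describes: one update per l3 dvr agent, serially.
    for host in hosts:
        client.cast_to_agent(host, arp_entry)

def update_arp_fanout(client, hosts, arp_entry):
    # One possible fix: a single fanout cast, independent of agent count.
    client.fanout_cast(arp_entry)

hosts = ["l3-agent-%d" % i for i in range(100)]
entry = {"ip_address": "10.0.0.5", "mac_address": "fa:16:3e:00:00:01"}

serial = FakeRPCClient()
update_arp_serial(serial, hosts, entry)

fanout = FakeRPCClient()
update_arp_fanout(fanout, hosts, entry)

print(serial.casts, fanout.casts)  # 100 1
```

With 100 agents hosting the router, the serial path sends 100 messages per port create, while the fanout path sends one.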
[Yahoo-eng-team] [Bug 1614446] [NEW] Angular feature enablement should be cleaned up
Public bug reported:

Angular feature enablement currently uses several different settings: Launch Instance has its own setting, Swift is toggled via UPDATE_HORIZON_CONFIG in the enabled/ files, and Images is updated via HORIZON_CONFIG. We should create a common setting for all features going forward.

** Affects: horizon
   Importance: Wishlist
   Assignee: Rob Cresswell (robcresswell)
   Status: Fix Released

** Tags: angularjs

** Changed in: horizon
   Assignee: (unassigned) => Rob Cresswell (robcresswell)

** Changed in: horizon
   Milestone: None => newton-3

** Changed in: horizon
   Importance: Undecided => Wishlist

** Changed in: horizon
   Status: New => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1614446

Title:
  Angular feature enablement should be cleaned up

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Angular feature enablement currently uses several different settings: Launch Instance has its own setting, Swift is toggled via UPDATE_HORIZON_CONFIG in the enabled/ files, and Images is updated via HORIZON_CONFIG. We should create a common setting for all features going forward.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1614446/+subscriptions
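A "common setting" could look something like the sketch below. Note this is hypothetical: `ANGULAR_FEATURES`, the feature names, and the helper are invented for illustration and are not an existing Horizon setting or API.

```python
# Hypothetical unified setting (sketch only; ANGULAR_FEATURES is not an
# existing Horizon setting): one dict and one lookup path, instead of a
# per-feature setting, UPDATE_HORIZON_CONFIG, and HORIZON_CONFIG edits.
ANGULAR_FEATURES = {
    "launch_instance": True,
    "swift_panel": True,
    "images_panel": False,
}

def is_angular_enabled(feature, default=False):
    """Single place every panel would check for its Angular toggle."""
    return ANGULAR_FEATURES.get(feature, default)

print(is_angular_enabled("images_panel"))  # False
```

The point of the sketch is the design choice: one dictionary keyed by feature name means enabled/ files and local_settings overrides all converge on the same lookup.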
[Yahoo-eng-team] [Bug 1614436] Re: Creation of loadbalancer fails with plug vip exception
** Project changed: neutron => octavia

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1614436

Title:
  Creation of loadbalancer fails with plug vip exception

Status in octavia:
  New

Bug description:
  Here is the scenario: I have two compute nodes with the following bridge mappings:

  Compute node 1:
  1. physnet3:br-hed0 (this is the octavia-mgt-network)
  2. physnet2:br-hed2

  Compute node 2:
  1. physnet3:br-hed0 (this is the octavia-mgt-network)
  2. physnet1:br-hed1
  3. physnet2:br-hed2

  Now if I create a loadbalancer with the VIP in physnet1, Nova schedules the amphora image on compute node 1. However, as there is no physnet1 mapping on compute node 1, Octavia fails to plug the amphora into the VIP network.

  Expected result: Octavia should internally check whether the availability zone on which Nova is scheduling the amphora image has a mapping for the required VIP network.

  Here are the VIP network details:

  stack@padawan-cp1-c1-m1-mgmt:~/scratch/ansible/next/hos/ansible$ neutron net-show net1
  +---------------------------+--------------------------------------+
  | Field                     | Value                                |
  +---------------------------+--------------------------------------+
  | admin_state_up            | True                                 |
  | availability_zone_hints   |                                      |
  | availability_zones        | nova                                 |
  | created_at                | 2016-07-29T03:45:02                  |
  | description               |                                      |
  | id                        | cd5a5e69-f810-4f08-ad9f-72f6184754af |
  | ipv4_address_scope        |                                      |
  | ipv6_address_scope        |                                      |
  | mtu                       | 1500                                 |
  | name                      | net1                                 |
  | provider:network_type     | vlan                                 |
  | provider:physical_network | physnet1                             |
  | provider:segmentation_id  | 1442                                 |
  | router:external           | False                                |
  | shared                    | False                                |
  | status                    | ACTIVE                               |
  | subnets                   | 115f7f23-68e2-4cba-9209-97d362612a7f |
  | tags                      |                                      |
  | tenant_id                 | 6b192dcb6a704f72b039d0552bec5e11     |
  | updated_at                | 2016-07-29T03:45:02                  |
  +---------------------------+--------------------------------------+
  stack@padawan-cp1-c1-m1-mgmt:~/scratch/ansible/next/hos/ansible$

  Here is the exception from octavia-worker.log:

  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/octavia/controller/queue/endpoint.py", line 45, in create_load_balancer
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher     self.worker.create_load_balancer(load_balancer_id)
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/octavia/controller/worker/controller_worker.py", line 322, in create_load_balancer
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher     post_lb_amp_assoc.run()
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 230, in run
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher     for _state in self.run_iter(timeout=timeout):
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 308, in run_iter
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher     failure.Failure.reraise_if_any(fails)
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/taskflow/types/failure.py", line 336, in reraise_if_any
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher     failures[0].reraise()
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/taskflow/types/failure.py", line 343, in reraise
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher     six.reraise(*self._exc_info)
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 82, in _execute_task
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher     result = task.execute(**arguments)
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/octavia/controller/worker/tasks/network_tasks.py", line 279, in execute
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher     loadbalancer.vip)
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 278, in plug_vip
  2016-
[Yahoo-eng-team] [Bug 1614443] [NEW] LBaaSv1: HAproxy scenario tests cleanup fail with a StaleDataError
Public bug reported:

Found this while working on bug 1613251.
An example of that error: http://logs.openstack.org/90/351490/10/gate/gate-neutron-lbaasv1-dsvm-api/fa4d806/console.html

** Affects: neutron
   Importance: Undecided
   Assignee: Nir Magnezi (nmagnezi)
   Status: In Progress

** Changed in: neutron
   Assignee: (unassigned) => Nir Magnezi (nmagnezi)

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1614443

Title:
  LBaaSv1: HAproxy scenario tests cleanup fail with a StaleDataError

Status in neutron:
  In Progress

Bug description:
  Found this while working on bug 1613251.
  An example of that error: http://logs.openstack.org/90/351490/10/gate/gate-neutron-lbaasv1-dsvm-api/fa4d806/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1614443/+subscriptions
[Yahoo-eng-team] [Bug 1614436] [NEW] Creation of loadbalancer fails with plug vip exception
Public bug reported:

Here is the scenario: I have two compute nodes with the following bridge mappings:

Compute node 1:
1. physnet3:br-hed0 (this is the octavia-mgt-network)
2. physnet2:br-hed2

Compute node 2:
1. physnet3:br-hed0 (this is the octavia-mgt-network)
2. physnet1:br-hed1
3. physnet2:br-hed2

Now if I create a loadbalancer with the VIP in physnet1, Nova schedules the amphora image on compute node 1. However, as there is no physnet1 mapping on compute node 1, Octavia fails to plug the amphora into the VIP network.

Expected result: Octavia should internally check whether the availability zone on which Nova is scheduling the amphora image has a mapping for the required VIP network.

Here are the VIP network details:

stack@padawan-cp1-c1-m1-mgmt:~/scratch/ansible/next/hos/ansible$ neutron net-show net1
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2016-07-29T03:45:02                  |
| description               |                                      |
| id                        | cd5a5e69-f810-4f08-ad9f-72f6184754af |
| ipv4_address_scope        |                                      |
| ipv6_address_scope        |                                      |
| mtu                       | 1500                                 |
| name                      | net1                                 |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | 1442                                 |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 115f7f23-68e2-4cba-9209-97d362612a7f |
| tags                      |                                      |
| tenant_id                 | 6b192dcb6a704f72b039d0552bec5e11     |
| updated_at                | 2016-07-29T03:45:02                  |
+---------------------------+--------------------------------------+
stack@padawan-cp1-c1-m1-mgmt:~/scratch/ansible/next/hos/ansible$

Here is the exception from octavia-worker.log:

2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/octavia/controller/queue/endpoint.py", line 45, in create_load_balancer
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher     self.worker.create_load_balancer(load_balancer_id)
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/octavia/controller/worker/controller_worker.py", line 322, in create_load_balancer
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher     post_lb_amp_assoc.run()
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 230, in run
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher     for _state in self.run_iter(timeout=timeout):
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 308, in run_iter
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher     failure.Failure.reraise_if_any(fails)
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/taskflow/types/failure.py", line 336, in reraise_if_any
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher     failures[0].reraise()
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/taskflow/types/failure.py", line 343, in reraise
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher     six.reraise(*self._exc_info)
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 82, in _execute_task
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher     result = task.execute(**arguments)
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/octavia/controller/worker/tasks/network_tasks.py", line 279, in execute
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher     loadbalancer.vip)
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 278, in plug_vip
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher     subnet.network_id)
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 93, in _plug_amphora_vip
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher     raise base.PlugVIPException(message)
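The reporter's "expected result" amounts to filtering candidate hosts by their bridge mappings before scheduling the amphora. A minimal sketch of that check, using the bug's own topology; the data structures are hypothetical, since neither Octavia nor Nova exposes mappings in exactly this form:

```python
# Sketch of the proposed pre-scheduling check (hypothetical data
# structures): only consider hosts whose bridge mappings include the
# physical network of the VIP.

# Per-host bridge mappings, as described in the bug report.
bridge_mappings = {
    "compute1": {"physnet3": "br-hed0", "physnet2": "br-hed2"},
    "compute2": {"physnet3": "br-hed0", "physnet1": "br-hed1",
                 "physnet2": "br-hed2"},
}

def hosts_for_vip(mappings, vip_physnet):
    """Hosts that can actually plug a VIP on the given physical network."""
    return sorted(h for h, m in mappings.items() if vip_physnet in m)

# The VIP network net1 lives on physnet1, so only compute2 qualifies;
# scheduling the amphora on compute1 is exactly the reported failure.
print(hosts_for_vip(bridge_mappings, "physnet1"))  # ['compute2']
```

Feeding such a host list into the scheduling request (for example as an availability-zone or host constraint) would avoid placing the amphora where the VIP can never be plugged.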
[Yahoo-eng-team] [Bug 1614432] [NEW] test/unit/api: needs converting to use mock instead of mox
Public bug reported:

Some unit tests in /test/unit/api/* still use mox, so we need to convert them to use mock instead.

** Affects: glance
   Importance: Undecided
   Assignee: Nam (namnh)
   Status: New

** Changed in: glance
   Assignee: (unassigned) => Nam (namnh)

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1614432

Title:
  test/unit/api: needs converting to use mock instead of mox

Status in Glance:
  New

Bug description:
  Some unit tests in /test/unit/api/* still use mox, so we need to convert them to use mock instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1614432/+subscriptions
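A typical mox-to-mock conversion looks like the sketch below. The `ImageRepo` class and the test are made up for illustration; they are not actual Glance code.

```python
# Illustrative mox -> mock conversion (ImageRepo and the test are
# invented; they are not actual Glance code).
import unittest
from unittest import mock


class ImageRepo(object):
    def get(self, image_id):
        raise NotImplementedError("talks to the database in real life")


# mox style (before): record/replay/verify.
#   self.mox.StubOutWithMock(repo, 'get')
#   repo.get('abc').AndReturn({'id': 'abc'})
#   self.mox.ReplayAll()
#   ... exercise code ...
#   self.mox.VerifyAll()

class TestImageRepo(unittest.TestCase):
    def test_get(self):
        repo = ImageRepo()
        # mock style (after): patch, exercise, assert -- no replay/verify.
        with mock.patch.object(repo, 'get',
                               return_value={'id': 'abc'}) as m:
            self.assertEqual({'id': 'abc'}, repo.get('abc'))
            m.assert_called_once_with('abc')


suite = unittest.TestLoader().loadTestsFromTestCase(TestImageRepo)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

The key difference is that mock asserts on recorded calls after the fact, while mox requires declaring every expected call up front and replaying, which is what makes the conversion mostly mechanical.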
[Yahoo-eng-team] [Bug 1614063] Re: live migration doesn't use the correct interface to transfer the data
It is already possible to get nova to use a different interface for live migration: just set live_migration_inbound_addr=IP-ADDR-OF-FASTER-NIC on the compute nodes.

** Changed in: nova
   Status: New => Incomplete

** Changed in: nova
   Status: Incomplete => Invalid

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1614063

Title:
  live migration doesn't use the correct interface to transfer the data

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  My compute nodes are attached to several networks (storage, admin, etc.). For each network I have a real or a virtual interface with an IP assigned. The DNS is properly configured, so I can `ping node1` or `ping storage.node1` and it resolves to the correct IP.

  I want to use the second network to transfer the data, so I:

  * set up libvirtd to listen on the correct interface (checked with netstat)
  * configured live_migration_uri in nova.conf
  * monitored the interfaces while running nova live-migration

  The migration works correctly, doing what I believe is a PEER2PEER migration, but the data is transferred via the normal interface. I can reproduce this by doing a live migration via virsh. After more checks I discovered that if I do not use the --migrate-uri parameter, libvirt asks the other node for its hostname to build the migrate_uri, and that hostname resolves via the slow interface. Using --migrate-uri and --listen-address (for the -incoming parameter) works at the libvirt level, so we need to somehow inject this parameter into migrateToURIx in the libvirt nova driver. I have a patch (attached - WIP) that addresses this issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1614063/+subscriptions
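The supported way to steer migration traffic, per the triager's comment, is per-node configuration rather than a code change. A sketch of the relevant nova.conf fragment on each compute node; the IP address is a placeholder for that node's address on the fast (storage) network:

```ini
# /etc/nova/nova.conf on each compute node.
# 192.0.2.11 is a placeholder: use this node's IP on the fast NIC.
[libvirt]
live_migration_inbound_addr = 192.0.2.11
```

With this set, other nodes migrating an instance to this host target the given address, so migration data flows over the faster interface without patching the libvirt driver.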