[Yahoo-eng-team] [Bug 1561858] [NEW] api tests use wrong extension aliases
Public bug reported:

auto_allocate -> auto-allocated-topology
rbac_policies -> rbac-policies

** Affects: neutron
     Importance: Undecided
     Assignee: YAMAMOTO Takashi (yamamoto)
         Status: In Progress

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561858

Title:
  api tests use wrong extension aliases

Status in neutron:
  In Progress

Bug description:
  auto_allocate -> auto-allocated-topology
  rbac_policies -> rbac-policies

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561858/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1561857] [NEW] NetworksIpAvailabilityIPv6Test is not skipped with network_feature_enabled.ipv6=false
Public bug reported:

api test NetworksIpAvailabilityIPv6Test is not skipped with
network_feature_enabled.ipv6=false.

** Affects: neutron
     Importance: Undecided
         Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561857

Title:
  NetworksIpAvailabilityIPv6Test is not skipped with
  network_feature_enabled.ipv6=false

Status in neutron:
  New

Bug description:
  api test NetworksIpAvailabilityIPv6Test is not skipped with
  network_feature_enabled.ipv6=false.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561857/+subscriptions
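The missing behaviour amounts to a class-level skip guard when the deployment has IPv6 disabled. The sketch below shows the general pattern with a stand-in config object; class and attribute names are illustrative, not the actual tempest/neutron test code.

```python
# Sketch of the kind of guard the test class is missing. The FakeConfig
# object stands in for tempest's CONF.network_feature_enabled; all names
# here are illustrative, not the real test framework API.
import unittest


class FakeConfig(object):
    ipv6 = False  # mirrors network_feature_enabled.ipv6=false


CONF = FakeConfig()


class NetworksIpAvailabilityIPv6Test(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        # The fix amounts to skipping the whole class when IPv6 is disabled.
        if not CONF.ipv6:
            raise unittest.SkipTest("IPv6 is disabled in this deployment")
        super(NetworksIpAvailabilityIPv6Test, cls).setUpClass()

    def test_list_ipv6_ip_availability(self):
        self.fail("must never run when ipv6 is disabled")


def run():
    """Run the class and return the unittest result object."""
    result = unittest.TestResult()
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(
        NetworksIpAvailabilityIPv6Test)
    suite.run(result)
    return result
```

With `ipv6 = False` the whole class is recorded as skipped and no test body executes, which is the behaviour the report says is missing.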
[Yahoo-eng-team] [Bug 1561856] [NEW] Request Mitaka release for networking-ofagent
Public bug reported:

Please release stable/mitaka branch of networking-ofagent.
This will be the last release of ofagent.

commit id: bf23655bfbde95535fc9c519d11087545983d29b
tag: 2.0.0

** Affects: networking-ofagent
     Importance: Undecided
         Status: New

** Affects: neutron
     Importance: Undecided
         Status: New

** Tags: release-subproject
** Tags added: release-subproject

** Also affects: networking-ofagent
   Importance: Undecided
       Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561856

Title:
  Request Mitaka release for networking-ofagent

Status in networking-ofagent:
  New
Status in neutron:
  New

Bug description:
  Please release stable/mitaka branch of networking-ofagent.
  This will be the last release of ofagent.

  commit id: bf23655bfbde95535fc9c519d11087545983d29b
  tag: 2.0.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-ofagent/+bug/1561856/+subscriptions
[Yahoo-eng-team] [Bug 1553935] Re: add interface doesn't select the first free IP from range
** Project changed: horizon => neutron

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1553935

Title:
  add interface doesn't select the first free IP from range

Status in neutron:
  New

Bug description:
  When adding a router interface to a network, the interface should get
  the first free IP address from the subnet (/24).

  Version-Release number of selected component (if applicable):

  How reproducible: 100%

  Steps to Reproduce:
  1. create router "test_router"
  2. create network "test_net"
  3. in test_net create subnet "test_subnet": network address
     10.0.0.0/24, IPv4, check "Disable gateway", click Create
  4. Network topology, click the router, Add interface, select
     "test_net: 10.0.0.0/24 (test_subnet)", leave IP address empty,
     submit the dialogue
  5. a red popup is displayed: "Error: Failed to add_interface: Bad
     router request: Subnet for router interface must have a gateway IP"
     - this is valid behaviour so far
  6. Networks - click the "test_net" row, edit subnet "test_subnet":
     uncheck "Disable gateway", set gateway IP 10.0.0.1, check "Enable
     DHCP" (already checked by default), allocation pool:
     10.0.0.2,10.0.0.254, hit the Save button
  7. Network topology, click router "test_router", Add interface, select
     "test_net: 10.0.0.0/24 (test_subnet)", leave IP address empty,
     submit the dialogue

  Actual results:
  Adding the interface tries to use the first IP address from the subnet
  range, but it is already in use as the gateway, so a red popup is
  displayed instead: "Error: Failed to add_interface: Unable to complete
  operation for network 490131be-13e9-49b8-b515-6d2dec8847da. The IP
  address 10.0.0.1 is in use."

  Expected results:
  The first unoccupied address is selected automatically.

  Additional info:
  If a new subnet is created with the gateway IP from the beginning, the
  router port is created successfully and it picks the same IP as is
  defined as the subnet gateway.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1553935/+subscriptions
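The expected behaviour ("the first unoccupied address is selected automatically") can be sketched as a small allocation helper: walk the subnet's host addresses and skip the gateway and anything already allocated. This is an illustrative sketch only, not neutron's actual IPAM code; the function name is hypothetical.

```python
# Illustrative sketch of picking the first unoccupied address for a router
# interface: skip the gateway IP and any already-allocated addresses.
# Not the actual neutron IPAM implementation.
import ipaddress


def first_free_ip(cidr, gateway_ip, allocated):
    """Return the first host address that is neither the gateway nor in use."""
    net = ipaddress.ip_network(cidr)
    taken = set(allocated) | {gateway_ip}
    for host in net.hosts():
        if str(host) not in taken:
            return str(host)
    raise RuntimeError("no free addresses left in %s" % cidr)
```

For the subnet in the report (10.0.0.0/24 with gateway 10.0.0.1), this would skip the gateway and hand out 10.0.0.2 instead of colliding with 10.0.0.1.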
[Yahoo-eng-team] [Bug 1561040] Re: RuntimeError while deleting linux bridge by linux bridge agent
Reviewed:  https://review.openstack.org/296537
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=16b2ffdfd85eece8fb57a98d10bf35ad617d235a
Submitter: Jenkins
Branch:    master

commit 16b2ffdfd85eece8fb57a98d10bf35ad617d235a
Author: venkata anil
Date:   Wed Mar 23 15:24:01 2016 +

    Ignore exception when deleting linux bridge if doesn't exist

    Linux bridge is not handling RuntimeError exception when it is
    trying to delete network's bridge, which is deleted in parallel by
    nova. Fullstack test has similar scenario, it creates network's
    bridge for agent and deletes the bridge after the test, like nova.
    Linux bridge agent has to ignore RuntimeError exception if the
    bridge doesn't exist.

    Closes-bug: #1561040
    Change-Id: I428384fd42181ff6bc33f29369a7ff5ec163b532

** Changed in: neutron
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561040

Title:
  RuntimeError while deleting linux bridge by linux bridge agent

Status in neutron:
  Fix Released

Bug description:
  http://logs.openstack.org/14/275614/7/check/gate-neutron-dsvm-fullstack/efae851/logs/TestLinuxBridgeConnectivitySameNetwork.test_connectivity_VLANs_/neutron-linuxbridge-agent--2016-03-23--04-07-30-395169.log.txt.gz

  Linux bridge is not handling RuntimeError exception when it is trying
  to delete network's bridge, which is deleted by nova in parallel.
  Fullstack test has similar scenario, it creates network's bridge for
  agent and deletes the bridge after the test, like nova. Linux bridge
  agent has to ignore RuntimeError exception if the bridge doesn't
  exist.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561040/+subscriptions
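The shape of the fix described in the commit message can be sketched as a wrapper that tolerates a concurrent delete: swallow the RuntimeError only when the bridge is in fact already gone. The helper names are illustrative, not the real linuxbridge agent API.

```python
# Minimal sketch of the fix: tolerate a concurrent bridge delete (e.g. by
# nova or a fullstack test) by ignoring the RuntimeError when the bridge no
# longer exists. Helper names are illustrative, not the agent's actual API.
import os


def bridge_exists(bridge_name, sys_root="/sys/class/net"):
    """A Linux network device is visible under /sys/class/net/<name>."""
    return os.path.exists(os.path.join(sys_root, bridge_name))


def delete_bridge(bridge_name, delete_cmd):
    """Delete a bridge, ignoring a RuntimeError if it is already gone."""
    try:
        delete_cmd(bridge_name)
    except RuntimeError:
        # Someone else may have deleted the bridge in parallel; only
        # re-raise if the bridge is actually still present.
        if bridge_exists(bridge_name):
            raise
```

The key design point is re-checking existence before re-raising: a RuntimeError on a bridge that still exists is a real failure, while one on a missing bridge is just the race described in the bug.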
[Yahoo-eng-team] [Bug 1561824] [NEW] [RFE] Enhance BGP Dynamic Routing driver with Quagga support
Public bug reported:

The current bgp-dynamic-routing implementation only supports the Ryu
BGP speaker, but sometimes we want to use Quagga as the BGP speaker
because:

1. Quagga has more of the features we want, such as:
   1.1) multiple-instance support, which is needed by [3]
   1.2) more flexible route filtering
2. Quagga is written in C, so it should have better performance.

[1] lists a comparison of all possible BGP speakers.

[1] BGPSpeakersComparison
    https://wiki.openstack.org/wiki/Neutron/DynamicRouting/BGPSpeakersComparison
[2] bgp-dynamic-routing
    https://blueprints.launchpad.net/neutron/+spec/bgp-dynamic-routing
[3] bgp-dragent-hosting-multiple-speakers
    https://bugs.launchpad.net/neutron/+bug/1528003

** Affects: neutron
     Importance: Undecided
         Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561824

Title:
  [RFE] Enhance BGP Dynamic Routing driver with Quagga support

Status in neutron:
  New

Bug description:
  The current bgp-dynamic-routing implementation only supports the Ryu
  BGP speaker, but sometimes we want to use Quagga as the BGP speaker
  because:

  1. Quagga has more of the features we want, such as:
     1.1) multiple-instance support, which is needed by [3]
     1.2) more flexible route filtering
  2. Quagga is written in C, so it should have better performance.

  [1] lists a comparison of all possible BGP speakers.

  [1] BGPSpeakersComparison
      https://wiki.openstack.org/wiki/Neutron/DynamicRouting/BGPSpeakersComparison
  [2] bgp-dynamic-routing
      https://blueprints.launchpad.net/neutron/+spec/bgp-dynamic-routing
  [3] bgp-dragent-hosting-multiple-speakers
      https://bugs.launchpad.net/neutron/+bug/1528003

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561824/+subscriptions
[Yahoo-eng-team] [Bug 1509312] Re: unable to use tenant network after kilo to liberty update due to port security extension
Reviewed:  https://review.openstack.org/294132
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=b0519cf0ada3b3d9b76f84948f9ad3c142fc50be
Submitter: Jenkins
Branch:    master

commit b0519cf0ada3b3d9b76f84948f9ad3c142fc50be
Author: Ihar Hrachyshka
Date:   Thu Mar 17 16:20:52 2016 +0100

    port security: gracefully handle resources with no bindings

    Resources could be created before the extension was enabled in the
    setup. In that case, no bindings are created for them, and we should
    gracefully return the default (True) value when extracting the value
    using the mixin; we should also create the binding model on an
    update request if there is no existing binding model for the
    resource.

    While at it, introduced a constant to store the default value for
    port security (True) and changed several tests to use the constant
    instead of extracting it from the extension resource map.

    Change-Id: I8607cdecdc16c5f94635c94e2f02700c732806eb
    Closes-Bug: #1509312

** Changed in: neutron
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1509312

Title:
  unable to use tenant network after kilo to liberty update due to port
  security extension

Status in neutron:
  Fix Released
Status in openstack-ansible:
  Confirmed
Status in openstack-ansible liberty series:
  Fix Released
Status in openstack-ansible trunk series:
  Confirmed

Bug description:
  After updating from kilo to liberty, all networks created in the kilo
  release are unusable in liberty. If I try to spawn a new instance with
  a port on a network created in kilo, I get the following error in
  nova-compute.log:

  BadRequest: Port does not have port security binding.

  I guess this has to do with the new port_security extension in the
  ml2 plugin. Using neutron DVR on Ubuntu 14.04.3!

  This is my first bug report, so sorry in advance for any mistakes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1509312/+subscriptions
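The "gracefully return default (True)" behaviour from the commit message can be sketched as a lookup with a fallback: a port that predates the extension has no binding row, so the mixin should treat port security as enabled rather than fail. Names below are illustrative stand-ins, not the actual neutron mixin code.

```python
# Sketch of the graceful-default behaviour from the fix: if a port predates
# the port-security extension and therefore has no binding row, treat port
# security as enabled (the documented default) instead of raising.
# The dict stands in for the DB binding table; names are illustrative.
DEFAULT_PORT_SECURITY_ENABLED = True


def get_port_security(port_bindings, port_id):
    """Look up a port's port-security flag, falling back to the default."""
    binding = port_bindings.get(port_id)
    if binding is None:
        # Resource created before the extension was enabled: no binding row.
        return DEFAULT_PORT_SECURITY_ENABLED
    return binding["port_security_enabled"]
```

A port created under kilo (no binding) then reads as secured-by-default instead of triggering the "Port does not have port security binding" error.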
[Yahoo-eng-team] [Bug 1561809] [NEW] The position of tooltip is incorrect in ng-create volume.
Public bug reported:

The ng-create volume dialog has 2 pie charts, and the position of the
tooltip for "Volume and Snapshot Quota" is incorrect. It is shown on
the chart for "Volume Quota".

** Affects: horizon
     Importance: Undecided
     Assignee: Kenji Ishii (ken-ishii)
         Status: New

** Changed in: horizon
     Assignee: (unassigned) => Kenji Ishii (ken-ishii)

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1561809

Title:
  The position of tooltip is incorrect in ng-create volume.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The ng-create volume dialog has 2 pie charts, and the position of the
  tooltip for "Volume and Snapshot Quota" is incorrect. It is shown on
  the chart for "Volume Quota".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1561809/+subscriptions
[Yahoo-eng-team] [Bug 1561184] Re: Common utils: remove deprecated methods
Reviewed:  https://review.openstack.org/296495
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=9a70c57507d844092c8730e676643f8321677903
Submitter: Jenkins
Branch:    master

commit 9a70c57507d844092c8730e676643f8321677903
Author: Brian Haley
Date:   Wed Mar 23 10:41:19 2016 -0400

    Common utils: remove deprecated methods

    The following have been removed from use in neutron/common/utils.py.
    The commit where these were added is
    8022adb7342b09886f53c91c12d0b37986fbf35c:

    * read_cached_file
    * find_config_file
    * get_keystone_url

    TrivialFix
    Closes-bug: #1561184
    Change-Id: If9cbb41eec9ab20b4dc11bb10794d90c731e6239

** Changed in: neutron
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561184

Title:
  Common utils: remove deprecated methods

Status in neutron:
  Fix Released

Bug description:
  Tracker for removing the deprecated methods in
  neutron/common/utils.py. The commit where these were added is
  8022adb7342b09886f53c91c12d0b37986fbf35c:

  * read_cached_file
  * find_config_file
  * get_keystone_url

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561184/+subscriptions
[Yahoo-eng-team] [Bug 1561796] [NEW] ironic driver does not support ssl cafile
Public bug reported:

Even though Ironic's python client supports SSL encrypted connections
to the ironic service, and securing intra-service connections is a
recommended practice, the nova.virt.Ironic driver currently lacks an
option to specify a custom CA Certificate for validating the SSL
connection to the Ironic service.

On the other hand, other OpenStack services which Nova connects to
(e.g. Glance, Neutron...) have support for this via a service-specific
"cafile" config option.

** Affects: nova
     Importance: Undecided
     Assignee: Devananda van der Veen (devananda)
         Status: In Progress

** Tags: ironic security
** Tags added: ironic
** Tags added: security

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1561796

Title:
  ironic driver does not support ssl cafile

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Even though Ironic's python client supports SSL encrypted connections
  to the ironic service, and securing intra-service connections is a
  recommended practice, the nova.virt.Ironic driver currently lacks an
  option to specify a custom CA Certificate for validating the SSL
  connection to the Ironic service. On the other hand, other OpenStack
  services which Nova connects to (e.g. Glance, Neutron...) have
  support for this via a service-specific "cafile" config option.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1561796/+subscriptions
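If the driver gains an option analogous to the per-service "cafile" options mentioned above, the deployment-side configuration might look like the following. This is a hypothetical sketch: the `[ironic] cafile` option name is assumed by analogy with the other services, since the bug reports it does not exist yet.

```ini
# nova.conf -- hypothetical fragment, mirroring the per-service "cafile"
# pattern described in the bug; the cafile option name is an assumption.
[ironic]
api_endpoint = https://ironic.example.com:6385/v1
# CA bundle used to verify the SSL certificate presented by the ironic
# service (assumed option, by analogy with [glance]/[neutron])
cafile = /etc/ssl/certs/internal-ca.pem
```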
[Yahoo-eng-team] [Bug 1561763] [NEW] "Clear Selection" is unlocalized
Public bug reported:

Project > Object Store > Containers

The "Clear Selection" button label is unlocalized even though it is
translated in zanata. It exists in the file
openstack_dashboard/locale/djangojs as the string:

  Clear Selection {$ oc.numSelected $}

** Affects: horizon
     Importance: Undecided
         Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1561763

Title:
  "Clear Selection" is unlocalized

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Project > Object Store > Containers

  The "Clear Selection" button label is unlocalized even though it is
  translated in zanata. It exists in the file
  openstack_dashboard/locale/djangojs as the string:

    Clear Selection {$ oc.numSelected $}

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1561763/+subscriptions
[Yahoo-eng-team] [Bug 1561761] [NEW] Unlocalized text shown in Create Container window
Public bug reported:

Project > Object Store > Containers > Create Container

The following text appears unlocalized. It exists in the pot files and
is already translated in zanata, but is not shown localized in Horizon.

File: openstack_dashboard/locale/django and
openstack_dashboard/locale/djangojs

Source text:

  A container is a storage compartment for your data and provides a way
  for you to organize your data. You can think of a container as a
  folder in Windows ® or a directory in UNIX ®. The primary difference
  between a container and these other file system concepts is that
  containers cannot be nested. You can, however, create an unlimited
  number of containers within your account. Data must be stored in a
  container so you must have at least one container defined in your
  account prior to uploading data.

** Affects: horizon
     Importance: Undecided
         Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1561761

Title:
  Unlocalized text shown in Create Container window

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Project > Object Store > Containers > Create Container

  The following text appears unlocalized. It exists in the pot files
  and is already translated in zanata, but is not shown localized in
  Horizon.

  File: openstack_dashboard/locale/django and
  openstack_dashboard/locale/djangojs

  Source text:

    A container is a storage compartment for your data and provides a
    way for you to organize your data. You can think of a container as
    a folder in Windows ® or a directory in UNIX ®. The primary
    difference between a container and these other file system concepts
    is that containers cannot be nested. You can, however, create an
    unlimited number of containers within your account. Data must be
    stored in a container so you must have at least one container
    defined in your account prior to uploading data.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1561761/+subscriptions
[Yahoo-eng-team] [Bug 1482271] Re: Race condition in multithreaded Apache/WSGI setup
Reviewed:  https://review.openstack.org/222173
Committed: https://git.openstack.org/cgit/openstack/keystone/commit/?id=691d497885cf4d8b39bb6ddb384b7c027bb52f95
Submitter: Jenkins
Branch:    master

commit 691d497885cf4d8b39bb6ddb384b7c027bb52f95
Author: Boris Bobrov
Date:   Thu Sep 3 16:05:55 2015 +0500

    Move region configuration to a critical section

    Cache initialization performed during request handling after the
    Apache wsgi module starts raises the exception
    region.RegionAlreadyConfigured on a race condition in multithreaded
    mode.

    Change-Id: I65f85aedd5b087499b889540417b9502e050ce7c
    Closes-bug: 1482271

** Changed in: keystone
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1482271

Title:
  Race condition in multithreaded Apache/WSGI setup

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  When configured as an Apache WSGI module, a race condition is possible
  during keystone cache initialization:
  https://github.com/openstack/keystone/blob/a597a86b854215835a4d54885daeb161d7b0efb8/keystone/common/kvs/core.py#L240

  The operation raises the exception region.RegionAlreadyConfigured.
  This is a result of the race condition involving the global
  'application' variable being initialized several times (once per
  thread). application is required to be global according to the Paste
  Deploy documentation: http://pythonpaste.org/deploy/

  The Apache mod_wsgi documentation suggests protecting global objects
  with thread locks:
  http://code.google.com/p/modwsgi/wiki/ProcessesAndThreading#Building_A_Portable_Application

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1482271/+subscriptions
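The "critical section" fix the commit describes is the classic thread-lock pattern the mod_wsgi documentation recommends: guard one-time configuration with a lock so concurrent WSGI threads cannot both run the configure step. The sketch below uses a stand-in Region class, not keystone's actual dogpile region.

```python
# Sketch of the critical-section fix: double-checked locking around one-time
# region configuration. Region is a stand-in that mimics dogpile's
# "already configured" error; this is not keystone's actual code.
import threading


class Region(object):
    def __init__(self):
        self.is_configured = False

    def configure(self):
        if self.is_configured:
            # Mirrors region.RegionAlreadyConfigured behaviour.
            raise RuntimeError("region already configured")
        self.is_configured = True


_region_lock = threading.Lock()


def ensure_configured(region):
    # Cheap unlocked check first, then re-check under the lock so only
    # one thread ever calls configure().
    if not region.is_configured:
        with _region_lock:
            if not region.is_configured:
                region.configure()
    return region
```

Without the lock, two threads can both see `is_configured == False` and both call `configure()`, and the second call raises, which is exactly the failure mode in the bug report.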
[Yahoo-eng-team] [Bug 1528894] Re: Native ovsdb implementation not working
Reviewed:  https://review.openstack.org/297214
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=d130245967fa4d31fd54eaede38c3fdb42d51916
Submitter: Jenkins
Branch:    master

commit d130245967fa4d31fd54eaede38c3fdb42d51916
Author: Hynek Mlnarik
Date:   Thu Mar 24 16:22:17 2016 +0100

    Fix setting peer to bridge interfaces

    The OVSDB implementation refuses to set the options:peer column
    value, as there is no such column in the Interface table. The
    correct way is to set the 'options' column value to a map containing
    the key 'peer', as already used in ovs_lib.

    Change-Id: Ib5e956f425b36f54cda017c91ac71d9d7ee9747c
    Closes-Bug: 1528894

** Changed in: neutron
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1528894

Title:
  Native ovsdb implementation not working

Status in neutron:
  Fix Released

Bug description:
  When trying to use the new native OVSDB provider, connectivity never
  comes up because the db_set operation fails to change the patch ports
  from "nonexistent-peer" to the correct peer, and therefore the
  bridges are never linked together.

  https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1119

  The system must be running the latest Liberty release with the
  python-openvswitch package installed and the following command
  executed:

  # ovs-vsctl set-manager ptcp:6640:127.0.0.1

  Once that's all done, the openvswitch agent configuration should be
  changed to the following:

  [OVS]
  ovsdb_interface = ovsdb

  Restarting the OVS agent will set up everything but leave your
  network in a failed state because the correct patch ports aren't
  updated:

  # ovs-vsctl show
      Bridge br-ex
          Port br-ex
              Interface br-ex
                  type: internal
          Port "em1"
              Interface "em1"
          Port phy-br-ex
              Interface phy-br-ex
                  type: patch
                  options: {peer=nonexistent-peer}
      Bridge br-int
          fail_mode: secure
          Port "qvo25d28228-9c"
              tag: 1
              Interface "qvo25d28228-9c"
          ...
          Port int-br-ex
              Interface int-br-ex
                  type: patch
                  options: {peer=nonexistent-peer}

  Reverting to the regular old forked implementation works with no
  problems.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1528894/+subscriptions
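The distinction the fix hinges on is between a plain column and a key inside a map-typed column: the Interface table has no `options:peer` column, only an `options` map that can hold a `peer` key. A small sketch of building the corresponding `ovs-vsctl set` arguments makes the difference concrete; the helper is illustrative, not neutron's ovs_lib code.

```python
# Sketch of the column-vs-map-key distinction behind this fix. With the CLI
# the correct form is:
#   ovs-vsctl set Interface int-br-ex options:peer=phy-br-ex
# i.e. the value goes into the 'options' map under the key 'peer', because
# no standalone "options:peer" column exists in the Interface table.
# This helper only builds argument lists; it is illustrative, not ovs_lib.


def db_set_args(table, record, column, value, key=None):
    """Build ovs-vsctl 'set' arguments for a plain column or a map key."""
    if key is None:
        target = "%s=%s" % (column, value)            # plain column
    else:
        target = "%s:%s=%s" % (column, key, value)    # key in a map column
    return ["set", table, record, target]
```

Passing `key="peer"` produces the map form that works; omitting it would attempt to write a nonexistent `options`-valued plain column, which is the failure the bug describes for the native OVSDB path.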
[Yahoo-eng-team] [Bug 1443539] Re: Russian translation of string "Additional Routes" is truncated
Reviewed:  https://review.openstack.org/289346
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=f9df264a227af8e40452485150757fb20d2a7fac
Submitter: Jenkins
Branch:    master

commit f9df264a227af8e40452485150757fb20d2a7fac
Author: Ilya Alekseyev
Date:   Thu Mar 10 12:54:15 2016 +

    Fixes truncated string in details overview table.

    In the current solution a tooltip with the full string is shown on
    hover. It uses a template approach. This patch affects all themes.

    Change-Id: I60162a2c313d08b9ba44484636c72955f8816c1e
    Closes-Bug: 1443526
    Closes-Bug: 1443539

** Changed in: horizon
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1443539

Title:
  Russian translation of string "Additional Routes" is truncated

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  On Admin->System->Network->[Detail]->[Subnet Detail] the string
  "Additional Routes" is truncated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1443539/+subscriptions
[Yahoo-eng-team] [Bug 1561738] [NEW] Unlocalized string found on Developer tab
Public bug reported:

On the Developer tab, there is a message which appears unlocalized,
even though the translation is completed and imported.

File: openstack_dashboard/locale/djangojs

String: To view source code, hover over a section, then click the
button in the top right of that section.

** Affects: horizon
     Importance: Undecided
         Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1561738

Title:
  Unlocalized string found on Developer tab

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  On the Developer tab, there is a message which appears unlocalized,
  even though the translation is completed and imported.

  File: openstack_dashboard/locale/djangojs

  String: To view source code, hover over a section, then click the
  button in the top right of that section.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1561738/+subscriptions
[Yahoo-eng-team] [Bug 1443526] Re: Russian translation of string 'Admin State' is truncated
Reviewed:  https://review.openstack.org/289346
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=f9df264a227af8e40452485150757fb20d2a7fac
Submitter: Jenkins
Branch:    master

commit f9df264a227af8e40452485150757fb20d2a7fac
Author: Ilya Alekseyev
Date:   Thu Mar 10 12:54:15 2016 +

    Fixes truncated string in details overview table.

    In the current solution a tooltip with the full string is shown on
    hover. It uses a template approach. This patch affects all themes.

    Change-Id: I60162a2c313d08b9ba44484636c72955f8816c1e
    Closes-Bug: 1443526
    Closes-Bug: 1443539

** Changed in: horizon
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1443526

Title:
  Russian translation of string 'Admin State' is truncated

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  On Admin->System->Network->[Detail] the Russian translation of
  "Admin State" is truncated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1443526/+subscriptions
[Yahoo-eng-team] [Bug 1554519] Re: separate device owner flag for HA router interface port
Reviewed:  https://review.openstack.org/291651
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=ceebc9f465ea038c2eaac26cfa0a18d3f5cdf7c6
Submitter: Jenkins
Branch:    master

commit ceebc9f465ea038c2eaac26cfa0a18d3f5cdf7c6
Author: venkata anil
Date:   Tue Mar 22 10:03:58 2016 +

    use separate device owner for HA router interface

    Currently an HA router interface port uses DEVICE_OWNER_ROUTER_INTF
    as its device owner (like a normal router interface), so to check
    whether a port is an HA router interface port we have to perform a
    DB operation. The neutron server in many places may need to check
    whether a port is an HA router interface port and perform a
    different set of operations, and then it has to access the DB for
    this. If this information is instead available as the port's device
    owner, we can avoid the DB access every time.

    Closes-bug: #1554519
    Change-Id: I322c392529c04aca2448fd957a35f4908b323449

** Changed in: neutron
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554519

Title:
  separate device owner flag for HA router interface port

Status in neutron:
  Fix Released

Bug description:
  Currently an HA router interface port uses DEVICE_OWNER_ROUTER_INTF
  as its device owner (like a normal router interface), so to check
  whether a port is an HA router interface port we have to perform a DB
  operation. The neutron server in many places (functions in plugin.py,
  rpc.py, mech_driver.py [1]) may need to check whether a port is an HA
  router interface port and perform a different set of operations, and
  then it has to access the DB for this. If this information is instead
  available as the port's device owner, we can avoid the DB access
  every time.

  [1] ml2_db.is_ha_port(session, port) in the files below:
  https://review.openstack.org/#/c/282874/3/neutron/plugins/ml2/plugin.py
  https://review.openstack.org/#/c/282874/3/neutron/plugins/ml2/drivers/l2pop/mech_driver.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1554519/+subscriptions
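The payoff of the change is that the HA check collapses from a DB query (`ml2_db.is_ha_port(session, port)`) to a string comparison on the port dict. The sketch below reproduces the constants loosely from neutron's `DEVICE_OWNER_*` convention; the helper name is illustrative.

```python
# Sketch of the optimisation: once HA router interfaces carry their own
# device_owner value, "is this an HA port?" is answerable from the port
# dict alone, with no DB round-trip. Constant values follow neutron's
# DEVICE_OWNER_* convention but are reproduced here loosely.
DEVICE_OWNER_ROUTER_INTF = "network:router_interface"
DEVICE_OWNER_HA_REPLICATED_INT = "network:ha_router_replicated_interface"


def is_ha_router_port(port):
    """No DB access needed: the device_owner field already tells us."""
    return port.get("device_owner") == DEVICE_OWNER_HA_REPLICATED_INT
```

Any code path that previously needed a session and a query (plugin.py, rpc.py, mech_driver.py) can now make this decision from data it already has in hand.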
[Yahoo-eng-team] [Bug 1561695] [NEW] neutron-dhcp-agent generates thousands of interfaces on a failure
Public bug reported:

I ran into slowness on a new deploy of mitaka-rc1 code with neutron. I
had ~13,000 tap devices that were created by dhcp-agent. The neutron
database does not have these ports. As far as I can tell, neutron is
no longer aware of, or cares about, those ports, but they remain on
the node (and in OpenVSwitch, so a reboot wouldn't clear them).

I do not know how the initial failure happened, but to reproduce this
you can do the following:

1. Stop the dhcp agent (and anything using the network namespace).
2. ip netns del qdhcp-8e5d7a66-df5d-4e36-8446-3c2148e53f02
3. touch /run/netns/qdhcp-8e5d7a66-df5d-4e36-8446-3c2148e53f02
4. Start the dhcp agent and watch it continually try to create (and
   then fail to clean up) tap interfaces.

Over the course of ~4 hours this issue generated 13,000 interfaces and
4GB of logs (debug was turned on). How the initial issue came about I
do not know, but it did happen in normal usage.

I believe the proper fix here would be to _always_ clean up tap
devices, even on failures, but I am not familiar enough with the
neutron code to fix this.
The output of `ip netns` when it has an invalid namespace looks like this: # ip netns RTNETLINK answers: Invalid argument RTNETLINK answers: Invalid argument qdhcp-8e5d7a66-df5d-4e36-8446-3c2148e53f02 The stack trace in neutron-dhcp-agent is: 2016-03-24 18:42:12.165 1 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', '--columns=ofport', 'list', 'Interface', 'tap42983a07-e0'] create_process /var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py:84 2016-03-24 18:42:12.275 1 DEBUG neutron.agent.linux.utils [-] Exit code: 0 execute /var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py:142 2016-03-24 18:42:12.276 1 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'link', 'set', 'tap42983a07-e0', 'address', 'fa:16:3e:79:1b:0a'] create_process /var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py:84 2016-03-24 18:42:12.384 1 DEBUG neutron.agent.linux.utils [-] Exit code: 0 execute /var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py:142 2016-03-24 18:42:12.385 1 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'link', 'set', 'tap42983a07-e0', 'mtu', '9000'] create_process /var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py:84 2016-03-24 18:42:12.495 1 DEBUG neutron.agent.linux.utils [-] Exit code: 0 execute /var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py:142 2016-03-24 18:42:12.496 1 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', '-o', 'netns', 'list'] create_process /var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py:84 2016-03-24 
18:42:12.604 1 DEBUG neutron.agent.linux.utils [-] Exit code: 0 execute /var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py:142 2016-03-24 18:42:12.605 1 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'link', 'set', 'tap42983a07-e0', 'netns', 'qdhcp-8e5d7a66-df5d-4e36-8446-3c2148e53f02'] create_process /var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py:84 2016-03-24 18:42:12.709 1 ERROR neutron.agent.linux.utils [-] Exit code: 2; Stdin: ; Stdout: ; Stderr: RTNETLINK answers: Invalid argument 2016-03-24 18:42:12.710 1 ERROR neutron.agent.linux.dhcp [-] Unable to plug DHCP port for network 8e5d7a66-df5d-4e36-8446-3c2148e53f02. Releasing port. 2016-03-24 18:42:12.710 1 ERROR neutron.agent.linux.dhcp Traceback (most recent call last): 2016-03-24 18:42:12.710 1 ERROR neutron.agent.linux.dhcp File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py", line 1234, in setup 2016-03-24 18:42:12.710 1 ERROR neutron.agent.linux.dhcp mtu=network.get('mtu')) 2016-03-24 18:42:12.710 1 ERROR neutron.agent.linux.dhcp File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/agent/linux/interface.py", line 248, in plug 2016-03-24 18:42:12.710 1 ERROR neutron.agent.linux.dhcp bridge, namespace, prefix, mtu) 2016-03-24 18:42:12.710 1 ERROR neutron.agent.linux.dhcp File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/agent/linux/interface.py", line 346, in plug_new 2016-03-24 18:42:12.710 1 ERROR neutron.agent.linux.dhcp namespace_obj.add_device_to_namespace(ns_dev) 2016-03-24 18:42:12.710 1 ERROR neutron.agent.linux.dhcp File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 216, in add_device_to_namespace 2016-03-24
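The reporter's suggested fix -- always cleaning up tap devices even when plugging fails -- is the standard undo-on-exception pattern. A minimal sketch; the helper names (create/delete tap device, the namespace check) are hypothetical stand-ins, not Neutron's real interface-driver API:

```python
# Sketch of cleanup-on-failure: if moving the device into the namespace
# fails (the "RTNETLINK answers: Invalid argument" step in the log above),
# the tap device we just created is removed before the error propagates,
# so repeated retries cannot accumulate thousands of orphaned interfaces.
created_devices = set()

def create_tap_device(name):
    created_devices.add(name)          # stand-in for the real creation step

def delete_tap_device(name):
    created_devices.discard(name)      # stand-in for the real cleanup step

def plug(device_name, namespace):
    create_tap_device(device_name)
    try:
        # Stand-in for add_device_to_namespace, the step that fails here.
        if namespace.startswith("stale-"):
            raise RuntimeError("RTNETLINK answers: Invalid argument")
    except Exception:
        delete_tap_device(device_name)  # always undo before re-raising
        raise

try:
    plug("tap42983a07-e0", "stale-qdhcp-8e5d7a66")
except RuntimeError:
    pass
print(created_devices)  # set() -- no leaked tap device
```

Without the except-and-undo clause, each failed retry would leave one more device behind, which matches the observed growth to ~13,000 interfaces.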
[Yahoo-eng-team] [Bug 1561612] [NEW] Can't edit users "Unable to update the user"
Public bug reported: On Identity->Users it is not possible to change user info with the "Edit" action. After editing, the UI shows "Unable to update the user". It looks like keystone is returning a Bad Request error; I haven't located it in the logs yet. ** Affects: horizon Importance: Critical Status: Confirmed ** Tags: mitaka-backport-potential ** Changed in: horizon Importance: Undecided => Critical -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1561612 Title: Can't edit users "Unable to update the user" Status in OpenStack Dashboard (Horizon): Confirmed Bug description: On Identity->Users it is not possible to change user info with the "Edit" action. After editing, the UI shows "Unable to update the user". It looks like keystone is returning a Bad Request error; I haven't located it in the logs yet. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1561612/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1549479] Re: DocImpact - db_sync doesn't create default domain
Addressed by https://review.openstack.org/#/c/296764/. ** Changed in: openstack-manuals Status: In Progress => Fix Released ** Changed in: openstack-manuals Assignee: Xing Chen (chen-xing) => Matt Kassawara (ionosphere80) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1549479 Title: DocImpact - db_sync doesn't create default domain Status in OpenStack Identity (keystone): Invalid Status in openstack-manuals: Fix Released Bug description: https://review.openstack.org/282042 Dear bug triager. This bug was created since a commit was marked with DOCIMPACT. Your project "openstack/keystone" is set up so that we directly report the documentation bugs against it. If this needs changing, the docimpact-group option needs to be added for the project. You can ask the OpenStack infra team (#openstack-infra on freenode) for help if you need to. commit a7b7fea7a3fe7677981fbf9bac5121bc15601163 Author: Brant Knudson Date: Thu Feb 18 14:08:36 2016 -0600 db_sync doesn't create default domain The reason db_sync needed to create the default domain is because we needed a domain for existing v2 users. Since the migrations don't add the domain_id to users anymore there's no need to create the default domain. DocImpact -- The install guide should be updated to say to use keystone-manage bootstrap or to create the default domain if the deployment is going to support v2. Change-Id: I65860fe989ac2456b73bcc12fd02643564b24574 To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1549479/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1561578] [NEW] lbaasv1 healthmonitor status is unused
Public bug reported: lbaasv1 healthmonitor resource has status. but it seems unused. ** Affects: neutron Importance: Undecided Assignee: YAMAMOTO Takashi (yamamoto) Status: In Progress ** Description changed: - lbaasv1 healthmonitor has status. but it seems unused. + lbaasv1 healthmonitor resource has status. + but it seems unused. -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1561578 Title: lbaasv1 healthmonitor status is unused Status in neutron: In Progress Bug description: lbaasv1 healthmonitor resource has status. but it seems unused. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1561578/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1561558] [NEW] Untranslated help text found in Launch Instance window
Public bug reported: Project > Instances > Launch Instance > Source Project > Instances > Launch Instance > Security Groups Found the following untranslated help text in the Launch Instance window. [Source tab] Image: This option uses an image to boot the instance. Instance Snapshot: This option uses an instance snapshot to boot the instance. Image (with Create New Volume checked): This options uses an image to boot the instance, and creates a new volume to persist instance data. You can specify volume size and whether to delete the volume on deletion of the instance. Volume: This option uses a volume that already exists. It does not create a new volume. You can choose to delete the volume on deletion of the instance. Note: when selecting Volume, you can only launch one instance. Volume Snapshot: This option uses a volume snapshot to boot the instance, and creates a new volume to persist instance data. You can choose to delete the volume on deletion of the instance. [Security Groups tab] Security groups define a set of IP filter rules that determine how network traffic flows to and from an instance. Users can add additional rules to an existing security group to further define the access options for an instance. To create additional rules, go to the Compute | Access & Security view, then find the security group and click Manage Rules. Translations are already completed in Zanata, and other recent translations have been imported to the test environment. ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1561558 Title: Untranslated help text found in Launch Instance window Status in OpenStack Dashboard (Horizon): New Bug description: Project > Instances > Launch Instance > Source Project > Instances > Launch Instance > Security Groups Found the following untranslated help text in the Launch Instance window. [Source tab] Image: This option uses an image to boot the instance. Instance Snapshot: This option uses an instance snapshot to boot the instance. Image (with Create New Volume checked): This options uses an image to boot the instance, and creates a new volume to persist instance data. You can specify volume size and whether to delete the volume on deletion of the instance. Volume: This option uses a volume that already exists. It does not create a new volume. You can choose to delete the volume on deletion of the instance. Note: when selecting Volume, you can only launch one instance. Volume Snapshot: This option uses a volume snapshot to boot the instance, and creates a new volume to persist instance data. You can choose to delete the volume on deletion of the instance. [Security Groups tab] Security groups define a set of IP filter rules that determine how network traffic flows to and from an instance. Users can add additional rules to an existing security group to further define the access options for an instance. To create additional rules, go to the Compute | Access & Security view, then find the security group and click Manage Rules. Translations are already completed in Zanata, and other recent translations have been imported to the test environment. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1561558/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1555275] Re: Tags set changes on delete
Reviewed: https://review.openstack.org/290741 Committed: https://git.openstack.org/cgit/openstack/glance/commit/?id=def8cfdeef4e3d086ad045a063421a61468e3cb6 Submitter: Jenkins Branch: master commit def8cfdeef4e3d086ad045a063421a61468e3cb6 Author: Niall Bunting Date: Wed Mar 9 18:13:44 2016 + Copy the size of the tag set As the tags are being removed, the current tag can be removed out from under the iteration when running tests, causing a RuntimeError to be thrown. This change makes a temporary list whilst the tags are being deleted. Co-Authored-By: Tom Cocozzello Change-Id: I3cac9060b87449503fba3995d10f8d4e074bffb8 Closes-Bug: 1555275 ** Changed in: glance Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1555275 Title: Tags set changes on delete Status in Glance: Fix Released Bug description: When working on a new test I ran into this; the set should not fail due to the tags being deleted.
== FAIL: glance.tests.unit.v2.test_images_resource.TestImagesController.test_delete_deactivated_images_anyone tags: worker-0 -- Traceback (most recent call last): File "glance/tests/unit/v2/test_images_resource.py", line 2034, in test_delete_deactivated_images_anyone self.controller.delete(request, UUID1) File "glance/common/utils.py", line 362, in wrapped return func(self, req, *args, **kwargs) File "glance/api/v2/images.py", line 236, in delete image_repo.remove(image) File "glance/domain/proxy.py", line 104, in remove result = self.base.remove(base_item) File "glance/notifier.py", line 487, in remove super(ImageRepoProxy, self).remove(image) File "glance/domain/proxy.py", line 104, in remove result = self.base.remove(base_item) File "glance/domain/proxy.py", line 104, in remove result = self.base.remove(base_item) File "glance/domain/proxy.py", line 104, in remove result = self.base.remove(base_item) File "glance/domain/proxy.py", line 104, in remove result = self.base.remove(base_item) File "glance/db/__init__.py", line 294, in remove new_values = self.db_api.image_destroy(self.context, image.image_id) File "glance/db/simple/api.py", line 64, in wrapped output = func(*args, **kwargs) File "glance/db/simple/api.py", line 761, in image_destroy for tag in tags: RuntimeError: Set changed size during iteration To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1555275/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
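The traceback ends in CPython's guard against mutating a set while iterating over it, which is exactly what the image_destroy loop was doing; the committed fix iterates over a temporary list instead. A self-contained demonstration of both the failure and the fix:

```python
# Removing elements from a set during iteration raises RuntimeError
# ("Set changed size during iteration") -- the failure seen in
# glance/db/simple/api.py's image_destroy tag loop.
tags = {"ping", "pong", "bang"}
error = None
try:
    for tag in tags:
        tags.remove(tag)        # mutates the set mid-iteration
except RuntimeError as exc:
    error = exc
print(error)

# The fix: snapshot the set into a list, then mutate the set freely.
tags = {"ping", "pong", "bang"}
for tag in list(tags):
    tags.remove(tag)
print(tags)  # set()
```

Copying into a list costs O(n) extra memory for the duration of the loop, which is trivial for image tags and makes the deletion order-independent and safe.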
[Yahoo-eng-team] [Bug 1561550] [NEW] Untranslated strings found in Launch Instance window
Public bug reported: Project > Instances > Launch Instance > Details Strings "Instance Name" and "Count" are not localized in the UI. According to Motoki-san (amotoki), those strings are not included in the pot files under the horizon project, even though they are labeled with "translate" in the directives and should be considered translatable. It could be due to a bug in the tool that inserts linebreaks, possibly preventing the strings from being extracted. https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/static/dashboard/project/workflow/launch-instance/details/details.html#L9 ** Affects: horizon Importance: Undecided Status: New ** Attachment added: "launchInstance1-ja.png" https://bugs.launchpad.net/bugs/1561550/+attachment/4609909/+files/launchInstance1-ja.png -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1561550 Title: Untranslated strings found in Launch Instance window Status in OpenStack Dashboard (Horizon): New Bug description: Project > Instances > Launch Instance > Details Strings "Instance Name" and "Count" are not localized in the UI. According to Motoki-san (amotoki), those strings are not included in the pot files under the horizon project, even though they are labeled with "translate" in the directives and should be considered translatable. It could be due to a bug in the tool that inserts linebreaks, possibly preventing the strings from being extracted.
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/static/dashboard/project/workflow/launch-instance/details/details.html#L9 To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1561550/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1561540] [NEW] Neutron dhcp agent not able to provide dhcp ip to VM
Public bug reported: VMs are not getting an IP from DHCP. Occasionally a VM may get one. Pre-conditions: OpenStack (Liberty, HA, OpenDaylight) built via OPNFV JOID (Canonical) deployment automation that leverages MAAS/Juju. Whoami: JOID user. This is my first reported neutron bug. ubuntu@juma:~$ neutron --version 3.1.0 Perceived severity: this is a blocker for my project. Error seen in dhcp-agent.log: 2016-03-24 02:04:52.245 23148 ERROR oslo.messaging._drivers.impl_rabbit [req-e5037584-e5c6-4dc5-961b-6300b372f60b - - - - -] AMQP server on 127.0.0.1:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 2 seconds. /Error 2016-03-24 02:15:34.895 6657 ERROR neutron.agent.dhcp.agent message = self.waiters.get(msg_id, timeout=timeout) 2016-03-24 02:15:34.895 6657 ERROR neutron.agent.dhcp.agent File "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 223, in get 2016-03-24 02:15:34.895 6657 ERROR neutron.agent.dhcp.agent 'to message ID %s' % msg_id) 2016-03-24 02:15:34.895 6657 ERROR neutron.agent.dhcp.agent MessagingTimeout: Timed out waiting for a reply to message ID 80fcb2ba9c444b1592ef9e7955cea0ba 2016-03-24 02:15:34.895 6657 ERROR neutron.agent.dhcp.agent 2016-03-24 02:15:34.896 6657 WARNING oslo.service.loopingcall [req-1009548a-c589-43b4-8a25-cc6d265d9dea - - - - -] Function 'neutron.agent.dhcp.agent.DhcpAgentWithStateReport._report_state' run outlasted interval by 30.01 sec 2016-03-24 02:15:34.969 6657 ERROR neutron.agent.dhcp.agent [req-45baaca9-a59d-4b2e-9b28-b0761996e6a9 - - - - -] Failed reporting state!
2016-03-24 02:15:34.969 6657 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): 2016-03-24 02:15:34.969 6657 ERROR neutron.agent.dhcp.agent File "/usr/lib/python2.7/dist-packages/neutron/agent/dhcp/agent.py", line 571, in _report_state 2016-03-24 02:15:34.969 6657 ERROR neutron.agent.dhcp.agent ctx, self.agent_state, True) 2016-03-24 02:15:34.969 6657 ERROR neutron.agent.dhcp.agent File "/usr/lib/python2.7/dist-packages/neutron/agent/rpc.py", line 86, in report_state 2016-03-24 02:15:34.969 6657 ERROR neutron.agent.dhcp.agent return method(context, 'report_state', **kwargs) 2016-03-24 02:15:34.969 6657 ERROR neutron.agent.dhcp.agent File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 158, in call 2016-03-24 02:15:34.969 6657 ERROR neutron.agent.dhcp.agent retry=self.retry) 2016-03-24 02:15:34.969 6657 ERROR neutron.agent.dhcp.agent File "/usr/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, in _send 2016-03-24 02:15:34.969 6657 ERROR neutron.agent.dhcp.agent timeout=timeout, retry=retry) 2016-03-24 02:15:34.969 6657 ERROR neutron.agent.dhcp.agent File "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 431, in send 2016-03-24 02:15:34.969 6657 ERROR neutron.agent.dhcp.agent retry=retry) 2016-03-24 02:15:34.969 6657 ERROR neutron.agent.dhcp.agent File "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 422, in _send 2016-03-24 02:15:34.969 6657 ERROR neutron.agent.dhcp.agent raise result 2016-03-24 02:15:34.969 6657 ERROR neutron.agent.dhcp.agent RemoteError: Remote error: DBError (pymysql.err.InternalError) (1054, u"Unknown column 'agents.load' in 'field list'") [SQL: u'SELECT agents.id AS agents_id, agents.agent_type AS agents_agent_type, agents.`binary` AS agents_binary, agents.topic AS agents_topic, agents.host AS agents_host, agents.admin_state_up AS agents_admin_state_up, agents.created_at AS agents_created_at, agents.started_at AS agents_started_at, 
agents.heartbeat_timestamp AS agents_heartbeat_timestamp, agents.description AS agents_description, agents.configurations AS agents_configurations, agents.`load` AS agents_load \nFROM agents \nWHERE agents.agent_type = %s AND agents.host = %s'] [parameters: (u'DHCP agent', u'node4-control')] 2016-03-24 02:15:34.969 6657 ERROR neutron.agent.dhcp.agent [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply\nexecutor_callback))\n', u' File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch\nexecutor_callback)\n', u' File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 129, in _do_dispatch\nresult = func(ctxt, **new_args)\n', u' File "/usr/lib/python2.7/dist-packages/neutron/db/agents_db.py", line 317, in report_state\nreturn self.plugin.create_or_update_agent(context, agent_state)\n', u' File "/usr/lib/python2.7/dist-packages/neutron/db/agents_db.py", line 265, in create_or_update_agent\nreturn self._create_or_update_agent(context, agent)\n', u' File "/usr/lib/python2.7/dist-packages/neutron/db/agents_db.py", line 238, in _create_or_update_agent\ncontext, agent_state[\'agent_type\'], agent_state[\'host\'])\n',
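The root error in the log -- DBError "Unknown column 'agents.load' in 'field list'" -- means the running code expects a column that the deployed database schema does not have, which is the classic signature of a missed schema migration (in a Neutron deployment the usual remedy would be running the pending migrations, e.g. via neutron-db-manage, though that is an inference from the error, not something the report confirms). The same failure mode can be illustrated generically with sqlite3; the table and column names below mirror the log but the schema is a made-up stand-in:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Deployed schema is one migration behind: no `load` column on agents.
conn.execute("CREATE TABLE agents (id TEXT, agent_type TEXT, host TEXT)")
try:
    # The application-side query expects the newer schema.
    conn.execute("SELECT agents.id, agents.agent_type, agents.load FROM agents")
except sqlite3.OperationalError as exc:
    print("schema drift:", exc)
```

The query itself is well-formed; only the mismatch between the code's expected schema and the database's actual schema makes it fail, which is why the DHCP agent's state reports keep erroring until the schema is brought up to date.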
[Yahoo-eng-team] [Bug 1560860] Re: mellanox infiniband SR-IOV(ib_hostdev vif) detach port fails
** Also affects: nova/mitaka Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1560860 Title: mellanox infiniband SR-IOV(ib_hostdev vif) detach port fails Status in OpenStack Compute (nova): In Progress Status in OpenStack Compute (nova) mitaka series: New Bug description: Detaching an SR-IOV direct port causes an exception.
# neutron port-create --binding:vnic_type=direct private
# nova boot --flavor m1.small --image cirros-mellanox-x86_64-disk-ib --nic port-id=a247d89e-dae5-4d65-b414-e7bf3a26bfd1 vm1
# nova suspend vm1
logs: https://review.openstack.org/#/c/286668 http://144.76.193.39/ci-artifacts/286668/3/Neutron-Networking-MLNX-ML2/ Traceback message 2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: cdf2e34d-bc2e-4edb-aff7-516b97487730] Traceback (most recent call last): 2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: cdf2e34d-bc2e-4edb-aff7-516b97487730] File "/opt/stack/nova/nova/compute/manager.py", line 6515, in _error_out_instance_on_exception 2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: cdf2e34d-bc2e-4edb-aff7-516b97487730] yield 2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: cdf2e34d-bc2e-4edb-aff7-516b97487730] File "/opt/stack/nova/nova/compute/manager.py", line 4172, in suspend_instance 2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: cdf2e34d-bc2e-4edb-aff7-516b97487730] self.driver.suspend(context, instance) 2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: cdf2e34d-bc2e-4edb-aff7-516b97487730] File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2638, in suspend 2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: cdf2e34d-bc2e-4edb-aff7-516b97487730] self._detach_sriov_ports(context, instance, guest) 2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance:
cdf2e34d-bc2e-4edb-aff7-516b97487730] File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 3425, in _detach_sriov_ports 2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: cdf2e34d-bc2e-4edb-aff7-516b97487730] if vif['vnic_type'] in network_model.VNIC_TYPES_SRIOV 2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: cdf2e34d-bc2e-4edb-aff7-516b97487730] AttributeError: 'LibvirtConfigGuestHostdevPCI' object has no attribute 'source_dev' 2016-03-03 04:45:42.775 1801 ERROR nova.compute.manager [instance: cdf2e34d-bc2e-4edb-aff7-516b97487730] To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1560860/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
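The AttributeError arises because _detach_sriov_ports assumed every SR-IOV-related guest device object carries a source_dev attribute, while the ib_hostdev path produces a LibvirtConfigGuestHostdevPCI object that does not. The general defensive pattern is to filter devices before touching the attribute; the class names below are illustrative stand-ins, not nova's real libvirt config classes:

```python
# Illustrative stand-ins: only "interface"-style devices carry source_dev,
# while plain PCI hostdevs (the ib_hostdev case) do not -- so filter on the
# attribute (or on type) before reading it, instead of assuming it exists.
class GuestInterface:
    def __init__(self, source_dev):
        self.source_dev = source_dev

class GuestHostdevPCI:      # models LibvirtConfigGuestHostdevPCI
    pass

devices = [GuestInterface("enp3s0f0"), GuestHostdevPCI()]

# Naive version: [d.source_dev for d in devices] raises AttributeError,
# just like the traceback above. Guarded version:
source_devs = [d.source_dev for d in devices if hasattr(d, "source_dev")]
print(source_devs)  # ['enp3s0f0']
```

Filtering by isinstance against the known interface class would work equally well here; the point is that heterogeneous device lists must not be treated as a single shape.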
[Yahoo-eng-team] [Bug 1465315] Re: [dashboard] descriptions of fields (for example license/requirements) should be limited in UI.
Reviewed: https://review.openstack.org/293094 Committed: https://git.openstack.org/cgit/openstack/murano-dashboard/commit/?id=c2e0102c4bc471bed21fbd69dcea6aefab0ef768 Submitter: Jenkins Branch: master commit c2e0102c4bc471bed21fbd69dcea6aefab0ef768 Author: Omar Shykhkerimov Date: Tue Mar 15 20:58:13 2016 +0200 Hide extra text in descriptions and allow expanding Previously every description was shown in full, which caused trouble with large descriptions. Now every description is wrapped in a div tag; a JS handler provides code to create and control links to expand and collapse these fields. Change-Id: I3bb64382b48d1435dea20bccea96836d3d2015da Closes-Bug: #1465315 ** Changed in: murano Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1465315 Title: [dashboard] descriptions of fields (for example license/requirements) should be limited in UI. Status in OpenStack Dashboard (Horizon): Invalid Status in Murano: Fix Released Bug description: To reproduce: add a long license description. https://www.dropbox.com/s/40d86ct7ww18w5z/Screenshot%202015-06-15%2017.37.08.png?dl=0 This looks cluttered and makes little sense. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1465315/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1561501] [NEW] Attempting to open Launch Instance during polling causes multiple workflows
Public bug reported: If Launch Instance is triggered while a table is polling, weirdness occurs. To recreate: 1) Launch an Instance on Project > Instances 2) Immediately go to Launch another (actual time limit will depend on API speed) 3) The second Launch Instance will likely look fine; however, when you close it and open again, you'll see two, three, or more modals at once. This is easy to see when you hit cancel and have to click through multiple modals. The lag also increases drastically. ** Affects: horizon Importance: High Status: New ** Tags: angularjs mitaka-backport-potential ** Tags added: angularjs mitaka-backport-potential ** Changed in: horizon Importance: Undecided => High ** Changed in: horizon Milestone: None => newton-1 -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1561501 Title: Attempting to open Launch Instance during polling causes multiple workflows Status in OpenStack Dashboard (Horizon): New Bug description: If Launch Instance is triggered while a table is polling, weirdness occurs. To recreate: 1) Launch an Instance on Project > Instances 2) Immediately go to Launch another (actual time limit will depend on API speed) 3) The second Launch Instance will likely look fine; however, when you close it and open again, you'll see two, three, or more modals at once. This is easy to see when you hit cancel and have to click through multiple modals. The lag also increases drastically. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1561501/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1552487] Re: Add tag mechanism for network resources
Reviewed: https://review.openstack.org/290314 Committed: https://git.openstack.org/cgit/openstack/api-site/commit/?id=a2c6433bef15cb338f0b727c5ad2886ff0edf15d Submitter: Jenkins Branch:master commit a2c6433bef15cb338f0b727c5ad2886ff0edf15d Author: Hirofumi Ichihara Date: Wed Mar 9 16:55:13 2016 +0900 Add API Documentation for Neutron Tag API Extension Change-Id: Idadf1937e5ec6db6b3b54c77f68691d56f3ca788 Closes-Bug: #1552487 ** Changed in: openstack-api-site Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1552487 Title: Add tag mechanism for network resources Status in neutron: Invalid Status in openstack-api-site: Fix Released Status in openstack-manuals: Fix Released Bug description: https://review.openstack.org/273881 Dear bug triager. This bug was created since a commit was marked with DOCIMPACT. Your project "openstack/neutron" is set up so that we directly report the documentation bugs against it. If this needs changing, the docimpact-group option needs to be added for the project. You can ask the OpenStack infra team (#openstack-infra on freenode) for help if you need to. commit ec1457dd7503626c917031ce4a16a366fe70c7bb Author: Hirofumi Ichihara Date: Tue Mar 1 11:05:56 2016 +0900 Add tag mechanism for network resources Introduce a generic mechanism to allow the user to set tags on Neutron resources. This patch adds the function for "network" resource with tags. 
APIImpact DocImpact: allow users to set tags on network resources Partial-Implements: blueprint add-tags-to-core-resources Related-Bug: #1489291 Change-Id: I4d9e80d2c46d07fc22de8015eac4bd3dacf4c03a To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1552487/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1561490] [NEW] While adding second interface, i am able to ssh with only the secondary interface, when both primary and secondary interfaces are alive. wanted to check if this a
Public bug reported: Steps to reproduce 1) Launch an instance with one interface (eth0) 2) ssh in to instance 3) Add interface to an existing instance (eth1) 4) Able to ssh with only secondary interface (eth1) not the primary ** Affects: nova Importance: Undecided Status: New ** Summary changed: - While adding second interface, i am able to ssh with only the secondary interface, when both primary and secondary intefaces are alive. is it a expected behaviuor? + While adding second interface, i am able to ssh with only the secondary interface, when both primary and secondary interfaces are alive. is it a expected behaviour? ** Summary changed: - While adding second interface, i am able to ssh with only the secondary interface, when both primary and secondary interfaces are alive. is it a expected behaviour? + While adding second interface, i am able to ssh with only the secondary interface, when both primary and secondary interfaces are alive. wanted to check if this a expected behaviour? ** Project changed: horizon => nova -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1561490 Title: While adding second interface, i am able to ssh with only the secondary interface, when both primary and secondary interfaces are alive. wanted to check if this a expected behaviour? Status in OpenStack Compute (nova): New Bug description: Steps to reproduce 1) Launch an instance with one interface (eth0) 2) ssh in to instance 3) Add interface to an existing instance (eth1) 4) Able to ssh with only secondary interface (eth1) not the primary To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1561490/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1561378] Re: l3-agent-list-hosting-router failing with «Field names must be unique!» in interactive Neutron CLI
** Also affects: python-neutronclient
   Importance: Undecided
   Status: New
** No longer affects: neutron
** Changed in: python-neutronclient
   Importance: Undecided => Medium

https://bugs.launchpad.net/bugs/1561378

Title:
  l3-agent-list-hosting-router failing with «Field names must be
  unique!» in interactive Neutron CLI

Status in python-neutronclient:
  New

Bug description:
  The "l3-agent-list-hosting-router" command fails with the rather
  unhelpful error message «Field names must be unique!» when used from
  the interactive Neutron CLI. The first invocation in a single CLI
  session never fails, but every subsequent one does. Demonstrated
  below:

  $ neutron
  (neutron) l3-agent-list-hosting-router 6fecbc74-a7f1-431a-a83d-3ab59b9c7faf
  +--------------------------------------+------------------------------+----------------+-------+----------+
  | id                                   | host                         | admin_state_up | alive | ha_state |
  +--------------------------------------+------------------------------+----------------+-------+----------+
  | 5da51291-2ec4-4cf2-8f4a-35581b17b81c | net01-osl2.os-cloud.acme.com | True           | :-)   | active   |
  | ea6b71bd-5447-4ff0-87f4-58a681344c50 | net02-osl2.os-cloud.acme.com | True           | :-)   | active   |
  | 7e5b7b98-ba7e-4b63-a86f-2e6a9f293c98 | net01-osl3.os-cloud.acme.com | True           | :-)   | standby  |
  +--------------------------------------+------------------------------+----------------+-------+----------+
  (neutron) l3-agent-list-hosting-router 7dd291f7-057a-4dff-8475-e8715c980f82
  Field names must be unique!
  (neutron) l3-agent-list-hosting-router 6fecbc74-a7f1-431a-a83d-3ab59b9c7faf
  Field names must be unique!

  Exiting the CLI session and starting a new one lets me use the
  "l3-agent-list-hosting-router" command successfully again, but only
  once (after which it starts failing again in the same manner).

  The problem occurs only in an interactive Neutron CLI session; if I
  instead run the exact same sequence of commands directly from the
  shell as command-line arguments to /usr/bin/neutron, it works fine:

  $ neutron l3-agent-list-hosting-router 6fecbc74-a7f1-431a-a83d-3ab59b9c7faf
  [normal output snipped]
  $ neutron l3-agent-list-hosting-router 7dd291f7-057a-4dff-8475-e8715c980f82
  [normal output snipped]
  $ neutron l3-agent-list-hosting-router 6fecbc74-a7f1-431a-a83d-3ab59b9c7faf
  [normal output snipped]

  I am using RHEL7, openstack-neutron-7.0.1-1.el7.noarch.rpm.

  Tore

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1561378/+subscriptions
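The error string comes from the uniqueness check that table formatters apply to their column names. One plausible mechanism (an assumption; the report does not identify the root cause) is that the interactive shell reuses a formatter object across commands, so field names accumulate and the second invocation sees duplicates. The stand-in illustration below mimics that failure mode; it is not the actual neutronclient code:

```python
# Illustration only: a minimal formatter with the same "Field names must
# be unique!" check, showing how reusing one object across CLI invocations
# accumulates duplicate column names and trips the check on the 2nd call.
class TableFormatter:
    def __init__(self):
        self.field_names = []

    def add_fields(self, names):
        self.field_names.extend(names)
        if len(self.field_names) != len(set(self.field_names)):
            raise ValueError("Field names must be unique!")

fmt = TableFormatter()
fmt.add_fields(["id", "host", "admin_state_up", "alive", "ha_state"])  # 1st command: OK
try:
    # 2nd command reusing the same formatter object, as an interactive
    # session might do: the same names are appended again.
    fmt.add_fields(["id", "host", "admin_state_up", "alive", "ha_state"])
    failed = False
except ValueError as exc:
    failed = True
    print(exc)  # -> Field names must be unique!
```

This would also explain why non-interactive invocations are unaffected: each `/usr/bin/neutron` run starts a fresh process, and thus a fresh formatter.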
[Yahoo-eng-team] [Bug 1561479] [NEW] AngularJS workflow errors cause [object Object] to appear at the bottom of the modal
Public bug reported:

Exceptions raised by actions in the AngularJS workflow make
"[object Object]" appear at the bottom of the workflow modal, beneath
the navigation buttons. This mainly occurs when input passes validation
but the operation fails for another reason, such as a back-end service
failure.

** Affects: horizon
   Importance: Medium
   Status: New
** Tags: angularjs
** Attachment added: "Screen Shot 2016-03-24 at 11.41.49.png"
   https://bugs.launchpad.net/bugs/1561479/+attachment/4609739/+files/Screen%20Shot%202016-03-24%20at%2011.41.49.png
** Changed in: horizon
   Importance: Undecided => Medium
** Changed in: horizon
   Milestone: None => newton-1

https://bugs.launchpad.net/bugs/1561479

Title:
  AngularJS workflow errors cause [object Object] to appear at the
  bottom of the modal

Status in OpenStack Dashboard (Horizon):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1561479/+subscriptions
[Yahoo-eng-team] [Bug 1201266] Re: 'is_public' filter should be handled when nova calls glance via V2
I added cinder too:
https://github.com/openstack/cinder/blob/master/cinder/image/glance.py#L258

** Also affects: cinder
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1201266

Title:
  'is_public' filter should be handled when nova calls glance via V2

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  During an image-list call via Nova, nova appends 'is_public: None' to
  the filters to ensure that private images are not filtered out. The
  glance V2 API should parse this value into something useful, e.g.
  returning True and preserving the default behaviour of returning all
  public images (as is done in V1). Currently an image-list to V2 via
  Nova returns an empty list.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1201266/+subscriptions
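For background, the Images v2 API replaced the boolean `is_public` filter with a string-valued `visibility` filter. A minimal sketch of the translation the clients need is below; the function name and the choice to simply drop `is_public: None` are assumptions for illustration, not the actual nova/cinder implementation:

```python
# Sketch: translate a glance-v1-style 'is_public' filter into a v2
# 'visibility' filter. 'is_public: None' historically meant "do not
# filter on visibility", so it is dropped rather than passed through.
def translate_filters_v1_to_v2(filters):
    v2_filters = dict(filters)
    if "is_public" in v2_filters:
        is_public = v2_filters.pop("is_public")
        if is_public is not None:
            v2_filters["visibility"] = "public" if is_public else "private"
        # is_public=None -> omit the filter entirely, so glance v2
        # returns every image visible to the caller (the v1 behaviour).
    return v2_filters

print(translate_filters_v1_to_v2({"is_public": None, "name": "cirros"}))
```

Passing the raw `is_public: None` through to a v2 endpoint, by contrast, matches nothing, which is consistent with the empty list described in the bug.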
[Yahoo-eng-team] [Bug 1561337] Re: Unable to launch instance
*** This bug is a duplicate of bug 1534273 ***
    https://bugs.launchpad.net/bugs/1534273

@Arun: It is very likely that this is a configuration issue, and it
sounds like a duplicate of bug 1534273. This log entry in particular
points to it:

  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions
  BadRequest: Expecting to find username or userId in
  passwordCredentials - the server could not comply with the request
  since it is either malformed or otherwise incorrect. The client is
  assumed to be in error. (HTTP 400)
  (Request-ID: req-3fac70af-c83e-457d-9acb-d8e969f0a05c)

Please double-check that the Keystone authentication settings in
"/etc/nova/nova.conf" are correct [1].

References:
[1] http://docs.openstack.org/liberty/install-guide-ubuntu/nova-controller-install.html

** This bug has been marked a duplicate of bug 1534273
   Keystone configuration options for nova.conf missing from Redhat/CentOS install guide

https://bugs.launchpad.net/bugs/1561337

Title:
  Unable to launch instance

Status in OpenStack Compute (nova):
  New

Bug description:
  I installed OpenStack Liberty using the official guide for Ubuntu
  14.04. I am not able to launch an instance.
  Here's the log from nova-api.log:

  2016-03-24 10:12:53.412 14413 INFO nova.osapi_compute.wsgi.server [req-ec45686b-ad24-4949-83bb-42b3ed336b94 55db47d40b91474399879d1003883561 b3338b63521d4fb7a87011108e9b1107 - - -] 192.168.1.213 "GET /v2/b3338b63521d4fb7a87011108e9b1107/os-quota-sets/b3338b63521d4fb7a87011108e9b1107 HTTP/1.1" status: 200 len: 568 time: 0.0969541
  2016-03-24 10:12:57.869 14412 INFO nova.osapi_compute.wsgi.server [req-dcc90aa0-618f-4328-ace0-0e50d3a7bb53 55db47d40b91474399879d1003883561 b3338b63521d4fb7a87011108e9b1107 - - -] 192.168.1.213 "GET /v2/b3338b63521d4fb7a87011108e9b1107/servers/detail?all_tenants=True&tenant_id=b3338b63521d4fb7a87011108e9b1107 HTTP/1.1" status: 200 len: 211 time: 3.3184321
  2016-03-24 10:12:59.651 14412 INFO nova.osapi_compute.wsgi.server [req-95cb7922-c703-4036-ba13-005dff79741e 55db47d40b91474399879d1003883561 b3338b63521d4fb7a87011108e9b1107 - - -] 192.168.1.213 "GET /v2/b3338b63521d4fb7a87011108e9b1107/os-keypairs HTTP/1.1" status: 200 len: 212 time: 0.0333679
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions [req-2efac7ae-b1ae-475c-bb03-ab7f28b8ac3d 55db47d40b91474399879d1003883561 b3338b63521d4fb7a87011108e9b1107 - - -] Unexpected exception in API method
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions Traceback (most recent call last):
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/extensions.py", line 478, in wrapped
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions     return f(*args, **kwargs)
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in wrapper
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in wrapper
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/servers.py", line 611, in create
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions     **create_kwargs)
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/hooks.py", line 149, in inner
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions     rv = f(*args, **kwargs)
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 1581, in create
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions     check_server_group_quota=check_server_group_quota)
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 1181, in _create_instance
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions     auto_disk_config, reservation_id, max_count)
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 955, in _validate_and_build_base_options
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions     pci_request_info, requested_networks)
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File "/usr/lib/py
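For anyone hitting the same BadRequest, the Liberty install guide's nova.conf authentication section has roughly the following shape. This is a hedged excerpt, not a drop-in config: the `controller` hostnames and `NOVA_PASS` are placeholders, and the linked guide [1] above is the authoritative reference:

```ini
# Keystone authentication settings for /etc/nova/nova.conf (Liberty).
# The BadRequest above occurs when the username/password credentials
# in this section are missing or malformed.
[DEFAULT]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = NOVA_PASS
```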
[Yahoo-eng-team] [Bug 1561378] [NEW] l3-agent-list-hosting-router failing with «Field names must be unique!» in interactive Neutron CLI
Public bug reported:

The "l3-agent-list-hosting-router" command fails with the rather
unhelpful error message «Field names must be unique!» when used from
the interactive Neutron CLI. The first invocation in a single CLI
session never fails, but every subsequent one does. Demonstrated below:

$ neutron
(neutron) l3-agent-list-hosting-router 6fecbc74-a7f1-431a-a83d-3ab59b9c7faf
+--------------------------------------+------------------------------+----------------+-------+----------+
| id                                   | host                         | admin_state_up | alive | ha_state |
+--------------------------------------+------------------------------+----------------+-------+----------+
| 5da51291-2ec4-4cf2-8f4a-35581b17b81c | net01-osl2.os-cloud.acme.com | True           | :-)   | active   |
| ea6b71bd-5447-4ff0-87f4-58a681344c50 | net02-osl2.os-cloud.acme.com | True           | :-)   | active   |
| 7e5b7b98-ba7e-4b63-a86f-2e6a9f293c98 | net01-osl3.os-cloud.acme.com | True           | :-)   | standby  |
+--------------------------------------+------------------------------+----------------+-------+----------+
(neutron) l3-agent-list-hosting-router 7dd291f7-057a-4dff-8475-e8715c980f82
Field names must be unique!
(neutron) l3-agent-list-hosting-router 6fecbc74-a7f1-431a-a83d-3ab59b9c7faf
Field names must be unique!

Exiting the CLI session and starting a new one lets me use the
"l3-agent-list-hosting-router" command successfully again, but only
once (after which it starts failing again in the same manner).
The problem occurs only in an interactive Neutron CLI session; if I
instead run the exact same sequence of commands directly from the shell
as command-line arguments to /usr/bin/neutron, it works fine:

$ neutron l3-agent-list-hosting-router 6fecbc74-a7f1-431a-a83d-3ab59b9c7faf
[normal output snipped]
$ neutron l3-agent-list-hosting-router 7dd291f7-057a-4dff-8475-e8715c980f82
[normal output snipped]
$ neutron l3-agent-list-hosting-router 6fecbc74-a7f1-431a-a83d-3ab59b9c7faf
[normal output snipped]

I am using RHEL7, openstack-neutron-7.0.1-1.el7.noarch.rpm.

Tore

** Affects: neutron
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1561378

Title:
  l3-agent-list-hosting-router failing with «Field names must be
  unique!» in interactive Neutron CLI

Status in neutron:
  New

Bug description:
  The "l3-agent-list-hosting-router" command fails with the rather
  unhelpful error message «Field names must be unique!» when used from
  the interactive Neutron CLI. The first invocation in a single CLI
  session never fails, but every subsequent one does.

  Demonstrated below:

  $ neutron
  (neutron) l3-agent-list-hosting-router 6fecbc74-a7f1-431a-a83d-3ab59b9c7faf
  +--------------------------------------+------------------------------+----------------+-------+----------+
  | id                                   | host                         | admin_state_up | alive | ha_state |
  +--------------------------------------+------------------------------+----------------+-------+----------+
  | 5da51291-2ec4-4cf2-8f4a-35581b17b81c | net01-osl2.os-cloud.acme.com | True           | :-)   | active   |
  | ea6b71bd-5447-4ff0-87f4-58a681344c50 | net02-osl2.os-cloud.acme.com | True           | :-)   | active   |
  | 7e5b7b98-ba7e-4b63-a86f-2e6a9f293c98 | net01-osl3.os-cloud.acme.com | True           | :-)   | standby  |
  +---