[Yahoo-eng-team] [Bug 1574462] Re: No prompt message to the user when router-gateway-clear and routes existed
The message "the nexthop is not connected with router" might be misleading, but what this means is that the 'nexthop' must be on the same (subnet) CIDR and for that you don't need the router's external gateway to reach it. ** Changed in: neutron Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1574462 Title: No prompt message to the user when router-gateway-clear and routes existed Status in neutron: Invalid Bug description: When neutron router-gateway-clear ,routes(nexthop and gw_port ip are in the same cidr) are not be deleted,and no any error reports to the user that the gw_port is required by routes. See the following procedure to reproduce this issue: [root@opencos2 ~(keystone_admin)]# neutron router-gateway-set r1 ext-net1 Set gateway for router r1 [root@opencos2 ~(keystone_admin)]# neutron router-port-list r1 +--+--+---+--+ | id | name | mac_address | fixed_ips | +--+--+---+--+ | af7bf274-92cb-46b8-a0fc-aaf8c0da40d6 | | fa:16:3e:01:1f:3a | {"subnet_id": "15c110ec-62c5-44aa-9f80-0b455ced331c", "ip_address": "12.0.0.81"} | +--+--+---+--+ [root@opencos2 ~(keystone_admin)]# [root@opencos2 ~(keystone_admin)]# neutron router-update r1 --routes type=dict list=true destination=188.163.0.0/24,nexthop=12.0.0.5 Updated router: r1 [root@opencos2 ~(keystone_admin)]# [root@opencos2 ~(keystone_admin)]# neutron router-gateway-clear r1 Removed gateway from router r1 [root@opencos2 ~(keystone_admin)]# [root@opencos2 ~(keystone_admin)]# [root@opencos2 ~(keystone_admin)]# neutron router-show r1 +---+--+ | Field | Value | +---+--+ | admin_state_up| True | | distributed | False | | external_gateway_info | | | ha| False | | id| ba5bcb86-8c5a-4a3a-be48-5bf09288154b | | name | r1 | | routes| {"destination": "188.163.0.0/24", "nexthop": "12.0.0.5"} | | status| ACTIVE | | tenant_id | be58eaec789d44f296a65f96b944a9f5 | +---+--+ [root@opencos2 ~(keystone_admin)]# To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1574462/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1575316] Re: Functional job failures in pecan tests
Reviewed: https://review.openstack.org/310306 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=b047e3c28a0f3b714302dec17dcb0c138b758d90 Submitter: Jenkins Branch:master commit b047e3c28a0f3b714302dec17dcb0c138b758d90 Author: Kevin Benton Date: Sun Apr 24 16:46:54 2016 -0700 Pass through setattr to deprecated things Without setattr defined, setting an attr will end up setting a new attribute on the deprecated instance rather than changing my_globals. This means that other functions in my_globals that have a reference to the original will have a different view than external users that get the new attribute. Closes-Bug: #1575316 Change-Id: I7d1f00b5649399cb6db5213fa5efc7a924cf30a8 ** Changed in: neutron Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1575316 Title: Functional job failures in pecan tests Status in neutron: Fix Released Bug description: Examples: 2016-04-26 14:40:22.082 | 2016-04-26 14:40:22.054 | == 2016-04-26 14:40:22.083 | 2016-04-26 14:40:22.056 | Failed 2 tests - output below: 2016-04-26 14:40:22.083 | 2016-04-26 14:40:22.057 | == 2016-04-26 14:40:22.083 | 2016-04-26 14:40:22.059 | 2016-04-26 14:40:22.083 | 2016-04-26 14:40:22.061 | neutron.tests.functional.pecan_wsgi.test_controllers.TestQuotasController.test_get 2016-04-26 14:40:22.083 | 2016-04-26 14:40:22.063 | -- 2016-04-26 14:40:22.083 | 2016-04-26 14:40:22.065 | 2016-04-26 14:40:22.084 | 2016-04-26 14:40:22.066 | Captured pythonlogging: 2016-04-26 14:40:22.084 | 2016-04-26 14:40:22.068 | ~~~ 2016-04-26 14:40:22.085 | 2016-04-26 14:40:22.070 | INFO [neutron.plugins.ml2.managers] Configured type driver names: ['local', 'flat', 'vlan', 'gre', 'vxlan', 'geneve'] 2016-04-26 14:40:22.087 | 2016-04-26 14:40:22.072 | INFO [neutron.plugins.ml2.drivers.type_flat] Arbitrary flat physical_network names allowed 2016-04-26 14:40:22.088 | 2016-04-26 14:40:22.074 | INFO [neutron.plugins.ml2.drivers.type_vlan] Network VLAN ranges: {} 2016-04-26 14:40:22.090 | 2016-04-26 14:40:22.076 | INFO [neutron.plugins.ml2.drivers.type_local] ML2 LocalTypeDriver initialization complete 2016-04-26 14:40:22.092 | 2016-04-26 14:40:22.077 | INFO [neutron.plugins.ml2.managers] Loaded type driver names: ['geneve', 'flat', 'vlan', 'gre', 'local', 'vxlan'] 2016-04-26 14:40:22.094 | 2016-04-26 14:40:22.079 | INFO [neutron.plugins.ml2.managers] Registered types: ['geneve', 'flat', 'vlan', 'gre', 'local', 'vxlan'] 2016-04-26 14:40:22.095 | 2016-04-26 14:40:22.081 | INFO [neutron.plugins.ml2.managers] Tenant network_types: ['local'] 2016-04-26 14:40:22.097 | 2016-04-26 14:40:22.083 | INFO [neutron.plugins.ml2.managers] Configured extension driver names: [] 2016-04-26 14:40:22.099 | 2016-04-26 14:40:22.084 | INFO [neutron.plugins.ml2.managers] Loaded extension driver names: [] 2016-04-26 14:40:22.101 | 2016-04-26 14:40:22.086 | INFO [neutron.plugins.ml2.managers] Registered extension drivers: [] 2016-04-26 14:40:22.102 | 2016-04-26 14:40:22.088 | INFO [neutron.plugins.ml2.managers] Configured mechanism driver names: [] 2016-04-26 14:40:22.104 | 2016-04-26 14:40:22.090 | INFO [neutron.plugins.ml2.managers] Loaded mechanism driver names: [] 2016-04-26 14:40:22.115 | 2016-04-26 14:40:22.091 | INFO [neutron.plugins.ml2.managers] Registered mechanism drivers: [] 2016-04-26 14:40:22.115 | 2016-04-26 14:40:22.093 | INFO [neutron.plugins.ml2.managers] Initializing driver for type 'geneve' 2016-04-26 
14:40:22.115 | 2016-04-26 14:40:22.095 | INFO [neutron.plugins.ml2.drivers.type_tunnel] geneve ID ranges: [] 2016-04-26 14:40:22.115 | 2016-04-26 14:40:22.097 | INFO [neutron.plugins.ml2.managers] Initializing driver for type 'flat' 2016-04-26 14:40:22.116 | 2016-04-26 14:40:22.098 | INFO [neutron.plugins.ml2.drivers.type_flat] ML2 FlatTypeDriver initialization complete 2016-04-26 14:40:22.116 | 2016-04-26 14:40:22.100 | INFO [neutron.plugins.ml2.managers] Initializing driver for type 'vlan' 2016-04-26 14:40:22.117 | 2016-04-26 14:40:22.102 | INFO [neutron.plugins.ml2.drivers.type_vlan] VlanTypeDriver initialization complete 2016-04-26 14:40:22.118 | 2016-04-26 14:40:22.104 | INFO [neutron.plugins.ml2.managers] Initializing driver for type 'gre' 2016-04-26 14:40:22.120 | 2016-04-26 14:40:22.105 | INFO [neutron.plugins.ml2.drivers.type_tunnel] gre ID ranges: [] 2016-04-26 14:40:22.122 | 2016-04-26 14:40:22.107 | INFO [neutron.
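The commit message above is about delegating attribute writes on a deprecation shim. The following is a simplified, hypothetical sketch of that pattern (not the actual neutron helper): without a `__setattr__` passthrough, assignments land on the proxy object and code that still holds the wrapped module keeps seeing stale values.

```python
class DeprecatedModuleProxy(object):
    """Stand-in for a deprecated module; forwards reads and writes."""

    def __init__(self, wrapped):
        # Bypass our own __setattr__ so the wrapped reference is stored
        # on the proxy itself.
        object.__setattr__(self, '_wrapped', wrapped)

    def __getattr__(self, name):
        return getattr(self._wrapped, name)

    def __setattr__(self, name, value):
        # The fix described above: pass writes through to the real module
        # so every existing reference observes the new value.
        setattr(self._wrapped, name, value)
```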
[Yahoo-eng-team] [Bug 1527719] Re: Adding a VNIC type for physical functions
Reviewed: https://review.openstack.org/309904 Committed: https://git.openstack.org/cgit/openstack/openstack-manuals/commit/?id=fcdd1a9701a67ae992ba85d8c8c798fa22adbcaf Submitter: Jenkins Branch:master commit fcdd1a9701a67ae992ba85d8c8c798fa22adbcaf Author: zhangguoqing Date: Mon Apr 25 17:15:30 2016 +0800 Add description for new 'direct-physical' VNIC type. Change-Id: Ic549f5d35e8365a2a806cdaf6379043fd9817c7c Closes-Bug: #1527719 ** Changed in: openstack-manuals Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1527719 Title: Adding a VNIC type for physical functions Status in neutron: Fix Released Status in openstack-api-site: Fix Released Status in openstack-manuals: Fix Released Bug description: https://review.openstack.org/246923 Dear bug triager. This bug was created since a commit was marked with DOCIMPACT. Your project "openstack/neutron" is set up so that we directly report the documentation bugs against it. If this needs changing, the docimpact-group option needs to be added for the project. You can ask the OpenStack infra team (#openstack-infra on freenode) for help if you need to. commit 2c60278992d5a21724105ed0ca6e1d2f3e5c Author: Brent Eagles Date: Mon Nov 9 09:26:53 2015 -0330 Adding a VNIC type for physical functions This change adds a new VNIC type to distinguish between virtual and physical functions in SR-IOV. The new VNIC type 'direct-physical' deviates from the behavior of 'direct' VNICs for virtual functions. While neutron tracks the resource as a port, it does not currently perform any management functions. Future changes may extend the segment mapping functionality that is currently based on agent configuration to include direct types. However, the direct-physical VNICs will not have functional parity with the other SR-IOV VNIC types in that quality of service and port security functionality is not available. APIImpact DocImpact: Add description for new 'direct-physical' VNIC type. Closes-Bug: #1500993 Change-Id: If1ab969c2002c649a3d51635ca2765c262e2d37f To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1527719/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
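For context, a hedged example of requesting the new VNIC type through python-neutronclient; the network UUID and credentials are placeholders, and whether a deployment honours 'direct-physical' depends on the SR-IOV setup described above.

```python
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

# 'direct-physical' asks for a whole physical function instead of a VF;
# as noted above, QoS and port security are not available for this type.
port = neutron.create_port({'port': {
    'network_id': 'NETWORK_UUID',
    'binding:vnic_type': 'direct-physical',
}})
```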
[Yahoo-eng-team] [Bug 1575402] [NEW] VPNaaS update NAT rules can generate stack trace
Public bug reported: neutron-vpn-agent can generate stack traces with AttributeError: 'NoneType' object has no attribute 'ipv4' in two different locations in ipsec.py, in add_nat_rule() and remove_nat_rule(), during sync() operations while site connections are being created. Here is an example stack trace (based on a Liberty distribution, but I believe this is still an issue in master/newton): 2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher Traceback (most recent call last): 2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/neutron-20160426T025546Z/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply 2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher executor_callback)) 2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/neutron-20160426T025546Z/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch 2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher executor_callback) 2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/neutron-20160426T025546Z/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 129, in _do_dispatch 2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher result = func(ctxt, **new_args) 2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/neutron-20160426T025546Z/lib/python2.7/site-packages/neutron_vpnaas/services/vpn/device_drivers/ipsec.py", line 675, in vpnservice_updated 2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher self.sync(context, [router] if router else []) 2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/neutron-20160426T025546Z/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 271, in inner 2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher return f(*args, **kwargs) 2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/neutron-20160426T025546Z/lib/python2.7/site-packages/neutron_vpnaas/services/vpn/device_drivers/ipsec.py", line 830, in sync 2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher self._delete_vpn_processes(sync_router_ids, router_ids) 2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/neutron-20160426T025546Z/lib/python2.7/site-packages/neutron_vpnaas/services/vpn/device_drivers/ipsec.py", line 860, in _delete_vpn_processes 2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher self.destroy_process(process_id) 2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/neutron-20160426T025546Z/lib/python2.7/site-packages/neutron_vpnaas/services/vpn/device_drivers/ipsec.py", line 728, in destroy_process 2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher self._update_nat(vpnservice, self.remove_nat_rule) 2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/neutron-20160426T025546Z/lib/python2.7/site-packages/neutron_vpnaas/services/vpn/device_drivers/ipsec.py", line 664, in _update_nat 2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher top=True) 2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/neutron-20160426T025546Z/lib/python2.7/site-packages/neutron_vpnaas/services/vpn/device_drivers/ipsec.py", line 629, in remove_nat_rule 2016-04-26 20:26:27.894 
28022 ERROR oslo_messaging.rpc.dispatcher iptables_manager.ipv4['nat'].remove_rule(chain, rule, top=top) 2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher AttributeError: 'NoneType' object has no attribute 'ipv4' 2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1575402 Title: VPNaaS update NAT rules can generate stack trace Status in neutron: New Bug description: neutron-vpn-agent can generate stack traces with AttributeError: 'NoneType' object has no attribute 'ipv4' in two different locations in ipsec.py, in add_nat_rule() and remove_nat_rule(), during sync() operations while site connections are being created. Here is an example stack trace (based on a Liberty distribution, but I believe this is still an issue in master/newton): 2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher Traceback (most recent call last): 2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/neutron-20160426T025546Z/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply 2016-04-26 2
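A hedged sketch of the kind of guard that would avoid the traceback above: when sync() races with router teardown, the driver may no longer hold an iptables manager for the router, so add_nat_rule()/remove_nat_rule() should bail out instead of dereferencing None. The function below is illustrative, not the actual neutron-vpnaas code.

```python
def remove_nat_rule(iptables_manager, chain, rule, top=False):
    """Defensive variant of the driver helper shown in the traceback."""
    if iptables_manager is None:
        # The router (and its namespace) is already gone mid-sync;
        # there is nothing to clean up, so do not dereference None.
        return
    iptables_manager.ipv4['nat'].remove_rule(chain, rule, top=top)
```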
[Yahoo-eng-team] [Bug 1572013] Re: missing parameter explanation in "Servers" section of v2.1 compute api
I agree with Matt here, let's just close this bug report and use the blueprint "api-ref-in-rst" to drive this. The effort is described on the ML [1] and the wiki [2]. The file the bug reporter is referencing to has a comment at the top which describes the needed cleanup tasks [3]. Long story short, push the patch you planned to do but reference the bp instead of the bug. References: [1] [openstack-dev] [nova] api-ref content verification phase doc push http://lists.openstack.org/pipermail/openstack-dev/2016-April/092936.html [2] https://wiki.openstack.org/wiki/NovaAPIRef#Parameter_Verification [3] https://github.com/openstack/nova/blob/master/api-ref/source/servers.inc#L3 ** Changed in: nova Status: New => Opinion -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1572013 Title: missing parameter explaination in "Servers" section of v2.1 compute api Status in OpenStack Compute (nova): Opinion Bug description: URL: http://developer.openstack.org/api-ref-compute-v2.1.html I think the request parameters listed in "GET /v2.1/{tenant_id}/servers" of "Servers" section are not complete, when i want to get all servers of all tenants, there should be "?all_tenants=true" in the url, as i read in python-novaclient source code and it works actually after testing; but there is no specific description about "all_tenant" listed in "Request parameters" following in the api documentation. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1572013/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
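A hedged example of the undocumented parameter the reporter mentions, using python-novaclient; the credentials and endpoint are placeholders, and an admin token is required for all_tenants to take effect.

```python
from novaclient import client

nova = client.Client('2.1', 'admin', 'secret', 'admin',
                     'http://controller:5000/v2.0')

# Equivalent to GET /v2.1/{tenant_id}/servers?all_tenants=True
servers = nova.servers.list(search_opts={'all_tenants': 1})
```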
[Yahoo-eng-team] [Bug 1575285] Re: _BroadcastMessage._send_response raises TypeError
It looks like this piece of code [1] is 3 years old without having any negative effect. These interfaces are all private and contained in one single module without leaking to the outside world, so I guess it's not worth to make a patch for that. References: [1] https://git.openstack.org/cgit/openstack/nova/tree/nova/cells/messaging.py?id=f9a868e86ce11f786538547c301b805bd68a1697#n462 ** Changed in: nova Status: New => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1575285 Title: _BroadcastMessage._send_response raises TypeError Status in OpenStack Compute (nova): Won't Fix Bug description: The class _BaseMessage defines a method named _send_json_responses, which takes a named parameter neighbor_only. Later on in the same class, another method _send_response makes a call to _send_json_responses (on line 285), setting neighbor_only explicitly. However, a subclass of _BaseMessage, _BroadcastMessage overrides _send_json_responses with a definition that does not have neighbor_only as a named parameter. Therefore if _send_response is ever called on an object of type _BroadcastMessage, a TypeError will be raised. One option would be to change the definition of _BroadcastMessage._send_json_reponses to allow neighbour_only to be passed even though it is not required. def _send_json_responses(self, json_responses,neighbour_only=None): """Responses to broadcast messages always need to go to the neighbor cell from which we received this message. That cell aggregates the responses and makes sure to forward them to the correct source. """ return super(_BroadcastMessage, self)._send_json_responses( json_responses, neighbor_only=True, fanout=True) To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1575285/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
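A stripped-down, self-contained reproduction of the mismatch described above (names simplified from the nova cells code): the base class passes a keyword argument that the subclass override does not accept, so the call raises TypeError.

```python
class Base(object):
    def _send_json_responses(self, responses, neighbor_only=False):
        print('sending', responses, 'neighbor_only=%s' % neighbor_only)

    def _send_response(self, responses):
        # Always passes neighbor_only, as nova's _BaseMessage does.
        self._send_json_responses(responses, neighbor_only=True)


class Broadcast(Base):
    def _send_json_responses(self, responses):  # keyword parameter missing
        super(Broadcast, self)._send_json_responses(responses,
                                                    neighbor_only=True)


Broadcast()._send_response([])  # TypeError: unexpected keyword 'neighbor_only'
```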
[Yahoo-eng-team] [Bug 1574384] Re: Libvirt console parameter incorrect for AMD64 KVM
Nova doesn't set the cmdline if it is not defined in the image properties. Your installed libvirt version uses then a default for that platform and it sounds like an issue in libvirt itself. You can double- check in the logs of nova-compute, there should be the generated "libvirt.xml" *without* any cmdline. I'm closing this bug report. -besides of that- Kilo is only supported for security fixes and this issue doesn't sound like one. ** Changed in: nova Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1574384 Title: Libvirt console parameter incorrect for AMD64 KVM Status in OpenStack Compute (nova): Invalid Bug description: I intentionally make the title of this bug similar to #1259323. Since then, the default cmdline for libvirt was probably changed to root=/dev/vda console=tty0 console=ttyS0 console=ttyAMA0 I can't find when the change could have occured. It is weird that I'm getting it on a fairly standard OpenStack Kilo installation on Ubuntu 14.04, AMD64 platform. The default cmdline is applied because I'm launching instances in the AMI format. The consequence is that I do not see the output of init scripts in the console log. Everything between the last kernel message and the login prompt is missing. Because of that, I do not see, e.g., the generated root password of the image. The graphical console of course works. The kernel documentation explains it - the LAST console= statement is where /dev/console is redirected. Kernel messages go to all of them. The login prompt is then generated by a getty configured in /etc/inittab. It can be fixed using image properties on image upload to glance, such as: glance image-create ... --prop os_command_line="root=/dev/vda console=tty0 console=ttyS0" There are also properties for setting kernel and ramdisk on the command line, so it's not a big problem to add one more definition, but it took me a few hours to figure it out... To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1574384/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1575368] [NEW] Federation Unable to handle multiple groups
Public bug reported: I'm using OIDC federated authentication, I'm able to use the mapping json to do ephemeral user authentication. Following is my mapping json: [ { "local": [ { "user": { "name": "{0}" }, "group": { "id": "{1}" }, "domain": { "name": "default" } } ], "remote": [ { "type": "HTTP_OIDC_EMAIL" }, { "type": "HTTP_OIDC_GROUP" }, { "type" : "HTTP_OIDC_ISS", "any_one_of": [ "https://myidp.cisco.com/oauth2"; ] } ] } ] and when tested with the keystone-mange mapping, I'm able to see multiple groups properly. output of Keystone-mapping verification. { "group_ids": [ "5207b97776914a6b9f99e1c985533863,23a70aa1af5f4439afb628a10f53ade3" ], "user": { "domain": { "id": "Federated" }, "type": "ephemeral", "name": "kathu...@cisco.com" }, "group_names": [] } However, when the same flow is executed thru the OIDC I get the following error message {"error": {"message": "Group ['5207b97776914a6b9f99e1c985533863', '23a70aa1af5f4439afb628a10f53ade3'] returned by mapping fed_mapping was not found in the backend. (Disable debug mode to suppress these details.)", "code": 500, "title": "Internal Server Error"}} I looked into the util.py code and printed the groups that were coming into the validate_groups_in_backend function. validate_groups_in_backend /opt/stack/keystone/keystone/contrib/federation/utils.py:258 2016-04-26 12:38:46.750572 25124 DEBUG keystone.contrib.federation.utils [req-b54b5075-a4e5-46fc-a600-f8a07cfaf2cf - - - - -] printing group_ids list [u"['5207b97776914a6b9f99e1c985533863', '23a70aa1af5f4439afb628a10f53ade3']"] validate_groups_in_backend /opt/stack/keystone/keystone/contrib/federation/utils.py:259 2016-04-26 12:38:46.750704 25124 DEBUG keystone.contrib.federation.utils [req-b54b5075-a4e5-46fc-a600-f8a07cfaf2cf - - - - -] printing group_id ['5207b97776914a6b9f99e1c985533863', '23a70aa1af5f4439afb628a10f53ade3'] validate_groups_in_backend /opt/stack/keystone/keystone/contrib/federation/utils.py:260 2016-04-26 12:38:47.092780 25124 WARNING keystone.common.wsgi [req-b54b5075-a4e5-46fc-a600-f8a07cfaf2cf - - - - -] Group ['5207b97776914a6b9f99e1c985533863', '23a70aa1af5f4439afb628a10f53ade3'] returned by mapping openam_mapping was not found in the backend. (Disable debug mode to suppress these details.) (END) it looks like the list is formed incorrectly [u"['5207b97776914a6b9f99e1c985533863', '23a70aa1af5f4439afb628a10f53ade3']"] it should have been [u'5207b97776914a6b9f99e1c985533863', u'23a70aa1af5f4439afb628a10f53ade3'] Thanks, Krishna ** Affects: keystone Importance: Undecided Status: New ** Also affects: centos Importance: Undecided Status: New ** Package changed: centos => ubuntu ** No longer affects: ubuntu ** Description changed: I'm using OIDC federated authentication, I'm able to use the mapping json to do ephemeral user authentication. Following is my mapping json: [ - { - "local": [ - { - "user": { - "name": "{0}" - }, - - "group": { - "id": "{1}" - }, - "domain": { - "name": "default" - } + { + "local": [ + { + "user": { + "name": "{0}" + }, + "group": { + "id": "{1}" + }, + "domain": { + "name": "default" + } - } - ], - "remote": [ - { - "type": "HTTP_OIDC_EMAIL" - }, - { - "type": "HTTP_OIDC_GROUP" - }, - { - "type" : "HTTP_OIDC_ISS", - "any_one_of": [ - "https://myidp.cisco.com/oauth2"; - ] - } + } + ], + "remote": [ + { + "type": "HTTP_OIDC_EMAIL" + }, + { + "type": "HTTP_OIDC_GROUP" + }, + { + "type" : "HTTP_OIDC_ISS", + "any_one_of": [ + "https://myidp.cisco.com/oauth2"; + ] + } - - ] - } - ] + ] + } + ] and when tested with the keystone-man
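A hedged illustration of the shape mismatch visible in the debug output above: keystone expects one backend group id per list entry, but here a single entry holds the string form of a whole list, so the lookup for that literal string fails. The splitting shown is only illustrative of what a correctly-handled multi-valued attribute would produce.

```python
raw = ("5207b97776914a6b9f99e1c985533863,"
       "23a70aa1af5f4439afb628a10f53ade3")

# What the log shows: one element containing the repr of a list.
broken = [str([g for g in raw.split(',')])]
# ["['5207b97776914a6b9f99e1c985533863', '23a70aa1af5f4439afb628a10f53ade3']"]

# What the backend lookup needs: one group id string per element.
expected = raw.split(',')
# ['5207b97776914a6b9f99e1c985533863', '23a70aa1af5f4439afb628a10f53ade3']
```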
[Yahoo-eng-team] [Bug 1573875] Re: Nova able to start VM 2 times after failed to live migrate it
> - First of all, it would be nice to install qemu 2.5 with the > original kilo repository, [...] The upstream Nova project is not responsible to install system level packages. You might want to move this bug to the ubuntu-cloud-archive project. > - If nova able to start instances two times with same rbd block > device, it's a really big hole in the system I think [...] The upstream Kilo release has only support for security issues [1] and this doesn't sound like one. Please check if this still happens on the Mitaka release or the current Newton master code. If this is the case please reopen this bug report. > - Some kind of checking also would usefull, which automatically > checks and compare the VM states in the database, and also in > hypervisors side in a given interval (this check may can be disabled, > and checking interval should be able to configured imho) The config option "sync_power_state_interval" in the "nova.conf" file should do exactly that. References: [1] http://releases.openstack.org/ ** Changed in: nova Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1573875 Title: Nova able to start VM 2 times after failed to live migrate it Status in OpenStack Compute (nova): Invalid Bug description: Hi, I've faced a strange problem with nova. A few enviromental details: - We use Ubuntu 14.04 LTS - We use Kilo from Ubuntu cloud archive - We use KVM as Hypervisor with the stocked qemu 2.2 - We got Ceph as shared storage with libvirt-rbd devices - OVS neutron based networking, but it's all the same with other solutions I think. So, the workflow, which need to reproduce the bug: - Start a Windows guest (Linux distros not affected as I saw) - Live migrate this VM to another host (okay, I know, it's not fit 100% in cloud conception, but we must use it) As happend then, is a really wrong behavior: - The VM starts to migrate (virsh list shows it in a new host) - On the source side, virsh list tells me, the instance is stopped - After a few second, the destination host just remove the instance, and the source change it's state back to running - The network comes unavailable - The horizon reports, the instance is in shut off state and it's definietly not (the VNC is still available for example) - User can click on 'Start instance' button, and the instance will be started at the destination - We see those lines in a specified libvirt log: "qemu-system-x86_64: load of migration failed: Invalid argument" After a few google search whit this error, i've found this site: https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1472500 It's not the exact error, but it's tells us a really important fact: those errors came with qemu 2.2, and it's had been fixed in 2.3... First of all, I've installed 2 CentOS compute node, which cames with qemu 2.3 by default, and the Windows migration started to work as Linux guests did before. Unfortunately, we must use Ubuntu, so we needed to find a workaround, which had been done yesterday... 
What I did: - Added Mitaka repository (which came out two days before) - Run this command (I cannot dist-upgrade openstack now): apt-get install qemu-system qemu-system-arm qemu-system-common qemu-system-mips qemu-system-misc qemu-system-ppc qemu-system-sparc qemu-system-x86 qemu-utils seabios libvirt-bin - Let the qemu 2.5 installed - The migration tests shows us, this new packages solves the issue What I want/advice, to repair this: - First of all, it would be nice to install qemu 2.5 with the original kilo repository, and I be able to upgrade without any 'quick and dirty' method (add-remove Mitaka repo until installing qemu). It is ASAP to us, cause if we not get this until the next weekend, i had to choose the quick and dirty way (but don't want to rush anybody... just telling :) ) - If nova able to start instances two times with same rbd block device, it's a really big hole in the system I think... we just corrupted 2 test Windows 7 guest with a few clicks... Some security check should be implementet, which collects the instances (and their states) from kvm at any VM starting, and if the algorithm sees, there are guest running with the same name (or some kind of uuid maybe) it's just not starting another copy... - Some kind of checking also would usefull, which automatically checks and compare the VM states in the database, and also in hypervisors side in a given interval (this check may can be disabled, and checking interval should be able to configured imho) I've not found any clue, that those things in nova side are repaired previously in liberty or mitaka... am I right, ot just someting avoid my attention? If any further information needed, feel free to ask :) Regards, P
[Yahoo-eng-team] [Bug 1510345] Re: [SRU] Cloud Images do not bring up networking w/ certain virtual NICs due to device naming rules
this is fix-released from cloud-init perspective in xenial. If you disagree, please state why and open the bug. cloud-init now renders .rules files and configures ENI per the datasource's provided network config or a fallback network config. ** No longer affects: cloud-init ** Changed in: cloud-init (Ubuntu) Status: Triaged => Fix Released ** Changed in: cloud-init (Ubuntu Xenial) Status: Triaged => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1510345 Title: [SRU] Cloud Images do not bring up networking w/ certain virtual NICs due to device naming rules Status in Ubuntu on EC2: Fix Released Status in cloud-init package in Ubuntu: Fix Released Status in livecd-rootfs package in Ubuntu: Fix Released Status in livecd-rootfs source package in Wily: Fix Released Status in cloud-init source package in Xenial: Fix Released Status in livecd-rootfs source package in Xenial: Fix Released Bug description: SRU Justification [IMPACT] Cloud images produced by livecd-rootfs are not accessable when presented with certain NICS such as ixgbevf used on HVM instances for AWS. [CAUSE] Changes in default device naming in 15.10 causes some devices to be named at boot time and are not predicatable, i.e. instead of "eth0" being the first NIC, "ens3" might be used. [FIX] Boot instances with "net.ifnames=0". This change reverts to the old device naming conventions. As a fix, this is the most appropriate since the cloud images configure the first NIC for DHCP. [TEST CASE1]: - Build image from -proposed - Boot image in KVM, i.e: $ qemu-system-x86_64 \ -smp 2 -m 1024 -machine accel=kvm \ -drive file=build.img,if=virtio,bus=0,cache=unsafe,unit=0,snapshot=on \ -net nic,model=rtl8139 - Confirm that image has "eth0" [TEST CASE2]: - Build image from -proposed - Publish image to AWS as HVM w/ SRIOV enabled - Confirm that instance boots and is accessable via SSH [ORIGINAL REPORT] I've made several attempts to launch a c4.xlarge and c4.8xlarge instances using Ubuntu 15.10 Wily but am unable to ping the instance after it has started running. The console shows that the instance reachability check failed. I am able to successfully launch c4.xlarge instances using Ubuntu 14.04 and t2.large instances using Ubuntu 15.10. I've tried with both of these instance AMIs: ubuntu/images/hvm-ssd/ubuntu-wily-15.10-amd64-server-20151021 - ami-225ebd11 ubuntu/images-testing/hvm-ssd/ubuntu-wily-daily-amd64-server-20151026 - ami-ea20cdd9 Might there be a problem with the Ubuntu Kernel in 15.10 for the c4 instances? Looking at the system log it seems that the network never comes up: [ 140.699509] cloud-init[1469]: 2015-10-26 20:45:49,887 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04 /meta-data/instance-id' failed [0/120s]: request error [('Connection aborted.', OSError(101, 'Network is unreachable'))] Thread at AWS forums: https://forums.aws.amazon.com/thread.jspa?threadID=218656 To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu-on-ec2/+bug/1510345/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1575316] [NEW] Functional job failures in pecan tests
Public bug reported: Examples: 2016-04-26 14:40:22.082 | 2016-04-26 14:40:22.054 | == 2016-04-26 14:40:22.083 | 2016-04-26 14:40:22.056 | Failed 2 tests - output below: 2016-04-26 14:40:22.083 | 2016-04-26 14:40:22.057 | == 2016-04-26 14:40:22.083 | 2016-04-26 14:40:22.059 | 2016-04-26 14:40:22.083 | 2016-04-26 14:40:22.061 | neutron.tests.functional.pecan_wsgi.test_controllers.TestQuotasController.test_get 2016-04-26 14:40:22.083 | 2016-04-26 14:40:22.063 | -- 2016-04-26 14:40:22.083 | 2016-04-26 14:40:22.065 | 2016-04-26 14:40:22.084 | 2016-04-26 14:40:22.066 | Captured pythonlogging: 2016-04-26 14:40:22.084 | 2016-04-26 14:40:22.068 | ~~~ 2016-04-26 14:40:22.085 | 2016-04-26 14:40:22.070 | INFO [neutron.plugins.ml2.managers] Configured type driver names: ['local', 'flat', 'vlan', 'gre', 'vxlan', 'geneve'] 2016-04-26 14:40:22.087 | 2016-04-26 14:40:22.072 | INFO [neutron.plugins.ml2.drivers.type_flat] Arbitrary flat physical_network names allowed 2016-04-26 14:40:22.088 | 2016-04-26 14:40:22.074 | INFO [neutron.plugins.ml2.drivers.type_vlan] Network VLAN ranges: {} 2016-04-26 14:40:22.090 | 2016-04-26 14:40:22.076 | INFO [neutron.plugins.ml2.drivers.type_local] ML2 LocalTypeDriver initialization complete 2016-04-26 14:40:22.092 | 2016-04-26 14:40:22.077 | INFO [neutron.plugins.ml2.managers] Loaded type driver names: ['geneve', 'flat', 'vlan', 'gre', 'local', 'vxlan'] 2016-04-26 14:40:22.094 | 2016-04-26 14:40:22.079 | INFO [neutron.plugins.ml2.managers] Registered types: ['geneve', 'flat', 'vlan', 'gre', 'local', 'vxlan'] 2016-04-26 14:40:22.095 | 2016-04-26 14:40:22.081 | INFO [neutron.plugins.ml2.managers] Tenant network_types: ['local'] 2016-04-26 14:40:22.097 | 2016-04-26 14:40:22.083 | INFO [neutron.plugins.ml2.managers] Configured extension driver names: [] 2016-04-26 14:40:22.099 | 2016-04-26 14:40:22.084 | INFO [neutron.plugins.ml2.managers] Loaded extension driver names: [] 2016-04-26 14:40:22.101 | 2016-04-26 14:40:22.086 | INFO [neutron.plugins.ml2.managers] Registered extension drivers: [] 2016-04-26 14:40:22.102 | 2016-04-26 14:40:22.088 | INFO [neutron.plugins.ml2.managers] Configured mechanism driver names: [] 2016-04-26 14:40:22.104 | 2016-04-26 14:40:22.090 | INFO [neutron.plugins.ml2.managers] Loaded mechanism driver names: [] 2016-04-26 14:40:22.115 | 2016-04-26 14:40:22.091 | INFO [neutron.plugins.ml2.managers] Registered mechanism drivers: [] 2016-04-26 14:40:22.115 | 2016-04-26 14:40:22.093 | INFO [neutron.plugins.ml2.managers] Initializing driver for type 'geneve' 2016-04-26 14:40:22.115 | 2016-04-26 14:40:22.095 | INFO [neutron.plugins.ml2.drivers.type_tunnel] geneve ID ranges: [] 2016-04-26 14:40:22.115 | 2016-04-26 14:40:22.097 | INFO [neutron.plugins.ml2.managers] Initializing driver for type 'flat' 2016-04-26 14:40:22.116 | 2016-04-26 14:40:22.098 | INFO [neutron.plugins.ml2.drivers.type_flat] ML2 FlatTypeDriver initialization complete 2016-04-26 14:40:22.116 | 2016-04-26 14:40:22.100 | INFO [neutron.plugins.ml2.managers] Initializing driver for type 'vlan' 2016-04-26 14:40:22.117 | 2016-04-26 14:40:22.102 | INFO [neutron.plugins.ml2.drivers.type_vlan] VlanTypeDriver initialization complete 2016-04-26 14:40:22.118 | 2016-04-26 14:40:22.104 | INFO [neutron.plugins.ml2.managers] Initializing driver for type 'gre' 2016-04-26 14:40:22.120 | 2016-04-26 14:40:22.105 | INFO [neutron.plugins.ml2.drivers.type_tunnel] gre ID ranges: [] 2016-04-26 14:40:22.122 | 2016-04-26 14:40:22.107 | INFO [neutron.plugins.ml2.managers] Initializing driver for type 'local' 2016-04-26 
14:40:22.123 | 2016-04-26 14:40:22.109 | INFO [neutron.plugins.ml2.managers] Initializing driver for type 'vxlan' 2016-04-26 14:40:22.125 | 2016-04-26 14:40:22.110 | INFO [neutron.plugins.ml2.drivers.type_tunnel] vxlan ID ranges: [] 2016-04-26 14:40:22.127 | 2016-04-26 14:40:22.112 | INFO [neutron.plugins.ml2.plugin] Modular L2 Plugin initialization complete 2016-04-26 14:40:22.129 | 2016-04-26 14:40:22.114 | INFO [neutron.extensions.vlantransparent] Disabled vlantransparent extension. 2016-04-26 14:40:22.130 | 2016-04-26 14:40:22.116 | INFO [neutron.pecan_wsgi.startup] Extension Quota management support is pecan-aware. Fetching resources and controllers 2016-04-26 14:40:22.132 | 2016-04-26 14:40:22.118 | INFO [neutron.pecan_wsgi.startup] Added controller for resource address_scope via URI path segment:address_scopes 2016-04-26 14:40:22.134 | 2016-04-26 14:40:22.119 | INFO [neutron.pecan_wsgi.startup] Added controller for resource network via URI path segment:networks 2016-04-26 14:40:22.135 |
[Yahoo-eng-team] [Bug 1575285] [NEW] _BroadcastMessage._send_response raises TypeError
Public bug reported: The class _BaseMessage defines a method named _send_json_responses, which takes a named parameter neighbor_only. Later on in the same class, another method _send_response makes a call to _send_json_responses (on line 285), setting neighbor_only explicitly. However, a subclass of _BaseMessage, _BroadcastMessage overrides _send_json_responses with a definition that does not have neighbor_only as a named parameter. Therefore if _send_response is ever called on an object of type _BroadcastMessage, a TypeError will be raised. One option would be to change the definition of _BroadcastMessage._send_json_reponses to allow neighbour_only to be passed even though it is not required. def _send_json_responses(self, json_responses,neighbour_only=None): """Responses to broadcast messages always need to go to the neighbor cell from which we received this message. That cell aggregates the responses and makes sure to forward them to the correct source. """ return super(_BroadcastMessage, self)._send_json_responses( json_responses, neighbor_only=True, fanout=True) ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1575285 Title: _BroadcastMessage._send_response raises TypeError Status in OpenStack Compute (nova): New Bug description: The class _BaseMessage defines a method named _send_json_responses, which takes a named parameter neighbor_only. Later on in the same class, another method _send_response makes a call to _send_json_responses (on line 285), setting neighbor_only explicitly. However, a subclass of _BaseMessage, _BroadcastMessage overrides _send_json_responses with a definition that does not have neighbor_only as a named parameter. Therefore if _send_response is ever called on an object of type _BroadcastMessage, a TypeError will be raised. One option would be to change the definition of _BroadcastMessage._send_json_reponses to allow neighbour_only to be passed even though it is not required. def _send_json_responses(self, json_responses,neighbour_only=None): """Responses to broadcast messages always need to go to the neighbor cell from which we received this message. That cell aggregates the responses and makes sure to forward them to the correct source. """ return super(_BroadcastMessage, self)._send_json_responses( json_responses, neighbor_only=True, fanout=True) To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1575285/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1575154] Re: Ubuntu 16.04, Unexpected API Error, , Nova
Closed as requested in comment #2. ** Changed in: nova Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1575154 Title: Ubuntu 16.04, Unexpected API Error,, Nova Status in OpenStack Compute (nova): Invalid Bug description: Creating an instance fails: Command: openstack server create --flavor 42b791ba-3330-4a8c-b8bc-5b8afe94b7ce --image 6b5bfd61-8c35-44e0-be49-da12a6988cd1 --nic net-id=16ed9203-beaf-42f0-bf37-b7bdf68839f1 --security-group default --key-name testinstance public-instance --debug (I'm following: http://docs.openstack.org/liberty/install-guide-ubuntu /launch-instance-public.html, all commands nova flavor-list, nova image-list, work without issues) boot_args: ['public-instance', , ] boot_kwargs: {'files': {}, 'userdata': None, 'availability_zone': None, 'nics': [{'port-id': '', 'net-id': u'16ed9203-beaf-42f0-bf37-b7bdf68839f1', 'v4-fixed-ip': '', 'v6-fixed-ip': ''}], 'block_device_mapping': {}, 'max_count': 1, 'meta': None, 'key_name': 'testinstance', 'min_count': 1, 'scheduler_hints': {}, 'reservation_id': None, 'security_groups': ['default'], 'config_drive': None} REQ: curl -g -i -X POST http://172.24.33.142:8774/v2.1/172f573ed78a4fc7b3460ea4a3ac6dbb/servers -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}fadf0d364f2eb45845658431ef42d15ddf1d4e8a" -d '{"server": {"name": "public-instance", "imageRef": "6b5bfd61-8c35-44e0-be49-da12a6988cd1", "key_name": "testinstance", "flavorRef": "42b791ba-3330-4a8c-b8bc-5b8afe94b7ce", "max_count": 1, "min_count": 1, "networks": [{"uuid": "16ed9203-beaf-42f0-bf37-b7bdf68839f1"}], "security_groups": [{"name": "default"}]}}' "POST /v2.1/172f573ed78a4fc7b3460ea4a3ac6dbb/servers HTTP/1.1" 500 216 RESP: [500] Content-Length: 216 X-Compute-Request-Id: req-9bc78003-48a4-4059-9dad-ed7ee468c931 Vary: X-OpenStack-Nova-API-Version Connection: keep-alive X-Openstack-Nova-Api-Version: 2.1 Date: Tue, 26 Apr 2016 12:01:13 GMT Content-Type: application/json; charset=UTF-8 RESP BODY: {"computeFault": {"message": "Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.\n", "code": 500}} Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. 
(HTTP 500) (Request-ID: req-9bc78003-48a4-4059-9dad-ed7ee468c931) Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/cliff/app.py", line 374, in run_subcommand result = cmd.run(parsed_args) File "/usr/lib/python2.7/dist-packages/openstackclient/common/command.py", line 38, in run return super(Command, self).run(parsed_args) File "/usr/lib/python2.7/dist-packages/cliff/display.py", line 92, in run column_names, data = self.take_action(parsed_args) File "/usr/lib/python2.7/dist-packages/openstackclient/compute/v2/server.py", line 519, in take_action server = compute_client.servers.create(*boot_args, **boot_kwargs) File "/usr/lib/python2.7/dist-packages/novaclient/v2/servers.py", line 1233, in create **boot_kwargs) File "/usr/lib/python2.7/dist-packages/novaclient/v2/servers.py", line 667, in _boot return_raw=return_raw, **kwargs) File "/usr/lib/python2.7/dist-packages/novaclient/base.py", line 345, in _create resp, body = self.api.client.post(url, body=body) File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 179, in post return self.request(url, 'POST', **kwargs) File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 94, in request raise exceptions.from_response(resp, body, url, method) ClientException: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-9bc78003-48a4-4059-9dad-ed7ee468c931) clean_up CreateServer: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-9bc78003-48a4-4059-9dad-ed7ee468c931) Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/openstackclient/shell.py", line 118, in run ret_val = super(OpenStackShell, self).run(argv) File "/usr/lib/python2.7/dist-packages/cliff/app.py", line 255, in run result = self.run_subcommand(remainder) File "/usr/lib/python2.7/dist-packages/openstackclient/shell.py", line 153, in run_subcommand ret_value = super(OpenStackShell, self).run_subcommand(argv) File "/usr/lib/python2.7/dist-packages/cliff/app.py", line 374, in run_subcommand result = cmd.run(parsed_args) File "/usr/lib/python2.7/dist-packages/openstackclient/common/command.py", line 38, in ru
[Yahoo-eng-team] [Bug 1575247] [NEW] network query when creating subnet looks too complex
Public bug reported: When creating a subnet, the network query appears to translate to: 67 Query SELECT networks.tenant_id AS networks_tenant_id, networks.id AS networks_id, networks.name AS networks_name, networks.status AS networks_status, networks.admin_state_up AS networks_admin_state_up, networks.mtu AS networks_mtu, networks.vlan_transparent AS networks_vlan_transparent, networks.availability_zone_hints AS networks_availability_zone_hints, networks.standard_attr_id AS networks_standard_attr_id, subnetpoolprefixes_1.cidr AS subnetpoolprefixes_1_cidr, subnetpoolprefixes_1.subnetpool_id AS subnetpoolprefixes_1_subnetpool_id, standardattributes_1.created_at AS standardattributes_1_created_at, standardattributes_1.updated_at AS standardattributes_1_updated_at, standardattributes_1.id AS standardattributes_1_id, standardattributes_1.resource_type AS standardattributes_1_resource_type, standardattributes_1.description AS standardattributes_1_description, tags_1.standard_attr_id AS tags_1_standard_attr_id, tags_1.tag AS tags_1_tag, subnetpools_1.tenant_id AS subnetpo ols_1_tenant_id, subnetpools_1.id AS subnetpools_1_id, subnetpools_1.name AS subnetpools_1_name, subnetpools_1.ip_version AS subnetpools_1_ip_version, subnetpools_1.default_prefixlen AS subnetpools_1_default_prefixlen, subnetpools_1.min_prefixlen AS subnetpools_1_min_prefixlen, subnetpools_1.max_prefixlen AS subnetpools_1_max_prefixlen, subnetpools_1.shared AS subnetpools_1_shared, subnetpools_1.is_default AS subnetpools_1_is_default, subnetpools_1.default_quota AS subnetpools_1_default_quota, subnetpools_1.hash AS subnetpools_1_hash, subnetpools_1.address_scope_id AS subnetpools_1_address_scope_id, subnetpools_1.standard_attr_id AS subnetpools_1_standard_attr_id, ipallocationpools_1.id AS ipallocationpools_1_id, ipallocationpools_1.subnet_id AS ipallocationpools_1_subnet_id, ipallocationpools_1.first_ip AS ipallocationpools_1_first_ip, ipallocationpools_1.last_ip AS ipallocationpools_1_last_ip, dnsnameservers_1.address AS dnsnameservers_1_address, dnsnameservers_1.subnet_id AS dnsn ameservers_1_subnet_id, dnsnameservers_1.`order` AS dnsnameservers_1_order, subnetroutes_1.destination AS subnetroutes_1_destination, subnetroutes_1.nexthop AS subnetroutes_1_nexthop, subnetroutes_1.subnet_id AS subnetroutes_1_subnet_id, networkrbacs_1.tenant_id AS networkrbacs_1_tenant_id, networkrbacs_1.id AS networkrbacs_1_id, networkrbacs_1.target_tenant AS networkrbacs_1_target_tenant, networkrbacs_1.action AS networkrbacs_1_action, networkrbacs_1.object_id AS networkrbacs_1_object_id, standardattributes_2.created_at AS standardattributes_2_created_at, standardattributes_2.updated_at AS standardattributes_2_updated_at, standardattributes_2.id AS standardattributes_2_id, standardattributes_2.resource_type AS standardattributes_2_resource_type, standardattributes_2.description AS standardattributes_2_description, tags_2.standard_attr_id AS tags_2_standard_attr_id, tags_2.tag AS tags_2_tag, subnets_1.tenant_id AS subnets_1_tenant_id, subnets_1.id AS subnets_1_id, subnets_1.name AS subnets_1_name, subnets_1.network_id AS subnets_1_network_id, subnets_1.subnetpool_id AS subnets_1_subnetpool_id, subnets_1.ip_version AS subnets_1_ip_version, subnets_1.cidr AS subnets_1_cidr, subnets_1.gateway_ip AS subnets_1_gateway_ip, subnets_1.enable_dhcp AS subnets_1_enable_dhcp, subnets_1.ipv6_ra_mode AS subnets_1_ipv6_ra_mode, subnets_1.ipv6_address_mode AS subnets_1_ipv6_address_mode, subnets_1.standard_attr_id AS subnets_1_standard_attr_id, 
networkrbacs_2.tenant_id AS networkrbacs_2_tenant_id, networkrbacs_2.id AS networkrbacs_2_id, networkrbacs_2.target_tenant AS networkrbacs_2_target_tenant, networkrbacs_2.action AS networkrbacs_2_action, networkrbacs_2.object_id AS networkrbacs_2_object_id, agents_1.id AS agents_1_id, agents_1.agent_type AS agents_1_agent_type, agents_1.`binary` AS agents_1_binary, agents_1.topic AS agents_1_topic, agents_1.host AS agents_1_host, agents_1.availability_zone AS agents_1_availability_zone, agents_1.admin_state_up AS agents_1_admin_state_ up, agents_1.created_at AS agents_1_created_at, agents_1.started_at AS agents_1_started_at, agents_1.heartbeat_timestamp AS agents_1_heartbeat_timestamp, agents_1.description AS agents_1_description, agents_1.configurations AS agents_1_configurations, agents_1.resource_versions AS agents_1_resource_versions, agents_1.`load` AS agents_1_load, standardattributes_3.created_at AS standardattributes_3_created_at, standardattributes_3.updated_at AS standardattributes_3_updated_at, standardattributes_3.id AS standardattributes_3_id, standardattributes_3.resource_type AS standardattributes_3_resource_type, standardattributes_3.description AS standardattributes_3_description, tags_3.standard_attr_id AS tags_3_standard_attr_id, tags_3.tag AS tags_3_tag, externalnetworks_1.network_id AS externalnetworks_1_network_id, external
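The query above is so large mainly because many relationships are eagerly joined into a single SELECT. Below is a minimal, hedged SQLAlchemy sketch (toy models, not the neutron schema) of how lazy='joined' relationships accumulate joins on even a simple lookup.

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()


class Network(Base):
    __tablename__ = 'networks'
    id = Column(Integer, primary_key=True)
    # Every lazy='joined' relationship adds another LEFT OUTER JOIN to
    # every query that loads a Network, even a lookup by primary key.
    subnets = relationship('Subnet', lazy='joined')


class Subnet(Base):
    __tablename__ = 'subnets'
    id = Column(Integer, primary_key=True)
    network_id = Column(Integer, ForeignKey('networks.id'))
    cidr = Column(String(64))


engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
with Session(engine) as session:
    # Printing the query shows the join folded into the one statement.
    print(session.query(Network).filter_by(id=1))
```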
[Yahoo-eng-team] [Bug 1575233] [NEW] target-lun 0 can be deleted when detaching volume.
Public bug reported: target-lun 0 can delete when detaching volume. Environment === - OpenStack Release : Liberty - OS : Ubuntu 14.04.2 LTS - Hypervisor : KVM - Cinder Storage : iSCSI (EMC VNX) Description === I am using EMC Storage as volume backend. Recently, I discovered problem logic when detaching volume. As you know, nova-compute try to delete device and mapper of detaching volume. AFAIK, EMC Storage has lun-0 device in compute node. lun-0 means system device of EMS storage. $ls -al /dev/disk/by-path/*lun-0 lrwxrwxrwx 1 root root 9 Feb 24 20:05 ip-x.x.x.x:3260-iscsi-iqn.1992-04.com.emc:cx.ckm00142100690.a0-lun-0 -> ../../sdd lrwxrwxrwx 1 root root 9 Feb 24 20:05 ip-x.x.x.x:3260-iscsi-iqn.1992-04.com.emc:cx.ckm00142100690.b0-lun-0 -> ../../sde lrwxrwxrwx 1 root root 9 Feb 24 20:05 ip-x.x.x.x:3260-iscsi-iqn.1992-04.com.emc:cx.ckm00142100690.a1-lun-0 -> ../../sdi lrwxrwxrwx 1 root root 9 Feb 24 20:05 ip-x.x.x.x:3260-iscsi-iqn.1992-04.com.emc:cx.ckm00142100690.b1-lun-0 -> ../../sdh But nova-compute can delete device of lun-0 when failing to get 'target_lun' https://github.com/openstack/nova/blob/stable/kilo/nova/virt/libvirt/volume.py def _delete_mpath(self, iscsi_properties, multipath_device, ips_iqns): entries = self._get_iscsi_devices() # Loop through ips_iqns to construct all paths iqn_luns = [] for ip, iqn in ips_iqns: iqn_lun = '%s-lun-%s' % (iqn,iscsi_properties.get('target_lun', 0))<-- return 0 (lun-id) when getting value of 'target_lun' i think that it needs to modify that code. ** Affects: nova Importance: Low Assignee: jangpro2 (jangseon-ryu) Status: In Progress ** Changed in: nova Assignee: (unassigned) => jangpro2 (jangseon-ryu) ** Description changed: - target-lun 0 can delete when detaching volume. Environment === - OpenStack Release : Liberty - OS : Ubuntu 14.04.2 LTS - Hypervisor : KVM - Cinder Storage : iSCSI (EMC VNX) Description === I am using EMC Storage as volume backend. Recently, I discovered problem logic when detaching volume. As you know, nova-compute try to delete device and mapper of detaching volume. AFAIK, EMC Storage has lun-0 device in compute node. - lun-0 means system device of EMS storage. + lun-0 means system device of EMS storage. 
$ls -al /dev/disk/by-path/*lun-0 lrwxrwxrwx 1 root root 9 Feb 24 20:05 ip-x.x.x.x:3260-iscsi-iqn.1992-04.com.emc:cx.ckm00142100690.a0-lun-0 -> ../../sdd lrwxrwxrwx 1 root root 9 Feb 24 20:05 ip-x.x.x.x:3260-iscsi-iqn.1992-04.com.emc:cx.ckm00142100690.b0-lun-0 -> ../../sde lrwxrwxrwx 1 root root 9 Feb 24 20:05 ip-x.x.x.x:3260-iscsi-iqn.1992-04.com.emc:cx.ckm00142100690.a1-lun-0 -> ../../sdi lrwxrwxrwx 1 root root 9 Feb 24 20:05 ip-x.x.x.x:3260-iscsi-iqn.1992-04.com.emc:cx.ckm00142100690.b1-lun-0 -> ../../sdh But nova-compute can delete device of lun-0 when failing to get 'target_lun' https://github.com/openstack/nova/blob/stable/kilo/nova/virt/libvirt/volume.py - def _delete_mpath(self, iscsi_properties, multipath_device, ips_iqns): - entries = self._get_iscsi_devices() - # Loop through ips_iqns to construct all paths - iqn_luns = [] - for ip, iqn in ips_iqns: - iqn_lun = '%s-lun-%s' % (iqn, - iscsi_properties.get('target_lun', 0)) <-- return 0 (lun-id) when getting value of 'target_lun' + def _delete_mpath(self, iscsi_properties, multipath_device, ips_iqns): + entries = self._get_iscsi_devices() + # Loop through ips_iqns to construct all paths + iqn_luns = [] + for ip, iqn in ips_iqns: + iqn_lun = '%s-lun-%s' % + (iqn,iscsi_properties.get('target_lun', 0))<-- return 0 (lun-id) when getting value of 'target_lun' i think that it needs to modify that code. -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1575233 Title: target-lun 0 can be delete when detaching volume. Status in OpenStack Compute (nova): In Progress Bug description: target-lun 0 can delete when detaching volume. Environment === - OpenStack Release : Liberty - OS : Ubuntu 14.04.2 LTS - Hypervisor : KVM - Cinder Storage : iSCSI (EMC VNX) Description === I am using EMC Storage as volume backend. Recently, I discovered problem logic when detaching volume. As you know, nova-compute try to delete device and mapper of detaching volume. AFAIK, EMC Storage has lun-0 device in compute node. lun-0 means system device of EMS storage. $ls -al /dev/disk/by-path/*lun-0 lrwxrwxrwx 1 root root 9 Feb 24 20:05 ip-x.x.x.x:3260-iscsi-iqn.1992-04.com.emc:cx.ckm00142100690.a0-lun-0 -> ../../sdd lrwxrwxrwx 1 root root 9 Feb 24 20:05 ip-x.x.x.x:3260
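A hedged sketch (illustrative names, not the actual nova code) of the change the reporter suggests: never fall back to LUN 0 when the connection info has no target_lun, because on EMC VNX the *-lun-0 paths are the array's system LUN devices.

```python
def _iqn_lun_paths(ips_iqns, iscsi_properties):
    """Build the by-path suffixes to delete, skipping the LUN-0 fallback."""
    target_lun = iscsi_properties.get('target_lun')
    if target_lun is None:
        # No LUN in the connection info: guessing 0 would match the
        # storage system LUN devices and remove nodes still in use.
        return []
    return ['%s-lun-%s' % (iqn, target_lun) for _ip, iqn in ips_iqns]
```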
[Yahoo-eng-team] [Bug 1456336] Re: StrongSwan ipsec.conf template is incomplete
Reviewed: https://review.openstack.org/309372 Committed: https://git.openstack.org/cgit/openstack/neutron-vpnaas/commit/?id=e10adef0b1a74e1df39d7d5c79257b9d5b9116b7 Submitter: Jenkins Branch:master commit e10adef0b1a74e1df39d7d5c79257b9d5b9116b7 Author: nick.zhuyj Date: Fri Apr 22 04:33:55 2016 -0500 Strongswan: complete the ipsec.conf Many fields in strongswan ipsec.conf template is not specified. Thus they are used the default value instead of the value user provided. This patch fill those fields in the template. Change-Id: Ibc22db5d75eec6c9508880720dac6acd6197da22 Closes-Bug: #1456336 ** Changed in: neutron Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1456336 Title: StrongSwan ipsec.conf template is incomplete Status in neutron: Fix Released Bug description: After switching from openswan to strongswan our VPN services were not working anymore. I figured out that the strongswan ipsec.conf template is incomplete. Attributes like IKE and IPSEC Policy were missing. After modification of the template everything is working again. I attached my fixed template. Comments do not work for strongswan, so the template has no comments. Sorry for that. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1456336/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1575007] Re: Bad libvirt version 1.2.21 when doing live migration
Closing this out as WONTFIX assuming that only migrateToURI3 will continue to be used for libvirt versions >= 1.2.17

** Changed in: nova
       Status: New => Won't Fix

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1575007

Title:
  Bad libvirt version 1.2.21 when doing live migration

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  Description of problem:
  Unable to change target guest XML during migration, the argument dxml
  in virDomainMigrateToURI2 is not used in libvirt 1.2.21
  check https://bugzilla.redhat.com/show_bug.cgi?id=1295405

  Bug is fixed by libvirt 1.3.1: Jan 17 2016
  check http://libvirt.org/news.html
  libvirt-domain: fix dxml passing in virDomainMigrateToURI2 (Ján Tomko)

  So we need to blacklist libvirt 1.2.21 when using virDomainMigrateToURI2

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1575007/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
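A hedged sketch of the gate implied by the closing comment: with libvirt-python, prefer migrateToURI3 (which carries the destination XML in the params dict) once getLibVersion() reports at least 1.2.17, instead of relying on migrateToURI2's dxml argument. The helper below is illustrative, not the nova driver code.

```python
import libvirt

# getLibVersion() encodes versions as major * 1000000 + minor * 1000 + release.
MIN_MIGRATE_V3 = 1 * 1000000 + 2 * 1000 + 17   # 1.2.17


def live_migrate(dom, dconnuri, dest_xml, flags):
    if dom.connect().getLibVersion() >= MIN_MIGRATE_V3:
        params = {libvirt.VIR_MIGRATE_PARAM_DEST_XML: dest_xml}
        dom.migrateToURI3(dconnuri, params, flags)
    else:
        # Older libvirt: the dxml argument here is what 1.2.21 broke.
        dom.migrateToURI2(dconnuri, None, dest_xml, flags, None, 0)
```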
[Yahoo-eng-team] [Bug 1575225] [NEW] Neutron only permits IPv6 MLDv1 not v2
Public bug reported: IPv6 Multicast Listener Discovery (MLD) v2 [1] is used on recent version of Linux, currently Neutron only permits MLDv1 in the ICMPV6_ALLOWED_TYPES, so duplicate address discovery (DAD) doesn't not actually detect duplicate addresses should Neutron actually enforce ICMPv6 source addresses (bug/1502933). While Neutron should not assign duplicate addresses, instances where duplicate addresses are possible on provider networks between instances and external devices and on user assign addresses when using allowed address pairs. Here is a dump showing duplicate address detection on a recent Linux kernel: $ uname -r 4.4.0-0.bpo.1-amd64 $ sudo ip link add veth0 type veth peer name veth1 $ sudo ip link set veth1 up $ sudo tcpdump -npel -i veth1 & [1] 15528 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on veth1, link-type EN10MB (Ethernet), capture size 262144 bytes $ sudo ip link set veth0 up $ 09:47:38.853762 5e:9b:3c:4f:a3:e0 > 33:33:00:00:00:16, ethertype IPv6 (0x86dd), length 90: :: > ff02::16: HBH ICMP6, multicast listener report v2, 1 group record(s), length 28 09:47:38.853774 b2:29:3a:34:bc:eb > 33:33:00:00:00:16, ethertype IPv6 (0x86dd), length 90: :: > ff02::16: HBH ICMP6, multicast listener report v2, 1 group record(s), length 28 09:47:39.113772 b2:29:3a:34:bc:eb > 33:33:ff:34:bc:eb, ethertype IPv6 (0x86dd), length 78: :: > ff02::1:ff34:bceb: ICMP6, neighbor solicitation, who has fe80::b029:3aff:fe34:bceb, length 24 09:47:39.141766 5e:9b:3c:4f:a3:e0 > 33:33:ff:4f:a3:e0, ethertype IPv6 (0x86dd), length 78: :: > ff02::1:ff4f:a3e0: ICMP6, neighbor solicitation, who has fe80::5c9b:3cff:fe4f:a3e0, length 24 09:47:39.505764 b2:29:3a:34:bc:eb > 33:33:00:00:00:16, ethertype IPv6 (0x86dd), length 90: :: > ff02::16: HBH ICMP6, multicast listener report v2, 1 group record(s), length 28 09:47:39.717759 5e:9b:3c:4f:a3:e0 > 33:33:00:00:00:16, ethertype IPv6 (0x86dd), length 90: :: > ff02::16: HBH ICMP6, multicast listener report v2, 1 group record(s), length 28 09:47:40.113807 b2:29:3a:34:bc:eb > 33:33:00:00:00:16, ethertype IPv6 (0x86dd), length 90: fe80::b029:3aff:fe34:bceb > ff02::16: HBH ICMP6, multicast listener report v2, 1 group record(s), length 28 09:47:40.113827 b2:29:3a:34:bc:eb > 33:33:00:00:00:02, ethertype IPv6 (0x86dd), length 70: fe80::b029:3aff:fe34:bceb > ff02::2: ICMP6, router solicitation, length 16 09:47:40.121756 b2:29:3a:34:bc:eb > 33:33:00:00:00:16, ethertype IPv6 (0x86dd), length 90: fe80::b029:3aff:fe34:bceb > ff02::16: HBH ICMP6, multicast listener report v2, 1 group record(s), length 28 09:47:40.141811 5e:9b:3c:4f:a3:e0 > 33:33:00:00:00:16, ethertype IPv6 (0x86dd), length 90: fe80::5c9b:3cff:fe4f:a3e0 > ff02::16: HBH ICMP6, multicast listener report v2, 1 group record(s), length 28 09:47:40.141836 5e:9b:3c:4f:a3:e0 > 33:33:00:00:00:02, ethertype IPv6 (0x86dd), length 70: fe80::5c9b:3cff:fe4f:a3e0 > ff02::2: ICMP6, router solicitation, length 16 09:47:40.149763 5e:9b:3c:4f:a3:e0 > 33:33:00:00:00:16, ethertype IPv6 (0x86dd), length 90: fe80::5c9b:3cff:fe4f:a3e0 > ff02::16: HBH ICMP6, multicast listener report v2, 1 group record(s), length 28 1. 
https://www.ietf.org/rfc/rfc3810.txt ** Affects: neutron Importance: Undecided Assignee: Dustin Lundquist (dlundquist) Status: In Progress ** Tags: ipv6 ** Description changed: - IPv6 Multicast Listener Discovery (MLD) v2 is used on recent version of - Linux, currently Neutron only permits MLDv1 in the ICMPV6_ALLOWED_TYPES, - so duplicate address discovery (DAD) doesn't not actually detect - duplicate addresses should Neutron actually enforce ICMPv6 source - addresses (bug/1502933). While Neutron should not assign duplicate - addresses, instances where duplicate addresses are possible on provider - networks between instances and external devices and on user assign - addresses when using allowed address pairs. + IPv6 Multicast Listener Discovery (MLD) v2 [1] is used on recent version + of Linux, currently Neutron only permits MLDv1 in the + ICMPV6_ALLOWED_TYPES, so duplicate address discovery (DAD) doesn't not + actually detect duplicate addresses should Neutron actually enforce + ICMPv6 source addresses (bug/1502933). While Neutron should not assign + duplicate addresses, instances where duplicate addresses are possible on + provider networks between instances and external devices and on user + assign addresses when using allowed address pairs. Here is a dump showing duplicate address detection on a recent Linux kernel: $ uname -r 4.4.0-0.bpo.1-amd64 $ sudo ip link add veth0 type veth peer name veth1 $ sudo ip link set veth1 up $ sudo tcpdump -npel -i veth1 & [1] 15528 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on veth1, link-type EN10MB (Ethernet), capture size 262144 bytes $ sudo ip link set veth0 up $ 09:47:38.853762 5e:9b:3c:4f:a3:e0 > 33:33:00:00:00:16, ethert
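A hedged sketch of the kind of change the report suggests; the constant names and their module are assumptions (ICMPV6_ALLOWED_TYPES is named in the report, the rest is illustrative). MLDv2 listener reports are ICMPv6 type 143 per RFC 3810:

    # ICMPv6 types permitted from not-yet-configured (::) source addresses.
    ICMPV6_TYPE_MLD_QUERY = 130
    ICMPV6_TYPE_MLD_REPORT = 131    # MLDv1
    ICMPV6_TYPE_MLD_DONE = 132      # MLDv1
    ICMPV6_TYPE_MLD2_REPORT = 143   # MLDv2, RFC 3810
    ICMPV6_TYPE_NS = 135            # neighbor solicitation
    ICMPV6_TYPE_NA = 136            # neighbor advertisement

    ICMPV6_ALLOWED_TYPES = (ICMPV6_TYPE_MLD_QUERY,
                            ICMPV6_TYPE_MLD_REPORT,
                            ICMPV6_TYPE_MLD_DONE,
                            ICMPV6_TYPE_MLD2_REPORT,
                            ICMPV6_TYPE_NS,
                            ICMPV6_TYPE_NA)

Permitting type 143 would let the MLDv2 reports sent from :: (as in the tcpdump capture above) leave the port even when ICMPv6 source-address enforcement is in place.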
[Yahoo-eng-team] [Bug 1574476] Re: lbaasv2 session_persistence or session-persistence?
** Project changed: neutron => python-neutronclient ** Changed in: python-neutronclient Status: New => Confirmed ** Changed in: python-neutronclient Importance: Undecided => Low ** Tags added: low-hanging-fruit -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1574476 Title: lbaasv2 session_persistence or session-persistence? Status in python-neutronclient: Confirmed Bug description: problem is in Kilo neutron-lbaas branch. When we create a Lbaas pool with --session_persistence it configured ok, we create a Lbaas pool with --session-persistence it configured failed. But we update a Lbaas pool with --session-persistence or --session_persistence it updated OK. [root@opencos2 ~(keystone_admin)]# [root@opencos2 ~(keystone_admin)]# neutron lbaas-pool-create --listener listener500-1 --protocol HTTP --lb-algorithm SOURCE_IP pool500-1 --session-persistence type=dict type='SOURCE_IP' Invalid values_specs type=SOURCE_IP [root@opencos2 ~(keystone_admin)]# [root@opencos2 ~(keystone_admin)]# neutron lbaas-pool-create --listener listener500-1 --protocol HTTP --lb-algorithm SOURCE_IP pool500-1 --session_persistence type=dict type='SOURCE_IP' Created a new pool: +-++ | Field | Value | +-++ | admin_state_up | True | | description || | healthmonitor_id|| | id | 64bed1f2-ff02-4b12-bdfa-1904079786be | | lb_algorithm| SOURCE_IP | | listeners | {"id": "162c70aa-175d-473a-b13a-e3c335a0a9e1"} | | members || | name| pool500-1 | | protocol| HTTP | | session_persistence | {"cookie_name": null, "type": "SOURCE_IP"} | | tenant_id | be58eaec789d44f296a65f96b944a9f5 | +-++ [root@opencos2 ~(keystone_admin)]# [root@opencos2 ~(keystone_admin)]# [root@opencos2 ~(keystone_admin)]# neutron lbaas-pool-update pool500-1 --session_persistence type=dict type='HTTP_COOKIE' Updated pool: pool500-1 [root@opencos2 ~(keystone_admin)]# [root@opencos2 ~(keystone_admin)]# neutron lbaas-pool-update pool500-1 --session-persistence type=dict type='SOURCE_IP' Updated pool: pool500-1 [root@opencos2 ~(keystone_admin)]# [root@opencos2 ~(keystone_admin)]# To manage notifications about this bug go to: https://bugs.launchpad.net/python-neutronclient/+bug/1574476/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
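One way a client could accept both spellings is to register them as aliases of the same option; a hedged argparse sketch (not neutronclient's actual parser, which uses its own values-spec handling):

    import argparse

    parser = argparse.ArgumentParser()
    # Both '--session-persistence' and '--session_persistence' map to the
    # same destination, so create and update behave identically.
    parser.add_argument('--session-persistence', '--session_persistence',
                        dest='session_persistence',
                        help='session persistence spec, e.g. type=SOURCE_IP')
    args = parser.parse_args(['--session_persistence', 'type=SOURCE_IP'])
    print(args.session_persistence)  # -> type=SOURCE_IP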
[Yahoo-eng-team] [Bug 1573949] Re: lbaas: better to close a socket explicitly rather than implicitly when they are garbage-collected
This is being reported against an lbaas v1 driver, which is deprecated and pending removal in Newton. If you want to submit a code change, a reviewer might look at it, but we're not accepting bugs/blueprints/specs for lbaas v1. ** Changed in: neutron Status: New => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1573949 Title: lbaas: better to close a socket explicitly rather than implicitly when they are garbage-collected Status in neutron: Won't Fix Bug description: https://github.com/openstack/neutron- lbaas/blob/master/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py#L205 : def _get_stats_from_socket(self, socket_path, entity_type): try: s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) s.connect(socket_path) s.send('show stat -1 %s -1\n' % entity_type) raw_stats = '' chunk_size = 1024 while True: chunk = s.recv(chunk_size) raw_stats += chunk if len(chunk) < chunk_size: break return self._parse_stats(raw_stats) except socket.error as e: LOG.warning(_LW('Error while connecting to stats socket: %s'), e) return {} in this function, a socket connection is created but it is not closed explicitly. It is better to close it when all things have been done To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1573949/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
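For anyone who does submit that change, a hedged sketch of the explicit-close pattern, mirroring the snippet quoted above (contextlib.closing guarantees close() runs even on an early break or an exception; LOG and _LW are the module-level objects the driver already uses):

    import socket
    from contextlib import closing

    def _get_stats_from_socket(self, socket_path, entity_type):
        try:
            with closing(socket.socket(socket.AF_UNIX,
                                       socket.SOCK_STREAM)) as s:
                s.connect(socket_path)
                s.send('show stat -1 %s -1\n' % entity_type)
                raw_stats = ''
                chunk_size = 1024
                while True:
                    chunk = s.recv(chunk_size)
                    raw_stats += chunk
                    if len(chunk) < chunk_size:
                        break
            return self._parse_stats(raw_stats)
        except socket.error as e:
            LOG.warning(_LW('Error while connecting to stats socket: %s'), e)
            return {}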
[Yahoo-eng-team] [Bug 1574985] Re: Update security group using Heat
Names are not unique for SG's, so it depends on if its a put or post. Was this intended in the heat template? ** Project changed: neutron => heat -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1574985 Title: Update security group using Heat Status in heat: New Bug description: I created a security group using Horizon dashboard. Then, I created a heat template with the same security group name with some new rules so that my security group gets updatee with new rules. However, heat template created a new security group instead of updating the existing one. Is this a bug or an unsupported feature ? Below is my yaml file heat_template_version: 2013-05-23 description: Create a security group parameters: sec_group: type: string default: test-secgroup resources: security_group: type: OS::Neutron::SecurityGroup properties: name: { get_param: sec_group } rules: - remote_ip_prefix: 0.0.0.0/0 protocol: tcp port_range_min: 22 port_range_max: 22 - remote_ip_prefix: 0.0.0.0/0 protocol: icmp To manage notifications about this bug go to: https://bugs.launchpad.net/heat/+bug/1574985/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1575180] Re: logging does not work
** Project changed: neutron => python-openstackclient -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1575180 Title: logging does not work Status in python-openstackclient: New Bug description: I follow the link http://docs.openstack.org/developer/python-openstackclient/configuration.html#logging-settings to enable openstackclient syslog, here is my cloud.yaml contents: juno@bgpvpn:~$ cat /etc/openstack/clouds.yaml clouds: devstack: auth: auth_url: http://192.168.122.102:35357 password: blade123 project_domain_id: default project_name: demo user_domain_id: default username: demo identity_api_version: '3' region_name: RegionOne volume_api_version: '2' operation_log: logging: TRUE file: /tmp/openstackclient_admin.log level: debug devstack-admin: auth: auth_url: http://192.168.122.102:35357 password: blade123 project_domain_id: default project_name: admin user_domain_id: default username: admin identity_api_version: '3' region_name: RegionOne volume_api_version: '2' operation_log: logging: TRUE file: /tmp/openstackclient_admin.log level: debug devstack-alt: auth: auth_url: http://192.168.122.102:35357 password: blade123 project_domain_id: default project_name: alt_demo user_domain_id: default username: alt_demo identity_api_version: '3' region_name: RegionOne volume_api_version: '2' juno@bgpvpn:~$ Then I create a network: juno@bgpvpn:~$ openstack --os-cloud devstack-admin network create juno +---+--+ | Field | Value| +---+--+ | admin_state_up| UP | | availability_zone_hints | | | availability_zones| | | created_at| 2016-04-26 12:48:49+00:00| | description | | | headers | | | id| fe8a5d06-beb9-4d8a-974e-def14596bc0d | | ipv4_address_scope| None | | ipv6_address_scope| None | | mtu | 1450 | | name | juno | | port_security_enabled | True | | project_id| 4503c1d4f54b48cdb941f4fa43cf4916 | | provider:network_type | vxlan| | provider:physical_network | None | | provider:segmentation_id | 1029 | | router_external | Internal | | shared| False| | status| ACTIVE | | subnets | | | tags | [] | | updated_at| 2016-04-26 12:48:49+00:00| +---+--+ But there is no logs generated. juno@bgpvpn:~$ cat /tmp/openstackclient_admin.log cat: /tmp/openstackclient_admin.log: No such file or directory juno@bgpvpn:~$ To manage notifications about this bug go to: https://bugs.launchpad.net/python-openstackclient/+bug/1575180/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1574565] Re: tempest test_preserve_preexisting_port fails
i added nova to the bug because it looks like a nova bug. ie. missing "constants.DNS_INTEGRATION in self.extensions" check. ** Also affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1574565 Title: tempest test_preserve_preexisting_port fails Status in networking-midonet: Confirmed Status in OpenStack Compute (nova): New Bug description: tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_preserve_preexisting_port failed like the following error: http://logs.openstack.org/43/306343/4/check/gate-tempest-dsvm- networking-midonet-v2/193b941/ Traceback (most recent call last): File "tempest/test.py", line 113, in wrapper return f(self, *func_args, **func_kwargs) File "tempest/scenario/test_network_basic_ops.py", line 662, in test_preserve_preexisting_port self.assertEqual('', port['device_id']) File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 406, in assertEqual self.assertThat(observed, matcher, message) File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 493, in assertThat raise mismatch_error testtools.matchers._impl.MismatchError: '' != u'260b6313-90e9-47f7-90b8-74f4a75877ab' I wonder if the following neutron server error is related and it is related to the current nova change(*): 2016-04-25 07:29:01.069 21424 INFO neutron.api.v2.resource [req-408f3e48-a219-4748-908c-7eeeb28d932c neutron -] update failed (client error): Unrecognized attribute(s) 'dns_name' 2016-04-25 07:29:01.070 21424 INFO neutron.wsgi [req-408f3e48-a219-4748-908c-7eeeb28d932c neutron -] 127.0.0.1 - - [25/Apr/2016 07:29:01] "PUT /v2.0/ports/6476e377-74a4-4872-8a7f-f792b3420794.json HTTP/1.1" 400 332 0.003440 (*) change-id: I65edb33b955a91d1701fc91cb9fae0a5f26d4e46 To manage notifications about this bug go to: https://bugs.launchpad.net/networking-midonet/+bug/1574565/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1575180] [NEW] logging does not work
Public bug reported: I follow the link http://docs.openstack.org/developer/python-openstackclient/configuration.html#logging-settings to enable openstackclient syslog, here is my cloud.yaml contents: juno@bgpvpn:~$ cat /etc/openstack/clouds.yaml clouds: devstack: auth: auth_url: http://192.168.122.102:35357 password: blade123 project_domain_id: default project_name: demo user_domain_id: default username: demo identity_api_version: '3' region_name: RegionOne volume_api_version: '2' operation_log: logging: TRUE file: /tmp/openstackclient_admin.log level: debug devstack-admin: auth: auth_url: http://192.168.122.102:35357 password: blade123 project_domain_id: default project_name: admin user_domain_id: default username: admin identity_api_version: '3' region_name: RegionOne volume_api_version: '2' operation_log: logging: TRUE file: /tmp/openstackclient_admin.log level: debug devstack-alt: auth: auth_url: http://192.168.122.102:35357 password: blade123 project_domain_id: default project_name: alt_demo user_domain_id: default username: alt_demo identity_api_version: '3' region_name: RegionOne volume_api_version: '2' juno@bgpvpn:~$ Then I create a network: juno@bgpvpn:~$ openstack --os-cloud devstack-admin network create juno +---+--+ | Field | Value| +---+--+ | admin_state_up| UP | | availability_zone_hints | | | availability_zones| | | created_at| 2016-04-26 12:48:49+00:00| | description | | | headers | | | id| fe8a5d06-beb9-4d8a-974e-def14596bc0d | | ipv4_address_scope| None | | ipv6_address_scope| None | | mtu | 1450 | | name | juno | | port_security_enabled | True | | project_id| 4503c1d4f54b48cdb941f4fa43cf4916 | | provider:network_type | vxlan| | provider:physical_network | None | | provider:segmentation_id | 1029 | | router_external | Internal | | shared| False| | status| ACTIVE | | subnets | | | tags | [] | | updated_at| 2016-04-26 12:48:49+00:00| +---+--+ But there is no logs generated. juno@bgpvpn:~$ cat /tmp/openstackclient_admin.log cat: /tmp/openstackclient_admin.log: No such file or directory juno@bgpvpn:~$ ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1575180 Title: logging does not work Status in neutron: New Bug description: I follow the link http://docs.openstack.org/developer/python-openstackclient/configuration.html#logging-settings to enable openstackclient syslog, here is my cloud.yaml contents: juno@bgpvpn:~$ cat /etc/openstack/clouds.yaml clouds: devstack: auth: auth_url: http://192.168.122.102:35357 password: blade123 project_domain_id: default project_name: demo user_domain_id: default username: demo identity_api_version: '3' region_name: RegionOne volume_api_version: '2' operation_log: logging: TRUE file: /tmp/openstackclient_admin.log level: debug devstack-admin: auth: auth_url: http://192.168.122.102:35357 password: blade123 project_domain_id: default project_name: admin user_domain_id: default username: admin identity_api_version: '3' region_name: RegionOne volume_api_version: '2' operation_log: logging: TRUE file: /tmp/openstackclient_admin.log level: debug devstack-alt: auth: auth_url: http://192.168.122.102:35357 password: blade123 project_domain_id: default project_name: alt_demo user_domain_id: defau
[Yahoo-eng-team] [Bug 1565752] Re: Too many PIPEs are created when subprocess.Open fails
** Changed in: neutron Status: Confirmed => Invalid ** Changed in: neutron Assignee: j_king (james-agentultra) => (unassigned) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1565752 Title: Too many PIPEs are created when subprocess.Open fails Status in neutron: Invalid Bug description: 1. How to reproduce: Set max process (soft, hard) for particular user Example: modify file /etc/security/limits.conf hunters hardnproc 70 hunters softnproc 70 And then, start neutron-openvswitch-agent with this user. Try to start many another applications to get all the free processes, then the error log will be thrown. In root user, check number of current open files of neutron-openvswitch-agent service. # ps -ef | grep neutron-openvswitch 501 29401 1 2 Mar30 ?03:13:53 /usr/bin/python /usr/bin/neutron-openvswitch-agent ... # lsof -p 29401 neutron-o 29401 openstack 10r FIFO0,8 0t0 3849643462 pipe neutron-o 29401 openstack 11w FIFO0,8 0t0 3849643462 pipe neutron-o 29401 openstack 12r FIFO0,8 0t0 3849643463 pipe neutron-o 29401 openstack 13w FIFO0,8 0t0 3849643463 pipe neutron-o 29401 openstack 14r FIFO0,8 0t0 3849643464 pipe neutron-o 29401 openstack 15w FIFO0,8 0t0 3849643464 pipe ... Too many PIPE are created. 2. Summary: At weekend, when server runs at high load for rotating logs or something else, neutron-openvswitch-agent gets error: 2016-04-04 18:05:33.942 7817 ERROR neutron.agent.common.ovs_lib [req-42b082b1-2fbf-48a2-b2f3-3b7d774141f0 - - - - -] Unable to execute ['ovs-ofctl', 'dump-flows', 'br-int', 'table=23']. Exception: [Errno 11] Resource temporarily unavailable 2016-04-04 18:05:33.944 7817 ERROR neutron.agent.common.ovs_lib [req-42b082b1-2fbf-48a2-b2f3-3b7d774141f0 - - - - -] Traceback (most recent call last): File "/home/hunters/neutron-7.0.0/neutron/agent/common/ovs_lib.py", line 226, in run_ofctl process_input=process_input) File "/home/hunters/neutron-7.0.0/neutron/agent/linux/utils.py", line 120, in execute addl_env=addl_env) File "/home/hunters/neutron-7.0.0/neutron/agent/linux/utils.py", line 89, in create_process stderr=subprocess.PIPE) File "/home/hunters/neutron-7.0.0/neutron/common/utils.py", line 199, in subprocess_popen close_fds=close_fds, env=env) File "/home/hunters/neutron-7.0.0/.venv/local/lib/python2.7/site-packages/eventlet/green/subprocess.py", line 53, in init subprocess_orig.Popen.init(self, args, 0, argss, *kwds) File "/usr/lib/python2.7/subprocess.py", line 710, in init errread, errwrite) File "/usr/lib/python2.7/subprocess.py", line 1223, in _execute_child self.pid = os.fork() OSError: [Errno 11] Resource temporarily unavailable And then, the PIPEs are not closed. About 700 PIPE are created. After 2 week, it throws error "Too many open files" and then neutron- openvswitch-agent stops. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1565752/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1515841] Re: Link rot in identity-api-v3-os-inherit-ext
** Changed in: openstack-api-site Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1515841 Title: Link rot in identity-api-v3-os-inherit-ext Status in OpenStack Identity (keystone): Fix Released Status in openstack-api-site: Invalid Bug description: When I visit http://specs.openstack.org/openstack/keystone- specs/api/v3/identity-api-v3-os-inherit-ext.html, I find almost all the links are dead. e.g. Relationship: http://docs.openstack.org/api /openstack-identity/3/ext/OS- INHERIT/1.0/rel/project_user_role_inherited_to_projects To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1515841/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1575146] [NEW] ovs port status should be the same as tap.
Public bug reported: In some cases, when the physnet is down, the VM should know about it, but currently it does not. So we may add a function for this. Maybe we should add a configuration option: when 'True', the port in the VM should be down when the physnet on the host is down. ** Affects: neutron Importance: Undecided Assignee: Yan Songming (songmingyan) Status: New ** Changed in: neutron Assignee: (unassigned) => Yan Songming (songmingyan) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1575146 Title: ovs port status should be the same as tap. Status in neutron: New Bug description: In some cases, when the physnet is down, the VM should know about it, but currently it does not. So we may add a function for this. Maybe we should add a configuration option: when 'True', the port in the VM should be down when the physnet on the host is down. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1575146/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1575154] [NEW] Ubuntu 16.04, Unexpected API Error, , Nova
Public bug reported: Creating an instance fails: Command: openstack server create --flavor 42b791ba-3330-4a8c-b8bc-5b8afe94b7ce --image 6b5bfd61-8c35-44e0-be49-da12a6988cd1 --nic net-id=16ed9203-beaf-42f0-bf37-b7bdf68839f1 --security-group default --key-name testinstance public-instance --debug (I'm following: http://docs.openstack.org/liberty/install-guide-ubuntu /launch-instance-public.html, all commands nova flavor-list, nova image- list, work without issues) boot_args: ['public-instance', , ] boot_kwargs: {'files': {}, 'userdata': None, 'availability_zone': None, 'nics': [{'port-id': '', 'net-id': u'16ed9203-beaf-42f0-bf37-b7bdf68839f1', 'v4-fixed-ip': '', 'v6-fixed-ip': ''}], 'block_device_mapping': {}, 'max_count': 1, 'meta': None, 'key_name': 'testinstance', 'min_count': 1, 'scheduler_hints': {}, 'reservation_id': None, 'security_groups': ['default'], 'config_drive': None} REQ: curl -g -i -X POST http://172.24.33.142:8774/v2.1/172f573ed78a4fc7b3460ea4a3ac6dbb/servers -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}fadf0d364f2eb45845658431ef42d15ddf1d4e8a" -d '{"server": {"name": "public-instance", "imageRef": "6b5bfd61-8c35-44e0-be49-da12a6988cd1", "key_name": "testinstance", "flavorRef": "42b791ba-3330-4a8c-b8bc-5b8afe94b7ce", "max_count": 1, "min_count": 1, "networks": [{"uuid": "16ed9203-beaf-42f0-bf37-b7bdf68839f1"}], "security_groups": [{"name": "default"}]}}' "POST /v2.1/172f573ed78a4fc7b3460ea4a3ac6dbb/servers HTTP/1.1" 500 216 RESP: [500] Content-Length: 216 X-Compute-Request-Id: req-9bc78003-48a4-4059-9dad-ed7ee468c931 Vary: X-OpenStack-Nova-API-Version Connection: keep-alive X-Openstack-Nova-Api-Version: 2.1 Date: Tue, 26 Apr 2016 12:01:13 GMT Content-Type: application/json; charset=UTF-8 RESP BODY: {"computeFault": {"message": "Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.\n", "code": 500}} Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-9bc78003-48a4-4059-9dad-ed7ee468c931) Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/cliff/app.py", line 374, in run_subcommand result = cmd.run(parsed_args) File "/usr/lib/python2.7/dist-packages/openstackclient/common/command.py", line 38, in run return super(Command, self).run(parsed_args) File "/usr/lib/python2.7/dist-packages/cliff/display.py", line 92, in run column_names, data = self.take_action(parsed_args) File "/usr/lib/python2.7/dist-packages/openstackclient/compute/v2/server.py", line 519, in take_action server = compute_client.servers.create(*boot_args, **boot_kwargs) File "/usr/lib/python2.7/dist-packages/novaclient/v2/servers.py", line 1233, in create **boot_kwargs) File "/usr/lib/python2.7/dist-packages/novaclient/v2/servers.py", line 667, in _boot return_raw=return_raw, **kwargs) File "/usr/lib/python2.7/dist-packages/novaclient/base.py", line 345, in _create resp, body = self.api.client.post(url, body=body) File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 179, in post return self.request(url, 'POST', **kwargs) File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 94, in request raise exceptions.from_response(resp, body, url, method) ClientException: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. 
(HTTP 500) (Request-ID: req-9bc78003-48a4-4059-9dad-ed7ee468c931) clean_up CreateServer: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-9bc78003-48a4-4059-9dad-ed7ee468c931) Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/openstackclient/shell.py", line 118, in run ret_val = super(OpenStackShell, self).run(argv) File "/usr/lib/python2.7/dist-packages/cliff/app.py", line 255, in run result = self.run_subcommand(remainder) File "/usr/lib/python2.7/dist-packages/openstackclient/shell.py", line 153, in run_subcommand ret_value = super(OpenStackShell, self).run_subcommand(argv) File "/usr/lib/python2.7/dist-packages/cliff/app.py", line 374, in run_subcommand result = cmd.run(parsed_args) File "/usr/lib/python2.7/dist-packages/openstackclient/common/command.py", line 38, in run return super(Command, self).run(parsed_args) File "/usr/lib/python2.7/dist-packages/cliff/display.py", line 92, in run column_names, data = self.take_action(parsed_args) File "/usr/lib/python2.7/dist-packages/openstackclient/compute/v2/server.py", line 519, in take_action server = compute_client.servers.create(*boot_args, **boot_kwargs) File "/usr/lib/python2.7/dist-packages/novaclient/v2/servers.py", line 1233, in create **boot
[Yahoo-eng-team] [Bug 1575134] [NEW] Same physnet has the same mac address
Public bug reported: neutron net-show d973799a-c900-47ea-a369-a5610b43370c +---+--+ | Field | Value| +---+--+ | admin_state_up| True | | bandwidth | 0| | id| d973799a-c900-47ea-a369-a5610b43370c | | mtu | 1500 | | name | test1| | provider:network_type | vlan | | provider:physical_network | physnet1 | | provider:segmentation_id | 83 | | router:external | False| | shared| False| | status| ACTIVE | | subnets | | | tenant_id | 9c08bbe9af9c4a1ca3bbfb7f660b5909 | | vlan_transparent | | +---+--+ [root@tfg162 ~(keystone_admin)]# killall screen; killall python^C [root@tfg162 ~(keystone_admin)]# neutron port-create test1 --name bandwidth --binding:vnic-type direct --mac-address 00:01:02:03:04:05 Created a new port: +-+--+ | Field | Value| +-+--+ | admin_state_up | True | | bandwidth | 0| | binding:host_id | | | binding:profile | {} | | binding:vif_details | {} | | binding:vif_type| unbound | | binding:vnic_type | direct | | device_id | | | device_owner| | | fixed_ips | | | id | 393fcfde-21db-44b7-967c-6d741432d4ab | | mac_address | 00:01:02:03:04:05| | name| bandwidth| | network_id | d973799a-c900-47ea-a369-a5610b43370c | | status | DOWN | | tenant_id | 9c08bbe9af9c4a1ca3bbfb7f660b5909 | +-+--+ [root@tfg162 ~(keystone_admin)]# neutron port-create test2 --name bandwidth --binding:vnic-type direct --mac-address 00:01:02:03:04:05 Created a new port: +-+--+ | Field | Value| +-+--+ | admin_state_up | True | | bandwidth | 0| | binding:host_id | | | binding:profile | {} | | binding:vif_details | {} | | binding:vif_type| unbound | | binding:vnic_type | direct | | device_id | | | device_owner| | | fixed_ips | | | id | dfb28b9f-c713-4b95-b942-97c1a7ea8b7a | | mac_address | 00:01:02:03:04:05| | name| bandwidth| | network_id | 9d3f8b14-69d1-46b0-8636-a78bd912283e | | status | DOWN | | tenant_id | 9c08bbe9af9c4a1ca3bbfb7f660b5909 | +-+--+ but sriov NIC(such as intel 82599)don’t support the vf has the same mac,that will make vf can‘t rx packet even if this port is in different network。 ** Affects: neutron Importance: Undecided Assignee: Yan Songming (songmingyan) Status: New ** Changed in: neutron Assignee: (unassigned) => Yan Songming (songmingyan) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1575134 Title: Same physnet has the same mac address Status in neutron: New Bug description: neutron net-show d973799a-c900-47ea-a369-a5610b43370c +---+--+ | Field | Value
[Yahoo-eng-team] [Bug 1575106] [NEW] Keystone crash with coredump during apache stop
Public bug reported: Hey, i have lots of coredump from keystone during apache stop/restart ~# keystone --version 1.3.1 (gdb) where #0 0x7f32631c7730 in _PyTrash_thread_destroy_chain () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 #1 0x7f326317ff4e in PyEval_EvalFrameEx () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 #2 0x7f326318154d in PyEval_EvalCodeEx () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 #3 0x7f326317fdd8 in PyEval_EvalFrameEx () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 #4 0x7f326318154d in PyEval_EvalCodeEx () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 #5 0x7f326317fdd8 in PyEval_EvalFrameEx () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 #6 0x7f326318154d in PyEval_EvalCodeEx () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 #7 0x7f32631b67a5 in ?? () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 #8 0x7f3263122d43 in PyObject_Call () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 #9 0x7f326317c3b1 in PyEval_EvalFrameEx () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 #10 0x7f3263180059 in PyEval_EvalFrameEx () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 #11 0x7f3263180059 in PyEval_EvalFrameEx () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 #12 0x7f326318154d in PyEval_EvalCodeEx () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 #13 0x7f32631b66d0 in ?? () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 #14 0x7f3263122d43 in PyObject_Call () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 #15 0x7f32630ae7bd in ?? () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 #16 0x7f3263122d43 in PyObject_Call () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 #17 0x7f326319b577 in PyEval_CallObjectWithKeywords () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 #18 0x7f3263103f92 in ?? () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 #19 0x7f3269a8f182 in start_thread (arg=0x7f32414e3700) at pthread_create.c:312 #20 0x7f32697bc47d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111 (gdb) bt full #0 0x7f32631c7730 in _PyTrash_thread_destroy_chain () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 No symbol table info available. #1 0x7f326317ff4e in PyEval_EvalFrameEx () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 No symbol table info available. #2 0x7f326318154d in PyEval_EvalCodeEx () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 No symbol table info available. #3 0x7f326317fdd8 in PyEval_EvalFrameEx () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 No symbol table info available. #4 0x7f326318154d in PyEval_EvalCodeEx () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 No symbol table info available. #5 0x7f326317fdd8 in PyEval_EvalFrameEx () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 No symbol table info available. #6 0x7f326318154d in PyEval_EvalCodeEx () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 No symbol table info available. #7 0x7f32631b67a5 in ?? () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 No symbol table info available. #8 0x7f3263122d43 in PyObject_Call () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 No symbol table info available. #9 0x7f326317c3b1 in PyEval_EvalFrameEx () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 No symbol table info available. #10 0x7f3263180059 in PyEval_EvalFrameEx () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 No symbol table info available. #11 0x7f3263180059 in PyEval_EvalFrameEx () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 No symbol table info available. 
#12 0x7f326318154d in PyEval_EvalCodeEx () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 No symbol table info available. #13 0x7f32631b66d0 in ?? () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 No symbol table info available. #14 0x7f3263122d43 in PyObject_Call () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 No symbol table info available. #15 0x7f32630ae7bd in ?? () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 No symbol table info available. #16 0x7f3263122d43 in PyObject_Call () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 No symbol table info available. #17 0x7f326319b577 in PyEval_CallObjectWithKeywords () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 No symbol table info available. #18 0x7f3263103f92 in ?? () from /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 No symbol table info available. #19 0x7f3269a8f182 in start_thread (arg=0x7f32414e3700) at pthread_create.c:312 __res = pd = 0x7f32414e3700 now = unwind_buf = {cancel_jmp_buf = {{jmp_buf = {139853820737280, -3495166348351394815, 0, 0, 139853820737984, 139853820737280, 3538191194546613249, 3538281094028176385}, mask_was_saved = 0}}, priv = { pad = {0x0, 0x0, 0x0, 0x0}, d
[Yahoo-eng-team] [Bug 1575022] Re: Editing a flavor changes its ID
This is by design, as nova does not provide a way of updating a flavor, only deleting and creating them (http://developer.openstack.org/api-ref-compute-v2.1.html#os-flavors-v2.1). Horizon "updates" a flavor by deleting the old one and creating a new one with the updated parameters, so the ID will always change on update because it is not really an update. ** Changed in: horizon Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1575022 Title: Editing a flavor changes its ID Status in OpenStack Dashboard (Horizon): Invalid Bug description: After editing a flavor in Mitaka Horizon, the flavor ID always changes. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1575022/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
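A hedged sketch of that delete-then-recreate pattern with python-novaclient ('nova' is assumed to be an authenticated Client; the attribute set is trimmed for illustration and Horizon's actual form passes more fields):

    old = nova.flavors.get(flavor_id)
    nova.flavors.delete(old)
    # flavorid defaults to 'auto', so the replacement gets a freshly
    # generated ID -- which is why an "edit" appears to change the ID.
    new = nova.flavors.create(name=old.name, ram=old.ram,
                              vcpus=old.vcpus, disk=old.disk)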
[Yahoo-eng-team] [Bug 1575057] [NEW] 'domain' is not honored in local.group mapping
Public bug reported: The JSON schema for Federation mapping in Mitaka doesn't allow 'domain' to be added to the local.group, but the code explicitly requests it: 2016-04-26 10:54:30.614 28203 ERROR keystone.common.wsgi File "/usr/lib/python2.7/dist-packages/keystone/federation/utils.py", line 623, in _transform 2016-04-26 10:54:30.614 28203 ERROR keystone.common.wsgi domain = (group['domain'].get('name') or 2016-04-26 10:54:30.614 28203 ERROR keystone.common.wsgi KeyError: 'domain' ** Affects: keystone Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone). https://bugs.launchpad.net/bugs/1575057 Title: 'domain' is not honored in local.group mapping Status in OpenStack Identity (keystone): New Bug description: The JSON schema for Federation mapping in Mitaka doesn't allow 'domain' to be added to the local.group, but the code explicitly requests it: 2016-04-26 10:54:30.614 28203 ERROR keystone.common.wsgi File "/usr/lib/python2.7/dist-packages/keystone/federation/utils.py", line 623, in _transform 2016-04-26 10:54:30.614 28203 ERROR keystone.common.wsgi domain = (group['domain'].get('name') or 2016-04-26 10:54:30.614 28203 ERROR keystone.common.wsgi KeyError: 'domain' To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1575057/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
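For illustration, the shape of the local rule that triggers the KeyError is roughly the following (written as a Python dict; group and domain names are placeholders). utils._transform expects the nested 'domain' object even though the Mitaka schema rejects it:

    local_rule = {
        "group": {
            "name": "federated-users",       # placeholder group name
            "domain": {"name": "Default"}    # the key _transform looks up
        }
    }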
[Yahoo-eng-team] [Bug 1575033] [NEW] iptables-restore fails with RuntimeError for ipset
Public bug reported: The following Trace is seen ovs_neutron_agent while running functional tests http://logs.openstack.org/59/307159/5/check/gate-neutron-dsvm- fullstack/e1f25d4/logs/TestDVRL3Agent.test_snat_and_floatingip/neutron- openvswitch-agent--2016-04-22--12-33-51-032511.log.txt.gz 2016-04-22 12:34:33.670 27936 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-f775b822-c40d-4d94-a2ec-005fb8b038fb - - - - -] Error while processing VIF ports 2016-04-22 12:34:33.670 27936 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most recent call last): 2016-04-22 12:34:33.670 27936 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/opt/stack/new/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1992, in rpc_loop 2016-04-22 12:34:33.670 27936 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent port_info, ovs_restarted) 2016-04-22 12:34:33.670 27936 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/opt/stack/new/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1623, in process_network_ports 2016-04-22 12:34:33.670 27936 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent port_info.get('updated', set())) 2016-04-22 12:34:33.670 27936 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/opt/stack/new/neutron/neutron/agent/securitygroups_rpc.py", line 292, in setup_port_filters 2016-04-22 12:34:33.670 27936 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent self.prepare_devices_filter(new_devices) 2016-04-22 12:34:33.670 27936 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/opt/stack/new/neutron/neutron/agent/securitygroups_rpc.py", line 147, in decorated_function 2016-04-22 12:34:33.670 27936 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent *args, **kwargs) 2016-04-22 12:34:33.670 27936 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/opt/stack/new/neutron/neutron/agent/securitygroups_rpc.py", line 172, in prepare_devices_filter 2016-04-22 12:34:33.670 27936 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent self.firewall.prepare_port_filter(device) 2016-04-22 12:34:33.670 27936 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__ 2016-04-22 12:34:33.670 27936 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent self.gen.next() 2016-04-22 12:34:33.670 27936 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/opt/stack/new/neutron/neutron/agent/firewall.py", line 129, in defer_apply 2016-04-22 12:34:33.670 27936 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent self.filter_defer_apply_off() 2016-04-22 12:34:33.670 27936 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/opt/stack/new/neutron/neutron/agent/linux/iptables_firewall.py", line 824, in filter_defer_apply_off 2016-04-22 12:34:33.670 27936 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/opt/stack/new/neutron/neutron/agent/linux/iptables_firewall.py", line 824, in filter_defer_apply_off 2016-04-22 12:34:33.670 27936 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent self.iptables.defer_apply_off() 2016-04-22 12:34:33.670 27936 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/opt/stack/new/neutron/neutron/agent/linux/iptables_manager.py", line 468, in defer_apply_off 2016-04-22 12:34:33.670 27936 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent self._apply() 2016-04-22 12:34:33.670 27936 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/opt/stack/new/neutron/neutron/agent/linux/iptables_manager.py", line 482, in _apply 2016-04-22 12:34:33.670 27936 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent return self._apply_synchronized() 2016-04-22 12:34:33.670 27936 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/opt/stack/new/neutron/neutron/agent/linux/iptables_manager.py", line 559, in _apply_synchronized 2016-04-22 12:34:33.670 27936 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent '\n'.join(log_lines)) 2016-04-22 12:34:33.670 27936 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent File "/opt/stack/new/neutron/.tox/dsvm-fullstack/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__ 2016-04-22 12:34:33.670 27936 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent self.force_reraise() 2016-04-22 12:34:33.670 27936 ERROR neutron.plugins.ml2.drivers.openvswit
[Yahoo-eng-team] [Bug 1574750] Re: Full table scan on "ports" table lookup by "device_id"
Reviewed: https://review.openstack.org/310049 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=3fb07b662930889b52e9ed0bc5a5616212aca46c Submitter: Jenkins Branch:master commit 3fb07b662930889b52e9ed0bc5a5616212aca46c Author: Ilya Chukhnakov Date: Mon Apr 25 22:16:54 2016 +0300 Add device_id index to Port Some 'Port' queries use 'device_id' column for lookup. Such queries could be observed in database query log (at least) during instance launch. In the absence of 'device_id' index that leads to full table scan. That causes unnecessary database load and impacts query response time. Change-Id: If42b7d3265e216d393d3ab8c172b97637af908cc Closes-Bug: #1574750 ** Changed in: neutron Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1574750 Title: Full table scan on "ports" table lookup by "device_id" Status in neutron: Fix Released Bug description: Current Neutron database model does not define an index for Port.device_id column. However observing the MySQL query log one could notice queries that would benefit from such an index: # sed -n "/WHERE.*device_id/s/'[^']*'//gp" < /var/lib/mysql/$DB_HOSTNAME.log|sort|uniq -c 34 WHERE ports.device_id IN () 78 WHERE ports.tenant_id IN () AND ports.device_id IN () Without that index the database is currently forced to use the full scan table access path (or potentially less selective 'tenant_id' index for the second query) which has suboptimal performance. Pre-conditions: Devstack (master) configured with Neutron networking (from Devstack guide http://docs.openstack.org/developer/devstack/guides/neutron.html #devstack-configuration). Neutron@master:91d95197d892356bd1ab8a96966c11e97d78441b Steps to reproduce: 0. enable MySQL query logging unless already enabled (set global general_log = 'ON') 1. launch new instance 2. observe MySQL log file for queries having ports.device_id in WHERE clause 3. run EXPLAIN query plan for such queries and observe the full scan table access path for 'ports' table To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1574750/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
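For reference, a hedged sketch of what adding such an index looks like as an Alembic migration (revision identifiers omitted; the merged patch may differ in naming and also sets index=True on the model column):

    from alembic import op

    def upgrade():
        op.create_index(op.f('ix_ports_device_id'), 'ports', ['device_id'])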
[Yahoo-eng-team] [Bug 1543169] Re: Nova os-volume-types endpoint doesn't exist
Nova api has moved to nova code base. ** Changed in: nova Status: Invalid => Confirmed ** Changed in: openstack-api-site Status: Confirmed => Invalid ** Changed in: nova Assignee: (unassigned) => Sharat Sharma (sharat-sharma) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1543169 Title: Nova os-volume-types endpoint doesn't exist Status in OpenStack Compute (nova): Confirmed Status in openstack-api-site: Invalid Bug description: The Nova v2.1 documentation shows an endpoint "os-volume-types" which lists the available volume types. http://developer.openstack.org/api- ref-compute-v2.1.html#listVolumeTypes I am using OpenStack Liberty and that endpoint doesn't appear to exist anymore. GET requests sent to /v2.1/{tenant_id}/os-volume-types return 404 not found. When I searched the Nova codebase on GitHub, I could only find a reference to volume types in the policy.json but not implemented anywhere. Does this endpoint still exist, and if so what is the appropriate documentation? To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1543169/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1575022] [NEW] Editing a flavor changes its ID
Public bug reported: After editing a flavor in Mitaka Horizon, the flavor ID always changes. ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1575022 Title: Editing a flavor changes its ID Status in OpenStack Dashboard (Horizon): New Bug description: After editing a flavor in Mitaka Horizon, the flavor ID always changes. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1575022/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1575007] [NEW] Bad libvirt version 1.2.21 when doing live migration
Public bug reported: Description of problem: Unable to change target guest XML during migration, the argument dxml in virDomainMigrateToURI2 is not used in libvirt 1.2.21 check https://bugzilla.redhat.com/show_bug.cgi?id=1295405 Bug is fixed by libvirt 1.3.1: Jan 17 2016 check http://libvirt.org/news.html libvirt-domain: fix dxml passing in virDomainMigrateToURI2 (Ján Tomko) So we need to blacklist libvirt 1.2.21 when using virDomainMigrateToURI2 ** Affects: nova Importance: Undecided Assignee: Eli Qiao (taget-9) Status: New ** Changed in: nova Assignee: (unassigned) => Eli Qiao (taget-9) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1575007 Title: Bad libvirt version 1.2.21 when doing live migration Status in OpenStack Compute (nova): New Bug description: Description of problem: Unable to change target guest XML during migration, the argument dxml in virDomainMigrateToURI2 is not used in libvirt 1.2.21 check https://bugzilla.redhat.com/show_bug.cgi?id=1295405 Bug is fixed by libvirt 1.3.1: Jan 17 2016 check http://libvirt.org/news.html libvirt-domain: fix dxml passing in virDomainMigrateToURI2 (Ján Tomko) So we need to blacklist libvirt 1.2.21 when using virDomainMigrateToURI2 To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1575007/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1574988] [NEW]
Public bug reported: #nova boot --flavor m1.tiny --image CirrOS031 --nic net-id=1476-c17c-4215-af06-f46e2af3f6eb \ --security-group default --key-name demo-key private-instance ERROR (ClientException): Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-496974bf-95e5-41b0-9359-f4f07ccc9594) ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1574988 Title: Status in OpenStack Compute (nova): New Bug description: #nova boot --flavor m1.tiny --image CirrOS031 --nic net-id=1476-c17c-4215-af06-f46e2af3f6eb \ --security-group default --key-name demo-key private-instance ERROR (ClientException): Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-496974bf-95e5-41b0-9359-f4f07ccc9594) To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1574988/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1574991] [NEW] member-id should contain only numbers and letters
Public bug reported: Now, there is no limit on the member-id's format; all characters are allowed. But member_id means a project's (tenant's) id, so it should contain only numbers and letters. If not, it leads to errors like the following. To reproduce (env: glance master): 1. Create a member for an image: $glance member-create b9125ded-d2d0-4d4e-9eee-5623344c9cbf fdsae#$%^^da 2. Delete the member from the image: $glance member-delete b9125ded-d2d0-4d4e-9eee-5623344c9cbf fdsae#$%^^da The error occurred: 404 Not Found fdsae not found in the member list of the image b9125ded-d2d0-4d4e-9eee-5623344c9cbf. (HTTP 404) ** Affects: glance Importance: Undecided Assignee: wangxiyuan (wangxiyuan) Status: New ** Changed in: glance Assignee: (unassigned) => wangxiyuan (wangxiyuan) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1574991 Title: member-id should contain only numbers and letters Status in Glance: New Bug description: Now, there is no limit on the member-id's format; all characters are allowed. But member_id means a project's (tenant's) id, so it should contain only numbers and letters. If not, it leads to errors like the following. To reproduce (env: glance master): 1. Create a member for an image: $glance member-create b9125ded-d2d0-4d4e-9eee-5623344c9cbf fdsae#$%^^da 2. Delete the member from the image: $glance member-delete b9125ded-d2d0-4d4e-9eee-5623344c9cbf fdsae#$%^^da The error occurred: 404 Not Found fdsae not found in the member list of the image b9125ded-d2d0-4d4e-9eee-5623344c9cbf. (HTTP 404) To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1574991/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
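A hedged sketch of the validation being asked for; the function name and exception type are illustrative, not glance's eventual check:

    import re

    _MEMBER_ID_RE = re.compile(r'^[0-9a-zA-Z]+$')

    def validate_member_id(member_id):
        # Project (tenant) IDs are plain alphanumeric strings, so reject
        # anything else up front instead of failing later with a
        # confusing 404.
        if not _MEMBER_ID_RE.match(member_id):
            raise ValueError('member_id must contain only numbers and '
                             'letters: %r' % member_id)
        return member_id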
[Yahoo-eng-team] [Bug 1574985] [NEW] Update security group using Heat
Public bug reported: I created a security group using Horizon dashboard. Then, I created a heat template with the same security group name with some new rules so that my security group gets updatee with new rules. However, heat template created a new security group instead of updating the existing one. Is this a bug or an unsupported feature ? Below is my yaml file heat_template_version: 2013-05-23 description: Create a security group parameters: sec_group: type: string default: test-secgroup resources: security_group: type: OS::Neutron::SecurityGroup properties: name: { get_param: sec_group } rules: - remote_ip_prefix: 0.0.0.0/0 protocol: tcp port_range_min: 22 port_range_max: 22 - remote_ip_prefix: 0.0.0.0/0 protocol: icmp ** Affects: neutron Importance: Undecided Status: New ** Description changed: I created a security group using Horizon dashboard. Then, I created a heat template with the same security group name with some new rules so that my security group gets updatee with new rules. However, heat template created a new security group instead of updating the existing one. Is this a bug or an unsupported feature ? Below is my yaml file heat_template_version: 2013-05-23 description: Create a security group parameters: - sec_group: - type: string - default: test-manik + sec_group: + type: string + default: test-manik resources: - security_group: - type: OS::Neutron::SecurityGroup - properties: - name: { get_param: sec_group } - id: b82fd6a2-3592-4173-95a9-e4aab7336610 - rules: - - remote_ip_prefix: 0.0.0.0/0 - protocol: tcp - port_range_min: 22 - port_range_max: 22 - - remote_ip_prefix: 0.0.0.0/0 - protocol: icmp + security_group: + type: OS::Neutron::SecurityGroup + properties: + name: { get_param: sec_group } + rules: + - remote_ip_prefix: 0.0.0.0/0 + protocol: tcp + port_range_min: 22 + port_range_max: 22 + - remote_ip_prefix: 0.0.0.0/0 + protocol: icmp ** Description changed: I created a security group using Horizon dashboard. Then, I created a heat template with the same security group name with some new rules so that my security group gets updatee with new rules. However, heat template created a new security group instead of updating the existing one. Is this a bug or an unsupported feature ? Below is my yaml file heat_template_version: 2013-05-23 description: Create a security group parameters: sec_group: type: string - default: test-manik + default: test-secgroup resources: security_group: type: OS::Neutron::SecurityGroup properties: name: { get_param: sec_group } rules: - remote_ip_prefix: 0.0.0.0/0 protocol: tcp port_range_min: 22 port_range_max: 22 - remote_ip_prefix: 0.0.0.0/0 protocol: icmp -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1574985 Title: Update security group using Heat Status in neutron: New Bug description: I created a security group using Horizon dashboard. Then, I created a heat template with the same security group name with some new rules so that my security group gets updatee with new rules. However, heat template created a new security group instead of updating the existing one. Is this a bug or an unsupported feature ? 
Below is my yaml file heat_template_version: 2013-05-23 description: Create a security group parameters: sec_group: type: string default: test-secgroup resources: security_group: type: OS::Neutron::SecurityGroup properties: name: { get_param: sec_group } rules: - remote_ip_prefix: 0.0.0.0/0 protocol: tcp port_range_min: 22 port_range_max: 22 - remote_ip_prefix: 0.0.0.0/0 protocol: icmp To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1574985/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp