[Yahoo-eng-team] [Bug 1672345] [NEW] Loadbalancer V2 ports are not serviced by DVR
Public bug reported:

I reported #1629539, which was on Mitaka/LBaaSv1, but I'm seeing the exact same behaviour on Newton/LBaaSv2.

There's apparently a fix (for Kilo) in #1493809. There's also #1494003 (a duplicate of #1493809), which has a lot of debug output and apparently a way to reproduce.

When I reinstalled my Openstack setup from Debian GNU/Linux Sid/Mitaka to Debian GNU/Linux Jessie/Newton, I started out with a non-distributed router (i.e. no DVR). LBaaS v1 _and_ v2 both worked just fine there. But as soon as I enabled/set up DVR, they stopped working. I'm unsure of what information would be required, but "ask and it will be supplied".

The problem I'm seeing is that the FIP of the LB responds, but not the VIP.

** Affects: neutron
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1672345

Title: Loadbalancer V2 ports are not serviced by DVR
Status in neutron: New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1672345/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1637877] [NEW] Ports is UP even though there's no L3 agents available
Public bug reported:

When I accidentally upgraded to Newton a few weeks ago, I failed to notice that 'neutron-fwaas-l3-agent' wasn't available (I've been busy with a new job, so I haven't had time to deal with all the upgrade issues I encountered). Today, when I was playing with adding a second router, I noticed that all of its ports were DOWN, and investigated. The missing L3 agent was at fault. But looking at the existing router, which worked just fine before the upgrade, all of its ports are UP - which I'm assuming they shouldn't be.

** Affects: neutron
   Importance: Undecided
   Status: New

--
https://bugs.launchpad.net/bugs/1637877

Title: Ports is UP even though there's no L3 agents available
Status in neutron: New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1637877/+subscriptions
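For what it's worth, the expectation in the report above can be stated as a tiny rule. This is a toy sketch, not Neutron's implementation; all names ('port_status', the agent dicts) are invented for illustration:

```python
# Toy model - NOT Neutron code - of the invariant the report expects:
# a router port should only report ACTIVE while a live L3 agent is
# hosting the router it belongs to.
def port_status(l3_agents, hosting_host):
    """Status a router port 'should' have, given the agents' liveness."""
    alive = any(a["host"] == hosting_host and a["alive"] for a in l3_agents)
    return "ACTIVE" if alive else "DOWN"

agents = [{"host": "net1", "alive": False}]  # the agent is down/missing
print(port_status(agents, "net1"))  # DOWN
```

Under this rule, the pre-upgrade router's ports would also have flipped to DOWN once the agent disappeared, which is what the report argues should happen.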
[Yahoo-eng-team] [Bug 1633734] [NEW] ValueError: Field `instance_uuid' cannot be None
Public bug reported:

I "accidentally" upgraded from Mitaka to Newton a few days ago and I'm still cleaning up "the mess" that introduced (I'm too used to Debian GNU/Linux packages taking care of all that for me). Anyway, I'm now getting

    ValueError: Field `instance_uuid' cannot be None

in the nova-api log. I've been looking at http://docs.openstack.org/releasenotes/nova/newton.html#upgrade-notes but I'm not sure what to do.

I've run:

nova-manage db online_data_migrations
  => ERROR nova.db.sqlalchemy.api [req-c08dbccb-d841-4e38-a895-26768f24222b - - - - -] Data migrations for PciDevice are not safe, likely because not all services that access the DB directly are updated to the latest version
nova-manage db sync
  => ERROR: could not access cell mapping database - has api db been created?
nova-manage api_db sync
  => Seems to run ok
nova-manage cell_v2 discover_hosts
  => error: 'module' object has no attribute 'session'
nova-manage cell_v2 map_cell0
  => Seemed like it ran ok
nova-manage cell_v2 simple_cell_setup --transport-url rabbit://blabla/
  => Seemed like it ran ok
nova-manage db null_instance_uuid_scan
  => There were no records found where instance_uuid was NULL.

Other than that, I'm not sure what the problem is.

** Affects: nova
   Importance: Undecided
   Status: New

** Tags: upgrades

--
https://bugs.launchpad.net/bugs/1633734

Title: ValueError: Field `instance_uuid' cannot be None
Status in OpenStack Compute (nova): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1633734/+subscriptions
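For context, the ValueError quoted in the log is the generic message raised when a non-nullable object field is handed None (e.g. loaded from a row whose column is NULL). A toy reproduction of just that mechanism - this is NOT Nova or oslo.versionedobjects code, only a sketch of the observed failure mode:

```python
# Toy reproduction - not Nova code - of the error seen in nova-api.log:
# a field declared non-nullable raises ValueError when assigned None.
class NonNullableField(object):
    def coerce(self, name, value):
        if value is None:
            raise ValueError("Field `%s' cannot be None" % name)
        return value

field = NonNullableField()
try:
    field.coerce("instance_uuid", None)
except ValueError as exc:
    print(exc)  # Field `instance_uuid' cannot be None
```

Which is consistent with null_instance_uuid_scan finding nothing: the None may come from a related record (or a code path) rather than an instances row with a NULL uuid.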
[Yahoo-eng-team] [Bug 1629539] [NEW] Broken distributed virtual router
Public bug reported:

I wish I could come up with a smarter, more descriptive title for this; if someone can after reading my report, feel free to update it.

I installed my second controller the other day (because of resource constraints, I run ALL my Openstack control services - APIs, Engines, Servers, etc. - _everything_ but 'nova-compute' and 'nova-console' - on one physical host), and then one of my LBaaSv1 load balancers stopped working (I haven't gotten around to trying to enable v2 again; last time I got some issues, which were reported elsewhere in the tracker). After almost a day trying to figure out why only one, and how to fix it, I realized it must be the _router_, not the load balancer, that's at fault (see below).

Broken LBaaSv1 VIP:          10.100.0.16/24
Broken LBaaSv1 Floating IP:  10.0.5.90/24
Working LBaaSv1 Floating IP: 10.0.4.190/24
Router fip namespace:        10.0.5.100 (not sure exactly what this is, but for some reason it has 'stolen' the "GW functionality" (incoming) on the router from the .253 interfaces)
Router qrouter namespace:    10.0.4.253 + 10.0.5.253 (these are on the 'External Gateway' of the router and are supposed to be the router's GW)
Primary GW/FW/NAT:           eth1:192.168.69.1/24, eth2:10.0.4.254/24, eth2:10.0.5.254/24

==> From a physical host outside the OS network(s) (i.e. from the 192.168.69.0/24 network):

traceroute to 10.100.0.16 (10.100.0.16), 30 hops max, 60 byte packets   <= CORRECT
 1  192.168.69.1  0.088 ms  0.077 ms  0.064 ms
 2  10.0.4.253    0.262 ms  0.246 ms  0.258 ms
 3  10.100.0.16   2.365 ms  2.348 ms  2.310 ms

traceroute to 10.0.5.90 (10.0.5.90), 30 hops max, 60 byte packets   <= WRONG, LBaaSv1 doesn't work
 1  192.168.69.1  0.156 ms  0.138 ms  0.123 ms
 2  10.0.5.100    0.834 ms  0.863 ms  0.851 ms
 3  * * *
 4  10.0.5.90     1.487 ms  1.564 ms  1.561 ms

traceroute to 10.0.4.190 (10.0.4.190), 30 hops max, 60 byte packets   <= WRONG, but LBaaSv1 works
 1  192.168.69.1  0.130 ms  0.112 ms  0.097 ms
 2  10.0.5.100    1.595 ms  1.581 ms  1.568 ms
 3  * * *
 4  10.0.4.190    2.265 ms  2.262 ms  2.251 ms

==> From an instance (inside the 10.100.0.0/24 subnet - all ICMP open):

traceroute to 10.100.0.16 (10.100.0.16), 30 hops max, 60 byte packets
 1  * * *
 2  * * *
 3  *^C

PING 10.100.0.16 (10.100.0.16) 56(84) bytes of data.
64 bytes from 10.100.0.16: icmp_seq=1 ttl=64 time=1.32 ms
64 bytes from 10.100.0.16: icmp_seq=2 ttl=64 time=0.548 ms
64 bytes from 10.100.0.16: icmp_seq=3 ttl=64 time=0.589 ms
^C

PING 10.0.5.90 (10.0.5.90) 56(84) bytes of data.
64 bytes from 10.100.0.16: icmp_seq=1 ttl=64 time=1.02 ms
64 bytes from 10.0.5.90: icmp_seq=1 ttl=60 time=1.68 ms (DUP!)
^C

PING 10.0.4.190 (10.0.4.190) 56(84) bytes of data.
64 bytes from 10.100.0.4: icmp_seq=1 ttl=64 time=0.925 ms
64 bytes from 10.0.4.190: icmp_seq=1 ttl=60 time=467 ms (DUP!)
^C

==> The 'actual' problem

From a host on the 192.168.69.0/24 network:

$ curl --insecure https://10.100.0.16:8140/
curl: (35) Unknown SSL protocol error in connection to 10.100.0.16:8140   <= FAIL, never reaches backend server
$ curl --insecure https://10.0.5.90:8140/
The environment must be purely alphanumeric, not ''   <= Actually working

From an instance:

$ curl --insecure https://10.100.0.16:8140/
The environment must be purely alphanumeric, not ''   <= Actually working
$ curl --insecure https://10.0.5.90:8140/
curl: (35) Unknown SSL protocol error in connection to 10.0.5.90:8140   <= FAIL, never reaches backend server

Testing a connection to 10.0.4.190 with curl won't work - it's "ldaps" on port 636. But an ldapsearch to it from 192.168.69.0/24 works, and from an instance it doesn't. So that one is broken as well, even though I labeled it 'working' above :(. Just "broken" in a different way.

==> Relevant namespaces on the controllers

=> Primary Controller

# ip netns | sort
fip-cd30c1bb-3db6-488c-b448-6cb4454783be
qrouter-4b3639a1-880f-4b55-989f-c6f654e562a7

=> fip-cd30c1bb-3db6-488c-b448-6cb4454783be

66: fg-38e452be-d4: mtu 1500 qdisc noqueue state UNKNOWN group default
    inet 10.0.5.100/24 brd 10.0.5.255 scope global fg-38e452be-d4

Kernel IP routing table
Destination  Gateway          Genmask          Flags Metric Ref Use Iface
0.0.0.0      10.0.5.254       0.0.0.0          UG    0      0   0   fg-38e452be-d4
10.0.4.189   169.254.106.114  255.255.255.255  UGH   0      0   0   fpr-4b3639a1-8
10.0.4.190   169.254.106.114  255.255.255.255  UGH   0      0   0   fpr-4b3639a1-8
10.0.4.195   169.254.106.114  255.255.255.255  UGH   0      0   0   fpr-4b3639a1-8
10.0.5.0     0.0.0.0          255.255.255.0    U     0      0   0   fg-38e452be-d4
10.0.5.90    169.254.106.114  255.255.255.255  UGH   0      0   0   fpr-4b3639a1-8
10.0.5.92    16
[Yahoo-eng-team] [Bug 1619954] [NEW] migration gives "Unexpected API Error"
Public bug reported:

I'm trying to move an instance from one Compute host to another:

- s n i p -
bladeA01:~# openstack server list --long -f csv -c ID -c Name -c Host --quote none | grep -i bladeA06
ec3810bd-e201-4243-ab67-f1a0801acc0a,devel-sid-5,bladeA06
e67c0719-3c13-45cc-8733-d0e66548d08e,devel-sid-1,bladeA06
d2424223-1707-4976-ac17-c5b766697541,devel-sid-6,bladeA06
bladeA01:~# openstack server migrate --live bladeA05 --shared-migration --wait e67c0719-3c13-45cc-8733-d0e66548d08e
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-8c00c15f-8453-4402-97f0-511037334ae4)
- s n i p -

The nova-api log on the Control node says:

- s n i p -
==> /var/log/nova/nova-api.log <==
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions [req-8c00c15f-8453-4402-97f0-511037334ae4 4b0e25c70d2b4ad6ba4c50250f2f0b0b 04ee0e71babe4fd7aa16c3f64a8fca89 - - -] Unexpected exception in API method
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions Traceback (most recent call last):
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/extensions.py", line 478, in wrapped
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions     return f(*args, **kwargs)
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in wrapper
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in wrapper
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/migrate_server.py", line 93, in _migrate_live
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions     disk_over_commit, host)
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 158, in inner
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions     return function(self, context, instance, *args, **kwargs)
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 186, in _wrapped
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions     return fn(self, context, instance, *args, **kwargs)
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 139, in inner
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions     return f(self, context, instance, *args, **kw)
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 3371, in live_migrate
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions     request_spec=request_spec)
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/conductor/api.py", line 194, in live_migrate_instance
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions     block_migration, disk_over_commit, None, request_spec=request_spec)
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/conductor/rpcapi.py", line 309, in migrate_server
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions     return cctxt.call(context, 'migrate_server', **kw)
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 158, in call
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions     retry=self.retry)
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, in _send
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions     timeout=timeout, retry=retry)
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 470, in send
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions     retry=retry)
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 459, in _send
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions     result = self._waiter.wait(msg_id, timeout)
2016-09-03 19:18:10.245 8699 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/oslo_messaging/_drive
[Yahoo-eng-team] [Bug 1616094] [NEW] Required attribute 'lb_method' not specified when creating a LBaaSv2
Public bug reported:

When creating a LBaaS v2 loadbalancer, listener and pool, I get:

- s n i p -
2016-08-23 14:04:32 [pool]: CREATE_FAILED  BadRequest: resources.pool: Failed to parse request. Required attribute 'lb_method' not specified
- s n i p -

The test stack:

- s n i p -
heat_template_version: 2015-04-30

description: Loadbalancer - Instance template

resources:
  lbaas:
    type: OS::Neutron::LBaaS::LoadBalancer
    properties:
      name: lbaas-test
      description: lbaas-test
      vip_subnet: subnet-97

  listener:
    type: OS::Neutron::LBaaS::Listener
    properties:
      name: listener-test
      description: listener-test
      loadbalancer: { get_resource: lbaas }
      protocol: TCP
      protocol_port: 666

  pool:
    type: OS::Neutron::LBaaS::Pool
    properties:
      name: hapool-test
      description: hapool-test
      listener: { get_resource: listener }
      protocol: TCP
      lb_algorithm: LEAST_CONNECTIONS
- s n i p -

** Affects: heat
   Importance: Undecided
   Status: New

** Affects: neutron
   Importance: Undecided
   Status: New

** Tags: heat lbaasv2 neutron

** Also affects: neutron
   Importance: Undecided
   Status: New

--
https://bugs.launchpad.net/bugs/1616094

Title: Required attribute 'lb_method' not specified when creating a LBaaSv2
Status in heat: New
Status in neutron: New

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1616094/+subscriptions
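A possibly relevant detail: 'lb_method' is the LBaaSv1 pool attribute, while LBaaSv2 pools use 'lb_algorithm' (as in the template above), so the error looks like a v2-style body being validated against v1-style expectations. Purely as illustration - the helper below is hypothetical, not Heat or Neutron code - the rename amounts to:

```python
# Hypothetical illustration only: LBaaSv1 pools call the scheduling
# policy 'lb_method'; LBaaSv2 pools call it 'lb_algorithm'. A v2 body
# hitting a v1-style validator is missing the required 'lb_method' key.
V2_TO_V1 = {"lb_algorithm": "lb_method"}

def v2_pool_to_v1(pool):
    """Rename v2-only keys to their v1 spellings (illustrative only)."""
    return {V2_TO_V1.get(key, key): value for key, value in pool.items()}

print(v2_pool_to_v1({"name": "hapool-test",
                     "protocol": "TCP",
                     "lb_algorithm": "LEAST_CONNECTIONS"}))
```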
[Yahoo-eng-team] [Bug 1605336] Re: Neutron loadbalancer VIP port fails to create
** Also affects: trove
   Importance: Undecided
   Status: New

** No longer affects: trove

--
https://bugs.launchpad.net/bugs/1605336

Title: Neutron loadbalancer VIP port fails to create
Status in Designate: New
Status in OpenStack Neutron LBaaS Integration: New
Status in neutron: New

Bug description:

When trying to create a Loadbalancer (v1) VIP with the command:

    neutron lb-vip-create --address 10.97.0.254 --name vip-97 \
        --protocol-port 22 --protocol TCP --subnet-id subnet-97 hapool-97

where subnet-97 is a subnet of tenant-97, which has 'dns_domain' set to an existing domain. The domain works - creating an instance + floating IP on it will register the set dns_name in the domain. However, the lb-vip-create will fail with

    Request Failed: internal server error while processing your request.
    Neutron server returns request_ids: ['req-ee6a68f1-ed8a-4f22-9dea-646fb97ff795']

and the log will say:

==> /var/log/neutron/neutron-server.log <==
2016-07-21 18:08:54.940 7926 INFO neutron.wsgi [req-cc53af04-89fc-482c-8a4f-0a3f5cc2e614 4b0e25c70d2b4ad6ba4c50250f2f0b0b 04ee0e71babe4fd7aa16c3f64a8fca89 - - -] 10.0.4.1 - - [21/Jul/2016 18:08:54] "GET /v2.0/lb/pools.json?fields=id&name=hapool-97 HTTP/1.1" 200 257 0.070421
2016-07-21 18:08:55.027 7926 INFO neutron.wsgi [req-e95bbb13-c38e-4cdf-afc5-9bba3351b8ff 4b0e25c70d2b4ad6ba4c50250f2f0b0b 04ee0e71babe4fd7aa16c3f64a8fca89 - - -] 10.0.4.1 - - [21/Jul/2016 18:08:55] "GET /v2.0/subnets.json?fields=id&name=subnet-97 HTTP/1.1" 200 259 0.081731
2016-07-21 18:08:55.037 7926 INFO neutron.quota [req-ee6a68f1-ed8a-4f22-9dea-646fb97ff795 4b0e25c70d2b4ad6ba4c50250f2f0b0b 04ee0e71babe4fd7aa16c3f64a8fca89 - - -] Loaded quota_driver: .
2016-07-21 18:08:55.494 7926 INFO neutron.plugins.ml2.managers [req-ee6a68f1-ed8a-4f22-9dea-646fb97ff795 4b0e25c70d2b4ad6ba4c50250f2f0b0b 04ee0e71babe4fd7aa16c3f64a8fca89 - - -] Extension driver 'dns' failed in process_create_port
2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource [req-ee6a68f1-ed8a-4f22-9dea-646fb97ff795 4b0e25c70d2b4ad6ba4c50250f2f0b0b 04ee0e71babe4fd7aa16c3f64a8fca89 - - -] create failed
2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource Traceback (most recent call last):
2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 84, in resource
2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource     result = method(request=request, **args)
2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 410, in create
2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource     return self._create(request, body, **kwargs)
2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/oslo_db/api.py", line 148, in wrapper
2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource     ectxt.value = e.inner_exc
2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 221, in __exit__
2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource     self.force_reraise()
2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, in force_reraise
2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/oslo_db/api.py", line 138, in wrapper
2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource     return f(*args, **kwargs)
2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 521, in _create
2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource     obj = do_create(body)
2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 503, in do_create
2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource     request.context, reservation.reservation_id)
2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 221, in __exit__
2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource     self.force_reraise()
2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, in force_reraise
2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource     six.reraise(self.type_, self
[Yahoo-eng-team] [Bug 1605336] Re: Neutron loadbalancer VIP port fails to create
** Tags removed: trove

** Also affects: neutron
   Importance: Undecided
   Status: New

** Project changed: neutron => f5openstackcommunitylbaas
[Yahoo-eng-team] [Bug 1594795] [NEW] AttributeError: 'unicode' object has no attribute 'get'
[Yahoo-eng-team] [Bug 1594795] [NEW] AttributeError: 'unicode' object has no attribute 'get'

Public bug reported:

I'm getting this when running

  openstack flavor list

and many other commands. Running with --debug, I see:

  Making authentication request to http://control:35357/v3/auth/tokens
  "POST /v3/auth/tokens HTTP/1.1" 201 11701
  run(Namespace(all=False, columns=[], formatter='table', limit=None, long=False, marker=None, max_width=0, noindent=False, public=True, quote_mode='nonnumeric'))
  Instantiating compute client for VAPI Version Major: 2, Minor: 0
  Making authentication request to http://control:35357/v3/auth/tokens
  "POST /v3/auth/tokens HTTP/1.1" 201 11701
  REQ: curl -g -i -X GET http://10.0.4.1:8774/v2/1857a7b08b8046038005b98e8b238843/flavors/detail -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}226eb1000d0f69fb06823fefb599f559729e0969"
  Starting new HTTP connection (1): 10.0.4.1
  "GET /v2/1857a7b08b8046038005b98e8b238843/flavors/detail HTTP/1.1" 503 170
  RESP: [503] Content-Length: 170 Content-Type: application/json; charset=UTF-8 X-Compute-Request-Id: req-f296672e-afa4-455a-ab14-2d9749658521 Date: Tue, 21 Jun 2016 12:33:58 GMT Connection: keep-alive
  RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.\n\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"}

So it would be "nice" (!!) if it could actually SAY that - that it
can't connect to the service! Not just an unhelpful Python traceback.

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1594795

Title: AttributeError: 'unicode' object has no attribute 'get'
Status in OpenStack Compute (nova): New
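The RESP BODY in the --debug output shows that nova-api did return a structured 503 body, so the information the reporter wants is already on the wire. As a minimal sketch of how the titular AttributeError can arise (the exact novaclient call site is an assumption here, since no client-side traceback is attached), this parses that exact body and shows both the crash and the status-code check the report asks for:

```python
import json

# The exact 503 body from the --debug output above.
resp_status = 503
resp_body = ('{"message": "The server is currently unavailable. '
             'Please try again at a later time.\\n\\n\\n", '
             '"code": "503 Service Unavailable", '
             '"title": "Service Unavailable"}')

parsed = json.loads(resp_body)

# "message" parses to a plain string (a unicode object on Python 2).
# A client that assumes a nested dict and calls .get() on it crashes
# exactly like the bug title says (Python 3 reports 'str' rather than
# 'unicode'):
try:
    parsed["message"].get("details")
except AttributeError as exc:
    print("client-side crash:", exc)

# What the report asks for instead: check the status code first and say
# plainly that the service cannot be reached.
if resp_status == 503:
    print("nova-api is unavailable:", parsed["message"].strip())
```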
[Yahoo-eng-team] [Bug 1594320] Re: Can't create security group
Apparently the problem was me. I had removed "auth_host" and
"auth_protocol" (and had not set "auth_uri" or "identity_uri"), which
apparently made it default to "https://127.0.0.1:35357".

** Changed in: nova
   Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1594320

Title: Can't create security group
Status in OpenStack Compute (nova): Invalid
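The resolution above amounts to pointing keystonemiddleware at the real Keystone endpoints explicitly instead of relying on defaults once auth_host/auth_protocol are removed. A sketch of the relevant nova.conf section, using the Newton-era option names mentioned in the comment (the "control" hostname and ports are illustrative, taken from the debug output elsewhere in this digest, not from the reporter's actual config):

```ini
[keystone_authtoken]
# Without these set, removing auth_host/auth_protocol made the
# middleware fall back to https://127.0.0.1:35357 and every
# token validation failed.
auth_uri = http://control:5000
identity_uri = http://control:35357
```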
[Yahoo-eng-team] [Bug 1594320] [NEW] Can't create security group
Public bug reported:

- s n i p -
bladeA01b:~# openstack security group create --description "Allow incoming ICMP connections." icmp
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-e6ea8936-e20a-4b47-854c-bae4d881fc89)
- s n i p -

- s n i p -
==> /var/log/nova/nova-api.log <==
2016-06-20 11:40:36.204 16023 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option.
2016-06-20 11:40:36.205 16023 WARNING oslo_config.cfg [-] Option "memcached_servers" from group "DEFAULT" is deprecated for removal. Its value may be silently ignored in the future.
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver [req-e6ea8936-e20a-4b47-854c-bae4d881fc89 9a2b35cadef54b24a231c4f47f07b371 db39ce688efb4a5bba1e0d3dd682cce6 - - -] Neutron Error creating security group icmp
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver Traceback (most recent call last):
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver   File "/usr/lib/python2.7/dist-packages/nova/network/security_group/neutron_driver.py", line 52, in create_security_group
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver     body).get('security_group')
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver   File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 97, in with_params
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver     ret = self.function(instance, *args, **kwargs)
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver   File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 853, in create_security_group
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver     return self.post(self.security_groups_path, body=body)
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver   File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 363, in post
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver     headers=headers, params=params)
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver   File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 298, in do_request
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver     self._handle_fault_response(status_code, replybody, resp)
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver   File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 273, in _handle_fault_response
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver     exception_handler_v20(status_code, error_body)
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver   File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 84, in exception_handler_v20
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver     request_ids=request_ids)
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver InternalServerError: The server has either erred or is incapable of performing the requested operation.
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver Neutron server returns request_ids: ['req-4bfbf32a-1bf0-4336-bd56-ddb80ca1098a']
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver
2016-06-20 11:40:37.963 16023 ERROR nova.api.openstack.extensions [req-e6ea8936-e20a-4b47-854c-bae4d881fc89 9a2b35cadef54b24a231c4f47f07b371 db39ce688efb4a5bba1e0d3dd682cce6 - - -] Unexpected exception in API method
2016-06-20 11:40:37.963 16023 ERROR nova.api.openstack.extensions Traceback (most recent call last):
2016-06-20 11:40:37.963 16023 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/extensions.py", line 478, in wrapped
2016-06-20 11:40:37.963 16023 ERROR nova.api.openstack.extensions     return f(*args, **kwargs)
2016-06-20 11:40:37.963 16023 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova
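The traceback shows neutronclient's exception_handler_v20 converting the raw 500 reply from Neutron into a typed InternalServerError, which nova then surfaces as "Unexpected API Error". The underlying pattern is roughly the following (a simplified illustration of the status-to-exception mapping, not neutronclient's actual code; all names below are made up for the sketch):

```python
class ApiError(Exception):
    """Base class carrying the server's message and HTTP status code."""

    def __init__(self, message, status_code):
        super().__init__(message)
        self.status_code = status_code


class BadRequest(ApiError):
    pass


class InternalServerError(ApiError):
    pass


# Known status codes map to specific exception types; anything else
# falls back to the generic base class.
_STATUS_TO_EXC = {400: BadRequest, 500: InternalServerError}


def handle_fault_response(status_code, error_body):
    """Turn an error reply dict into a raised, typed exception."""
    exc_class = _STATUS_TO_EXC.get(status_code, ApiError)
    raise exc_class(error_body.get("message", "Unknown error"), status_code)


# The 500 seen in this bug surfaces like this:
try:
    handle_fault_response(500, {"message": "The server has either erred or "
                                "is incapable of performing the requested "
                                "operation."})
except InternalServerError as exc:
    print("caught:", exc)
```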