Public bug reported:

Using nova 1:2014.1.4-0ubuntu2 (Icehouse) on Ubuntu 14.04.2 LTS

After associating a floating IP address with an instance that is in the
Build/Spawning state, 'nova list' and 'nova show' take - with default settings -
a long time (up to 40 minutes) to display that floating IP.

Steps to reproduce:

* Launch an instance via Horizon
* Associate a floating IP address with it via Horizon while the instance is
still in the Build/Spawning state (CLI equivalent below)
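
For reference, roughly the same can be done from the CLI (image/flavor/net IDs
are placeholders, and the exact subcommand names may differ between novaclient
versions):

  nova boot --image <image-id> --flavor m1.small --nic net-id=<net-id> testvm
  # run the next command while testvm is still in the BUILD state
  nova add-floating-ip testvm 10.0.0.5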

Expected result:

* 'nova list' and 'nova show' should print the floating IP consistently
* the floating IP should consistently be part of the related row in the
nova.instance_info_caches database table

Actual result:

* while the instance is in the Build/Spawning state, 'nova list' and 'nova show'
display the floating IP address
* while the instance is in the Build/Spawning state, the floating IP is part of
the related row in nova.instance_info_caches

* when the instance switches to the Active/Running state, the floating
IP disappears from 'nova list', 'nova show' and the
nova.instance_info_caches entry

* a little later (depending on heal_instance_info_cache_interval, see
below) the floating IP reappears

Side note 1: This issue does not occur if the floating IP is associated after
launching (in the Active/Running state).
Side note 2: In Horizon, the floating IP is listed the whole time.
Side note 3: The floating IP works (ping, ssh) even while it is not displayed.

Output of 'select * from nova.instance_info_caches' (relevant row only):

Instance in Build/Spawning:
*************************** 38. row ***************************
   created_at: 2015-04-24 09:06:23
   updated_at: 2015-04-24 09:06:43
   deleted_at: NULL
           id: 1671
 network_info: [{"ovs_interfaceid": "b2c284ea-ef23-42e1-9522-b263f24db588", 
"network": {"bridge": "br-int", "subnets": [{"ips": [{"meta": {}, "version": 4, 
"type": "fixed", "floating_ips": [{"meta": {}, "version": 4, "type": 
"floating", "address": "10.0.0.5"}], "address": "192.168.178.212"}], "version": 
4, "meta": {"dhcp_server": "192.168.178.3"}, "dns": [], "routes": [], "cidr": 
"192.168.178.0/24", "gateway": {"meta": {}, "version": 4, "type": "gateway", 
"address": "192.168.178.1"}}], "meta": {"injected": false, "tenant_id": 
"ee8d0dd2202243389179ba2eb5a29e8c"}, "id": 
"276de287-a929-4263-aad5-3b30d6dcc8c9", "label": "neues-netz"}, "devname": 
"tapb2c284ea-ef", "qbh_params": null, "meta": {}, "details": {"port_filter": 
true, "ovs_hybrid_plug": true}, "address": "fa:16:3e:8a:32:19", "active": 
false, "type": "ovs", "id": "b2c284ea-ef23-42e1-9522-b263f24db588", 
"qbg_params": null}]
instance_uuid: f0d22419-1cac-47ce-9063-eee37fad97b9
      deleted: 0

Instance switches to Active/Running ("floating_ips" becomes empty):
*************************** 38. row ***************************
   created_at: 2015-04-24 09:06:23
   updated_at: 2015-04-24 09:07:04
   deleted_at: NULL
           id: 1671
 network_info: [{"ovs_interfaceid": "b2c284ea-ef23-42e1-9522-b263f24db588", 
"network": {"bridge": "br-int", "subnets": [{"ips": [{"meta": {}, "version": 4, 
"type": "fixed", "floating_ips": [], "address": "192.168.178.212"}], "version": 
4, "meta": {"dhcp_server": "192.168.178.3"}, "dns": [], "routes": [], "cidr": 
"192.168.178.0/24", "gateway": {"meta": {}, "version": 4, "type": "gateway", 
"address": "192.168.178.1"}}], "meta": {"injected": false, "tenant_id": 
"ee8d0dd2202243389179ba2eb5a29e8c"}, "id": 
"276de287-a929-4263-aad5-3b30d6dcc8c9", "label": "neues-netz"}, "devname": 
"tapb2c284ea-ef", "qbh_params": null, "meta": {}, "details": {"port_filter": 
true, "ovs_hybrid_plug": true}, "address": "fa:16:3e:8a:32:19", "active": 
false, "type": "ovs", "id": "b2c284ea-ef23-42e1-9522-b263f24db588", 
"qbg_params": null}]
instance_uuid: f0d22419-1cac-47ce-9063-eee37fad97b9
      deleted: 0

After ~ 40 minutes:
*************************** 38. row ***************************
   created_at: 2015-04-24 09:06:23
   updated_at: 2015-04-24 09:45:35
   deleted_at: NULL
           id: 1671
 network_info: [{"ovs_interfaceid": "b2c284ea-ef23-42e1-9522-b263f24db588", 
"network": {"bridge": "br-int", "subnets": [{"ips": [{"meta": {}, "version": 4, 
"type": "fixed", "floating_ips": [{"meta": {}, "version": 4, "type": 
"floating", "address": "10.0.0.5"}], "address": "192.168.178.212"}], "version": 
4, "meta": {"dhcp_server": "192.168.178.3"}, "dns": [], "routes": [], "cidr": 
"192.168.178.0/24", "gateway": {"meta": {}, "version": 4, "type": "gateway", 
"address": "192.168.178.1"}}], "meta": {"injected": false, "tenant_id": 
"ee8d0dd2202243389179ba2eb5a29e8c"}, "id": 
"276de287-a929-4263-aad5-3b30d6dcc8c9", "label": "neues-netz"}, "devname": 
"tapb2c284ea-ef", "qbh_params": null, "meta": {}, "details": {"port_filter": 
true, "ovs_hybrid_plug": true}, "address": "fa:16:3e:8a:32:19", "active": true, 
"type": "ovs", "id": "b2c284ea-ef23-42e1-9522-b263f24db588", "qbg_params": 
null}]
instance_uuid: f0d22419-1cac-47ce-9063-eee37fad97b9
      deleted: 0

The related part of nova-compute.log:

2015-04-24 11:07:04.544 14860 INFO nova.virt.libvirt.driver [-] [instance: 
f0d22419-1cac-47ce-9063-eee37fad97b9] Instance spawned successfully.
[...]
2015-04-24 11:45:36.012 14860 DEBUG nova.compute.manager [-] [instance: 
f0d22419-1cac-47ce-9063-eee37fad97b9] Updated the network info_cache for 
instance _heal_instance_info_cache 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py:4897


Current workaround:

Setting 'heal_instance_info_cache_interval=1' in nova-compute.conf
reduces the delay from 40 minutes to 2 minutes, but I suspect that
setting this option to '1' would be a problem in production.
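
For reference, the workaround ends up looking roughly like this in the
[DEFAULT] section of nova-compute.conf on the compute node (the interval is in
seconds; a value somewhat above 1 is probably a saner compromise):

  [DEFAULT]
  # Run the _heal_instance_info_cache periodic task more often.
  # Very small values put extra load on nova-compute and the network service.
  heal_instance_info_cache_interval = 1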

Maybe it would be possible to trigger the '_heal_instance_info_cache'
logic for an instance directly after it switches to the Active/Running
state, instead of waiting for the next periodic run (rough sketch below).
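
Something along these lines might work - only a rough, untested sketch against
the Icehouse compute manager; it assumes that
self.network_api.get_instance_nw_info() rebuilds the instance_info_caches row
the same way the periodic task does, and that it would be called from the spawn
path once the instance goes Active:

  # Rough sketch, not actual nova code: refresh the network info cache once,
  # right after a successful spawn, instead of waiting for the next run of
  # the _heal_instance_info_cache periodic task.
  def _refresh_info_cache_after_spawn(self, context, instance):
      try:
          # get_instance_nw_info() re-queries the network service and writes
          # the result back into instance_info_caches, so a floating IP that
          # was associated during Build/Spawning shows up immediately.
          self.network_api.get_instance_nw_info(context, instance)
      except Exception:
          LOG.exception("Refreshing the network info cache after spawn failed",
                        instance=instance)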

** Affects: nova
     Importance: Undecided
         Status: New

-- 
https://bugs.launchpad.net/bugs/1448014

Title:
  Delayed display of floating IPs

Status in OpenStack Compute (Nova):
  New
