[Yahoo-eng-team] [Bug 1492274] [NEW] nova evacuate does not update instance's neutron port location in the DB
Public bug reported:

nova evacuate and nova host-evacuate do not update the database with the new neutron port location after the instance has been successfully evacuated. The instance's neutron port is created on the correct compute node, and the port is wired up correctly using openvswitch; the instance does not lose connectivity. Everything works as expected with migrate/live-migration/host-live-migration.

To reproduce: shut down a compute node and execute nova evacuate or nova host-evacuate.

Expected result: neutron port-show shows the neutron port updated with the new port location.

Actual result: neutron port-show still shows the previous compute node.

Version used:
ii nova-api          1:2015.1.0-0ubuntu1.1~cloud0 all OpenStack Compute - API frontend
ii nova-cert         1:2015.1.0-0ubuntu1.1~cloud0 all OpenStack Compute - certificate management
ii nova-common       1:2015.1.0-0ubuntu1.1~cloud0 all OpenStack Compute - common files
ii nova-conductor    1:2015.1.0-0ubuntu1.1~cloud0 all OpenStack Compute - conductor service
ii nova-novncproxy   1:2015.1.0-0ubuntu1.1~cloud0 all OpenStack Compute - NoVNC proxy
ii nova-scheduler    1:2015.1.0-0ubuntu1.1~cloud0 all OpenStack Compute - virtual machine scheduler
ii python-nova       1:2015.1.0-0ubuntu1.1~cloud0 all OpenStack Compute Python libraries
ii python-novaclient 1:2.22.0-0ubuntu1~cloud0     all client library for OpenStack Compute API

** Affects: nova
   Importance: Undecided
   Status: New

** Tags: evacuate

** Summary changed:
- nova evacuate does not update neutron port location in the DB
+ nova evacuate does not update instance's neutron port location in the DB

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1492274

Title: nova evacuate does not update instance's neutron port location in the DB

Status in OpenStack Compute (nova): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1492274/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
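For context, the stale value described above lives in the port's binding:host_id attribute, which the rebuild path would be expected to push back to neutron after the evacuation. The sketch below is a hypothetical helper (not nova's actual code) that builds the neutron port-update body needed to move the binding to the evacuation target:

```python
def build_host_binding_update(port, new_host):
    """Build a neutron port-update body that rebinds a port to new_host.

    `port` is a dict of port attributes as returned by the neutron API
    (e.g. the result of show_port). Returns None when the binding
    already points at the target host, so no update is needed.
    """
    if port.get('binding:host_id') == new_host:
        return None  # binding already correct, nothing to do
    return {'port': {'binding:host_id': new_host}}

# A port still bound to the dead compute node after evacuation:
stale_port = {'id': '7d1c...', 'binding:host_id': 'compute1'}
body = build_host_binding_update(stale_port, 'compute2')
# body == {'port': {'binding:host_id': 'compute2'}}
```

With python-neutronclient such a body could be applied via update_port; as a manual workaround, an admin should be able to achieve the same with `neutron port-update <port-id> --binding:host_id <new-host>` (binding:host_id is an admin-only extension attribute).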
[Yahoo-eng-team] [Bug 1389880] [NEW] VM loses connectivity on floating ip association when using DVR
Public bug reported:

Presence: Juno 2014.2-1 RDO, Ubuntu 12.04; openvswitch version on Ubuntu is 2.0.2

Description: Whenever a FIP is created on a VM, it adds, on ALL other compute nodes, a routing prefix in the FIP namespace and an IP interface alias on the qrouter. However, iptables is updated normally, with only the DNAT rule for the particular VM IP on that compute node. This causes the FIP proxy ARP to answer ARP requests for ALL VMs on ALL compute nodes, which results in compute nodes answering ARPs for VMs they do not host, effectively blackholing traffic to that IP.

Here is a demonstration of the problem. Before adding a vm+fip on compute4:

[root@compute2 ~]# ip netns exec fip-616a6213-c339-4164-9dff-344ae9e04929 ip route show
default via 173.209.44.1 dev fg-6ede0596-3a
169.254.31.28/31 dev fpr-3a90aae6-3 proto kernel scope link src 169.254.31.29
173.209.44.0/24 dev fg-6ede0596-3a proto kernel scope link src 173.209.44.6
173.209.44.4 via 169.254.31.28 dev fpr-3a90aae6-3

[root@compute3 neutron]# ip netns exec fip-616a6213-c339-4164-9dff-344ae9e04929 ip route show
default via 173.209.44.1 dev fg-26bef858-6b
169.254.31.238/31 dev fpr-3a90aae6-3 proto kernel scope link src 169.254.31.239
173.209.44.0/24 dev fg-26bef858-6b proto kernel scope link src 173.209.44.5
173.209.44.3 via 169.254.31.238 dev fpr-3a90aae6-3

[root@compute4 ~]# ip netns exec fip-616a6213-c339-4164-9dff-344ae9e04929 ip route show
default via 173.209.44.1 dev fg-2919b6be-f4
173.209.44.0/24 dev fg-2919b6be-f4 proto kernel scope link src 173.209.44.8

After creating a new VM on compute4 and attaching a floating IP to it, we get this result.
Of course, at this point only the VM on compute4 is able to ping the public network.

[root@compute2 ~]# ip netns exec fip-616a6213-c339-4164-9dff-344ae9e04929 ip route show
default via 173.209.44.1 dev fg-6ede0596-3a
169.254.31.28/31 dev fpr-3a90aae6-3 proto kernel scope link src 169.254.31.29
173.209.44.0/24 dev fg-6ede0596-3a proto kernel scope link src 173.209.44.6
173.209.44.4 via 169.254.31.28 dev fpr-3a90aae6-3
173.209.44.7 via 169.254.31.28 dev fpr-3a90aae6-3

[root@compute3 neutron]# ip netns exec fip-616a6213-c339-4164-9dff-344ae9e04929 ip route show
default via 173.209.44.1 dev fg-26bef858-6b
169.254.31.238/31 dev fpr-3a90aae6-3 proto kernel scope link src 169.254.31.239
173.209.44.0/24 dev fg-26bef858-6b proto kernel scope link src 173.209.44.5
173.209.44.3 via 169.254.31.238 dev fpr-3a90aae6-3
173.209.44.7 via 169.254.31.238 dev fpr-3a90aae6-3

[root@compute4 ~]# ip netns exec fip-616a6213-c339-4164-9dff-344ae9e04929 ip route show
default via 173.209.44.1 dev fg-2919b6be-f4
169.254.30.20/31 dev fpr-3a90aae6-3 proto kernel scope link src 169.254.30.21
173.209.44.0/24 dev fg-2919b6be-f4 proto kernel scope link src 173.209.44.8
173.209.44.3 via 169.254.30.20 dev fpr-3a90aae6-3
173.209.44.4 via 169.254.30.20 dev fpr-3a90aae6-3
173.209.44.7 via 169.254.30.20 dev fpr-3a90aae6-3

**When we deleted the extra FIP routes from each compute node's namespace, everything started to work just fine.**

Following are the router, floating IP information and config files:

+-----------------------+-------+
| Field                 | Value |
+-----------------------+-------+
| admin_state_up        | True  |
| distributed           | True  |
| external_gateway_info | {"network_id": "616a6213-c339-4164-9dff-344ae9e04929", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "0077e2d5-3c3d-4cd2-b55c-ee380fba7867", "ip_address": "173.209.44.2"}]} |
| ha                    | False |
| id                    | 3a90aae6-3107-49e4-a190-92ed37a43b1a
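The manual workaround described above (deleting the extra per-FIP routes) can be stated as a consistency rule: a /32 FIP route via the fpr- device belongs in a node's fip- namespace only if that node's qrouter actually DNATs the FIP. The sketch below is a hypothetical check, not neutron's actual L3-agent code, that identifies the stale routes from the two lists:

```python
def stale_fip_routes(ns_host_routes, local_dnat_fips):
    """Return FIP addresses that have a /32 host route in this node's
    fip- namespace but no matching DNAT rule in the local qrouter.

    ns_host_routes: FIP addresses with a host route via the fpr- device
    (parsed from `ip route show` in the fip- namespace).
    local_dnat_fips: FIP addresses DNAT'd by iptables on this node.
    Every returned address is a candidate for `ip route del`.
    """
    return sorted(set(ns_host_routes) - set(local_dnat_fips))

# compute2 in the output above: routes for .4 and .7 exist, but only
# the VM behind 173.209.44.4 lives on this node.
print(stale_fip_routes(['173.209.44.4', '173.209.44.7'],
                       ['173.209.44.4']))
# -> ['173.209.44.7']
```

Each address returned corresponds to one cleanup command of the form `ip netns exec fip-<net-id> ip route del <fip> dev fpr-<router-id>`, which is effectively what the reporter did by hand.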
[Yahoo-eng-team] [Bug 1388305] [NEW] When using DVR, port list for floating IP is empty
Public bug reported:

The port list does not get updated correctly when using DVR. See https://ask.openstack.org/en/question/51634/juno-dvr-associate-floating-ip-reported-no-ports-available/ for details.

** Affects: horizon
   Importance: Undecided
   Status: New

** Tags: dvr

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).

https://bugs.launchpad.net/bugs/1388305

Title: When using DVR, port list for floating IP is empty

Status in OpenStack Dashboard (Horizon): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1388305/+subscriptions
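One plausible explanation, consistent with the linked discussion, is a device_owner filtering gap: legacy router interface ports carry device_owner 'network:router_interface', while DVR routers use 'network:router_interface_distributed', so a lookup that matches only the legacy value finds no router interfaces and therefore offers no associable ports. The sketch below is an illustrative filter (hypothetical helper, not Horizon's actual code) showing how the narrower match yields an empty list under DVR:

```python
# Illustrative device_owner values; the DVR explanation is an
# assumption based on the linked ask.openstack.org thread.
LEGACY_OWNER = 'network:router_interface'
DVR_OWNER = 'network:router_interface_distributed'

def router_interface_ports(ports, accepted_owners):
    """Filter neutron ports down to router interface ports, keeping
    only those whose device_owner is in accepted_owners."""
    return [p for p in ports if p['device_owner'] in accepted_owners]

dvr_router_ports = [{'id': 'p1', 'device_owner': DVR_OWNER}]

# Matching only the legacy owner reproduces the symptom (empty list):
router_interface_ports(dvr_router_ports, (LEGACY_OWNER,))
# -> []

# Accepting both owners finds the DVR interface port:
router_interface_ports(dvr_router_ports, (LEGACY_OWNER, DVR_OWNER))
# -> [{'id': 'p1', 'device_owner': 'network:router_interface_distributed'}]
```

If this is indeed the cause, the fix would be to accept both device_owner values wherever Horizon decides which subnets are reachable for floating IP association.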