[Yahoo-eng-team] [Bug 1614436] Re: Creation of loadbalancer fails with plug vip exception

2016-08-18 Thread Preethi Dsilva
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1614436

Title:
  Creation of loadbalancer fails with plug vip exception

Status in octavia:
  New

Bug description:
  Here is the scenario:
  I have two compute nodes with the following bridge mappings:
  Compute node 1:
  1. physnet3:br-hed0 (This is the octavia-mgt-network)
  2. physnet2: br-hed2
  Compute node 2:
  1. physnet3:br-hed0 (This is the octavia-mgt-network)
  2. physnet1:br-hed1
  3. physnet2:br-hed2
  Now if I create a loadbalancer with a VIP in physnet1, Nova schedules the amphora instance on compute node 1. However, as there is no physnet1 mapping on compute node 1, Octavia fails to plug the amphora instance into the VIP network.
  Expected result:
  Octavia should internally check whether the availability zone in which Nova schedules the amphora instance has a mapping for the required VIP network.
  Here are the VIP network details:
  stack@padawan-cp1-c1-m1-mgmt:~/scratch/ansible/next/hos/ansible$ neutron net-show net1
  +---------------------------+--------------------------------------+
  | Field                     | Value                                |
  +---------------------------+--------------------------------------+
  | admin_state_up            | True                                 |
  | availability_zone_hints   |                                      |
  | availability_zones        | nova                                 |
  | created_at                | 2016-07-29T03:45:02                  |
  | description               |                                      |
  | id                        | cd5a5e69-f810-4f08-ad9f-72f6184754af |
  | ipv4_address_scope        |                                      |
  | ipv6_address_scope        |                                      |
  | mtu                       | 1500                                 |
  | name                      | net1                                 |
  | provider:network_type     | vlan                                 |
  | provider:physical_network | physnet1                             |
  | provider:segmentation_id  | 1442                                 |
  | router:external           | False                                |
  | shared                    | False                                |
  | status                    | ACTIVE                               |
  | subnets                   | 115f7f23-68e2-4cba-9209-97d362612a7f |
  | tags                      |                                      |
  | tenant_id                 | 6b192dcb6a704f72b039d0552bec5e11     |
  | updated_at                | 2016-07-29T03:45:02                  |
  +---------------------------+--------------------------------------+
  stack@padawan-cp1-c1-m1-mgmt:~/scratch/ansible/next/hos/ansible$
  Here is the exception from /var/log/octavia/octavia-worker.log:
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher result = func(ctxt, **new_args)
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/octavia/controller/queue/endpoint.py", line 45, in create_load_balancer
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher self.worker.create_load_balancer(load_balancer_id)
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/octavia/controller/worker/controller_worker.py", line 322, in create_load_balancer
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher post_lb_amp_assoc.run()
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 230, in run
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher for _state in self.run_iter(timeout=timeout):
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 308, in run_iter
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher failure.Failure.reraise_if_any(fails)
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/taskflow/types/failure.py", line 336, in reraise_if_any
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher failures[0].reraise()
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/taskflow/types/failure.py", line 343, in reraise
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher six.reraise(*self._exc_info)
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 82, in _execute_task
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher result = task.execute(**arguments)
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/octavia/controller/worker/tasks/network_tasks.py", line 279, in execute
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher loadbalancer.vip)
  2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 278, in plug_vip
  2016-

[Yahoo-eng-team] [Bug 1614436] [NEW] Creation of loadbalancer fails with plug vip exception

2016-08-18 Thread Preethi Dsilva
Public bug reported:

Here is the scenario:
I have two compute nodes with the following bridge mappings:
Compute node 1:
1. physnet3:br-hed0 (This is the octavia-mgt-network)
2. physnet2: br-hed2
Compute node 2:
1. physnet3:br-hed0 (This is the octavia-mgt-network)
2. physnet1:br-hed1
3. physnet2:br-hed2
Now if I create a loadbalancer with a VIP in physnet1, Nova schedules the amphora instance on compute node 1. However, as there is no physnet1 mapping on compute node 1, Octavia fails to plug the amphora instance into the VIP network.
Expected result:
Octavia should internally check whether the availability zone in which Nova schedules the amphora instance has a mapping for the required VIP network.
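The expected pre-flight check can be sketched as follows. This is a hypothetical illustration, not Octavia code: given each compute host's bridge mappings and the VIP network's provider:physical_network, it reports which hosts could actually plug the VIP.

```python
# Hypothetical sketch of the requested pre-flight check; not actual Octavia code.

def hosts_supporting_physnet(bridge_mappings_by_host, physnet):
    """Return the compute hosts whose bridge mappings include the given physnet."""
    return sorted(
        host for host, mappings in bridge_mappings_by_host.items()
        if physnet in mappings
    )

# The topology from this bug report:
topology = {
    "compute1": {"physnet3": "br-hed0", "physnet2": "br-hed2"},
    "compute2": {"physnet3": "br-hed0", "physnet1": "br-hed1",
                 "physnet2": "br-hed2"},
}

# A VIP on physnet1 can only be plugged on compute node 2, so scheduling
# the amphora onto compute node 1 is guaranteed to fail in plug_vip.
print(hosts_supporting_physnet(topology, "physnet1"))  # -> ['compute2']
```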
Here are the VIP network details:
stack@padawan-cp1-c1-m1-mgmt:~/scratch/ansible/next/hos/ansible$ neutron net-show net1
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2016-07-29T03:45:02                  |
| description               |                                      |
| id                        | cd5a5e69-f810-4f08-ad9f-72f6184754af |
| ipv4_address_scope        |                                      |
| ipv6_address_scope        |                                      |
| mtu                       | 1500                                 |
| name                      | net1                                 |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | 1442                                 |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 115f7f23-68e2-4cba-9209-97d362612a7f |
| tags                      |                                      |
| tenant_id                 | 6b192dcb6a704f72b039d0552bec5e11     |
| updated_at                | 2016-07-29T03:45:02                  |
+---------------------------+--------------------------------------+
stack@padawan-cp1-c1-m1-mgmt:~/scratch/ansible/next/hos/ansible$
Here is the exception from /var/log/octavia/octavia-worker.log:
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher result = func(ctxt, **new_args)
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/octavia/controller/queue/endpoint.py", line 45, in create_load_balancer
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher self.worker.create_load_balancer(load_balancer_id)
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/octavia/controller/worker/controller_worker.py", line 322, in create_load_balancer
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher post_lb_amp_assoc.run()
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 230, in run
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher for _state in self.run_iter(timeout=timeout):
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 308, in run_iter
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher failure.Failure.reraise_if_any(fails)
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/taskflow/types/failure.py", line 336, in reraise_if_any
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher failures[0].reraise()
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/taskflow/types/failure.py", line 343, in reraise
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher six.reraise(*self._exc_info)
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 82, in _execute_task
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher result = task.execute(**arguments)
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/octavia/controller/worker/tasks/network_tasks.py", line 279, in execute
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher loadbalancer.vip)
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 278, in plug_vip
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher subnet.network_id)
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venv/octavia-20160727T193208Z/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 93, in _plug_amphora_vip
2016-07-29 03:46:39.043 10434 ERROR oslo_messaging.rpc.dispatcher raise base.PlugVIPException(message)

[Yahoo-eng-team] [Bug 1607369] [NEW] In case of PCI-PT the mac address of the port should be flushed when the Vm attached to it is deleted

2016-07-28 Thread Preethi Dsilva
Public bug reported:

1. Brought up a PCI-PT setup.
2. Created a PCI-PT port (it is assigned a MAC starting with fa:).
3. Now boot a VM with the port.
4. On successful boot, the port created in step 2 gets the MAC of the compute NIC.
5. Now delete the VM; we see that even though the VM is deleted, the port still contains the MAC of the compute NIC.
6. If we want to boot a new VM on the same compute, we need to either use the same port or first delete the port created in step 2 and create a new port.

The ideal scenario would be that once the VM is deleted, the MAC associated with the port (the compute NIC MAC) is released.
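The desired lifecycle can be modelled with a small sketch (purely illustrative, not Nova/Neutron code): on bind the port takes the compute NIC's MAC, and on VM delete the port should fall back to its original fa:16:... address so it is reusable.

```python
# Illustrative model of the desired port MAC lifecycle; not actual Neutron code.

class PciPtPort:
    def __init__(self, neutron_mac):
        self.neutron_mac = neutron_mac   # fa:16:... address assigned at creation
        self.mac_address = neutron_mac

    def bind(self, compute_nic_mac):
        # Step 4: on boot, the port takes the MAC of the compute NIC.
        self.mac_address = compute_nic_mac

    def unbind(self):
        # Desired behaviour for step 5: release the compute NIC MAC on VM delete.
        self.mac_address = self.neutron_mac

port = PciPtPort("fa:16:3e:57:ba:4e")
port.bind("14:02:ec:6d:6e:98")
port.unbind()
print(port.mac_address)  # -> fa:16:3e:57:ba:4e, port is reusable for a new VM
```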
stack@hlm:~$ neutron port-list
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                         |
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+
| 6354907d-47bb-4a9f-b68a-1079d7d36a77 |      | 14:02:ec:6d:6e:98 | {"subnet_id": "f77cc897-3168-4be2-a0e3-b36597e77177", "ip_address": "7.7.7.3"}    |
| 7cd5cef5-af68-464b-9fc3-34aa6a0889a2 |      | fa:16:3e:29:52:3a | {"subnet_id": "f77cc897-3168-4be2-a0e3-b36597e77177", "ip_address": "7.7.7.2"}    |
| 8e69dfc9-1f5b-4a9b-8e25-8d941841ae0b |      | 14:02:ec:6d:6e:99 | {"subnet_id": "715baf00-765a-469b-8850-3bf2321d8ea5", "ip_address": "17.17.17.3"} |
| a88d264e-a35a-4027-b975-29631b629232 |      | fa:16:3e:34:c9:cf | {"subnet_id": "715baf00-765a-469b-8850-3bf2321d8ea5", "ip_address": "17.17.17.2"} |
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+


stack@hlm:~$ nova list
+--------------------------------------+------+--------+------------+-------------+---------------------------+
| ID                                   | Name | Status | Task State | Power State | Networks                  |
+--------------------------------------+------+--------+------------+-------------+---------------------------+
| 61fe0ea1-6364-469b-ae6f-a2255268f8c5 | VM   | ACTIVE | -          | Running     | n5=7.7.7.3; n6=17.17.17.3 |
+--------------------------------------+------+--------+------------+-------------+---------------------------+


stack@hlm:~$ neutron port-create n6 --vnic-type=direct-physical
Created a new port:
+-----------------------+-----------------------------------------------------------------------------------+
| Field                 | Value                                                                             |
+-----------------------+-----------------------------------------------------------------------------------+
| admin_state_up        | True                                                                              |
| allowed_address_pairs |                                                                                   |
| binding:host_id       |                                                                                   |
| binding:profile       | {}                                                                                |
| binding:vif_details   | {}                                                                                |
| binding:vif_type      | unbound                                                                           |
| binding:vnic_type     | direct-physical                                                                   |
| created_at            | 2016-07-28T06:35:17                                                               |
| description           |                                                                                   |
| device_id             |                                                                                   |
| device_owner          |                                                                                   |
| dns_name              |                                                                                   |
| extra_dhcp_opts       |                                                                                   |
| fixed_ips             | {"subnet_id": "715baf00-765a-469b-8850-3bf2321d8ea5", "ip_address": "17.17.17.4"} |
| id                    | 1769331d-0c5c-46ff-957e-a538a84b5095                                              |
| mac_address           | fa:16:3e:57:ba:4e                                                                 |
| name                  |                                                                                   |
| network_id            | 690be87f-b60b-4a08-9a1b-b147a5b41435                                              |
| security_groups       | 439977a7-50d1-40a7-bf38-6ad493c81e1f                                              |
| status

[Yahoo-eng-team] [Bug 1578491] [NEW] The supported devs value specified in the config is not taken into consideration w.r.t PCI passthrough

2016-05-04 Thread Preethi Dsilva
Public bug reported:

The 'supported_pci_vendor_devs' specified in the config is not taken into
account.

1. Brought up a devstack setup.
2. In ml2_conf_sriov.ini, set supported_pci_vendor_devs to 15b3:1007 (the Mellanox card PF):
[ml2_sriov]
supported_pci_vendor_devs = 15b3:1007
3. Booted a VM with a port having vnic-type=direct-physical.

The port binding fails due to an unsupported device vendor type.

When I manually add 15b3:1007 to
 cfg.ListOpt('supported_pci_vendor_devs',
   default=['15b3:1004', '8086:10ca'],

in
/opt/stack/neutron/neutron/plugins/ml2/drivers/mech_sriov/mech_driver/mech_driver.py
the binding succeeds.

The PF vendor:product id should be included in the supported_pci_vendor_devs
list.
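The rejection can be approximated with a simplified model of the vendor/device check (the real logic lives in `_check_supported_pci_vendor_device` in mech_driver.py; this pure-Python sketch only illustrates the list-membership behaviour):

```python
# Simplified model of the supported-vendor check; the real logic lives in
# neutron/plugins/ml2/drivers/mech_sriov/mech_driver/mech_driver.py.

def is_supported_pci_device(vendor_product, supported_pci_vendor_devs):
    """Accept a port only if its vendor:product id is in the configured list."""
    return vendor_product in supported_pci_vendor_devs

# With only the compiled-in defaults, the Mellanox PF is rejected:
defaults = ["15b3:1004", "8086:10ca"]
print(is_supported_pci_device("15b3:1007", defaults))  # -> False

# Including the PF's vendor:product id (as supported_pci_vendor_devs should):
print(is_supported_pci_device("15b3:1007", defaults + ["15b3:1007"]))  # -> True
```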

Logs:
=====
b25ed] Unsupported pci_vendor 15b3:1007 from (pid=21609) _check_supported_pci_vendor_device /opt/stack/neutron/neutron/plugins/ml2/drivers/mech_sriov/mech_driver/mech_driver.py:186
2016-04-25 09:48:57.819 DEBUG neutron.plugins.ml2.drivers.mech_sriov.mech_driver.mech_driver [req-354c1c85-3efb-48e1-9d98-b38faa98aa10 neutron 079673fa97634d1184fa74acefeb25ed] Refusing to bind due to unsupported pci_vendor device from (pid=21609) bind_port /opt/stack/neutron/neutron/plugins/ml2/drivers/mech_sriov/mech_driver/mech_driver.py:110
2016-04-25 09:48:57.820 ERROR neutron.plugins.ml2.managers [req-354c1c85-3efb-48e1-9d98-b38faa98aa10 neutron 079673fa97634d1184fa74acefeb25ed] Failed to bind port bbef1117-1e0d-4fc8-8250-c9debf763265 on host PCIPT-Compute for vnic_type direct-physical using segments [{'segmentation_id': 1403, 'physical_network': u'physnet1', 'id': u'd45098cc-a779-42f5-ac14-06976f63ab21', 'network_type': u'vlan'}]
2016-04-25 09:48:57.834 DEBUG neutron.plugins.ml2.db [req-354c1c85-3efb-48e1-9d98-b38faa98aa10 neutron 079673fa97634d1184fa74acefeb25ed] For port bbef1117-1e0d-4fc8-8250-c9debf763265, host PCIPT-Compute, cleared binding levels from (pid=21609) clear_binding_levels /opt/stack/neutron/neutron/plugins/ml2/db.py:118
2016-04-25 09:48:57.835 DEBUG neutron.plugins.ml2.db [req-354c1c85-3efb-48e1-9d98-b38faa98aa10 neutron 079673fa97634d1184fa74acefeb25ed] Attempted to set empty binding levels from (pid=21609) set_binding_levels /opt/stack/neutron/neutron/plugins/ml2/db.py:93
2016-04-25 09:48:57.855 DEBUG neutron.plugins.ml2.plugin [req-354c1c85-3efb-48e1-9d98-b38faa98aa10 neutron 079673fa97634d1184fa74acefeb25ed] In _notify_port_updated(), no bound segment for port bbef1117-1e0d-4fc8-8250-c9debf763265 on network bed4c0bc-8e2f-4fc4-bc65-8e01120a1d40 from (pid=21609) _notify_port_updated /opt/stack/neutron/neutron/plugins/ml2/plugin.py:596

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1578491

Title:
  The supported devs value specified in the config  is not taken into
  consideration  w.r.t PCI passthrough

Status in neutron:
  New

Bug description:
  The 'supported_pci_vendor_devs' specified in the config is not taken into
  account.

  1. Brought up a devstack setup.
  2. In ml2_conf_sriov.ini, set supported_pci_vendor_devs to 15b3:1007 (the Mellanox card PF):
  [ml2_sriov]
  supported_pci_vendor_devs = 15b3:1007
  3. Booted a VM with a port having vnic-type=direct-physical.

  The port binding fails due to an unsupported device vendor type.

  When I manually add 15b3:1007 to
   cfg.ListOpt('supported_pci_vendor_devs',
     default=['15b3:1004', '8086:10ca'],

  in
  /opt/stack/neutron/neutron/plugins/ml2/drivers/mech_sriov/mech_driver/mech_driver.py
  the binding succeeds.

  The PF vendor:product id should be included in the
  supported_pci_vendor_devs list.

  Logs:
  =====
  b25ed] Unsupported pci_vendor 15b3:1007 from (pid=21609) _check_supported_pci_vendor_device /opt/stack/neutron/neutron/plugins/ml2/drivers/mech_sriov/mech_driver/mech_driver.py:186
  2016-04-25 09:48:57.819 DEBUG neutron.plugins.ml2.drivers.mech_sriov.mech_driver.mech_driver [req-354c1c85-3efb-48e1-9d98-b38faa98aa10 neutron 079673fa97634d1184fa74acefeb25ed] Refusing to bind due to unsupported pci_vendor device from (pid=21609) bind_port /opt/stack/neutron/neutron/plugins/ml2/drivers/mech_sriov/mech_driver/mech_driver.py:110
  2016-04-25 09:48:57.820 ERROR neutron.plugins.ml2.managers [req-354c1c85-3efb-48e1-9d98-b38faa98aa10 neutron 079673

[Yahoo-eng-team] [Bug 1572509] Re: Nova boot fails when freed SRIOV port is used for booting

2016-04-21 Thread Preethi Dsilva
*** This bug is a duplicate of bug 1572593 ***
https://bugs.launchpad.net/bugs/1572593

** Project changed: neutron => nova-project

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572509

Title:
  Nova boot fails when freed SRIOV port is used for booting

Status in Nova:
  Incomplete

Bug description:
  Nova boot fails when freed SRIOV port is used for booting

  steps to reproduce:
  ==
  1. Create an SR-IOV port.
  2. Boot a VM --> boot is successful and the VM gets an IP.
  3. Now delete the VM using nova delete -- successful (the MAC is released from the VF).
  4. Using the port created in step 1, boot a new VM.

  The VM fails to boot with the following error:
  2016-04-20 10:55:05.005 TRACE nova.compute.manager [instance: a47344fb-3254-4409-9c23-55e3cde693d9] hostname=instance.hostname)
  2016-04-20 10:55:05.005 TRACE nova.compute.manager [instance: a47344fb-3254-4409-9c23-55e3cde693d9] PortNotUsableDNS: Port 7219a612-014e-452e-b79a-19c87cc33db4 not usable for instance a47344fb-3254-4409-9c23-55e3cde693d9. Value vm4 assigned to dns_name attribute does not match instance's hostname vmtest4
  2016-04-20 10:55:05.005 TRACE nova.compute.manager [instance: a47344fb-3254-4409-9c23-55e3cde693d9]

  Expected:
  =
  As the port is unbound in step 3, we should be able to bind it in step 4.

  The setup consists of a controller and a compute node with a Mellanox card
  enabled for SR-IOV. An Ubuntu 14.04 qcow2 image is used for the tenant VM boot.

  Tested the above with Mitaka code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova-project/+bug/1572509/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572831] [NEW] VM's go into error state when booted with SRIOV nic

2016-04-20 Thread Preethi Dsilva
Public bug reported:

VM's go into error state when booted with SRIOV nic

Steps to reproduce:

1. Enable SR-IOV in the BIOS. In my case I have a Mellanox card with a dual-port NIC, which shows up in the OS as eth4 and eth5.
2. Provide the PCI whitelist in nova.conf:
pci_passthrough_whitelist = {"address":"*:04:00.*","physical_network":"physnet1"}
3. The mlx4_core options file is set as: options mlx4_core port_type_array=2,2 num_vfs=3,3,0 probe_vf=3,3,0 enable_64b_cqe_eqe=0 log_num_mgm_entry_size=-1
4. It is observed that 3 VMs landed on eth4 VFs and 3 VMs on eth5 VFs. The sequence was: the first VM landed on eth4 vf2, the 2nd on eth4 vf1; both came up with IPs assigned. The 3rd VM landed on eth5 vf5, but the state of the VF remained "auto" (if we manually set the state to enable, the VM gets an IP, but Nova fails to do so, hence the VM goes into the error state).
5. The 4th VM landed on eth5 again; however, this time Nova was able to set the state to enable, hence the VM got an IP. The 5th VM landed on eth4 vf0 and it got an IP.

This pattern is not deterministic. Every time a VM goes into the error state, the logs show the error below:
VirtualInterfaceCreateException: Virtual Interface creation failed

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1572831

Title:
  VM's go into error state when booted with SRIOV nic

Status in OpenStack Compute (nova):
  New

Bug description:
  VM's go into error state when booted with SRIOV nic

  Steps to reproduce:

  1. Enable SR-IOV in the BIOS. In my case I have a Mellanox card with a dual-port NIC, which shows up in the OS as eth4 and eth5.
  2. Provide the PCI whitelist in nova.conf:
  pci_passthrough_whitelist = {"address":"*:04:00.*","physical_network":"physnet1"}
  3. The mlx4_core options file is set as: options mlx4_core port_type_array=2,2 num_vfs=3,3,0 probe_vf=3,3,0 enable_64b_cqe_eqe=0 log_num_mgm_entry_size=-1
  4. It is observed that 3 VMs landed on eth4 VFs and 3 VMs on eth5 VFs. The sequence was: the first VM landed on eth4 vf2, the 2nd on eth4 vf1; both came up with IPs assigned. The 3rd VM landed on eth5 vf5, but the state of the VF remained "auto" (if we manually set the state to enable, the VM gets an IP, but Nova fails to do so, hence the VM goes into the error state).
  5. The 4th VM landed on eth5 again; however, this time Nova was able to set the state to enable, hence the VM got an IP. The 5th VM landed on eth4 vf0 and it got an IP.

  This pattern is not deterministic. Every time a VM goes into the error state, the logs show the error below:
  VirtualInterfaceCreateException: Virtual Interface creation failed

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1572831/+subscriptions



[Yahoo-eng-team] [Bug 1572826] [NEW] dev name in the pci whitelist is not honored for SRIOV

2016-04-20 Thread Preethi Dsilva
Public bug reported:

dev name in the pci whitelist is not honored for SRIOV

Steps to reproduce:

1. Enable SR-IOV in the BIOS. In my case I have a Mellanox card with a dual-port NIC, which shows up in the OS as eth4 and eth5.
2. Provide the PCI whitelist in nova.conf:
pci_passthrough_whitelist = {"devname":"eth4","physical_network":"physnet1"}
3. The mlx4_core options file is set as: options mlx4_core port_type_array=2,2 num_vfs=6 probe_vf=6 enable_64b_cqe_eqe=0 log_num_mgm_entry_size=-1

However, the behavior seen is that irrespective of the devname specified, the tenant VM gets booted onto eth4 or eth5.

Tested the issue with Mitaka code. I am attaching the nova logs and
local.conf for your reference.
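A simplified, hypothetical model of how a devname whitelist entry would be expected to filter devices (not Nova's actual implementation):

```python
# Hypothetical, simplified whitelist matcher; not Nova's actual implementation.

def device_matches_spec(device, spec):
    """A device passes only if every selector key in the whitelist spec matches.

    physical_network is a tag applied to matched devices, not a selector,
    so it is excluded from the comparison.
    """
    return all(device.get(key) == value
               for key, value in spec.items()
               if key != "physical_network")

spec = {"devname": "eth4", "physical_network": "physnet1"}

vf_on_eth4 = {"devname": "eth4", "address": "0000:04:00.1"}
vf_on_eth5 = {"devname": "eth5", "address": "0000:04:00.4"}

# Expected: only eth4 VFs are eligible. The observed behaviour in this bug
# is that devname is effectively ignored and eth5 VFs get used as well.
print(device_matches_spec(vf_on_eth4, spec))  # -> True
print(device_matches_spec(vf_on_eth5, spec))  # -> False
```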

** Affects: nova
 Importance: Undecided
 Status: New

** Project changed: neutron => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572826

Title:
  dev name in the pci whitelist is not honored for SRIOV

Status in OpenStack Compute (nova):
  New

Bug description:
  dev name in the pci whitelist is not honored for SRIOV

  Steps to reproduce:

  1. Enable SR-IOV in the BIOS. In my case I have a Mellanox card with a dual-port NIC, which shows up in the OS as eth4 and eth5.
  2. Provide the PCI whitelist in nova.conf:
  pci_passthrough_whitelist = {"devname":"eth4","physical_network":"physnet1"}
  3. The mlx4_core options file is set as: options mlx4_core port_type_array=2,2 num_vfs=6 probe_vf=6 enable_64b_cqe_eqe=0 log_num_mgm_entry_size=-1

  However, the behavior seen is that irrespective of the devname specified, the tenant VM gets booted onto eth4 or eth5.

  Tested the issue with Mitaka code. I am attaching the nova logs and
  local.conf for your reference.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1572826/+subscriptions



[Yahoo-eng-team] [Bug 1572509] Re: Nova boot fails when freed SRIOV port is used for booting

2016-04-20 Thread Preethi Dsilva
** Project changed: nova => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572509

Title:
  Nova boot fails when freed SRIOV port is used for booting

Status in neutron:
  New

Bug description:
  Nova boot fails when freed SRIOV port is used for booting

  steps to reproduce:
  ==
  1. Create an SR-IOV port.
  2. Boot a VM --> boot is successful and the VM gets an IP.
  3. Now delete the VM using nova delete -- successful (the MAC is released from the VF).
  4. Using the port created in step 1, boot a new VM.

  The VM fails to boot with the following error:
  2016-04-20 10:55:05.005 TRACE nova.compute.manager [instance: a47344fb-3254-4409-9c23-55e3cde693d9] hostname=instance.hostname)
  2016-04-20 10:55:05.005 TRACE nova.compute.manager [instance: a47344fb-3254-4409-9c23-55e3cde693d9] PortNotUsableDNS: Port 7219a612-014e-452e-b79a-19c87cc33db4 not usable for instance a47344fb-3254-4409-9c23-55e3cde693d9. Value vm4 assigned to dns_name attribute does not match instance's hostname vmtest4
  2016-04-20 10:55:05.005 TRACE nova.compute.manager [instance: a47344fb-3254-4409-9c23-55e3cde693d9]

  Expected:
  =
  As the port is unbound in step 3, we should be able to bind it in step 4.

  The setup consists of a controller and a compute node with a Mellanox card
  enabled for SR-IOV. An Ubuntu 14.04 qcow2 image is used for the tenant VM boot.

  Tested the above with Mitaka code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1572509/+subscriptions



[Yahoo-eng-team] [Bug 1572509] [NEW] Nova boot fails when freed SRIOV port is used for booting

2016-04-20 Thread Preethi Dsilva
Public bug reported:

Nova boot fails when freed SRIOV port is used for booting

steps to reproduce:
==
1. Create an SR-IOV port.
2. Boot a VM --> boot is successful and the VM gets an IP.
3. Now delete the VM using nova delete -- successful (the MAC is released from the VF).
4. Using the port created in step 1, boot a new VM.

The VM fails to boot with the following error:
2016-04-20 10:55:05.005 TRACE nova.compute.manager [instance: a47344fb-3254-4409-9c23-55e3cde693d9] hostname=instance.hostname)
2016-04-20 10:55:05.005 TRACE nova.compute.manager [instance: a47344fb-3254-4409-9c23-55e3cde693d9] PortNotUsableDNS: Port 7219a612-014e-452e-b79a-19c87cc33db4 not usable for instance a47344fb-3254-4409-9c23-55e3cde693d9. Value vm4 assigned to dns_name attribute does not match instance's hostname vmtest4
2016-04-20 10:55:05.005 TRACE nova.compute.manager [instance: a47344fb-3254-4409-9c23-55e3cde693d9]

Expected:
=
As the port is unbound in step 3, we should be able to bind it in step 4.

The setup consists of a controller and a compute node with a Mellanox card
enabled for SR-IOV. An Ubuntu 14.04 qcow2 image is used for the tenant VM boot.

Tested the above with Mitaka code.
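The trace points at the dns_name check: the freed port still carries dns_name vm4 from the first instance, which no longer matches the new instance's hostname vmtest4. A minimal illustrative model of that check (not actual Nova code):

```python
# Illustrative model of the PortNotUsableDNS check; not actual Nova code.

def port_usable_for_instance(port_dns_name, instance_hostname):
    """A port with a dns_name set is only usable if it matches the hostname."""
    return not port_dns_name or port_dns_name == instance_hostname

# The reused port still carries the old VM's dns_name:
print(port_usable_for_instance("vm4", "vmtest4"))  # -> False (boot fails)

# Clearing (or updating) dns_name when the port is unbound would allow reuse:
print(port_usable_for_instance("", "vmtest4"))     # -> True
```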

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1572509

Title:
  Nova boot fails when freed SRIOV port is used for booting

Status in OpenStack Compute (nova):
  New

Bug description:
  Nova boot fails when freed SRIOV port is used for booting

  steps to reproduce:
  ==
  1. Create an SR-IOV port.
  2. Boot a VM --> boot is successful and the VM gets an IP.
  3. Now delete the VM using nova delete -- successful (the MAC is released from the VF).
  4. Using the port created in step 1, boot a new VM.

  The VM fails to boot with the following error:
  2016-04-20 10:55:05.005 TRACE nova.compute.manager [instance: a47344fb-3254-4409-9c23-55e3cde693d9] hostname=instance.hostname)
  2016-04-20 10:55:05.005 TRACE nova.compute.manager [instance: a47344fb-3254-4409-9c23-55e3cde693d9] PortNotUsableDNS: Port 7219a612-014e-452e-b79a-19c87cc33db4 not usable for instance a47344fb-3254-4409-9c23-55e3cde693d9. Value vm4 assigned to dns_name attribute does not match instance's hostname vmtest4
  2016-04-20 10:55:05.005 TRACE nova.compute.manager [instance: a47344fb-3254-4409-9c23-55e3cde693d9]

  Expected:
  =
  As the port is unbound in step 3, we should be able to bind it in step 4.

  The setup consists of a controller and a compute node with a Mellanox card
  enabled for SR-IOV. An Ubuntu 14.04 qcow2 image is used for the tenant VM boot.

  Tested the above with Mitaka code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1572509/+subscriptions



[Yahoo-eng-team] [Bug 1560506] [NEW] Nova does not honour security group specified during nova boot

2016-03-22 Thread Preethi Dsilva
Public bug reported:

Nova does not honour the security group specified during nova boot

Description:

1. Created a network n1.
2. Created a neutron port on network n1.
3. Created a custom security group s1 with an ICMP rule.
4. Booted a VM with the port created in step 2 and the security group specified as s1.

We see that at first the security group s1 is applied; however, once the VM
is active, it is overwritten by the default security group. Because of this
the VM is not pingable (I do not have an ICMP rule in my default security
group).

While booting the VM, Nova should perform a neutron port update, as the
security group is explicitly mentioned.

This issue was found in Liberty; however, it may be the same issue in Mitaka as well.
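The expected behaviour can be stated as a small model (illustrative only; the real logic spans Nova and Neutron): a security group passed explicitly at boot should win over the default group on the pre-created port.

```python
# Illustrative model only; not the actual Nova/Neutron code paths.

def expected_port_security_groups(requested_sgs, port_sgs):
    """If boot explicitly names security groups, the port should be updated to them."""
    if requested_sgs:
        return requested_sgs
    return port_sgs or ["default"]

# Booting with the pre-created port and an explicit group s1:
print(expected_port_security_groups(["s1"], []))  # -> ['s1']

# Observed behaviour in this bug: the port ends up with ['default'] instead,
# so ICMP is blocked because the default group has no ICMP rule.
```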

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1560506

Title:
  Nova does not honour  security  group specified during nova boot

Status in OpenStack Compute (nova):
  New

Bug description:
  Nova does not honour the security group specified during nova boot

  Description:
  
  1. Created a network n1.
  2. Created a neutron port on network n1.
  3. Created a custom security group s1 with an ICMP rule.
  4. Booted a VM with the port created in step 2 and the security group specified as s1.

  We see that at first the security group s1 is applied; however, once the VM
  is active, it is overwritten by the default security group. Because of this
  the VM is not pingable (I do not have an ICMP rule in my default security
  group).

  While booting the VM, Nova should perform a neutron port update, as the
  security group is explicitly mentioned.

  This issue was found in Liberty; however, it may be the same issue in Mitaka as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1560506/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558392] [NEW] List command in Neutron throws list index out of range error, when there are no entries for an object

2016-03-19 Thread Preethi Dsilva
Public bug reported:

On removing the last entry of an object (say a network or floating IP), if the
corresponding Neutron list command is issued, it throws a "list index out of range" error:
stack@hlm:~$ neutron net-list
+--------------------------------------+------+-------------------------------------------------+
| id                                   | name | subnets                                         |
+--------------------------------------+------+-------------------------------------------------+
| 206d8033-a577-488f-b6bc-5558e37e0e54 | n1   | ac5353b7-edb5-4138-9be4-b84bc22b9718 1.1.1.0/24 |
+--------------------------------------+------+-------------------------------------------------+
stack@hlm:~$ neutron net-delete n1
Deleted network: n1
stack@hlm:~$ neutron net-list
list index out of range
stack@hlm:~$
stack@hlm:~$ neutron floatingip-list
list index out of range
stack@hlm:~$
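An illustrative sketch (not the actual python-neutronclient code) of the failure mode: the list formatter derives its table columns from row 0 of the result, so an empty result raises IndexError, which the CLI surfaces as the bare "list index out of range" message shown above.

```python
# Illustrative sketch of the crash: deriving columns from rows[0]
# without first checking whether the listing returned anything.

def list_columns(rows):
    """Derive table columns from the first row, tolerating empty results."""
    if not rows:  # the guard the buggy code path is missing
        return []
    return sorted(rows[0].keys())


print(list_columns([{"id": "206d8033", "name": "n1"}]))
print(list_columns([]))  # no crash on an empty listing
```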

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: liberty-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558392

Title:
  List command in Neutron throws list index out of range error, when
  there are no entries for an object

Status in neutron:
  New

Bug description:
  On removing the last entry of an object (say a network or floating IP), if the
  corresponding Neutron list command is issued, it throws a "list index out of range" error:
  stack@hlm:~$ neutron net-list
  +--------------------------------------+------+-------------------------------------------------+
  | id                                   | name | subnets                                         |
  +--------------------------------------+------+-------------------------------------------------+
  | 206d8033-a577-488f-b6bc-5558e37e0e54 | n1   | ac5353b7-edb5-4138-9be4-b84bc22b9718 1.1.1.0/24 |
  +--------------------------------------+------+-------------------------------------------------+
  stack@hlm:~$ neutron net-delete n1
  Deleted network: n1
  stack@hlm:~$ neutron net-list
  list index out of range
  stack@hlm:~$
  stack@hlm:~$ neutron floatingip-list
  list index out of range
  stack@hlm:~$

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1558392/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461876] [NEW] Controller stacking fails due to ERROR: openstack Internal Server Error (HTTP 500)

2015-06-04 Thread Preethi Dsilva
Public bug reported:

Stacking of the controller fails with "ERROR: openstack Internal Server Error
(HTTP 500)" while creating the OpenStack project, which results in no project
or user being created.
I am attaching the local.conf and stack.sh.log files for your reference.
Here the controller is stacked from tag 2014.2.3.

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: 2014.2.3

** Attachment added: "stack.sh.log.2015-06-04-094543"
   
https://bugs.launchpad.net/bugs/1461876/+attachment/4409730/+files/stack.sh.log.2015-06-04-094543

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1461876

Title:
  Controller stacking fails due to ERROR: openstack Internal Server
  Error (HTTP 500)

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Stacking of the controller fails with "ERROR: openstack Internal Server Error
  (HTTP 500)" while creating the OpenStack project, which results in no project
  or user being created.
  I am attaching the local.conf and stack.sh.log files for your reference.
  Here the controller is stacked from tag 2014.2.3.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1461876/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461847] [NEW] Stacking controller fails due to error: 'wsgiref' is not in global-requirements.txt 2014.2.3

2015-06-04 Thread Preethi Dsilva
Public bug reported:

Stacking of the controller fails at the nova update with the error "'wsgiref'
is not in global-requirements.txt". The system is built from the 2014.2.3 tag.

2015-06-04 08:37:28.696 | + timeout -s SIGINT 0 git clone 
https://git.openstack.org/openstack/nova.git /opt/stack/nova
2015-06-04 08:37:28.697 | Cloning into '/opt/stack/nova'...
2015-06-04 08:40:02.109 | + cd /opt/stack/nova
2015-06-04 08:40:02.110 | + git checkout 2014.2.3
2015-06-04 08:40:02.563 | Note: checking out '2014.2.3'.
2015-06-04 08:40:02.563 |
2015-06-04 08:40:02.563 | You are in 'detached HEAD' state. You can look 
around, make experimental
2015-06-04 08:40:02.563 | changes and commit them, and you can discard any 
commits you make in this
2015-06-04 08:40:02.563 | state without impacting any branches by performing 
another checkout.
2015-06-04 08:40:02.563 |
2015-06-04 08:40:02.563 | If you want to create a new branch to retain commits 
you create, you may
2015-06-04 08:40:02.563 | do so (now or later) by using -b with the checkout 
command again. Example:
2015-06-04 08:40:02.563 |
2015-06-04 08:40:02.563 |   git checkout -b new_branch_name
2015-06-04 08:40:02.563 |
2015-06-04 08:40:02.563 | HEAD is now at e6452b9... Updated from global 
requirements
2015-06-04 08:40:02.567 | + cd /opt/stack/nova
2015-06-04 08:40:02.567 | + git show --oneline
2015-06-04 08:40:02.569 | + head -1
2015-06-04 08:40:02.571 | e6452b9 Updated from global requirements
2015-06-04 08:40:02.572 | + cd /opt/devstack
2015-06-04 08:40:02.572 | + setup_develop /opt/stack/nova
2015-06-04 08:40:02.572 | + local project_dir=/opt/stack/nova
2015-06-04 08:40:02.572 | + setup_package_with_req_sync /opt/stack/nova -e
2015-06-04 08:40:02.572 | + local project_dir=/opt/stack/nova
2015-06-04 08:40:02.572 | + local flags=-e
2015-06-04 08:40:02.572 | ++ cd /opt/stack/nova
2015-06-04 08:40:02.573 | ++ git diff --exit-code
2015-06-04 08:40:02.690 | + local update_requirements=
2015-06-04 08:40:02.690 | + [[ '' != \c\h\a\n\g\e\d ]]
2015-06-04 08:40:02.690 | + [[ strict == \s\o\f\t ]]
2015-06-04 08:40:02.691 | + cd /opt/stack/requirements
2015-06-04 08:40:02.691 | + python update.py /opt/stack/nova
2015-06-04 08:40:02.822 | 'wsgiref' is not in global-requirements.txt
2015-06-04 08:40:02.822 | Traceback (most recent call last):
2015-06-04 08:40:02.822 |   File "update.py", line 274, in <module>
2015-06-04 08:40:02.822 |     main()
2015-06-04 08:40:02.822 |   File "update.py", line 269, in main
2015-06-04 08:40:02.823 |     args[0], stdout=stdout, source=options.source)
2015-06-04 08:40:02.823 |   File "update.py", line 226, in _copy_requires
2015-06-04 08:40:02.823 |     source_reqs, dest_path, suffix, softupdate, hacking, stdout)
2015-06-04 08:40:02.823 |   File "update.py", line 199, in _sync_requirements_file
2015-06-04 08:40:02.823 |     raise Exception("nonstandard requirement present.")
2015-06-04 08:40:02.824 | Exception: nonstandard requirement present.
2015-06-04 08:40:02.829 | + exit_trap
2015-06-04 08:40:02.829 | + local r=1
2015-06-04 08:40:02.829 | ++ jobs -p
2015-06-04 08:40:02.831 | + jobs=
2015-06-04 08:40:02.831 | + [[ -n '' ]]
2015-06-04 08:40:02.831 | + kill_spinner
2015-06-04 08:40:02.831 | + '[' '!' -z '' ']'
2015-06-04 08:40:02.831 | + [[ 1 -ne 0 ]]
2015-06-04 08:40:02.831 | + echo 'Error on exit'
2015-06-04 08:40:02.831 | Error on exit
2015-06-04 08:40:02.832 | + [[ -z /opt/stack/logs ]]
2015-06-04 08:40:02.832 | + /opt/devstack/tools/worlddump.py -d /opt/stack/logs
2015-06-04 08:40:03.015 | + exit 1
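An illustrative sketch, not the actual update.py: the sync step walks the project's requirements and aborts on any name missing from global-requirements.txt, which is how 'wsgiref' (a Python 2 standard-library module that no longer appears in the global list) trips it.

```python
# Illustrative sketch of the requirements sync check that produced the
# failure above: any requirement absent from global-requirements.txt is
# treated as nonstandard and aborts the whole update.

def sync_requirements(project_reqs, global_reqs):
    """Keep only requirements that global-requirements.txt knows about."""
    synced = []
    for name in project_reqs:
        if name not in global_reqs:
            print("'%s' is not in global-requirements.txt" % name)
            raise Exception("nonstandard requirement present.")
        synced.append(name)
    return synced


print(sync_requirements(["pbr", "six"], {"pbr", "six", "lockfile"}))
```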

** Affects: nova-project
 Importance: Undecided
 Status: New


** Tags: 2014.2.3

** Attachment added: "stack.sh.log.2015-06-04-083140"
   
https://bugs.launchpad.net/bugs/1461847/+attachment/4409673/+files/stack.sh.log.2015-06-04-083140

** Project changed: keystone => nova-project

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1461847

Title:
  Stacking controller fails due to error: 'wsgiref' is not in global-
  requirements.txt 2014.2.3

Status in The Nova Project:
  New

Bug description:
  Stacking of the controller fails at the nova update with the error "'wsgiref'
  is not in global-requirements.txt". The system is built from the 2014.2.3 tag.

  2015-06-04 08:37:28.696 | + timeout -s SIGINT 0 git clone 
https://git.openstack.org/openstack/nova.git /opt/stack/nova
  2015-06-04 08:37:28.697 | Cloning into '/opt/stack/nova'...
  2015-06-04 08:40:02.109 | + cd /opt/stack/nova
  2015-06-04 08:40:02.110 | + git checkout 2014.2.3
  2015-06-04 08:40:02.563 | Note: checking out '2014.2.3'.
  2015-06-04 08:40:02.563 |
  2015-06-04 08:40:02.563 | You are in 'detached HEAD' state. You can look 
around, make experimental
  2015-06-04 08:40:02.563 | changes and commit them, and you can discard any 
commits you make in this
  2015-06-04 08:40:02.563 | state without impacting any branches by performing 
another checkout.
  2015-06-04 08:40:02.563 |
  2015-06-04 08:40:02.5

[Yahoo-eng-team] [Bug 1461835] [NEW] devstack fails due to error:'lockfile' is not in global-requirements.txt" 2014.2.3

2015-06-04 Thread Preethi Dsilva
Public bug reported:

Stacking of the controller fails at the keystone update with the error
"'lockfile' is not in global-requirements.txt". The system is built from the
2014.2.3 tag.


Log:

2015-06-04 05:01:36.180 | + git checkout 2014.2.3
2015-06-04 05:01:36.272 | Note: checking out '2014.2.3'.
2015-06-04 05:01:36.272 |
2015-06-04 05:01:36.272 | You are in 'detached HEAD' state. You can look 
around, make experimental
2015-06-04 05:01:36.272 | changes and commit them, and you can discard any 
commits you make in this
2015-06-04 05:01:36.272 | state without impacting any branches by performing 
another checkout.
2015-06-04 05:01:36.272 |
2015-06-04 05:01:36.272 | If you want to create a new branch to retain commits 
you create, you may
2015-06-04 05:01:36.272 | do so (now or later) by using -b with the checkout 
command again. Example:
2015-06-04 05:01:36.272 |
2015-06-04 05:01:36.272 |   git checkout -b new_branch_name
2015-06-04 05:01:36.272 |
2015-06-04 05:01:36.272 | HEAD is now at 91a3387... Merge "Updated from global 
requirements" into stable/juno
2015-06-04 05:01:36.274 | + cd /opt/stack/keystone
2015-06-04 05:01:36.274 | + git show --oneline
2015-06-04 05:01:36.275 | + head -1
2015-06-04 05:01:36.279 | 91a3387 Merge "Updated from global requirements" into 
stable/juno
2015-06-04 05:01:36.280 | + cd /opt/devstack
2015-06-04 05:01:36.280 | + setup_develop /opt/stack/keystone
2015-06-04 05:01:36.280 | + local project_dir=/opt/stack/keystone
2015-06-04 05:01:36.280 | + setup_package_with_req_sync /opt/stack/keystone -e
2015-06-04 05:01:36.280 | + local project_dir=/opt/stack/keystone
2015-06-04 05:01:36.281 | + local flags=-e
2015-06-04 05:01:36.281 | ++ cd /opt/stack/keystone
2015-06-04 05:01:36.281 | ++ git diff --exit-code
2015-06-04 05:01:36.309 | + local update_requirements=
2015-06-04 05:01:36.309 | + [[ '' != \c\h\a\n\g\e\d ]]
2015-06-04 05:01:36.310 | + [[ strict == \s\o\f\t ]]
2015-06-04 05:01:36.310 | + cd /opt/stack/requirements
2015-06-04 05:01:36.310 | + python update.py /opt/stack/keystone
2015-06-04 05:01:36.410 | Version change for: pbr, webob, eventlet, greenlet, 
netaddr, pastedeploy, paste, routes, six, sqlalchemy, sqlalchemy-migrate, 
passlib, iso8601, python-keystoneclient, keystonemiddleware, oslo.config, 
oslo.messaging, oslo.db, oslo.i18n, oslo.serialization, oslo.utils, babel, 
oauthlib, dogpile.cache, pycadf, posix-ipc
2015-06-04 05:01:36.410 | Updated /opt/stack/keystone/requirements.txt:
2015-06-04 05:01:36.410 | pbr>=0.6,!=0.7,<1.0->   pbr>=0.11,<2.0
2015-06-04 05:01:36.410 | WebOb>=1.2.3,<=1.3.1   ->   WebOb>=1.2.3
2015-06-04 05:01:36.411 | eventlet>=0.15.1,<=0.15.2  ->   
eventlet>=0.17.3
2015-06-04 05:01:36.411 | greenlet>=0.3.2,<=0.4.2->   
greenlet>=0.3.2
2015-06-04 05:01:36.411 | netaddr>=0.7.12,<=0.7.13   ->   
netaddr>=0.7.12
2015-06-04 05:01:36.411 | PasteDeploy>=1.5.0,<=1.5.2 ->   
PasteDeploy>=1.5.0
2015-06-04 05:01:36.411 | Paste<=1.7.5.1 ->   Paste
2015-06-04 05:01:36.411 | Routes>=1.12.3,!=2.0,<=2.1 ->   
Routes>=1.12.3,!=2.0
2015-06-04 05:01:36.411 | six>=1.7.0,<=1.9.0 ->   six>=1.9.0
2015-06-04 05:01:36.411 | SQLAlchemy>=0.8.4,<=0.9.99,!=0 ->   
SQLAlchemy>=0.9.7,<=0.9.99
2015-06-04 05:01:36.411 | sqlalchemy-migrate==0.9.1  ->   
sqlalchemy-migrate>=0.9.6
2015-06-04 05:01:36.411 | passlib<=1.6.2 ->   passlib
2015-06-04 05:01:36.411 | iso8601>=0.1.9,<=0.1.10->   iso8601>=0.1.9
2015-06-04 05:01:36.411 | python-keystoneclient>=0.10.0, ->   
python-keystoneclient>=1.3.0
2015-06-04 05:01:36.411 | keystonemiddleware>=1.0.0,<1.4 ->   
keystonemiddleware>=1.5.0
2015-06-04 05:01:36.411 | oslo.config>=1.4.0,<=1.6.0 # A ->   
oslo.config>=1.11.0  # Apache-2.0
2015-06-04 05:01:36.411 | oslo.messaging>=1.4.0,<1.5.0   ->   
oslo.messaging>=1.8.0  # Apache-2.0
2015-06-04 05:01:36.411 | oslo.db>=1.0.0,<1.1  # Apache- ->   
oslo.db>=1.10.0  # Apache-2.0
2015-06-04 05:01:36.411 | oslo.i18n>=1.0.0,<=1.3.1 # Apa ->   
oslo.i18n>=1.5.0  # Apache-2.0
2015-06-04 05:01:36.411 | oslo.serialization>=1.0.0,<=1. ->   
oslo.serialization>=1.4.0   # Apache-2.0
2015-06-04 05:01:36.411 | oslo.utils>=1.0.0,<=1.2.1 # Ap ->   
oslo.utils>=1.4.0   # Apache-2.0
2015-06-04 05:01:36.411 | Babel>=1.3,<=1.3   ->   Babel>=1.3
2015-06-04 05:01:36.411 | oauthlib>=0.6,<=0.7.2  ->   oauthlib>=0.6
2015-06-04 05:01:36.411 | dogpile.cache>=0.5.3,<=0.5.6   ->   
dogpile.cache>=0.5.3
2015-06-04 05:01:36.411 | pycadf>=0.6.0,<0.7.0  # Apache ->   pycadf>=0.8.0
2015-06-04 05:01:36.411 | posix_ipc<=0.9.9   ->   posix_ipc
2015-06-04 05:01:36.411 | 'lockfile' is not in global-requirements.txt
2015-06-04 05:01:36.411 | Traceback (most recent call last):
2015-06-04 05:01:36.411 |   File "update.py", line 274, in <module>
2015-06-04 05:01:36.412 | main()
2015-06-04 05:01

[Yahoo-eng-team] [Bug 1433501] [NEW] q-agt service fails to come up

2015-03-18 Thread Preethi Dsilva
Public bug reported:

description:
=
I am trying to bring up DevStack with the controller and network node in a
single DevStack.
I performed a git pull of the devstack, neutron, nova, and networking-vsphere
folders.
While stacking the controller, the q-agt service fails to come up, resulting in
a stack failure.

The local.conf for the run is:
=
[[local|localrc]]

OFFLINE=true
#RECLONE=true
#PUBLIC_INTERFACE=eth2
HOST_IP=
DATA_IP=
GIT_BASE=http://review.openstack.org/p
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=tokentoken


enable_plugin networking-vsphere 
http://git.openstack.org/stackforge/networking-vsphere
USE_SCREEN=True
OVSVAPP_MODE=server

# Network settings
# Use VLAN to segregate the virtual networks
ENABLE_TENANT_VLANS=True
TENANT_VLAN_RANGE=1000:2999
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-eth1
OVS_BRIDGE_MAPPINGS=physnet1:br-eth1,exnet:br-ex

Q_PLUGIN=ml2
Q_AGENT=openvswitch
Q_ML2_TENANT_NETWORK_TYPE=vlan
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,l2population,ovsvapp

#FIXED_RANGE=10.4.128.0/20
#FIXED_NETWORK_SIZE=4096
#FLOATING_RANGE=10.201.0.128/25
#NETWORK_GATEWAY=10.4.128.1
#PUBLIC_NETWORK_GATEWAY=10.201.0.129

#MULTI_HOST=1
LOGFILE=/opt/stack/logs/stack.sh.log
DATABASE_TYPE=mysql
SERVICE_HOST=
MYSQL_HOST=
RABBIT_HOST=
Q_HOST=
#GLANCE_HOSTPORT=$SERVICE_HOST:9292
SCREEN_LOGDIR=$DEST/logs/screen

#Nova services
disable_service n-net
enable_service=q-l3
enable_service=ovsvapp-compute
ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta,neutron,rabbit,n-cpu,n-api,n-novnc,n-xvnc,ovsvapp-compute

VIRT_DRIVER=vsphere
VMWAREAPI_IP=
VMWAREAPI_USER=root
VMWAREAPI_PASSWORD=vmware
VMWAREAPI_CLUSTER=Cluster2

[[post-config|/$Q_PLUGIN_CONF_FILE]]
[agent]
l2_population=TRUE
#keystone setting
KEYSTONE_TOKEN_FORMAT=UUID

[[post-config|$NEUTRON_CONF]]
[DEFAULT]
service_plugins= neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
base_mac = fa:16:3e:02:00:00

The logs are placed at:
http://15.126.220.115/1410/137017/be72b811ecbdfa021ebb94040eb0ac5ea234e797/

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1433501

Title:
  q-agt service fails to come up

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  description:
  =
  I am trying to bring up DevStack with the controller and network node in a
  single DevStack.
  I performed a git pull of the devstack, neutron, nova, and networking-vsphere
  folders.
  While stacking the controller, the q-agt service fails to come up, resulting
  in a stack failure.

  The local.conf for the run is:
  =
  [[local|localrc]]

  OFFLINE=true
  #RECLONE=true
  #PUBLIC_INTERFACE=eth2
  HOST_IP=
  DATA_IP=
  GIT_BASE=http://review.openstack.org/p
  ADMIN_PASSWORD=password
  MYSQL_PASSWORD=password
  RABBIT_PASSWORD=password
  SERVICE_PASSWORD=password
  SERVICE_TOKEN=tokentoken

  
  enable_plugin networking-vsphere 
http://git.openstack.org/stackforge/networking-vsphere
  USE_SCREEN=True
  OVSVAPP_MODE=server

  # Network settings
  # Use VLAN to segregate the virtual networks
  ENABLE_TENANT_VLANS=True
  TENANT_VLAN_RANGE=1000:2999
  PHYSICAL_NETWORK=physnet1
  OVS_PHYSICAL_BRIDGE=br-eth1
  OVS_BRIDGE_MAPPINGS=physnet1:br-eth1,exnet:br-ex

  Q_PLUGIN=ml2
  Q_AGENT=openvswitch
  Q_ML2_TENANT_NETWORK_TYPE=vlan
  Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,l2population,ovsvapp

  #FIXED_RANGE=10.4.128.0/20
  #FIXED_NETWORK_SIZE=4096
  #FLOATING_RANGE=10.201.0.128/25
  #NETWORK_GATEWAY=10.4.128.1
  #PUBLIC_NETWORK_GATEWAY=10.201.0.129

  #MULTI_HOST=1
  LOGFILE=/opt/stack/logs/stack.sh.log
  DATABASE_TYPE=mysql
  SERVICE_HOST=
  MYSQL_HOST=
  RABBIT_HOST=
  Q_HOST=
  #GLANCE_HOSTPORT=$SERVICE_HOST:9292
  SCREEN_LOGDIR=$DEST/logs/screen

  #Nova services
  disable_service n-net
  enable_service=q-l3
  enable_service=ovsvapp-compute
  
ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta,neutron,rabbit,n-cpu,n-api,n-novnc,n-xvnc,ovsvapp-compute

  VIRT_DRIVER=vsphere
  VMWAREAPI_IP=
  VMWAREAPI_USER=root
  VMWAREAPI_PASSWORD=vmware
  VMWAREAPI_CLUSTER=Cluster2

  [[post-config|/$Q_PLUGIN_CONF_FILE]]
  [agent]
  l2_population=TRUE
  #keystone setting
  KEYSTONE_TOKEN_FORMAT=UUID

  [[post-config|$NEUTRON_CONF]]
  [DEFAULT]
  service_plugins= neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
  base_mac = fa:16:3e:02:00:00

  The logs are placed at:
  http://15.126.220.115/1410/137017/be72b811ecbdfa021ebb94040eb0ac5ea234e797/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1433501/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355087] [NEW] when an interface is added after router gateway set, external connectivity using snat fails

2014-08-11 Thread Preethi Dsilva
Public bug reported:

1. Create a network and a subnet.
2. Create a DVR and attach the subnet.
3. Create an external network and attach the router gateway.
4. Now boot a VM in that subnet.
5. Ping the external network - successful.
6. Create a new network and subnet; attach it to the router created in step 2.
7. Boot a VM and ping the external network - fails.
8. Try to ping the external network using the VM created in step 4 - fails.

Reason:
===
When a new subnet is added, all the sg ports inside the SNAT namespace are
updated with the default gateway of the newly added subnet.
Say I had subnet 4.4.4.0/24 already attached to the router, and its sg port had
IP 4.4.4.2. Now, when I add a new subnet, say 5.5.5.0/24, to this router, the
sg port of 4.4.4.0/24 becomes 5.5.5.1, and the sg IP of 5.5.5.0/24 also becomes
5.5.5.1 (even though 5.5.5.1 has device owner
network:router_interface_distributed and 5.5.5.2 has device owner
network:router_centralized_snat).

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1355087

Title:
  when an interface is added after router gateway set, external
  connectivity using snat fails

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  1. Create a network and a subnet.
  2. Create a DVR and attach the subnet.
  3. Create an external network and attach the router gateway.
  4. Now boot a VM in that subnet.
  5. Ping the external network - successful.
  6. Create a new network and subnet; attach it to the router created in step 2.
  7. Boot a VM and ping the external network - fails.
  8. Try to ping the external network using the VM created in step 4 - fails.

  Reason:
  ===
  When a new subnet is added, all the sg ports inside the SNAT namespace are
  updated with the default gateway of the newly added subnet.
  Say I had subnet 4.4.4.0/24 already attached to the router, and its sg port
  had IP 4.4.4.2. Now, when I add a new subnet, say 5.5.5.0/24, to this router,
  the sg port of 4.4.4.0/24 becomes 5.5.5.1, and the sg IP of 5.5.5.0/24 also
  becomes 5.5.5.1 (even though 5.5.5.1 has device owner
  network:router_interface_distributed and 5.5.5.2 has device owner
  network:router_centralized_snat).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1355087/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1354097] [NEW] With agent_mode=dvr_snat on a service node, router gateway set results in DVR namespaces being created on all nodes

2014-08-07 Thread Preethi Dsilva
Public bug reported:

1. Create two networks, with a subnet in each network.
2. Create a DVR and attach the subnets to the DVR.
3. Create an external network.
On setting the router gateway, the DVR is hosted on all nodes with
agent_mode=dvr_snat and agent_mode=dvr, even though no VMs are hosted on these
subnets.

My setup consists of a controller, a node with agent_mode=dvr_snat, and
two compute nodes with agent_mode=dvr.

Expected results:
--
On setting the router gateway, the SNAT namespace and DVR namespace can be
created on the node with dvr_snat; however, they should not be hosted on a node
with agent_mode=dvr until a VM is hosted in the subnet attached on that node.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1354097

Title:
  With agent_mode=dvr_snat on a service node, router gateway set results
  in DVR namespaces being created on all nodes

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  1. Create two networks, with a subnet in each network.
  2. Create a DVR and attach the subnets to the DVR.
  3. Create an external network.
  On setting the router gateway, the DVR is hosted on all nodes with
  agent_mode=dvr_snat and agent_mode=dvr, even though no VMs are hosted on
  these subnets.

  My setup consists of a controller, a node with agent_mode=dvr_snat, and
  two compute nodes with agent_mode=dvr.

  Expected results:
  --
  On setting the router gateway, the SNAT namespace and DVR namespace can be
  created on the node with dvr_snat; however, they should not be hosted on a
  node with agent_mode=dvr until a VM is hosted in the subnet attached on that
  node.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1354097/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1353266] [NEW] Router update creates router namespaces on nodes even though no VM is hosted for attached subnets

2014-08-05 Thread Preethi Dsilva
Public bug reported:

1. Create two networks, with a subnet on each network.
2. Create a DVR and attach these subnets to the DVR.
3. Now perform a router update (say, adding an extra route).
4. We see that a DVR router namespace is created on all nodes with
agent_mode=dvr_snat or agent_mode=dvr.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1353266

Title:
  Router update creates router namespaces on nodes even though no VM is
  hosted for attached subnets

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  1. Create two networks, with a subnet on each network.
  2. Create a DVR and attach these subnets to the DVR.
  3. Now perform a router update (say, adding an extra route).
  4. We see that a DVR router namespace is created on all nodes with
  agent_mode=dvr_snat or agent_mode=dvr.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1353266/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1352808] [NEW] when enable_snat is updated from false to true in a DVR, the vm fails to reach the external network using snat

2014-08-05 Thread Preethi Dsilva
Public bug reported:

1. Create a network, subnet, and DVR, and perform router interface add.
2. Boot a VM.
3. Attach the router gateway with disable-snat.
4. Now update the router with enable-snat=true.

The update happens; however, we see that an extra sg port is created in the
SNAT namespace for the interface added on router update, which causes external
network unreachability for the VM (here iptables rules are added for the VM
on router update).

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1352808

Title:
  when enable_snat is updated from false to true in a DVR, the vm fails to
  reach the external network using snat

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  1. Create a network, subnet, and DVR, and perform router interface add.
  2. Boot a VM.
  3. Attach the router gateway with disable-snat.
  4. Now update the router with enable-snat=true.

  The update happens; however, we see that an extra sg port is created in the
  SNAT namespace for the interface added on router update, which causes
  external network unreachability for the VM (here iptables rules are added
  for the VM on router update).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1352808/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1352798] [NEW] when an external network is attached to the router with disable-snat, no snat namespace should be created

2014-08-05 Thread Preethi Dsilva
Public bug reported:

1. Create a network, subnet, and DVR, and attach the subnet to the DVR.
2. Boot a VM; the router namespace will be created.
3. Now set the router gateway with disable-snat.
4. An SNAT namespace is created with sg ports.

The SNAT namespace should not be created, as we have disabled SNAT for this
router.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1352798

Title:
  when an external network is attached to the router with disable-snat, no
  snat namespace should be created

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  1. Create a network, subnet, and DVR, and attach the subnet to the DVR.
  2. Boot a VM; the router namespace will be created.
  3. Now set the router gateway with disable-snat.
  4. An SNAT namespace is created with sg ports.

  The SNAT namespace should not be created, as we have disabled SNAT for this
  router.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1352798/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1352786] [NEW] DVR: once the router gateway is set, if we update the router name, it fails with "Request Failed: internal server error while processing your request" error

2014-08-05 Thread Preethi Dsilva
Public bug reported:

1. Create a DVR with a name, say dvr1.
2. Add the router gateway.
3. Now perform a router update: neutron router-update dvr1 --name dvr2
Actual results:
A "Request Failed: internal server error while processing your request" error
is seen, and a trace related to L3RouterPlugin is seen in the logs.
Expected results:
The router name should be updated.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1352786

Title:
  DVR: once the router gateway is set, if we update the router name, it
  fails with "Request Failed: internal server error while processing your
  request" error

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  1. Create a DVR with a name, say dvr1.
  2. Add the router gateway.
  3. Now perform a router update: neutron router-update dvr1 --name dvr2
  Actual results:
  A "Request Failed: internal server error while processing your request" error
  is seen, and a trace related to L3RouterPlugin is seen in the logs.
  Expected results:
  The router name should be updated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1352786/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1323988] [NEW] Neutron: subnet update with host routes fails to update the routes

2014-05-28 Thread Preethi Dsilva
Public bug reported:

1. Create net1 and net2.
2. Create subnet1, 2.2.2.0/24, with --no-gateway in net1.
3. Create subnet2, 1.1.1.0/24, in net2.
4. Boot a VM in each network.
5. Create a router.
6. Add subnet2 to the router.
7. Now create a port in subnet1 (device_owner=network:router-interface).
8. Attach this port to the router.
9. Now update subnet2 with this IP as the next hop, e.g. neutron subnet-update
42046da7-ac55-4f43-9f67-877b4347a25f --host_routes type=dict list=true
destination=1.1.1.0/24,nexthop=ip of the port created in step 7
10. Restart the VM network.

vm1 fails to reach vm2 (the host routes are not updated).
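The subnet-update above translates to a PUT on the subnet resource with a `host_routes` list of destination/nexthop pairs. A minimal sketch of building that request body (the helper name and the nexthop value are illustrative):

```python
# Minimal sketch of the Neutron API body behind "neutron subnet-update
# ... --host_routes destination=...,nexthop=...". Helper name and the
# 2.2.2.10 nexthop are illustrative, not from the report.

def host_routes_body(routes):
    """Build a subnet-update body from (destination, nexthop) pairs."""
    return {
        "subnet": {
            "host_routes": [
                {"destination": dest, "nexthop": hop} for dest, hop in routes
            ]
        }
    }


print(host_routes_body([("1.1.1.0/24", "2.2.2.10")]))
```

The bug is that Neutron accepts this update but the routes never reach the guests' DHCP-pushed routing tables.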

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1323988

Title:
  Neutron: subnet update with host routes fails to update the routes

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  1. Create net1 and net2.
  2. Create subnet1, 2.2.2.0/24, with --no-gateway in net1.
  3. Create subnet2, 1.1.1.0/24, in net2.
  4. Boot a VM in each network.
  5. Create a router.
  6. Add subnet2 to the router.
  7. Now create a port in subnet1 (device_owner=network:router-interface).
  8. Attach this port to the router.
  9. Now update subnet2 with this IP as the next hop, e.g. neutron subnet-update
  42046da7-ac55-4f43-9f67-877b4347a25f --host_routes type=dict list=true
  destination=1.1.1.0/24,nexthop=ip of the port created in step 7
  10. Restart the VM network.

  vm1 fails to reach vm2 (the host routes are not updated).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1323988/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp