[Yahoo-eng-team] [Bug 1641788] Re: AgentStatusCheckWorker doesn't reset or start after stop

2016-11-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/397497
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=7c7e2418a238d53042b1b30fefe9aea05cf02681
Submitter: Jenkins
Branch: master

commit 7c7e2418a238d53042b1b30fefe9aea05cf02681
Author: Kevin Benton 
Date:   Mon Nov 14 17:53:30 2016 -0800

Fix reset/start methods on AgentStatusCheckWorker

This worker would fail to start again if stop() or reset()
was called on it because of some bad conditional logic. This
doesn't appear to impact the current in-tree use case but
it should behave correctly.

Closes-Bug: #1641788
Change-Id: Id6334c1ef6c99bd112ada31e8fe3746d7e035356


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1641788

Title:
  AgentStatusCheckWorker doesn't reset or start after stop

Status in neutron:
  Fix Released

Bug description:
  The AgentStatusCheckWorker we have in tree doesn't correctly recover
  if the .stop() or .reset() methods are called on it, due to some bad
  conditionals. This doesn't currently impact the in-tree use case, since
  we don't stop and restart the status checkers, but it should be fixed
  so it can be safely re-used elsewhere for periodic workers.
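The restartable behaviour the fix is after can be sketched with a minimal periodic worker. This is a stdlib-only stand-in, not neutron's actual AgentStatusCheckWorker; the class and method names mirror the report but the implementation is illustrative:

```python
import threading


class PeriodicWorker(object):
    """Minimal sketch of a periodic check worker whose stop()/reset()
    leave it restartable (hypothetical stand-in for neutron's
    AgentStatusCheckWorker; details are illustrative only)."""

    def __init__(self, check_func, interval):
        self._check_func = check_func
        self._interval = interval
        self._thread = None
        self._stop_event = threading.Event()

    def start(self):
        # Guard on the thread actually running, not on a stale flag:
        # a worker that was stopped must be allowed to start again.
        if self._thread is not None and self._thread.is_alive():
            return
        self._stop_event.clear()
        self._thread = threading.Thread(target=self._run)
        self._thread.daemon = True
        self._thread.start()

    def _run(self):
        # Run the check every `interval` seconds until stop() is called.
        while not self._stop_event.wait(self._interval):
            self._check_func()

    def stop(self):
        self._stop_event.set()
        if self._thread is not None:
            self._thread.join()
            self._thread = None

    def reset(self):
        # reset == stop then start, which only works if start() does
        # not refuse to run after a previous stop().
        self.stop()
        self.start()
```

The point of the sketch is the start() guard: conditioning on a liveness check rather than a one-shot "started" flag is what keeps stop()/start() and reset() cycles working.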

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1641788/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1082248] Re: Use uuidutils instead of uuid.uuid4()

2016-11-17 Thread Surya Prakash Singh
This needs improvement on the kolla side, too.

** Also affects: kolla
   Importance: Undecided
   Status: New

** Changed in: kolla
   Importance: Undecided => Wishlist

** Changed in: kolla
Milestone: None => ocata-1

** Changed in: kolla
 Assignee: (unassigned) => Surya Prakash Singh (confisurya)

** Changed in: kolla
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1082248

Title:
  Use uuidutils instead of uuid.uuid4()

Status in Cinder:
  In Progress
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Ironic:
  Fix Released
Status in ironic-python-agent:
  Fix Released
Status in kolla:
  In Progress
Status in kuryr:
  In Progress
Status in kuryr-libnetwork:
  In Progress
Status in Magnum:
  In Progress
Status in Mistral:
  Fix Released
Status in Murano:
  In Progress
Status in networking-calico:
  In Progress
Status in networking-ovn:
  Fix Released
Status in networking-sfc:
  In Progress
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in python-muranoclient:
  In Progress
Status in Sahara:
  Fix Released
Status in senlin:
  Fix Released
Status in tacker:
  In Progress
Status in watcher:
  In Progress

Bug description:
  OpenStack common has a wrapper for generating UUIDs.

  For consistency, we should use only that function when generating
  UUIDs.
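The wrapper in question is oslo's uuidutils; its core helpers are thin enough to sketch with the stdlib alone. The sketch below approximates the behaviour of oslo_utils.uuidutils (the real module should be used in OpenStack code):

```python
import uuid


def generate_uuid():
    """Return a random UUID as a canonical string, roughly what
    oslo_utils.uuidutils.generate_uuid() does (stdlib sketch)."""
    return str(uuid.uuid4())


def is_uuid_like(val):
    """True if val parses as a UUID and round-trips to the same
    canonical form (approximation of the oslo helper)."""
    try:
        return str(uuid.UUID(val)).lower() == str(val).lower()
    except (TypeError, ValueError, AttributeError):
        return False
```

Using the wrapper everywhere (instead of calling uuid.uuid4() directly) gives one place to control formatting and makes UUID generation easy to stub in tests.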

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1082248/+subscriptions



[Yahoo-eng-team] [Bug 1630920] Re: native/idl ovsdb driver loses some ovsdb transactions

2016-11-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/383540
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=3d500d36608e83d202c1a6c8438ea5961a7debe1
Submitter: Jenkins
Branch: master

commit 3d500d36608e83d202c1a6c8438ea5961a7debe1
Author: Terry Wilson 
Date:   Thu Oct 6 21:52:56 2016 -0500

Only send string values to OVSDB other_config column

The other_config columns in OVSDB are defined as maps with string
keys and string values. The OVS agent was passing an integer
segmentation id and could pass None as the physical_network.
Unfortunately, the upstream Python OVS library does not pass the
exceptions through to us.

Change-Id: Iafa6be3749b1ee863f5fa71150c708fc46951510
Closes-Bug: #1630920


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1630920

Title:
  native/idl ovsdb driver loses some ovsdb transactions

Status in networking-bgpvpn:
  New
Status in neutron:
  Fix Released

Bug description:
  It seems the 'native' and the 'vsctl' ovsdb drivers behave
  differently. The native/idl driver seems to lose some ovsdb
  transactions, at least the transactions setting the 'other_config' ovs
  port attribute.

  I have written about this in a comment on an earlier bug report
  (https://bugs.launchpad.net/neutron/+bug/1626010). But I opened this
  new bug report because the two problems seem to be independent and
  that other comment may have gone unnoticed.

  It is not completely clear to me what difference this causes in user-
  observable behavior. I think it at least leads to losing information
  about which conntrack zone to use in the openvswitch firewall driver.
  See here:

  
https://github.com/openstack/neutron/blob/3ade301/neutron/agent/linux/openvswitch_firewall/firewall.py#L257

  The details:

  If I use the vsctl ovsdb driver:

  ml2_conf.ini:
  [ovs]
  ovsdb_interface = vsctl

  then I see this:

  $ > /opt/stack/logs/q-agt.log
  $ sudo ovs-vsctl list Port | grep other_config | grep -c net_uuid
  1
  $ openstack server create --flavor cirros256 --image cirros-0.3.4-x86_64-uec --nic net-id=net0 --wait vm0
  $ sudo ovs-vsctl list Port | grep other_config | grep -c net_uuid
  2
  $ openstack server delete vm0
  $ sleep 3
  $ sudo ovs-vsctl list Port | grep other_config | grep -c net_uuid
  1
  $ egrep -c 'Transaction caused no change' /opt/stack/logs/q-agt.log 
  0

  But if I use the (default) native driver:

  ml2_conf.ini:
  [ovs]
  ovsdb_interface = native

  Then this happens:

  $ > /opt/stack/logs/q-agt.log
  $ sudo ovs-vsctl list Port | grep other_config | grep -c net_uuid
  1
  $ openstack server create --flavor cirros256 --image cirros-0.3.4-x86_64-uec --nic net-id=net0 --wait vm0
  $ sudo ovs-vsctl list Port | grep other_config | grep -c net_uuid
  1
  $ openstack server delete vm0
  $ sleep 3
  $ sudo ovs-vsctl list Port | grep other_config | grep -c net_uuid
  1
  $ egrep -c 'Transaction caused no change' /opt/stack/logs/q-agt.log
  22

  A sample log message from q-agt.log:

  2016-10-06 09:23:05.447 DEBUG neutron.agent.ovsdb.impl_idl [-] Running txn command(idx=0): DbSetCommand(table=Port, col_values=(('other_config', {'tag': 1}),), record=tap8e2a390d-63) from (pid=6068) do_commit /opt/stack/neutron/neutron/agent/ovsdb/impl_idl.py:99
  2016-10-06 09:23:05.448 DEBUG neutron.agent.ovsdb.impl_idl [-] Transaction caused no change from (pid=6068) do_commit /opt/stack/neutron/neutron/agent/ovsdb/impl_idl.py:126

  devstack version: 563d377
  neutron version: 3ade301
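The committed fix boils down to sending only string values into OVSDB map columns, since other_config is typed as string->string. As a rough illustration (the helper name and the exact treatment of None are my own, not neutron's actual code):

```python
def stringify_other_config(mapping):
    """Coerce values destined for an OVSDB string->string map column
    (e.g. Port.other_config) to str, dropping None entries.
    Illustrative helper only, not neutron's real implementation."""
    return {key: str(value)
            for key, value in mapping.items()
            if value is not None}


# Example of the problematic input described in the commit message:
# an integer segmentation id and a possibly-None physical_network.
port_other_config = {
    'net_uuid': '8e2a390d-63aa-4a39-9f7e-355e9bb1d18a',  # made-up uuid
    'segmentation_id': 1,      # int violates the string-valued schema
    'physical_network': None,  # None cannot be stored in the map at all
}
safe = stringify_other_config(port_other_config)
```

After coercion every remaining value is a string, so the transaction actually changes the row instead of being silently dropped with "Transaction caused no change".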

To manage notifications about this bug go to:
https://bugs.launchpad.net/bgpvpn/+bug/1630920/+subscriptions



[Yahoo-eng-team] [Bug 1082248] Re: Use uuidutils instead of uuid.uuid4()

2016-11-17 Thread qinchunhua
** Also affects: watcher
   Importance: Undecided
   Status: New

** Changed in: watcher
   Status: New => In Progress

** Changed in: watcher
 Assignee: (unassigned) => qinchunhua (qin-chunhua)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1082248

Title:
  Use uuidutils instead of uuid.uuid4()

Status in Cinder:
  In Progress
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Ironic:
  Fix Released
Status in ironic-python-agent:
  Fix Released
Status in kuryr:
  In Progress
Status in kuryr-libnetwork:
  In Progress
Status in Magnum:
  In Progress
Status in Mistral:
  Fix Released
Status in Murano:
  In Progress
Status in networking-calico:
  In Progress
Status in networking-ovn:
  Fix Released
Status in networking-sfc:
  In Progress
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in python-muranoclient:
  In Progress
Status in Sahara:
  Fix Released
Status in senlin:
  Fix Released
Status in tacker:
  In Progress
Status in watcher:
  In Progress

Bug description:
  OpenStack common has a wrapper for generating UUIDs.

  For consistency, we should use only that function when generating
  UUIDs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1082248/+subscriptions



[Yahoo-eng-team] [Bug 1414947] Re: instance's root_gb is 0, but the actual root_gb size is not 0.

2016-11-17 Thread Charlotte Han
** Changed in: nova
   Status: Won't Fix => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1414947

Title:
  instance's root_gb is 0, but the actual root_gb size is not 0.

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  1. I have a flavor named 'disk0' whose disk size is 0.
  [root@opencos114-222 ~(keystone_admin)]# nova flavor-list
  +--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
  | ID                                   | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
  +--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
  | 1                                    | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
  | 2                                    | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
  | 3                                    | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
  | 4                                    | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
  | 41ef4850-14a6-424a-81b6-99ba0edbec52 | disk0     | 100       | 0    | 0         |      | 1     | 1.0         | True      |
  | 5                                    | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
  +--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+

  2. I use the disk0 flavor to boot an instance; the instance's uuid is 9fb1389d-5d1b-45ac-8a9d-437f69f88422.
  [root@opencos114-222 ~(keystone_admin)]# nova boot --image 66d2336c-31d2-495a-833d-6470b3d46263 --flavor 41ef4850-14a6-424a-81b6-99ba0edbec52 --nic net-id=9b68affa-f2f1-4d74-99d0-5b3d712846dc hanrong
  +--------------------------------------+-----------------------------------------------+
  | Property                             | Value                                         |
  +--------------------------------------+-----------------------------------------------+
  | OS-DCF:diskConfig                    | MANUAL                                        |
  | OS-EXT-AZ:availability_zone          | nova                                          |
  | OS-EXT-SRV-ATTR:host                 | -                                             |
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                             |
  | OS-EXT-SRV-ATTR:instance_name        | instance-0015                                 |
  | OS-EXT-STS:power_state               | 0                                             |
  | OS-EXT-STS:task_state                | scheduling                                    |
  | OS-EXT-STS:vm_state                  | building                                      |
  | OS-SRV-USG:launched_at               | -                                             |
  | OS-SRV-USG:terminated_at             | -                                             |
  | accessIPv4                           |                                               |
  | accessIPv6                           |                                               |
  | adminPass                            | Q5iUAxTW5r29                                  |
  | config_drive                         |                                               |
  | created                              | 2015-03-28T08:38:38Z                          |
  | flavor                               | disk0 (41ef4850-14a6-424a-81b6-99ba0edbec52)  |
  | hostId                               |                                               |
  | id                                   | 9fb1389d-5d1b-45ac-8a9d-437f69f88422          |
  | image                                | cirror (66d2336c-31d2-495a-833d-6470b3d46263) |
  | key_name                             | -                                             |
  | metadata                             | {}                                            |
  | name                                 | hanrong                                       |
  | os-extended-volumes:volumes_attached | []                                            |
  | progress                             | 0                                             |
  | security_groups                      | default                                       |
  | serial_type                          | file                                          |
  | status                               | BUILD                                         |
  | tenant_id                            | 94d1a1b3260648f4be6bc423fab73bfa              |
  | updated                              | 2015-03-28T

[Yahoo-eng-team] [Bug 1582822] Re: PCI-PT : SRIOV enabled interface dev_name in pci whitelist does not give the product_id of PF for the direct-physical_network , it always take the VF's product_id

2016-11-17 Thread Vladik Romanovsky
*** This bug is a duplicate of bug 1613434 ***
https://bugs.launchpad.net/bugs/1613434

** This bug has been marked a duplicate of bug 1613434
   Whitelisted PFs aren't being recognized

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1582822

Title:
  PCI-PT : SRIOV enabled interface dev_name in pci whitelist does not
  give the product_id of PF for the direct-physical_network , it always
  take the VF's product_id

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  I wanted to boot a direct-physical port on an SR-IOV enabled device and
  use it as a PCI device rather than an SR-IOV device.

  Steps to reproduce :
  1) Create a direct-physical port and Boot a VM 
  neutron port-create n5 --binding:vnic-type direct-physical

  
  Nova.conf 

  pci_passthrough_whitelist = {"devname":"em49","physical_network":"physnet1","dev_type": "type-PF"}
  pci_passthrough_whitelist = {"devname":"em50","physical_network":"physnet2","dev_type": "type-VF"}


  n-cpu.log

  pci_stats details

  [PciDevicePool(count=7,numa_node=None,product_id='10ed',tags={dev_type='type-VF',physical_network='physnet2'},vendor_id='8086'),
   PciDevicePool(count=1,numa_node=None,product_id='10ed',tags={dev_type='type-PF',physical_network='physnet1'},vendor_id='8086')]

  
  stack@ubuntu:/opt/stack/logs$ lspci -nn | grep Eth
  02:00.0 Ethernet controller [0200]: Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet PCIe [14e4:1657] (rev 01)
  02:00.1 Ethernet controller [0200]: Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet PCIe [14e4:1657] (rev 01)
  02:00.2 Ethernet controller [0200]: Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet PCIe [14e4:1657] (rev 01)
  02:00.3 Ethernet controller [0200]: Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet PCIe [14e4:1657] (rev 01)
  04:00.0 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)
  04:00.1 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)
  04:10.0 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
  04:10.1 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
  04:10.2 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
  04:10.3 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
  04:10.4 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
  04:10.5 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
  04:10.6 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
  04:10.7 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
  04:11.0 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
  04:11.1 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
  04:11.2 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
  04:11.3 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
  04:11.4 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
  04:11.5 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)

  
  stack@ubuntu:/opt/stack/logs$ neutron port-show 449f818e-b909-48fe-9c91-9216f518f379
  +-----------------------+------------------------------------------------------------------------------------------+
  | Field                 | Value                                                                                    |
  +-----------------------+------------------------------------------------------------------------------------------+
  | admin_state_up        | True                                                                                     |
  | allowed_address_pairs |                                                                                          |
  | binding:host_id       | ubuntu                                                                                   |
  | binding:profile       | {"pci_slot": ":04:11.4", "physical_network": "physnet1", "pci_vendor_info": "8086:10ed"} |
  | binding:vif_details   | {"port_filter": false, "vlan": "1412"}                                                   |
  | binding:vif_type      | h

[Yahoo-eng-team] [Bug 1642770] [NEW] Security group code is doing unnecessary work removing chains

2016-11-17 Thread Brian Haley
Public bug reported:

The security group code is generating a lot of these messages when
trying to boot VMs:

Attempted to remove chain sg-chain which does not exist

There are also ones specific to the port.  It seems to be calling
remove_chain() even when it's a new port that is initially setting up
its filter.  I dropped a print_stack() in remove_chain() and see
tracebacks like this:

Prepare port filter for e8f41910-c24e-41f1-ae7f-355e9bb1d18a _apply_port_filter 
/opt/stack/neutron/neutron/agent/securitygroups_rpc.py:163
Preparing device (e8f41910-c24e-41f1-ae7f-355e9bb1d18a) filter 
prepare_port_filter 
/opt/stack/neutron/neutron/agent/linux/iptables_firewall.py:170
Attempted to remove chain sg-chain which does not exist remove_chain 
/opt/stack/neutron/neutron/agent/linux/iptables_manager.py:177
  File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 
214, in main
result = function(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/ryu/lib/hub.py", line 54, in 
_launch
return func(*args, **kwargs)
  File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_ryuapp.py",
 line 37, in agent_main_wrapper
ovs_agent.main(bridge_classes)
  File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 2177, in main
agent.daemon_loop()
  File "/usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py", line 
154, in wrapper
return f(*args, **kwargs)
  File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 2098, in daemon_loop
self.rpc_loop(polling_manager=pm)
  File "/usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py", line 
154, in wrapper
return f(*args, **kwargs)
  File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 2049, in rpc_loop
port_info, ovs_restarted)
  File "/usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py", line 
154, in wrapper
return f(*args, **kwargs)
  File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 1657, in process_network_ports
port_info.get('updated', set()))
  File "/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 266, in 
setup_port_filters
self.prepare_devices_filter(new_devices)
  File "/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 131, in 
decorated_function
*args, **kwargs)
  File "/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 139, in 
prepare_devices_filter
self._apply_port_filter(device_ids)
  File "/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 164, in 
_apply_port_filter
self.firewall.prepare_port_filter(device)
  File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
self.gen.next()
  File "/opt/stack/neutron/neutron/agent/firewall.py", line 139, in defer_apply
self.filter_defer_apply_off()
  File "/opt/stack/neutron/neutron/agent/linux/iptables_firewall.py", line 838, 
in filter_defer_apply_off
self._pre_defer_unfiltered_ports)
  File "/opt/stack/neutron/neutron/agent/linux/iptables_firewall.py", line 248, 
in _remove_chains_apply
self._remove_chain_by_name_v4v6(SG_CHAIN)
  File "/opt/stack/neutron/neutron/agent/linux/iptables_firewall.py", line 279, 
in _remove_chain_by_name_v4v6
self.iptables.ipv4['filter'].remove_chain(chain_name)
  File "/opt/stack/neutron/neutron/agent/linux/iptables_manager.py", line 178, 
in remove_chain
traceback.print_stack()

Looking at the code, there are a couple of interesting things:

1) prepare_port_filter() calls self._remove_chains() - why?
2) in the "defer" case above we always do _remove_chains_apply()/_setup_chains_apply() - is there some way to skip the remove?

This also led to us timing how long it's taking in the remove_chain()
code, since that's where the message is getting printed.  As the number
of ports and rules grows, it's spending more time spinning through chains
and rules.  It looks like that can be helped with a small code change,
which is just fallout from the real problem.  I'll send that out since
it helps a little.

More work still required.
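The "small code change" hinted at is essentially an early return when the chain isn't tracked, so remove_chain() neither logs nor walks all chains and rules. A toy sketch of that shape (the class and bookkeeping here are illustrative, not neutron's IptablesManager):

```python
class ChainManager(object):
    """Toy stand-in for the iptables manager's chain bookkeeping,
    showing an early-return remove_chain() (illustrative only)."""

    def __init__(self):
        self.chains = {}  # chain name -> list of rules

    def add_chain(self, name):
        self.chains.setdefault(name, [])

    def remove_chain(self, name):
        if name not in self.chains:
            # Nothing to do: skip the (potentially large) scan of
            # chains and rules instead of logging and walking them.
            return False
        del self.chains[name]
        return True
```

Making the absent-chain case a cheap no-op addresses the time spent per call; it doesn't answer the larger question of why prepare_port_filter() removes chains for brand-new ports at all.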

** Affects: neutron
 Importance: Undecided
 Assignee: Brian Haley (brian-haley)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1642770

Title:
  Security group code is doing unnecessary work removing chains

Status in neutron:
  New

Bug description:
  The security group code is generating a lot of these messages when
  trying to boot VMs:

  Attempted to remove chain sg-chain which does not exist

  There's also ones specific to the port.  It seems to be calling
  remove_chain(), even when it's a new port and it's initially setting
  up it's filter.  I dropped a print_stack() in remove_chain() and see
  tracebacks lik

[Yahoo-eng-team] [Bug 1642764] [NEW] db_add() as part of transaction in new object fails native interface

2016-11-17 Thread Terry Wilson
Public bug reported:

When doing something like:

with self.ovsdb.transaction() as txn:
    txn.add(self.ovsdb.add_br(self.br_name,
                              datapath_type=self.datapath_type))
    txn.add(self.ovsdb.db_add('Bridge', self.br_name,
                              'protocols', constants.OPENFLOW10))

the native interface fails due to the 'protocols' column not yet
existing on the temporary ovsdb object.
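The failure mode and the obvious workaround (commit the bridge creation before queuing commands that reference its columns) can be modeled with a toy transaction API. Everything below is a mock built only to illustrate ordering; it is not the real neutron.agent.ovsdb interface:

```python
class FakeTxn(object):
    """Collects commands and applies them on successful exit."""

    def __init__(self, api):
        self.api = api
        self.cmds = []

    def add(self, cmd):
        self.cmds.append(cmd)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:
            for cmd in self.cmds:
                cmd(self.api)


class FakeAPI(object):
    """Toy stand-in for the ovsdb API used in the report."""

    def __init__(self):
        self.bridges = {}

    def transaction(self):
        return FakeTxn(self)

    def add_br(self, name, datapath_type=None):
        def cmd(api):
            api.bridges.setdefault(
                name, {'datapath_type': datapath_type, 'protocols': []})
        return cmd

    def db_add(self, table, record, column, value):
        # Like the native interface, resolve the record when the command
        # is *queued*: if the bridge only exists as an earlier, not yet
        # committed command in the same transaction, this lookup fails.
        row = self.bridges[record]

        def cmd(api):
            row[column].append(value)
        return cmd


# Workaround sketch: commit the bridge in its own transaction first,
# then reference its columns in a second one.
ovsdb = FakeAPI()
with ovsdb.transaction() as txn:
    txn.add(ovsdb.add_br('br-int', datapath_type='system'))
with ovsdb.transaction() as txn:
    txn.add(ovsdb.db_add('Bridge', 'br-int', 'protocols', 'OpenFlow10'))
```

Queuing both commands in one transaction raises at db_add() time in this mock, mirroring the reported failure; splitting the work across two transactions succeeds.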

** Affects: neutron
 Importance: Undecided
 Assignee: Terry Wilson (otherwiseguy)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1642764

Title:
  db_add() as part of transaction in new object fails native interface

Status in neutron:
  In Progress

Bug description:
  When doing something like:

  with self.ovsdb.transaction() as txn:
      txn.add(self.ovsdb.add_br(self.br_name,
                                datapath_type=self.datapath_type))
      txn.add(self.ovsdb.db_add('Bridge', self.br_name,
                                'protocols', constants.OPENFLOW10))

  the native interface fails due to the 'protocols' column not yet
  existing on the temporary ovsdb object.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1642764/+subscriptions



[Yahoo-eng-team] [Bug 1503686] Re: unable to update enable_snat using router-update command

2016-11-17 Thread Armando Migliaccio
neutronclient development is frozen. If you want this, target the OSC
client. I know we still have a pending patch to allow us to set the
gateway [1]. So this, to me, is a WONTFIX.

[1] https://review.openstack.org/#/c/357973/

** Changed in: python-neutronclient
   Status: Confirmed => Won't Fix

** No longer affects: neutron

** Tags removed: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1503686

Title:
  unable to update enable_snat using router-update command

Status in python-neutronclient:
  Won't Fix

Bug description:
  Currently enable_snat is allowed only when setting a gateway.

  $ neutron router-gateway-set   --disable-net
  $ neutron router-gateway-set   --enable-net

  There should be provision to set this flag with update command too.
  Like
  $ neutron router-update --enable-snat
  $ neutron router-update --disable-snat

  
  On Neutron, with the below command:
  curl -g -i -X PUT http://10.0.4.130:9696/v2.0/routers/deecfcf8-6a4d-494d-938e-515f5c9d5885.json -H "User-Agent: python-neutronclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: b964f5aed06147efa06d27392db4f4f4" -d '{"router": {"external_gateway_info": {"enable_snat": false}}}'

  Actual Response:
  HTTP/1.1 400 Bad Request
  Content-Length: 234
  Content-Type: application/json; charset=UTF-8
  X-Openstack-Request-Id: req-ac54539c-74eb-4fc1-8eac-339c928c69a6
  Date: Wed, 07 Sep 2016 08:31:22 GMT

  {"NeutronError": {"message": "Invalid input for external_gateway_info.
  Reason: Validation of dictionary's keys failed. Expected keys:
  set(['network_id']) Provided keys: set([u'enable_snat'])."

  Expected Response : That the external_gateway_info would have the SNAT
  disabled, even without the gateway network ID

  
  In other words, the expectation is that the user should be allowed to
  enable/disable SNAT independently when the external gateway network ID
  is already set; if it is not set, the request should be rejected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1503686/+subscriptions



[Yahoo-eng-team] [Bug 1378904] Re: renaming availability zone doesn't modify host's availability zone

2016-11-17 Thread JK
CONFIRMED FOR: MITAKA
$ nova-manage --version
13.0.0

# Reproduce Steps:
1) Create a HA/AZ.  ie., "AZ1"
2) Add compute nodes to "AZ1" (Admin->System->Host Aggregates->Manage Hosts)
3) Launch VM in this AZ.
4) Live migrate/migrate VM - will succeed
5) Create a new HA/AZ.  ie., "AZ2"
6) Remove compute nodes from "AZ1"
7) Add compute nodes to "AZ2"
8) Try to migrate VM

Fails with ERROR: Error: No valid host was found. There are not enough
hosts available. compute-1: (AvailabilityZoneFilter) avail zone az1 not
in host AZ: set([u'az2'])

# nova-scheduler.log
2016-11-17 21:08:38.690 168453 INFO nova.filters 
[req-e9cede77-e888-4553-83d6-4e112a8e44a7 59d4a769c88545acb86f646b2464f4d1 
93dd4afc2ddb4bfd88d8b5d13d348998 - - -] AvailabilityZoneFilter: (compute-1) 
REJECT: avail zone az1 not in host AZ: set([u'az2'])

# nova show  displays correct AZ for VM
| OS-EXT-AZ:availability_zone  | az2   

# however nova --debug list displays in the RESP BODY:
"OS-EXT-AZ:availability_zone": "az1"

# check the VM in DB, availability_zone is still listed as 'az1' as
well.


** Changed in: nova
   Status: Expired => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1378904

Title:
  renaming availability zone doesn't modify host's availability zone

Status in OpenStack Compute (nova):
  New

Bug description:
  Hi,

  After renaming our availability zones via the Horizon dashboard, we
  couldn't migrate any "old" instance anymore; the scheduler returned
  "No valid Host found"...

  After searching, we found in the nova DB `instances` table, the
  "availability_zone" field contains the name of the availability zone,
  instead of the ID ( or maybe it is intentional ;) ).

  So renaming AZ leaves the hosts created prior to this rename orphan
  and the scheduler cannot find any valid host for them...

  Our openstack install is on debian wheezy, with the icehouse
  "official" repository from archive.gplhost.com/debian/, up to date.

  If you need any more infos, I'd be glad to help.

  Cheers

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1378904/+subscriptions



[Yahoo-eng-team] [Bug 1611074] Re: Reformatting of ephemeral drive fails on resize of Azure VM

2016-11-17 Thread Scott Moser
** Changed in: cloud-init (Ubuntu)
   Status: Fix Released => Confirmed

** Changed in: cloud-init (Ubuntu Xenial)
   Status: Fix Committed => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1611074

Title:
  Reformatting of ephemeral drive fails on resize of Azure VM

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Confirmed
Status in cloud-init source package in Xenial:
  Confirmed

Bug description:
  === Begin SRU Template ===
  [Impact]
  In some cases, cloud-init writes entries to /etc/fstab, and on azure it will
  even format a disk for mounting and then write the entry for that 'ephemeral'
  disk there.

  A supported operation on Azure is to "resize" the system.  When you do this
  the system is shut down, resized (given larger/faster disks and more CPU) and
  then brought back up.  In that process, the "ephemeral" disk is re-initialized
  to its original NTFS format.  The design goal is for cloud-init to recognize
  this situation and re-format the disk to ext4.

  The problem is that the mount of that disk happens before cloud-init can
  reformat it.  That's because the entry in fstab has 'auto' and is automatically
  mounted.  The end result is that after the resize operation the user will be
  left with the ephemeral disk mounted at /mnt with an NTFS filesystem rather
  than ext4.

  [Test Case]
  The text in comment 3 describes how the original reporter reproduced the issue.
  Another way to do this is to just re-format the ephemeral disk as
  ntfs and then reboot.  The result *should* be that after reboot it
  comes back up and has an ext4 filesystem on it.

  1.) boot system on azure
    (for this, i use https://gist.github.com/smoser/5806147, but you can
     use web ui or any other way).
 Save output of
   journalctl --no-pager > journalctl.orig
   systemctl status --no-pager > systemctl-status.orig
   systemctl --no-pager > systemctl.orig

  2.) unmount the ephemeral disk
     $ umount /mnt

  3.) repartition it so that mkfs.ntfs does less and is faster
     This is not strictly necessary, but mkfs.ntfs can take upwards of
     20 minutes.  shrinking /dev/sdb2 to be 200M means it will finish
     in < 1 minute.

     $ disk=/dev/disk/cloud/azure_resource
     $ part=/dev/disk/cloud/azure_resource-part1
     $ echo "2048,$((2*1024*100)),7" | sudo sfdisk "$disk"
     $ time mkfs.ntfs --quick "$part"

  4.) reboot
  5.) expect that /proc/mounts has /dev/disk/cloud/azure_resource-part1 as ext4
  and that fstab has x-systemd.requires in it.

  $ awk '$2 == "/mnt" { print $0 }' /proc/mounts
  /dev/sdb1 /mnt ext4 rw,relatime,data=ordered 0 0

  $ awk '$2 == "/mnt" { print $0 }' /etc/fstab
  /dev/sdb1 /mnt auto 
defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig 0 2

  6.) collect journal and systemctl information as described in step 1 above.
  Compare output, specifically looking for case insensitve "breaks"

  [Regression Potential]
  Regression is unlikely.  The likely failure case is just that the problem is
  not correctly fixed, and the user ends up with either an NTFS-formatted disk
  mounted at /mnt or nothing mounted at /mnt.

  === End SRU Template ===

  After resizing a 16.04 VM on Azure, the VM is presented with a new
  ephemeral drive (of a different size), which initially is NTFS
  formatted. Cloud-init tries to format the appropriate partition ext4,
  but fails because it is mounted. Cloud-init has unmount logic for
  exactly this case in the get_data call on the Azure data source, but
  this is never called because fresh cache is found.
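A minimal sketch of the control flow described above, assuming illustrative names (not cloud-init's real API): when a trusted cache is found, the get_data() path that contains the disk-preparation logic is skipped entirely.

```python
class DataSourceAzure:
    def __init__(self):
        self.prepared_disk = False

    def get_data(self):
        # In the real Azure source, this is where the ephemeral disk is
        # unmounted and marked for reformatting before use.
        self.prepared_disk = True
        return True

def init_datasource(cached=None):
    # The "restored from cache: DataSourceAzureNet" path returns the
    # pickled object without ever calling get_data().
    if cached is not None:
        return cached
    ds = DataSourceAzure()
    ds.get_data()
    return ds

fresh = init_datasource()
after_resize = init_datasource(cached=DataSourceAzure())
print(fresh.prepared_disk, after_resize.prepared_disk)  # True False
```

So after a resize presents a fresh NTFS disk, the unmount logic never runs, and the later mkfs.ext4 fails against a mounted partition.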

  Jun 27 19:07:47 azubuntu1604arm [CLOUDINIT] handlers.py[DEBUG]: start: 
init-network/check-cache: attempting to read from cache [trust]
  Jun 27 19:07:47 azubuntu1604arm [CLOUDINIT] util.py[DEBUG]: Reading from 
/var/lib/cloud/instance/obj.pkl (quiet=False)
  Jun 27 19:07:47 azubuntu1604arm [CLOUDINIT] util.py[DEBUG]: Read 5950 bytes 
from /var/lib/cloud/instance/obj.pkl
  Jun 27 19:07:47 azubuntu1604arm [CLOUDINIT] stages.py[DEBUG]: restored from 
cache: DataSourceAzureNet [seed=/dev/sr0]
  Jun 27 19:07:47 azubuntu1604arm [CLOUDINIT] handlers.py[DEBUG]: finish: 
init-network/check-cache: SUCCESS: restored from cache: DataSourceAzureNet 
[seed=/dev/sr0]
  ...
  Jun 27 19:07:48 azubuntu1604arm [CLOUDINIT] cc_disk_setup.py[DEBUG]: Creating 
file system None on /dev/sdb1
  Jun 27 19:07:48 azubuntu1604arm [CLOUDINIT] cc_disk_setup.py[DEBUG]:  
Using cmd: /sbin/mkfs.ext4 /dev/sdb1
  Jun 27 19:07:48 azubuntu1604arm [CLOUDINIT] util.py[DEBUG]: Running command 
['/sbin/mkfs.ext4', '/dev/sdb1'] with allowed return codes [0] (shell=False, 
capture=True)
  Jun 27 19:07:48 azubuntu1604arm [CLOUDINIT] util.py[DEBUG]: Creating fs for 
/dev/disk/cloud/azure_resource took 0.052 seconds
  Jun 27 19:07:48 azubuntu1604arm [CLOUDINIT] u

[Yahoo-eng-team] [Bug 1642729] [NEW] instances.updated_at should be indexed for "nova list --changes-since" queries

2016-11-17 Thread Matt Riedemann
Public bug reported:

As noted in this spec:

https://review.openstack.org/#/c/393205/6/specs/ocata/approved/add-
whitelist-for-server-list-filter-sort-parameters.rst

We should have an index on the updated_at column of the instances table
because that's the column used to filter instances when using the
--changes-since filter parameter with nova list.

** Affects: nova
 Importance: Wishlist
 Status: Confirmed


** Tags: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1642729

Title:
  instances.updated_at should be indexed for "nova list --changes-since"
  queries

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  As noted in this spec:

  https://review.openstack.org/#/c/393205/6/specs/ocata/approved/add-
  whitelist-for-server-list-filter-sort-parameters.rst

  We should have an index on the updated_at column of the instances
  table because that's the column used to filter instances when using
  the --changes-since filter parameter with nova list.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1642729/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1604397] Re: [SRU] python-swiftclient is missing in requirements.txt (for glare)

2016-11-17 Thread Launchpad Bug Tracker
This bug was fixed in the package python-glance-store -
0.18.0-0ubuntu1.1

---
python-glance-store (0.18.0-0ubuntu1.1) yakkety; urgency=medium

  [ Corey Bryant ]
  * d/control: Add run-time dependency for python-swiftclient (LP: #1604397).
  * d/p/drop-enum34.patch: Fix python3 test failures.

  [ Thomas Goirand ]
  * Fixed enum34 runtime depends.

 -- Corey Bryant   Thu, 03 Nov 2016 15:18:12
-0400

** Changed in: python-glance-store (Ubuntu Yakkety)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1604397

Title:
  [SRU] python-swiftclient is missing in requirements.txt (for glare)

Status in Ubuntu Cloud Archive:
  Fix Committed
Status in Ubuntu Cloud Archive newton series:
  Fix Committed
Status in Glance:
  New
Status in python-glance-store package in Ubuntu:
  Fix Released
Status in python-glance-store source package in Yakkety:
  Fix Released
Status in python-glance-store source package in Zesty:
  Fix Released

Bug description:
  [Description]
  [Test Case]
  I'm using UCA glance packages (version "13.0.0~b1-0ubuntu1~cloud0").
  And I've got this error:
  <30>Jul 18 16:03:45 node-2 glance-glare[17738]: ERROR: Store swift could not 
be configured correctly. Reason: Missing dependency python_swiftclient.

  Installing "python-swiftclient" fixes the problem.

  In master
  (https://github.com/openstack/glance/blob/master/requirements.txt)
  package "python-swiftclient" is not included in requirements.txt. So
  UCA packages don't have proper dependencies.

  I think requirements.txt should be updated (add python-swiftclient
  there). This change should affect UCA packages.
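A hedged sketch of the optional-dependency pattern behind the error above: the import only happens when the swift store is configured, so a missing requirements.txt entry goes unnoticed until runtime. Names are illustrative, not glance_store's actual code.

```python
import importlib

def configure_swift_store(dependency="swiftclient"):
    # Optional backend: the ImportError surfaces only when the swift
    # store is actually configured, not at install time.
    try:
        return importlib.import_module(dependency)
    except ImportError:
        raise RuntimeError(
            "Store swift could not be configured correctly. "
            "Reason: Missing dependency %s." % dependency)

try:
    configure_swift_store("python_swiftclient_is_not_installed")
except RuntimeError as exc:
    print(exc)
```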

  [Regression Potential]
  Minimal as this just adds a new dependency.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1604397/+subscriptions



[Yahoo-eng-team] [Bug 1627044] Re: Last chance call to neutron if VIF plugin notification is lost

2016-11-17 Thread Isaku Yamahata
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1627044

Title:
  Last chance call to neutron if VIF plugin notification is lost

Status in neutron:
  New
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  While spawning a new VM, Nova waits for an event from Neutron that the
  VM's port is configured. In some cases the Neutron event is lost (e.g.
  a RabbitMQ issue), and if vif_plugging_is_fatal=True (the default) the
  instance is set to the ERROR state. This happens even if the port is
  in fact ACTIVE on the Neutron side and everything would work fine.

  This workflow could be improved by calling Neutron before failing:
  Nova could check the real state of each port in Neutron just before setting the
instance to ERROR (if at least one port is not ACTIVE).
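The proposed "last chance" flow can be sketched as below; function names and statuses are illustrative, not Nova's actual API.

```python
def finish_spawn(vif_event_received, port_status, vif_plugging_is_fatal=True):
    if vif_event_received:
        return "ACTIVE"
    # Last-chance call: ask Neutron for the port's real state before
    # failing the instance over a lost notification.
    if port_status() == "ACTIVE":
        return "ACTIVE"
    return "ERROR" if vif_plugging_is_fatal else "ACTIVE"

# Event lost in transit, but the port is actually up:
print(finish_spawn(False, lambda: "ACTIVE"))   # ACTIVE rather than ERROR
# Event lost and the port really is down:
print(finish_spawn(False, lambda: "DOWN"))     # ERROR
```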

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1627044/+subscriptions



[Yahoo-eng-team] [Bug 1642692] [NEW] Protocol can't be deleted after federated_user is created

2016-11-17 Thread Rodrigo Duarte
Public bug reported:

When authenticating a user via federation, a federated_user entry is
created in keystone's database, an example of such entry is below:

mysql> select * from federated_user;
++--+--+-+---+-+
| id | user_id  | idp_id   | protocol_id | unique_id
 | display_name|
++--+--+-+---+-+
|  1 | 15ddf8fda20842c68b9b6d91d1a7 | testshib | mapped  | 
myself%40testshib.org | mys...@testshib.org |
++--+--+-+---+-+

The federated_user_protocol_id foreign key prevents the protocol
deletion:

Details: An unexpected error prevented the server from fulfilling your
request: (pymysql.err.IntegrityError) (1451, u'Cannot delete or update a
parent row: a foreign key constraint fails (`keystone`.`federated_user`,
CONSTRAINT `federated_user_protocol_id_fkey` FOREIGN KEY (`protocol_id`,
`idp_id`) REFERENCES `federation_protocol` (`id`, `idp_id`))') [SQL:
u'DELETE FROM federation_protocol WHERE federation_protocol.id = %(id)s
AND federation_protocol.idp_id = %(idp_id)s'] [parameters: {'idp_id':
u'testshib', 'id': u'mapped'}]

This can also happen with the "idp_id" column.

This prevents automated tests like [1] from working properly, since
they create and destroy the identity provider, mapping, and protocol
during their execution.

[1] https://review.openstack.org/#/c/324769/
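The constraint can be reproduced in miniature with SQLite; this is a hedged sketch, the real schema lives in keystone's migrations. Deleting the parent federation_protocol row fails while a federated_user row still references it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute(
    "CREATE TABLE federation_protocol "
    "(id TEXT, idp_id TEXT, PRIMARY KEY (id, idp_id))")
conn.execute("""CREATE TABLE federated_user (
    id INTEGER PRIMARY KEY, protocol_id TEXT, idp_id TEXT,
    FOREIGN KEY (protocol_id, idp_id)
        REFERENCES federation_protocol (id, idp_id))""")
conn.execute("INSERT INTO federation_protocol VALUES ('mapped', 'testshib')")
conn.execute(
    "INSERT INTO federated_user (protocol_id, idp_id) "
    "VALUES ('mapped', 'testshib')")

try:
    conn.execute(
        "DELETE FROM federation_protocol "
        "WHERE id = 'mapped' AND idp_id = 'testshib'")
except sqlite3.IntegrityError as exc:
    print("blocked:", exc)  # FOREIGN KEY constraint failed
```

Possible fixes include cascading the delete or clearing the child rows first; which is appropriate is a keystone design decision.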

** Affects: keystone
 Importance: Undecided
 Assignee: Ron De Rose (ronald-de-rose)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1642692

Title:
  Protocol can't be deleted after federated_user is created

Status in OpenStack Identity (keystone):
  New

Bug description:
  When authenticating a user via federation, a federated_user entry is
  created in keystone's database, an example of such entry is below:

  mysql> select * from federated_user;
  
++--+--+-+---+-+
  | id | user_id  | idp_id   | protocol_id | unique_id  
   | display_name|
  
++--+--+-+---+-+
  |  1 | 15ddf8fda20842c68b9b6d91d1a7 | testshib | mapped  | 
myself%40testshib.org | mys...@testshib.org |
  
++--+--+-+---+-+

  The federated_user_protocol_id foreign key prevents the protocol
  deletion:

  Details: An unexpected error prevented the server from fulfilling your
  request: (pymysql.err.IntegrityError) (1451, u'Cannot delete or update
  a parent row: a foreign key constraint fails
  (`keystone`.`federated_user`, CONSTRAINT
  `federated_user_protocol_id_fkey` FOREIGN KEY (`protocol_id`,
  `idp_id`) REFERENCES `federation_protocol` (`id`, `idp_id`))') [SQL:
  u'DELETE FROM federation_protocol WHERE federation_protocol.id =
  %(id)s AND federation_protocol.idp_id = %(idp_id)s'] [parameters:
  {'idp_id': u'testshib', 'id': u'mapped'}]

  This can also happen with the "idp_id" column.

  This prevents automated tests like [1] from working properly, since
  they create and destroy the identity provider, mapping, and protocol
  during their execution.

  [1] https://review.openstack.org/#/c/324769/

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1642692/+subscriptions



[Yahoo-eng-team] [Bug 1642687] [NEW] Missing domain for federated users

2016-11-17 Thread Ron De Rose
Public bug reported:

When creating federated users, as part of shadowing users, the user's
domain_id is not set, leaving it null in the user table. An Identity
Provider (IdP) should be mapped to a domain and users from that IdP
should be created within that domain.

** Affects: keystone
 Importance: Undecided
 Assignee: Ron De Rose (ronald-de-rose)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Ron De Rose (ronald-de-rose)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1642687

Title:
  Missing domain for federated users

Status in OpenStack Identity (keystone):
  New

Bug description:
  When creating federated users, as part of shadowing users, the user's
  domain_id is not set, leaving it null in the user table. An Identity
  Provider (IdP) should be mapped to a domain and users from that IdP
  should be created within that domain.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1642687/+subscriptions



[Yahoo-eng-team] [Bug 1642689] [NEW] ceph: volume detach fails with "libvirtError: operation failed: disk vdb not found"

2016-11-17 Thread Matt Riedemann
Public bug reported:

Seeing this failure in this job:

tempest.api.volume.test_volumes_snapshots.VolumesV1SnapshotTestJSON.test_snapshot_create_with_volume_in_use[compute
,id-b467b54c-07a4-446d-a1cf-651dedcc3ff1]

While detaching a volume on cleanup it times out waiting for the volume
status to go from 'in-use' back to 'available':

2016-11-17 11:58:59.110301 | Captured traceback:
2016-11-17 11:58:59.110310 | ~~~
2016-11-17 11:58:59.110322 | Traceback (most recent call last):
2016-11-17 11:58:59.110342 |   File "tempest/common/waiters.py", line 189, 
in wait_for_volume_status
2016-11-17 11:58:59.110356 | raise lib_exc.TimeoutException(message)
2016-11-17 11:58:59.110373 | tempest.lib.exceptions.TimeoutException: 
Request timed out
2016-11-17 11:58:59.110405 | Details: Volume 
db12eda4-4ce6-4f00-a4e0-9df115f230e5 failed to reach available status (current 
in-use) within the required time (196 s).
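The timeout above comes from a simple status-polling waiter; a minimal sketch of the pattern, with illustrative names rather than tempest's actual code:

```python
def wait_for_volume_status(get_status, wanted, max_polls=5):
    # Poll the volume status until it matches, or give up after a
    # bounded number of polls (tempest bounds by wall-clock time).
    status = None
    for _ in range(max_polls):
        status = get_status()
        if status == wanted:
            return status
    raise TimeoutError(
        "Volume failed to reach %s status (current %s) "
        "within the required time" % (wanted, status))

# The detach never completes server-side, so every poll sees 'in-use':
try:
    wait_for_volume_status(lambda: "in-use", "available")
except TimeoutError as exc:
    print(exc)
```

In this bug the waiter behaves correctly; the underlying detach is what never finishes, as the compute traceback below shows.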

The volume detach request is here:

2016-11-17 11:58:59.031058 | 2016-11-17 11:38:55,018 8316 INFO 
[tempest.lib.common.rest_client] Request 
(VolumesV1SnapshotTestJSON:_run_cleanups): 202 DELETE 
http://127.0.0.1:8774/v2.1/servers/584a65b5-07fa-4994-a2d5-1676d0e13a8c/os-volume_attachments/db12eda4-4ce6-4f00-a4e0-9df115f230e5
 0.277s
2016-11-17 11:58:59.031103 | 2016-11-17 11:38:55,018 8316 DEBUG
[tempest.lib.common.rest_client] Request - Headers: {'X-Auth-Token': 
'', 'Accept': 'application/json', 'Content-Type': 'application/json'}
2016-11-17 11:58:59.031113 | Body: None
2016-11-17 11:58:59.031235 | Response - Headers: {'content-location': 
'http://127.0.0.1:8774/v2.1/servers/584a65b5-07fa-4994-a2d5-1676d0e13a8c/os-volume_attachments/db12eda4-4ce6-4f00-a4e0-9df115f230e5',
 'content-type': 'application/json', 'x-openstack-nova-api-version': '2.1', 
'date': 'Thu, 17 Nov 2016 11:38:55 GMT', 'content-length': '0', 'status': 
'202', 'connection': 'close', 'x-compute-request-id': 
'req-9f0541d3-6eec-4793-8852-7bd01708932e', 'openstack-api-version': 'compute 
2.1', 'vary': 'X-OpenStack-Nova-API-Version'}
2016-11-17 11:58:59.031248 | Body: 

Following the req-9f0541d3-6eec-4793-8852-7bd01708932e request ID to the
compute logs we see this detach failure:

http://logs.openstack.org/00/398800/1/gate/gate-tempest-dsvm-full-
devstack-plugin-ceph-ubuntu-
xenial/a387fb0/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-11-17_11_39_00_649

2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager 
[req-9f0541d3-6eec-4793-8852-7bd01708932e 
tempest-VolumesV1SnapshotTestJSON-1819335716 
tempest-VolumesV1SnapshotTestJSON-1819335716] [instance: 
584a65b5-07fa-4994-a2d5-1676d0e13a8c] Failed to detach volume 
db12eda4-4ce6-4f00-a4e0-9df115f230e5 from /dev/vdb
2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager [instance: 
584a65b5-07fa-4994-a2d5-1676d0e13a8c] Traceback (most recent call last):
2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager [instance: 
584a65b5-07fa-4994-a2d5-1676d0e13a8c]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 4757, in 
_driver_detach_volume
2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager [instance: 
584a65b5-07fa-4994-a2d5-1676d0e13a8c] encryption=encryption)
2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager [instance: 
584a65b5-07fa-4994-a2d5-1676d0e13a8c]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 1307, in detach_volume
2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager [instance: 
584a65b5-07fa-4994-a2d5-1676d0e13a8c] wait_for_detach()
2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager [instance: 
584a65b5-07fa-4994-a2d5-1676d0e13a8c]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_service/loopingcall.py", line 385, 
in func
2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager [instance: 
584a65b5-07fa-4994-a2d5-1676d0e13a8c] return evt.wait()
2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager [instance: 
584a65b5-07fa-4994-a2d5-1676d0e13a8c]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 121, in wait
2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager [instance: 
584a65b5-07fa-4994-a2d5-1676d0e13a8c] return hubs.get_hub().switch()
2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager [instance: 
584a65b5-07fa-4994-a2d5-1676d0e13a8c]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 294, in 
switch
2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager [instance: 
584a65b5-07fa-4994-a2d5-1676d0e13a8c] return self.greenlet.switch()
2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager [instance: 
584a65b5-07fa-4994-a2d5-1676d0e13a8c]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_service/loopingcall.py", line 136, 
in _run_loop
2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager [instance: 
584a65b5-07fa-4994-a2d5-1676d0e13a8c] result = func(*self.args, **self.kw)
2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager [instance: 
584a65b5-07fa-4994-a2

[Yahoo-eng-team] [Bug 1642681] [NEW] _process_instance_vif_deleted_event fails with "TypeError: detach_interface() takes exactly 4 arguments (3 given)"

2016-11-17 Thread Matt Riedemann
Public bug reported:

Seeing this in a gate job here:

http://logs.openstack.org/00/398800/1/gate/gate-tempest-dsvm-full-
devstack-plugin-ceph-ubuntu-
xenial/a387fb0/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-11-17_11_29_10_965

2016-11-17 11:29:10.965 2249 ERROR oslo_messaging.rpc.server 
[req-1e141c4e-e2eb-4b06-89b5-849ecf4d065d nova service] Exception during 
message handling
2016-11-17 11:29:10.965 2249 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
2016-11-17 11:29:10.965 2249 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
155, in _process_incoming
2016-11-17 11:29:10.965 2249 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
2016-11-17 11:29:10.965 2249 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
225, in dispatch
2016-11-17 11:29:10.965 2249 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
2016-11-17 11:29:10.965 2249 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
195, in _do_dispatch
2016-11-17 11:29:10.965 2249 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
2016-11-17 11:29:10.965 2249 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/nova/nova/exception_wrapper.py", line 75, in wrapped
2016-11-17 11:29:10.965 2249 ERROR oslo_messaging.rpc.server function_name, 
call_dict, binary)
2016-11-17 11:29:10.965 2249 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2016-11-17 11:29:10.965 2249 ERROR oslo_messaging.rpc.server 
self.force_reraise()
2016-11-17 11:29:10.965 2249 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-11-17 11:29:10.965 2249 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
2016-11-17 11:29:10.965 2249 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/nova/nova/exception_wrapper.py", line 66, in wrapped
2016-11-17 11:29:10.965 2249 ERROR oslo_messaging.rpc.server return f(self, 
context, *args, **kw)
2016-11-17 11:29:10.965 2249 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 6755, in 
external_instance_event
2016-11-17 11:29:10.965 2249 ERROR oslo_messaging.rpc.server event.tag)
2016-11-17 11:29:10.965 2249 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 6724, in 
_process_instance_vif_deleted_event
2016-11-17 11:29:10.965 2249 ERROR oslo_messaging.rpc.server 
self.driver.detach_interface(instance, vif)
2016-11-17 11:29:10.965 2249 ERROR oslo_messaging.rpc.server TypeError: 
detach_interface() takes exactly 4 arguments (3 given)
2016-11-17 11:29:10.965 2249 ERROR oslo_messaging.rpc.server 

http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22_process_instance_vif_deleted_event%5C%22%20AND%20message%3A%5C%22TypeError%3A%20detach_interface()%20takes%20exactly%204%20arguments%20(3%20given)%5C%22%20AND%20tags%3A%5C%22screen-n-cpu.txt%5C%22&from=7d

New regression as of 11/16: https://review.openstack.org/#/c/209362/

** Affects: nova
 Importance: High
 Assignee: Dan Smith (danms)
 Status: In Progress


** Tags: compute neutron

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
   Importance: Undecided => High

** Changed in: nova
 Assignee: (unassigned) => Dan Smith (danms)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1642681

Title:
  _process_instance_vif_deleted_event fails with "TypeError:
  detach_interface() takes exactly 4 arguments (3 given)"

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Seeing this in a gate job here:

  http://logs.openstack.org/00/398800/1/gate/gate-tempest-dsvm-full-
  devstack-plugin-ceph-ubuntu-
  xenial/a387fb0/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-11-17_11_29_10_965

  2016-11-17 11:29:10.965 2249 ERROR oslo_messaging.rpc.server 
[req-1e141c4e-e2eb-4b06-89b5-849ecf4d065d nova service] Exception during 
message handling
  2016-11-17 11:29:10.965 2249 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
  2016-11-17 11:29:10.965 2249 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
155, in _process_incoming
  2016-11-17 11:29:10.965 2249 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2016-11-17 11:29:10.965 2249 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
225, in dispatch
  2016-11-17 11:29:10.965 2249

[Yahoo-eng-team] [Bug 1642679] [NEW] The OpenStack network_config.json implementation fails on Hyper-V compute nodes

2016-11-17 Thread Adrian Vladu
Public bug reported:

We have discovered an issue when booting Xenial instances on OpenStack
environments (Liberty or newer) and Hyper-V compute nodes using config
drive as metadata source.

When applying the network_config.json, cloud-init fails with this error:
http://paste.openstack.org/show/RvHZJqn48JBb0TO9QznL/

The fix would be to add 'hyperv' as a link type here:
/usr/lib/python3/dist-packages/cloudinit/sources/helpers/openstack.py, line 587
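A hedged sketch of the link-type check involved; the constant name mirrors cloud-init's helper, but the membership list and conversion logic here are illustrative.

```python
KNOWN_PHYSICAL_TYPES = ("bridge", "ethernet", "ovs", "vif")  # 'hyperv' absent

def link_to_physical(link_type):
    # network_config.json links with an unrecognized type are rejected,
    # which is what aborts the whole network configuration on Hyper-V.
    if link_type not in KNOWN_PHYSICAL_TYPES:
        raise ValueError("Unknown network_data link type: %s" % link_type)
    return {"type": "physical"}

try:
    link_to_physical("hyperv")
except ValueError as exc:
    print(exc)

# The proposed fix is simply to accept 'hyperv' as a physical link type:
KNOWN_PHYSICAL_TYPES += ("hyperv",)
print(link_to_physical("hyperv"))
```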

** Affects: cloud-init
 Importance: Undecided
 Status: New


** Tags: hyper-v

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1642679

Title:
  The OpenStack network_config.json implementation fails on Hyper-V
  compute nodes

Status in cloud-init:
  New

Bug description:
  We have discovered an issue when booting Xenial instances on OpenStack
  environments (Liberty or newer) and Hyper-V compute nodes using config
  drive as metadata source.

  When applying the network_config.json, cloud-init fails with this error:
  http://paste.openstack.org/show/RvHZJqn48JBb0TO9QznL/

  The fix would be to add 'hyperv' as a link type here:
  /usr/lib/python3/dist-packages/cloudinit/sources/helpers/openstack.py, line 
587

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1642679/+subscriptions



[Yahoo-eng-team] [Bug 1642483] Re: Deploy is failing after deregistering infoblox due to IPAM agent

2016-11-17 Thread Armando Migliaccio
If the intention is to go back to the internal IPAM backend once you
have deployed pluggable IPAM with Infoblox, this is not a scenario that
we contemplated or are willing to support. I'll defer to the Infoblox
team for more details.

** Also affects: networking-infoblox
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => Incomplete

** Changed in: networking-infoblox
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1642483

Title:
  Deploy is failing after deregistering infoblox due to IPAM agent

Status in networking-infoblox:
  Incomplete
Status in neutron:
  Incomplete

Bug description:
  Deploy fails when a neutron network is created after registering
  Infoblox, and Infoblox is then unregistered before deploying a
  virtual machine.

  Error:
  Deploy of virtual machine xyz on host xxx failed with exception: Build of 
instance 8b9a1d54-9417-454b-bb4f-b94d0d89da5d was re-scheduled: Subnet 
0d08b30c-4eb8-4b2a-9bd0-0b4e0ba5d694 could not be found. Neutron server returns 
request_ids: ['req-9bc7b6c9-bc73-4234-9854-f5d091a52616']

  Although

  The subnet is existing:
  [root@abc# neutron subnet-list
  
+--+--++--+
  | id | name | cidr | allocation_pools |
  
+--+--++--+
  | 0d08b30c-4eb8-4b2a-9bd0-0b4e0ba5d694 | | 120.20.10.0/24 | {"start": 
"120.20.10.2", "end": "120.20.10.254"} |

  Steps:
  1.Registered Infoblox
  2.Created network net_test.

  [root@jupiter-vm951 powervc]# neutron net-show 
c51c8399-2b9e-4319-ab82-3dbbf46adfa8
  +---+--+
  | Field | Value |
  +---+--+
  | admin_state_up | True |
  | availability_zone_hints | |
  | availability_zones | |
  | created_at | 2016-11-11T10:15:59Z |
  | description | |
  | id | c51c8399-2b9e-4319-ab82-3dbbf46adfa8 |
  | ipv4_address_scope | |
  | ipv6_address_scope | |
  | mtu | 1500 |
  | name | net_test |
  | project_id | 38b6750413e742db97ffa854c2752848 |
  | provider:network_type | vlan |
  | provider:physical_network | default |
  | provider:segmentation_id | 1510 |
  | revision_number | 4 |
  | router:external | False |
  | shared | False |
  | status | ACTIVE |
  | subnets | 0d08b30c-4eb8-4b2a-9bd0-0b4e0ba5d694 |
  | tags | |
  | tenant_id | 38b6750413e742db97ffa854c2752848 |
  | updated_at | 2016-11-11T10:16:01Z |
  +---+--+

  3.Unregistered infoblox and waited.
  4.Tried deploy of virtual machine xyz.
  Deploy of virtual machine xyz on host xxx failed with exception: Build of 
instance 8b9a1d54-9417-454b-bb4f-b94d0d89da5d was re-scheduled: Subnet 
0d08b30c-4eb8-4b2a-9bd0-0b4e0ba5d694 could not be found. Neutron server returns 
request_ids: ['req-9bc7b6c9-bc73-4234-9854-f5d091a52616']

  Exception seen:

  2016-11-11 05:43:16.389 55297 WARNING nova.compute.manager 
[req-e43bafe4-866f-4865-866e-45ad8c5408e8 
0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9 
38b6750413e742db97ffa854c2752848 - - -] [instance: 
8b9a1d54-9417-454b-bb4f-b94d0d89da5d] Instance failed to spawn
  2016-11-11 05:43:16.389 55297 ERROR nova.compute.manager [instance: 
8b9a1d54-9417-454b-bb4f-b94d0d89da5d] o.start_deploy_simple()
  2016-11-11 05:43:16.389 55297 ERROR nova.compute.manager [instance: 
8b9a1d54-9417-454b-bb4f-b94d0d89da5d] r = f(*args, **kwds)
  2016-11-11 05:43:16.389 55297 ERROR nova.compute.manager [instance: 
8b9a1d54-9417-454b-2016-11-11 05:43:16.389 55297 ERROR nova.compute.manager 
[instance: 8b9a1d54-9417-454b-bb4f-b94d0d89da5d] for d in self.network_info:
  2016-11-11 05:43:16.389 55297 ERROR nova.compute.manager [instance: 
8b9a1d54-9417-454b-bb4f-b94d0d89da5d] self.wait()
  2016-11-11 05:43:16.389 55297 ERROR nova.compute.manager [instance: 
8b9a1d54-9417-454b-bb4f-b94d0d89da5d] self[:] = self._gt.wait()
  2016-11-11 05:43:16.389 55297 ERROR nova.compute.manager [instance: 
8b9a1d54-9417-454b-bb4f-b94d0d89da5d] return self._exit_event.wait()
  2016-11-11 05:43:16.389 55297 ERROR nova.compute.manager [instance: 
8b9a1d54-9417-454b-bb4f-b94d0d89da5d] current.throw(*self._exc)
  2016-11-11 05:43:16.389 55297 ERROR nova.compute.manager [instance: 
8b9a1d54-9417-454b-bb4f-b94d0d89da5d] result = function(*args, **kwargs)
  2016-11-11 05:43:16.389 55297 ERROR nova.compute.manager [instance: 
8b9a1d54-9417-454b-bb4f-b94d0d89da5d] return func(*args, **kwargs)
  2016-11-11 05:43:16.389 55297 ERROR nova.compute.manager [instance: 
8b9a1d54-9417-454b-bb4f-b94d0d89da5d] return self.post(self.ports_path, 
body=body)
  2016-11-11 05:43:16.389 55297 ERROR nova.compute.manager [instance: 
8b9a1d54-9

[Yahoo-eng-team] [Bug 1630920] Re: native/idl ovsdb driver loses some ovsdb transactions

2016-11-17 Thread Thomas Morin
** Also affects: bgpvpn
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1630920

Title:
  native/idl ovsdb driver loses some ovsdb transactions

Status in networking-bgpvpn:
  New
Status in neutron:
  In Progress

Bug description:
  It seems the 'native' and the 'vsctl' ovsdb drivers behave
  differently. The native/idl driver seems to lose some ovsdb
  transactions, at least the transactions setting the 'other_config' ovs
  port attribute.

  I have written about this in a comment of an earlier bug report
  (https://bugs.launchpad.net/neutron/+bug/1626010). But I opened this
  new bug report because the two problems seem to be independent and
  that other comment may have gone unnoticed.

  It is not completely clear to me what difference this causes in user-
  observable behavior. I think it at least leads to losing information
  about which conntrack zone to use in the openvswitch firewall driver.
  See here:

  
https://github.com/openstack/neutron/blob/3ade301/neutron/agent/linux/openvswitch_firewall/firewall.py#L257

  The details:

  If I use the vsctl ovsdb driver:

  ml2_conf.ini:
  [ovs]
  ovsdb_interface = vsctl

  then I see this:

  $ > /opt/stack/logs/q-agt.log
  $ sudo ovs-vsctl list Port | grep other_config | grep -c net_uuid
  1
  $ openstack server create --flavor cirros256 --image cirros-0.3.4-x86_64-uec 
--nic net-id=net0 --wait vm0
  $ sudo ovs-vsctl list Port | grep other_config | grep -c net_uuid
  2
  $ openstack server delete vm0
  $ sleep 3
  $ sudo ovs-vsctl list Port | grep other_config | grep -c net_uuid
  1
  $ egrep -c 'Transaction caused no change' /opt/stack/logs/q-agt.log 
  0

  But if I use the (default) native driver:

  ml2_conf.ini:
  [ovs]
  ovsdb_interface = native

  Then this happens:

  $ > /opt/stack/logs/q-agt.log
  $ sudo ovs-vsctl list Port | grep other_config | grep -c net_uuid
  1
  $ openstack server create --flavor cirros256 --image cirros-0.3.4-x86_64-uec 
--nic net-id=net0 --wait vm0
  $ sudo ovs-vsctl list Port | grep other_config | grep -c net_uuid
  1
  $ openstack server delete vm0
  $ sleep 3
  $ sudo ovs-vsctl list Port | grep other_config | grep -c net_uuid
  1
  $ egrep -c 'Transaction caused no change' /opt/stack/logs/q-agt.log
  22

  A sample log message from q-agt.log:

  2016-10-06 09:23:05.447 DEBUG neutron.agent.ovsdb.impl_idl [-] Running txn 
command(idx=0): DbSetCommand(table=Port, col_values=(('other_config', {'tag': 
1}),), record=tap8e2a390d-63) from (pid=6068) do_commit 
/opt/stack/neutron/neutron/agent/ovsdb/impl_idl.py:99
  2016-10-06 09:23:05.448 DEBUG neutron.agent.ovsdb.impl_idl [-] Transaction 
caused no change from (pid=6068) do_commit 
/opt/stack/neutron/neutron/agent/ovsdb/impl_idl.py:126

  devstack version: 563d377
  neutron version: 3ade301
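  One plausible mechanism for the "Transaction caused no change" behavior is
  a driver that diffs requested values against a local replica of the
  database: if the replica is stale, a needed write becomes a no-op. This is
  a hedged toy model, not the actual ovs idl code.

```python
class IdlLikeDriver:
    """Toy model: a write is skipped when the local replica already holds
    the requested value, even if the server never actually received it."""

    def __init__(self):
        self.server = {}   # authoritative ovsdb state
        self.cache = {}    # driver's local replica

    def db_set(self, record, col, value):
        if self.cache.get((record, col)) == value:
            return "Transaction caused no change"   # write silently skipped
        self.cache[(record, col)] = value
        self.server[(record, col)] = value
        return "committed"

drv = IdlLikeDriver()
# Stale replica entry: the driver believes other_config is already set.
drv.cache[("tap8e2a390d-63", "other_config")] = {"tag": 1}
print(drv.db_set("tap8e2a390d-63", "other_config", {"tag": 1}))
print(drv.server.get(("tap8e2a390d-63", "other_config")))  # never reached server
```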

To manage notifications about this bug go to:
https://bugs.launchpad.net/bgpvpn/+bug/1630920/+subscriptions



[Yahoo-eng-team] [Bug 1642543] Re: gate-grenade-dsvm-ubuntu-xenial failing on stable/newton due to devstack not supporting xenial during Mitaka

2016-11-17 Thread Matt Riedemann
https://review.openstack.org/#/c/398920/

** Also affects: openstack-gate
   Importance: Undecided
   Status: New

** Changed in: openstack-gate
   Status: New => In Progress

** Changed in: nova
   Status: New => Invalid

** Changed in: openstack-gate
 Assignee: (unassigned) => Lee Yarwood (lyarwood)

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1642543

Title:
  gate-grenade-dsvm-ubuntu-xenial failing on stable/newton due to
  devstack not supporting xenial during Mitaka

Status in OpenStack-Gate:
  In Progress

Bug description:
  Description
  ===
  gate-grenade-dsvm-ubuntu-xenial failing on stable/newton due to devstack not 
supporting xenial during Mitaka.

  Steps to reproduce
  ==
  Attempt to run gate-grenade-dsvm-ubuntu-xenial against stable/newton.

  Expected result
  ===
  gate-grenade-dsvm-ubuntu-xenial is able to install devstack using 
stable/mitaka on xenial.

  Actual result
  =
  gate-grenade-dsvm-ubuntu-xenial currently fails to install devstack using 
stable/mitaka as xenial is not listed as a tested release.

  Environment
  ===
  Any stable/newton xenial based grenade job.

  Logs & Configs
  ==
  
http://logs.openstack.org/12/398812/1/check/gate-grenade-dsvm-ubuntu-xenial/d9300b7/logs/grenade.sh.txt.gz#_2016-11-17_08_42_15_961

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-gate/+bug/1642543/+subscriptions



[Yahoo-eng-team] [Bug 1642628] [NEW] Detaching encryptors from volumes that are still attached to domains can result in failure

2016-11-17 Thread Lee Yarwood
Public bug reported:

Description
===
Detaching encryptors from volumes that are still attached to domains can result 
in failure.

Steps to reproduce
==
- Attach an encrypted volume to an instance.
- Mount and use the volume within the instance.
- Attempt to detach the volume via Nova while the volume is in-use within the 
instance.

Expected result
===
The volume is detached.

Actual result
=
Nova first attempts to detach the encryptors from the volume that is still 
attached to the libvirt domain. As a result this can fail with `Device or 
resource busy` as I/O is still in-flight between the instance and volume.

Environment
===
1. master, stable/newton.

2. Which hypervisor did you use?
   Libvirt + KVM

2. Which storage type did you use?
   LVM / iSCSI + LUKS

3. Which networking type did you use?
   N/A

Logs & Configs
==

Failed to detach an encrypted volume
https://bugzilla.redhat.com/show_bug.cgi?id=1388417
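
For illustration (not part of the original report), the fix amounts to an ordering change: detach the device from the libvirt domain first, and only then tear down the encryptor, so no guest I/O is in flight when the LUKS mapping is removed. A minimal sketch with hypothetical function names, not nova's real driver API:

```python
# Illustrative ordering sketch -- function names are hypothetical, not
# nova's actual driver API. Detaching the guest device from the domain
# before tearing down the encryptor avoids "Device or resource busy"
# caused by in-flight I/O.
events = []

def detach_from_domain(volume):
    # Stop guest I/O by removing the device from the libvirt domain.
    events.append("domain-detach:%s" % volume)

def detach_encryptor(volume):
    # Only now remove the LUKS/dm-crypt mapping beneath the device.
    events.append("encryptor-detach:%s" % volume)

def detach_volume(volume):
    detach_from_domain(volume)
    detach_encryptor(volume)

detach_volume("vol-1")
print(events)  # -> ['domain-detach:vol-1', 'encryptor-detach:vol-1']
```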

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1642628

Title:
  Detaching encryptors from volumes that are still attached to domains
  can result in failure

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  Detaching encryptors from volumes that are still attached to domains can 
result in failure.

  Steps to reproduce
  ==
  - Attach an encrypted volume to an instance.
  - Mount and use the volume within the instance.
  - Attempt to detach the volume via Nova while the volume is in-use within the 
instance.

  Expected result
  ===
  The volume is detached.

  Actual result
  =
  Nova first attempts to detach the encryptors from the volume that is still 
attached to the libvirt domain. As a result this can fail with `Device or 
resource busy` as I/O is still in-flight between the instance and volume.

  Environment
  ===
  1. master, stable/newton.

  2. Which hypervisor did you use?
 Libvirt + KVM

  2. Which storage type did you use?
 LVM / iSCSI + LUKS

  3. Which networking type did you use?
 N/A

  Logs & Configs
  ==

  Failed to detach an encrypted volume
  https://bugzilla.redhat.com/show_bug.cgi?id=1388417

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1642628/+subscriptions



[Yahoo-eng-team] [Bug 1268751] Re: Potential token revocation abuse via group membership

2016-11-17 Thread Richard
** Changed in: keystone
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1268751

Title:
  Potential token revocation abuse via group membership

Status in OpenStack Identity (keystone):
  Won't Fix
Status in OpenStack Security Advisory:
  Won't Fix
Status in OpenStack Security Notes:
  Fix Released

Bug description:
  If a group is deleted, all tokens for all users that are members of
  that group are revoked. This leads to potential abuse:

  1. A group admin adds a user to a group without the user's knowledge.
  2. The user creates a token.
  3. The admin deletes the group.
  4. All of the user's tokens are revoked.

  Admittedly, this abuse must be instigated by a group admin, which is
  the global admin in the default policy file, but an alternative policy
  file could allow for the delegation of "add user to group" behavior.
  In such a system, this could act as a denial of service attack for a
  set of users.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1268751/+subscriptions



[Yahoo-eng-team] [Bug 1642603] [NEW] Horizon dialog box cannot be moved once titlebar is off screen

2016-11-17 Thread Ivica
Public bug reported:

Select Compute->Images then click the +Create Image button.

Using the mouse, grab the titlebar of the Create Image dialog box and
drag it upwards such that the title bar is no longer visible in the
window.

You can no longer drag the dialog box down to access fields at the top
of the dialog box.

Expected results: You should not be able to drag the dialog box outside
the frame. A scroll bar should be placed on the side of the dialog box
to allow users to scroll up and down to see fields that are not
displayed on the screen.

Workaround: Click on the Cancel button to close the dialog box and start
over. Enter items at the top of the dialog box first, then drag the
dialog box up so that fields in the lower area can be addressed.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1642603

Title:
  Horizon dialog box cannot be moved once titlebar is off screen

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Select Compute->Images then click the +Create Image button.

  Using the mouse, grab the titlebar of the Create Image dialog box and
  drag it upwards such that the title bar is no longer visible in the
  window.

  You can no longer drag the dialog box down to access fields at the top
  of the dialog box.

  Expected results: You should not be able to drag the dialog box
  outside the frame. A scroll bar should be placed on the side of the
  dialog box to allow users to scroll up and down to see fields that are
  not displayed on the screen.

  Workaround: Click on the Cancel button to close the dialog box and
  start over. Enter items at the top of the dialog box first, then drag
  the dialog box up so that fields in the lower area can be addressed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1642603/+subscriptions



[Yahoo-eng-team] [Bug 1642589] [NEW] Resource tracker logs at too high level

2016-11-17 Thread Bob Ball
Public bug reported:

There are very few INFO messages logged by periodic tasks, but every
minute the resource tracker logs 4 lines, for example:

<182>Nov 17 13:26:24 node-4 nova-compute: 2016-11-17 13:26:24.942 1143 INFO 
nova.compute.resource_tracker [req-85a43595-8d23-4f5f-bfb2-59689d3b873f - - - - 
-] Auditing locally available compute resources for node xrtmia-03-01
<182>Nov 17 13:26:25 node-4 nova-compute: 2016-11-17 13:26:25.749 1143 INFO 
nova.compute.resource_tracker [req-85a43595-8d23-4f5f-bfb2-59689d3b873f - - - - 
-] Total usable vcpus: 24, total allocated vcpus: 4
<182>Nov 17 13:26:25 node-4 nova-compute: 2016-11-17 13:26:25.749 1143 INFO 
nova.compute.resource_tracker [req-85a43595-8d23-4f5f-bfb2-59689d3b873f - - - - 
-] Final resource view: name=xrtmia-03-01 phys_ram=98293MB used_ram=12905MB 
phys_disk=450GB used_disk=40GB total_vcpus=24 used_vcpus=4 pci_stats=[]
<182>Nov 17 13:26:25 node-4 nova-compute: 2016-11-17 13:26:25.789 1143 INFO 
nova.compute.resource_tracker [req-85a43595-8d23-4f5f-bfb2-59689d3b873f - - - - 
-] Compute_service record updated for node-4.domain.tld:xrtmia-03-01

Of these 4 lines, only one is useful at the INFO level; the one reporting the 
final resource view.
The other three log lines should be reduced to DEBUG level as they are 
generally not useful to be logged in normal operation.
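
The proposed demotion can be sketched with plain Python logging (illustrative only, not nova's actual code): only the final-resource-view message stays at INFO, so a deployment running at the default INFO level sees one line per audit instead of four.

```python
import logging

# Sketch of the proposed change (illustrative, not nova's actual code):
# only the "final resource view" message stays at INFO; the other three
# periodic messages are demoted to DEBUG.
LOG = logging.getLogger("resource_tracker_demo")

def audit(node, total_vcpus, used_vcpus):
    LOG.debug("Auditing locally available compute resources for node %s", node)
    LOG.debug("Total usable vcpus: %s, total allocated vcpus: %s",
              total_vcpus, used_vcpus)
    LOG.info("Final resource view: name=%s total_vcpus=%s used_vcpus=%s",
             node, total_vcpus, used_vcpus)
    LOG.debug("Compute_service record updated for %s", node)

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    audit("xrtmia-03-01", 24, 4)  # at INFO, only one line is emitted
```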

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

- Every minute the resource tracker logs 4 lines, for example:
+ There are very few INFO messages logged by periodic tasks, but every
+ minute the resource tracker logs 4 lines, for example:
  
  <182>Nov 17 13:26:24 node-4 nova-compute: 2016-11-17 13:26:24.942 1143 INFO 
nova.compute.resource_tracker [req-85a43595-8d23-4f5f-bfb2-59689d3b873f - - - - 
-] Auditing locally available compute resources for node xrtmia-03-01
  <182>Nov 17 13:26:25 node-4 nova-compute: 2016-11-17 13:26:25.749 1143 INFO 
nova.compute.resource_tracker [req-85a43595-8d23-4f5f-bfb2-59689d3b873f - - - - 
-] Total usable vcpus: 24, total allocated vcpus: 4
  <182>Nov 17 13:26:25 node-4 nova-compute: 2016-11-17 13:26:25.749 1143 INFO 
nova.compute.resource_tracker [req-85a43595-8d23-4f5f-bfb2-59689d3b873f - - - - 
-] Final resource view: name=xrtmia-03-01 phys_ram=98293MB used_ram=12905MB 
phys_disk=450GB used_disk=40GB total_vcpus=24 used_vcpus=4 pci_stats=[]
  <182>Nov 17 13:26:25 node-4 nova-compute: 2016-11-17 13:26:25.789 1143 INFO 
nova.compute.resource_tracker [req-85a43595-8d23-4f5f-bfb2-59689d3b873f - - - - 
-] Compute_service record updated for node-4.domain.tld:xrtmia-03-01
  
  Of these 4 lines, only one is useful at the INFO level; the one reporting the 
final resource view.
  The other three log lines should be reduced to DEBUG level as they are 
generally not useful to be logged in normal operation.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1642589

Title:
  Resource tracker logs at too high level

Status in OpenStack Compute (nova):
  New

Bug description:
  There are very few INFO messages logged by periodic tasks, but every
  minute the resource tracker logs 4 lines, for example:

  <182>Nov 17 13:26:24 node-4 nova-compute: 2016-11-17 13:26:24.942 1143 INFO 
nova.compute.resource_tracker [req-85a43595-8d23-4f5f-bfb2-59689d3b873f - - - - 
-] Auditing locally available compute resources for node xrtmia-03-01
  <182>Nov 17 13:26:25 node-4 nova-compute: 2016-11-17 13:26:25.749 1143 INFO 
nova.compute.resource_tracker [req-85a43595-8d23-4f5f-bfb2-59689d3b873f - - - - 
-] Total usable vcpus: 24, total allocated vcpus: 4
  <182>Nov 17 13:26:25 node-4 nova-compute: 2016-11-17 13:26:25.749 1143 INFO 
nova.compute.resource_tracker [req-85a43595-8d23-4f5f-bfb2-59689d3b873f - - - - 
-] Final resource view: name=xrtmia-03-01 phys_ram=98293MB used_ram=12905MB 
phys_disk=450GB used_disk=40GB total_vcpus=24 used_vcpus=4 pci_stats=[]
  <182>Nov 17 13:26:25 node-4 nova-compute: 2016-11-17 13:26:25.789 1143 INFO 
nova.compute.resource_tracker [req-85a43595-8d23-4f5f-bfb2-59689d3b873f - - - - 
-] Compute_service record updated for node-4.domain.tld:xrtmia-03-01

  Of these 4 lines, only one is useful at the INFO level; the one reporting the 
final resource view.
  The other three log lines should be reduced to DEBUG level as they are 
generally not useful to be logged in normal operation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1642589/+subscriptions



[Yahoo-eng-team] [Bug 1640767] Re: Create a qos-policy , name attribute is necessary when using openstack command line interface, and it become non essential option when I use API or curl command to c

2016-11-17 Thread Akihiro Motoki
As we discussed in the above mailing list thread by slaweq, requiring
'name' is a design decision in both OSC and neutronclient CLI. Let's
mark this as Won't Fix.

** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1640767

Title:
  Create a qos-policy , name attribute  is necessary when using
  openstack command line interface,  and it become non essential option
  when I use API or curl command to create

Status in neutron:
  Won't Fix

Bug description:
  When creating a qos-policy, the name attribute is required by the
  openstack command-line interface, but it is optional when creating via
  the API or a curl command.

  Creation of a network has a similar problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1640767/+subscriptions



[Yahoo-eng-team] [Bug 1642579] [NEW] use neutron for nova-compute but compute use nova-network rpc api

2016-11-17 Thread Wang Liming
Public bug reported:

My nova.conf contains:
network_api_class=nova.network.neutronv2.api.API
use_neutron=True

When I create a VM and nova-compute cannot find the glance image, the
following exception occurs:
2016-11-14 11:32:48.729 37828  Traceback (most recent call last):
2016-11-14 11:32:48.729 37828File 
"/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 358, in 
_cleanup_allocated_networks
2016-11-14 11:32:48.729 37828  context, instance, 
requested_networks=requested_networks)
2016-11-14 11:32:48.729 37828File 
"/usr/lib/python2.7/site-packages/nova/network/api.py", line 49, in wrapped
2016-11-14 11:32:48.729 37828  return func(self, context, *args, **kwargs)
2016-11-14 11:32:48.729 37828File 
"/usr/lib/python2.7/site-packages/nova/network/api.py", line 299, in 
deallocate_for_instance
2016-11-14 11:32:48.729 37828  requested_networks=requested_networks)
2016-11-14 11:32:48.729 37828File 
"/usr/lib/python2.7/site-packages/nova/network/rpcapi.py", line 183, in 
deallocate_for_instance
2016-11-14 11:32:48.729 37828  return cctxt.call(ctxt, 
'deallocate_for_instance', **kwargs)
2016-11-14 11:32:48.729 37828File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 413, in 
call
2016-11-14 11:32:48.729 37828  return self.prepare().call(ctxt, method, 
**kwargs)
2016-11-14 11:32:48.729 37828File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 158, in 
call
2016-11-14 11:32:48.729 37828  retry=self.retry)
2016-11-14 11:32:48.729 37828File 
"/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 91, in 
_send
2016-11-14 11:32:48.729 37828  timeout=timeout, retry=retry)
2016-11-14 11:32:48.729 37828File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
512, in send
2016-11-14 11:32:48.729 37828  retry=retry)
2016-11-14 11:32:48.729 37828File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
501, in _send
2016-11-14 11:32:48.729 37828  result = self._waiter.wait(msg_id, timeout)
2016-11-14 11:32:48.729 37828File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
379, in wait
2016-11-14 11:32:48.729 37828  message = self.waiters.get(msg_id, 
timeout=timeout)
2016-11-14 11:32:48.729 37828File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
277, in get
2016-11-14 11:32:48.729 37828  'to message ID %s' % msg_id)
2016-11-14 11:32:48.729 37828  MessagingTimeout: Timed out waiting for a reply 
to message ID e3b4c7fb434b48908ac4f0ef49fb77f1

I use neutron for networking, but the error log shows that nova-conductor
uses the nova-network RPC API. For example, the traceback points at
File "/usr/lib/python2.7/site-packages/nova/network/api.py", line 299,
in deallocate_for_instance; it should instead be using
/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py.

Why? My guess is that the bug occurs when I create an instance that ends
up in error; if I create an instance that becomes active, the bug does
not occur.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1642579

Title:
  use neutron for nova-compute but compute use nova-network rpc api

Status in OpenStack Compute (nova):
  New

Bug description:
  My nova.conf contains:
  network_api_class=nova.network.neutronv2.api.API
  use_neutron=True

  When I create a VM and nova-compute cannot find the glance image, the
  following exception occurs:
  2016-11-14 11:32:48.729 37828  Traceback (most recent call last):
  2016-11-14 11:32:48.729 37828File 
"/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 358, in 
_cleanup_allocated_networks
  2016-11-14 11:32:48.729 37828  context, instance, 
requested_networks=requested_networks)
  2016-11-14 11:32:48.729 37828File 
"/usr/lib/python2.7/site-packages/nova/network/api.py", line 49, in wrapped
  2016-11-14 11:32:48.729 37828  return func(self, context, *args, **kwargs)
  2016-11-14 11:32:48.729 37828File 
"/usr/lib/python2.7/site-packages/nova/network/api.py", line 299, in 
deallocate_for_instance
  2016-11-14 11:32:48.729 37828  requested_networks=requested_networks)
  2016-11-14 11:32:48.729 37828File 
"/usr/lib/python2.7/site-packages/nova/network/rpcapi.py", line 183, in 
deallocate_for_instance
  2016-11-14 11:32:48.729 37828  return cctxt.call(ctxt, 
'deallocate_for_instance', **kwargs)
  2016-11-14 11:32:48.729 37828File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 413, in 
call
  2016-11-14 11:32:48.729 37828  return self.prepare().call(ctxt, method, 
**kwargs)
  2016-11-14 11:32:48.729 37828File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 158, in 
call
  2016-11-14 11:32:48.729 37828  retry=self.r

[Yahoo-eng-team] [Bug 1621698] Re: AFTER_DELETE event for SECURITY_GROUP_RULE should contain sg_id

2016-11-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/367728
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=3e25ee3e28e1832b8eb0474db1e980b75f377df1
Submitter: Jenkins
Branch:master

commit 3e25ee3e28e1832b8eb0474db1e980b75f377df1
Author: Hong Hui Xiao 
Date:   Fri Sep 9 10:27:47 2016 +0800

Add sg_id in the AFTER_DELETE event of sg_rule delete

In the AFTER_DELETE of sg_rule delete, the sg_rule is actually deleted.
This makes it impossible/hard to know which sg is affected.

This patch add the sg_id in the event. And this patch will be used in [1].

[1] https://review.openstack.org/#/c/367718

Change-Id: Ibdef6a703913b74504e402d225b1a0dffadb7aff
Closes-Bug: #1621698


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1621698

Title:
  AFTER_DELETE event for SECURITY_GROUP_RULE should contain sg_id

Status in neutron:
  Fix Released

Bug description:
  In the notification of the AFTER_DELETE event for SECURITY_GROUP_RULE,
  the security group rule has already been deleted from the DB. There is
  no way for a subscriber to know the latest information of the related
  security group.

  To be specific, dragonflow maintains a security group version, and we
  are using revision_number in dragonflow now. The sg_rule_delete bumps
  the sg revision, which only happens after the db transaction.
  dragonflow stores security group rules as part of the security group,
  so in the AFTER_DELETE event of SECURITY_GROUP_RULE we don't know
  which security group was updated if neutron doesn't pass the security
  group id.

  We could query all security groups and iterate over their security
  group rules, but that is inefficient; it would be nice if neutron just
  passed the related security group id.
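
A minimal sketch (illustrative names, not neutron's actual callbacks API) of why including the security group id in the AFTER_DELETE payload helps a subscriber such as dragonflow:

```python
# Minimal callback-registry sketch (illustrative, not neutron's actual
# callbacks API): because the AFTER_DELETE payload now carries the
# security group id, a subscriber can bump its cached version for just
# the affected group instead of rescanning every group.
_subscribers = []

def subscribe(callback):
    _subscribers.append(callback)

def notify_rule_after_delete(rule_id, security_group_id):
    # security_group_id is the field the fix adds to the event payload.
    for cb in _subscribers:
        cb(rule_id=rule_id, security_group_id=security_group_id)

sg_versions = {"sg-1": 7}

def bump_sg_version(rule_id, security_group_id):
    sg_versions[security_group_id] += 1

subscribe(bump_sg_version)
notify_rule_after_delete("rule-9", "sg-1")
print(sg_versions["sg-1"])  # -> 8
```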

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1621698/+subscriptions



[Yahoo-eng-team] [Bug 1642543] [NEW] gate-grenade-dsvm-ubuntu-xenial failing on stable/newton due to devstack not supporting xenial during Mitaka

2016-11-17 Thread Lee Yarwood
Public bug reported:

Description
===
gate-grenade-dsvm-ubuntu-xenial failing on stable/newton due to devstack not 
supporting xenial during Mitaka.

Steps to reproduce
==
Attempt to run gate-grenade-dsvm-ubuntu-xenial against stable/newton.

Expected result
===
gate-grenade-dsvm-ubuntu-xenial is able to install devstack using stable/mitaka 
on xenial.

Actual result
=
gate-grenade-dsvm-ubuntu-xenial currently fails to install devstack using 
stable/mitaka as xenial is not listed as a tested release.

Environment
===
Any stable/newton xenial based grenade job.

Logs & Configs
==
http://logs.openstack.org/12/398812/1/check/gate-grenade-dsvm-ubuntu-xenial/d9300b7/logs/grenade.sh.txt.gz#_2016-11-17_08_42_15_961

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1642543

Title:
  gate-grenade-dsvm-ubuntu-xenial failing on stable/newton due to
  devstack not supporting xenial during Mitaka

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  gate-grenade-dsvm-ubuntu-xenial failing on stable/newton due to devstack not 
supporting xenial during Mitaka.

  Steps to reproduce
  ==
  Attempt to run gate-grenade-dsvm-ubuntu-xenial against stable/newton.

  Expected result
  ===
  gate-grenade-dsvm-ubuntu-xenial is able to install devstack using 
stable/mitaka on xenial.

  Actual result
  =
  gate-grenade-dsvm-ubuntu-xenial currently fails to install devstack using 
stable/mitaka as xenial is not listed as a tested release.

  Environment
  ===
  Any stable/newton xenial based grenade job.

  Logs & Configs
  ==
  
http://logs.openstack.org/12/398812/1/check/gate-grenade-dsvm-ubuntu-xenial/d9300b7/logs/grenade.sh.txt.gz#_2016-11-17_08_42_15_961

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1642543/+subscriptions



[Yahoo-eng-team] [Bug 1642388] Re: ovsdb error: "attempting to write bad value to column other_config"

2016-11-17 Thread Thomas Morin
** Also affects: bgpvpn
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1642388

Title:
  ovsdb error: "attempting to write bad value to column other_config"

Status in networking-bgpvpn:
  New
Status in neutron:
  New

Bug description:
  We see this error in networking-bgpvpn tempest test runs:

  2016-11-16 17:13:51.031 20988 DEBUG neutron.agent.ovsdb.impl_idl [-] Running 
txn command(idx=0): DbSetCommand(table=Port, col_values=(('other_config', 
{'tag': 4}),), record=qvo655bfaa6-18) do_commit 
/opt/stack/new/neutron/neutron/agent/ovsdb/impl_idl.py:100
  2016-11-16 17:13:51.031 20988 ERROR neutron.agent.ovsdb.native.vlog [-] 
attempting to write bad value to column other_config (ovsdb error: expected 
string, got )
  2016-11-16 17:13:51.031 20988 DEBUG neutron.agent.ovsdb.impl_idl [-] 
Transaction caused no change do_commit 
/opt/stack/new/neutron/neutron/agent/ovsdb/impl_idl.py:127

  Nothing obviously points at something related or specific to
  networking-bgpvpn.

  Full logs:
  
http://logs.openstack.org/67/396967/4/check/gate-tempest-dsvm-networking-bgpvpn-bagpipe/bf56a53/logs/screen-q-agt.txt.gz?level=DEBUG#_2016-11-16_17_13_51_031
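
For context (not part of the original report): OVSDB map columns such as other_config are string-to-string maps, so integer values like a VLAN tag must be stringified before the write. A minimal sketch of the kind of coercion involved; the helper name is hypothetical:

```python
def stringify_map_values(col_values):
    """Coerce dict values to strings before writing OVSDB map columns.

    OVSDB's other_config column is a string->string map, so a write of
    {'tag': 4} is rejected with "expected string, got <int>", while
    {'tag': '4'} is accepted.
    """
    sanitized = []
    for column, value in col_values:
        if isinstance(value, dict):
            value = {k: str(v) for k, v in value.items()}
        sanitized.append((column, value))
    return sanitized

print(stringify_map_values([("other_config", {"tag": 4})]))
# -> [('other_config', {'tag': '4'})]
```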

To manage notifications about this bug go to:
https://bugs.launchpad.net/bgpvpn/+bug/1642388/+subscriptions



[Yahoo-eng-team] [Bug 1642537] [NEW] Request fails with DBDeadlock

2016-11-17 Thread Miguel Angel Ajo
Public bug reported:

http://logs.openstack.org/00/352200/14/check/gate-tempest-dsvm-neutron-linuxbridge-ubuntu-xenial/6ba61d3/console.html


2016-11-17 05:06:33.890498 | Captured traceback:
2016-11-17 05:06:33.890507 | ~~~
2016-11-17 05:06:33.890534 | Traceback (most recent call last):
2016-11-17 05:06:33.890566 |   File 
"tempest/api/compute/servers/test_server_actions.py", line 289, in 
test_resize_server_confirm
2016-11-17 05:06:33.890586 | 
self._test_resize_server_confirm(stop=False)
2016-11-17 05:06:33.890613 |   File 
"tempest/api/compute/servers/test_server_actions.py", line 271, in 
_test_resize_server_confirm
2016-11-17 05:06:33.890624 | 'VERIFY_RESIZE')
2016-11-17 05:06:33.890644 |   File "tempest/common/waiters.py", line 75, 
in wait_for_server_status
2016-11-17 05:06:33.890655 | server_id=server_id)
2016-11-17 05:06:33.890686 | tempest.exceptions.BuildErrorException: Server 
00876f9f-2f30-4507-93c0-f0f5a1699565 failed to build and is in ERROR status
2016-11-17 05:06:33.890761 | Details: {u'code': 500, u'message': u"Remote 
error: DBDeadlock (pymysql.err.InternalError) (1213, u'Deadlock found when 
trying to get lock; try restarting transaction') [SQL: u'UPDATE migrations SET 
updated_at=%(updated_at)s, status=%(status)s WHERE migrations.id = 
%(migrations_id)s'] [parame", u'created': u'2016-11-17T04:43:56Z'}
2016-11-17 05:06:33.890767 |

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1642537

Title:
  Request fails with DBDeadlock

Status in OpenStack Compute (nova):
  New

Bug description:
  http://logs.openstack.org/00/352200/14/check/gate-tempest-dsvm-neutron-linuxbridge-ubuntu-xenial/6ba61d3/console.html

  
  2016-11-17 05:06:33.890498 | Captured traceback:
  2016-11-17 05:06:33.890507 | ~~~
  2016-11-17 05:06:33.890534 | Traceback (most recent call last):
  2016-11-17 05:06:33.890566 |   File 
"tempest/api/compute/servers/test_server_actions.py", line 289, in 
test_resize_server_confirm
  2016-11-17 05:06:33.890586 | 
self._test_resize_server_confirm(stop=False)
  2016-11-17 05:06:33.890613 |   File 
"tempest/api/compute/servers/test_server_actions.py", line 271, in 
_test_resize_server_confirm
  2016-11-17 05:06:33.890624 | 'VERIFY_RESIZE')
  2016-11-17 05:06:33.890644 |   File "tempest/common/waiters.py", line 75, 
in wait_for_server_status
  2016-11-17 05:06:33.890655 | server_id=server_id)
  2016-11-17 05:06:33.890686 | tempest.exceptions.BuildErrorException: 
Server 00876f9f-2f30-4507-93c0-f0f5a1699565 failed to build and is in ERROR 
status
  2016-11-17 05:06:33.890761 | Details: {u'code': 500, u'message': u"Remote 
error: DBDeadlock (pymysql.err.InternalError) (1213, u'Deadlock found when 
trying to get lock; try restarting transaction') [SQL: u'UPDATE migrations SET 
updated_at=%(updated_at)s, status=%(status)s WHERE migrations.id = 
%(migrations_id)s'] [parame", u'created': u'2016-11-17T04:43:56Z'}
  2016-11-17 05:06:33.890767 |

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1642537/+subscriptions



[Yahoo-eng-team] [Bug 1082248] Re: Use uuidutils instead of uuid.uuid4()

2016-11-17 Thread Thomas Herve
** No longer affects: heat

** No longer affects: python-heatclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1082248

Title:
  Use uuidutils instead of uuid.uuid4()

Status in Cinder:
  In Progress
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Ironic:
  Fix Released
Status in ironic-python-agent:
  Fix Released
Status in kuryr:
  In Progress
Status in kuryr-libnetwork:
  In Progress
Status in Magnum:
  In Progress
Status in Mistral:
  Fix Released
Status in Murano:
  In Progress
Status in networking-calico:
  In Progress
Status in networking-ovn:
  Fix Released
Status in networking-sfc:
  In Progress
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in python-muranoclient:
  In Progress
Status in Sahara:
  Fix Released
Status in senlin:
  Fix Released
Status in tacker:
  In Progress

Bug description:
  Openstack common has a wrapper for generating uuids.

  We should only use that function when generating uuids for
  consistency.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1082248/+subscriptions



[Yahoo-eng-team] [Bug 1608553] Re: simplify chained comparison

2016-11-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/349552
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=42ac50b4b014ee461d545cc39cbbcf335ad776a4
Submitter: Jenkins
Branch:master

commit 42ac50b4b014ee461d545cc39cbbcf335ad776a4
Author: liaozd 
Date:   Mon Aug 1 22:51:55 2016 +0800

Simplify chained comparison

Change-Id: Ibcf2ba7fb96ace3e575681cd6db413fb37924c82
Implements: fix a bug
Closes-Bug: #1608553


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1608553

Title:
  simplify chained comparison

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  simplify chained comparison on
  ./openstack_dashboard/dashboards/identity/projects/workflows.py:104

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1608553/+subscriptions



[Yahoo-eng-team] [Bug 1642517] [NEW] several resources lack revision_on_change attribute to bump the version of their parent resources

2016-11-17 Thread Lujin Luo
Public bug reported:

I went through some code related to bumping the parent resource's
revision number when child resources, which are not top-level Neutron
objects, are updated. I found that the following child resources lack
the "revises_on_change" attribute, which is used to bump the parent
resource's revision number.

* PortBindingPort (Port)
* QosPortPolicyBinding (Port)
* SegmentHostMapping (Network Segment)
* RouterExtraAttributes (Router)
* ExternalNetwork (Network)
* QosNetworkPolicyBinding (Network)

** Affects: neutron
 Importance: Undecided
 Assignee: Lujin Luo (luo-lujin)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Lujin Luo (luo-lujin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1642517

Title:
  several resources lack revision_on_change attribute to bump the
  version of their parent resources

Status in neutron:
  New

Bug description:
  I went through some code related to bumping the parent resource's
  revision number when child resources, which are not top-level Neutron
  objects, are updated. I found that the following child resources lack
  the "revises_on_change" attribute, which is used to bump the parent
  resource's revision number.

  * PortBindingPort (Port)
  * QosPortPolicyBinding (Port)
  * SegmentHostMapping (Network Segment)
  * RouterExtraAttributes (Router)
  * ExternalNetwork (Network)
  * QosNetworkPolicyBinding (Network)
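  As a rough illustration of the mechanism: a child model names its
  parent relationships in "revises_on_change", and the revision
  machinery bumps each named parent's revision number whenever the
  child changes. The classes and helper below are simplified stand-ins,
  not Neutron's actual implementation:

```python
class Port:
    """Stand-in for a top-level resource carrying a revision number."""
    def __init__(self):
        self.revision_number = 0

class QosPortPolicyBinding:
    """Stand-in child model; names parent relationships to bump."""
    revises_on_change = ('port',)

    def __init__(self, port):
        self.port = port

def bump_parents(child):
    """Sketch of what the revision plugin does on a child change."""
    for rel in getattr(child, 'revises_on_change', ()):
        parent = getattr(child, rel, None)
        if parent is not None:
            parent.revision_number += 1

port = Port()
binding = QosPortPolicyBinding(port)
bump_parents(binding)  # simulate an update to the child binding
assert port.revision_number == 1
```

  A model missing the attribute (like the ones listed in this bug)
  would fall through the getattr default and never bump its parent.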

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1642517/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1642506] [NEW] Test fails due to wrong reference to attributes of dict object

2016-11-17 Thread Hiroaki Kobayashi
Public bug reported:

The reference to an attribute of a dict object is wrong in _extend_servers() in
nova/api/openstack/compute/security_groups.py.
That causes an AttributeError: 'dict' object has no attribute 'XXX'.

Concretely, "group.name" should be changed to "group['name']".

** Affects: nova
 Importance: Undecided
 Assignee: Hiroaki Kobayashi (hiro-kobayashi)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Hiroaki Kobayashi (hiro-kobayashi)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1642506

Title:
  Test fails due to wrong reference to attributes of dict object

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The reference to an attribute of a dict object is wrong in
  _extend_servers() in nova/api/openstack/compute/security_groups.py.
  That causes an AttributeError: 'dict' object has no attribute 'XXX'.

  Concretely, "group.name" should be changed to "group['name']".
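  A minimal reproduction of the failure mode: the security group
  entries are plain dicts, so attribute access raises AttributeError
  while subscript access works. The dict below is illustrative, not
  nova's actual data structure:

```python
# Illustrative security-group entry as a plain dict.
group = {'name': 'default'}

# The buggy access pattern: dicts do not expose keys as attributes.
try:
    group.name
except AttributeError:
    caught = True
else:
    caught = False

assert caught
assert group['name'] == 'default'  # the corrected access pattern
```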

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1642506/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1629816] Re: Misleading "DVR: Duplicate DVR router interface detected for subnet"

2016-11-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/381025
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=2f7c58f405288d323722ec6be224c4c70ccea8b3
Submitter: Jenkins
Branch:master

commit 2f7c58f405288d323722ec6be224c4c70ccea8b3
Author: Oleg Bondarev 
Date:   Mon Oct 3 12:53:18 2016 +0300

DVR: remove misleading error log

csnat_ofport is always OFPORT_INVALID on compute nodes
so the error was always wrong.
Not sure how it could mean duplicate dvr port even on controllers,
so the patch is just removing the condition and the log.

Closes-Bug: #1629816
Change-Id: Ifbb8128fbd932946dab84a73b780da495f2ea1af


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1629816

Title:
  Misleading "DVR: Duplicate DVR router interface detected for subnet"

Status in neutron:
  Fix Released

Bug description:
  The error message is seen on each OVS agent resync on compute nodes
  that host DVR-serviced ports. A resync can be triggered by any error;
  the trigger itself is unrelated to this bug.

  The error message appears when processing a distributed router port
  for a subnet that is already in the agent's local_dvr_map; see
  _bind_distributed_router_interface_port in ovs_dvr_neutron_agent.py:

    if subnet_uuid in self.local_dvr_map:
        ldm = self.local_dvr_map[subnet_uuid]
        csnat_ofport = ldm.get_csnat_ofport()
        if csnat_ofport == constants.OFPORT_INVALID:
            LOG.error(_LE("DVR: Duplicate DVR router interface detected "
                          "for subnet %s"), subnet_uuid)
            return

  Here csnat_ofport is OFPORT_INVALID by default and can only change
  when the agent processes the router's csnat port; this never happens
  on a compute node, so the misleading log is emitted on every resync.

  The proposal would be to delete the condition and the log as they're
  useless.
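  A hedged sketch of why the branch always fires on compute nodes.
  The constant value and the mapping class are simplified stand-ins
  for the agent's internals, not the real neutron code:

```python
OFPORT_INVALID = -1  # stand-in for constants.OFPORT_INVALID

class LocalDVRSubnetMapping:
    """Simplified stand-in for the agent's per-subnet DVR mapping."""
    def __init__(self):
        # Default until a csnat port is bound -- which never happens
        # on a compute node, since csnat ports live on network nodes.
        self.csnat_ofport = OFPORT_INVALID

    def get_csnat_ofport(self):
        return self.csnat_ofport

ldm = LocalDVRSubnetMapping()
# On a compute-node resync the mapping still holds the default,
# so the "duplicate" condition is always true there:
assert ldm.get_csnat_ofport() == OFPORT_INVALID
```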

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1629816/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1642500] [NEW] nova boot min_count and max_count not limited by neutron port quota

2016-11-17 Thread suntao
Public bug reported:

openstack-liberty

1) set user quota:
neutron-port:  10(4 used by network:router_interface and network:dhcp)
nova instance: 10
nova cores:10
nova rams: 51200M

2) try to create 10 VMs in a single request
# nova boot --flavor m1.tiny --image xxx --min-count 10 --max-count 10 --nic 
net-id=yyy instance


I thought nova boot would fail because only 6 ports were left. Actually,
6 instances were created and no error occurred.

Is this reasonable?

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1642500

Title:
  nova boot min_count and max_count not limited by neutron port quota

Status in OpenStack Compute (nova):
  New

Bug description:
  openstack-liberty

  1) set user quota:
  neutron-port:  10(4 used by network:router_interface and network:dhcp)
  nova instance: 10
  nova cores:10
  nova rams: 51200M

  2) try to create 10 VMs in a single request
  # nova boot --flavor m1.tiny --image xxx --min-count 10 --max-count 10 --nic 
net-id=yyy instance

  
  I thought nova boot would fail because only 6 ports were left.
  Actually, 6 instances were created and no error occurred.

  Is this reasonable?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1642500/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp