[Yahoo-eng-team] [Bug 1473303] [NEW] horizon gate failing due to latest release of mock

2015-07-10 Thread Lin Hua Cheng
Public bug reported:


The latest release of mock exposed some bad tests in Horizon, and the gate
jobs are now failing.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1473303

Title:
  horizon gate failing due to latest release of mock

Status in OpenStack Dashboard (Horizon):
  New

Bug description:

  The latest release of mock exposed some bad tests in Horizon, and the
  gate jobs are now failing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1473303/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472712] Re: Using SSL with rabbitmq prevents communication between nova-compute and conductor after latest nova updates

2015-07-10 Thread Markus Zoeller
I added "oslo.messaging" as an affected project.

** Tags added: oslo

** Also affects: oslo.messaging
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1472712

Title:
  Using SSL with rabbitmq prevents communication between nova-compute
  and conductor after latest nova updates

Status in OpenStack Compute (nova):
  New
Status in oslo.messaging:
  New

Bug description:
  On the latest update of the Ubuntu OpenStack packages, it was
  discovered that the nova-compute/nova-conductor
  (1:2014.1.4-0ubuntu2.1) packages encountered a bug with using SSL to
  connect to rabbitmq.

  When this problem occurs, the compute node cannot connect to the
  controller, and this message is constantly displayed:

  WARNING nova.conductor.api [req-4022395c-9501-47cf-bf8e-476e1cc58772
  None None] Timed out waiting for nova-conductor. Is it running? Or did
  this service start before nova-conductor?

  Investigation revealed that having rabbitmq configured with SSL was the
  root cause of this problem. This seems to have been introduced with the
  current version of the nova packages. Rabbitmq was not updated as part of
  this distribution update, but the messaging library
  (python-oslo.messaging 1.3.0-0ubuntu1.1) was updated. So the problem
  could exist in any of these components.

  Versions installed:
  OpenStack version: Icehouse
  Ubuntu 14.04.2 LTS
  nova-conductor   1:2014.1.4-0ubuntu2.1
  nova-compute     1:2014.1.4-0ubuntu2.1
  rabbitmq-server  3.2.4-1
  openssl:amd64/trusty-security  1.0.1f-1ubuntu2.15
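  For context, this is roughly how an Icehouse-era deployment enables SSL
  for the rabbit transport in nova.conf. The host name and certificate
  paths below are illustrative, while rabbit_use_ssl and the kombu_ssl_*
  options are the option names used by oslo.messaging 1.3:

```ini
[DEFAULT]
rabbit_host = controller.example.com
rabbit_port = 5671
rabbit_use_ssl = true
kombu_ssl_ca_certs = /etc/nova/ssl/ca.pem
kombu_ssl_certfile = /etc/nova/ssl/client-cert.pem
kombu_ssl_keyfile = /etc/nova/ssl/client-key.pem
```

  With this configuration, nova-compute exercises the SSL connection path
  in the messaging library on every reconnect, which is where the failure
  described above shows up.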

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1472712/+subscriptions



[Yahoo-eng-team] [Bug 1473308] [NEW] NUMATopologyFilter raises an exception and does not continue filtering the next node when the requested page size is not available on the current host

2015-07-10 Thread liuxiuli
Public bug reported:

Version:
2015.1.0

Problem:
NUMATopologyFilter raises an exception and does not continue filtering the
next node when the requested page size is not available on the host
currently being filtered.

Reproduce steps:
There are two compute nodes: Node1 and Node2.
Node1 has 2M and 4K page sizes, and Node2 has 1G and 4K page sizes.
Set hw:mem_page_size=1048576 in the flavor, and create an instance using
this flavor.

Expected result:
NUMATopologyFilter returns Node2 to build the instance.

Actual result:
NUMATopologyFilter raises the following exception when it filters Node1 and
does not continue to filter Node2.
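A sketch of the expected behavior (illustrative names, not the actual nova
patch): the filter should translate "page size unsupported" into a plain
filter miss so that scheduling proceeds to the next candidate host instead
of aborting with an exception.

```python
class MemoryPageSizeError(Exception):
    """Raised when a host NUMA cell lacks the requested page size."""


def cell_supports_pagesize(host_pagesizes, wanted):
    # Stand-in for the check in nova.virt.hardware that raises when the
    # requested page size is not offered by the host cell.
    if wanted not in host_pagesizes:
        raise MemoryPageSizeError("page size %d not supported" % wanted)
    return True


def host_passes(host_pagesizes, wanted):
    # Treat an unsupported page size as a filter miss, not a hard error,
    # so the scheduler continues with the remaining hosts.
    try:
        return cell_supports_pagesize(host_pagesizes, wanted)
    except MemoryPageSizeError:
        return False


# Node1 offers 4K/2M pages, Node2 offers 4K/1G; the flavor asks for 1G
# (1048576 KiB), so only Node2 should pass.
assert host_passes({4, 2048}, 1048576) is False
assert host_passes({4, 1048576}, 1048576) is True
```

With this pattern, the exception seen in the traceback stays contained in
the filter and Node2 is still considered.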

Exception during message handling: Page size 1048576 is not supported by the host.
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher     executor_callback))
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher     executor_callback)
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, in _do_dispatch
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 142, in inner
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher     return func(*args, **kwargs)
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 86, in select_destinations
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher     filter_properties)
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 67, in select_destinations
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher     filter_properties)
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 138, in _schedule
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher     filter_properties, index=num)
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/scheduler/host_manager.py", line 524, in get_filtered_hosts
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher     hosts, filter_properties, index)
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/filters.py", line 78, in get_filtered_objects
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher     list_objs = list(objs)
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/filters.py", line 44, in filter_all
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher     if self._filter_one(obj, filter_properties):
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/scheduler/filters/__init__.py", line 27, in _filter_one
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher     return self.host_passes(obj, filter_properties)
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/scheduler/filters/numa_topology_filter.py", line 54, in host_passes
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher     pci_stats=host_state.pci_stats))
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/virt/hardware.py", line 1048, in numa_fit_instance_to_host
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher     host_cell, instance_cell, limits)
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/virt/hardware.py", line 778, in _numa_fit_instance_cell
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher     host_cell, instance_cell)
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/virt/hardware.py", line 631, in _numa_cell_supports_pagesize_request
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher     return verify_pagesizes(host_cell, inst_cell, [inst_cell.pagesize])
2015-07-09 13:08:56.771 10446 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/virt/h

[Yahoo-eng-team] [Bug 1376596] Re: Cannot start nova-network on juno-1

2015-07-10 Thread Bruno Bompastor
** Changed in: nova
   Status: Opinion => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1376596

Title:
  Cannot start nova-network on juno-1

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Hi!

  I was testing packstack --allinone on the "RDO test day Juno milestone 3"
  and I could not start nova-network.

  OS: CentOS7
  OpenStack version: juno-1
  Packstack cmd: packstack --allinone --os-neutron-install=n --os-heat-cfn-install=y --os-heat-install=y --use-epel=y

  Error:

  2014-10-01 18:17:53.853 6108 AUDIT nova.service [-] Starting network node (version 2014.2-0.4.b3.el7.centos)
  2014-10-01 18:17:54.055 6108 ERROR nova.openstack.common.threadgroup [-] Failed to add interface: can't add lo to bridge br100: Invalid argument
  2014-10-01 18:17:54.055 6108 TRACE nova.openstack.common.threadgroup Traceback (most recent call last):
  2014-10-01 18:17:54.055 6108 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 125, in wait
  2014-10-01 18:17:54.055 6108 TRACE nova.openstack.common.threadgroup     x.wait()
  2014-10-01 18:17:54.055 6108 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 47, in wait
  2014-10-01 18:17:54.055 6108 TRACE nova.openstack.common.threadgroup     return self.thread.wait()
  2014-10-01 18:17:54.055 6108 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 173, in wait
  2014-10-01 18:17:54.055 6108 TRACE nova.openstack.common.threadgroup     return self._exit_event.wait()
  2014-10-01 18:17:54.055 6108 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 121, in wait
  2014-10-01 18:17:54.055 6108 TRACE nova.openstack.common.threadgroup     return hubs.get_hub().switch()
  2014-10-01 18:17:54.055 6108 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 293, in switch
  2014-10-01 18:17:54.055 6108 TRACE nova.openstack.common.threadgroup     return self.greenlet.switch()
  2014-10-01 18:17:54.055 6108 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 212, in main
  2014-10-01 18:17:54.055 6108 TRACE nova.openstack.common.threadgroup     result = function(*args, **kwargs)
  2014-10-01 18:17:54.055 6108 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/service.py", line 492, in run_service
  2014-10-01 18:17:54.055 6108 TRACE nova.openstack.common.threadgroup     service.start()
  2014-10-01 18:17:54.055 6108 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/service.py", line 164, in start
  2014-10-01 18:17:54.055 6108 TRACE nova.openstack.common.threadgroup     self.manager.init_host()
  2014-10-01 18:17:54.055 6108 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/network/manager.py", line 1776, in init_host
  2014-10-01 18:17:54.055 6108 TRACE nova.openstack.common.threadgroup     super(FlatDHCPManager, self).init_host()
  2014-10-01 18:17:54.055 6108 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/network/manager.py", line 334, in init_host
  2014-10-01 18:17:54.055 6108 TRACE nova.openstack.common.threadgroup     self._setup_network_on_host(ctxt, network)
  2014-10-01 18:17:54.055 6108 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/network/manager.py", line 1785, in _setup_network_on_host
  2014-10-01 18:17:54.055 6108 TRACE nova.openstack.common.threadgroup     self._initialize_network(network)
  2014-10-01 18:17:54.055 6108 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/network/manager.py", line 1451, in _initialize_network
  2014-10-01 18:17:54.055 6108 TRACE nova.openstack.common.threadgroup     self.l3driver.initialize_gateway(network)
  2014-10-01 18:17:54.055 6108 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/network/l3.py", line 105, in initialize_gateway
  2014-10-01 18:17:54.055 6108 TRACE nova.openstack.common.threadgroup     gateway=(network_ref['gateway'] is not None))
  2014-10-01 18:17:54.055 6108 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/network/linux_net.py", line 1411, in plug
  2014-10-01 18:17:54.055 6108 TRACE nova.openstack.common.threadgroup     return _get_interface_driver().plug(network, mac_address, gateway)
  2014-10-01 18:17:54.055 6108 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/network/linux_net.py", line 1460, in plug
  2

[Yahoo-eng-team] [Bug 1300265] Re: some tests call assert_called_once() on a mock; this function doesn't exist and gets auto-mocked, falsely passing tests

2015-07-10 Thread Vitaly Gridnev
** Also affects: sahara
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1300265

Title:
  some tests call assert_called_once() on a mock; this function doesn't
  exist and gets auto-mocked, falsely passing tests

Status in neutron:
  Fix Released
Status in Sahara:
  New

Bug description:
  neutron/tests/unit/agent/linux/test_async_process.py: spawn.assert_called_once()
  neutron/tests/unit/agent/linux/test_async_process.py: func.assert_called_once()
  neutron/tests/unit/agent/linux/test_async_process.py: mock_start.assert_called_once()
  neutron/tests/unit/agent/linux/test_async_process.py: mock_kill_event.send.assert_called_once()
  neutron/tests/unit/agent/linux/test_async_process.py: mock_kill_process.assert_called_once(pid)
  neutron/tests/unit/test_dhcp_agent.py: log.error.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py: device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py: device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py: device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py: device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py: device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py: device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py: device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_post_mortem_debug.py: mock_post_mortem.assert_called_once()
  neutron/tests/unit/test_linux_interface.py: log.assert_called_once()
  neutron/tests/unit/test_l3_agent.py: self.send_arp.assert_called_once()
  neutron/tests/unit/test_l3_agent.py: self.send_arp.assert_called_once()
  neutron/tests/unit/test_l3_agent.py: self.send_arp.assert_called_once()
  neutron/tests/unit/test_l3_agent.py: self.send_arp.assert_called_once()
  neutron/tests/unit/test_l3_agent.py: self.send_arp.assert_called_once()
  neutron/tests/unit/cisco/test_nexus_plugin.py: mock_db.assert_called_once()
  neutron/tests/unit/linuxbridge/test_lb_neutron_agent.py: exec_fn.assert_called_once()
  neutron/tests/unit/services/firewall/agents/l3reference/test_firewall_l3_agent.py: mock_driver_update_firewall.assert_called_once(
  neutron/tests/unit/services/firewall/agents/l3reference/test_firewall_l3_agent.py: mock_driver_delete_firewall.assert_called_once(
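The pitfall these grep hits point at can be shown in a few lines: a Mock
auto-creates any attribute on access, so a misspelled or nonexistent
assertion helper historically returned a fresh child mock instead of
checking anything. The `not_a_real_assertion` name below is illustrative:

```python
from unittest import mock

m = mock.Mock()
m.some_method(1, 2)

# Accessing an undefined attribute on a Mock silently creates a child
# mock; calling it "succeeds" without verifying anything, which is why
# tests using a nonexistent assertion helper passed no matter what.
child = m.not_a_real_assertion()
assert isinstance(child, mock.Mock)  # no assertion ever ran

# A real assertion helper actually checks the recorded call:
m.some_method.assert_called_once_with(1, 2)
```

Later mock releases tightened behavior around these assertion helpers,
which is what surfaced latent tests like the ones listed above.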

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1300265/+subscriptions



[Yahoo-eng-team] [Bug 1473369] [NEW] new mock release broke a bunch of unit tests

2015-07-10 Thread Kevin Benton
Public bug reported:

http://lists.openstack.org/pipermail/openstack-dev/2015-July/069156.html

** Affects: neutron
 Importance: Critical
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473369

Title:
  new mock release broke a bunch of unit tests

Status in neutron:
  In Progress

Bug description:
  http://lists.openstack.org/pipermail/openstack-dev/2015-July/069156.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1473369/+subscriptions



[Yahoo-eng-team] [Bug 1300265] Re: some tests call assert_called_once() on a mock; this function doesn't exist and gets auto-mocked, falsely passing tests

2015-07-10 Thread Sergey Vilgelm
** Also affects: murano
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1300265

Title:
  some tests call assert_called_once() on a mock; this function doesn't
  exist and gets auto-mocked, falsely passing tests

Status in murano:
  In Progress
Status in neutron:
  Fix Released
Status in Sahara:
  Triaged

Bug description:
  neutron/tests/unit/agent/linux/test_async_process.py: spawn.assert_called_once()
  neutron/tests/unit/agent/linux/test_async_process.py: func.assert_called_once()
  neutron/tests/unit/agent/linux/test_async_process.py: mock_start.assert_called_once()
  neutron/tests/unit/agent/linux/test_async_process.py: mock_kill_event.send.assert_called_once()
  neutron/tests/unit/agent/linux/test_async_process.py: mock_kill_process.assert_called_once(pid)
  neutron/tests/unit/test_dhcp_agent.py: log.error.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py: device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py: device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py: device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py: device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py: device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py: device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py: device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_post_mortem_debug.py: mock_post_mortem.assert_called_once()
  neutron/tests/unit/test_linux_interface.py: log.assert_called_once()
  neutron/tests/unit/test_l3_agent.py: self.send_arp.assert_called_once()
  neutron/tests/unit/test_l3_agent.py: self.send_arp.assert_called_once()
  neutron/tests/unit/test_l3_agent.py: self.send_arp.assert_called_once()
  neutron/tests/unit/test_l3_agent.py: self.send_arp.assert_called_once()
  neutron/tests/unit/test_l3_agent.py: self.send_arp.assert_called_once()
  neutron/tests/unit/cisco/test_nexus_plugin.py: mock_db.assert_called_once()
  neutron/tests/unit/linuxbridge/test_lb_neutron_agent.py: exec_fn.assert_called_once()
  neutron/tests/unit/services/firewall/agents/l3reference/test_firewall_l3_agent.py: mock_driver_update_firewall.assert_called_once(
  neutron/tests/unit/services/firewall/agents/l3reference/test_firewall_l3_agent.py: mock_driver_delete_firewall.assert_called_once(

To manage notifications about this bug go to:
https://bugs.launchpad.net/murano/+bug/1300265/+subscriptions



[Yahoo-eng-team] [Bug 1300265] Re: some tests call assert_called_once() on a mock; this function doesn't exist and gets auto-mocked, falsely passing tests

2015-07-10 Thread Sergey Vilgelm
I think this bug is more relevant:
https://bugs.launchpad.net/neutron/+bug/1473369

** No longer affects: murano

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1300265

Title:
  some tests call assert_called_once() on a mock; this function doesn't
  exist and gets auto-mocked, falsely passing tests

Status in neutron:
  Fix Released
Status in Sahara:
  In Progress

Bug description:
  neutron/tests/unit/agent/linux/test_async_process.py: spawn.assert_called_once()
  neutron/tests/unit/agent/linux/test_async_process.py: func.assert_called_once()
  neutron/tests/unit/agent/linux/test_async_process.py: mock_start.assert_called_once()
  neutron/tests/unit/agent/linux/test_async_process.py: mock_kill_event.send.assert_called_once()
  neutron/tests/unit/agent/linux/test_async_process.py: mock_kill_process.assert_called_once(pid)
  neutron/tests/unit/test_dhcp_agent.py: log.error.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py: device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py: device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py: device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py: device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py: device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py: device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py: device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_post_mortem_debug.py: mock_post_mortem.assert_called_once()
  neutron/tests/unit/test_linux_interface.py: log.assert_called_once()
  neutron/tests/unit/test_l3_agent.py: self.send_arp.assert_called_once()
  neutron/tests/unit/test_l3_agent.py: self.send_arp.assert_called_once()
  neutron/tests/unit/test_l3_agent.py: self.send_arp.assert_called_once()
  neutron/tests/unit/test_l3_agent.py: self.send_arp.assert_called_once()
  neutron/tests/unit/test_l3_agent.py: self.send_arp.assert_called_once()
  neutron/tests/unit/cisco/test_nexus_plugin.py: mock_db.assert_called_once()
  neutron/tests/unit/linuxbridge/test_lb_neutron_agent.py: exec_fn.assert_called_once()
  neutron/tests/unit/services/firewall/agents/l3reference/test_firewall_l3_agent.py: mock_driver_update_firewall.assert_called_once(
  neutron/tests/unit/services/firewall/agents/l3reference/test_firewall_l3_agent.py: mock_driver_delete_firewall.assert_called_once(

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1300265/+subscriptions



[Yahoo-eng-team] [Bug 1473369] Re: new mock release broke a bunch of unit tests

2015-07-10 Thread Sergey Vilgelm
** Also affects: python-muranoclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473369

Title:
  new mock release broke a bunch of unit tests

Status in murano:
  In Progress
Status in murano juno series:
  New
Status in murano kilo series:
  New
Status in neutron:
  In Progress
Status in python-muranoclient:
  In Progress

Bug description:
  http://lists.openstack.org/pipermail/openstack-dev/2015-July/069156.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/murano/+bug/1473369/+subscriptions



[Yahoo-eng-team] [Bug 1473369] Re: new mock release broke a bunch of unit tests

2015-07-10 Thread Kirill Zaitsev
** Also affects: murano
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473369

Title:
  new mock release broke a bunch of unit tests

Status in murano:
  In Progress
Status in murano juno series:
  New
Status in murano kilo series:
  New
Status in neutron:
  In Progress
Status in python-muranoclient:
  In Progress

Bug description:
  http://lists.openstack.org/pipermail/openstack-dev/2015-July/069156.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/murano/+bug/1473369/+subscriptions



[Yahoo-eng-team] [Bug 1473369] Re: new mock release broke a bunch of unit tests

2015-07-10 Thread Kirill Zaitsev
** Also affects: murano/kilo
   Importance: Undecided
   Status: New

** Also affects: murano/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473369

Title:
  new mock release broke a bunch of unit tests

Status in murano:
  In Progress
Status in murano juno series:
  New
Status in murano kilo series:
  New
Status in neutron:
  In Progress
Status in python-muranoclient:
  In Progress
Status in python-muranoclient kilo series:
  New

Bug description:
  http://lists.openstack.org/pipermail/openstack-dev/2015-July/069156.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/murano/+bug/1473369/+subscriptions



[Yahoo-eng-team] [Bug 1473369] Re: new mock release broke a bunch of unit tests

2015-07-10 Thread Kirill Zaitsev
** Also affects: python-muranoclient/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473369

Title:
  new mock release broke a bunch of unit tests

Status in murano:
  In Progress
Status in murano juno series:
  New
Status in murano kilo series:
  In Progress
Status in neutron:
  In Progress
Status in python-muranoclient:
  In Progress
Status in python-muranoclient kilo series:
  New

Bug description:
  http://lists.openstack.org/pipermail/openstack-dev/2015-July/069156.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/murano/+bug/1473369/+subscriptions



[Yahoo-eng-team] [Bug 1473401] [NEW] Nova unit tests failing for mock 1.1.0

2015-07-10 Thread Ghanshyam Mann
Public bug reported:

All failures:

http://logs.openstack.org/94/197394/2/check/gate-nova-python27/579ebb9/testr_results.html.gz

http://logs.openstack.org/94/197394/2/check/gate-nova-docs/40f8a27/console.html

** Affects: nova
 Importance: Undecided
 Assignee: Ghanshyam Mann (ghanshyammann)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1473401

Title:
  Nova unit tests failing for mock 1.1.0

Status in OpenStack Compute (nova):
  New

Bug description:
  All failures:

  http://logs.openstack.org/94/197394/2/check/gate-nova-python27/579ebb9/testr_results.html.gz

  http://logs.openstack.org/94/197394/2/check/gate-nova-docs/40f8a27/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1473401/+subscriptions



[Yahoo-eng-team] [Bug 1459046] Re: [SRU] nova-* services do not start if rsyslog is not yet started

2015-07-10 Thread Louis Bouchard
** Also affects: nova (Ubuntu Utopic)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1459046

Title:
  [SRU] nova-* services do not start if rsyslog is not yet started

Status in OpenStack Compute (nova):
  New
Status in oslo.log:
  New
Status in nova package in Ubuntu:
  In Progress
Status in nova source package in Trusty:
  In Progress
Status in nova source package in Utopic:
  In Progress

Bug description:
  [Impact]

   * If Nova services are configured to log to syslog (use_syslog=True), they
     will currently fail with ECONNREFUSED if they cannot connect to syslog.
     This patch adds support for allowing nova to retry connecting a
     configurable number of times before printing an error message and
     continuing with startup.

  [Test Case]

   * Configure nova with use_syslog=True in nova.conf, stop the rsyslog
     service, and restart the nova services. Check the upstart nova logs to
     see the retries occurring, then start rsyslog and observe the connection
     succeed and nova-compute start up.

  [Regression Potential]

   * None
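The retry behaviour described under [Impact] can be sketched as follows.
`retry_on_refused` and its parameters are illustrative stand-ins, not the
actual nova/oslo.log option names:

```python
import errno
import time


def retry_on_refused(connect, retries=5, delay=0.0):
    """Retry `connect` on ECONNREFUSED up to `retries` times, then
    propagate the error (the pattern the SRU adds for syslog startup)."""
    for attempt in range(1, retries + 1):
        try:
            return connect()
        except OSError as e:
            if e.errno != errno.ECONNREFUSED or attempt == retries:
                raise
            time.sleep(delay)


# Simulate rsyslog coming up after two refused connection attempts.
attempts = {"n": 0}

def fake_connect():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise OSError(errno.ECONNREFUSED, "connection refused")
    return "connected"

assert retry_on_refused(fake_connect, retries=5) == "connected"
assert attempts["n"] == 3
```

The same wrapper could sit around a `logging.handlers.SysLogHandler`
constructor, so nova services survive a not-yet-started rsyslog.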

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1459046/+subscriptions



[Yahoo-eng-team] [Bug 1473413] [NEW] py34 fails in neutron.tests.unit.agent.test_securitygroups_rpc

2015-07-10 Thread Ihar Hrachyshka
Public bug reported:

The job sometimes fails in the gate on multiple security group unit tests:

2015-07-10 10:43:53.260 | FAIL: 
neutron.tests.unit.agent.test_securitygroups_rpc.TestSecurityGroupAgentWithOVSIptables.test_security_group_rule_updated
2015-07-10 10:43:53.261 | 
--
2015-07-10 10:43:53.261 | Empty attachments:
2015-07-10 10:43:53.261 |   pythonlogging:'neutron.api.extensions'
2015-07-10 10:43:53.261 | 
2015-07-10 10:43:53.262 | pythonlogging:'': {{{
2015-07-10 10:43:53.262 | 2015-07-10 10:43:06,008 INFO 
[neutron.agent.securitygroups_rpc] Preparing filters for devices ['tap_port1', 
'tap_port3']
2015-07-10 10:43:53.262 | 2015-07-10 10:43:06,008  WARNING 
[neutron.agent.securitygroups_rpc] security_group_info_for_devices rpc call not 
supported by the server, falling back to old security_group_rules_for_devices 
which scales worse.
2015-07-10 10:43:53.263 | 2015-07-10 10:43:06,012 INFO 
[neutron.agent.securitygroups_rpc] Security group rule updated 
['security_group1']
2015-07-10 10:43:53.263 | 2015-07-10 10:43:06,013 INFO 
[neutron.agent.securitygroups_rpc] Refresh firewall rules
2015-07-10 10:43:53.263 | }}}
2015-07-10 10:43:53.264 | 
2015-07-10 10:43:53.264 | Traceback (most recent call last):
2015-07-10 10:43:53.264 |   File 
"/home/jenkins/workspace/gate-neutron-python34/neutron/tests/unit/agent/test_securitygroups_rpc.py",
 line 2996, in test_security_group_rule_updated
2015-07-10 10:43:53.265 | self._verify_mock_calls()
2015-07-10 10:43:53.265 |   File 
"/home/jenkins/workspace/gate-neutron-python34/neutron/tests/unit/agent/test_securitygroups_rpc.py",
 line 2624, in _verify_mock_calls
2015-07-10 10:43:53.265 | matchers.MatchesRegex(expected_regex))
2015-07-10 10:43:53.265 |   File 
"/home/jenkins/workspace/gate-neutron-python34/.tox/py34/lib/python3.4/site-packages/testtools/testcase.py",
 line 435, in assertThat
2015-07-10 10:43:53.266 | raise mismatch_error
2015-07-10 10:43:53.268 | testtools.matchers._impl.MismatchError: '# Generated 
by iptables_manager\n*raw\n:run.py-OUTPUT - [0:0]\n:run.py-PREROUTING - 
[0:0]\n[0:0] -A PREROUTING -j run.py-PREROUTING\n[0:0] -A OUTPUT -j 
run.py-OUTPUT\n[0:0] -A run.py-PREROUTING -m physdev --physdev-in qvbtap_port1 
-j CT --zone 1\n[0:0] -A run.py-PREROUTING -m physdev --physdev-in taptap_port1 
-j CT --zone 1\n[0:0] -A run.py-PREROUTING -m physdev --physdev-in qvbtap_port2 
-j CT --zone 1\n[0:0] -A run.py-PREROUTING -m physdev --physdev-in taptap_port2 
-j CT --zone 1\nCOMMIT\n# Completed by iptables_manager\n# Generated by 
iptables_manager\n*nat\n:neutron-postrouting-bottom - [0:0]\n:run.py-OUTPUT - 
[0:0]\n:run.py-POSTROUTING - [0:0]\n:run.py-PREROUTING - 
[0:0]\n:run.py-float-snat - [0:0]\n:run.py-snat - [0:0]\n[0:0] -A PREROUTING -j 
run.py-PREROUTING\n[0:0] -A OUTPUT -j run.py-OUTPUT\n[0:0] -A POSTROUTING -j 
run.py-POSTROUTING\n[0:0] -A POSTROUTING -j neutron-postrouting-bottom\n[0:0] 
-A neutron-postrouting-bottom -j run.py-snat\n[0:0] -A run.py-snat -j 
run.py-float-snat\nCOMMIT\n# Completed by iptables_manager\n# Generated by 
iptables_manager\n*mangle\n:run.py-FORWARD - [0:0]\n:run.py-INPUT - 
[0:0]\n:run.py-OUTPUT - [0:0]\n:run.py-POSTROUTING - [0:0]\n:run.py-PREROUTING 
- [0:0]\n:run.py-mark - [0:0]\n[0:0] -A PREROUTING -j run.py-PREROUTING\n[0:0] 
-A INPUT -j run.py-INPUT\n[0:0] -A FORWARD -j run.py-FORWARD\n[0:0] -A OUTPUT 
-j run.py-OUTPUT\n[0:0] -A POSTROUTING -j run.py-POSTROUTING\n[0:0] -A 
run.py-PREROUTING -j run.py-mark\nCOMMIT\n# Completed by iptables_manager\n# 
Generated by iptables_manager\n*filter\n:neutron-filter-top - 
[0:0]\n:run.py-FORWARD - [0:0]\n:run.py-INPUT - [0:0]\n:run.py-OUTPUT - 
[0:0]\n:run.py-itap_port1 - [0:0]\n:run.py-itap_port2 - [0:0]\n:run.py-local - 
[0:0]\n:run.py-otap_port1 - [0:0]\n:run.py-otap_port2 - [0:0]\n:run.py-sg-chain 
- [0:0]\n:run.py-sg-fallback - [0:0]\n:run.py-stap_port1 - 
[0:0]\n:run.py-stap_port2 - [0:0]\n[0:0] -A FORWARD -j neutron-filter-top\n[0:0] -A OUTPUT -j neutron-filter-top\n[0:0] -A neutron-filter-top -j 
run.py-local\n[0:0] -A INPUT -j run.py-INPUT\n[0:0] -A OUTPUT -j 
run.py-OUTPUT\n[0:0] -A FORWARD -j run.py-FORWARD\n[0:0] -A run.py-sg-fallback 
-j DROP\n[0:0] -A run.py-FORWARD -m physdev --physdev-out taptap_port1 
--physdev-is-bridged -j run.py-sg-chain\n[0:0] -A run.py-sg-chain -m physdev 
--physdev-out taptap_port1 --physdev-is-bridged -j run.py-itap_port1\n[0:0] -A 
run.py-itap_port1 -m state --state INVALID -j DROP\n[0:0] -A run.py-itap_port1 
-m state --state RELATED,ESTABLISHED -j RETURN\n[0:0] -A run.py-itap_port1 -s 
10.0.0.2/32 -p udp -m udp --sport 67 --dport 68 -j RETURN\n[0:0] -A 
run.py-itap_port1 -p tcp -m tcp --dport 22 -j RETURN\n[0:0] -A 
run.py-itap_port1 -s 10.0.0.4/32 -j RETURN\n[0:0] -A run.py-itap_port1 -j 
run.py-sg-fallback\n[0:0] -A run.py-FORWARD -m physdev --physdev-in 
taptap_port1 --physdev-is-bridged -j run.py-sg-chain\n[0:0] -A run.py-sg-chain 

[Yahoo-eng-team] [Bug 1446017] Re: In Kilo code release, nova boot failed on keystoneclient returns 500 error

2015-07-10 Thread Ankit Charolia
** Changed in: nova
   Status: Expired => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1446017

Title:
  In Kilo code release, nova boot failed on keystoneclient returns 500
  error

Status in OpenStack Compute (nova):
  Incomplete

Bug description:
  nova --debug boot --flavor 1 --image dockerc7  --nic 
net-id=40945ae1-344c-4ebd-a25b-2776feb0f409 d01
  ...

  DEBUG (iso8601:184) Parsed 2015-04-19T22:56:02Z into {'tz_sign': None, 
'second_fraction': None, 'hour': u'22', 'daydash': u'19', 'tz_hour': None, 
'month': None, 'timezone': u'Z', 'second': u'02', 'tz_minute': None, 'year': 
u'2015', 'separator': u'T', 'monthdash': u'04', 'day': None, 'minute': u'56'} 
with default timezone 
  DEBUG (iso8601:140) Got u'2015' for 'year' with default None
  DEBUG (iso8601:140) Got u'04' for 'monthdash' with default 1
  DEBUG (iso8601:140) Got 4 for 'month' with default 4
  DEBUG (iso8601:140) Got u'19' for 'daydash' with default 1
  DEBUG (iso8601:140) Got 19 for 'day' with default 19
  DEBUG (iso8601:140) Got u'22' for 'hour' with default None
  DEBUG (iso8601:140) Got u'56' for 'minute' with default None
  DEBUG (iso8601:140) Got u'02' for 'second' with default None
  DEBUG (iso8601:184) Parsed 2015-04-19T22:56:02Z into {'tz_sign': None, 
'second_fraction': None, 'hour': u'22', 'daydash': u'19', 'tz_hour': None, 
'month': None, 'timezone': u'Z', 'second': u'02', 'tz_minute': None, 'year': 
u'2015', 'separator': u'T', 'monthdash': u'04', 'day': None, 'minute': u'56'} 
with default timezone 
  DEBUG (iso8601:140) Got u'2015' for 'year' with default None
  DEBUG (iso8601:140) Got u'04' for 'monthdash' with default 1
  DEBUG (iso8601:140) Got 4 for 'month' with default 4
  DEBUG (iso8601:140) Got u'19' for 'daydash' with default 1
  DEBUG (iso8601:140) Got 19 for 'day' with default 19
  DEBUG (iso8601:140) Got u'22' for 'hour' with default None
  DEBUG (iso8601:140) Got u'56' for 'minute' with default None
  DEBUG (iso8601:140) Got u'02' for 'second' with default None
  DEBUG (session:195) REQ: curl -g -i -X POST 
http://10.0.0.244:8774/v2/959d7f7e020b48509aea18dcec819491/servers -H 
"User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: 
application/json" -H "X-Auth-Token: 
{SHA1}d66f80c62fd0af6d801994e38c69d8e2a1833370" -d '{"server": {"name": "d01", 
"imageRef": "b4ca9864-9ceb-4b42-9c82-620f0e2fd60d", "flavorRef": "1", 
"max_count": 1, "min_count": 1, "networks": [{"uuid": 
"40945ae1-344c-4ebd-a25b-2776feb0f409"}]}}'
  DEBUG (retry:155) Converted retries value: 0 -> Retry(total=0, connect=None, 
read=None, redirect=0)
  DEBUG (connectionpool:383) "POST /v2/959d7f7e020b48509aea18dcec819491/servers 
HTTP/1.1" 500 128
  DEBUG (session:223) RESP:
  DEBUG (shell:914) The server has either erred or is incapable of performing 
the requested operation. (HTTP 500)
  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/novaclient/shell.py", line 911, in 
main
  OpenStackComputeShell().main(argv)
File "/usr/lib/python2.7/site-packages/novaclient/shell.py", line 838, in 
main
  args.func(self.cs, args)
File "/usr/lib/python2.7/site-packages/novaclient/v2/shell.py", line 500, 
in do_boot
  server = cs.servers.create(*boot_args, **boot_kwargs)
File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 929, 
in create
  **boot_kwargs)
File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 557, 
in _boot
  return_raw=return_raw, **kwargs)
File "/usr/lib/python2.7/site-packages/novaclient/base.py", line 152, in 
_create
  _resp, body = self.api.client.post(url, body=body)
File "/usr/lib/python2.7/site-packages/keystoneclient/adapter.py", line 
171, in post
  return self.request(url, 'POST', **kwargs)
File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 97, in 
request
  raise exceptions.from_response(resp, body, url, method)
  ClientException: The server has either erred or is incapable of performing 
the requested operation. (HTTP 500)
  ERROR (ClientException): The server has either erred or is incapable of 
performing the requested operation. (HTTP 500)

  
  Traced back found the http 500 was returned by  request() in sessions.py, 
even the token string was valid.

  > 
/usr/lib/python2.7/site-packages/requests/sessions.py(309)request()->
  -> return resp
  (Pdb) 
  > 
/usr/lib/python2.7/site-packages/keystoneclient/session.py(435)_send_request()
  -> if log:
  (Pdb) p resp
  
  (Pdb) p resp.status_code
  500

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1446017/+subscriptions



[Yahoo-eng-team] [Bug 1473369] Re: new mock release broke a bunch of unit tests

2015-07-10 Thread Sergey Vilgelm
This bug doesn't affect the juno.

** Changed in: murano/juno
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473369

Title:
  new mock release broke a bunch of unit tests

Status in murano:
  Fix Committed
Status in murano juno series:
  Invalid
Status in murano kilo series:
  Fix Committed
Status in neutron:
  In Progress
Status in python-muranoclient:
  In Progress
Status in python-muranoclient kilo series:
  New

Bug description:
  http://lists.openstack.org/pipermail/openstack-
  dev/2015-July/069156.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/murano/+bug/1473369/+subscriptions



[Yahoo-eng-team] [Bug 1473369] Re: new mock release broke a bunch of unit tests

2015-07-10 Thread Kirill Zaitsev
** No longer affects: murano/juno

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473369

Title:
  new mock release broke a bunch of unit tests

Status in murano:
  Fix Committed
Status in murano kilo series:
  Fix Committed
Status in neutron:
  In Progress
Status in python-muranoclient:
  In Progress
Status in python-muranoclient kilo series:
  New

Bug description:
  http://lists.openstack.org/pipermail/openstack-
  dev/2015-July/069156.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/murano/+bug/1473369/+subscriptions



[Yahoo-eng-team] [Bug 1473369] Re: new mock release broke a bunch of unit tests

2015-07-10 Thread Louis Taylor
** Also affects: glance
   Importance: Undecided
   Status: New

** Changed in: glance
   Importance: Undecided => Critical

** Changed in: glance
 Assignee: (unassigned) => Louis Taylor (kragniz)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473369

Title:
  new mock release broke a bunch of unit tests

Status in Glance:
  New
Status in murano:
  Fix Committed
Status in murano kilo series:
  Fix Committed
Status in neutron:
  In Progress
Status in python-muranoclient:
  In Progress
Status in python-muranoclient kilo series:
  New

Bug description:
  http://lists.openstack.org/pipermail/openstack-
  dev/2015-July/069156.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1473369/+subscriptions



[Yahoo-eng-team] [Bug 1473298] Re: Cannot create keystone trust with python-openstackclient using trustor/trustee id

2015-07-10 Thread Dolph Mathews
** Project changed: keystone => python-openstackclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1473298

Title:
  Cannot create keystone trust with python-openstackclient using
  trustor/trustee id

Status in python-openstackclient:
  New

Bug description:
  Creating Keystone V3 trusts (Kilo 2015.1.0) with python-
  openstackclient (OSC) 1.5.0 works fine when using trustor user and
  trustee user names but doesn't when using IDs.

  Keystone log (verbose) doesn't return any error/warning, so that might
  be an OSC issue.

  # openstack user show adminv3 --format shell
  domain_id="43c0586acd1b48b5ad544600414700fb"
  email="t...@example.tld"
  enabled="True"
  id="24b047f52ff94029923f7f0ea982f03f"
  name="adminv3"

  
  # openstack trust create --format shell --role admin --project openstackv3 
adminv3 foo
  deleted_at="None"
  expires_at="None"
  id="c42c31ac89a0465da6f23121a64570c1"
  impersonation="False"
  project_id="78e22bb71862481dbe8335b4ce4551e8"
  redelegation_count="0"
  remaining_uses="None"
  roles="admin "
  trustee_user_id="ac994e5701d644b6a3ac78c9dd1ad04a"
  trustor_user_id="24b047f52ff94029923f7f0ea982f03f"

  
  # openstack trust create --format shell --role admin --project openstackv3 
24b047f52ff94029923f7f0ea982f03f foo
  ERROR: openstack No user with a name or ID of 
'24b047f52ff94029923f7f0ea982f03f' exists.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-openstackclient/+bug/1473298/+subscriptions



[Yahoo-eng-team] [Bug 1473369] Re: new mock release broke a bunch of unit tests

2015-07-10 Thread Victor Stinner
** Also affects: swift
   Importance: Undecided
   Status: New

** Changed in: swift
 Assignee: (unassigned) => Victor Stinner (victor-stinner)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473369

Title:
  new mock release broke a bunch of unit tests

Status in Glance:
  In Progress
Status in murano:
  Fix Committed
Status in murano kilo series:
  Fix Committed
Status in neutron:
  Fix Committed
Status in python-muranoclient:
  In Progress
Status in python-muranoclient kilo series:
  New
Status in OpenStack Object Storage (swift):
  In Progress

Bug description:
  http://lists.openstack.org/pipermail/openstack-
  dev/2015-July/069156.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1473369/+subscriptions



[Yahoo-eng-team] [Bug 1473475] [NEW] VPNaaS: Add DevStack plugin

2015-07-10 Thread Paul Michali
Public bug reported:

This is an enhancement to the neutron-vpnaas repo to add DevStack plugin
support.

** Affects: neutron
 Importance: Undecided
 Assignee: Paul Michali (pcm)
 Status: In Progress


** Tags: vpnaas

** Changed in: neutron
 Assignee: (unassigned) => Paul Michali (pcm)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473475

Title:
  VPNaaS: Add DevStack plugin

Status in neutron:
  In Progress

Bug description:
  This is an enhancement to the neutron-vpnaas repo to add DevStack
  plugin support.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1473475/+subscriptions



[Yahoo-eng-team] [Bug 1473401] Re: gate-nova-python27 and gate-nova-docs failing for mock 1.1.0

2015-07-10 Thread Matt Riedemann
Related pbr fix: https://review.openstack.org/#/c/200558/

** Also affects: pbr
   Importance: Undecided
   Status: New

** Changed in: pbr
   Status: New => In Progress

** Changed in: pbr
 Assignee: (unassigned) => Monty Taylor (mordred)

** Changed in: pbr
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1473401

Title:
  gate-nova-python27 and gate-nova-docs failing for mock 1.1.0

Status in OpenStack Compute (nova):
  In Progress
Status in PBR:
  In Progress

Bug description:
  All failure-

  http://logs.openstack.org/94/197394/2/check/gate-nova-
  python27/579ebb9/testr_results.html.gz

  http://logs.openstack.org/94/197394/2/check/gate-nova-
  docs/40f8a27/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1473401/+subscriptions



[Yahoo-eng-team] [Bug 1473369] Re: new mock release broke a bunch of unit tests

2015-07-10 Thread Mike Fedosin
** Also affects: python-glance-store (Ubuntu)
   Importance: Undecided
   Status: New

** No longer affects: python-glance-store (Ubuntu)

** Also affects: glance-store
   Importance: Undecided
   Status: New

** Changed in: glance-store
 Assignee: (unassigned) => Mike Fedosin (mfedosin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473369

Title:
  new mock release broke a bunch of unit tests

Status in Glance:
  In Progress
Status in glance_store:
  New
Status in murano:
  Fix Committed
Status in murano kilo series:
  Fix Committed
Status in neutron:
  Fix Committed
Status in python-muranoclient:
  In Progress
Status in python-muranoclient kilo series:
  New
Status in OpenStack Object Storage (swift):
  In Progress

Bug description:
  http://lists.openstack.org/pipermail/openstack-
  dev/2015-July/069156.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1473369/+subscriptions



[Yahoo-eng-team] [Bug 1473489] [NEW] Identity API v3 does not accept more than one query parameter

2015-07-10 Thread Anh Huynh
Public bug reported:

When GET /v3/users?name="blah"&enabled="true" is called, the API only
applies the "name" query and omits the "enabled" query. This is also
reproduced across many different queries, including /v3/credentials.

This looks like a repeat of
https://bugs.launchpad.net/keystone/+bug/1424745

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1473489

Title:
  Identity API v3 does not accept more than one query parameter

Status in Keystone:
  New

Bug description:
  When GET /v3/users?name="blah"&enabled="true" is called, the API only
  applies the "name" query and omits the "enabled" query. This is also
  reproduced across many different queries, including /v3/credentials.

  This looks like a repeat of
  https://bugs.launchpad.net/keystone/+bug/1424745
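Both filters do arrive at the server in the query string, so the omission happens server-side; a minimal illustration (the host and values are made up):

```python
# Both filters are present in the query string as sent, so the server should
# apply both; the URL and values here are hypothetical.
from urllib.parse import urlsplit, parse_qs

url = "http://localhost:5000/v3/users?name=blah&enabled=true"
params = parse_qs(urlsplit(url).query)
print(params)  # {'name': ['blah'], 'enabled': ['true']}
```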

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1473489/+subscriptions



[Yahoo-eng-team] [Bug 1473511] [NEW] Raising ForbiddenAction when ValidationError happens

2015-07-10 Thread Henrique Truta
Public bug reported:

Keystone raises, at some points, a ForbiddenAction when the failure has
nothing to do with credentials. This error must be changed to a
ValidationError.

** Affects: keystone
 Importance: Undecided
 Assignee: Henrique Truta (henriquetruta)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => Henrique Truta (henriquetruta)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1473511

Title:
  Raising ForbiddenAction when ValidationError happens

Status in Keystone:
  In Progress

Bug description:
  Keystone raises, at some points, a ForbiddenAction when the failure has
  nothing to do with credentials. This error must be changed to a
  ValidationError.
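The distinction the report draws can be sketched as follows; the class and function names are hypothetical, not Keystone code:

```python
# Hypothetical sketch (not Keystone code): malformed input maps to a
# 400-style ValidationError, while ForbiddenAction stays reserved for
# genuine authorization failures.
class ValidationError(Exception):
    http_status = 400

class ForbiddenAction(Exception):
    http_status = 403

def validate_name(name):
    # Bad input is a validation problem regardless of the caller's
    # credentials, so ForbiddenAction would be the wrong signal here.
    if not isinstance(name, str) or not name:
        raise ValidationError("name must be a non-empty string")
    return name

try:
    validate_name("")
except ValidationError as exc:
    print(exc.http_status)  # 400
```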

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1473511/+subscriptions



[Yahoo-eng-team] [Bug 1469260] Re: Custom vendor data causes cloud-init failure on 0.7.5

2015-07-10 Thread Scott Moser
marking trunk fix-released and ubuntu fixed-released per bug openers
suggestion that it was fixed in 0.7.7 . I have not reproduced though.


** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu)
   Status: New => Fix Released

** Changed in: cloud-init
   Status: New => Fix Released

** Changed in: cloud-init (Ubuntu)
   Importance: Undecided => Medium

** Changed in: cloud-init
   Importance: Undecided => Medium

** Also affects: cloud-init (Ubuntu Trusty)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Utopic)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu Trusty)
   Importance: Undecided => Medium

** Changed in: cloud-init (Ubuntu Utopic)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1469260

Title:
  Custom vendor data causes cloud-init failure on 0.7.5

Status in cloud-init:
  Fix Released
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Trusty:
  New
Status in cloud-init source package in Utopic:
  New

Bug description:
  I encountered this issue when adding custom vendor data via nova-
  compute. Originally the bug manifested as SSH host key generation
  failing to fire when vendor data was present (example vendor data
  below).

  {"msg": "", "uuid": "4996e2b67d2941818646481453de1efe", "users":
  [{"username": "erhudy", "sshPublicKeys": [], "uuid": "erhudy"}],
  "name": "TestTenant"}

  I launched a volume-backed instance, waited for it to fail, then
  terminated it and mounted its root volume to examine the logs. What I
  found was that cloud-init was failing to process vendor-data into MIME
  multipart (note the absence of the line that indicates that cloud-init
  is writing vendor-data.txt.i):

  2015-06-25 21:41:02,178 - util.py[DEBUG]: Writing to 
/var/lib/cloud/instance/obj.pkl - wb: [256] 9751 bytes
  2015-06-25 21:41:02,178 - util.py[DEBUG]: Writing to 
/var/lib/cloud/instances/65c9fb0c-0700-4f87-a22f-c59534e98dfb/user-data.txt - 
wb: [384] 0 bytes
  2015-06-25 21:41:02,184 - util.py[DEBUG]: Writing to 
/var/lib/cloud/instances/65c9fb0c-0700-4f87-a22f-c59534e98dfb/user-data.txt.i - 
wb: [384] 345 bytes
  2015-06-25 21:41:02,185 - util.py[DEBUG]: Writing to 
/var/lib/cloud/instances/65c9fb0c-0700-4f87-a22f-c59534e98dfb/vendor-data.txt - 
wb: [384] 234 bytes
  2015-06-25 21:41:02,185 - util.py[DEBUG]: Reading from /proc/uptime 
(quiet=False)

  After following the call chain all the way down, I found the
  problematic code in user_data.py:

  # Converts a raw string into a MIME message
  def convert_string(raw_data, headers=None):
  if not raw_data:
  raw_data = ''
  if not headers:
  headers = {}
  data = util.decomp_gzip(raw_data)
  if "mime-version:" in data[0:4096].lower():
  msg = email.message_from_string(data)
  for (key, val) in headers.iteritems():
  _replace_header(msg, key, val)
  else:
  mtype = headers.get(CONTENT_TYPE, NOT_MULTIPART_TYPE)
  maintype, subtype = mtype.split("/", 1)
  msg = MIMEBase(maintype, subtype, *headers)
  msg.set_payload(data)
  return msg

  In the failing case, raw_data is a dictionary rather than the expected
  string, so slicing into data raises a TypeError: unhashable type
  exception.
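The failure mode can be reproduced in isolation; this is a minimal sketch, not cloud-init code:

```python
# Minimal reproduction of the failure described above (not cloud-init code):
# slicing a dict, as convert_string effectively does with data[0:4096],
# raises TypeError, while a decoded string slices cleanly.
def looks_like_mime(data):
    return "mime-version:" in data[0:4096].lower()

vendor_dict = {"msg": "", "users": []}  # dict payload, as on 0.7.5

try:
    looks_like_mime(vendor_dict)
    outcome = "no error"
except TypeError:
    # dict[slice] lookup fails: slice objects are unhashable keys
    outcome = "TypeError"

print(outcome)                          # TypeError
print(looks_like_mime('{"msg": ""}'))   # False (string payload works)
```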

  I think this bug was fixed after a fashion in 0.7.7, where the call to
  util.decomp_gzip() is now wrapped by util.decode_binary(), which
  appears to always return a string.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1469260/+subscriptions



[Yahoo-eng-team] [Bug 1473541] [NEW] Limit Summary template refactor prevents translators from controlling the word order in translations

2015-07-10 Thread Akihiro Motoki
Public bug reported:

As a result of the change of https://review.openstack.org/#/c/170147,
translators cannot control word order any more.
It stops using blocktrans and uses string concatenation instead.

The limit template is used in the top page, and Horizon provides ugly
strings in Overview of the top page.

** Affects: horizon
 Importance: High
 Status: New


** Tags: i18n

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1473541

Title:
  Limit Summary template refactor prevents translators from controlling
  the word order in translations

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  As a result of the change of https://review.openstack.org/#/c/170147,
  translators cannot control word order any more.
  It stops using blocktrans and uses string concatenation instead.

  The limit template is used in the top page, and Horizon provides ugly
  strings in Overview of the top page.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1473541/+subscriptions



[Yahoo-eng-team] [Bug 1473549] [NEW] launch instance should add config for require ssh key

2015-07-10 Thread Eric Peterson
Public bug reported:

The launch instance flow has a config control for
OPENSTACK_HYPERVISOR_FEATURES['can_set_password'] which allows a
deployment to decide if the password should be prompted for during
instance creation.

If a password is not possible to set, then it is very likely that an ssh
key MUST be required.

There should be a config setting that looks something like:
OPENSTACK_HYPERVISOR_FEATURES['require_ssh_key']
which would make the launch instance key pair field required.

This is needed for both current launch instance screen, and ideally the
next generation (angular) version too.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit ops

** Tags added: low-hanging-fruit

** Tags added: ops

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1473549

Title:
  launch instance should add config for require ssh key

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The launch instance flow has a config control for
  OPENSTACK_HYPERVISOR_FEATURES['can_set_password'] which allows a
  deployment to decide if the password should be prompted for during
  instance creation.

  If a password is not possible to set, then it is very likely that an
  ssh key MUST be required.

  There should be a config setting that looks something like:
  OPENSTACK_HYPERVISOR_FEATURES['require_ssh_key']
  which would make the launch instance key pair field required.

  This is needed for both current launch instance screen, and ideally
  the next generation (angular) version too.
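A sketch of what the proposed flag could look like in a local_settings.py fragment; the 'require_ssh_key' key is the reporter's suggestion, not an existing Horizon setting:

```python
# Hypothetical local_settings.py fragment sketching the proposal; the
# 'require_ssh_key' key does not exist in Horizon yet.
OPENSTACK_HYPERVISOR_FEATURES = {
    'can_set_password': False,  # existing flag: hide the password field
    'require_ssh_key': True,    # proposed: make the key pair field required
}

print(OPENSTACK_HYPERVISOR_FEATURES['require_ssh_key'])  # True
```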

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1473549/+subscriptions



[Yahoo-eng-team] [Bug 1459828] Re: keystone-all crashes when ca_certs is not defined in conf

2015-07-10 Thread Brant Knudson
icehouse is now eol, so I don't see any need to spend more time on this.

** Changed in: keystone/icehouse
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1459828

Title:
  keystone-all crashes when ca_certs is not defined in conf

Status in Keystone:
  Incomplete
Status in Keystone icehouse series:
  Won't Fix

Bug description:
  When [ssl] ca_certs parameter is commented on keystone.conf, ssl
  module try to load the default ca_cert file
  (/etc/keystone/ssl/certs/ca.pem) and raises an IOError exception
  because it didn't find the file.

  This happens running on Python 2.7.9.

  I have a keystone cluster running on Python 2.7.7, with the very same
  keystone.conf file, and that crash doesn't happen.

  If any further information is required, don't hesitate in contacting
  me.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1459828/+subscriptions



[Yahoo-eng-team] [Bug 1473553] [NEW] AuthContextMiddleware re-implements AdminToken

2015-07-10 Thread Brant Knudson
Public bug reported:


AuthContextMiddleware essentially re-implements the default 
AdminTokenAuthMiddleware:

class AdminTokenAuthMiddleware(wsgi.Middleware):
...
context['is_admin'] = (token == CONF.admin_token)

class AuthContextMiddleware(wsgi.Middleware):
...
if token_id == CONF.admin_token:

The problem is, what if someone decides they want to implement their own
`AdminTokenAuthMiddleware` that implements "admin token" differently.
For example, using a special client certificate instead.

This should be possible, but it's not because AuthContextMiddleware
decided to re-implement AdminTokenAuthMiddleware rather than using its
output (the setting of is_admin in the context).

** Affects: keystone
 Importance: Undecided
 Assignee: Brant Knudson (blk-u)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => Brant Knudson (blk-u)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1473553

Title:
  AuthContextMiddleware re-implements AdminToken

Status in Keystone:
  In Progress

Bug description:
  
  AuthContextMiddleware essentially re-implements the default 
AdminTokenAuthMiddleware:

  class AdminTokenAuthMiddleware(wsgi.Middleware):
      ...
      context['is_admin'] = (token == CONF.admin_token)

  class AuthContextMiddleware(wsgi.Middleware):
      ...
      if token_id == CONF.admin_token:

  The problem is: what if someone decides they want to implement their
  own `AdminTokenAuthMiddleware` that implements "admin token"
  differently, for example using a special client certificate instead?

  This should be possible, but it isn't, because AuthContextMiddleware
  re-implements AdminTokenAuthMiddleware rather than using its output
  (the setting of is_admin in the context).

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1473553/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1473556] [NEW] Error log is generated when API operation is PolicyNotAuthorized and returns 404

2015-07-10 Thread Akihiro Motoki
Public bug reported:

The neutron.policy module can raise webob.exc.HTTPNotFound when
PolicyNotAuthorized is raised. In this case, neutron.api.resource
logs the failure at ERROR level. It should be INFO level, since it is
triggered by a user's API request.

One of the easiest ways to reproduce this bug is as follows:

(1) create a shared network by admin user
(2) try to delete the shared network by regular user

(A regular user can learn the ID of the shared network, so the user can
request to delete it.)

As a result we get the following log, which is confusing from a
log-monitoring point of view.

2015-07-11 05:28:33.914 DEBUG neutron.policy 
[req-5aef6df6-1fb7-4187-9980-4e41fc648ad7 demo 
1e942c3c210b42ff8c45f42962da33b4] Enforcing rules: ['delete_network', 
'delete_network:provider:physical_network
', 'delete_network:shared', 'delete_network:provider:network_type', 
'delete_network:provider:segmentation_id'] from (pid=1439) log_rule_list 
/opt/stack/neutron/neutron/policy.py:319
2015-07-11 05:28:33.914 DEBUG neutron.policy 
[req-5aef6df6-1fb7-4187-9980-4e41fc648ad7 demo 
1e942c3c210b42ff8c45f42962da33b4] Failed policy check for 'delete_network' from 
(pid=1439) enforce /opt/stack/n
eutron/neutron/policy.py:393
2015-07-11 05:28:33.914 ERROR neutron.api.v2.resource 
[req-5aef6df6-1fb7-4187-9980-4e41fc648ad7 demo 
1e942c3c210b42ff8c45f42962da33b4] delete failed
2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 146, in wrapper
2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource ectxt.value = 
e.inner_exc
2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 119, in 
__exit__
2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 136, in wrapper
2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource return f(*args, 
**kwargs)
2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 495, in delete
2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource raise 
webob.exc.HTTPNotFound(msg)
2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource HTTPNotFound: The 
resource could not be found.
2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource
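
A hedged sketch of the proposed change (the helper name and return
value are illustrative, not neutron's actual code; local exception
classes stand in for webob.exc): treat 4xx faults as user errors and
log them at INFO, keeping ERROR with a traceback for server-side
failures.

```python
import logging

LOG = logging.getLogger("neutron.api.v2.resource")

class HTTPClientError(Exception):
    """Stand-in for webob.exc.HTTPClientError (the 4xx family)."""

class HTTPNotFound(HTTPClientError):
    """Stand-in for webob.exc.HTTPNotFound."""

def log_api_fault(action, exc):
    """Pick a log level for an API fault; returns the chosen level."""
    if isinstance(exc, HTTPClientError):
        # 4xx faults are driven by the user's request (e.g. a policy
        # denial mapped to 404), so INFO without a traceback suffices.
        LOG.info("%s failed (client error): %s", action, exc)
        return "info"
    # Anything else is a genuine server-side failure: keep ERROR + trace.
    LOG.error("%s failed", action, exc_info=exc)
    return "error"
```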

** Affects: neutron
 Importance: Low
 Assignee: Akihiro Motoki (amotoki)
 Status: In Progress


** Tags: api

** Changed in: neutron
   Importance: Medium => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473556

Title:
  Error log is generated when API operation is PolicyNotAuthorized and
  returns 404

Status in neutron:
  In Progress

Bug description:
  neutron.policy module can raises webob.exc.HTTPNotFound when
  PolicyNotAuthorized is raised. In this case, neutron.api.resource
  outputs a log with error level. It should be INFO level as it occurs
  by user API requests.

  One of the easiest way is to reproduce this bug is as follows:

  (1) create a shared network by admin user
  (2) try to delete the shared network by regular user

  (A regular user can know a ID of the shared network, so the user can
  request to delete the shared network.)

  As a result we get the following log.
  It is confusing from the point of log monitoring.

  2015-07-11 05:28:33.914 DEBUG neutron.policy 
[req-5aef6df6-1fb7-4187-9980-4e41fc648ad7 demo 
1e942c3c210b42ff8c45f42962da33b4] Enforcing rules: ['delete_network', 
'delete_network:provider:physical_network
  ', 'delete_network:shared', 'delete_network:provider:network_type', 
'delete_network:provider:segmentation_id'] from (pid=1439) log_rule_list 
/opt/stack/neutron/neutron/policy.py:319
  2015-07-11 05:28:33.914 DEBUG neutron.policy 
[req-5aef6df6-1fb7-4187-9980-4e41fc648ad7 demo 
1e942c3c210b42ff8c45f42962da33b4] Failed policy check for 'delete_network' from 
(pid=1439) enforce /opt/stack/n
  eutron/neutron/policy.py:393
  2015-07-11 05:28:33.914 ERROR neutron.api.v2.resource 
[req-5aef6df6-1fb7-4187-9980-4e41fc648ad7 demo 
1e942c3c210b42ff8c45f42962da33b4] delete failed
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
  2015-07-11 05:28:33.914 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-07-11 05:

[Yahoo-eng-team] [Bug 1456335] Re: neutron-vpn-netns-wrapper missing in Ubuntu Package

2015-07-10 Thread Felipe Reyes
** Also affects: neutron-vpnaas (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1456335

Title:
  neutron-vpn-netns-wrapper missing in Ubuntu Package

Status in neutron:
  New
Status in neutron-vpnaas package in Ubuntu:
  New

Bug description:
  The executable neutron-vpn-netns-wrapper (path /usr/bin/neutron-vpn-
  netns-wrapper) is missing from the Ubuntu 14.04 packages for OpenStack Kilo.

  I tried to enable VPNaaS with StrongSwan and it failed with this error 
message:
  2015-05-18 19:20:41.510 3254 TRACE 
neutron_vpnaas.services.vpn.device_drivers.ipsec Stderr: 
/usr/bin/neutron-rootwrap: Unauthorized command: ip netns exec 
qrouter-0b4c88fa-4944-45a7-b1b3-fbee1d7fc2ac neutron-vpn-netns-wrapper 
--mount_paths=/etc:/var/lib/neutron/ipsec/0b4c88fa-4944-45a7-b1b3-fbee1d7fc2ac/etc,/var/run:/var/lib/neutron/ipsec/0b4c88fa-4944-45a7-b1b3-fbee1d7fc2ac/var/run
 --cmd=ipsec,start (no filter matched)

  After copying the content of neutron-vpn-netns-wrapper from the Fedora
  repository, VPNaaS with StrongSwan worked.

  The content of the vpn-netns-wrapper:

  #!/usr/bin/python2
  # PBR Generated from u'console_scripts'

  import sys

  from neutron_vpnaas.services.vpn.common.netns_wrapper import main

  
  if __name__ == "__main__":
  sys.exit(main())
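
  For reference, the rootwrap failure above ("no filter matched")
  typically means the vpnaas filter file lacks an entry for the inner
  command. An illustrative filter file, assuming the standard
  oslo.rootwrap syntax (the key names here are made up; the packaged
  vpnaas.filters may differ):

  ```ini
  [Filters]
  # allow "ip netns exec <ns> <cmd>"; the inner <cmd> is then matched
  # against the remaining filters
  ip_exec: IpNetnsExecFilter, ip, root
  # allow the inner command that was being rejected
  vpn_wrapper: CommandFilter, neutron-vpn-netns-wrapper, root
  ```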

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1456335/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1473567] [NEW] Fernet tokens fail tempest runs

2015-07-10 Thread Eric Brown
Public bug reported:

It seems testing an OpenStack instance that was deployed with Fernet tokens 
fails on some of the tempest tests.  In my case these tests failed:
http://paste.openstack.org/show/363017/

bknudson also found similar failures in a test patch:
   https://review.openstack.org/#/c/195780

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: fernet

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1473567

Title:
  Fernet tokens fail tempest runs

Status in Keystone:
  New

Bug description:
  It seems testing an OpenStack instance that was deployed with Fernet tokens 
fails on some of the tempest tests.  In my case these tests failed:
  http://paste.openstack.org/show/363017/

  bknudson also found similar failures in a test patch:
 https://review.openstack.org/#/c/195780

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1473567/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1438996] Re: [Launch Instance Fix] Update Local & Default Settings

2015-07-10 Thread Travis Tripp
This was done in Kilo and released in Kilo

** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1438996

Title:
  [Launch Instance Fix] Update Local & Default Settings

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  This will include several patches:

  The LAUNCH_INSTANCE_LEGACY_ENABLED and LAUNCH_INSTANCE_NG_ENABLED
  settings should be documented as-is.

  Next the default setting should be updated to enable the new
  experience by default.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1438996/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1473588] [NEW] Provide an option to disable auto-hashing of keystone token

2015-07-10 Thread Lin Hua Cheng
Public bug reported:


Token hashing is performed in order to support sessions with the cookie
backend. However, the hashed token doesn't always work.

We should provide an option for users to turn off token hashing.
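
A sketch of how the opt-out could look (the setting name and the
64-character threshold are assumptions for illustration, not the
eventual implementation):

```python
import hashlib

# Hypothetical setting; in a real deployment this would come from
# Django settings rather than a module constant.
OPENSTACK_TOKEN_HASH_ENABLED = True


def maybe_hash_token(token_id, enabled=OPENSTACK_TOKEN_HASH_ENABLED):
    """Return a hashed token for cookie-backed sessions, unless hashing
    is disabled or the token is already short enough to store as-is."""
    if not enabled or len(token_id) <= 64:
        return token_id
    return hashlib.md5(token_id.encode('utf-8')).hexdigest()
```

Deployments whose backends reject hashed tokens could then disable the
setting and keep the raw token ID in the session.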

** Affects: horizon
 Importance: Undecided
 Assignee: Lin Hua Cheng (lin-hua-cheng)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Lin Hua Cheng (lin-hua-cheng)

** Changed in: horizon
Milestone: None => liberty-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1473588

Title:
  Provide an option to disable auto-hashing of keystone token

Status in OpenStack Dashboard (Horizon):
  New

Bug description:

  Token hashing is performed in order to support sessions with the
  cookie backend. However, the hashed token doesn't always work.

  We should provide an option for users to turn off token hashing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1473588/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1473598] [NEW] misleading error when creating a group without specifying a domain

2015-07-10 Thread Miguel Grinberg
Public bug reported:

If you try to create a group, but do not include a domain name or id in
the request, then keystone responds with a 401, making it look like you
have an authentication problem.

The correct answer in this case would be a 400 (bad request).
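
A simplified sketch of the suggested behavior (this is not keystone's
actual validation code; the exception class and field names are
illustrative):

```python
class ValidationError(Exception):
    """Maps to 400 Bad Request in the API layer."""
    status = 400


def validate_group_create(body):
    group = body.get('group') or {}
    # Reject the request up front with 400 instead of letting it fall
    # through to an authorization failure that surfaces as 401.
    if not group.get('domain_id') and not group.get('domain_name'):
        raise ValidationError(
            "domain_id or domain_name is required to create a group")
    return group
```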

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1473598

Title:
  misleading error when creating a group without specifying a domain

Status in Keystone:
  New

Bug description:
  If you try to create a group, but do not include a domain name or id
  in the request, then keystone responds with a 401, making it look like
  you have an authentication problem.

  The correct answer in this case would be a 400 (bad request).

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1473598/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470635] Re: endpoints added with v3 are not visible with v2

2015-07-10 Thread Ian Cordasco
** Changed in: openstack-ansible
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1470635

Title:
  endpoints added with v3 are not visible with v2

Status in Keystone:
  Confirmed
Status in openstack-ansible:
  Opinion
Status in puppet-keystone:
  Confirmed

Bug description:
  Create an endpoint with v3::

  # openstack --os-identity-api-version 3 [--admin credentials]
  endpoint create 

  try to list endpoints with v2::

  # openstack --os-identity-api-version 2 [--admin credentials]
  endpoint list

  nothing.

  We are in the process of trying to convert puppet-keystone to v3 with
  the goal of maintaining backwards compatibility.  That means, we want
  admins/operators not to have to change any existing workflow.  This
  bug causes openstack endpoint list to return nothing which breaks
  existing workflows and backwards compatibility.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1470635/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1423075] Re: LBaaS|cannot assign 2 pools of service to same VIP

2015-07-10 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1423075

Title:
  LBaaS|cannot assign 2 pools of service to same VIP

Status in neutron:
  Expired

Bug description:
  Version :
  [root@puma15 ~(keystone_admin)]# rpm -qa | grep rhel
  libreport-rhel-2.1.11-10.el7.x86_64
  [root@puma15 ~]# rpm -qa |grep neutron
  openstack-neutron-openvswitch-2014.2.1-6.el7ost.noarch
  python-neutronclient-2.3.9-1.el7ost.noarch
  openstack-neutron-2014.2.1-6.el7ost.noarch
  python-neutron-2014.2.1-6.el7ost.noarch
  openstack-neutron-ml2-2014.2.1-6.el7ost.noarch

  
  When I created 2 pools with 2 different protocols (1. HTTP and
  2. TCP/SSH), I tried to associate both with the same VIP via the
  neutron client (CLI), but it is not possible. There is no option to
  create a load balancer that balances between 2 servers for 2
  different services.

  To reproduce:
  1. Configure 2 different pools, one for HTTP and a second for SSH,
  and assign them to a single VIP.

  As it seems, I need to create a VIP per service.
  This behavior is not how I expected an LB to work; take for example
  the Alteon/Radware LB or the F5 LB.

  Attached: setup description.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1423075/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp