[Yahoo-eng-team] [Bug 1567047] Re: Panels in local/enabled always appear last in nav

2016-04-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/302417
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=ea92e735829ae4271fcbae932f69ffdbda268546
Submitter: Jenkins
Branch:master

commit ea92e735829ae4271fcbae932f69ffdbda268546
Author: Tyr Johanson 
Date:   Wed Apr 6 13:36:05 2016 -0600

Allow local/enabled panels to order relative to enabled panels

A dashboard enabled file in local/enabled is not able to appear before,
or in between, any core panels.

The list of panels appears to be intended to be sorted by file name,
but all files in /enabled are always presented in the nav ahead of
any files from local/enabled, no matter the file name.

This appears to be a bug in util/settings.py that does an rsplit to
separate file name from path, but accidentally uses the full list of
split items, instead of just the file name.

For example, a file with __name__ of
'openstack_dashboard.enabled._1040_project_volumes_panel' splits into
['openstack_dashboard.enabled', '_1040_project_volumes_panel']. When
this list is fed to cmp(), it will always come before a panel in
local/enabled such as
['openstack_dashboard.local.enabled', '_0001_my_new_panel']

Change-Id: Ic169ccf0db1e04ec42fe999df6648117ce9efe84
Closes-Bug: 1567047
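
A minimal standalone sketch of the described ordering bug (plain Python, not Horizon's actual settings code; the module names are taken from the example above):

    core = 'openstack_dashboard.enabled._1040_project_volumes_panel'
    local = 'openstack_dashboard.local.enabled._0001_my_new_panel'

    # Buggy: sorting by the full rsplit() result lets the path component
    # dominate, so 'openstack_dashboard.enabled' always sorts before
    # 'openstack_dashboard.local.enabled' regardless of file name.
    sorted([core, local], key=lambda name: name.rsplit('.', 1))
    # -> [core, local]

    # Fixed: sort on the file-name component alone.
    sorted([core, local], key=lambda name: name.rsplit('.', 1)[1])
    # -> [local, core]: '_0001_my_new_panel' now precedes '_1040_...'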


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1567047

Title:
  Panels in local/enabled always appear last in nav

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The list of panels appears to be intended to be sorted by file name,
  but all files in /enabled are always presented in the nav ahead of any
  files from local/enabled, no matter the file name.

  This appears to be a bug in util/settings.py that does an rsplit to
  separate file name from path, but accidentally uses the full list of
  split items, instead of just the file name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1567047/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572831] [NEW] VM's go into error state when booted with SRIOV nic

2016-04-20 Thread Preethi Dsilva
Public bug reported:

VMs go into an error state when booted with an SR-IOV NIC.

Steps to reproduce:

1. Enable SR-IOV in the BIOS. In my case I have a Mellanox card with a
dual-port NIC, which shows up in the OS as eth4 and eth5.
2. Provide a PCI whitelist in nova.conf:
pci_passthrough_whitelist = {"address":"*:04:00.*","physical_network":"physnet1"}
3. The mlx4_core options file is set as: options mlx4_core port_type_array=2,2
num_vfs=3,3,0 probe_vf=3,3,0 enable_64b_cqe_eqe=0 log_num_mgm_entry_size=-1
4. It is observed that three VMs landed on eth4 VFs and three VMs landed on
eth5 VFs. The sequence: the first VM landed on eth4 VF2, then the second on
eth4 VF1; both came up with IPs assigned. The third VM landed on eth5 VF5,
but the VF remained in the "auto" state (if we manually set the state to
"enable", the VM gets an IP, but Nova fails to do so, hence the VM goes into
an error state).
5. The fourth VM landed on eth5 again; however, Nova was able to set the
state to "enable", hence the VM got an IP. The fifth VM landed on eth4 VF0
and it got an IP.

This pattern is not deterministic. Every time a VM goes into the error
state, the logs show the error below:
VirtualInterfaceCreateException: Virtual Interface creation failed

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1572831

Title:
  VM's go into error state when booted with SRIOV nic

Status in OpenStack Compute (nova):
  New

Bug description:
  VMs go into an error state when booted with an SR-IOV NIC.

  Steps to reproduce:

  1. Enable SR-IOV in the BIOS. In my case I have a Mellanox card with a
  dual-port NIC, which shows up in the OS as eth4 and eth5.
  2. Provide a PCI whitelist in nova.conf:
  pci_passthrough_whitelist = {"address":"*:04:00.*","physical_network":"physnet1"}
  3. The mlx4_core options file is set as: options mlx4_core port_type_array=2,2
  num_vfs=3,3,0 probe_vf=3,3,0 enable_64b_cqe_eqe=0 log_num_mgm_entry_size=-1
  4. It is observed that three VMs landed on eth4 VFs and three VMs landed on
  eth5 VFs. The sequence: the first VM landed on eth4 VF2, then the second on
  eth4 VF1; both came up with IPs assigned. The third VM landed on eth5 VF5,
  but the VF remained in the "auto" state (if we manually set the state to
  "enable", the VM gets an IP, but Nova fails to do so, hence the VM goes
  into an error state).
  5. The fourth VM landed on eth5 again; however, Nova was able to set the
  state to "enable", hence the VM got an IP. The fifth VM landed on eth4 VF0
  and it got an IP.

  This pattern is not deterministic. Every time a VM goes into the error
  state, the logs show the error below:
  VirtualInterfaceCreateException: Virtual Interface creation failed

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1572831/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572826] [NEW] dev name in the pci whitelist is not honored for SRIOV

2016-04-20 Thread Preethi Dsilva
Public bug reported:

The devname in the PCI whitelist is not honored for SR-IOV.

Steps to reproduce:

1. Enable SR-IOV in the BIOS. In my case I have a Mellanox card with a
dual-port NIC, which shows up in the OS as eth4 and eth5.
2. Provide a PCI whitelist in nova.conf:
pci_passthrough_whitelist = {"devname":"eth4","physical_network":"physnet1"}
3. The mlx4_core options file is set as: options mlx4_core port_type_array=2,2
num_vfs=6 probe_vf=6 enable_64b_cqe_eqe=0 log_num_mgm_entry_size=-1

However, the behavior seen is that irrespective of the devname specified,
the tenant VM gets booted onto eth4 or eth5.

Tested the issue with Mitaka code. I am attaching the nova logs and
local.conf for your reference.
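
For reference, a side-by-side of the two whitelist forms involved; both
lines are quoted from this report and from bug 1572831, and the comments
are interpretation rather than verified Nova behavior:

    # devname form (this report): intended to restrict allocation to
    # eth4's virtual functions, but reported as not honored.
    pci_passthrough_whitelist = {"devname":"eth4","physical_network":"physnet1"}
    # PCI-address form (bug 1572831): wildcard on the card's address,
    # which deliberately matches the VFs behind both ports.
    pci_passthrough_whitelist = {"address":"*:04:00.*","physical_network":"physnet1"}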

** Affects: nova
 Importance: Undecided
 Status: New

** Project changed: neutron => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572826

Title:
  dev name in the pci whitelist is not honored for SRIOV

Status in OpenStack Compute (nova):
  New

Bug description:
  The devname in the PCI whitelist is not honored for SR-IOV.

  Steps to reproduce:

  1. Enable SR-IOV in the BIOS. In my case I have a Mellanox card with a
  dual-port NIC, which shows up in the OS as eth4 and eth5.
  2. Provide a PCI whitelist in nova.conf:
  pci_passthrough_whitelist = {"devname":"eth4","physical_network":"physnet1"}
  3. The mlx4_core options file is set as: options mlx4_core port_type_array=2,2
  num_vfs=6 probe_vf=6 enable_64b_cqe_eqe=0 log_num_mgm_entry_size=-1

  However, the behavior seen is that irrespective of the devname specified,
  the tenant VM gets booted onto eth4 or eth5.

  Tested the issue with Mitaka code. I am attaching the nova logs and
  local.conf for your reference.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1572826/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572509] Re: Nova boot fails when freed SRIOV port is used for booting

2016-04-20 Thread Preethi Dsilva
** Project changed: nova => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572509

Title:
  Nova boot fails when freed SRIOV port is used for booting

Status in neutron:
  New

Bug description:
  Nova boot fails when freed SRIOV port is used for booting

  Steps to reproduce:
  ==
  1. Create an SR-IOV port.
  2. Boot a VM --> the boot is successful and the VM gets an IP.
  3. Now delete the VM using nova delete -- successful (the MAC is released from the VF).
  4. Using the port created in step 1, boot a new VM.

  The VM fails to boot with the following error:
  2016-04-20 10:55:05.005 TRACE nova.compute.manager [instance: a47344fb-3254-4409-9c23-55e3cde693d9] hostname=instance.hostname)
  2016-04-20 10:55:05.005 TRACE nova.compute.manager [instance: a47344fb-3254-4409-9c23-55e3cde693d9] PortNotUsableDNS: Port 7219a612-014e-452e-b79a-19c87cc33db4 not usable for instance a47344fb-3254-4409-9c23-55e3cde693d9. Value vm4 assigned to dns_name attribute does not match instance's hostname vmtest4
  2016-04-20 10:55:05.005 TRACE nova.compute.manager [instance: a47344fb-3254-4409-9c23-55e3cde693d9]

  Expected:
  =
  As the port is unbound in step 3, we should be able to bind it in step 4.

  The setup consists of a controller and a compute node with a Mellanox
  card enabled for SR-IOV. An Ubuntu 14.04 qcow2 image is used for the
  tenant VM boot.

  Tested the above with Mitaka code.
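
  A hedged sketch of the check behind this failure (exception and field
  names from the traceback above; the function itself is illustrative,
  not Nova's actual code):

    class PortNotUsableDNS(Exception):
        """Stand-in for the nova exception seen in the trace."""

    def validate_port_dns_name(port, instance_hostname):
        # The reused SR-IOV port keeps the dns_name of the deleted VM
        # ('vm4'), so binding it to a new VM whose hostname is 'vmtest4'
        # trips this check instead of rebinding cleanly.
        dns_name = port.get('dns_name')
        if dns_name and dns_name != instance_hostname:
            raise PortNotUsableDNS(
                "Port %s not usable for instance: dns_name %r does not "
                "match hostname %r" % (port['id'], dns_name,
                                       instance_hostname))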

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1572509/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443898] Re: TestVolumeBootPattern.test_volume_boot_pattern fail with libvirt-xen

2016-04-20 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1443898

Title:
  TestVolumeBootPattern.test_volume_boot_pattern fail with libvirt-xen

Status in OpenStack Compute (nova):
  Expired
Status in tempest:
  Invalid

Bug description:
  The following test fails when run with a libvirt-xen nova compute node:
  tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern

  This is due to the function _boot_instance_from_volume, which creates a
  volume with the device name "vda", but this name is not supported by Xen.

  Also Nova itself does not reject the device name.

  This is with devstack master.
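
  A hedged illustration of the request shape involved (fields follow
  Nova's block-device-mapping v2 format; the volume ID is a placeholder
  and the hard-coded 'vda' is the value the test uses):

    volume_id = '11111111-2222-3333-4444-555555555555'  # placeholder

    # Boot-from-volume mapping that pins the device name to 'vda'.
    # Xen guests name their disks 'xvda', 'xvdb', ..., so a libvirt-xen
    # compute node cannot honor this mapping.
    bdm = [{
        'source_type': 'volume',
        'uuid': volume_id,
        'destination_type': 'volume',
        'boot_index': 0,
        'device_name': 'vda',  # not valid for Xen guests
        'delete_on_termination': False,
    }]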

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1443898/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497401] Re: Add Solaris to vm_modes and hvtype

2016-04-20 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1497401

Title:
  Add Solaris to vm_modes and hvtype

Status in OpenStack Compute (nova):
  Expired

Bug description:
  In order for Solaris to participate in Nova, a 'solariszones'
  hypervisor and vm_mode need to be added.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1497401/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572592] Re: gate-neutron-lbaasv1-dsvm-api gate broken on Neutron LBaaS: fail to find a quota

2016-04-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/308384
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lbaas/commit/?id=c5cbd394e1aca10d915272c3e787129949e9f6a2
Submitter: Jenkins
Branch:master

commit c5cbd394e1aca10d915272c3e787129949e9f6a2
Author: Victor Stinner 
Date:   Wed Apr 20 16:46:27 2016 +0200

Fix test_quotas

The change I1cd91b5e06bd17f9aac97bba71228f2e5c48879b modified the
delete_tenant_quota() function: it now raises a NotFound error if
the delete failed.

Fix the test: catch and ignore the NotFound exception. The test
removes the quota by resetting the tenant's quotas.

Change-Id: I87df52474014c4bceff8124a627f829f6b625ce0
Closes-Bug: #1572592
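
A hedged sketch of the fix's shape (the NotFound class is the one shown
in the traceback below; the client attribute and tenant_id variable are
assumed for illustration):

    from tempest.lib import exceptions as lib_exc

    # Resetting quotas now raises NotFound when the tenant quota row is
    # already gone -- which is exactly the state the test wants.
    try:
        self.admin_client.reset_quotas(tenant_id)
    except lib_exc.NotFound:
        pass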


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572592

Title:
  gate-neutron-lbaasv1-dsvm-api gate broken on Neutron LBaaS: fail to
  find a quota

Status in neutron:
  Fix Released

Bug description:
  The gate-neutron-lbaasv1-dsvm-api gate job of Neutron LBaaS looks to
  always fail on test_quotas; an example of a failure follows.

  For example, the gate failed on the approved "Updated from global
  requirements" change: https://review.openstack.org/#/c/307761/

  Console logs: http://logs.openstack.org/61/307761/1/check/gate-neutron-lbaasv1-dsvm-api/69872dd/console.html

  
  neutron_lbaas.tests.tempest.v1.api.admin.test_quotas.QuotasTest.test_lbaas_quotas[gate]
  ----------------------------------------------------------------------------------------

  Traceback (most recent call last):
    File "neutron_lbaas/tests/tempest/lib/services/network/json/network_client.py", line 330, in reset_quotas
      resp, body = self.delete(uri)
    File "/opt/stack/new/neutron-lbaas/.tox/apiv1/local/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", line 290, in delete
      return self.request('DELETE', url, extra_headers, headers, body)
    File "/opt/stack/new/neutron-lbaas/.tox/apiv1/local/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", line 642, in request
      resp, resp_body)
    File "/opt/stack/new/neutron-lbaas/.tox/apiv1/local/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", line 695, in _error_checker
      raise exceptions.NotFound(resp_body, resp=resp)
  tempest.lib.exceptions.NotFound: Object not found
  Details: {u'message': u'Quota for tenant ce7b2ca707c6479f88b67e312d3764f2 could not be found.', u'detail': u'', u'type': u'TenantQuotaNotFound'}

  
  neutron_lbaas.tests.tempest.v1.api.admin.test_quotas.QuotasTest.test_quotas[gate]
  ----------------------------------------------------------------------------------

  Traceback (most recent call last):
    File "neutron_lbaas/tests/tempest/lib/services/network/json/network_client.py", line 330, in reset_quotas
      resp, body = self.delete(uri)
    File "/opt/stack/new/neutron-lbaas/.tox/apiv1/local/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", line 290, in delete
      return self.request('DELETE', url, extra_headers, headers, body)
    File "/opt/stack/new/neutron-lbaas/.tox/apiv1/local/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", line 642, in request
      resp, resp_body)
    File "/opt/stack/new/neutron-lbaas/.tox/apiv1/local/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", line 695, in _error_checker
      raise exceptions.NotFound(resp_body, resp=resp)
  tempest.lib.exceptions.NotFound: Object not found
  Details: {u'message': u'Quota for tenant 89128f4ca5aa415db790b0ed30d8b24e could not be found.', u'detail': u'', u'type': u'TenantQuotaNotFound'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1572592/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572303] Re: additionalProperties validation doesn't work on assisted_volume_snapshots API

2016-04-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/308018
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=4ad8a86b3019be75a3b5e8ac72e95ff312c8d3cb
Submitter: Jenkins
Branch:master

commit 4ad8a86b3019be75a3b5e8ac72e95ff312c8d3cb
Author: Ken'ichi Ohmichi 
Date:   Tue Apr 19 12:44:54 2016 -0700

Fix the schema of assisted_volume_snapshots

The position of both 'required' and 'additionalProperties' was wrong
and the additionalProperties validation didn't work at all.
This patch fixes it by changing the position.

Closes-Bug: #1572303
Change-Id: I8eaf7f7c3340321893edcb651983e9133a33e5a1


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1572303

Title:
  additionalProperties validation doesn't work on
  assisted_volume_snapshots API

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The other APIs block unexpected attributes in request bodies with the
  additionalProperties feature of JSON-Schema.
  However, it does not work on the assisted_volume_snapshots API now.
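
  A hedged, generic illustration of this class of schema bug (the
  property names are invented for the example, not the API's real
  schema):

    import jsonschema

    # Keywords at the wrong level: only the top level is guarded, so an
    # unexpected attribute nested inside 'snapshot' slips through.
    wrong_level = {
        'type': 'object',
        'properties': {
            'snapshot': {
                'type': 'object',
                'properties': {'volume_id': {'type': 'string'}},
            },
        },
        'required': ['snapshot'],
        'additionalProperties': False,
    }
    jsonschema.validate({'snapshot': {'volume_id': 'v1', 'bogus': 1}},
                        wrong_level)  # passes, but should fail

    # Fixed: guard the subschema that actually carries the request fields.
    fixed = dict(wrong_level)
    fixed['properties'] = {
        'snapshot': {
            'type': 'object',
            'properties': {'volume_id': {'type': 'string'}},
            'required': ['volume_id'],
            'additionalProperties': False,
        },
    }
    jsonschema.validate({'snapshot': {'volume_id': 'v1', 'bogus': 1}},
                        fixed)  # now raises ValidationError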

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1572303/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572442] Re: tempest.scenario.ha_router_rescheduling should only apply to alive agents.

2016-04-20 Thread SHI Peiqi
** Project changed: neutron => tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572442

Title:
  tempest.scenario.ha_router_rescheduling should only apply  to alive
  agents.

Status in tempest:
  In Progress

Bug description:
  In the HA router scheduling test case under
  tempest/scenario/test_network_basic_ops.py, step 4, "assign router to
  new l3-agent (or old one if no new agent is available)", should make
  sure the "new" l3-agents are alive.

  Because, if there are some dead l3-agents in the HA router
  environment, the flag "no_migration" will be set to False, and
  "assign router to dead (new) l3-agents" will fail the case, which is
  not the purpose of the test.

  The idea is to make sure the agent_list attribute lists only agents
  that are alive.
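
  A hedged sketch of the proposed filtering ('alive' and
  'admin_state_up' are real fields on Neutron's agent resource; the
  sample payloads are invented):

    # Example agent records as returned by the Neutron API (abridged).
    agents = [
        {'id': 'a1', 'alive': True, 'admin_state_up': True},
        {'id': 'a2', 'alive': False, 'admin_state_up': True},  # dead
    ]

    # Keep only live, administratively-up l3-agents as rescheduling
    # targets, so the test never assigns the router to a dead agent.
    agent_list = [agent for agent in agents
                  if agent['alive'] and agent['admin_state_up']]
    # -> only 'a1' remains a candidate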

To manage notifications about this bug go to:
https://bugs.launchpad.net/tempest/+bug/1572442/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572795] [NEW] There are some verbose files in Create Network

2016-04-20 Thread Kenji Ishii
Public bug reported:

With https://review.openstack.org/#/c/298508/,
Create Network also became able to use the common HTML templates.
Therefore, the files previously used only by that page can be removed.

** Affects: horizon
 Importance: Undecided
 Assignee: Kenji Ishii (ken-ishii)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1572795

Title:
  There are some verbose files in Create Network

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  With https://review.openstack.org/#/c/298508/,
  Create Network also became able to use the common HTML templates.
  Therefore, the files previously used only by that page can be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1572795/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1527274] Re: Neutron-metering-agent failed to add rule on router without gateway

2016-04-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/307122
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=f73dae0a65a6b63cffae5c5e3c2f98d4f0e35ccf
Submitter: Jenkins
Branch:master

commit f73dae0a65a6b63cffae5c5e3c2f98d4f0e35ccf
Author: Sergey Belous 
Date:   Mon Apr 18 14:39:45 2016 +0300

Add check that external gw port exists when metering-agent adds a rule

If the router has no gateway port when the metering-agent wants to add
a metering-label-rule, the method _process_metering_label_rules() fails
with the error "cannot concatenate 'str' and 'NoneType' objects"
because there is no check that the router has an external gateway port.
This patch adds this check and adds some unit tests.

Closes-bug: #1527274
Change-Id: Ic9f626db41bfb6343187742e209402dd7d5232d1
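
A hedged sketch of the added guard (method and key names taken from the
traceback in the bug description below; the surrounding driver code is
simplified):

    def _process_metering_rule_action(self, rm, action):
        # Bail out before building the external device name when the
        # router has no gateway port, instead of concatenating the
        # device prefix with None.
        if not rm.router.get('gw_port_id'):
            return
        ext_dev = self.get_external_device_name(rm.router['gw_port_id'])
        # the real driver goes on to build iptables rules against ext_dev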


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1527274

Title:
  Neutron-metering-agent failed to add rule on router without gateway

Status in neutron:
  Fix Released

Bug description:
  If we try to create a meter-label-rule and there is a router without
  an external gateway, then the metering agent will raise an error:

  2015-12-17 08:56:44.659 ERROR oslo_messaging.rpc.dispatcher [req-732f65fb-9a4e-4883-b545-3cb080c8cdae admin f8267bb3db654ca2a26a07d9757ec280] Exception during message handling: cannot concatenate 'str' and 'NoneType' objects
  2015-12-17 08:56:44.659 TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):
  2015-12-17 08:56:44.659 TRACE oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
  2015-12-17 08:56:44.659 TRACE oslo_messaging.rpc.dispatcher     executor_callback))
  2015-12-17 08:56:44.659 TRACE oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
  2015-12-17 08:56:44.659 TRACE oslo_messaging.rpc.dispatcher     executor_callback)
  2015-12-17 08:56:44.659 TRACE oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 129, in _do_dispatch
  2015-12-17 08:56:44.659 TRACE oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
  2015-12-17 08:56:44.659 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/neutron/neutron/services/metering/agents/metering_agent.py", line 222, in add_metering_label_rule
  2015-12-17 08:56:44.659 TRACE oslo_messaging.rpc.dispatcher     'add_metering_label_rule')
  2015-12-17 08:56:44.659 TRACE oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 271, in inner
  2015-12-17 08:56:44.659 TRACE oslo_messaging.rpc.dispatcher     return f(*args, **kwargs)
  2015-12-17 08:56:44.659 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/neutron/neutron/services/metering/agents/metering_agent.py", line 176, in _invoke_driver
  2015-12-17 08:56:44.659 TRACE oslo_messaging.rpc.dispatcher     return getattr(self.metering_driver, func_name)(context, meterings)
  2015-12-17 08:56:44.659 TRACE oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_log/helpers.py", line 46, in wrapper
  2015-12-17 08:56:44.659 TRACE oslo_messaging.rpc.dispatcher     return method(*args, **kwargs)
  2015-12-17 08:56:44.659 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/neutron/neutron/services/metering/drivers/iptables/iptables_driver.py", line 259, in add_metering_label_rule
  2015-12-17 08:56:44.659 TRACE oslo_messaging.rpc.dispatcher     self._add_metering_label_rule(router)
  2015-12-17 08:56:44.659 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/neutron/neutron/services/metering/drivers/iptables/iptables_driver.py", line 272, in _add_metering_label_rule
  2015-12-17 08:56:44.659 TRACE oslo_messaging.rpc.dispatcher     self._process_metering_rule_action(router, 'create')
  2015-12-17 08:56:44.659 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/neutron/neutron/services/metering/drivers/iptables/iptables_driver.py", line 281, in _process_metering_rule_action
  2015-12-17 08:56:44.659 TRACE oslo_messaging.rpc.dispatcher     ext_dev = self.get_external_device_name(rm.router['gw_port_id'])
  2015-12-17 08:56:44.659 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/neutron/neutron/services/metering/drivers/iptables/iptables_driver.py", line 132, in get_external_device_name
  2015-12-17 08:56:44.659 TRACE oslo_messaging.rpc.dispatcher     return (EXTERNAL_DEV_PREFIX + port_id)[:self.driver.DEV_NAME_LEN]
  2015-12-17 08:56:44.659 TRACE oslo_messaging.rpc.dispatcher TypeError: cannot concatenate 'str' and 'NoneType' objects
  2015-12-17 08:56:44.659 TRACE oslo_messaging.rpc.dispatcher

  Steps 

[Yahoo-eng-team] [Bug 1572439] Re: test_update_subnetpool_associate_address_scope_wrong_ip_version should check address-scope extension

2016-04-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/308184
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=9a32f4ef763e382b6eb69ca06b9d95bad4b6e879
Submitter: Jenkins
Branch:master

commit 9a32f4ef763e382b6eb69ca06b9d95bad4b6e879
Author: YAMAMOTO Takashi 
Date:   Wed Apr 20 17:10:11 2016 +0900

Add a missing address-scope extension check

Closes-Bug: #1572439
Change-Id: I538af531278d4f0fc6feb494d13387d8a2810cf3


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572439

Title:
  test_update_subnetpool_associate_address_scope_wrong_ip_version should
  check address-scope extension

Status in neutron:
  Fix Released

Bug description:
  Currently, test_update_subnetpool_associate_address_scope_wrong_ip_version
  assumes the address-scope extension. It should be skipped when the
  extension is not available.
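
  A hedged sketch of the missing guard (the skip idiom is standard in
  these tests; 'is_extension_enabled' stands in for the framework's
  real helper):

    def test_update_subnetpool_associate_address_scope_wrong_ip_version(self):
        # Skip early when the cloud under test does not expose the
        # address-scope extension.
        if not is_extension_enabled('address-scope', 'network'):
            raise self.skipException("address-scope extension not enabled")
        # ... the existing test body runs only on capable clouds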

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1572439/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572783] Re: Openswan/Libreswan: Check config changes before restart

2016-04-20 Thread Doug Wiegley
There is no user-visible change here beyond it not dropping your
connection, and no config or API changes. This is not DocImpact.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572783

Title:
  Openswan/Libreswan: Check config changes before restart

Status in neutron:
  Invalid

Bug description:
  https://review.openstack.org/306899
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.

  commit 814e3f0c7d7bd8b44be61d8badf127b1c60debbc
  Author: nick.zhuyj 
  Date:   Sun Apr 17 21:59:46 2016 -0500

  Openswan/Libreswan: Check config changes before restart
  
  Currently, when neutron-vpn-agent restarts, all the pluto processes in
  router namespaces are restarted too. But actually this is not required
  and will impact the VPN traffic. In this patch, we keep a backup of
  ipsec.conf and ipsec.secrets, and then compare the configurations on
  restart; if there are no config changes, the restart can be skipped.
  
  Note: this change is DocImpact
  
  Change-Id: I5a7fae909cb56721bd7e4d42999356c7f7464358
  Closes-Bug: #1571455

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1572783/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1571455] Re: VPNaaS: pluto should not be restarted when neutron-vpn-agent restart

2016-04-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/306899
Committed: 
https://git.openstack.org/cgit/openstack/neutron-vpnaas/commit/?id=814e3f0c7d7bd8b44be61d8badf127b1c60debbc
Submitter: Jenkins
Branch:master

commit 814e3f0c7d7bd8b44be61d8badf127b1c60debbc
Author: nick.zhuyj 
Date:   Sun Apr 17 21:59:46 2016 -0500

Openswan/Libreswan: Check config changes before restart

Currently, when neutron-vpn-agent restarts, all the pluto processes in
router namespaces are restarted too. But actually this is not required
and will impact the VPN traffic. In this patch, we keep a backup of
ipsec.conf and ipsec.secrets, and then compare the configurations on
restart; if there are no config changes, the restart can be skipped.

Note: this change is DocImpact

Change-Id: I5a7fae909cb56721bd7e4d42999356c7f7464358
Closes-Bug: #1571455


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1571455

Title:
  VPNaaS: pluto should not be restarted when neutron-vpn-agent restart

Status in neutron:
  Fix Released

Bug description:
  Currently, the openswan/libreswan pluto process in each router
  namespace is restarted when neutron-vpn-agent restarts, because there
  is no reload command like the one strongswan supports.

  This is not good, because it impacts the VPN traffic when the vpn-
  agent restarts.

  Solution:
  Each time after pluto starts, keep backup configuration files for
  ipsec.conf & ipsec.secrets, named ipsec.conf.old & ipsec.secrets.old.
  Then, when a restart is required, check whether the configurations
  have changed; if not, the restart can be skipped.
  This way, we can simulate a reload method and avoid restarting pluto
  when the vpn-agent restarts.
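
  A hedged standalone sketch of the comparison logic described (file
  names from the solution above; everything else is illustrative):

    import filecmp
    import os
    import shutil

    CONFIGS = ('ipsec.conf', 'ipsec.secrets')

    def config_changed(conf_dir):
        # A restart is needed only if a config differs from its .old
        # backup (or no backup exists yet, e.g. on first start).
        for name in CONFIGS:
            current = os.path.join(conf_dir, name)
            backup = current + '.old'
            if not os.path.exists(backup) or not filecmp.cmp(
                    current, backup, shallow=False):
                return True
        return False

    def save_backups(conf_dir):
        # Call after a (re)start so the next comparison has a baseline.
        for name in CONFIGS:
            current = os.path.join(conf_dir, name)
            shutil.copy2(current, current + '.old')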

  
  The following is captured from the current devstack setup; we can see
  the pluto process id changed after the vpn-agent restart:

  stack@VPN-dev-nick:~$ps ax | grep ctlbase
  21683 ?Ss 0:00 /usr/lib/ipsec/pluto --ctlbase 
/opt/stack/data/neutron/ipsec/a83ba62a-5f97-42a3-b489-80c1465a083a/var/run/pluto
 --ipsecdir 
/opt/stack/data/neutron/ipsec/a83ba62a-5f97-42a3-b489-80c1465a083a/etc 
--use-netkey --uniqueids --nat_traversal --secretsfile 
/opt/stack/data/neutron/ipsec/a83ba62a-5f97-42a3-b489-80c1465a083a/etc/ipsec.secrets
 --virtual_private %v4:192.168.1.0/24,%v4:192.168.2.0/24

  
  RESTART NEUTRON-VPN-AGENT, CHECK AGAIN:

  stack@VPN-dev-nick:~$ps ax | grep ctlbase
  22206 ?Ss 0:00 /usr/lib/ipsec/pluto --ctlbase 
/opt/stack/data/neutron/ipsec/a83ba62a-5f97-42a3-b489-80c1465a083a/var/run/pluto
 --ipsecdir 
/opt/stack/data/neutron/ipsec/a83ba62a-5f97-42a3-b489-80c1465a083a/etc 
--use-netkey --uniqueids --nat_traversal --secretsfile 
/opt/stack/data/neutron/ipsec/a83ba62a-5f97-42a3-b489-80c1465a083a/etc/ipsec.secrets
 --virtual_private %v4:192.168.1.0/24,%v4:192.168.2.0/24

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1571455/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572783] [NEW] Openswan/Libreswan: Check config changes before restart

2016-04-20 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/306899
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.

commit 814e3f0c7d7bd8b44be61d8badf127b1c60debbc
Author: nick.zhuyj 
Date:   Sun Apr 17 21:59:46 2016 -0500

Openswan/Libreswan: Check config changes before restart

Currently, when neutron-vpn-agent restarts, all the pluto processes in
router namespaces are restarted too. But actually this is not required
and will impact the VPN traffic. In this patch, we keep a backup of
ipsec.conf and ipsec.secrets, and then compare the configurations on
restart; if there are no config changes, the restart can be skipped.

Note: this change is DocImpact

Change-Id: I5a7fae909cb56721bd7e4d42999356c7f7464358
Closes-Bug: #1571455

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc neutron-vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572783

Title:
  Openswan/Libreswan: Check config changes before restart

Status in neutron:
  New

Bug description:
  https://review.openstack.org/306899
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.

  commit 814e3f0c7d7bd8b44be61d8badf127b1c60debbc
  Author: nick.zhuyj 
  Date:   Sun Apr 17 21:59:46 2016 -0500

  Openswan/Libreswan: Check config changes before restart
  
  Currently, when neutron-vpn-agent restarts, all the pluto processes in
  router namespaces are restarted too. But actually this is not required
  and will impact the VPN traffic. In this patch, we keep a backup of
  ipsec.conf and ipsec.secrets, and then compare the configurations on
  restart; if there are no config changes, the restart can be skipped.
  
  Note: this change is DocImpact
  
  Change-Id: I5a7fae909cb56721bd7e4d42999356c7f7464358
  Closes-Bug: #1571455

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1572783/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554414] Re: Avoid calling _get_subnet(s) multiple times in ipam driver

2016-04-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/281116
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=e174e619580173d43b398aa71587938a4313ca5b
Submitter: Jenkins
Branch:master

commit e174e619580173d43b398aa71587938a4313ca5b
Author: venkata anil 
Date:   Wed Apr 13 13:50:05 2016 +

Avoid calling _get_subnet(s) multiple times in ipam driver

While allocating or updating IPs for a port, _get_subnet and
_get_subnets were called multiple times, resulting in multiple DB
calls. This patch changes it to call _get_subnets only once,
initially.

Closes-bug: #1554414
Change-Id: I7124974bac629fdb8946df6a7f84bd6b40f5af49
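
A hedged standalone sketch of the pattern the patch adopts (function and
key names follow the report; the real code lives inside the IPAM
backend):

    import netaddr

    def resolve_fixed_ip_subnets(subnets, fixed_ips):
        # 'subnets' is the result of a single _get_subnets() call for the
        # port's network; each fixed IP is resolved against it in memory
        # instead of issuing one DB query per entry.
        by_id = {subnet['id']: subnet for subnet in subnets}
        resolved = []
        for fixed_ip in fixed_ips:
            if 'subnet_id' in fixed_ip:
                resolved.append(by_id[fixed_ip['subnet_id']])
            else:
                ip = netaddr.IPAddress(fixed_ip['ip_address'])
                resolved.append(next(
                    s for s in subnets
                    if ip in netaddr.IPNetwork(s['cidr'])))
        return resolved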


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1554414

Title:
  Avoid calling _get_subnet(s) multiple times in ipam driver

Status in neutron:
  Fix Released

Bug description:
  While allocating or updating IPs for a port, _get_subnet and
  _get_subnets are called multiple times.
  (Note: the fix for this bug is needed for the PD change
  https://review.openstack.org/#/c/241227 )

  For example, if update_port is called with the fixed_ips below:
  fixed_ips = [{'subnet_id': 'subnet1'},
               {'subnet_id': 'v6_dhcp_stateless_subnet'},
               {'subnet_id': 'v6_slaac_subnet'},
               {'subnet_id': 'v6_pd_subnet'},
               {'subnet_id': 'subnet4', 'ip_address': '30.0.0.3'},
               {'ip_address': '40.0.0.4'},
               {'ip_address': '50.0.0.5'}]
  then through _test_fixed_ips_for_port(fixed_ips), "_get_subnet" [1] is
  called once each for subnet1, v6_dhcp_stateless_subnet,
  v6_slaac_subnet, v6_pd_subnet, and subnet4, and "_get_subnets" [2] is
  called twice, for ip_address 40.0.0.4 and 50.0.0.5.

  When _test_fixed_ips_for_port is called from _allocate_ips_for_port,
  _get_subnets has already been called at [3] (so the call count
  increases further). So in the case of _allocate_ips_for_port, if we
  save the subnets from [3] in a local variable and use those in-memory
  subnets in further calls, we can avoid the above DB calls.

  Sometimes when a subnet is updated, update_subnet may trigger
  update_port(fixed_ips) [4] for all ports on the subnet. And in each
  port's fixed_ips, if we have multiple subnets and ip_addresses, then
  _get_subnet and _get_subnets will be called multiple times for each
  port, as in the example above. For example, in the above case, if we
  have 10 ports on the subnet, then update_subnet will result in
  (10*6=60) 60 DB accesses instead of 10.

  When port_update is called for a PD subnet, it again calls get_subnet
  for each fixed_ip [5], to check whether the subnet is a PD subnet or
  not (after get_subnet and get_subnets have already been called many
  times in _test_fixed_ips_for_port).

  In all the above cases, for _get_subnet and _get_subnets, we are
  accessing the DB many times.
  So instead of calling get_subnet or get_subnets for each fixed_ip of a
  port (in multiple places), we can call get_subnets for the network at
  the beginning of _allocate_ips_for_port (for create port) and
  _update_ips_for_port (during update) and use the in-memory subnets in
  the subsequent private functions.

  [1] 
https://github.com/openstack/neutron/blob/master/neutron/db/ipam_backend_mixin.py#L311
  [2] 
https://github.com/openstack/neutron/blob/master/neutron/db/ipam_backend_mixin.py#L331
  [3] 
https://github.com/openstack/neutron/blob/master/neutron/db/ipam_pluggable_backend.py#L192
  [4] 
https://github.com/openstack/neutron/blob/master/neutron/db/db_base_plugin_v2.py#L785
  [5] 
https://review.openstack.org/#/c/241227/11/neutron/db/ipam_non_pluggable_backend.py
 Lines 284 and 334.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1554414/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1481161] Re: ML2 plugin for MLX: Unable to configure device with ssh connector when enable password is configured

2016-04-20 Thread Angela Smith
Changing the Neutron bug to Invalid. This is a case where we are not
being allowed to fix this in the Liberty branch and have been told to
remove our driver. Therefore, the fix is only available in the Mitaka
codebase in the networking-brocade repository.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1481161

Title:
  ML2 plugin for MLX: Unable to configure device with ssh connector when
  enable password is configured

Status in networking-brocade:
  Fix Released
Status in neutron:
  Invalid

Bug description:
  When the following command is configured on an FI/NI device, it puts
  the user directly into privilege mode after login, without the need to
  execute the 'enable' command.

  aaa authentication login privilege-mode

  When the above command is configured, the ML2 plugin fails to
  configure the device. This also happens when an enable password is
  configured using the following command.

  enable super-user-password

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-brocade/+bug/1481161/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558101] Re: Neutron performance degradation when multiple subnets attached to same router

2016-04-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/293976
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=9355885d7dab41e48a8ecaaf59b1bd3835f17f27
Submitter: Jenkins
Branch:master

commit 9355885d7dab41e48a8ecaaf59b1bd3835f17f27
Author: Kevin Benton 
Date:   Wed Mar 16 10:43:25 2016 -0700

Fetch router port subnets in bulk

When a router port is being attached to a network it
must first check that the new subnet(s) do not overlap
with any of the other subnets the router is already
attached to.

The previous code was calling the core plugin to look
up each attached subnet individually. This led to slow
responses if a router was attached to hundreds of other
subnets.

This patch just adjusts the logic to request all of the
subnets from the core plugin in one call so the number
of DB calls stays constant regardless of attached subnet
count.

Change-Id: I36a2b23b089269bb51ce3dc271c944b48fca8781
Closes-Bug: #1558101
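
A hedged sketch of the bulk pattern the commit describes (get_subnets
and the 'id' filter are standard Neutron plugin API; the overlap test is
illustrative, and the real code raises a Neutron API error rather than
ValueError):

    import netaddr

    def check_no_overlap(core_plugin, context, attached_subnet_ids,
                         new_cidr_str):
        # One bulk query for every subnet already attached to the router,
        # instead of one core-plugin lookup per attached subnet.
        subnets = core_plugin.get_subnets(
            context, filters={'id': attached_subnet_ids})
        new_cidr = netaddr.IPNetwork(new_cidr_str)
        for subnet in subnets:
            existing = netaddr.IPNetwork(subnet['cidr'])
            if new_cidr in existing or existing in new_cidr:
                raise ValueError("%s overlaps with %s"
                                 % (new_cidr, existing))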


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558101

Title:
  Neutron performance degradation when multiple subnets attached to same
  router

Status in neutron:
  Fix Released

Bug description:
  Hello

  In our stress test, Neutron has performance degradation when
  multiple subnets are added to the same router.

  I am attaching script to reproduce this bug.

  This script does the following:
  1. Creates ROUTER-1.
  2. Creates 10 networks titled NETWORK-X.
     2.1 Creates 100 subnets for each NETWORK-X and attaches the subnets
     to ROUTER-1 using neutron.add_interface_router().

  Here is the result for the first 500 subnets:

  Time spent to create 100 subnets: 281 seconds
  Time spent to create an additional 100 subnets: 594 seconds
  Time spent to create an additional 100 subnets: 912 seconds
  Time spent to create an additional 100 subnets: 1191 seconds
  Time spent to create an additional 100 subnets: 1464 seconds

  -
  The total time to create 500 subnets is 74 minutes, whereas it took
  only 10 minutes to create the first 100 subnets and link them to the
  router.

  
  To be able to run the test I had to disable quota and restart q-svc.
  From my neutron config file: /etc/neutron/neutron.conf

  [quotas]
  default_quota = -1
  quota_network = -1
  quota_subnet = -1
  track_quota_usage = false
  quota_router = -1
  quota_floatingip = -1
  quota_security_group = -1
  quota_security_group_rule = -1

  I am using regular all in one devstack installation using the l2/l3
  agent.

  Linux is Ubuntu 14.04.3 LTS

  This is my local.config file:
  -

  [[local|localrc]]
  NETWORK_GATEWAY=10.0.0.1
  PUBLIC_NETWORK_GATEWAY=10.100.100.8
  Q_FLOATING_ALLOCATION_POOL=start=10.100.201.200,end=10.100.201.230
  RABBIT_PASSWORD=pwd123
  SERVICE_PASSWORD=pwd123
  SERVICE_TOKEN=pwd123
  ADMIN_PASSWORD=pwd123
  DATABASE_PASSWORD=pwd123

  disable_service n-net
  disable_service tempest
  enable_service heat
  enable_service h-eng h-api h-api-cfn h-api-cw
  enable_service q-agt
  enable_service q-dhcp
  enable_service q-l3
  enable_service q-meta
  enable_service q-svc

  FIXED_NETWORK_SIZE=256
  FIXED_RANGE=10.0.0.0/24
  FLAT_INTERFACE=eno1
  FLOATING_RANGE=10.100.0.0/16
  HOST_IP=10.100.100.8

  -

  Thanks and best regards,
  Yuli

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1558101/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572730] [NEW] The _sync_power_states task should filter out instances.task_state != None up front

2016-04-20 Thread Matt Riedemann
Public bug reported:

The _sync_power_states periodic task queries all instances on the
compute host:

https://github.com/openstack/nova/blob/4ad414f3b1216393301ef268a64e61ca1a3d5be9/nova/compute/manager.py#L6164

Then later it skips any that are in the middle of an operation:

https://github.com/openstack/nova/blob/4ad414f3b1216393301ef268a64e61ca1a3d5be9/nova/compute/manager.py#L6269

We should filter the instance list by task_state in the initial DB
query, avoiding the DB round trip and RPC traffic needed to load
instances on the compute host that are in the middle of a task and
would just be skipped in code anyway.
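
A hedged sketch of the suggested query change (InstanceList.get_by_filters
is Nova's existing object API; whether the filter layer accepts a
literal None for task_state is an assumption here):

    from nova import objects

    # Ask the DB only for instances with no task in flight, instead of
    # fetching everything on the host and skipping busy ones in Python.
    db_instances = objects.InstanceList.get_by_filters(
        context,
        {'host': self.host, 'task_state': None},
        use_slave=True)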

** Affects: nova
 Importance: Medium
 Status: Triaged


** Tags: compute low-hanging-fruit performance

** Tags added: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1572730

Title:
  The _sync_power_states task should filter out instances.task_state !=
  None up front

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  The _sync_power_states periodic task queries all instances on the
  compute host:

  
https://github.com/openstack/nova/blob/4ad414f3b1216393301ef268a64e61ca1a3d5be9/nova/compute/manager.py#L6164

  Then later it skips any that are in the middle of an operation:

  
https://github.com/openstack/nova/blob/4ad414f3b1216393301ef268a64e61ca1a3d5be9/nova/compute/manager.py#L6269

  We should filter the instance list by task_state in the initial DB
  query, avoiding the DB round trip and RPC traffic needed to load
  instances on the compute host that are in the middle of a task and
  would just be skipped in code anyway.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1572730/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1571825] Re: Rescue with bad image makes instance stuck in weird state

2016-04-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/307317
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=988668e04abfa1e2a3101a5a87c247445a311e5b
Submitter: Jenkins
Branch:master

commit 988668e04abfa1e2a3101a5a87c247445a311e5b
Author: Ivo Vasev 
Date:   Mon Apr 18 10:10:49 2016 -0500

Added validation for rescue image ref

Make sure we do not enter the error/shutoff state when using an invalid image.
Added tests with a full href URL.
Fixed tox pep8 ordering.

Closes-Bug: #1571825
Change-Id: I181cb54f14200c27362de5cd5777df05fe87db6f


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1571825

Title:
  Rescue with bad image makes instance stuck in weird state

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  Confirmed

Bug description:
  
  When a badly formatted image id is used, nova processes the request
  and the VM goes into the error/shutoff state.

  The user-facing behavior should be improved.

  https://review.openstack.org/#/c/307317/
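
  A hedged sketch of the kind of validation the fix adds (is_uuid_like
  is a real oslo.utils helper; the function and error here are
  illustrative, not the merged patch):

    from oslo_utils import uuidutils

    def validate_rescue_image_ref(image_ref):
        # Reject a malformed image reference up front, instead of
        # letting the rescue proceed and strand the VM in the
        # error/shutoff state.
        if image_ref and not uuidutils.is_uuid_like(image_ref):
            raise ValueError("Invalid rescue image ref: %s" % image_ref)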

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1571825/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1570113] Re: compute nodes do not get dvrha router update from server

2016-04-20 Thread Hardik Italia
Re-opening this bug, as it can still be reproduced with the master
branch by following the steps below.

Master branch : April 19 commit 21a2dbc25d2eeeae43faf6c7e5ebd2e4f7fd2be2

1) Create a network/subnet.
2) Create an external network & subnet.
3) Create a DVR HA router.
4) Set the router gateway.
5) Attach a router interface to the tenant network created above.
6) Boot a VM in the network created above and notice that the router
namespace is not being created on the compute node.

Example:

neutron net-create n3
neutron subnet-create --name s3 n3 3.3.3.0/24
neutron router-create r3 --distributed True --ha True
neutron router-gateway-set r3 ext-net
neutron router-interface-add r3 s3 
nova boot --image cirros-0.3.4-x86_64-uec --flavor 42 --nic net-id=b8fa3205-16f2-4e5a-aaf5-c0ca15195771 vm3

VM is running:

stack@ctl1:/opt/stack/neutron$ nova list
+--------------------------------------+------+--------+------------+-------------+------------+
| ID                                   | Name | Status | Task State | Power State | Networks   |
+--------------------------------------+------+--------+------------+-------------+------------+
| 6ab2d1ad-284c-429f-b68e-37ccb82039b9 | vm3  | ACTIVE | -          | Running     | n3=3.3.3.5 |
+--------------------------------------+------+--------+------------+-------------+------------+


from compute node:
stack@cn:/opt/stack/neutron$ virsh list
 IdName   State

 4 instance-0006  running

stack@cn:/opt/stack/neutron$ ip netns
stack@cn:/opt/stack/neutron$ 

Workaround:
If we boot the VM first and then add the router's interface to the
tenant network, the router namespace is created.

** Changed in: neutron
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1570113

Title:
  compute nodes do not get dvrha router update from server

Status in neutron:
  Confirmed

Bug description:
  branch: master (April 13th 2016):  commit
  2a305c563073a3066aac3f07aab3c895ec2cd2fb

  Topology: 
  1 controller (neutronserver)
  3 network nodes (l3_agent in dvr_snat mode)
  2 compute nodes (l3_agent in dvr mode)

  behavior: when a dvr/ha router is created and a vm instantiated on the
  compute node, the l3_agent on the compute node does not instantiate a
  local router. (the qr-namespace along with all the interfaces).

  expected: when a dvr/ha router is created and a vm is instantiated on
  a compute node, the l3_agent running on that compute node should
  instantiate a local router.  The qr-router-id namespace along with all
  the appropriate interfaces should be present locally on the compute
  node. Similar in behavior to a dvr only router. (without ha).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1570113/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1552397] Re: unable to set configuration file when running keystone as wsgi application

2016-04-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/288216
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=c7cb72b20e181e7df0bed2370a48b3aea249162f
Submitter: Jenkins
Branch:master

commit c7cb72b20e181e7df0bed2370a48b3aea249162f
Author: Cristian Sava 
Date:   Fri Mar 4 00:55:03 2016 +

Customize config file location when run as wsgi app.

Running keystone as a wsgi application should allow the same kind of
customization as when run from the command line. Setting sys.argv for
wsgi applications is difficult, so environment variables need to be
used for this purpose.

Closes-Bug: #1552397

Change-Id: I1cd8c7c9f8d4c748384f9b72511b677176672791


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1552397

Title:
  unable to set configuration file when running keystone as wsgi
  application

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Currently, the initialize_application() function defined in the
  keystone/server/wsgi.py module does not allow defining custom
  locations for the configuration file:

  def initialize_application(name, post_log_configured_function=lambda: None):
      common.configure()
      ...

  I think the initialize_application() prototype should allow passing
  arguments through to the common.configure() function, which would
  allow, for instance, defining alternate config file locations. Such
  customization is possible when running keystone under the eventlet
  model.
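
  A hedged sketch of the committed approach as described (the
  environment variable name and the configure() keyword are assumptions
  for illustration; 'common' is the keystone module from the snippet
  above):

    import os

    def initialize_application(name, post_log_configured_function=lambda: None):
        # sys.argv is hard to control for a wsgi app, so take alternate
        # config file locations from the environment instead.
        config_files = None
        env = os.environ.get('KEYSTONE_CONFIG_FILES')  # assumed name
        if env:
            config_files = [p.strip() for p in env.split(';') if p.strip()]
        common.configure(config_files=config_files)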

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1552397/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572686] [NEW] Console errors in Metadata widget

2016-04-20 Thread Rob Cresswell
Public bug reported:

Seeing the following error in the Update Metadata widget, in both Launch
Instance and from Images:

angular.js:11707 TypeError: Cannot read property 'type' of null
at new MetadataTreeItemController (metadata-tree-item.controller.js:35)
at Object.invoke (angular.js:4219)
at extend.instance (angular.js:8525)
at angular.js:7771
at forEach (angular.js:334)
at nodeLinkFn (angular.js:7770)
at delayedNodeLinkFn (angular.js:8048)
at compositeLinkFn (angular.js:7149)
at nodeLinkFn (angular.js:7795)
at compositeLinkFn (angular.js:7149)

Refers to this line of code:
https://github.com/openstack/horizon/blob/b0f3ec3ace531c110f328d208cded302d2617f88/horizon/static/framework/widgets/metadata/tree/metadata-tree-item.controller.js#L35

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: angularjs


** Changed in: horizon
Milestone: None => next

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1572686

Title:
  Console errors in Metadata widget

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Seeing the following error in the Update Metadata widget, in both
  Launch Instance and from Images:

  angular.js:11707 TypeError: Cannot read property 'type' of null
  at new MetadataTreeItemController (metadata-tree-item.controller.js:35)
  at Object.invoke (angular.js:4219)
  at extend.instance (angular.js:8525)
  at angular.js:7771
  at forEach (angular.js:334)
  at nodeLinkFn (angular.js:7770)
  at delayedNodeLinkFn (angular.js:8048)
  at compositeLinkFn (angular.js:7149)
  at nodeLinkFn (angular.js:7795)
  at compositeLinkFn (angular.js:7149)

  Refers to this line of code:
  https://github.com/openstack/horizon/blob/b0f3ec3ace531c110f328d208cded302d2617f88/horizon/static/framework/widgets/metadata/tree/metadata-tree-item.controller.js#L35

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1572686/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572682] [NEW] Navbar header is misaligned

2016-04-20 Thread Rob Cresswell
Public bug reported:

After the release of XStatic-Bootstrap 3.3.6 and XStatic-Bootswatch
3.3.6, the navbar branding image is aligned to the top, not the middle.

** Affects: horizon
 Importance: Low
 Assignee: Diana Whitten (hurgleburgler)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1572682

Title:
  Navbar header is misaligned

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  After the release of XStatic-Bootstrap 3.3.6 and XStatic-Bootswatch
  3.3.6, the navbar branding image is aligned to the top, not the
  middle.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1572682/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492716] Re: Can't do image-create for suspended instance booted from volume

2016-04-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/223382
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=cf82f524bf3a5c7986e4a33872762163b9541235
Submitter: Jenkins
Branch:master

commit cf82f524bf3a5c7986e4a33872762163b9541235
Author: Rui Chen 
Date:   Tue Sep 15 10:34:14 2015 +0800

Create image for suspended instance booted from volume

A suspended instance's power state is power-off on the hypervisor,
so there is no reason to prevent a suspended instance booted from
volume from creating an image; add the suspended state to the allowed
list of snapshot_volume_backed().

Change-Id: Id3457cd54af9b581b79eccc5d44a97dbdc63232e
Closes-Bug: #1492716


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1492716

Title:
  Can't do image-create for suspended instance booted from volume

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  1. code base

  stack@devstack:/opt/stack/nova$  [master]$ git log -1
  commit 0391a19d0ff29d2987b6cc55f846eea3cff0a358
  Merge: ce2da34 e5982de
  Author: Jenkins 
  Date:   Sun Sep 6 03:19:28 2015 +

  Merge "Reject the cell name include '!', '.' and '@' for Nova API"

  2. Reproduce steps:

  * boot an instance from a cinder bootable volume
  * suspend the instance
  * do image-create for the suspended instance

  Expected result:
  * image-create is successful

  Actual result:
  * ERROR (Conflict): Cannot 'createImage' instance 
d4cc1211-a58a-4b33-b6a4-e9e998925389 while it is in vm_state suspended (HTTP 
409) (Request-ID: req-5a6fb7e6-ea12-4e72-98f5-ff792fc60d48)
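
  A minimal model of the fix (hypothetical names; in Nova the check is a
  decorator on the compute API method, and the change simply adds the
  suspended state to the allowed vm_states for snapshotting volume-backed
  instances):

      ALLOWED_SNAPSHOT_STATES = {'active', 'stopped', 'paused', 'suspended'}

      def check_snapshot_allowed(vm_state):
          # mirrors the HTTP 409 above: disallowed states are rejected
          if vm_state not in ALLOWED_SNAPSHOT_STATES:
              raise RuntimeError("Cannot 'createImage' instance while it "
                                 "is in vm_state %s" % vm_state)

      check_snapshot_allowed('suspended')  # passes once suspended is allowed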

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1492716/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456624] Re: DVR Connection to external network lost when associating a floating IP

2016-04-20 Thread Carl Baldwin
In my review of the patch, I stated that I think the cure is much worse
than the problem.  I don't think anyone has chimed in to change my mind
and so I'm marking this as won't fix.  Ping me if you think it should be
fixed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1456624

Title:
  DVR Connection to external network lost when associating a floating IP

Status in neutron:
  Won't Fix

Bug description:
  In DVR, when a floating IP is associated with a port, the current
  connection (ssh or ping) to the external network will hang (become
  unresponsive).

  The connection may be any TCP, UDP, or ICMP connection that is tracked
  in conntrack.

  Assume a distributed router with interfaces for an internal network
  and an external network.

  When launching an instance, pinging an external network, and then
  associating a floating IP to the instance, the connection is lost, i.e.
  the ping fails.
  When running the ping command again - it's successful.

  Version
  ==
  RHEL 7.1
  python-nova-2015.1.0-3.el7ost.noarch
  python-neutron-2015.1.0-1.el7ost.noarch

  How to reproduce
  ==
  1. Create a distributed router and attach an internal and an external network 
to it.
  # neutron router-create --distributed True router1
  # neutron router-interface-add router1 
  # neutron router-gateway-set 

  2. Launch an instance and associate it with a floating IP.
  # nova boot --flavor m1.small --image fedora --nic net-id= vm1

  3. Go to the console of the instance and run ping to an external network:
   # ping 8.8.8.8

  4.  Associate a floating IP to the instance:
   # nova floating-ip-associate vm1 

  5. Verify that the ping fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1456624/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1570748] Re: Bug: resize instance after editing flavor with horizon

2016-04-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/307438
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=a46e847aad4ee7edbb63eb08f97f6635e6c9ccb0
Submitter: Jenkins
Branch:master

commit a46e847aad4ee7edbb63eb08f97f6635e6c9ccb0
Author: Dan Smith 
Date:   Mon Apr 18 12:40:54 2016 -0700

Fix reverse_upsize_quota_delta attempt to look up deleted flavors

When we did the "great flavor migration of 2015" we missed a quota method
which still looks up flavors by id from the migration. Now that flavors
are moved to the api database and actually removed when deleted, this no
longer works. The problem manifests itself as a failure when trying to
revert a migration or resize operation when the original flavor has been
deleted.

Change-Id: I5f95021410a309ac07fe9f474cbcd0214d1af208
Closes-Bug: #1570748


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1570748

Title:
  Bug: resize instance after editing flavor with horizon

Status in OpenStack Compute (nova):
  Fix Released
Status in tempest:
  In Progress

Bug description:
  An error occurs when resizing an instance after editing the flavor with
  Horizon (and also after deleting the flavor used by the instance)

  Reproduce step :

  1. create flavor A
  2. boot instance using flavor A
  3. edit flavor with horizon (or delete flavor A)
  -> the result is the same whether you edit or delete the flavor,
  because editing a flavor means deleting and recreating it
  4. resize or migrate instance
  5. Error occurred

  Log : 
  nova-compute.log
 File "/opt/openstack/src/nova/nova/conductor/manager.py", line 422, in 
_object_dispatch
   return getattr(target, method)(*args, **kwargs)

 File "/opt/openstack/src/nova/nova/objects/base.py", line 163, in wrapper
   result = fn(cls, context, *args, **kwargs)

 File "/opt/openstack/src/nova/nova/objects/flavor.py", line 132, in 
get_by_id
   db_flavor = db.flavor_get(context, id)

 File "/opt/openstack/src/nova/nova/db/api.py", line 1479, in flavor_get
   return IMPL.flavor_get(context, id)

 File "/opt/openstack/src/nova/nova/db/sqlalchemy/api.py", line 233, in 
wrapper
   return f(*args, **kwargs)

 File "/opt/openstack/src/nova/nova/db/sqlalchemy/api.py", line 4732, in 
flavor_get
   raise exception.FlavorNotFound(flavor_id=id)

   FlavorNotFound: Flavor 7 could not be found.

  
  This error occurs because of the code below:
  /opt/openstack/src/nova/nova/compute/manager.py

  def resize_instance(self, context, instance, image,
  reservations, migration, instance_type,
  clean_shutdown=True):
  
  if (not instance_type or
  not isinstance(instance_type, objects.Flavor)):
  instance_type = objects.Flavor.get_by_id(
  context, migration['new_instance_type_id'])
  

  I think the deleted flavor should be used when resizing an instance.
  I tested this on stable/kilo, but I think stable/liberty and stable/mitaka
  have the same bug because the source code is unchanged.
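
  A minimal model of the failure (hypothetical names, not Nova code): once
  a flavor row is hard-deleted, a fresh lookup by id can only fail, so the
  resize path has to use the flavor copy kept with the instance instead:

      # flavor "table": hard-deleted rows simply disappear
      flavors = {7: {'id': 7, 'ram': 512}}
      # the instance keeps an embedded copy of its flavor at boot time
      instance = {'flavor': flavors[7].copy()}
      del flavors[7]  # editing a flavor in Horizon deletes/recreates it

      def flavor_get(flavor_id):
          # stands in for objects.Flavor.get_by_id / db.flavor_get
          if flavor_id not in flavors:
              raise LookupError('Flavor %s could not be found.' % flavor_id)
          return flavors[flavor_id]

      try:
          flavor_get(7)  # old code path during resize/migrate
      except LookupError as exc:
          print(exc)  # -> Flavor 7 could not be found.

      print(instance['flavor'])  # the embedded copy is still available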

  thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1570748/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1442098] Re: instance_group_member entries not deleted when the instance deleted

2016-04-20 Thread Matt Riedemann
** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

** Also affects: nova/liberty
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1442098

Title:
  instance_group_member entries not deleted when the instance deleted

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  New
Status in OpenStack Compute (nova) mitaka series:
  In Progress

Bug description:
  Only the non-deleted members need to be selected; an instance group
  can accumulate a great many deleted instances over its lifetime.

  The selecting query contains a condition for omitting the deleted
  records:

  SELECT instance_groups.created_at AS instance_groups_created_at,
  instance_groups.updated_at AS instance_groups_updated_at,
  instance_groups.deleted_at AS instance_groups_deleted_at,
  instance_groups.deleted AS instance_groups_deleted, instance_groups.id
  AS instance_groups_id, instance_groups.user_id AS
  instance_groups_user_id, instance_groups.project_id AS
  instance_groups_project_id, instance_groups.uuid AS
  instance_groups_uuid, instance_groups.name AS instance_groups_name,
  instance_group_policy_1.created_at AS
  instance_group_policy_1_created_at, instance_group_policy_1.updated_at
  AS instance_group_policy_1_updated_at,
  instance_group_policy_1.deleted_at AS
  instance_group_policy_1_deleted_at, instance_group_policy_1.deleted AS
  instance_group_policy_1_deleted, instance_group_policy_1.id AS
  instance_group_policy_1_id, instance_group_policy_1.policy AS
  instance_group_policy_1_policy, instance_group_policy_1.group_id AS
  instance_group_policy_1_group_id, instance_group_member_1.created_at
  AS instance_group_member_1_created_at,
  instance_group_member_1.updated_at AS
  instance_group_member_1_updated_at, instance_group_member_1.deleted_at
  AS instance_group_member_1_deleted_at, instance_group_member_1.deleted
  AS instance_group_member_1_deleted, instance_group_member_1.id AS
  instance_group_member_1_id, instance_group_member_1.instance_id AS
  instance_group_member_1_instance_id, instance_group_member_1.group_id
  AS instance_group_member_1_group_id  FROM instance_groups LEFT OUTER
  JOIN instance_group_policy AS instance_group_policy_1 ON
  instance_groups.id = instance_group_policy_1.group_id AND
  instance_group_policy_1.deleted = 0 AND instance_groups.deleted = 0
  LEFT OUTER JOIN instance_group_member AS instance_group_member_1 ON
  instance_groups.id = instance_group_member_1.group_id AND
  instance_group_member_1.deleted = 0 AND instance_groups.deleted = 0
  WHERE instance_groups.deleted = 0 AND instance_groups.project_id =
  '6da55626d6a04f4c99980dc17d34235f';

  (Captured at $nova server-group-list)

  But nova actually fetches the deleted records, because the `deleted`
  field is 0 even if the instance is already deleted.

  To figure out whether an instance is actually deleted, the nova API
  issues additional, otherwise unneeded queries.

  The instance_group_member records are actually set to deleted only
  when the instance group itself is deleted.

  show create table instance_group_member;

  CREATE TABLE `instance_group_member` (
`created_at` datetime DEFAULT NULL,
`updated_at` datetime DEFAULT NULL,
`deleted_at` datetime DEFAULT NULL,
`deleted` int(11) DEFAULT NULL,
`id` int(11) NOT NULL AUTO_INCREMENT,
`instance_id` varchar(255) DEFAULT NULL,
`group_id` int(11) NOT NULL,
PRIMARY KEY (`id`),
KEY `group_id` (`group_id`),
KEY `instance_group_member_instance_idx` (`instance_id`),
CONSTRAINT `instance_group_member_ibfk_1` FOREIGN KEY (`group_id`) 
REFERENCES `instance_groups` (`id`)
  ) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8

  1. Please delete the instance_group_member records when the instance gets
  deleted.
  2. Please add a combined (`deleted`, `group_id`) BTREE index; that way it
  will be usable in other situations as well, for example when only a
  single group's members are needed (see the sketch below).
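
  A sketch of the requested combined index as an Alembic migration body
  (table and column names come from the DDL above; the migration file
  wiring, revision ids, and the index name are assumptions):

      from alembic import op

      def upgrade():
          # combined index so queries filtering on non-deleted rows of a
          # single group can be served without scanning the whole table
          op.create_index('instance_group_member_deleted_group_id_idx',
                          'instance_group_member',
                          ['deleted', 'group_id'])

      def downgrade():
          op.drop_index('instance_group_member_deleted_group_id_idx',
                        table_name='instance_group_member')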

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1442098/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518430] Re: liberty: ~busy loop on epoll_wait being called with zero timeout

2016-04-20 Thread Aleksandr Shaposhnikov
** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: mos
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1518430

Title:
  liberty: ~busy loop on epoll_wait being called with zero timeout

Status in Mirantis OpenStack:
  New
Status in OpenStack Compute (nova):
  New
Status in nova package in Ubuntu:
  Confirmed

Bug description:
  Context: openstack juju/maas deploy using 1510 charms release
  on trusty, with:
    openstack-origin: "cloud:trusty-liberty"
    source: "cloud:trusty-updates/liberty

  * Several openstack nova- and neutron- services, at least:
  nova-compute, neutron-server, nova-conductor,
  neutron-openvswitch-agent, neutron-vpn-agent
  show near-busy looping on epoll_wait() calls, most frequently with a
  zero timeout.
  - nova-compute (chose it b/cos single proc'd) strace and ltrace captures:
    http://paste.ubuntu.com/13371248/ (ltrace, strace)

  As comparison, this is how it looks on a kilo deploy:
  - http://paste.ubuntu.com/13371635/

  * 'top' sample from a nova-cloud-controller unit from
 this completely idle stack:
    http://paste.ubuntu.com/13371809/

  FYI *not* seeing this behavior on keystone, glance, cinder,
  ceilometer-api.

  As this issue is present in several components, it likely comes
  from common libraries (oslo.concurrency?); FYI, I filed the bug against
  nova itself as a starting point for debugging. A short illustration of
  the busy-loop effect follows.
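
  A tiny self-contained illustration (Linux-only; a sketch of the symptom,
  not of the oslo code) of why a zero timeout turns an event loop into a
  busy loop:

      import select
      import time

      ep = select.epoll()
      deadline = time.time() + 0.1
      calls = 0
      while time.time() < deadline:
          ep.poll(0)  # zero timeout: returns immediately, burning CPU
          calls += 1
      print('epoll_wait calls in 100 ms with zero timeout:', calls)
      # a blocking ep.poll() would instead park the process in the kernel
      # and make essentially zero calls in the same window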

To manage notifications about this bug go to:
https://bugs.launchpad.net/mos/+bug/1518430/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572655] Re: Neutron-LBaaS v2: Redundant protocols for Listener and Pool

2016-04-20 Thread Doug Wiegley
Just one place won't work. I get that if it's pass-through, just one is
fine. But if you're terminating HTTPS, you might want cleartext back to
the members to offload SSL. Or not, if you're just being a sneaky
middleman.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572655

Title:
  Neutron-LBaaS v2:  Redundant protocols for Listener and Pool

Status in neutron:
  Won't Fix

Bug description:
  (This is more of a feature request than a bug.)

  Examine the available protocols for Listener and Pool.

  Listener:  TCP,   HTTP,   HTTPS,  TERMINATED_HTTPS
  Pool: TCP,HTTP,   HTTPS

  This combination may be redundant:
  Listener -> Pool
  HTTPS -> HTTP
  TERMINATED_HTTPS -> HTTP

  It becomes quite complicated that we can create so many different
  combinations of protocols for pool and listener.

  I suggest having just one place to define a protocol - either in the
  pool or the listener.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1572655/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572670] [NEW] Ovs agent should set min burst value if user doesn't provide it

2016-04-20 Thread Slawek Kaplonski
Public bug reported:

If a user sets a QoS bw_limit rule without giving a burst value (or with the
burst set to 0 kb), the bandwidth limit may not work properly. We should
change the openvswitch QoS driver to use a minimum burst value of 80% of the
configured bw_limit; 80% is a value that should work well for TCP traffic.
Such a change will make behaviour consistent between the Linuxbridge agent
and the openvswitch agent. See https://launchpad.net/bugs/1563720 for the LB
implementation details.
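
A minimal sketch of the proposed default (assuming the same 80% rule the
Linuxbridge agent adopted in bug 1563720; the function and variable names
are illustrative only):

    def burst_with_default(max_kbps, burst_kb):
        # fall back to 80% of the bandwidth limit when the user gives no
        # burst (or gives 0), which behaves well for TCP traffic
        if not burst_kb:
            return int(max_kbps * 0.8)
        return burst_kb

    print(burst_with_default(1000, 0))    # -> 800
    print(burst_with_default(1000, 500))  # -> 500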

** Affects: neutron
 Importance: Undecided
 Assignee: Slawek Kaplonski (slaweq)
 Status: New


** Tags: ovs qos

** Changed in: neutron
 Assignee: (unassigned) => Slawek Kaplonski (slaweq)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572670

Title:
  Ovs agent should set min burst value if user doesn't provide it

Status in neutron:
  New

Bug description:
  If a user sets a QoS bw_limit rule without giving a burst value (or with
  the burst set to 0 kb), the bandwidth limit may not work properly. We
  should change the openvswitch QoS driver to use a minimum burst value of
  80% of the configured bw_limit; 80% is a value that should work well for
  TCP traffic.
  Such a change will make behaviour consistent between the Linuxbridge
  agent and the openvswitch agent. See https://launchpad.net/bugs/1563720
  for the LB implementation details.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1572670/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572656] Re: Neutron IETF Service Function Chaining API

2016-04-20 Thread Armando Migliaccio
Invalid for now. This should be tracked in networking-sfc.

** Also affects: networking-sfc
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: In Progress => Opinion

** Changed in: neutron
 Assignee: Igor Duarte Cardoso (igordcard) => (unassigned)

** Changed in: neutron
   Status: Opinion => Invalid

** Tags removed: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572656

Title:
  Neutron IETF Service Function Chaining API

Status in networking-sfc:
  New
Status in neutron:
  Invalid

Bug description:
  The ability to dynamically instantiate Service Function Chains (SFCs) is
  sought by many. Doing so with OpenStack via Neutron brings additional
  value since basic networking is already handled and a framework is in
  place for translating network abstractions to back-ends such as
  virtual switches and SDN controllers. With that in mind, only additional
  logic for traffic classification and steering, plus an API capable of
  reflecting these capabilities, needs to be added to make Neutron the single
  API necessary to instantiate and manage Service Function Chains together
  with the pure networking abstraction.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-sfc/+bug/1572656/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569641] Re: server group members are not deleted on failed server create overquota

2016-04-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/304929
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=5674e7646d106751b27d191e3334d9e6ebe9ab1b
Submitter: Jenkins
Branch:master

commit 5674e7646d106751b27d191e3334d9e6ebe9ab1b
Author: Matt Riedemann 
Date:   Tue Apr 12 22:09:16 2016 -0400

Properly clean up BDMs when _provision_instances fails

_provision_instances calls create_db_entry_for_new_instance
which creates the instance and block device mappings in the
database.

The instance is added to the instances list which is used
in the global exception block at the bottom of _provision_instances
to destroy any instances created. A failure that can trigger
this cleanup attempt after the instance and BDMs are created
is when checking server group member count fails with OverQuota.

The problem is that we fail to (soft) delete the block device mappings
that we created. Since there is a foreign key constraint between
the block_device_mapping and instances tables in the database,
when we try to archive (copy soft deleted things to shadow tables
and then hard-delete them) the deleted instance it will fail on
a referential constraint error due to the BDM(s) which weren't deleted.

We can fix this by deleting the BDMs when deleting the instance just
like we do for other reference tables.

A functional test is added to demonstrate the failure and fix which
also has the nice benefit of functionally testing the server group
member overquota error handling.

Change-Id: Ida66a93031046bafcf30c95ca232fb6382c2597b
Closes-Bug: #1569641


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1569641

Title:
  server group members are not deleted on failed server create overquota

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  Confirmed
Status in OpenStack Compute (nova) mitaka series:
  Confirmed

Bug description:
  When creating instances in the database in the compute API, if we fail
  after creating them we attempt to delete the instances from the DB
  here:

  
https://github.com/openstack/nova/blob/af7e83fef3bc2c005c581587e9230a4070f8feb9/nova/compute/api.py#L1033

  However, if there is a failure it's ignored and we continue and just
  re-raise the exception.

  The instances can fail to delete because of a referential constraint
  on the block device mappings created here:

  
https://github.com/openstack/nova/blob/af7e83fef3bc2c005c581587e9230a4070f8feb9/nova/compute/api.py#L1471

  So if we don't delete those first, we can't clean up the instances. You
  can recreate this by changing CONF.quota_server_group_members=0 and
  trying to boot a server into a server group. A small model of the
  failure follows.
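
  A self-contained model of the constraint failure (SQLite stands in for
  the real schema; table and column names follow the bug, everything else
  is trimmed). Nova soft-deletes first and hits the constraint at archive
  time; the model compresses that into plain deletes:

      import sqlite3

      db = sqlite3.connect(':memory:')
      db.execute('PRAGMA foreign_keys = ON')
      db.execute('CREATE TABLE instances (uuid TEXT PRIMARY KEY)')
      db.execute('CREATE TABLE block_device_mapping ('
                 'id INTEGER PRIMARY KEY, '
                 'instance_uuid TEXT REFERENCES instances(uuid))')
      db.execute("INSERT INTO instances VALUES ('inst-1')")
      db.execute("INSERT INTO block_device_mapping (instance_uuid) "
                 "VALUES ('inst-1')")

      try:
          # cleanup that forgets the BDMs fails on the foreign key
          db.execute("DELETE FROM instances WHERE uuid = 'inst-1'")
      except sqlite3.IntegrityError as exc:
          print('cleanup fails:', exc)

      # the fix: delete the child rows first, then the instance
      db.execute("DELETE FROM block_device_mapping "
                 "WHERE instance_uuid = 'inst-1'")
      db.execute("DELETE FROM instances WHERE uuid = 'inst-1'")
      print('cleanup succeeded')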

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1569641/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572656] [NEW] Neutron IETF Service Function Chaining API

2016-04-20 Thread Igor Duarte Cardoso
Public bug reported:

The ability to dynamically instantiate Service Function Chains (SFCs) is
sought by many. Doing so with OpenStack via Neutron brings additional
value since basic networking is already handled and a framework is in
place for translating network abstractions to back-ends such as
virtual switches and SDN controllers. With that in mind, only additional
logic for traffic classification and steering, plus an API capable of
reflecting these capabilities, needs to be added to make Neutron the single
API necessary to instantiate and manage Service Function Chains together
with the pure networking abstraction.

** Affects: neutron
 Importance: Undecided
 Assignee: Igor Duarte Cardoso (igordcard)
 Status: In Progress


** Tags: rfe

** Changed in: neutron
 Assignee: (unassigned) => Igor Duarte Cardoso (igordcard)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572656

Title:
  Neutron IETF Service Function Chaining API

Status in neutron:
  In Progress

Bug description:
  The ability to dynamically instantiate Service Function Chains (SFCs) is
  sought by many. Doing so with OpenStack via Neutron brings additional
  value since basic networking is already handled and a framework is in
  place for translating network abstractions to back-ends such as
  virtual switches and SDN controllers. With that in mind, only additional
  logic for traffic classification and steering, plus an API capable of
  reflecting these capabilities, needs to be added to make Neutron the single
  API necessary to instantiate and manage Service Function Chains together
  with the pure networking abstraction.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1572656/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572655] [NEW] Neutron-LBaaS v2: Redundant protocols for Listener and Pool

2016-04-20 Thread Franklin Naval
Public bug reported:

(This is more of a feature request than a bug.)

Examine the available protocols for Listener and Pool.

Listener:  TCP, HTTP,   HTTPS,  TERMINATED_HTTPS
Pool: TCP,  HTTP,   HTTPS

This combination may be redundant:
Listener -> Pool
HTTPS -> HTTP
TERMINATED_HTTPS -> HTTP

It becomes quite complicated that we can create so many different
combinations of protocols for pool and listener.

I suggest having just one place to define a protocol - either in the
pool or the listener.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaasv2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572655

Title:
  Neutron-LBaaS v2:  Redundant protocols for Listener and Pool

Status in neutron:
  New

Bug description:
  (This is more of a feature request than a bug.)

  Examine the available protocols for Listener and Pool.

  Listener:  TCP,   HTTP,   HTTPS,  TERMINATED_HTTPS
  Pool: TCP,HTTP,   HTTPS

  This combination may be redundant:
  Listener -> Pool
  HTTPS -> HTTP
  TERMINATED_HTTPS -> HTTP

  It becomes quite complicated that we can create so many different
  combinations of protocols for pool and listener.

  I suggest having just one place to define a protocol - either in the
  pool or the listener.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1572655/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567977] Re: POST /servers with incorrect content-type returns 400, should be 415

2016-04-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/304958
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=a7019a87ba696509d825c6c6f2220331e4ffb033
Submitter: Jenkins
Branch:master

commit a7019a87ba696509d825c6c6f2220331e4ffb033
Author: Brandon Irizarry 
Date:   Wed Apr 13 05:18:25 2016 +

Changed an HTTP exception to return proper code

POSTing to /servers with a content-type of text/plain
and a text/plain body results in a response code of 400.
It should be 415. I found this line in the code that
appears to handle this singular case and modified
the HTTP exception used to the correct one. Tests were
also updated accordingly.

Change-Id: I5fa1fdba56803b2ef63b1efaaeeced6ceb7779d9
Closes-Bug: 1567977


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1567977

Title:
  POST /servers with incorrect content-type returns 400, should be 415

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  Confirmed
Status in OpenStack Compute (nova) mitaka series:
  Confirmed

Bug description:
  (master nova april 8, 2016)

  POSTing to /servers with a content-type of text/plain and a text/plain
  body results in a response code of 400. This is incorrect. It should
  be 415: 

  Here's the gabbi demo:

  - name: post bad content-type
xfail: True
POST: /servers
request_headers:
content-type: text/plain
data: I want a server so badly
status: 415

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1567977/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572635] [NEW] Can't replace the original file with newer version - Ceph Object Storage

2016-04-20 Thread Praveen Madire
Public bug reported:

Steps to reproduce the issue.
1) Log in to Horizon
2) Create a container
3) Upload a file to the above container
4) Edit the file on your local machine, then upload the latest version of
the file by selecting the Edit option next to the file
5) Select the newer version of the file
6) Click on Update Object

Actual results: the error message says: Object with that name already exists
Expected results: the file should be updated successfully

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1572635

Title:
  Can't replace the original file with newer version - Ceph Object Storage

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps to reproduce the issue.
  1) Log in to Horizon
  2) Create a container
  3) Upload a file to the above container
  4) Edit the file on your local machine, then upload the latest version of
  the file by selecting the Edit option next to the file
  5) Select the newer version of the file
  6) Click on Update Object

  Actual results: the error message says: Object with that name already exists
  Expected results: the file should be updated successfully

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1572635/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572593] Re: Impossible to attach detached port to another instance

2016-04-20 Thread Timofey Durakov
This is Nova issue.

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => High

** Changed in: nova
 Assignee: (unassigned) => Timofey Durakov (tdurakov)

** Changed in: heat
   Status: New => Incomplete

** Changed in: neutron
   Status: New => Invalid

** Changed in: heat
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572593

Title:
  Impossible to attach detached port to another instance

Status in heat:
  Invalid
Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The following scenario leads to an error:
   - create an independent port in neutron
   - boot a VM in nova
   - attach the port to the VM via nova interface-attach. As a result, the
  dns_name of the port is updated and the port is attached to the VM
   - detach the port from the VM via nova interface-detach. As a result,
  the port is detached, BUT dns_name still holds the name of the VM.

  If you try to attach this port to another VM, you will get an error that
  it's not possible to attach the port.

  The same issue happens for a Heat stack, when we have a VM+port and
  execute an update replace for the Server.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1572593/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572619] [NEW] Add project name to the project deleted notification event

2016-04-20 Thread Todd Johnson
Public bug reported:

I have some code that cleans up OpenStack resources when a project is
deleted by listening for the identity.project.deleted notification
event. This event payload only includes the project id. It would be
nice if it also included the project name. I send an email to the admin
listing the resources that were cleaned up, and it would be nicer to have
the project name as well as the id. Since the project has already been
deleted by the time I receive the event, I can't go back to keystone to
get the project name.
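
For context, a minimal sketch of such a listener with oslo.messaging (the
transport URL and topic are assumptions; in the basic notification format
the payload carries only resource_info, i.e. the project id):

    from oslo_config import cfg
    import oslo_messaging

    class Endpoint(object):
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            if event_type == 'identity.project.deleted':
                # today only the id is available; the name is already gone
                print('project deleted:', payload.get('resource_info'))

    transport = oslo_messaging.get_notification_transport(
        cfg.CONF, url='rabbit://guest:guest@controller:5672/')
    targets = [oslo_messaging.Target(topic='notifications')]
    listener = oslo_messaging.get_notification_listener(
        transport, targets, [Endpoint()])
    listener.start()
    listener.wait()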

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1572619

Title:
  Add project name to the project deleted notification event

Status in OpenStack Identity (keystone):
  New

Bug description:
  I have some code that cleans up OpenStack resources when a project is
  deleted by listening for the identity.project.deleted notification
  event. This event payload only includes the project id. It would be
  nice if it also included the project name. I send an email to the
  admin listing the resources that were cleaned up, and it would be
  nicer to have the project name as well as the id. Since the project
  has already been deleted by the time I receive the event, I can't go
  back to keystone to get the project name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1572619/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494001] Re: /usr/share/openstack-dashboard/manage.py {check, syncdb} SystemCheckError: System check identified some issues:

2016-04-20 Thread Rob Cresswell
Looks like the relevant fedora patch has been pushed to stable (and this
should not be tracked on upstream anyway) - marking invalid for those
reasons.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1494001

Title:
  /usr/share/openstack-dashboard/manage.py {check,syncdb}
  SystemCheckError: System check identified some issues:

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  A minor version update broke database checking and sync. A custom field
  in Django's user model clashes? It appears there was a minor change
  somewhere in the dashboard app from version
  openstack-dashboard-2015.1.0-7.el7.noarch to
  openstack-dashboard-2015.1.1-1.el7.noarch. There was also a minor update
  from python-django-horizon-2015.1.0-7.el7.noarch
  to python-django-horizon-2015.1.1-1.el7.noarch.

  
  Working Config:

   /etc/openstack-dashboard/local_settings

  SESSION_ENGINE = 'django.contrib.sessions.backends.db'
  DATABASES = {
  'default': {
  # Database configuration here
  'ENGINE': 'django.db.backends.mysql',
  'NAME': 'dash',
  'USER': 'dash',
  'PASSWORD': 'password',
  'HOST': '0.0.0.0',
  'default-character-set': 'utf8'
  }
  }

  rpm -qa |grep django
  python-django-pyscss-1.0.5-2.el7.noarch
  python-django-bash-completion-1.8.3-1.el7.noarch
  python-django-appconf-0.6-1.el7.noarch
  python-django-openstack-auth-1.2.0-4.el7.noarch
  python-django-horizon-2015.1.0-7.el7.noarch
  python-django-compressor-1.4-3.el7.noarch
  python-django-1.8.3-1.el7.noarch

  rpm -qa |grep dashboard
  openstack-dashboard-2015.1.0-7.el7.noarch

  /usr/share/openstack-dashboard/manage.py check
  System check identified no issues (0 silenced).


  Non Working:

   /etc/openstack-dashboard/local_settings

  SESSION_ENGINE = 'django.contrib.sessions.backends.db'
  DATABASES = {
  'default': {
  # Database configuration here
  'ENGINE': 'django.db.backends.mysql',
  'NAME': 'dash',
  'USER': 'dash',
  'PASSWORD': 'password',
  'HOST': '0.0.0.0',
  'default-character-set': 'utf8'
  }
  }

  
  rpm -qa |grep django
  python-django-appconf-0.6-1.el7.noarch
  python-django-pyscss-1.0.5-2.el7.noarch
  python-django-1.8.3-1.el7.noarch
  python-django-bash-completion-1.8.3-1.el7.noarch
  python-django-openstack-auth-1.2.0-4.el7.noarch
  python-django-compressor-1.4-3.el7.noarch
  python-django-horizon-2015.1.1-1.el7.noarch

  rpm -qa |grep dashboard
  openstack-dashboard-2015.1.1-1.el7.noarch

  /usr/share/openstack-dashboard/manage.py check
  SystemCheckError: System check identified some issues:

  ERRORS:
  auth.User.groups: (fields.E304) Reverse accessor for 'User.groups' clashes 
with reverse accessor for 'User.groups'.
HINT: Add or change a related_name argument to the definition for 
'User.groups' or 'User.groups'.
  auth.User.user_permissions: (fields.E304) Reverse accessor for 
'User.user_permissions' clashes with reverse accessor for 
'User.user_permissions'.
HINT: Add or change a related_name argument to the definition for 
'User.user_permissions' or 'User.user_permissions'.
  openstack_auth.User.groups: (fields.E304) Reverse accessor for 'User.groups' 
clashes with reverse accessor for 'User.groups'.
HINT: Add or change a related_name argument to the definition for 
'User.groups' or 'User.groups'.
  openstack_auth.User.user_permissions: (fields.E304) Reverse accessor for 
'User.user_permissions' clashes with reverse accessor for 
'User.user_permissions'.
HINT: Add or change a related_name argument to the definition for 
'User.user_permissions' or 'User.user_permissions'.

  System check identified 4 issues (0 silenced).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1494001/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1556836] Re: QOS: TestQosPlugin should receive plugin name as an input

2016-04-20 Thread Miguel Angel Ajo
This one is not valid anymore, since it is now again the "qos" plugin
with no subclassing.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1556836

Title:
  QOS: TestQosPlugin should receive plugin name as an input

Status in neutron:
  Invalid

Bug description:
  The neutron QoS plugin test class 'TestQosPlugin' uses the hard coded name 
'qos' as the plugin name (alias, in this case) and hard coded core plugin name 
as well.
  In order to inherit from this test class, and use it in the vmware-nsx 
integration, we need to set the plugin name dynamically, as a parameter of the 
test setup function, or in a separate internal method of this class.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1556836/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572593] Re: Impossible to attach detached port to another instance

2016-04-20 Thread Sergey Kraynev
This feature was added in mitaka-3, so if we decide to fix it in N1, we
need to backport it.

** Changed in: heat
   Importance: Undecided => High

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572593

Title:
  Impossible to attach detached port to another instance

Status in heat:
  New
Status in neutron:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  The following scenario leads to an error:
   - create an independent port in neutron
   - boot a VM in nova
   - attach the port to the VM via nova interface-attach. As a result, the
  dns_name of the port is updated and the port is attached to the VM
   - detach the port from the VM via nova interface-detach. As a result,
  the port is detached, BUT dns_name still holds the name of the VM.

  If you try to attach this port to another VM, you will get an error that
  it's not possible to attach the port.

  The same issue happens for a Heat stack, when we have a VM+port and
  execute an update replace for the Server.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1572593/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572593] [NEW] Impossible to attach detached port to another instance

2016-04-20 Thread Sergey Kraynev
Public bug reported:

The following scenario leads to an error:
 - create an independent port in neutron
 - boot a VM in nova
 - attach the port to the VM via nova interface-attach. As a result, the
dns_name of the port is updated and the port is attached to the VM
 - detach the port from the VM via nova interface-detach. As a result, the
port is detached, BUT dns_name still holds the name of the VM.

If you try to attach this port to another VM, you will get an error that
it's not possible to attach the port. A sketch of a manual workaround is
below.

The same issue happens for a Heat stack, when we have a VM+port and execute
an update replace for the Server.
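
A hedged sketch of a manual workaround with python-neutronclient (clearing
the stale dns_name so the port can be attached again; the credentials and
the port id are placeholders, and the proper fix belongs in Nova's detach
path):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='...',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')
    # reset the dns_name left over from the previous attachment
    neutron.update_port('PORT_ID', {'port': {'dns_name': ''}})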

** Affects: heat
 Importance: High
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: mitaka-backport-potential

** Description changed:

  The follow scenario leads to error:
-  - create independent port in neutron
-  - boot VM in nova
-  - attach port to VM via nova interface-attach. As result dns_name of port 
will be updated and port be attached to VM
-  - detach port from VM via nova interface-detach. As result port will be 
detached, BUT dns_name will be still as name of VM.
+  - create independent port in neutron
+  - boot VM in nova
+  - attach port to VM via nova interface-attach. As result dns_name of port 
will be updated and port be attached to VM
+  - detach port from VM via nova interface-detach. As result port will be 
detached, BUT dns_name will be still as name of VM.
  
  if you try to attach this port to another VM you will get error, that
  it's not possible to attach port.
+ 
+ 
+ The same issue happens for Heat stack, when we have VM+port and execute 
update replace for Server.

** Also affects: heat
   Importance: Undecided
   Status: New

** Changed in: heat
Milestone: None => newton-1

** Tags added: mitaka-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572593

Title:
  Impossible to attach detached port to another instance

Status in heat:
  New
Status in neutron:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  The following scenario leads to an error:
   - create an independent port in neutron
   - boot a VM in nova
   - attach the port to the VM via nova interface-attach. As a result, the
  dns_name of the port is updated and the port is attached to the VM
   - detach the port from the VM via nova interface-detach. As a result,
  the port is detached, BUT dns_name still holds the name of the VM.

  If you try to attach this port to another VM, you will get an error that
  it's not possible to attach the port.

  The same issue happens for a Heat stack, when we have a VM+port and
  execute an update replace for the Server.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1572593/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572592] [NEW] gate-neutron-lbaasv1-dsvm-api gate broken on Neutron LBaaS: fail to find a quota

2016-04-20 Thread Victor Stinner
Public bug reported:

The gate-neutron-lbaasv1-dsvm-api gate job of Neutron LBaaS appears to
always fail on test_quotas; an example of a failure follows.

For example, the gate failed on the approve "Updated from global
requirements" change: https://review.openstack.org/#/c/307761/

Console logs: http://logs.openstack.org/61/307761/1/check/gate-neutron-lbaasv1-dsvm-api/69872dd/console.html

neutron_lbaas.tests.tempest.v1.api.admin.test_quotas.QuotasTest.test_lbaas_quotas[gate]
---

Traceback (most recent call last):
  File 
"neutron_lbaas/tests/tempest/lib/services/network/json/network_client.py", line 
330, in reset_quotas
resp, body = self.delete(uri)
  File 
"/opt/stack/new/neutron-lbaas/.tox/apiv1/local/lib/python2.7/site-packages/tempest/lib/common/rest_client.py",
 line 290, in delete
return self.request('DELETE', url, extra_headers, headers, body)
  File 
"/opt/stack/new/neutron-lbaas/.tox/apiv1/local/lib/python2.7/site-packages/tempest/lib/common/rest_client.py",
 line 642, in request
resp, resp_body)
  File 
"/opt/stack/new/neutron-lbaas/.tox/apiv1/local/lib/python2.7/site-packages/tempest/lib/common/rest_client.py",
 line 695, in _error_checker
raise exceptions.NotFound(resp_body, resp=resp)
tempest.lib.exceptions.NotFound: Object not found
Details: {u'message': u'Quota for tenant ce7b2ca707c6479f88b67e312d3764f2 could 
not be found.', u'detail': u'', u'type': u'TenantQuotaNotFound'}

neutron_lbaas.tests.tempest.v1.api.admin.test_quotas.QuotasTest.test_quotas[gate]
-

Traceback (most recent call last):
  File 
"neutron_lbaas/tests/tempest/lib/services/network/json/network_client.py", line 
330, in reset_quotas
resp, body = self.delete(uri)
  File 
"/opt/stack/new/neutron-lbaas/.tox/apiv1/local/lib/python2.7/site-packages/tempest/lib/common/rest_client.py",
 line 290, in delete
return self.request('DELETE', url, extra_headers, headers, body)
  File 
"/opt/stack/new/neutron-lbaas/.tox/apiv1/local/lib/python2.7/site-packages/tempest/lib/common/rest_client.py",
 line 642, in request
resp, resp_body)
  File 
"/opt/stack/new/neutron-lbaas/.tox/apiv1/local/lib/python2.7/site-packages/tempest/lib/common/rest_client.py",
 line 695, in _error_checker
raise exceptions.NotFound(resp_body, resp=resp)
tempest.lib.exceptions.NotFound: Object not found
Details: {u'message': u'Quota for tenant 89128f4ca5aa415db790b0ed30d8b24e could 
not be found.', u'detail': u'', u'type': u'TenantQuotaNotFound'}

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572592

Title:
  gate-neutron-lbaasv1-dsvm-api gate broken on Neutron LBaaS: fail to
  find a quota

Status in neutron:
  New

Bug description:
  The gate-neutron-lbaasv1-dsvm-api gate job of Neutron LBaaS appears to
  always fail on test_quotas; an example of a failure follows.

  For example, the gate failed on the approve "Updated from global
  requirements" change: https://review.openstack.org/#/c/307761/

  Console logs: http://logs.openstack.org/61/307761/1/check/gate-neutron-lbaasv1-dsvm-api/69872dd/console.html

  
neutron_lbaas.tests.tempest.v1.api.admin.test_quotas.QuotasTest.test_lbaas_quotas[gate]
  
---

  Traceback (most recent call last):
File 
"neutron_lbaas/tests/tempest/lib/services/network/json/network_client.py", line 
330, in reset_quotas
  resp, body = self.delete(uri)
File 
"/opt/stack/new/neutron-lbaas/.tox/apiv1/local/lib/python2.7/site-packages/tempest/lib/common/rest_client.py",
 line 290, in delete
  return self.request('DELETE', url, extra_headers, headers, body)
File 
"/opt/stack/new/neutron-lbaas/.tox/apiv1/local/lib/python2.7/site-packages/tempest/lib/common/rest_client.py",
 line 642, in request
  resp, resp_body)
File 
"/opt/stack/new/neutron-lbaas/.tox/apiv1/local/lib/python2.7/site-packages/tempest/lib/common/rest_client.py",
 line 695, in _error_checker
  raise exceptions.NotFound(resp_body, resp=resp)
  tempest.lib.exceptions.NotFound: Object not found
  Details: {u'message': u'Quota for tenant ce7b2ca707c6479f88b67e312d3764f2 
could not be found.', u'detail': u'', u'type': u'TenantQuotaNotFound'}

  
neutron_lbaas.tests.tempest.v1.api.admin.test_quotas.QuotasTest.test_quotas[gate]
  
-

  Traceback (most recent call last):
File 
"neutron_lbaas/tests/tempest/lib/services/network/json/network_client.py", line 
330, in reset_quotas
  resp, body = self.delete(uri)
File 
"/opt/stack/new/neutron-lbaas/.tox/apiv1/local/lib/python2.7/site-packages/tempest/lib/common/rest_client.py",
 line 

[Yahoo-eng-team] [Bug 1567977] Re: POST /servers with incorrect content-type returns 400, should be 415

2016-04-20 Thread Matt Riedemann
** Also affects: nova/liberty
   Importance: Undecided
   Status: New

** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

** Changed in: nova/liberty
   Status: New => Confirmed

** Changed in: nova/mitaka
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1567977

Title:
  POST /servers with incorrect content-type returns 400, should be 415

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) liberty series:
  Confirmed
Status in OpenStack Compute (nova) mitaka series:
  Confirmed

Bug description:
  (master nova april 8, 2016)

  POSTing to /servers with a content-type of text/plain and a text/plain
  body results in a response code of 400. This is incorrect. It should
  be 415: 

  Here's the gabbi demo:

  - name: post bad content-type
xfail: True
POST: /servers
request_headers:
content-type: text/plain
data: I want a server so badly
status: 415

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1567977/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1570694] Re: potentially unsafe use of shell commands

2016-04-20 Thread Andrey Kurilin
** No longer affects: rally

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1570694

Title:
  potentially unsafe use of shell commands

Status in neutron:
  Invalid
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Hello, I'm reviewing neutron-vpnaas for including into Ubuntu main and had
  some questions.

  Most of my concern lies in one file:
  
http://git.openstack.org/cgit/openstack/neutron-vpnaas/tree/rally-jobs/plugins/vpn_utils.py

  This file extensively calls sudo with string-constructed command lines
  with no parameter quoting of any kind. This allows easy shell injection
  problems.

  I can't decide if this is a security issue or not:

  - If this is intended to be used by the cloud 'owner' and only the owner,
    then it's probably fine as-is, though may suffer reliability issues.

  - If this is intended to allow individual tenants in the cloud to manage
    their own virtual machines, this _may_ be fine as-is, though may suffer
    reliability problems.

  - If this is intended to allow individual tenants in the cloud to manage
    cloud-owned networking machines, this package needs immediate attention
    from the openstack security team.

  Just search for 'sudo' in that file and I think the issue will be
  immediately obvious. Here's a few examples I collected for my notes,
  though they came from Ubuntu's packaging so may differ slightly:

    - cmd = "sudo ip netns exec {} ip a".format(namespace)
  interfaces = execute_cmd_over_ssh(controller, cmd, private_key)

    - cmd = "sudo ip netns exec {} ping -w {} -c {} {}".format(
  namespace, 2 * count, count, router_gw_ip)
  return ping(controller, cmd, private_key)

    - for key, ns_comp in zip(remote_key_files, ns_compute_tuples):
     cmd = "sudo rm -f {}".format(key)
     host = ns_comp[1]
     execute_cmd_over_ssh(host, cmd, private_key)

    - cmd = ("sudo ssh-keygen -f /root/.ssh/known_hosts -R"
     " {}".format(host))
  execute_cmd_over_ssh(compute_host, cmd, private_key)
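
  A hedged sketch of the safer pattern (shlex.quote is Python 3.3+; on
  Python 2 the equivalent is pipes.quote). The namespace value is a
  deliberately hostile placeholder:

      import shlex

      namespace = 'qrouter-x; rm -rf /'  # attacker-controlled input

      unsafe = "sudo ip netns exec {} ip a".format(namespace)
      safe = "sudo ip netns exec {} ip a".format(shlex.quote(namespace))

      print(unsafe)  # the injected command would run in a shell
      print(safe)    # the whole value stays a single, quoted argument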

  Thanks

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1570694/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1530830] Re: Instance Console Crashes : Unable to enter credentials at Instance Console

2016-04-20 Thread Rob Cresswell
Not sure why I can change this, but I've bumped it back to Confirmed.
Shouldn't we wait for Infra to move it to In Progress when the patch is
submitted with a "Closes-Bug: "?

Also removing Horizon, since it doesn't appear to be our issue.

** Changed in: nova
   Status: In Progress => Confirmed

** No longer affects: horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1530830

Title:
  Instance Console Crashes : Unable to enter credentials at Instance
  Console

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Proof of Concept

  After I spawn an instance, I click on the instance, then click the
  console tab. The Instance Console loads and I get a shell prompt
  asking for login/password.

  After I enter a few keystrokes, it doesn't print anything at the login
  prompt. After pressing some more keys constantly, it starts showing
  syslog messages and doesn't show the login prompt anymore.

  Please check the attachment

  I am using Chrome as the browser. The issue is also present when I
  tested from Firefox. Check the image attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1530830/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572555] [NEW] Nova reports memory_mb=0 for available Ironic node

2016-04-20 Thread Pavlo Shchelokovskyy
Public bug reported:

This is on latest devstack master and might be related to bug 1572472.

Reproduce

1. deploy Ironic+Nova in DevStack as usual, 3 VMs x 1cpu,1024MB RAM,10GB disk 
posing as Ironic nodes
ironic node-list
+--++---+-++-+
| UUID | Name   | Instance UUID | Power State | 
Provisioning State | Maintenance |
+--++---+-++-+
| bb785191-e548-4a6d-820e-bf2c5cdba922 | node-0 | None  | power off   | 
available  | False   |
| 2a508e69-08ab-4b0e-abaa-39e4e268bd47 | node-1 | None  | power off   | 
available  | False   |
| 9e0763d9-6f96-4327-a189-3bf12f5856ac | node-2 | None  | power off   | 
available  | False   |
+--++---+-++-+

2. check nova hypervisor-stats and nova hypervisor-show;
for all hypervisors nova reports memory_mb=1024, local_gb=10

nova hypervisor-stats
+--+---+
| Property | Value |
+--+---+
| count| 3 |
| current_workload | 0 |
| disk_available_least | 30|
| free_disk_gb | 30|
| free_ram_mb  | 3072  |
| local_gb | 30|
| local_gb_used| 0 |
| memory_mb| 3072  |
| memory_mb_used   | 0 |
| running_vms  | 0 |
| vcpus| 3 |
| vcpus_used   | 0 |
+--+---+

3. put two ironic nodes into maintenance
ironic node-set-maintenance node-1 on
ironic node-set-maintenance node-2 on
ironic node-list
+--++---+-++-+
| UUID | Name   | Instance UUID | Power State | 
Provisioning State | Maintenance |
+--++---+-++-+
| bb785191-e548-4a6d-820e-bf2c5cdba922 | node-0 | None  | power off   | 
available  | False   |
| 2a508e69-08ab-4b0e-abaa-39e4e268bd47 | node-1 | None  | power off   | 
available  | True|
| 9e0763d9-6f96-4327-a189-3bf12f5856ac | node-2 | None  | power off   | 
available  | True|
+--++---+-++-+

4. wait for nova hypervisor-stats to be updated

Expected result:
total memory_mb is 1024, total local_gb is 10

Actual result:
total memory_mb is 0, total local_gb is 0
nova hypervisor-stats
+----------------------+-------+
| Property             | Value |
+----------------------+-------+
| count                | 3     |
| current_workload     | 0     |
| disk_available_least | 10    |
| free_disk_gb         | 10    |
| free_ram_mb          | 1024  |
| local_gb             | 0     |
| local_gb_used        | 0     |
| memory_mb            | 0     |
| memory_mb_used       | 0     |
| running_vms          | 0     |
| vcpus                | 0     |
| vcpus_used           | 0     |
+----------------------+-------+

also hypervisor-show shows these values as 0:

nova hypervisor-show bb785191-e548-4a6d-820e-bf2c5cdba922
+-------------------------+--------------------------------------+
| Property                | Value                                |
+-------------------------+--------------------------------------+
| cpu_info                |                                      |
| current_workload        | 0                                    |
| disk_available_least    | 10                                   |
| free_disk_gb            | 10                                   |
| free_ram_mb             | 1024                                 |
| host_ip                 | 192.168.100.12                       |
| hypervisor_hostname     | bb785191-e548-4a6d-820e-bf2c5cdba922 |
| hypervisor_type         | ironic                               |
| hypervisor_version      | 1                                    |
| id                      | 1                                    |
| local_gb                | 0                                    |
| local_gb_used           | 0                                    |
| memory_mb               | 0                                    |
| memory_mb_used          | 0                                    |
| running_vms             | 0                                    |
| service_disabled_reason | None                                 |
| service_host            | ironic                               |
| service_id              | 8                                    |
| state                   | up                                   |
| status                  | enabled                              |
+-------------------------+--------------------------------------+
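
For context, a minimal sketch of the resource accounting described above,
with assumed names (this is not verbatim nova code): the ironic host
manager treats a node in maintenance as unavailable and zeroes its
resources, which is expected; the bug is that the remaining available
node's memory_mb/local_gb also end up as 0 in the totals.

    # Hedged sketch (names assumed): maintenance nodes report zero resources,
    # so only node-0 should contribute memory_mb=1024 / local_gb=10 here.
    def node_resources(node):
        if node.maintenance:
            return {'vcpus': 0, 'memory_mb': 0, 'local_gb': 0}
        return {'vcpus': int(node.properties['cpus']),
                'memory_mb': int(node.properties['memory_mb']),
                'local_gb': int(node.properties['local_gb'])}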

[Yahoo-eng-team] [Bug 1572548] [NEW] metering-agent failed to get traffic counters when no router-namespace where meter-label-rules were added

2016-04-20 Thread Sergey Belous
Public bug reported:

Removing a router from the l3 agent causes errors from neutron-metering-agent.
The neutron-metering-agent keeps trying to get traffic counters from
iptables in the router namespace, which no longer exists after the router
is removed from the l3-agent.

Steps to reproduce:
1. Create internal net, subnet, router. Set external gateway for router, add 
interface to router for created net.
2. Create neutron-meter-label
3. Use 'neutron l3-agent-router-remove' command to remove a router from a L3 
agent
Trace in neutron-metering-agent logs:
2016-04-20 12:28:26.699 ERROR neutron.agent.linux.utils [-] Exit code: 1; 
Stdin: ; Stdout: ; Stderr: Cannot open network namespace 
"qrouter-c8acd926-88e5-4292-883e-e928fb3f6d32": No such file or directory

2016-04-20 12:28:26.700 ERROR 
neutron.services.metering.drivers.iptables.iptables_driver [-] Failed to get 
traffic counters, router: {u'status': u'ACTIVE', u'name': u'router05', 
u'gw_port_id': u'f36b30d3-5290-4896-837c-108b8cd4f3dc', u'admin_state_up': 
True, u'tenant_id': u'1c0eb24bdbb1406bb7d1346f36064ebd', u'_metering_labels': 
[{u'rules': [{u'remote_ip_prefix': u'0.0.0.0/0', u'direction': u'egress', 
u'metering_label_id': u'67dae290-38cb-4962-80ab-d6d3404dc6df', u'id': 
u'799f3361-6f90-4e36-a338-311d6e7c9d5b', u'excluded': False}], u'id': 
u'67dae290-38cb-4962-80ab-d6d3404dc6df'}], u'id': 
u'c8acd926-88e5-4292-883e-e928fb3f6d32'}
2016-04-20 12:28:26.700 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver Traceback (most 
recent call last):
2016-04-20 12:28:26.700 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver   File 
"/opt/stack/neutron/neutron/services/metering/drivers/iptables/iptables_driver.py",
 line 355, in get_traffic_counters
2016-04-20 12:28:26.700 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver chain, 
wrap=False, zero=True)
2016-04-20 12:28:26.700 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver   File 
"/opt/stack/neutron/neutron/agent/linux/iptables_manager.py", line 712, in 
get_traffic_counters
2016-04-20 12:28:26.700 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver current_table = 
self.execute(args, run_as_root=True)
2016-04-20 12:28:26.700 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver   File 
"/opt/stack/neutron/neutron/agent/linux/utils.py", line 137, in execute
2016-04-20 12:28:26.700 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver raise 
RuntimeError(msg)
2016-04-20 12:28:26.700 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver RuntimeError: Exit 
code: 1; Stdin: ; Stdout: ; Stderr: Cannot open network namespace 
"qrouter-c8acd926-88e5-4292-883e-e928fb3f6d32": No such file or directory
2016-04-20 12:28:26.700 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver
2016-04-20 12:28:26.700 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver
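
A minimal defensive sketch of the "skip vanished namespaces" idea,
mirroring the call from the trace above; 'im' stands for the router's
IptablesManager (name assumed) and this is not the upstream fix:

    # Hedged sketch: tolerate a router namespace that the l3-agent has
    # already removed instead of letting the iptables call raise.
    def get_traffic_counters_safe(im, chain):
        try:
            return im.get_traffic_counters(chain, wrap=False, zero=True)
        except RuntimeError:
            # "Cannot open network namespace ...": router was unscheduled.
            return None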

** Affects: neutron
 Importance: Undecided
 Assignee: Sergey Belous (sbelous)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Sergey Belous (sbelous)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572548

Title:
  metering-agent failed to get traffic counters when no router-namespace
  where meter-label-rules were added

Status in neutron:
  New

Bug description:
  Removing a router from the l3 agent causes errors from neutron-
  metering-agent. The neutron-metering-agent keeps trying to get traffic
  counters from iptables in the router namespace, which no longer exists
  after the router is removed from the l3-agent.

  Steps to reproduce:
  1. Create internal net, subnet, router. Set external gateway for router, add 
interface to router for created net.
  2. Create neutron-meter-label
  3. Use 'neutron l3-agent-router-remove' command to remove a router from a L3 
agent
  Trace in neutron-metering-agent logs:
  2016-04-20 12:28:26.699 ERROR neutron.agent.linux.utils [-] Exit code: 1; 
Stdin: ; Stdout: ; Stderr: Cannot open network namespace 
"qrouter-c8acd926-88e5-4292-883e-e928fb3f6d32": No such file or directory

  2016-04-20 12:28:26.700 ERROR 
neutron.services.metering.drivers.iptables.iptables_driver [-] Failed to get 
traffic counters, router: {u'status': u'ACTIVE', u'name': u'router05', 
u'gw_port_id': u'f36b30d3-5290-4896-837c-108b8cd4f3dc', u'admin_state_up': 
True, u'tenant_id': u'1c0eb24bdbb1406bb7d1346f36064ebd', u'_metering_labels': 
[{u'rules': [{u'remote_ip_prefix': u'0.0.0.0/0', u'direction': u'egress', 
u'metering_label_id': u'67dae290-38cb-4962-80ab-d6d3404dc6df', u'id': 
u'799f3361-6f90-4e36-a338-311d6e7c9d5b', u'excluded': False}], u'id': 
u'67dae290-38cb-4962-80ab-d6d3404dc6df'}], u'id': 
u'c8acd926-88e5-4292-883e-e928fb3f6d32'}
  2016-04-20 12:28:26.700 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver Traceback (most 
recent call last):
  2016-04-20 12:28:26.700 TRACE 

[Yahoo-eng-team] [Bug 1572543] [NEW] Request stable/liberty release for openstack/networking-hyperv

2016-04-20 Thread Claudiu Belu
Public bug reported:

Please release the new version for stable/liberty branch of networking-
hyperv.

commit id: 13401e80e3360b9f25797879c7ade7b768ca034f
tag: 1.0.4

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: release-subproject

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572543

Title:
  Request stable/liberty release for openstack/networking-hyperv

Status in neutron:
  New

Bug description:
  Please release the new version for stable/liberty branch of
  networking-hyperv.

  commit id: 13401e80e3360b9f25797879c7ade7b768ca034f
  tag: 1.0.4

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1572543/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1177924] Re: Use testr instead of nose as the unittest runner.

2016-04-20 Thread Valeriy Ponomaryov
** Also affects: manila-ui
   Importance: Undecided
   Status: New

** Changed in: manila-ui
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1177924

Title:
  Use testr instead of nose as the unittest runner.

Status in Ceilometer:
  Fix Released
Status in Cinder:
  Fix Released
Status in django-openstack-auth:
  New
Status in Glance:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Triaged
Status in OpenStack Identity (keystone):
  Fix Released
Status in manila-ui:
  New
Status in python-ceilometerclient:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-heatclient:
  Fix Released
Status in python-keystoneclient:
  Fix Released
Status in OpenStack Object Storage (swift):
  Triaged
Status in OpenStack DBaaS (Trove):
  Triaged

Bug description:
  We want to start using testr as our test runner instead of nose so
  that we can start running tests in parallel. For the projects that
  have switched we have seen improvements to test speed and quality.

  As part of getting set for that, we need to start using testtools and
  fixtures to provide the plumbing and test isolation needed for
  automatic parallelization. The work can be done piecemeal - with
  testtools and fixtures being added first, and then tox/run_tests being
  ported to use testr/subunit instead of nose.
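
  An illustrative snippet of that plumbing, assuming nothing beyond the
  testtools and fixtures libraries themselves:

      # Hedged sketch: testtools base class plus a fixture for per-test
      # isolation, which is what makes parallel runs under testr safe.
      import fixtures
      import testtools

      class ExampleTest(testtools.TestCase):
          def setUp(self):
              super(ExampleTest, self).setUp()
              self.tmpdir = self.useFixture(fixtures.TempDir()).path

          def test_has_isolated_tmpdir(self):
              self.assertTrue(self.tmpdir)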

  This work was semi tracked during Grizzly with this
  https://blueprints.launchpad.net/openstack-ci/+spec/grizzly-testtools
  blueprint. I am opening this bug so that we can track migration to
  testr on a per project basis.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1177924/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1533748] Re: Track template rendering times

2016-04-20 Thread Itxaka Serrano
Superseded by https://blueprints.launchpad.net/horizon/+spec/django-
debug-toolbar

** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1533748

Title:
  Track template rendering times

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Track how long it takes to render the templates in order to have an
  easy access to compare between problematic changes to templates

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1533748/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572509] [NEW] Nova boot fails when freed SRIOV port is used for booting

2016-04-20 Thread Preethi Dsilva
Public bug reported:

Nova boot fails when freed SRIOV port is used for booting

steps to reproduce:
===================
1. Create an SRIOV port.
2. Boot a VM --> boot is successful and the VM gets an IP.
3. Now delete the VM using nova delete -- successful (the MAC is released from the VF).
4. Using the port created in step 1, boot a new VM.

The VM fails to boot with the following error:
2016-04-20 10:55:05.005 TRACE nova.compute.manager [instance:
a47344fb-3254-4409-9c23-55e3cde693d9] hostname=instance.hostname)
2016-04-20 10:55:05.005 TRACE nova.compute.manager [instance:
a47344fb-3254-4409-9c23-55e3cde693d9] PortNotUsableDNS: Port
7219a612-014e-452e-b79a-19c87cc33db4 not usable for instance
a47344fb-3254-4409-9c23-55e3cde693d9. Value vm4 assigned to dns_name
attribute does not match instance's hostname vmtest4
2016-04-20 10:55:05.005 TRACE nova.compute.manager [instance:
a47344fb-3254-4409-9c23-55e3cde693d9]

Expected:
=========
As the port is unbound in step 3, we should be able to bind it in step 4.

The setup consists of a controller and a compute node with a Mellanox card
enabled for SRIOV. An Ubuntu 14.04 qcow2 image is used for the tenant VM boot.

Tested the above with Mitaka code.
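
The error above indicates nova rejects the port because its dns_name
("vm4", set when the first VM was booted) no longer matches the new
instance's hostname ("vmtest4"). A possible workaround sketch, assuming
the --dns-name option from the DNS integration extension is available in
this deployment:

    neutron port-update 7219a612-014e-452e-b79a-19c87cc33db4 --dns-name vmtest4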

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1572509

Title:
  Nova boot fails when freed SRIOV port is used for booting

Status in OpenStack Compute (nova):
  New

Bug description:
  Nova boot fails when freed SRIOV port is used for booting

  steps to reproduce:
  ===================
  1. Create an SRIOV port.
  2. Boot a VM --> boot is successful and the VM gets an IP.
  3. Now delete the VM using nova delete -- successful (the MAC is released from the VF).
  4. Using the port created in step 1, boot a new VM.

  The VM fails to boot with the following error:
  2016-04-20 10:55:05.005 TRACE nova.compute.manager [instance:
  a47344fb-3254-4409-9c23-55e3cde693d9] hostname=instance.hostname)
  2016-04-20 10:55:05.005 TRACE nova.compute.manager [instance:
  a47344fb-3254-4409-9c23-55e3cde693d9] PortNotUsableDNS: Port
  7219a612-014e-452e-b79a-19c87cc33db4 not usable for instance
  a47344fb-3254-4409-9c23-55e3cde693d9. Value vm4 assigned to dns_name
  attribute does not match instance's hostname vmtest4
  2016-04-20 10:55:05.005 TRACE nova.compute.manager [instance:
  a47344fb-3254-4409-9c23-55e3cde693d9]

  Expected:
  =========
  As the port is unbound in step 3, we should be able to bind it in step 4.

  The setup consists of a controller and a compute node with a Mellanox
  card enabled for SRIOV. An Ubuntu 14.04 qcow2 image is used for the
  tenant VM boot.

  Tested the above with Mitaka code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1572509/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572495] [NEW] Nova API fails to start if "ec2" is specified in enabled_apis

2016-04-20 Thread Sean Dague
Public bug reported:

As reported by Kevin Foxx yesterday, if "ec2" is listed in enabled_apis
in Mitaka and beyond, Nova will refuse to start because we removed it.

We assumed the change in defaults was good enough for people to move
forward, but didn't realize that many of the install tools (packstack
for sure) hardcoded this value into the config instead of sticking with
the defaults.

While we've added a new upgrade note on this, we could also actually
catch the error and not crash people.
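
A minimal sketch of the "catch it and warn" idea -- not the actual nova
patch; the function and constant names here are made up for illustration:

    import logging

    LOG = logging.getLogger(__name__)
    REMOVED_APIS = ('ec2',)

    def filter_enabled_apis(enabled_apis):
        # Hedged sketch: drop removed API names instead of refusing to start.
        kept = [api for api in enabled_apis if api not in REMOVED_APIS]
        for api in enabled_apis:
            if api in REMOVED_APIS:
                LOG.warning("API '%s' has been removed and will be ignored; "
                            "please update enabled_apis in nova.conf.", api)
        return kept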

** Affects: nova
 Importance: High
 Status: Confirmed


** Tags: api upgrades

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => High

** Tags added: api

** Tags added: upgrades

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1572495

Title:
  Nova API fails to start if "ec2" is specified in enabled_apis

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  As reported by Kevin Foxx yesterday, if "ec2" is listed in
  enabled_apis in Mitaka and beyond, Nova will refuse to start because
  we removed it.

  We assumed the change in defaults was good enough for people to move
  forward, but didn't realize that many of the install tools (packstack
  for sure) hardcoded this value into the config instead of sticking
  with the defaults.

  While we've added a new upgrade note on this, we could also actually
  catch the error and not crash people.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1572495/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572472] Re: Ironic Nova host manager Unsupported Operand

2016-04-20 Thread Sam Betts
** Description changed:

  There is a timeout issue that I see when running the Cisco Ironic Third
  party CI:
  
  Traceback (most recent call last):
    File "tempest/test.py", line 113, in wrapper
  return f(self, *func_args, **func_kwargs)
    File 
"/opt/stack/ironic/ironic_tempest_plugin/tests/scenario/test_baremetal_basic_ops.py",
 line 113, in test_baremetal_server_ops
  self.boot_instance()
    File 
"/opt/stack/ironic/ironic_tempest_plugin/tests/scenario/baremetal_manager.py", 
line 150, in boot_instance
  self.wait_node(self.instance['id'])
    File 
"/opt/stack/ironic/ironic_tempest_plugin/tests/scenario/baremetal_manager.py", 
line 116, in wait_node
  raise lib_exc.TimeoutException(msg)
  tempest.lib.exceptions.TimeoutException: Request timed out
  Details: Timed out waiting to get Ironic node by instance id 
0a252cd1-a020-40da-911b-2becd1820306
  
  On further investigation into the issue I see this error in the
  n-sch.log, which I expect is leading to the node not being available in
  nova, so the instance never gets assigned to the Ironic node:
  
  2016-04-20 08:41:02.953 ERROR oslo_messaging.rpc.dispatcher 
[req-e5ae7418-e311-491e-99b3-86aeaa0b4e3c tempest-BaremetalBasicOps-171486249 
tempest-BaremetalBasicOps-1004295680] Exception during message handling: 
unsupported operand type(s) for *: 'NoneType' and 'int'
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher Traceback (most 
recent call last):
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
138, in _dispatch_and_reply
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher 
incoming.message))
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
185, in _dispatch
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
127, in _do_dispatch
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
150, in inner
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher return 
func(*args, **kwargs)
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/scheduler/manager.py", line 104, in select_destinations
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher dests = 
self.driver.select_destinations(ctxt, spec_obj)
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 53, in 
select_destinations
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher 
selected_hosts = self._schedule(context, spec_obj)
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 104, in _schedule
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher hosts = 
self._get_all_host_states(elevated)
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 145, in 
_get_all_host_states
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher return 
self.host_manager.get_all_host_states(context)
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/scheduler/host_manager.py", line 574, in 
get_all_host_states
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher 
self._get_instance_info(context, compute))
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/scheduler/host_manager.py", line 180, in update
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher return 
_locked_update(self, compute, service, aggregates, inst_dict)
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
271, in inner
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher return 
f(*args, **kwargs)
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/scheduler/host_manager.py", line 169, in _locked_update
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher 
self._update_from_compute_node(compute)
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/scheduler/ironic_host_manager.py", line 44, in 
_update_from_compute_node
  2016-04-20 08:41:02.953 TRACE oslo_messaging.rpc.dispatcher 
self.free_disk_mb = 
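
The failing line in ironic_host_manager.py is evidently multiplying a
free_disk_gb of None by an int. A defensive sketch of the idea, with
attribute names assumed from the trace above -- not the actual upstream
fix:

    # Hedged sketch: guard against an Ironic node that has not reported
    # its resources yet, so NoneType * int cannot be raised.
    free_gb = compute.free_disk_gb or 0
    self.free_disk_mb = free_gb * 1024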

[Yahoo-eng-team] [Bug 1572474] [NEW] [Pluggable IPAM] Deadlock on simultaneous update subnet and ip allocation from subnet

2016-04-20 Thread Pavel Bondar
Public bug reported:

Observed in logs: 'Lock wait timeout exceeded; try restarting transaction' [1],
when two requests are concurrently executed in neutron:
- request A calls update subnet (req-5f9fc363-4b22-48e0-97e2-504aa7c3dda3)
- request B calls create port on the same subnet
(req-ccd11684-ad2b-4937-a3c1-dc46aaa36b2d)
As a result, both requests fail.

Request A tries to delete the 'ipamallocationpools' rows for the subnet_id,
which effectively removes the 'ipamavailabilityranges' rows via the foreign key.
Request B allocates an IP and modifies an av_range record in
'ipamavailabilityranges'. So the collision looks to be caused by concurrent
access to the 'ipamavailabilityranges' table.

[1] http://logs.openstack.org/23/181023/68/check/gate-tempest-dsvm-
neutron-full/a9180e0/logs/screen-q-svc.txt.gz#_2016-04-19_15_43_05_837
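
One common way to soften this kind of collision is to retry the aborted
transaction; a minimal sketch using oslo.db, where the helper names are
hypothetical and this is not the merged fix:

    from oslo_db import api as oslo_db_api

    @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
    def update_subnet_allocation_pools(context, subnet_id, pools):
        # Hedged sketch: re-run the whole delete/recreate transaction if it
        # aborts on a deadlock or lock-wait timeout.
        _delete_pools(context, subnet_id)         # hypothetical helper
        _create_pools(context, subnet_id, pools)  # hypothetical helper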

StackTrace with both requests failed:
2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters 
[req-5f9fc363-4b22-48e0-97e2-504aa7c3dda3 tempest-NetworksIpV6Test-714183411 -] 
DBAPIError exception wrapped from (pymysql.err.InternalError) (1205, u'Lock 
wait timeout exceeded; try restarting transaction') [SQL: u'DELETE FROM 
ipamallocationpools WHERE ipamallocationpools.ipam_subnet_id = 
%(ipam_subnet_id_1)s'] [parameters: {u'ipam_subnet_id_1': 
u'0b896671-8cc2-4e08-bbfe-05655e6c479c'}]
2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters Traceback 
(most recent call last):
2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1139, 
in _execute_context
2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters context)
2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 
450, in do_execute
2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters 
cursor.execute(statement, parameters)
2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 158, in 
execute
2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters result = 
self._query(query)
2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 308, in _query
2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters 
conn.query(q)
2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 820, in 
query
2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters 
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 1002, in 
_read_query_result
2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters 
result.read()
2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 1285, in 
read
2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters 
first_packet = self.connection._read_packet()
2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 966, in 
_read_packet
2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters 
packet.check_error()
2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 394, in 
check_error
2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters 
err.raise_mysql_exception(self._data)
2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/err.py", line 120, in 
raise_mysql_exception
2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters 
_check_mysql_exception(errinfo)
2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/err.py", line 115, in 
_check_mysql_exception
2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters raise 
InternalError(errno, errorvalue)
2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters 
InternalError: (1205, u'Lock wait timeout exceeded; try restarting transaction')
2016-04-19 15:43:05.837 17992 ERROR oslo_db.sqlalchemy.exc_filters
2016-04-19 15:43:05.847 17992 ERROR neutron.api.v2.resource 
[req-5f9fc363-4b22-48e0-97e2-504aa7c3dda3 tempest-NetworksIpV6Test-714183411 -] 
update failed
2016-04-19 15:43:05.847 17992 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
2016-04-19 15:43:05.847 17992 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/resource.py", line 84, in resource
2016-04-19 

[Yahoo-eng-team] [Bug 1568764] Re: impossible to modify template loaders when customizing horizon

2016-04-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/304067
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=17176d06173133a7e59432fdbd46af30e44c20d3
Submitter: Jenkins
Branch:master

commit 17176d06173133a7e59432fdbd46af30e44c20d3
Author: Yves-Gwenael Bourhis 
Date:   Mon Apr 11 13:40:44 2016 +0200

Template loaders defined before local settings

TEMPLATE_LOADERS was defined after loading local_settings.py and was
squashing
any attempt to customize the template loaders.
This patch adds CACHED_TEMPLATE_LOADERS, ADD_TEMPLATE_LOADERS and a doc
defining how to use them in order to ease customization process.

Change-Id: I2544529ee965ef01c6ac4973056801ebee50be6d
Closes-Bug: #1568764


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1568764

Title:
  impossible to modify template loaders when customizing horizon

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When customizing horizon as per:
  
http://docs.openstack.org/developer/horizon/topics/customizing.html#modifying-existing-dashboards-and-panels

  we need to modify TEMPLATE_LOADERS in local settings, usually horizon 
customizers need:
  https://pypi.python.org/pypi/django-apptemplates/

  To extend original templates with ease

  Since this patch:
  https://review.openstack.org/#/c/281976/

  horizon simply nukes any attempt to modify TEMPLATE_LOADERS because
  they are redefined after loading the local_settings; therefore,
  customizing horizon is impossible without intruding into horizon
  code.
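
  With the patch above, this customization becomes a matter of
  local_settings.py; a hedged sketch (the django-apptemplates loader
  path is assumed -- check the package's own docs):

      # local_settings.py
      ADD_TEMPLATE_LOADERS = ['apptemplates.Loader']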

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1568764/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572469] [NEW] Ha router changes state Master/Backup from time to time.

2016-04-20 Thread sklgromek
Public bug reported:

When the default route is set in the router attributes (not via neutron
router-gateway-set), the router changes state from master to backup from
time to time. In the keepalived config file, the route entry in the
virtual_routes section is duplicated every time the router state changes.

My environment:
Ubuntu 14.04 LTS
Openstack Kilo from ubuntu-cloud-archive
Keepalived v1.2.7 (08/14,2013)
l3-agent: neutron-vpn-agent 2015.1.3 

Replicate:
Set default route:
neutron router-update my-router --routes type=dict list=true 
destination=0.0.0.0/0,nexthop=192.168.0.10
Wait for a while...
The router will periodically change state, and the route entry in the
keepalived config file will be duplicated.
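
An illustration of the duplication described above (the qg- interface
name is made up); after each flap, keepalived's virtual_routes section
grows another copy of the same route:

    virtual_routes {
        0.0.0.0/0 via 192.168.0.10 dev qg-1234abcd
        0.0.0.0/0 via 192.168.0.10 dev qg-1234abcd
    }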

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572469

Title:
  Ha router  changes state Master/Backup from time to time.

Status in neutron:
  New

Bug description:
  When the default route is set in the router attributes (not via
  neutron router-gateway-set), the router changes state from master to
  backup from time to time. In the keepalived config file, the route
  entry in the virtual_routes section is duplicated every time the
  router state changes.

  My environment:
  Ubuntu 14.04 LTS
  Openstack Kilo from ubuntu-cloud-archive
  Keepalived v1.2.7 (08/14,2013)
  l3-agent: neutron-vpn-agent 2015.1.3 

  Replicate:
  Set default route:
  neutron router-update my-router --routes type=dict list=true 
destination=0.0.0.0/0,nexthop=192.168.0.10
  Wait for a while...
  The router will periodically change state, and the route entry in the
  keepalived config file will be duplicated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1572469/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572442] [NEW] tempest.scenario.ha_router_rescheduling should only apply to alive agents.

2016-04-20 Thread SHI Peiqi
Public bug reported:

In the HA router scheduling test case under
tempest/scenario/test_network_basic_ops.py, step 4 ("assign router to new
l3-agent (or old one if no new agent is available)") should make sure the
"new l3-agents" are alive.

Because, if there are some dead l3-agents in the HA router environment,
the "no_migration" flag will be set to False and "assign router to
dead (new) l3-agents" will fail the test, which is not its purpose.

The idea is to make sure the agent_list attribute lists only agents that
are alive, as in the sketch below.
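
A minimal sketch of the proposed filtering (the field names are the
standard Neutron agent API attributes; the client call is illustrative):

    # Hedged sketch: only consider L3 agents that are alive (and enabled)
    # as rescheduling targets.
    agents = client.list_agents(agent_type='L3 agent')['agents']
    agent_list = [a for a in agents if a['alive'] and a['admin_state_up']]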

** Affects: neutron
 Importance: Undecided
 Assignee: SHI Peiqi (uestc-shi)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => SHI Peiqi (uestc-shi)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572442

Title:
  tempest.scenario.ha_router_rescheduling should only apply  to alive
  agents.

Status in neutron:
  New

Bug description:
  In the HA router scheduling test case under
  tempest/scenario/test_network_basic_ops.py, step 4 ("assign router to
  new l3-agent (or old one if no new agent is available)") should make
  sure the "new l3-agents" are alive.

  Because, if there are some dead l3-agents in the HA router
  environment, the "no_migration" flag will be set to False and "assign
  router to dead (new) l3-agents" will fail the test, which is not its
  purpose.

  The idea is to make sure the agent_list attribute lists only agents
  that are alive.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1572442/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572439] [NEW] test_update_subnetpool_associate_address_scope_wrong_ip_version should check address-scope extension

2016-04-20 Thread YAMAMOTO Takashi
Public bug reported:

currently, test_update_subnetpool_associate_address_scope_wrong_ip_version
assumes the address-scope extension. It should be skipped when the
extension is not available.
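
A hedged sketch of the guard, using the extension-requirement decorator
as tempest-era tests spell it (verify against the tempest version in the
tree; the class context and test body are elided):

    from tempest import test

    # Hedged sketch: inside the existing test class, guard the test so it
    # is skipped when the Neutron 'address-scope' extension is not loaded.
    @test.requires_ext(extension='address-scope', service='network')
    def test_update_subnetpool_associate_address_scope_wrong_ip_version(self):
        self._check_association()  # hypothetical test body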

** Affects: neutron
 Importance: Undecided
 Assignee: YAMAMOTO Takashi (yamamoto)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572439

Title:
  test_update_subnetpool_associate_address_scope_wrong_ip_version should
  check address-scope extension

Status in neutron:
  In Progress

Bug description:
  currently, test_update_subnetpool_associate_address_scope_wrong_ip_version
  assumes the address-scope extension. It should be skipped when the
  extension is not available.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1572439/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1571486] Re: tempest jobs timout

2016-04-20 Thread YAMAMOTO Takashi
** Changed in: tempest
   Status: New => Invalid

** Summary changed:

- tempest jobs timout
+ tempest jobs timeout

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1571486

Title:
  tempest jobs timeout

Status in networking-midonet:
  In Progress
Status in neutron:
  In Progress
Status in tempest:
  Invalid

Bug description:
  the recently added neutron tempest plugin performs eventlet monkey
  patching, and it affects other tempest tests; namely, all or most
  paramiko-using tests seem to be failing.

  examples:
  
http://logs.openstack.org/87/199387/25/check/gate-tempest-dsvm-networking-midonet-v1/28549d7/
  
http://logs.openstack.org/87/199387/25/check/gate-tempest-dsvm-networking-midonet-v2/d72e49c/
  
http://logs.openstack.org/87/199387/25/check/gate-tempest-dsvm-networking-midonet-ml2/4d9e8ff/

  it seems tap-as-a-service and neutron-fwaas jobs are also affected.
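
  An illustration of the mechanism (not the plugin's actual code): a
  module-level call like the one below changes socket and threading
  behavior for every test loaded into the same process, which is how
  unrelated paramiko-based tests end up hanging:

      import eventlet
      eventlet.monkey_patch()  # replaces socket, threading, time, ... globally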

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1571486/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572411] Re: Keystone authetication error

2016-04-20 Thread muralidharan
It's an issue with blob data, so I am marking it invalid myself.
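
For reference, the trace below shows json decoding of a project row's
'extra' blob failing ("Expecting ',' delimiter"). A hedged one-off to
locate the corrupt row (table/column names as keystone's SQL backend
uses them; session setup omitted):

    # Hedged sketch: find project rows whose 'extra' blob is not valid JSON.
    import json

    for project_id, extra in session.execute("SELECT id, extra FROM project"):
        try:
            json.loads(extra or '{}')
        except ValueError as exc:
            print(project_id, exc)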

** Changed in: keystone
   Status: New => Opinion

** Changed in: keystone
   Status: Opinion => Incomplete

** Changed in: keystone
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1572411

Title:
  Keystone authetication error

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
   I have a working devstack setup with services added such as
  keystone, nova, horizon, cinder, cloudkitty, ceilometer, etc.

  But now I am getting some error in horizon while logging in.

  I am giving the correct username and password, but it gives the
  following error (Horizon):

  Unable to retrieve authorized projects.

  
  Keystone is generating the error which is as follows:

  2016-04-19 03:53:46.762426 26667 DEBUG keystone.middleware.core 
[req-9629a303-718a-4caf-abe8-e34c2324906a - - - - -] There is either no auth 
token in the request or the certificate issuer is not trusted. No auth context 
will be set. process_request /opt/stack/keystone/keystone/middleware/core.py:310
  2016-04-19 03:53:46.765221 26667 INFO keystone.common.wsgi 
[req-9629a303-718a-4caf-abe8-e34c2324906a - - - - -] POST 
http://198.100.181.65:5000/v2.0/tokens
  2016-04-19 03:53:46.806062 2 INFO keystone.common.wsgi 
[req-56b2c8ef-9104-4902-89b8-25ef06ee511e - - - - -] GET 
http://198.100.181.65:5000/
  2016-04-19 03:53:46.826444 26668 DEBUG keystone.common.authorization 
[req-212b39d3-b64d-48af-b4a2-da2bcd33c334 - - - - -] RBAC: Proceeding without 
project or domain scope token_to_auth_context 
/opt/stack/keystone/keystone/common/authorization.py:74
  2016-04-19 03:53:46.826782 26668 DEBUG keystone.middleware.core 
[req-212b39d3-b64d-48af-b4a2-da2bcd33c334 - - - - -] RBAC: auth_context: 
{'is_delegated_auth': False, 'user_id': u'4b8fcbd328de4ab190065c386480fda4', 
'trustee_id': None, 'trustor_id': None, 'consumer_id': None, 'token': 
, 'access_token_id': 
None, 'trust_id': None} process_request 
/opt/stack/keystone/keystone/middleware/core.py:314
  2016-04-19 03:53:46.828707 26668 INFO keystone.common.wsgi 
[req-212b39d3-b64d-48af-b4a2-da2bcd33c334 - - - - -] GET 
http://198.100.181.65:5000/v2.0/tenants
  2016-04-19 03:53:46.841585 26668 ERROR keystone.common.wsgi 
[req-212b39d3-b64d-48af-b4a2-da2bcd33c334 - - - - -] Expecting ',' delimiter: 
line 1 column 20 (char 19)
  2016-04-19 03:53:46.841608 26668 TRACE keystone.common.wsgi Traceback 
(most recent call last):
  2016-04-19 03:53:46.841614 26668 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/common/wsgi.py", line 248, in __call__
  2016-04-19 03:53:46.841617 26668 TRACE keystone.common.wsgi result = 
method(context, **params)
  2016-04-19 03:53:46.841621 26668 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/assignment/controllers.py", line 56, in 
get_projects_for_token
  2016-04-19 03:53:46.841626 26668 TRACE keystone.common.wsgi 
self.assignment_api.list_projects_for_user(token_ref.user_id))
  2016-04-19 03:53:46.841630 26668 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/assignment/core.py", line 290, in 
list_projects_for_user
  2016-04-19 03:53:46.841633 26668 TRACE keystone.common.wsgi return 
self.resource_api.list_projects_from_ids(project_ids)
  2016-04-19 03:53:46.841637 26668 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/resource/backends/sql.py", line 67, in 
list_projects_from_ids
  2016-04-19 03:53:46.841641 26668 TRACE keystone.common.wsgi return 
[project_ref.to_dict() for project_ref in query.all()]
  2016-04-19 03:53:46.841644 26668 TRACE keystone.common.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2584, in 
all
  2016-04-19 03:53:46.841648 26668 TRACE keystone.common.wsgi return 
list(self)
  2016-04-19 03:53:46.841651 26668 TRACE keystone.common.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/loading.py", line 86, in 
instances
  2016-04-19 03:53:46.841681 26668 TRACE keystone.common.wsgi 
util.raise_from_cause(err)
  2016-04-19 03:53:46.841686 26668 TRACE keystone.common.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 200, 
in raise_from_cause
  2016-04-19 03:53:46.841690 26668 TRACE keystone.common.wsgi 
reraise(type(exception), exception, tb=exc_tb)
  2016-04-19 03:53:46.841693 26668 TRACE keystone.common.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/loading.py", line 71, in 
instances
  2016-04-19 03:53:46.841697 26668 TRACE keystone.common.wsgi rows = 
[proc(row) for row in fetch]
  2016-04-19 03:53:46.841700 26668 TRACE keystone.common.wsgi   File 

[Yahoo-eng-team] [Bug 1566678] Re: database consistency problem for deleting router gateway port

2016-04-20 Thread Kevin Benton
Well actually just marking as fix released because the bug for that
patch had to do with HA interfaces, but it's the same underlying issue.

** Changed in: neutron
   Status: New => Fix Released

** Changed in: neutron
 Assignee: Anseela M M (anseela-m00) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566678

Title:
  database consistency problem for deleting router gateway port

Status in neutron:
  Fix Released

Bug description:
  * Summary
  When deleting l3 router gateway port,  if the gateway port deleting fail, 
then this gateway port cannot be delete any more.

  * Pre-conditions
  A vrouter is created with an external gateway port.

  * Step-by-step
  1. loop add and delete gateway port.
  2. loop kill the neutron-server and restart

  * Expect output
  The gateway port can be deleted or added successfully.

  * Actual output
  The gateway port is stale and cannot be deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1566678/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572411] [NEW] Keystone authetication error

2016-04-20 Thread muralidharan
Public bug reported:

 I have a working devstack setup with services added such as keystone,
nova, horizon, cinder, cloudkitty, ceilometer, etc.

But now I am getting some error in horizon while logging in.

I am giving the correct username and password, but it gives the
following error (Horizon):

Unable to retrieve authorized projects.


Keystone is generating the error which is as follows:

2016-04-19 03:53:46.762426 26667 DEBUG keystone.middleware.core 
[req-9629a303-718a-4caf-abe8-e34c2324906a - - - - -] There is either no auth 
token in the request or the certificate issuer is not trusted. No auth context 
will be set. process_request /opt/stack/keystone/keystone/middleware/core.py:310
2016-04-19 03:53:46.765221 26667 INFO keystone.common.wsgi 
[req-9629a303-718a-4caf-abe8-e34c2324906a - - - - -] POST 
http://198.100.181.65:5000/v2.0/tokens
2016-04-19 03:53:46.806062 2 INFO keystone.common.wsgi 
[req-56b2c8ef-9104-4902-89b8-25ef06ee511e - - - - -] GET 
http://198.100.181.65:5000/
2016-04-19 03:53:46.826444 26668 DEBUG keystone.common.authorization 
[req-212b39d3-b64d-48af-b4a2-da2bcd33c334 - - - - -] RBAC: Proceeding without 
project or domain scope token_to_auth_context 
/opt/stack/keystone/keystone/common/authorization.py:74
2016-04-19 03:53:46.826782 26668 DEBUG keystone.middleware.core 
[req-212b39d3-b64d-48af-b4a2-da2bcd33c334 - - - - -] RBAC: auth_context: 
{'is_delegated_auth': False, 'user_id': u'4b8fcbd328de4ab190065c386480fda4', 
'trustee_id': None, 'trustor_id': None, 'consumer_id': None, 'token': 
, 'access_token_id': 
None, 'trust_id': None} process_request 
/opt/stack/keystone/keystone/middleware/core.py:314
2016-04-19 03:53:46.828707 26668 INFO keystone.common.wsgi 
[req-212b39d3-b64d-48af-b4a2-da2bcd33c334 - - - - -] GET 
http://198.100.181.65:5000/v2.0/tenants
2016-04-19 03:53:46.841585 26668 ERROR keystone.common.wsgi 
[req-212b39d3-b64d-48af-b4a2-da2bcd33c334 - - - - -] Expecting ',' delimiter: 
line 1 column 20 (char 19)
2016-04-19 03:53:46.841608 26668 TRACE keystone.common.wsgi Traceback (most 
recent call last):
2016-04-19 03:53:46.841614 26668 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/common/wsgi.py", line 248, in __call__
2016-04-19 03:53:46.841617 26668 TRACE keystone.common.wsgi result = 
method(context, **params)
2016-04-19 03:53:46.841621 26668 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/assignment/controllers.py", line 56, in 
get_projects_for_token
2016-04-19 03:53:46.841626 26668 TRACE keystone.common.wsgi 
self.assignment_api.list_projects_for_user(token_ref.user_id))
2016-04-19 03:53:46.841630 26668 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/assignment/core.py", line 290, in 
list_projects_for_user
2016-04-19 03:53:46.841633 26668 TRACE keystone.common.wsgi return 
self.resource_api.list_projects_from_ids(project_ids)
2016-04-19 03:53:46.841637 26668 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/resource/backends/sql.py", line 67, in 
list_projects_from_ids
2016-04-19 03:53:46.841641 26668 TRACE keystone.common.wsgi return 
[project_ref.to_dict() for project_ref in query.all()]
2016-04-19 03:53:46.841644 26668 TRACE keystone.common.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2584, in 
all
2016-04-19 03:53:46.841648 26668 TRACE keystone.common.wsgi return 
list(self)
2016-04-19 03:53:46.841651 26668 TRACE keystone.common.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/loading.py", line 86, in 
instances
2016-04-19 03:53:46.841681 26668 TRACE keystone.common.wsgi 
util.raise_from_cause(err)
2016-04-19 03:53:46.841686 26668 TRACE keystone.common.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 200, 
in raise_from_cause
2016-04-19 03:53:46.841690 26668 TRACE keystone.common.wsgi 
reraise(type(exception), exception, tb=exc_tb)
2016-04-19 03:53:46.841693 26668 TRACE keystone.common.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/loading.py", line 71, in 
instances
2016-04-19 03:53:46.841697 26668 TRACE keystone.common.wsgi rows = 
[proc(row) for row in fetch]
2016-04-19 03:53:46.841700 26668 TRACE keystone.common.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/loading.py", line 428, 
in _instance
2016-04-19 03:53:46.841710 26668 TRACE keystone.common.wsgi 
loaded_instance, populate_existing, populators)
2016-04-19 03:53:46.841724 26668 TRACE keystone.common.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/loading.py", line 486, 
in _populate_full
2016-04-19 03:53:46.841742 26668 TRACE keystone.common.wsgi dict_[key] 
= getter(row)
2016-04-19 03:53:46.841747 26668 TRACE keystone.common.wsgi   File