[Yahoo-eng-team] [Bug 1563021] Re: Mouseover help for user settings items per page is not translatable

2016-04-25 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/308839
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=7b43577172c79bc9466984dca436a1e0ae70dc88
Submitter: Jenkins
Branch: master

commit 7b43577172c79bc9466984dca436a1e0ae70dc88
Author: Kenji Ishii 
Date:   Thu Apr 21 07:55:43 2016 +

Fix the issue help_text is not translated in User Setting

The help message for the page-size field is not translated.
When we define a form field, it seems that translation does not
work well if help_text contains a variable.
This patch fixes it.

Change-Id: I106e00c0fdc073999f879ffa032e11fde7735953
Closes-Bug: #1563021


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1563021

Title:
  Mouseover help for user settings items per page is not translatable

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Neutron LBaaS Dashboard:
  Invalid

Bug description:
  When editing user settings ([username] -> Settings), the mouseover
  help value "Number of items to show per page (..." is not shown as
  translated.

  This failure can be seen in the Pseudo translation tool if it's
  patched with https://review.openstack.org/#/c/298379/

  and can also be seen in Japanese. Note that the segment is translated, but 
not displayed in Horizon:
  
https://github.com/openstack/horizon/blob/stable/mitaka/openstack_dashboard/locale/ja/LC_MESSAGES/django.po#L5832
  and
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/locale/ja/LC_MESSAGES/django.po#L5835
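
  For illustration, a minimal Django sketch of the failure mode and a
  deferred-interpolation fix using django.utils.functional.lazy (the
  field name and limit are hypothetical; this is not the actual Horizon
  patch):

      import six
      from django import forms
      from django.utils.functional import lazy
      from django.utils.translation import ugettext_lazy as _

      MAX_PAGE_SIZE = 1000  # hypothetical limit

      def _pagesize_help():
          # Interpolating at access time lets the active request
          # language be used for the translation.
          return _("Number of items to show per page (max %s)") % MAX_PAGE_SIZE

      class SettingsForm(forms.Form):
          # Broken variant: doing _("... %s") % MAX_PAGE_SIZE at
          # class-definition time coerces the lazy string once, in the
          # language active at import, so later requests never see
          # their own translation.
          pagesize = forms.IntegerField(
              help_text=lazy(_pagesize_help, six.text_type)())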

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1563021/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1283655] Re: Adds host_ip to hypervisor show API

2016-04-25 Thread Sharat Sharma
Assigning to myself since there has been no activity for a long time.
If anyone has issues, please tell me.

** Changed in: openstack-api-site
 Assignee: Lucky samadhiya (lucky-samadhiya) => Sharat Sharma 
(sharat-sharma)

** Changed in: openstack-api-site
   Status: Triaged => In Progress

** Project changed: openstack-api-site => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1283655

Title:
  Adds host_ip to hypervisor show API

Status in OpenStack Compute (nova):
  In Progress
Status in openstack-manuals:
  Invalid

Bug description:
  https://review.openstack.org/52733
  commit e05566de71f39acad3566fc31ba1053d84130c03
  Author: Jay Lau 
  Date:   Wed Feb 5 22:44:15 2014 +0800

  Adds host_ip to hypervisor show API
  
  After no-compute-fanout-to-scheduler, host_ip is stored in the
  compute_nodes table. The host IP address should be considered a
  hypervisor attribute, like hypervisor_type, hypervisor_version, etc.
  Since those attributes are all listed when calling "nova
  hypervisor-show host", we can also add "host_ip" as a new attribute
  in the output of this command.

  DocImpact
  1) Only administrators can view hypervisor detail in nova.
  2) It can help improve debugging capabilities for nova. For example,
  if an admin uses SimpleCIDRAffinityFilter, then after a VM is
  deployed the admin can check whether it landed on the desired host
  by checking the host's IP address via "nova hypervisor-show host".
  3) Add host_ip to the output of "nova hypervisor-show"
  
  Implement bp hypervisor-show-ip
  Change-Id: I006a504d030be1f47beb68a844647026a6daf0ce
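
  For illustration, a minimal sketch of reading the new attribute
  through python-novaclient, assuming admin credentials (the endpoint
  and credentials are hypothetical):

      from novaclient import client

      nova = client.Client('2', 'admin', 'secret', 'admin',
                           'http://controller:5000/v2.0')

      # "nova hypervisor-show host" first resolves the hypervisor by
      # hostname, then fetches its details:
      hyp = nova.hypervisors.search('host')[0]
      detail = nova.hypervisors.get(hyp.id)
      print(detail.hypervisor_type, detail.hypervisor_version,
            detail.host_ip)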

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1283655/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1283655] [NEW] Adds host_ip to hypervisor show API

2016-04-25 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

https://review.openstack.org/52733
commit e05566de71f39acad3566fc31ba1053d84130c03
Author: Jay Lau 
Date:   Wed Feb 5 22:44:15 2014 +0800

Adds host_ip to hypervisor show API

After no-compute-fanout-to-scheduler, host_ip is stored in the
compute_nodes table. The host IP address should be considered a
hypervisor attribute, like hypervisor_type, hypervisor_version, etc.
Since those attributes are all listed when calling "nova
hypervisor-show host", we can also add "host_ip" as a new attribute
in the output of this command.

DocImpact
1) Only administrators can view hypervisor detail in nova.
2) It can help improve debugging capabilities for nova. For example,
if an admin uses SimpleCIDRAffinityFilter, then after a VM is deployed
the admin can check whether it landed on the desired host by checking
the host's IP address via "nova hypervisor-show host".
3) Add host_ip to the output of "nova hypervisor-show"

Implement bp hypervisor-show-ip
Change-Id: I006a504d030be1f47beb68a844647026a6daf0ce

** Affects: nova
 Importance: Medium
 Assignee: Sharat Sharma (sharat-sharma)
 Status: In Progress

** Affects: openstack-manuals
 Importance: Medium
 Status: Invalid


** Tags: nova
-- 
Adds host_ip to hypervisor show API
https://bugs.launchpad.net/bugs/1283655
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475396] Re: Start Instances should not be enabled for a Running Instance

2016-04-25 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1475396

Title:
  Start Instances should not be enabled for a Running Instance

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  Start Instances gets enabled when any instance is selected via its
  check box, irrespective of the instance's power state. This is
  misleading and throws an error if the Start Instances operation is
  performed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1475396/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1574881] Re: use_helper_for_ns_read=False breaks dhcp agent and l3 agent when /var/run/netns doesn't exist

2016-04-25 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/237653
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=339a1ccbb931ff906143ee6339376836f6f4563e
Submitter: Jenkins
Branch: master

commit 339a1ccbb931ff906143ee6339376836f6f4563e
Author: Ryan Moats 
Date:   Tue Oct 20 15:51:37 2015 +

Revert "Improve performance of ensure_namespace"

This reverts commit 81823e86328e62850a89aef9f0b609bfc0a6dacd.

Unneeded optimization: this commit only improves execution
time on the order of milliseconds, which is less than 1% of
the total router update execution time at the network node.

This also

Closes-bug: #1574881

Change-Id: Icbcdf4725ba7d2e743bb6761c9799ae436bd953b


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1574881

Title:
  use_helper_for_ns_read=False breaks dhcp agent and l3 agent when
  /var/run/netns doesn't exist

Status in neutron:
  Fix Released

Bug description:
  A bug was introduced in https://review.openstack.org/#/c/227589/ .
  The idea of that patch was to improve performance by not shelling out
  to "ip netns list" to get a list of network namespaces and doing it in
  python instead (os.listdir).

  The iproute2 C code which implements "ip netns list" will first check
  if the "/var/run/netns" directory exists, before trying to enumerate
  the contents. The Python code tries to enumerate the directory
  contents (os.listdir), but doesn't check for the directory's
  existence. "ip netns add" would be able to create the directory.
  However, since the exception is thrown first, that code path is never
  reached.

  The result is that the agents are non-functional when the directory is
  not present, and are unable to recover on their own.

  
  When use_helper_for_ns_read is True (the default value), then the existence 
of the directory is a non-issue, as "ip netns list" is run instead and 
sidesteps the broken behavior.

  Steps to reproduce:

  1- Start with a machine with no /var/run/netns directory (such as a newly 
provisioned VM)
  2- Disable use_helper_for_ns_read.
  In devstack:
  [[post-config|$NEUTRON_CONF]]
  [agent]
  use_helper_for_ns_read=False

  3- Run stack.sh
  4- At this point, q-l3 errors should already start appearing in the logs
  5- Create a new network and subnetwork
  6- There will be stacktraces in the dhcp agent logs.
  7- Observe that no router or dhcp namespaces were created

  Expected behavior:

  - No errors in the logs
  - /var/run/netns directory and mountpoint created (if not yet present)
  - network namespaces are created

  One possible fix would be to merge
  (https://review.openstack.org/#/c/237653/) and restore the old
  behavior.
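
  For illustration, a minimal sketch of the failure and one defensive
  fix, using the names from the traceback below (not the merged Neutron
  code):

      import errno
      import os

      IP_NETNS_PATH = '/var/run/netns'

      def namespace_exists(name):
          # os.listdir() raises OSError(ENOENT) when /var/run/netns has
          # never been created (e.g. a freshly provisioned host); the
          # iproute2 helper checks for the directory first, which is why
          # use_helper_for_ns_read=True sidesteps the problem.
          try:
              return name in os.listdir(IP_NETNS_PATH)
          except OSError as e:
              if e.errno == errno.ENOENT:
                  return False  # no directory means no namespaces yet
              raise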


  q-dhcp.log:

  Unable to plug DHCP port for network 37b27bd3-5072-4c3a-b26d-b7b67c2bc788. 
Releasing port.
  2016-04-22 23:53:21.492 TRACE neutron.agent.linux.dhcp Traceback (most recent 
call last):
  2016-04-22 23:53:21.492 TRACE neutron.agent.linux.dhcp   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 1234, in setup
  2016-04-22 23:53:21.492 TRACE neutron.agent.linux.dhcp 
mtu=network.get('mtu'))
  2016-04-22 23:53:21.492 TRACE neutron.agent.linux.dhcp   File 
"/opt/stack/neutron/neutron/agent/linux/interface.py", line 250, in plug
  2016-04-22 23:53:21.492 TRACE neutron.agent.linux.dhcp bridge, namespace, 
prefix, mtu)
  2016-04-22 23:53:21.492 TRACE neutron.agent.linux.dhcp   File 
"/opt/stack/neutron/neutron/agent/linux/interface.py", line 354, in plug_new
  2016-04-22 23:53:21.492 TRACE neutron.agent.linux.dhcp namespace_obj = 
ip.ensure_namespace(namespace)
  2016-04-22 23:53:21.492 TRACE neutron.agent.linux.dhcp   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 195, in 
ensure_namespace
  2016-04-22 23:53:21.492 TRACE neutron.agent.linux.dhcp if not 
self.netns.exists(name):
  2016-04-22 23:53:21.492 TRACE neutron.agent.linux.dhcp   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 883, in exists
  2016-04-22 23:53:21.492 TRACE neutron.agent.linux.dhcp return name in 
os.listdir(IP_NETNS_PATH)
  2016-04-22 23:53:21.492 TRACE neutron.agent.linux.dhcp OSError: [Errno 2] No 
such file or directory: '/var/run/netns'


  Unable to enable dhcp for 37b27bd3-5072-4c3a-b26d-b7b67c2bc788.
  2016-04-22 23:53:21.614 TRACE neutron.agent.dhcp.agent Traceback (most recent 
call last):
  2016-04-22 23:53:21.614 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/dhcp/agent.py", line 112, in call_driver
  2016-04-22 23:53:21.614 TRACE neutron.agent.dhcp.agent getattr(driver, 
action)(**action_kwargs)
  2016-04-22 23:53:21.614 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 209, in enable
  2016-04-22 23:53:21.614 TRACE neutron.agent.dhcp.agent interface_name = 

[Yahoo-eng-team] [Bug 1484586] Re: file injection fails when using fallback method

2016-04-25 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/215613
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=2c1b19761b3d960055ced11558dda22d022d77f4
Submitter: Jenkins
Branch: master

commit 2c1b19761b3d960055ced11558dda22d022d77f4
Author: Alexis Lee 
Date:   Fri Aug 21 13:58:06 2015 +0100

Wait for device to be mapped

There's a race condition when trying to perform file injection without
libguestfs, which causes a fallback to nbd device. Although the kpartx
command succeeds, it does so after the code has tested for success, so
Nova thinks it failed.

Retry a few times to avoid this.

Co-Authored-By: Paul Carlton 
Change-Id: Ie5c186562475cd56c55520ad7123f47a0130b2a4
Closes-Bug: #1428639
Closes-Bug: #1484586


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1484586

Title:
  file injection fails when using fallback method

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Trying to perform file injection without libguestfs, i.e. fallback to
  using nbd.

  2015-08-13 13:21:21 43295 DEBUG oslo_concurrency.processutils 
[req-e70d20d6-f42c-4d3e-8598-4dd1a76e09f5 61e0c3d7a0204bdeb647498c47599bc0 
c942c664c4024ce4b5fe2bf8c3a21a3c] Running cmd (subprocess): sudo nova-rootwrap 
/opt/stack/service/nova-compute/etc/nova/rootwrap.conf qemu-nbd -c /dev/nbd8 
//var/lib/nova/instances/e8cb4369-adf8-4e97-ad75-9d181d3c9dac/disk execute 
/opt/stack/venv/nova-20150805T111039Z/lib/python2.7/site-packages/oslo_concurrency/processutils.py:199
  2015-08-13 13:21:21 43295 DEBUG oslo_concurrency.processutils 
[req-e70d20d6-f42c-4d3e-8598-4dd1a76e09f5 61e0c3d7a0204bdeb647498c47599bc0 
c942c664c4024ce4b5fe2bf8c3a21a3c] CMD "sudo nova-rootwrap 
/opt/stack/service/nova-compute/etc/nova/rootwrap.conf qemu-nbd -c /dev/nbd8 
//var/lib/nova/instances/e8cb4369-adf8-4e97-ad75-9d181d3c9dac/disk" returned: 0 
in 0.096s execute 
/opt/stack/venv/nova-20150805T111039Z/lib/python2.7/site-packages/oslo_concurrency/processutils.py:225
  2015-08-13 13:21:21 43295 DEBUG oslo_concurrency.lockutils 
[req-e70d20d6-f42c-4d3e-8598-4dd1a76e09f5 61e0c3d7a0204bdeb647498c47599bc0 
c942c664c4024ce4b5fe2bf8c3a21a3c] Lock "nbd-allocation-lock" released by 
"_inner_get_dev" :: held 0.099s inner 
/opt/stack/venv/nova-20150805T111039Z/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:456
  2015-08-13 13:21:21 43295 DEBUG nova.virt.disk.mount.api 
[req-e70d20d6-f42c-4d3e-8598-4dd1a76e09f5 61e0c3d7a0204bdeb647498c47599bc0 
c942c664c4024ce4b5fe2bf8c3a21a3c] Map dev /dev/nbd8 map_dev 
/opt/stack/venv/nova-20150805T111039Z/lib/python2.7/site-packages/nova/virt/disk/mount/api.py:140
  2015-08-13 13:21:21 43295 DEBUG oslo_concurrency.processutils 
[req-e70d20d6-f42c-4d3e-8598-4dd1a76e09f5 61e0c3d7a0204bdeb647498c47599bc0 
c942c664c4024ce4b5fe2bf8c3a21a3c] Running cmd (subprocess): sudo nova-rootwrap 
/opt/stack/service/nova-compute/etc/nova/rootwrap.conf kpartx -a /dev/nbd8 
execute 
/opt/stack/venv/nova-20150805T111039Z/lib/python2.7/site-packages/oslo_concurrency/processutils.py:199
  2015-08-13 13:21:21 43295 DEBUG oslo_concurrency.processutils 
[req-e70d20d6-f42c-4d3e-8598-4dd1a76e09f5 61e0c3d7a0204bdeb647498c47599bc0 
c942c664c4024ce4b5fe2bf8c3a21a3c] CMD "sudo nova-rootwrap 
/opt/stack/service/nova-compute/etc/nova/rootwrap.conf kpartx -a /dev/nbd8" 
returned: 0 in 0.093s execute 
/opt/stack/venv/nova-20150805T111039Z/lib/python2.7/site-packages/oslo_concurrency/processutils.py:225

  2015-08-13 13:21:21 43295 DEBUG nova.virt.disk.mount.api [req-e70d20d6
  -f42c-4d3e-8598-4dd1a76e09f5 61e0c3d7a0204bdeb647498c47599bc0
  c942c664c4024ce4b5fe2bf8c3a21a3c] Fail to mount, tearing back down
  do_mount /opt/stack/venv/nova-20150805T111039Z/lib/python2.7/site-
  packages/nova/virt/disk/mount/api.py:223

  Although the kpartx command works, the check for the file path fails,
  generating an error.

  Inserting a short sleep before checking for the path seems to work.
  This issue is obviously timing related and I do not encounter this
  when running devstack on a libvirt host.  However it occurs on some of
  the baremetal hypervisors in our lab very reliably.
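
  A minimal sketch of the retry approach the fix takes (illustrative,
  not the exact Nova code): poll until the mapped device node appears
  instead of checking exactly once.

      import os
      import time

      def wait_for_mapped_device(path, attempts=5, interval=1.0):
          # kpartx can return success before the partition device node
          # (e.g. /dev/mapper/nbd8p1) exists, so a single immediate
          # check races with device-mapper/udev.
          for _ in range(attempts):
              if os.path.exists(path):
                  return
              time.sleep(interval)
          raise RuntimeError('device %s never appeared' % path)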

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1484586/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1574881] [NEW] Optimization for use_helper_for_ns_read crashes dhcp agent and l3 agent

2016-04-25 Thread Stephen Eilert
Public bug reported:

A bug was introduced in https://review.openstack.org/#/c/227589/ .  The
idea of that patch was to improve performance by not shelling out to "ip
netns list" to get a list of network namespaces and doing it in python
instead (os.listdir).

The iproute2 C code which implements "ip netns list" will first check if
the "/var/run/netns" directory exists, before trying to enumerate the
contents. The Python code tries to enumerate the directory contents
(os.listdir), but doesn't check for the directory's existence. "ip
netns add" would be able to create the directory. However, since the
exception is thrown first, that code path is never reached.

The result is that the agents are non-functional when the directory is
not present, and are unable to recover on their own.


When use_helper_for_ns_read is True (the default value), then the existence of 
the directory is a non-issue, as "ip netns list" is run instead and sidesteps 
the broken behavior.

Steps to reproduce:

1- Start with a machine with no /var/run/netns directory (such as a newly 
provisioned VM)
2- Disable use_helper_for_ns_read.
In devstack:
[[post-config|$NEUTRON_CONF]]
[agent]
use_helper_for_ns_read=False

3- Run stack.sh
4- At this point, q-l3 errors should already start appearing in the logs
5- Create a new network and subnetwork
6- There will be stacktraces in the dhcp agent logs.
7- Observe that no router or dhcp namespaces were created

Expected behavior:

- No errors in the logs
- /var/run/netns directory and mountpoint created (if not yet present)
- network namespaces are created

One possible fix would be to merge
(https://review.openstack.org/#/c/237653/) and restore the old behavior.


q-dhcp.log:

Unable to plug DHCP port for network 37b27bd3-5072-4c3a-b26d-b7b67c2bc788. 
Releasing port.
2016-04-22 23:53:21.492 TRACE neutron.agent.linux.dhcp Traceback (most recent 
call last):
2016-04-22 23:53:21.492 TRACE neutron.agent.linux.dhcp   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 1234, in setup
2016-04-22 23:53:21.492 TRACE neutron.agent.linux.dhcp 
mtu=network.get('mtu'))
2016-04-22 23:53:21.492 TRACE neutron.agent.linux.dhcp   File 
"/opt/stack/neutron/neutron/agent/linux/interface.py", line 250, in plug
2016-04-22 23:53:21.492 TRACE neutron.agent.linux.dhcp bridge, namespace, 
prefix, mtu)
2016-04-22 23:53:21.492 TRACE neutron.agent.linux.dhcp   File 
"/opt/stack/neutron/neutron/agent/linux/interface.py", line 354, in plug_new
2016-04-22 23:53:21.492 TRACE neutron.agent.linux.dhcp namespace_obj = 
ip.ensure_namespace(namespace)
2016-04-22 23:53:21.492 TRACE neutron.agent.linux.dhcp   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 195, in 
ensure_namespace
2016-04-22 23:53:21.492 TRACE neutron.agent.linux.dhcp if not 
self.netns.exists(name):
2016-04-22 23:53:21.492 TRACE neutron.agent.linux.dhcp   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 883, in exists
2016-04-22 23:53:21.492 TRACE neutron.agent.linux.dhcp return name in 
os.listdir(IP_NETNS_PATH)
2016-04-22 23:53:21.492 TRACE neutron.agent.linux.dhcp OSError: [Errno 2] No 
such file or directory: '/var/run/netns'


Unable to enable dhcp for 37b27bd3-5072-4c3a-b26d-b7b67c2bc788.
2016-04-22 23:53:21.614 TRACE neutron.agent.dhcp.agent Traceback (most recent 
call last):
2016-04-22 23:53:21.614 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/dhcp/agent.py", line 112, in call_driver
2016-04-22 23:53:21.614 TRACE neutron.agent.dhcp.agent getattr(driver, 
action)(**action_kwargs)
2016-04-22 23:53:21.614 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 209, in enable
2016-04-22 23:53:21.614 TRACE neutron.agent.dhcp.agent interface_name = 
self.device_manager.setup(self.network)
2016-04-22 23:53:21.614 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 1240, in setup
2016-04-22 23:53:21.614 TRACE neutron.agent.dhcp.agent 
self.plugin.release_dhcp_port(network.id, port.device_id)
2016-04-22 23:53:21.614 TRACE neutron.agent.dhcp.agent   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2016-04-22 23:53:21.614 TRACE neutron.agent.dhcp.agent self.force_reraise()
2016-04-22 23:53:21.614 TRACE neutron.agent.dhcp.agent   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-04-22 23:53:21.614 TRACE neutron.agent.dhcp.agent 
six.reraise(self.type_, self.value, self.tb)
2016-04-22 23:53:21.614 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 1234, in setup
2016-04-22 23:53:21.614 TRACE neutron.agent.dhcp.agent 
mtu=network.get('mtu'))

q-l3.log(repeatedly):

2016-04-22 23:54:14.659 ESC[01;31mERROR oslo_service.periodic_task
req-43829b71-bb53-48d1-826b-7421a9a3612e None None Error during 
L3NATAgentWithStateReport.periodic_sync_routers_task
2016-

[Yahoo-eng-team] [Bug 1559543] Re: cloud-init does not configure or start networking on gentoo

2016-04-25 Thread Robin H. Johnson
** Bug watch added: Gentoo Bugzilla #581212
   https://bugs.gentoo.org/show_bug.cgi?id=581212

** Also affects: cloud-init (Gentoo Linux) via
   https://bugs.gentoo.org/show_bug.cgi?id=581212
   Importance: Unknown
   Status: Unknown

** Tags added: gentoo

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1559543

Title:
  cloud-init does not configure or start networking on gentoo

Status in cloud-init:
  New
Status in cloud-init package in Gentoo Linux:
  Unknown

Bug description:
  the version of cloud-init I used was 0.7.6 as there are no newer
  versions to test with

  you can build an image to test with using diskimage-builder if you
  wish to test

  I'm also at castle so let me know if you want to meet up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1559543/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572999] Re: separated logs for failed integration tests

2016-04-25 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/307880
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=4619696ec4b4767058ae994821b6be73af10c09e
Submitter: Jenkins
Branch: master

commit 4619696ec4b4767058ae994821b6be73af10c09e
Author: Sergei Chipiga 
Date:   Tue Apr 19 18:31:08 2016 +0300

Attach test logs individually for each test

Reports-Example: Ic35c95e720211bce8659baeb0cd4470308e25ea4

Change-Id: Ie5d972d2a560d4f59666c49dc3bf22fdb48071e8
Depends-On: I124973d9adbaaacf5d3429e6f6684f15de27dc7f
Closes-Bug: #1572999


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1572999

Title:
  separated logs for failed integration tests

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  We should have separated logs for failed integration tests to make
  them more readable and extensible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1572999/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1568747] Re: Default SESSION_ENGINE in deployment.rst is out of sync with that in settings.py

2016-04-25 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/295213
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=3bf6e50eb676982b7554f77d91806d5088e7bff8
Submitter: Jenkins
Branch: master

commit 3bf6e50eb676982b7554f77d91806d5088e7bff8
Author: Bo Wang 
Date:   Mon Mar 21 19:15:51 2016 +0800

Default SESSION_ENGINE is not Local memory storage

Value of SESSION_ENGINE in settings.py had been changed from "cache" to
"signed_cookies" in patch: https://review.openstack.org/#/c/6473/12.

Accordingly, the info in deployment.rst is out of sync; fix it.

Closes-Bug: 1568747
Change-Id: Iaa104479dcf0e094e5c6e9c63c3a518064f6fb6e


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1568747

Title:
  Default SESSION_ENGINE in deployment.rst is out of sync with that in
  settings.py

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Value of SESSION_ENGINE in settings.py had been changed from "cache" to
  "signed_cookies" in patch: https://review.openstack.org/#/c/6473/12.

  Accordingly, the default SESSION_ENGINE documented in deployment.rst
  is out of sync; fix it.
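
  For reference, a sketch of the corresponding Django setting (the
  backend module path is Django's documented signed-cookies backend):

      SESSION_ENGINE = 'django.contrib.sessions.backends.signed_cookies'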

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1568747/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1512645] Re: Security groups incorrectly applied on new additional interfaces

2016-04-25 Thread Armando Migliaccio
correct

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1512645

Title:
  Security groups incorrectly applied on new additional interfaces

Status in neutron:
  Invalid

Bug description:
  When launching an instance with one network interface and enabling
  two security groups, everything works as it is supposed to.

  But when attaching additional network interfaces, only the default
  security group is applied to those new interfaces. The additional
  security group isn't enabled at all on those extra interfaces.

  We had to dig into the iptables chains to discover this behavior. Once
  adding the rule manually or adding them to the default security group
  everything is working fine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1512645/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1574750] [NEW] Full table scan on "ports" table lookup by "device_id"

2016-04-25 Thread Ilya Chukhnakov
Public bug reported:

The current Neutron database model does not define an index on the
Port.device_id column. However, observing the MySQL query log, one can
notice queries that would benefit from such an index:
# sed -n "/WHERE.*device_id/s/'[^']*'//gp" < 
/var/lib/mysql/$DB_HOSTNAME.log|sort|uniq -c
 34 WHERE ports.device_id IN ()
 78 WHERE ports.tenant_id IN () AND ports.device_id IN ()

Without that index the database is currently forced to use the
full-scan table access path (or the potentially less selective
'tenant_id' index for the second query), which has suboptimal
performance.
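
For illustration, a sketch of the kind of Alembic migration that would
add the missing index (the index name is illustrative, not the actual
Neutron change):

    from alembic import op

    def upgrade():
        # Adding the index lets the device_id lookups above use an
        # index range scan instead of a full table scan.
        op.create_index('ix_ports_device_id', 'ports', ['device_id'])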

Pre-conditions: Devstack (master) configured with Neutron networking
(from Devstack guide
http://docs.openstack.org/developer/devstack/guides/neutron.html
#devstack-configuration).
Neutron@master:91d95197d892356bd1ab8a96966c11e97d78441b

Steps to reproduce:
0. enable MySQL query logging unless already enabled (set global general_log = 
'ON')
1. launch new instance
2. observe MySQL log file for queries having ports.device_id in WHERE clause
3. run EXPLAIN query plan for such queries and observe the full scan table 
access path for 'ports' table

** Affects: neutron
 Importance: Medium
 Assignee: Ilya Chukhnakov (ichukhnakov)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Ilya Chukhnakov (ichukhnakov)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1574750

Title:
  Full table scan on "ports" table lookup by "device_id"

Status in neutron:
  In Progress

Bug description:
  The current Neutron database model does not define an index on the
  Port.device_id column. However, observing the MySQL query log, one
  can notice queries that would benefit from such an index:
  # sed -n "/WHERE.*device_id/s/'[^']*'//gp" < 
/var/lib/mysql/$DB_HOSTNAME.log|sort|uniq -c
   34 WHERE ports.device_id IN ()
   78 WHERE ports.tenant_id IN () AND ports.device_id IN ()

  Without that index the database is currently forced to use the
  full-scan table access path (or the potentially less selective
  'tenant_id' index for the second query), which has suboptimal
  performance.

  Pre-conditions: Devstack (master) configured with Neutron networking
  (from Devstack guide
  http://docs.openstack.org/developer/devstack/guides/neutron.html
  #devstack-configuration).
  Neutron@master:91d95197d892356bd1ab8a96966c11e97d78441b

  Steps to reproduce:
  0. enable MySQL query logging unless already enabled (set global general_log 
= 'ON')
  1. launch new instance
  2. observe MySQL log file for queries having ports.device_id in WHERE clause
  3. run EXPLAIN query plan for such queries and observe the full scan table 
access path for 'ports' table

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1574750/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1552115] Re: NSXv LBaaS driver Failed to re-start HA-Load-Balancer

2016-04-25 Thread Kobi Samoray
** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1552115

Title:
  NSXv LBaaS driver Failed to re-start HA-Load-Balancer

Status in neutron:
  Fix Released

Bug description:
  - OpenStack Kilo
  - NSXv 6.2.1

  I'm trying to create an LBaaS VIP on port 22 and am getting the
  following error in neutron.log:

  cannot bind socket [192.168.112.103:22]

  For port 2022 all works fine.

  I'm using "exclusive" edges in HA mode.

  Full error:
  http://paste.openstack.org/show/488907/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1552115/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1574734] [NEW] wait for the user menu to be opened before clicking its items in integration tests

2016-04-25 Thread Sergei Chipiga
Public bug reported:

We should be sure that the user menu is opened and visible before
clicking its items.
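
A minimal Selenium sketch of such an explicit wait, assuming a
WebDriver instance `driver` and an illustrative locator (this is not
Horizon's actual page-object code):

    from selenium.webdriver.common.by import By
    from selenium.webdriver.support import expected_conditions as EC
    from selenium.webdriver.support.ui import WebDriverWait

    # Block (up to 10s) until the menu is actually visible, then click.
    menu = WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.CSS_SELECTOR, '.user-menu')))
    menu.find_element(By.LINK_TEXT, 'Settings').click()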

** Affects: horizon
 Importance: Undecided
 Assignee: Sergei Chipiga (schipiga)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1574734

Title:
  wait for the user menu to be opened before clicking its items in
  integration tests

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  We should be sure that the user menu is opened and visible before
  clicking its items.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1574734/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1574703] [NEW] Stacks page isn't refreshed after stack deletion sometimes

2016-04-25 Thread Sergei Chipiga
Public bug reported:

Autotests detected http://logs.openstack.org/58/308458/12/check/gate-
horizon-dsvm-integration/08f3893/screenshots/

Steps:
- Go to Orchestration -> Stacks
- Launch stack
- Delete stack

Expected result:
- Stack is deleted, table is empty

Actual result:
- Horizon shows that the stack is still present, but the heat logs show
a response indicating there are no stacks:
http://logs.openstack.org/58/308458/12/check/gate-horizon-dsvm-integration/08f3893/logs/screen-h-api.txt.gz#_2016-04-25_14_10_50_814

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1574703

Title:
  Stacks page isn't refreshed after stack deletion sometimes

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Autotests detected http://logs.openstack.org/58/308458/12/check/gate-
  horizon-dsvm-integration/08f3893/screenshots/

  Steps:
  - Go to Orchestration -> Stacks
  - Launch stack
  - Delete stack

  Expected result:
  - Stack is deleted, table is empty

  Actual result:
  - Horizon shows that the stack is still present, but the heat logs
  show a response indicating there are no stacks:
  
http://logs.openstack.org/58/308458/12/check/gate-horizon-dsvm-integration/08f3893/logs/screen-h-api.txt.gz#_2016-04-25_14_10_50_814

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1574703/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1570633] Re: [2.0 beta 2] Nodes fail to remain powered after Trusty commission with "Allow SSH" selected

2016-04-25 Thread Scott Moser
** Changed in: cloud-init
   Importance: Undecided => Medium

** Changed in: cloud-init
   Status: Confirmed => Fix Released

** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu)
   Importance: Undecided => Medium

** Changed in: cloud-init (Ubuntu)
   Status: New => Fix Released

** Also affects: cloud-init (Ubuntu Trusty)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1570633

Title:
  [2.0 beta 2] Nodes fail to remain powered after Trusty commission with
  "Allow SSH" selected

Status in cloud-init:
  Fix Released
Status in MAAS:
  Triaged
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Trusty:
  New

Bug description:
  Build Version/Date: MAAS 2.0 Beta2
  Environment used for testing: Xenial

  Summary: 
  When commissioning nodes with the "Allow SSH" option selected, at least 50% 
of nodes fail to remain powered and in "Ready" state

  Steps to Reproduce: 
  Enlist 5+ nodes
  Commission all nodes at once

  Expected result: 
  All nodes Ready and powered

  Actual result:
  50-75% of nodes are Ready but powered off

  Syslog shows the following errors
  Apr 14 19:03:41 donphan sh[28839]: 2016-04-14 19:03:41+ 
[RemoteOriginReadSession (UDP)] Got error: 
  Apr 14 19:05:37 donphan sh[28839]: 2016-04-14 19:05:37+ 
[RemoteOriginReadSession (UDP)] Got error: 
  Apr 14 19:05:37 donphan sh[28839]: 2016-04-14 19:05:37+ 
[RemoteOriginReadSession (UDP)] Got error: 
  Apr 14 19:07:12 donphan sh[28839]: 2016-04-14 19:07:12+ 
[RemoteOriginReadSession (UDP)] Got error: 
  Apr 14 19:07:48 donphan sh[28839]: 2016-04-14 19:07:48+ 
[RemoteOriginReadSession (UDP)] Got error: 
  Apr 14 19:08:08 donphan sh[28839]: 2016-04-14 19:08:08+ 
[RemoteOriginReadSession (UDP)] Got error: 
  Apr 14 19:11:37 donphan sh[28839]: 2016-04-14 19:11:37+ 
[RemoteOriginReadSession (UDP)] Got error: 
  Apr 14 19:11:44 donphan sh[28839]: 2016-04-14 19:11:44+ 
[RemoteOriginReadSession (UDP)] Got error: 
  Apr 14 19:11:47 donphan sh[28839]: 2016-04-14 19:11:47+ 
[RemoteOriginReadSession (UDP)] Got error: 
  Apr 14 19:12:24 donphan sh[28839]: 2016-04-14 19:12:24+ 
[RemoteOriginReadSession (UDP)] Got error: 
  Apr 14 19:12:24 donphan sh[28839]: 2016-04-14 19:12:24+ 
[RemoteOriginReadSession (UDP)] Got error: 
  Apr 14 19:13:14 donphan sh[28839]: 2016-04-14 19:13:14+ 
[RemoteOriginReadSession (UDP)] Got error: 
  Apr 14 19:13:17 donphan sh[28575]: Failure: 
twisted.internet.error.ConnectionDone: Connection was closed cleanly.
  Apr 14 19:13:18 donphan sh[28575]: Failure: 
twisted.internet.error.ConnectionDone: Connection was closed cleanly.
  Apr 14 19:43:41 donphan sh[28839]: 2016-04-14 19:43:41+ 
[RemoteOriginReadSession (UDP)] Got error: 
  Apr 14 19:43:50 donphan sh[28839]: 2016-04-14 19:43:50+ 
[RemoteOriginReadSession (UDP)] Got error: 
  Apr 14 19:43:57 donphan sh[28839]: 2016-04-14 19:43:57+ 
[RemoteOriginReadSession (UDP)] Got error: 
  Apr 14 19:44:05 donphan sh[28839]: 2016-04-14 19:44:05+ 
[RemoteOriginReadSession (UDP)] Got error: 
  Apr 14 19:44:06 donphan sh[28839]: 2016-04-14 19:44:06+ 
[RemoteOriginReadSession (UDP)] Got error: 
  Apr 14 19:45:10 donphan sh[28839]: 2016-04-14 19:45:10+ 
[RemoteOriginReadSession (UDP)] Got error: 
  Apr 14 19:46:19 donphan sh[28575]: #011twisted.internet.error.ConnectionDone: 
Connection was closed cleanly.
  Apr 14 21:34:08 donphan sh[28839]: 2016-04-14 21:34:08+ 
[RemoteOriginReadSession (UDP)] Got error: 
  Apr 14 21:34:09 donphan sh[28839]: 2016-04-14 21:34:09+ 
[RemoteOriginReadSession (UDP)] Got error: 
  Apr 14 21:34:20 donphan sh[28839]: 2016-04-14 21:34:20+ 
[RemoteOriginReadSession (UDP)] Got error: 
  Apr 14 21:34:35 donphan sh[28839]: 2016-04-14 21:34:35+ 
[RemoteOriginReadSession (UDP)] Got error: 
  Apr 14 21:34:36 donphan sh[28839]: 2016-04-14 21:34:36+ 
[RemoteOriginReadSession (UDP)] Got error: 
  Apr 14 21:35:05 donphan sh[28575]: Failure: 
twisted.internet.error.ConnectionDone: Connection was closed cleanly.
  Apr 14 21:35:46 donphan sh[28839]: 2016-04-14 21:35:46+ 
[RemoteOriginReadSession (UDP)] Got error: 
  Apr 14 21:36:51 donphan sh[28575]: #011twisted.internet.error.ConnectionDone: 
Connection was closed cleanly.
  Apr 14 21:37:00 donphan sh[28575]: #011twisted.internet.error.ConnectionDone: 
Connection was closed cleanly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1570633/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1574694] [NEW] Port dns_name is updated when dns-integration extension is disabled

2016-04-25 Thread Elena Ezhova
Public bug reported:

When a port is attached to an instance, its dns_name is updated even if
the dns-integration extension is not enabled:

$:~/devstack$ neutron port-create private 
Created a new port:
+-----------------------+----------------------------------------------------------------------------------+
| Field                 | Value                                                                            |
+-----------------------+----------------------------------------------------------------------------------+
| admin_state_up        | True                                                                             |
| allowed_address_pairs |                                                                                  |
| binding:host_id       |                                                                                  |
| binding:profile       | {}                                                                               |
| binding:vif_details   | {}                                                                               |
| binding:vif_type      | unbound                                                                          |
| binding:vnic_type     | normal                                                                           |
| created_at            | 2016-04-25T14:42:57                                                              |
| description           |                                                                                  |
| device_id             |                                                                                  |
| device_owner          |                                                                                  |
| dns_name              |                                                                                  |
| extra_dhcp_opts       |                                                                                  |
| fixed_ips             | {"subnet_id": "32ba7468-e4c2-4feb-9e0f-de983f7ced52", "ip_address": "10.0.0.5"} |
| id                    | 3a7facc6-cda5-46d8-bc67-c880406e338e                                             |
| mac_address           | fa:16:3e:6a:25:37                                                                |
| name                  |                                                                                  |
| network_id            | adc6b713-a44d-43c9-9366-4564c32ff41a                                             |
| port_security_enabled | True                                                                             |
| security_groups       | a61295bf-6751-42ed-ab70-73c0b42a09c9                                             |
| status                | DOWN                                                                             |
| tenant_id             | 801a523213aa4168adba27231095c535                                                 |
| updated_at            | 2016-04-25T14:42:57                                                              |
+-----------------------+----------------------------------------------------------------------------------+
$:~/devstack$ nova interface-attach --port-id 3a7facc6-cda5-46d8-bc67-c880406e338e test
$:~/devstack$ neutron port-show 3a7facc6-cda5-46d8-bc67-c880406e338e

 
+-----------------------+----------------------------------------------------------------------------------+
| Field                 | Value                                                                            |
+-----------------------+----------------------------------------------------------------------------------+
| admin_state_up        | True                                                                             |
| allowed_address_pairs |                                                                                  |
| binding:host_id       | eezhova-devstack-2                                                               |
| binding:profile       | {}                                                                               |
| binding:vif_details   | {"port_filter": true, "ovs_hybrid_plug": true}                                   |
| binding:vif_type      | ovs                                                                              |
| binding:vnic_type     | normal                                                                           |
| created_at            | 2016-04-25T14:42:57                                                              |
| description           |                                                                                  |
| device_id             | a251a60a-e98d-4f46-8288-d45c986874a1                                             |
| device_owner          | compute:None                                                                     |
| dns_name              | test

[Yahoo-eng-team] [Bug 1573095] Re: 16.04 cloud image hangs at first boot

2016-04-25 Thread Dan Watkins
Hi zero, Kenneth, Nick,

Thanks for reporting and confirming this bug! Could one of you include a
list of instructions to reliably reproduce this, please? That will make
it much easier for someone investigating the bug to be sure that they
are hitting the same issue that you are. :)


Thanks,

Dan

** Package changed: ubuntu => cloud-images

** Changed in: cloud-images
   Status: Confirmed => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1573095

Title:
  16.04 cloud image hangs at first boot

Status in cloud-images:
  Incomplete
Status in cloud-init:
  New

Bug description:
  I tried to launch an Ubuntu 16.04 cloud image within KVM.
  The image does not boot up and hangs at

  "Btrfs loaded"

  Hypervisor env is Proxmox 4.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-images/+bug/1573095/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1574558] [NEW] UEFI - instance terminates after boot

2016-04-25 Thread Chung Chih, Hung
Public bug reported:

Instance terminates after boot.
The attached file contains the qemu instance log.

Libvirt needs a UEFI OS boot loader.
The following link shows how to configure the OS boot loader in libvirt:
https://libvirt.org/formatdomain.html#elementsOSBIOS
Nova will try to access the UEFI boot loader with read-only permission.
This causes libvirt to behave in two different ways, according to the
content of the nvram element.

When the nvram element does not specify a template attribute, libvirt
reads the nvram option in the qemu config, which is saved in the
libvirt configuration folder. The nvram option stores key-value pairs,
where the key is the OS boot loader path and the value is the boot
loader variables path.

When the nvram element does specify a template attribute, which is the
path to the boot loader variables file, libvirt copies that file into
libvirt's nvram folder. Then qemu boots with the boot loader and the
boot loader variables.

Let us check the XML element for the OS boot loader:

    <os>
      <type>hvm</type>
      <loader readonly='yes' type='pflash'>/opt/ovmf/OVMF_CODE.fd</loader>
      <nvram template='/opt/ovmf/OVMF_CODE.fd'>/var/lib/libvirt/qemu/nvram/instance-002c_VARS.fd</nvram>
    </os>
We can see that libvirt will try to copy /opt/ovmf/OVMF_CODE.fd to
/var/lib/libvirt/qemu/nvram/instance-002c_VARS.fd.
Nova specified the wrong value in the template attribute; it should be
/opt/ovmf/OVMF_VARS.fd instead of /opt/ovmf/OVMF_CODE.fd.

We can add a uefi_nvram_override option, whose keys are boot loader
paths and whose values are boot loader variables paths.
If the boot loader is not present in this option, simply do not specify
the template attribute.
If the boot loader is present in this option, specify the template
attribute with the corresponding value.
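
A minimal sketch of that proposal (the option name comes from this
report; it is not an existing Nova configuration option):

    # Hypothetical mapping proposed above: loader path -> variables path.
    uefi_nvram_override = {
        '/opt/ovmf/OVMF_CODE.fd': '/opt/ovmf/OVMF_VARS.fd',
    }

    def nvram_template(loader_path):
        # None means: omit the template attribute from the nvram element.
        return uefi_nvram_override.get(loader_path)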

** Affects: nova
 Importance: Undecided
 Assignee: Chung Chih, Hung (lyanchih)
 Status: New

** Attachment added: "qemu.log"
   https://bugs.launchpad.net/bugs/1574558/+attachment/4646307/+files/qemu.log

** Changed in: nova
 Assignee: (unassigned) => Chung Chih, Hung (lyanchih)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1574558

Title:
  UEFI - instance terminates after boot

Status in OpenStack Compute (nova):
  New

Bug description:
  Instance terminates after boot.
  The attached file contains the qemu instance log.

  Libvirt needs a UEFI OS boot loader.
  The following link shows how to configure the OS boot loader in libvirt:
  https://libvirt.org/formatdomain.html#elementsOSBIOS
  Nova will try to access the UEFI boot loader with read-only permission.
  This causes libvirt to behave in two different ways, according to the
  content of the nvram element.

  When the nvram element does not specify a template attribute, libvirt
  reads the nvram option in the qemu config, which is saved in the
  libvirt configuration folder. The nvram option stores key-value
  pairs, where the key is the OS boot loader path and the value is the
  boot loader variables path.

  When the nvram element does specify a template attribute, which is
  the path to the boot loader variables file, libvirt copies that file
  into libvirt's nvram folder. Then qemu boots with the boot loader and
  the boot loader variables.

  Let us check the XML element for the OS boot loader:

      <os>
        <type>hvm</type>
        <loader readonly='yes' type='pflash'>/opt/ovmf/OVMF_CODE.fd</loader>
        <nvram template='/opt/ovmf/OVMF_CODE.fd'>/var/lib/libvirt/qemu/nvram/instance-002c_VARS.fd</nvram>
      </os>

  We can see that libvirt will try to copy /opt/ovmf/OVMF_CODE.fd to
  /var/lib/libvirt/qemu/nvram/instance-002c_VARS.fd.
  Nova specified the wrong value in the template attribute; it should
  be /opt/ovmf/OVMF_VARS.fd instead of /opt/ovmf/OVMF_CODE.fd.

  We can add a uefi_nvram_override option, whose keys are boot loader
  paths and whose values are boot loader variables paths.
  If the boot loader is not present in this option, simply do not
  specify the template attribute.
  If the boot loader is present in this option, specify the template
  attribute with the corresponding value.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1574558/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1574531] [NEW] XenAPI doesn't support GPT partition table

2016-04-25 Thread huan
Public bug reported:

Description
===========
Boot an instance with a FreeBSD image using the OpenStack Liberty
stable release.

Steps
=====
I downloaded a FreeBSD image from its official website and uploaded it
via glance.
I tried to boot an instance with this image, but it always fails at
auto-configuring the disk; see xenapi/vm_utils.py: _auto_configure_disk()

Expect:
=======
Boot instance successfully

Actual:
=======
Failed to boot instance

Logs


2016-04-20 05:04:10.043 ERROR nova.utils 
[req-7f9d8900-a3c4-4e9e-93dc-cd71d7263895 demo demo] [instance: 
d970c4d0-c112-4b6e-b559-9f38e91c1055] Failed to spawn, rolling back
2016-04-20 05:04:10.043 TRACE nova.utils [instance: 
d970c4d0-c112-4b6e-b559-9f38e91c1055] Traceback (most recent call last):
2016-04-20 05:04:10.043 TRACE nova.utils [instance: 
d970c4d0-c112-4b6e-b559-9f38e91c1055] File 
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 570, in _spawn
2016-04-20 05:04:10.043 TRACE nova.utils [instance: 
d970c4d0-c112-4b6e-b559-9f38e91c1055] attach_devices_step(undo_mgr, vm_ref, 
vdis, disk_image_type)
2016-04-20 05:04:10.043 TRACE nova.utils [instance: 
d970c4d0-c112-4b6e-b559-9f38e91c1055] File 
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 129, in inner
2016-04-20 05:04:10.043 TRACE nova.utils [instance: 
d970c4d0-c112-4b6e-b559-9f38e91c1055] rv = f(*args, **kwargs)
2016-04-20 05:04:10.043 TRACE nova.utils [instance: 
d970c4d0-c112-4b6e-b559-9f38e91c1055] File 
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 503, in attach_devices_step
2016-04-20 05:04:10.043 TRACE nova.utils [instance: 
d970c4d0-c112-4b6e-b559-9f38e91c1055] attach_disks(undo_mgr, vm_ref, vdis, 
disk_image_type)
2016-04-20 05:04:10.043 TRACE nova.utils [instance: 
d970c4d0-c112-4b6e-b559-9f38e91c1055] File 
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 450, in attach_disks
2016-04-20 05:04:10.043 TRACE nova.utils [instance: 
d970c4d0-c112-4b6e-b559-9f38e91c1055] admin_password, injected_files)
2016-04-20 05:04:10.043 TRACE nova.utils [instance: 
d970c4d0-c112-4b6e-b559-9f38e91c1055] File 
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 748, in _attach_disks
2016-04-20 05:04:10.043 TRACE nova.utils [instance: 
d970c4d0-c112-4b6e-b559-9f38e91c1055] flavor.root_gb)
2016-04-20 05:04:10.043 TRACE nova.utils [instance: 
d970c4d0-c112-4b6e-b559-9f38e91c1055] File 
"/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 926, in 
try_auto_configure_disk
2016-04-20 05:04:10.043 TRACE nova.utils [instance: 
d970c4d0-c112-4b6e-b559-9f38e91c1055] _auto_configure_disk(session, vdi_ref, 
new_gb)
2016-04-20 05:04:10.043 TRACE nova.utils [instance: 
d970c4d0-c112-4b6e-b559-9f38e91c1055] File 
"/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 904, in 
_auto_configure_disk
2016-04-20 05:04:10.043 TRACE nova.utils [instance: 
d970c4d0-c112-4b6e-b559-9f38e91c1055] partitions = _get_partitions(dev)
2016-04-20 05:04:10.043 TRACE nova.utils [instance: 
d970c4d0-c112-4b6e-b559-9f38e91c1055] File 
"/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 2197, in _get_partitions
2016-04-20 05:04:10.043 TRACE nova.utils [instance: 
d970c4d0-c112-4b6e-b559-9f38e91c1055] num, start, end, size, fstype, name, 
flags = line.split(':')
2016-04-20 05:04:10.043 TRACE nova.utils [instance: 
d970c4d0-c112-4b6e-b559-9f38e91c1055] ValueError: need more than 1 value to 
unpack
2016-04-20 05:04:10.043 TRACE nova.utils [instance: 
d970c4d0-c112-4b6e-b559-9f38e91c1055]

With some additional debug logs added, you can see:
2016-04-20 05:04:09.322 DEBUG nova.virt.xenapi.vm_utils 
[req-7f9d8900-a3c4-4e9e-93dc-cd71d7263895 demo demo]
Error: The backup GPT table is corrupt, but the primary appears OK, so that 
will be used.
Warning: Not all of the space available to /dev/xvdb appears to be used, you 
can fix the GPT to use all of the space (an extra 18874001 blocks) or continue 
with the current setting? 
BYT;
/dev/xvdb:62914560s:xvd:512:512:gpt:Xen Virtual Block Device;
1:3s:170s:168s::bootfs:;
2:171s:2097322s:2097152s::swapfs:;
3:2097323s:44040362s:41943040s:freebsd-ufs:rootfs:;

This is because the XenAPI driver doesn't currently support GPT
partition tables.
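
For illustration, a hedged sketch of parsing parted's machine-readable
output so that GPT warning prose does not break the field split (this
is not the actual Nova fix):

    def parse_partitions(parted_output):
        """Best-effort parse of 'parted --machine' style output."""
        partitions = []
        for line in parted_output.splitlines():
            fields = line.strip().rstrip(';').split(':')
            # A partition record has exactly 7 fields and a numeric
            # first field; "Error:"/"Warning:" prose, "BYT", and the
            # device summary line all fail one of these checks.
            if len(fields) != 7 or not fields[0].isdigit():
                continue
            num, start, end, size, fstype, name, flags = fields
            partitions.append((int(num), start, end, size, fstype,
                               name, flags))
        return partitions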

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1574531

Title:
  XenAPI doesn't support GPT partition table

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===========
  Boot an instance with a FreeBSD image using the OpenStack Liberty
  stable release.

  Steps
  =====
  I downloaded a FreeBSD image from its official website and uploaded
  it via glance.
  I tried to boot an instance with this image, but it always fails at
  auto-configuring the disk; see xenapi/vm_utils.py:
  _auto_configure_disk()

  Expect:
  =======
  Boot instance successfully

  Actual:
  =======
  Failed to boot instance

  Logs
  

  2016-04-20 05:04:10.043 ERROR nova.utils 
[req-7f9d8900

[Yahoo-eng-team] [Bug 1567743] Re: Quota tests are not covered enough for each resource

2016-04-25 Thread Maho Koshiya
** Changed in: neutron
   Status: Fix Released => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1567743

Title:
  Quota tests are not covered enough for each resource

Status in neutron:
  In Progress

Bug description:
  The quota tests for the case of creating resources over the limit
  value are not covered enough in unit tests/functional tests.

  A part of the quota test for networks already exists.
  But such tests do not exist for subnet/port/router/security
  group/security group rule/floatingip.

  These tests are necessary to prevent users from creating resources
  over quota by mistake.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1567743/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1574512] [NEW] python-glanceclient doesn't support $HOME while uploading image

2016-04-25 Thread Eli Qiao
Public bug reported:

~/devstack$ glance --debug image-create --name ubuntu-16 --visibility public 
--disk-format=qcow2 --container-format=bare \--os-distro=ubuntu 
--file=~/Downloads/xenial-server-cloudimg-amd64-disk1.img
File ~/Downloads/xenial-server-cloudimg-amd64-disk1.img does not exist or user 
does not have read privileges to it

~/devstack$ glance image-create --name ubuntu-16 --visibility public 
--disk-format=qcow2 --container-format=bare \--os-distro=ubuntu 
--file=/home/user/Downloads/xenial-server-cloudimg-amd64-disk1.img
+--+--+
| Property | Value|
+--+--+
| checksum | b27130a877734d9ec938a63ca63c4ee7 |
| container_format | bare |
| created_at   | 2016-04-25T08:38:29Z |
| disk_format  | qcow2|
| id   | 22bc8d04-c77b-4780-b6bb-1a37f11d6deb |
| min_disk | 0|
| min_ram  | 0|
| name | ubuntu-16|
| os_distro| ubuntu   |
| owner| 959eedbf87534e28a64f94c250b785ac |
| protected| False|
| size | 303235072|
| status   | active   |
| tags | []   |
| updated_at   | 2016-04-25T08:38:32Z |
| virtual_size | None |
| visibility   | public   |
+--+--+
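
A plausible client-side fix, sketched under the assumption that the
literal "~" reaches glanceclient unexpanded (the helper name is
illustrative):

    import os

    def open_image(path):
        # The shell only expands "~" when it is unquoted at the start of
        # a word, so "--file=~/..." arrives as a literal string; expand
        # it before opening the file.
        return open(os.path.expanduser(path), 'rb')

    # open_image('~/Downloads/xenial-server-cloudimg-amd64-disk1.img')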

** Affects: python-glanceclient
 Importance: Undecided
 Status: New

** Project changed: glance => python-glanceclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1574512

Title:
  python-glanceclient doesn't support $HOME while uploading image

Status in python-glanceclient:
  New

Bug description:
  ~/devstack$ glance --debug image-create --name ubuntu-16 --visibility public 
--disk-format=qcow2 --container-format=bare \--os-distro=ubuntu 
--file=~/Downloads/xenial-server-cloudimg-amd64-disk1.img
  File ~/Downloads/xenial-server-cloudimg-amd64-disk1.img does not exist or 
user does not have read privileges to it

  ~/devstack$ glance image-create --name ubuntu-16 --visibility public 
--disk-format=qcow2 --container-format=bare \--os-distro=ubuntu 
--file=/home/user/Downloads/xenial-server-cloudimg-amd64-disk1.img
  +--+--+
  | Property | Value|
  +--+--+
  | checksum | b27130a877734d9ec938a63ca63c4ee7 |
  | container_format | bare |
  | created_at   | 2016-04-25T08:38:29Z |
  | disk_format  | qcow2|
  | id   | 22bc8d04-c77b-4780-b6bb-1a37f11d6deb |
  | min_disk | 0|
  | min_ram  | 0|
  | name | ubuntu-16|
  | os_distro| ubuntu   |
  | owner| 959eedbf87534e28a64f94c250b785ac |
  | protected| False|
  | size | 303235072|
  | status   | active   |
  | tags | []   |
  | updated_at   | 2016-04-25T08:38:32Z |
  | virtual_size | None |
  | visibility   | public   |
  +--+--+

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-glanceclient/+bug/1574512/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1574493] [NEW] keystone tox -epy27 fail in devstack environment

2016-04-25 Thread ZhiQiang Fan
Public bug reported:

reproduce steps:

1. clone devstack, enable keystone
2. cd /opt/stack/keystone && tox -epy27

==
Failed 1 tests - output below:
==

keystone.tests.unit.test_cli.CliNoConfigTestCase.test_cli
-

Captured traceback:
~~~
Traceback (most recent call last):
  File "keystone/tests/unit/test_cli.py", line 71, in test_cli
self.assertThat(self.logging.output, matchers.Contains(expected_msg))
  File 
"/tmp/keystone/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 493, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: 'Config file not found, using 
default configs.' not in u''

If I remove /etc/keystone/keystone.conf, the test succeeds.

As a developer, I always run devstack on my workstation, and since unit
test code shouldn't rely on or be affected by the real environment, I
think this is a bug that should be fixed.

I already have a solution and will fix it myself.
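
One possible isolation, sketched under the assumption that stubbing
oslo.config's config-file discovery is acceptable (this is a
hypothetical helper, not necessarily the actual patch):

    # Hypothetical sketch, not the actual keystone fix: stub
    # oslo.config's config-file discovery so the CLI takes the
    # "Config file not found, using default configs." path even when
    # /etc/keystone/keystone.conf exists on the host.
    import mock  # unittest.mock on Python 3

    def hide_system_config(test_case):
        patcher = mock.patch('oslo_config.cfg.find_config_files',
                             return_value=[])
        patcher.start()
        test_case.addCleanup(patcher.stop)

Called from the test's setUp(), this keeps CliNoConfigTestCase on the
"no config file" path regardless of what is installed in /etc/keystone.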

** Affects: keystone
 Importance: Undecided
 Assignee: ZhiQiang Fan (aji-zqfan)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => ZhiQiang Fan (aji-zqfan)

** Changed in: keystone
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1574493

Title:
  keystone tox -epy27 fail in devstack environment

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  reproduce steps:

  1. clone devstack, enable keystone
  2. cd /opt/stack/keystone && tox -epy27

  ==
  Failed 1 tests - output below:
  ==

  keystone.tests.unit.test_cli.CliNoConfigTestCase.test_cli
  -

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "keystone/tests/unit/test_cli.py", line 71, in test_cli
  self.assertThat(self.logging.output, matchers.Contains(expected_msg))
    File "/tmp/keystone/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 493, in assertThat
      raise mismatch_error
  testtools.matchers._impl.MismatchError: 'Config file not found, using default configs.' not in u''

  If I remove /etc/keystone/keystone.conf, the test succeeds.

  As a developer, I always run devstack on my workstation, and since
  unit test code shouldn't rely on or be affected by the real
  environment, I think this is a bug that should be fixed.

  I already have a solution and will fix it myself.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1574493/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1574476] [NEW] lbaasv2 session_persistence or session-persistence?

2016-04-25 Thread dongjuan
Public bug reported:

The problem is in the Kilo neutron-lbaas branch.

When we create an LBaaS pool with --session_persistence it is
configured OK, but when we create one with --session-persistence the
configuration fails.

However, updating an LBaaS pool works with either --session-persistence
or --session_persistence.


[root@opencos2 ~(keystone_admin)]# 
[root@opencos2 ~(keystone_admin)]# neutron lbaas-pool-create --listener listener500-1 --protocol HTTP --lb-algorithm SOURCE_IP pool500-1 --session-persistence type=dict type='SOURCE_IP'
Invalid values_specs type=SOURCE_IP
[root@opencos2 ~(keystone_admin)]# 
[root@opencos2 ~(keystone_admin)]# neutron lbaas-pool-create --listener listener500-1 --protocol HTTP --lb-algorithm SOURCE_IP pool500-1 --session_persistence type=dict type='SOURCE_IP'
Created a new pool:
+-++
| Field   | Value  |
+-++
| admin_state_up  | True   |
| description ||
| healthmonitor_id||
| id  | 64bed1f2-ff02-4b12-bdfa-1904079786be   |
| lb_algorithm| SOURCE_IP  |
| listeners   | {"id": "162c70aa-175d-473a-b13a-e3c335a0a9e1"} |
| members ||
| name| pool500-1  |
| protocol| HTTP   |
| session_persistence | {"cookie_name": null, "type": "SOURCE_IP"} |
| tenant_id   | be58eaec789d44f296a65f96b944a9f5   |
+-++
[root@opencos2 ~(keystone_admin)]# 
[root@opencos2 ~(keystone_admin)]# 
[root@opencos2 ~(keystone_admin)]# neutron lbaas-pool-update pool500-1 --session_persistence type=dict type='HTTP_COOKIE'
Updated pool: pool500-1
[root@opencos2 ~(keystone_admin)]# 
[root@opencos2 ~(keystone_admin)]# neutron lbaas-pool-update pool500-1 --session-persistence type=dict type='SOURCE_IP'
Updated pool: pool500-1
[root@opencos2 ~(keystone_admin)]# 
[root@opencos2 ~(keystone_admin)]#
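
The create command appears to declare only the --session_persistence
spelling, so the dashed form falls through to generic values_specs
parsing, while update happens to accept both. A minimal argparse sketch
(hypothetical, not the actual neutronclient code) of registering both
spellings as aliases:

    # Hypothetical sketch: declare both spellings for one destination
    # so --session-persistence and --session_persistence behave
    # identically on create, as they already do on update.
    import argparse

    parser = argparse.ArgumentParser(prog='lbaas-pool-create')
    parser.add_argument('--session-persistence', '--session_persistence',
                        dest='session_persistence', metavar='TYPE=VALUE')

    args = parser.parse_args(['--session-persistence', 'type=SOURCE_IP'])
    print(args.session_persistence)  # prints: type=SOURCE_IP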

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas

** Tags added: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1574476

Title:
  lbaasv2 session_persistence or session-persistence?

Status in neutron:
  New

Bug description:
  The problem is in the Kilo neutron-lbaas branch.

  When we create an LBaaS pool with --session_persistence it is
  configured OK, but when we create one with --session-persistence the
  configuration fails.

  However, updating an LBaaS pool works with either
  --session-persistence or --session_persistence.

  
  [root@opencos2 ~(keystone_admin)]# 
  [root@opencos2 ~(keystone_admin)]# neutron lbaas-pool-create --listener listener500-1 --protocol HTTP --lb-algorithm SOURCE_IP pool500-1 --session-persistence type=dict type='SOURCE_IP'
  Invalid values_specs type=SOURCE_IP
  [root@opencos2 ~(keystone_admin)]# 
  [root@opencos2 ~(keystone_admin)]# neutron lbaas-pool-create --listener listener500-1 --protocol HTTP --lb-algorithm SOURCE_IP pool500-1 --session_persistence type=dict type='SOURCE_IP'
  Created a new pool:
  +-++
  | Field   | Value  |
  +-++
  | admin_state_up  | True   |
  | description ||
  | healthmonitor_id||
  | id  | 64bed1f2-ff02-4b12-bdfa-1904079786be   |
  | lb_algorithm| SOURCE_IP  |
  | listeners   | {"id": "162c70aa-175d-473a-b13a-e3c335a0a9e1"} |
  | members ||
  | name| pool500-1  |
  | protocol| HTTP   |
  | session_persistence | {"cookie_name": null, "type": "SOURCE_IP"} |
  | tenant_id   | be58eaec789d44f296a65f96b944a9f5   |
  +-++
  [root@opencos2 ~(keystone_admin)]# 
  [root@opencos2 ~(keystone_admin)]# 
  [root@opencos2 ~(keystone_admin)]# neutron lbaas-pool-update pool500-1 --session_persistence type=dict type='HTTP_COOKIE'
  Updated pool: pool500-1
  [root@opencos2 ~(keystone_admin)]# 
  [root@opencos2 ~(keystone_admin)]# neutron lbaas-pool-update pool500-1 --session-persistence type=dict type='SOURCE_IP'
  Updated pool: pool500-1
  [root@opencos2 ~(keystone_admin)]#

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1574476/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1574472] [NEW] icmpv6 and ipv6-icmp are missed in _validate_port_range

2016-04-25 Thread ZongKai LI
Public bug reported:

For IPv6, _validate_port_range checks the port values for protocol
"icmp", but not for "icmpv6" or "ipv6-icmp".
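
A minimal sketch of the kind of check implied, with hypothetical names
rather than the actual neutron code:

    # Hypothetical sketch, not the actual neutron code: treat every
    # ICMP protocol spelling the same when validating the type/code
    # values carried in the port range fields of a security group rule.
    ICMP_PROTOCOLS = ('icmp', 'icmpv6', 'ipv6-icmp')

    def validate_icmp_type_code(protocol, port_range_min, port_range_max):
        if protocol not in ICMP_PROTOCOLS:
            return
        for value in (port_range_min, port_range_max):
            if value is not None and not 0 <= value <= 255:
                raise ValueError('ICMP type/code %s is out of range '
                                 '(0-255)' % value)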

** Affects: neutron
 Importance: Undecided
 Assignee: ZongKai LI (lzklibj)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => ZongKai LI (lzklibj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1574472

Title:
  icmpv6 and ipv6-icmp are missed in _validate_port_range

Status in neutron:
  New

Bug description:
  For IPv6, _validate_port_range checks the port values for protocol
  "icmp", but not for "icmpv6" or "ipv6-icmp".

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1574472/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp