[Yahoo-eng-team] [Bug 1752604] Re: Tabs may not appear in angular instance wizard

2018-03-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/550469
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=f4f497246d7448c23d623332d116ed8e82baf73b
Submitter: Zuul
Branch: master

commit f4f497246d7448c23d623332d116ed8e82baf73b
Author: David Gutman 
Date:   Wed Mar 7 14:09:57 2018 +0100

Tabs may not appear in angular instance wizard

In the launch instance wizard, when a specific tab is disabled
(for example because it does not match the required policy), some
other tabs may not be displayed.

After a long analysis:
Each tab resolves on its own whether it should be displayed
(ready or not). But at the first rejection (first tab with
ready=false), the wizard displays all "ready" tabs without waiting
for the remaining tabs to settle.
If all tabs happen to resolve before the first rejection, the
display is correct, but when the rejection finishes first, many
tabs are missing.

This bug is not common, because I think it is rare to hide tabs.

Closes-Bug: #1752604
Change-Id: I67f96092d9f82374087fc0c87b857292e188b675


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1752604

Title:
  Tabs may not appear in angular instance wizard

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  In the launch instance wizard, when a specific tab is disabled (for
  example because it does not match the required policy), some other
  tabs may not be displayed.

  After a long analysis:
  Each tab resolves on its own whether it should be displayed (ready
  or not). But at the first rejection (first tab with ready=false),
  the wizard displays all "ready" tabs without waiting for the
  remaining tabs to settle.
  If all tabs happen to resolve before the first rejection, the
  display is correct, but when the rejection finishes first, many
  tabs are missing.

  This bug is not common, because I think it is rare to hide tabs.
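The race can be illustrated with a small Python asyncio analogy (Horizon's wizard is AngularJS promise code; the tab names and timings here are invented): waiting for every tab to settle, instead of reacting to the first rejection, is what makes the displayed tab set deterministic.

```python
import asyncio

async def tab_ready(name, delay, ready):
    # Each tab decides asynchronously whether it should be shown
    # ("ready") -- e.g. after a policy check.
    await asyncio.sleep(delay)
    if not ready:
        raise RuntimeError("tab %s is hidden" % name)
    return name

async def wizard():
    tabs = [tab_ready("source", 0.02, True),
            tab_ready("flavor", 0.03, True),
            tab_ready("metadata", 0.01, False)]  # rejects first
    # Buggy pattern: render as soon as the first rejection arrives --
    # slower "ready" tabs get lost.
    # Fixed pattern: wait for *every* tab to settle, then render the
    # ones that resolved:
    results = await asyncio.gather(*tabs, return_exceptions=True)
    return [r for r in results if isinstance(r, str)]

print(asyncio.run(wizard()))  # ['source', 'flavor']
```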

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1752604/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1755339] [NEW] no_plugin_xstatic_support

2018-03-12 Thread xinni
Public bug reported:

Current Horizon (Queens) does not provide convenient options for handling
additional xstatic modules in pluggable dashboards.
We need an option in HORIZON_CONFIG to hold additional xstatic modules from
plugins, and to let Horizon automatically collect the related xstatic files.

** Affects: horizon
 Importance: Undecided
 Assignee: xinni (xinni-ge)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => xinni (xinni-ge)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1755339

Title:
  no_plugin_xstatic_support

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Current Horizon (Queens) does not provide convenient options for handling
  additional xstatic modules in pluggable dashboards.
  We need an option in HORIZON_CONFIG to hold additional xstatic modules
  from plugins, and to let Horizon automatically collect the related
  xstatic files.
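A sketch of the kind of option requested (the setting names below are purely illustrative -- no such option exists in Queens, which is the point of this report):

```python
# Hypothetical 'enabled' file entry for a Horizon plugin (illustrative name):
ADD_XSTATIC_MODULES = ['xstatic.pkg.my_widget']

# Horizon would then merge plugin entries into HORIZON_CONFIG so that
# collectstatic picks up the related xstatic files automatically:
HORIZON_CONFIG = {'xstatic_modules': []}
HORIZON_CONFIG['xstatic_modules'].extend(ADD_XSTATIC_MODULES)
print(HORIZON_CONFIG['xstatic_modules'])  # ['xstatic.pkg.my_widget']
```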

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1755339/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1682020] Re: Remove nova default keymap option for qemu-kvm (deprecated)

2018-03-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/483994
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=cab8139498c7ea6b05cfdc8b4997276051b943fc
Submitter: Zuul
Branch: master

commit cab8139498c7ea6b05cfdc8b4997276051b943fc
Author: Stephen Finucane 
Date:   Fri Jul 14 17:00:27 2017 +0100

conf: Deprecate 'keymap' options

Defining the 'keymap' option in libvirt results in the '-k' option being
passed through to QEMU [1][2]. This QEMU option has some uses, primarily
for users interacting with QEMU via stdin on the text console. However,
for users interacting with QEMU via VNC or Spice, like nova users do, it
is strongly recommended to never add the "-k" option. Doing so will
force QEMU to do keymap conversions which are known to be lossy. This
disproportionately affects users with non-US keyboard layouts, who would
be better served by relying on the guest OS to manage this. Users should
instead rely on their clients and guests to correctly configure this.

This is the second part of the three-part deprecation cycle for these
options. At this point, they are retained but deprecated, and their
defaults are modified to be unset. This allows us to warn users with libvirt
hypervisors that have configured the options about the pitfalls of the
option and give them time to prepare migration strategies, if necessary.
A replacement option is added to the VMWare group to allow us to retain
this functionality for that hypervisor. Combined with the above, this
will allow us to remove the options in a future release.

[1] 
https://github.com/libvirt/libvirt/blob/v1.2.9-maint/src/qemu/qemu_command.c#L6985-L6986
[2] 
https://github.com/libvirt/libvirt/blob/v1.2.9-maint/src/qemu/qemu_command.c#L7215-L7216

Change-Id: I9a50a111ff4911f4364a1b24d646095c72af3d2c
Closes-Bug: #1682020


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1682020

Title:
  Remove nova default keymap option for qemu-kvm (deprecated)

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  In Progress

Bug description:
  Hi,

  Nowadays, the qemu-kvm default keymap option (-k en-us), like any
  keymap option, is deprecated.

  In OpenStack it renders web console access to a VM unusable even
  with the proper keymap in nova.conf. For example, the 'Alt Gr + key'
  combination on French, Belgian, Spanish, or similar keyboards does
  not work, which can be problematic for strong passwords.

  Using the linked patch and removing/commenting the keymap option in
  nova.conf makes everything work again (depending on the noVNC
  version, which might also need patching on older releases).

  So perhaps nova.conf should now comment out/remove this option by
  default, and the default should be removed from the nova code.

  Regards,
  Pierre-André
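A minimal sketch of the workaround described above, assuming the deprecated keymap options are set in the [vnc]/[spice] groups of nova.conf; leaving them unset stops nova from passing the lossy '-k' option through to QEMU:

```ini
# nova.conf on the compute node: comment out any explicit keymap so the
# deprecated '-k' option is no longer passed to QEMU.
[vnc]
# keymap = en-us

[spice]
# keymap = en-us
```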

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1682020/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1657981] Re: FloatingIPs not reachable after restart of compute node (DVR)

2018-03-12 Thread Swaminathan Vasudevan
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1657981

Title:
  FloatingIPs not reachable after restart of compute node (DVR)

Status in neutron:
  Invalid

Bug description:
  I am running OpenStack Newton on Ubuntu 16.04 using DVR. When I
  restart a compute node, the floating IPs of the VMs running on that
  node are unreachable. A manual restart of the "neutron-l3-agent" or
  "neutron-vpn-agent" service running on the node solves the issue.

  I think there must be a race condition at startup.

  I get the following error in the neutron-vpn-agent.log:
  2017-01-20 07:04:52.379 2541 INFO neutron.common.config [-] Logging enabled!
  2017-01-20 07:04:52.379 2541 INFO neutron.common.config [-] 
/usr/bin/neutron-vpn-agent version 9.0.0
  2017-01-20 07:04:52.380 2541 WARNING stevedore.named [-] Could not load 
neutron.agent.linux.interface.OVSInterfaceDriver
  2017-01-20 07:04:53.112 2541 WARNING stevedore.named 
[req-a9e10331-51ab-4c67-bfdd-0f6296510594 - - - - -] Could not load 
neutron_fwaas.services.firewall.drivers.linux.iptables_fwaas.IptablesFwaasDriver
  2017-01-20 07:04:53.127 2541 INFO neutron.agent.agent_extensions_manager 
[req-a9e10331-51ab-4c67-bfdd-0f6296510594 - - - - -] Loaded agent extensions: 
['fwaas']
  2017-01-20 07:04:53.128 2541 INFO neutron.agent.agent_extensions_manager 
[req-a9e10331-51ab-4c67-bfdd-0f6296510594 - - - - -] Initializing agent 
extension 'fwaas'
  2017-01-20 07:04:53.163 2541 WARNING oslo_config.cfg 
[req-bdd95fb9-bcd7-473e-a350-3bd8d6be8758 - - - - -] Option 
"external_network_bridge" from group "DEFAULT" is deprecated for removal.  Its 
value may be silently ignored in the future.
  2017-01-20 07:04:53.165 2541 WARNING stevedore.named 
[req-bdd95fb9-bcd7-473e-a350-3bd8d6be8758 - - - - -] Could not load 
neutron_vpnaas.services.vpn.device_drivers.strongswan_ipsec.StrongSwanDriver
  2017-01-20 07:04:53.236 2541 INFO eventlet.wsgi.server [-] (2541) wsgi 
starting up on http:/var/lib/neutron/keepalived-state-change
  2017-01-20 07:04:53.261 2541 INFO neutron.agent.l3.agent [-] Agent has just 
been revived. Doing a full sync.
  2017-01-20 07:04:53.373 2541 INFO neutron.agent.l3.agent [-] L3 agent started
  2017-01-20 07:05:22.832 2541 ERROR neutron.agent.linux.utils [-] Exit code: 
1; Stdin: ; Stdout: ; Stderr: Cannot find device "fg-67afaa06-bb"

  2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info [-] Exit 
code: 1; Stdin: ; Stdout: ; Stderr: Cannot find device "fg-67afaa06-bb"
  2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info Traceback 
(most recent call last):
  2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/common/utils.py", line 239, in call
  2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info return 
func(*args, **kwargs)
  2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 1062, 
in process
  2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info 
self.process_external(agent)
  2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/dvr_local_router.py", line 
515, in process_external
  2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info 
self.create_dvr_fip_interfaces(ex_gw_port)
  2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/dvr_local_router.py", line 
546, in create_dvr_fip_interfaces
  2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info 
self.fip_ns.update_gateway_port(fip_agent_port)
  2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/dvr_fip_ns.py", line 239, in 
update_gateway_port
  2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info 
ipd.route.add_gateway(gw_ip)
  2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 702, in 
add_gateway
  2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info 
self._as_root([ip_version], tuple(args))
  2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 373, in 
_as_root
  2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info 
use_root_namespace=use_root_namespace)
  2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 95, in 
_as_root
  2017-01-20 07:05:22.833 2541 ERROR neutron.agent.l3.router_info 
log_fail_as_error=self.log_fail_as_error)
  2017-01-20 07:05:22.833 2541 ERROR 

[Yahoo-eng-team] [Bug 1746754] Re: network detail tab should be pluggable

2018-03-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/540097
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=a987c039cfa7943afd5bb9d1aef50c30b32ad8f4
Submitter: Zuul
Branch: master

commit a987c039cfa7943afd5bb9d1aef50c30b32ad8f4
Author: Akihiro Motoki 
Date:   Fri Feb 2 02:33:23 2018 +0900

TabGroup: Make tabs pluggable via horizon plugin config

This commit enhances the django tab implementation to allow horizon plugins
to add tabs to a tab group in another repository, such as the main horizon
repo. A new setting "EXTRA_TABS" is introduced to the horizon plugin
'enabled' file. To this aim, the tab group class looks up
HORIZON_CONFIG['extra_tabs'] with its full class name and loads any entries
as extra tabs. HORIZON_CONFIG['extra_tabs'] is populated via horizon plugin
settings.

This commit moves update_settings in openstack_dashboard.test.helpers
to horizon as I would like to use it in a new horizon unit test.

blueprint horizon-plugin-tab-for-info-and-quotas
Closes-Bug: #1746754
Change-Id: Ice2469a90553754929826d14d20b4719bd1f62d3


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1746754

Title:
  network detail tab should be pluggable

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Some horizon plugins would like to add a tab to the network detail page.
  For example, the networking-bgpvpn project would like to add a tab about
  network association to the network detail page. [1]

  This can be made pluggable via python entrypoints; an approach similar
  to the one adopted in blueprint horizon-plugin-tab-for-info-and-quotas
  [2] can be applied.

  [1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-horizon/%23openstack-horizon.2018-02-01.log.html#t2018-02-01T15:18:30
  [2] 
https://blueprints.launchpad.net/horizon/+spec/horizon-plugin-tab-for-info-and-quotas
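A sketch of what an 'enabled' file using the new EXTRA_TABS setting from the fix might look like (both class paths below are illustrative guesses, not taken from either project):

```python
# Hypothetical plugin 'enabled' file: map a tab group's full class name to
# the extra tab classes to load into it (paths are illustrative).
EXTRA_TABS = {
    'openstack_dashboard.dashboards.project.networks.tabs.NetworkDetailsTabs': (
        'networking_bgpvpn_dashboard.tabs.NetworkAssociationTab',
    ),
}

# Horizon populates HORIZON_CONFIG['extra_tabs'] from such settings, and a
# tab group looks itself up by its own full class name:
group = 'openstack_dashboard.dashboards.project.networks.tabs.NetworkDetailsTabs'
print(EXTRA_TABS.get(group, ()))
```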

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1746754/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1716194] Re: IPTables rules are not updated if there is a change in the FWaaS rules when FWaaS is deployed in DVR mode

2018-03-12 Thread Swaminathan Vasudevan
*** This bug is a duplicate of bug 1715395 ***
https://bugs.launchpad.net/bugs/1715395

** This bug is no longer a duplicate of bug 1716401
   FWaaS: Ip tables rules do not get updated in case of distributed virtual 
routers (DVR)
** This bug has been marked a duplicate of bug 1715395
   FWaaS: Firewall creation fails in case of distributed routers (Pike)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1716194

Title:
  IPTables rules are not updated if there is a change in the FWaaS rules
  when FWaaS is deployed in DVR mode

Status in neutron:
  New

Bug description:
  Please see https://bugs.launchpad.net/neutron/+bug/1715395/comments/4
  and https://bugs.launchpad.net/neutron/+bug/1716401 for more
  information about this issue

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1716194/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1755266] [NEW] Instance resize with swap on cinder volume fails

2018-03-12 Thread Dr. Clemens Hardewig
Public bug reported:

Environment:

Nova on Pike and Queens with kvm/libvirt compute driver

Versions tested:
ii  nova-common  2:17.0.0-0ubuntu1~cloud0   
 all  OpenStack Compute - common files
ii  nova-compute 2:17.0.0-0ubuntu1~cloud0   
 all  OpenStack Compute - compute node base
ii  nova-compute-kvm 2:17.0.0-0ubuntu1~cloud0   
 all  OpenStack Compute - compute node (KVM)
ii  nova-compute-libvirt 2:17.0.0-0ubuntu1~cloud0   
 all  OpenStack Compute - compute node libvirt support
ii  python-nova  2:17.0.0-0ubuntu1~cloud0   
 all  OpenStack Compute Python libraries
ii  python-novaclient2:9.1.1-0ubuntu1~cloud0
 all  client library for OpenStack Compute API - Python 2.7
ii  python3-novaclient   2:9.1.1-0ubuntu1~cloud0
 all  client library for OpenStack Compute API - 3.x

Storage on cinder and lvm backend

ii  cinder-common2:12.0.0-0ubuntu1~cloud0   
 all  Cinder storage service - common files
ii  cinder-volume2:12.0.0-0ubuntu1~cloud0   
 all  Cinder storage service - Volume server
ii  python-cinder2:12.0.0-0ubuntu1~cloud0   
 all  Cinder Python libraries
ii  python-cinderclient  1:3.5.0-0ubuntu1~cloud0
 all  Python bindings to the OpenStack Volume API - Python 2.x
ii  python3-cinderclient 1:3.5.0-0ubuntu1~cloud0
 all  Python bindings to the OpenStack Volume API - Python 3.x

Network with Neutron
ii  neutron-common   2:12.0.0-0ubuntu1~cloud0   
 all  Neutron is a virtual network service for Openstack - common
rc  neutron-linuxbridge-agent2:11.0.2-0ubuntu1.1~cloud0 
 all  Neutron is a virtual network service for Openstack - 
linuxbridge agent
ii  neutron-openvswitch-agent2:12.0.0-0ubuntu1~cloud0   
 all  Neutron is a virtual network service for Openstack - Open 
vSwitch plugin agent
ii  python-neutron   2:12.0.0-0ubuntu1~cloud0   
 all  Neutron is a virtual network service for Openstack - Python 
library
ii  python-neutron-fwaas 1:12.0.0-0ubuntu1~cloud0   
 all  Firewall-as-a-Service driver for OpenStack Neutron
ii  python-neutron-lib   1.13.0-0ubuntu1~cloud0 
 all  Neutron shared routines and utilities - Python 2.7
ii  python-neutronclient 1:6.7.0-0ubuntu1~cloud0
 all  client API library for Neutron - Python 2.7
ii  python3-neutronclient1:6.7.0-0ubuntu1~cloud0
 all  client API library for Neutron - Python 3.x

Steps to reproduce:
1.) Start a nova instance with a flavor which has a swap volume
Root Volume is created as /dev/cinder-volume/volume-UUID
The swap volume is created as /dev/cinder-volume/_disk.swap

2.) Resize the instance to a different flavor with swap storage via
horizon or CLI

Expected result:

1.) The swap volume on Node 1 is dropped
2.) A new swap volume is created on Node 2 and the instance is
migrated/restarted on Node 2

Actual result:
1.) Nova tries to COPY the swap volume via scp from Node 1 to Node 2
2.) The source path is assumed to be /var/lib/nova/instances/_disk.swap
3.) This fails with 'No such file or directory' (sic), as the swap volume
is actually /dev/cinder-volume/_disk.swap
4.) The resize leaves the instance in an error state
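The faulty assumption can be sketched as follows (the function and paths are illustrative, not nova's actual code): the source path of a Cinder-backed swap disk has to come from the volume mapping, not from the instance directory that the resize code assumes.

```python
import os.path

def swap_disk_source(instance_dir, disk_name, volume_paths):
    # A swap disk backed by a Cinder volume lives under /dev/...; copying
    # it via scp from the instance directory (what the resize code
    # assumes) fails with "No such file or directory".
    if disk_name in volume_paths:
        return volume_paths[disk_name]
    return os.path.join(instance_dir, disk_name)

src = swap_disk_source('/var/lib/nova/instances/UUID', 'disk.swap',
                       {'disk.swap': '/dev/cinder-volume/volume_disk.swap'})
print(src)  # /dev/cinder-volume/volume_disk.swap
```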

Logfile:
 (...)
2018-03-12 17:08:51.771 15427 DEBUG os_brick.initiator.connectors.iscsi 
[req-c85ef642-2708-4395-bcf4-3a3f8820a39e 9fce36209f42437bb9d4e5d4423586ae 
87cbad1ec81143b5bbc557a40d81c81a - default default] <== disconnect_volume: 
return (678ms) None trace_logging_wrapper 
/usr/lib/python2.7/dist-packages/os_brick/utils.py:170
2018-03-12 17:08:51.772 15427 DEBUG nova.virt.libvirt.volume.iscsi 
[req-c85ef642-2708-4395-bcf4-3a3f8820a39e 9fce36209f42437bb9d4e5d4423586ae 
87cbad1ec81143b5bbc557a40d81c81a - default default] [instance: 
1ae5b6ee-aa77-4661-8542-58bd9da5ef82] Disconnected iSCSI Volume 
disconnect_volume 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume/iscsi.py:78
2018-03-12 17:08:51.774 15427 DEBUG nova.virt.libvirt.driver 
[req-c85ef642-2708-4395-bcf4-3a3f8820a39e 9fce36209f42437bb9d4e5d4423586ae 
87cbad1ec81143b5bbc557a40d81c81a - default default] skipping disk /dev/sdo 
(vda) as it is a volume _get_instance_disk_info_from_config 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py:7722
2018-03-12 17:08:51.774 15594 DEBUG oslo.privsep.daemon 

[Yahoo-eng-team] [Bug 1755260] [NEW] [Azure] Published hostname (ddns) gets reset on reboot after `hostnamectl set-hostname`

2018-03-12 Thread Paul Meyer
Public bug reported:

Verified on Azure, using Trusty
(Canonical:UbuntuServer:14.04.5-LTS:14.04.201803080) (cloud-init
0.7.5-0ubuntu1.22)

1. create a Trusty VM:
   az vm create -g paulmey-test -n ubuntu14 --image 
Canonical:UbuntuServer:14.04.5-LTS:latest
2. On the VM, edit /etc/waagent.conf to set Provisioning.MonitorHostName=y and 
restart the agent.
   This sets waagent to ifdown/ifup when it detects a hostname change such that 
the new hostname is
   sent on the DHCP request, which in Azure populates the instance DNS.
3. verify 'nslookup ubuntu14' shows a DNS record for the initial hostname 
(ubuntu14)
4. run 'hostnamectl set-hostname seeifitsticks' to change the hostname
5. Wait a minute for the update to propagate, verify that 'nslookup 
seeifitsticks' now shows a DNS
   record for the new hostname. Verify that /etc/hostname is updated. Verify 
that 'nslookup ubuntu14'
   no longer returns a valid DNS record.
6. reboot the vm
7. Once back up, notice that the hostname is seeifitsticks. However,
'nslookup seeifitsticks' returns NXDOMAIN, while 'nslookup ubuntu14'
shows a DNS record.

From the cloud-init log, it looks like cloud-init sets the hostname to
whatever is in the ovf-env.xml during interface bounce. On Xenial, the
data source is loaded from cache, which is why this code does not even
run.

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Attachment added: "repro cloud-init log"
   
https://bugs.launchpad.net/bugs/1755260/+attachment/5077320/+files/cloud-init.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1755260

Title:
  [Azure] Published hostname (ddns) gets reset on reboot after
  `hostnamectl set-hostname`

Status in cloud-init:
  New

Bug description:
  Verified on Azure, using Trusty
  (Canonical:UbuntuServer:14.04.5-LTS:14.04.201803080) (cloud-init
  0.7.5-0ubuntu1.22)

  1. create a Trusty VM:
 az vm create -g paulmey-test -n ubuntu14 --image 
Canonical:UbuntuServer:14.04.5-LTS:latest
  2. On the VM, edit /etc/waagent.conf to set Provisioning.MonitorHostName=y 
and restart the agent.
 This sets waagent to ifdown/ifup when it detects a hostname change such 
that the new hostname is
 sent on the DHCP request, which in Azure populates the instance DNS.
  3. verify 'nslookup ubuntu14' shows a DNS record for the initial hostname 
(ubuntu14)
  4. run 'hostnamectl set-hostname seeifitsticks' to change the hostname
  5. Wait a minute for the update to propagate, verify that 'nslookup 
seeifitsticks' now shows a DNS
 record for the new hostname. Verify that /etc/hostname is updated. Verify 
that 'nslookup ubuntu14'
 no longer returns a valid DNS record.
  6. reboot the vm
  7. Once back up, notice that the hostname is seeifitsticks. However,
 'nslookup seeifitsticks' returns NXDOMAIN, while 'nslookup ubuntu14'
 shows a DNS record.

  From the cloud-init log, it looks like cloud-init sets the hostname to
  whatever is in the ovf-env.xml during interface bounce. On Xenial, the
  data source is loaded from cache, which is why this code does not even
  run.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1755260/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1755243] [NEW] AttributeError when updating DvrEdgeRouter objects running on network nodes

2018-03-12 Thread Daniel Gonzalez Nothnagel
Public bug reported:

In a configuration with L3 HA, DVR and neutron-lbaasv2, it can happen
that the update of a router with a connected load balancer crashes with
the following stack trace (line numbers may be a bit outdated):

Failed to process compatible router: 192c77b2-1487-4bc4-af40-26563e959989
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 543, 
in _process_router_update
self._process_router_if_compatible(router)
  File "/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 464, 
in _process_router_if_compatible
self._process_updated_router(router)
  File "/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 480, 
in _process_updated_router
router['id'], router.get(l3_constants.HA_ROUTER_STATE_KEY))
  File "/usr/lib/python2.7/site-packages/neutron/agent/l3/ha.py", line 132, in 
check_ha_state_for_router
if ri and current_state != TRANSLATION_MAP[ri.ha_state]:
AttributeError: 'DvrEdgeRouter' object has no attribute 'ha_state'

The issue is that, in a landscape with more network nodes than
'max_l3_agents_per_router', e.g. 6 network nodes and
max_l3_agents_per_router = 3, a load balancer may be scheduled on a
network node that does not have the correct router deployed on it. In
such a case, neutron deploys a DvrEdgeRouter on the network node to
serve the LB. Every time neutron updates that router, e.g. to assign a
floating IP to the LB, it crashes with the above stack trace, because
it expects to find a DvrEdgeHaRouter on the network node on which it
has to check the HA state.

To verify if it has to check the ha state of a router object, neutron
runs the following check:

if router.get('ha') and not is_dvr_only_agent

In our case that check is true, because the agent runs in mode
'dvr_snat', and the router is HA. But the actual router object running
on the network node is of type DvrEdgeRouter and therefore has no
ha_state attribute, causing the update to fail.

** Affects: neutron
 Importance: Undecided
 Assignee: Daniel Gonzalez Nothnagel (dgonzalez)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1755243

Title:
  AttributeError when updating DvrEdgeRouter objects running on network
  nodes

Status in neutron:
  In Progress

Bug description:
  In a configuration with L3 HA, DVR and neutron-lbaasv2, it can happen
  that the update of a router with a connected load balancer crashes
  with the following stack trace (line numbers may be a bit outdated):

  Failed to process compatible router: 192c77b2-1487-4bc4-af40-26563e959989
  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 
543, in _process_router_update
  self._process_router_if_compatible(router)
File "/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 
464, in _process_router_if_compatible
  self._process_updated_router(router)
File "/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 
480, in _process_updated_router
  router['id'], router.get(l3_constants.HA_ROUTER_STATE_KEY))
File "/usr/lib/python2.7/site-packages/neutron/agent/l3/ha.py", line 132, 
in check_ha_state_for_router
  if ri and current_state != TRANSLATION_MAP[ri.ha_state]:
  AttributeError: 'DvrEdgeRouter' object has no attribute 'ha_state'

  The issue is that, in a landscape with more network nodes than
  'max_l3_agents_per_router', e.g. 6 network nodes and
  max_l3_agents_per_router = 3, a load balancer may be scheduled on a
  network node that does not have the correct router deployed on it. In
  such a case, neutron deploys a DvrEdgeRouter on the network node to
  serve the LB. Every time neutron updates that router, e.g. to assign
  a floating IP to the LB, it crashes with the above stack trace,
  because it expects to find a DvrEdgeHaRouter on the network node on
  which it has to check the HA state.

  To verify if it has to check the ha state of a router object, neutron
  runs the following check:

  if router.get('ha') and not is_dvr_only_agent

  In our case that check is true, because the agent runs in mode
  'dvr_snat', and the router is HA. But the actual router object running
  on the network node is of type DvrEdgeRouter and therefore has no
  ha_state attribute, causing the update to fail.
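A defensive guard for the failing check can be sketched like this (the TRANSLATION_MAP values and the guard itself are assumptions for illustration, not neutron's actual patch):

```python
TRANSLATION_MAP = {'master': 'active', 'backup': 'standby'}  # assumed values

def needs_ha_transition(ri, current_state):
    # DvrEdgeRouter has no ha_state attribute; only HA router objects
    # (e.g. DvrEdgeHaRouter) should be checked.
    ha_state = getattr(ri, 'ha_state', None) if ri else None
    if ha_state is None:
        return False
    return current_state != TRANSLATION_MAP[ha_state]

class DvrEdgeRouter(object):    # stand-in without ha_state
    pass

class DvrEdgeHaRouter(object):  # stand-in with ha_state
    ha_state = 'master'

print(needs_ha_transition(DvrEdgeRouter(), 'active'))     # False (no crash)
print(needs_ha_transition(DvrEdgeHaRouter(), 'standby'))  # True
```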

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1755243/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1511061] Re: Images in inconsistent state when calls to registry fail during image deletion

2018-03-12 Thread Prateek Goel
v1 API is deprecated.

** Changed in: glance
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1511061

Title:
  Images in inconsistent state when calls to registry fail during image
  deletion

Status in Glance:
  Invalid
Status in Glance juno series:
  New
Status in Glance kilo series:
  New
Status in Glance liberty series:
  New
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  [0] shows a sample image that was left in an inconsistent state when a
  call to registry failed during image deletion.

  Glance's v1 API makes two registry calls when deleting an image.
  The first call [1] is made to set the status of the image to
  deleted/pending_delete.
  The other [2] deletes the rest of the metadata, setting the
  'deleted_at' and 'deleted' fields in the db.

  If the first call fails, the image deletion request fails and the
  image is left intact in its previous status.
  However, if the first call succeeds and the second one fails, the
  image is left in an inconsistent state where its status is set to
  pending_delete/deleted but its 'deleted_at' and 'deleted' fields are
  not set.

  If delayed delete is turned on, these images are never collected by
  the scrubber, as they won't appear as deleted images because their
  'deleted' field is not set. So these images will continue to occupy
  storage in the backend.
  Also, further attempts at deleting these images will fail with a 404,
  because their status is already set to pending_delete/deleted.

  [0] http://paste.openstack.org/show/477577/
  [1]: 
https://github.com/openstack/glance/blob/master/glance/api/v1/images.py#L1115-L1116
  [2]: 
https://github.com/openstack/glance/blob/master/glance/api/v1/images.py#L1132
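The failure window can be sketched with a stub registry (a hypothetical stand-in, not glance's code): if call [2] fails after call [1] succeeded, the image sits in 'pending_delete' with 'deleted' still false, which is exactly the state the scrubber ignores.

```python
class StubRegistry(object):
    # Stand-in for the registry client; the real calls are [1] and [2] above.
    def __init__(self):
        self.images = {'img': {'status': 'active', 'deleted': False}}

    def update(self, image_id, **fields):       # call [1]
        self.images[image_id].update(fields)

    def delete_metadata(self, image_id):        # call [2], would set
        raise IOError('registry unreachable')   # 'deleted'/'deleted_at'

def delete_image(registry, image_id):
    registry.update(image_id, status='pending_delete')
    registry.delete_metadata(image_id)

reg = StubRegistry()
try:
    delete_image(reg, 'img')
except IOError:
    pass
# Inconsistent: status already changed, but 'deleted' never set.
print(reg.images['img'])  # {'status': 'pending_delete', 'deleted': False}
```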

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1511061/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1755204] [NEW] make salt minion id more configurable

2018-03-12 Thread do3meli
Public bug reported:

By default, the salt minion creates the minion_id file with the short
hostname if the file does not exist at first startup. In some
environments the salt minion id is required to be a fully qualified
domain name. I therefore recommend a salt-minion cloud-config
parameter that can be set to true/false and, based on the value,
writes the FQDN or the short hostname to the minion_id file.
Alternatively, the minion id could be made fully configurable, meaning
the whole config string is taken and written to the minion_id file.

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1755204

Title:
  make salt minion id more configurable

Status in cloud-init:
  New

Bug description:
  By default, the salt minion creates the minion_id file containing the
  short hostname on its first startup if the file does not already exist.
  In some environments the salt minion id is required to be a fully
  qualified domain name. I therefore recommend a salt minion cloud-config
  parameter that can be set to true/false and, based on the value, writes
  either the FQDN or the short name to the minion_id file. Alternatively,
  the minion id could be made fully configurable, meaning the whole config
  string is taken and written to the minion_id file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1755204/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1755205] [NEW] ValueError: Field value 21 is invalid

2018-03-12 Thread bjolo
Public bug reported:

We just upgraded from Ocata to Pike, and a new error now appears in the log
files. We have not made any config changes, just upgraded the containers.

We are running kolla-ansible

neutron-server.log

2018-03-12 16:13:09.298 53 DEBUG neutron_lib.callbacks.manager 
[req-8351b200-f441-425d-87a9-a29dbe01a729 - - - - -] Notify callbacks 
['neutron.services.segments.plugin.NovaSegmentNotifier._notify_host_addition_to_aggregate-16251827']
 for segment_host_mapping, after_create _notify_loop 
/var/lib/kolla/venv/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py:167
2018-03-12 16:13:09.335 53 DEBUG oslo_concurrency.lockutils [-] Lock 
"notifier-59bf1c54-b85b-4380-b08c-061c0cb242a2" acquired by 
"neutron.notifiers.batch_notifier.synced_send" :: waited 0.000s inner 
/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:270
2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server 
[req-cf93a4c0-9462-41e5-9922-b9b55ef6d1e2 - - - - -] Exception during message 
handling: ValueError: Field value 21 is invalid
2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", 
line 160, in _process_incoming
2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 213, in dispatch
2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 183, in _do_dispatch
2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", 
line 232, in inner
2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server return 
func(*args, **kwargs)
2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/api/rpc/handlers/resources_rpc.py",
 line 143, in bulk_pull
2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server **filter_kwargs)]
2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/objects/base.py", line 
468, in get_objects
2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server return 
[cls._load_object(context, db_obj) for db_obj in db_objs]
2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/objects/base.py", line 
403, in _load_object
2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server 
obj.from_db_object(db_obj)
2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/objects/base.py", line 
346, in from_db_object
2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server setattr(self, 
field, fields[field])
2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_versionedobjects/base.py",
 line 72, in setter
2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server field_value = 
field.coerce(self, name, value)
2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_versionedobjects/fields.py",
 line 195, in coerce
2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server return 
self._type.coerce(obj, attr, value)
2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_versionedobjects/fields.py",
 line 317, in coerce
2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server raise 
ValueError(msg)
2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server ValueError: Field 
value 21 is invalid
2018-03-12 16:13:10.956 65 ERROR oslo_messaging.rpc.server
2018-03-12 16:13:11.336 53 DEBUG oslo_concurrency.lockutils [-] Lock 
"notifier-59bf1c54-b85b-4380-b08c-061c0cb242a2" released by 
"neutron.notifiers.batch_notifier.synced_send" :: held 2.002s inner 
/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:282
2018-03-12 16:13:11.727 56 DEBUG neutron_lib.callbacks.manager 
[req-c10e276c-512a-43b3-a21d-03a3fe198c4d - - - - -] Notify callbacks 
['neutron.services.segments.db._update_segment_host_mapping_for_agent--9223372036848016538']
 for agent, after_update _notify_loop 
/var/lib/kolla/venv/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py:167
2018-03-12 16:13:12.596 50 
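The traceback bottoms out in an enum-style field coercion. A toy stand-in
(not the real oslo.versionedobjects class) shows the shape of the failure
when a peer sends a value the local object schema does not know:

```python
class EnumField:
    """Toy stand-in for an oslo.versionedobjects enum-style field
    (illustrative only -- not the real class)."""
    def __init__(self, valid_values):
        self.valid_values = valid_values

    def coerce(self, attr, value):
        if value not in self.valid_values:
            # Same shape as the message in the traceback above.
            raise ValueError("Field value %s is invalid" % value)
        return value

# A schema that only knows values 1..20 rejects 21 outright; after an
# upgrade, a peer sending a value outside the known set triggers exactly
# this. (The concrete range here is illustrative, not neutron's.)
field = EnumField(valid_values=set(range(1, 21)))
try:
    field.coerce("some_field", 21)
    failed = False
except ValueError as e:
    failed = True
    message = str(e)
```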

[Yahoo-eng-team] [Bug 1753384] Re: The old QoS policy ID is returned when updating the QoS policy ID, when the revision plugin is enabled

2018-03-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/549699
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=f00d0a45cc544856b850b779af27625ae8435ce5
Submitter: Zuul
Branch:master

commit f00d0a45cc544856b850b779af27625ae8435ce5
Author: Guoshuai Li 
Date:   Mon Mar 5 15:44:45 2018 +0800

[L3] Expunge context session during floating IP updating

With a certain chance, updating the QoS policy ID of a floating IP does
not take effect. This is because the revision will be processed.
We use session.expunge to synchronize the latest floating IP data.

Change-Id: I5e708f91c70c63baeb886c5644f754d22df1637d
Closes-Bug: #1753384


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1753384

Title:
  The old QoS policy ID is returned when updating the QoS policy ID,
  when the revision plugin is enabled

Status in neutron:
  Fix Released

Bug description:
  The log:
  [stack@devstack-controller ~]$ curl -g -i -X PUT 
http://localhost:9696/v2.0/floatingips/eaaeb698-7e37-4c81-860f-edf522922e24  -H 
"User-Agent: python-neutronclient" -H "Content-Type: application/json" -H 
"Accept: application/json" -H "X-Auth-Token: $OS_TOKEN" -d '{"floatingip": 
{"qos_policy_id": null}}'
  HTTP/1.1 200 OK
  Content-Type: application/json
  Content-Length: 529
  X-Openstack-Request-Id: req-87dc9041-8bb8-49ef-b808-5454a83a564e
  Date: Mon, 05 Mar 2018 07:27:34 GMT

  {"floatingip": {"router_id": null, "status": "DOWN", "description": "", 
"tags": [], "tenant_id": "66acf7c4da124ba39c0acae5b0701c29", "created_at": 
"2018-02-19T13:42:50Z", "updated_at": "2018-03-05T07:27:34Z", 
"floating_network_id": "9e8e1281-a173-4f3a-82ec-4eb423bd8299", 
"fixed_ip_address": null, "floating_ip_address": "172.24.4.10", 
"revision_number": 33, "project_id": "66acf7c4da124ba39c0acae5b0701c29", 
"port_id": null, "id": "eaaeb698-7e37-4c81-860f-edf522922e24", "qos_policy_id": 
"d8a93a5b-a14e-47d7-b139-3eb43a6f5b42"}}[stack@devstack-controller ~]$ 
  [stack@devstack-controller ~]$

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1753384/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1704071] Re: XenAPI: volume VM live migration failed with VDI_NOT_IN_MAP

2018-03-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/538415
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=0e9cd6c4d66ca4afb95bb60edb412af9e96c546e
Submitter: Zuul
Branch:master

commit 0e9cd6c4d66ca4afb95bb60edb412af9e96c546e
Author: Brooks Kaminski 
Date:   Sat Jan 27 01:35:07 2018 -0600

XenAPI: XCP2.1+ Swallow VDI_NOT_IN_MAP Exception

Changes within XenAPI have enforced a stricter policy when checking
assert_can_migrate.  In particular, when checking the source_vdi:dest_sr
mapping it insists that the SR actually exists.  This is not a problem for
local disks; however, this assertion is called extremely early in the
live migration process (check_can_migrate_source), which is called from
conductor, and that poses a problem for attached volumes.

This early in the process the host has just barely been chosen and no SR
information has been configured yet for these volumes or their initiators.
Additionally we cannot prepare this SR any earlier as BDM information is
not set up until the pre_live_migration method. With the options to either
skip this assertion completely or swallow the exception, I have chosen to
swallow the exception.  My reasons for this are two-fold:

1. --block-migration can be called without regard for whether an iSCSI
volume is attached, and we still want to ensure that VIF, CPU and other
factors are checked, and not just skip all checks entirely.
2. Currently the Assert only exists within the --block-migration code
base but this needs to change. A future commit will remove this logic
to ensure that the commit runs without this flag. Once that is done we
want to be able to continue to use this Exception swallow logic rather
than continuing to skip the assert for all XCP2.1.0+ even without volumes.

This decision should leave us less work in a future commit: skipping the
assert entirely would not align with the goals of that commit, whereas
swallowing the exception aligns properly here.
This commit still changes very little of the current codebase and puts us in
a good position to refactor the way this is handled at a later date, while
adding a TODO note to correct VM.assert_can_migrate only running during a
block migration.

Additionally there seems to be some confusion that the mapping data that is
generated during this initial trip through _call_live_migrate_command is 
needed
to continue along the code, however this data appears to be purely used to 
send
the mapping information through the assertion call, and is then discarded.
The only data returned from these methods is the original dest_data which
is carried into the live_migration method. The _call_live_migration method 
is
called again during the live_migration() method, and during this time it 
does
need that mapping to send along to XenAPI for the actual migration, but not
yet. Because this codebase is so confusing, I am providing a little bit of
context on the movement of these variables with some pseudocode:

---CONDUCTOR.TASKS.LIVE_MIGRATE---
LiveMigrationTask.Execute()
self._find_destination() <-
Unrelated Work
compute.live_migration(self, host, instance, destination,
   block_migrate, migration, migrate_data)

LiveMigrationTask._find_destination()
Scheduler Things.  Gets a Dest ref.
_check_compatible_with_source_hyp
_call_livem_checks_on_host(host) <-
_check_can_live_migrate_destination()
returns Host Node Name and Host Compute.  That's all.

---COMPUTE.MANAGER---
_do / _check_live_migration_destination
dest_check_data = xenops.can_live_migrate_destination

(Checks for the Assert)
try:
migrate_data = check_can_live_migrate_source(dest_check_data)

return migrate_data

---VMOPS--
check_can_migrate_source(self, ctxt, instance_ref, dest_check_data)
if block_migration:
_call_live_migration_command(assert_can_migrate)
_generate_vdi_map()
Does NOT return
ALSO Does NOT return
return dest_check_data

The changes made to address this issue are a fairly simple oslo_utils
version check. To pull this data I created two new host related methods
within VMops as well as a new import of oslo_config.versionutils.
I believe these methods ultimately belong in the xenapi.host class, but
for two very small methods I believed it better to avoid such a large import
to get minimal data.

Finally, an adjustment to the Fake XenAPI driver had to be made as it 
currently
does not include the host details beyond hostname environment in the
create_host check.  The change amends the stub dictionary to include this 

[Yahoo-eng-team] [Bug 1754409] Re: invalid raise construct in test_build_resources_instance_not_found_before_yield

2018-03-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/550914
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=c52d34f1b46f88ae2786cfb873f9b12a1a01b629
Submitter: Zuul
Branch:master

commit c52d34f1b46f88ae2786cfb873f9b12a1a01b629
Author: Balazs Gibizer 
Date:   Thu Mar 8 17:53:16 2018 +0100

Raise a proper exception in unit test

Using the raise statement without parameter outside of an except block
is not a valid python construct. The original intention was to simulate
a failure to see if the context manager handles it properly. This patch
replaces that invalid statement with a proper exception raise.

TrivialFix

Change-Id: I3c6fc0ab617796c70bac2a853504f46ae2adb536
Closes-Bug: #1754409


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1754409

Title:
  invalid raise construct in
  test_build_resources_instance_not_found_before_yield

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The test code in [1] uses the `raise` statement without any parameter.
  That is not a valid python construct when used outside of an `except`
  block.

  The test does not fail on this because this codepath is never executed.

  [1]
  
https://github.com/openstack/nova/blob/93a985d33662723872ec5eedd1a173dc397f96fa/nova/tests/unit/compute/test_compute_mgr.py#L5686
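The invalid construct and its fix can be shown in isolation (the exception
class used for the fix here is illustrative, not necessarily the one the
patch chose):

```python
# A bare 'raise' is only valid while an exception is being handled;
# outside an except block, Python raises RuntimeError at runtime.
def bad_simulated_failure():
    raise  # what the test did: there is no active exception to re-raise

try:
    bad_simulated_failure()
except RuntimeError as e:
    bare_raise_error = e

# The fix: raise a concrete exception instance instead.
def good_simulated_failure():
    raise ValueError("simulated failure")

try:
    good_simulated_failure()
except ValueError as e:
    simulated = e
```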

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1754409/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1755140] Re: dashboard displays panels something weird

2018-03-12 Thread Akihiro Motoki
heat-dashboard was added as an affected project because the bug report
says this happens only when heat-dashboard is enabled.

** Also affects: heat-dashboard
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1755140

Title:
  dashboard displays panels something weird

Status in heat-dashboard:
  New
Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The dashboard shows every panel at once.

  Please look at the attached image.

  For example, the Create Network panel shows

  'Network' 'Subnet' 'Subnet Details'

  But all of the menus are on the Network tab, and when I click 'Subnet'
  or 'Subnet Details', nothing happens.

  Also, when I click a dropdown menu such as 'Select a project', it shows
  the projects, but I cannot select one. Even though I clicked one, it
  still shows 'Select a project'.

  The Horizon version is 3.14.0, the Queens release.
  I installed it with the devstack master version.

  What I suspect is 'heat-dashboard'.
  Before I added 'enable_plugin ~~ heat-dashboard', this didn't happen.
  But after adding it, this error appeared.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat-dashboard/+bug/1755140/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1754327] Re: Tempest scenario jobs failing due to no FIP connectivity

2018-03-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/550832
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=0ab03003b9f9c4f0cace538eee84478a099c0c58
Submitter: Zuul
Branch:master

commit 0ab03003b9f9c4f0cace538eee84478a099c0c58
Author: Sławek Kapłoński 
Date:   Thu Mar 8 14:18:31 2018 +0100

[Scenario tests] Try longer SSH timeout for ubuntu image

It looks like many scenario tests are failing because instance boot
time is too long and the ssh timeout is reached while checking
connectivity.
A longer timeout should solve this problem, so tests should no
longer fail for this reason.

Change-Id: I5d0678ea2383483e6106976c148353ef4352befd
Closes-Bug: #1754327


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1754327

Title:
  Tempest scenario jobs failing due to no FIP connectivity

Status in neutron:
  Fix Released

Bug description:
  Quite often (especially in the linuxbridge scenario job) some random
tests fail because ssh to the instance is not possible.
  Example of such failed tests: 
http://logs.openstack.org/07/525607/12/check/neutron-tempest-plugin-scenario-linuxbridge/09f04f9/logs/testr_results.html.gz

  The same issue sometimes appears in the dvr scenario job, but less
  often, probably because it is a multinode job where the load on the host
  is lower, so instances can boot faster.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1754327/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1755140] [NEW] dashboard displays panels something weird

2018-03-12 Thread Jaewook Oh
Public bug reported:

The dashboard shows every panel at once.

Please look at the attached image.

For example, the Create Network panel shows

'Network' 'Subnet' 'Subnet Details'

But all of the menus are on the Network tab, and when I click 'Subnet' or
'Subnet Details', nothing happens.

Also, when I click a dropdown menu such as 'Select a project', it shows the
projects, but I cannot select one. Even though I clicked one, it still shows
'Select a project'.

The Horizon version is 3.14.0, the Queens release.
I installed it with the devstack master version.

What I suspect is 'heat-dashboard'.
Before I added 'enable_plugin ~~ heat-dashboard', this didn't happen.
But after adding it, this error appeared.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "Captured image for reporting error."
   
https://bugs.launchpad.net/bugs/1755140/+attachment/5076826/+files/horizon_error.png

** Description changed:

  Dashboard shows panels everything.
  
- I recommend looking at the attachment image.
+ Please looking at the attachment image.
  
  For example, Create Network panel shows
  
  'Network' 'Subnet' 'Subnet Details'
  
  But every menus are in Network tab, and when I click the 'Subnet' or
  'Subnet Details', nothing happen.
  
  And also when I click the dropdown menu such as 'Select a project', it
  shows the projects, but I cannot not select it. Even though I clicked
  it, it still shows 'Select a project'.
  
  The OpenStack version is 3.14.0 and Queens release.
  I installed it with devstack master version.
  
  What I suspect is 'heat-dashboard'.
  Before I add 'enable plugin ~~ heat-dashboard', it didn't happened.
  But after adding it, this error happened.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1755140

Title:
  dashboard displays panels something weird

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The dashboard shows every panel at once.

  Please look at the attached image.

  For example, the Create Network panel shows

  'Network' 'Subnet' 'Subnet Details'

  But all of the menus are on the Network tab, and when I click 'Subnet'
  or 'Subnet Details', nothing happens.

  Also, when I click a dropdown menu such as 'Select a project', it shows
  the projects, but I cannot select one. Even though I clicked one, it
  still shows 'Select a project'.

  The Horizon version is 3.14.0, the Queens release.
  I installed it with the devstack master version.

  What I suspect is 'heat-dashboard'.
  Before I added 'enable_plugin ~~ heat-dashboard', this didn't happen.
  But after adding it, this error appeared.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1755140/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1755131] [NEW] Form fields with 'switched' can't be set required=True

2018-03-12 Thread Wangliangyu
Public bug reported:

Some form fields like class AttachInterface in
dashboards/project/instances/forms.py:

class AttachInterface(forms.SelfHandlingForm):
specification_method = forms.ThemableChoiceField(
label=_("The way to specify an interface"),
initial=False,
widget=forms.ThemableSelectWidget(attrs={
'class': 'switchable',
'data-slug': 'specification_method',
}))
network = forms.ThemableChoiceField(
label=_("Network"),
required=False,
widget=forms.ThemableSelectWidget(attrs={
'class': 'switched',
'data-switch-on': 'specification_method',
'data-specification_method-network': _('Network'),
}))

When the value of the specification_method field is set to network, the
network field is required and should be marked required=True. But when the
value is set to port, the network field is not required, yet it is still
validated and an error is returned. We need the required star when the
network field is necessary, and the check skipped when it is not.
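One common pattern for this (a sketch, not Horizon's actual fix) is to keep
`required=False` on the switched field and enforce the requirement
conditionally, as a Django `clean()` override would; stripped of Django,
the logic reduces to:

```python
def clean(data):
    """Conditional 'required' check for a switched field (illustrative
    sketch of a Django clean() override, not Horizon's actual fix)."""
    errors = {}
    # 'network' is only mandatory when the switch field selects the
    # network specification method.
    if data.get("specification_method") == "network" and not data.get("network"):
        errors["network"] = "This field is required."
    return errors

# Required only when the switch selects the network method:
print(clean({"specification_method": "network", "network": ""}))
print(clean({"specification_method": "port", "network": ""}))
```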

** Affects: horizon
 Importance: Undecided
 Assignee: Wangliangyu (wangly)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Wangliangyu (wangly)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1755131

Title:
  Form fields with 'switched' can't be set required=True

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Some form fields like class AttachInterface in
  dashboards/project/instances/forms.py:

  class AttachInterface(forms.SelfHandlingForm):
  specification_method = forms.ThemableChoiceField(
  label=_("The way to specify an interface"),
  initial=False,
  widget=forms.ThemableSelectWidget(attrs={
  'class': 'switchable',
  'data-slug': 'specification_method',
  }))
  network = forms.ThemableChoiceField(
  label=_("Network"),
  required=False,
  widget=forms.ThemableSelectWidget(attrs={
  'class': 'switched',
  'data-switch-on': 'specification_method',
  'data-specification_method-network': _('Network'),
  }))

  When the value of the specification_method field is set to network, the
  network field is required and should be marked required=True. But when
  the value is set to port, the network field is not required, yet it is
  still validated and an error is returned. We need the required star when
  the network field is necessary, and the check skipped when it is not.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1755131/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750353] Re: _get_changed_synthetic_fields() does not guarantee returned fields to be updatable

2018-03-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/545799
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=d5f27524819bc95530d0d5856782546fd02d65f8
Submitter: Zuul
Branch:master

commit d5f27524819bc95530d0d5856782546fd02d65f8
Author: Lujin 
Date:   Mon Feb 19 20:06:07 2018 +0900

Ensure _get_changed_synthetic_fields() return updatable fields

Currently _get_changed_synthetic_fields() does not guarantee
returned fields to be updatable. This patch adds this guarantee.

Change-Id: I123ae390bec489a931180a2e33f4bf7b1d51edb2
Closes-Bug: #1750353


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1750353

Title:
  _get_changed_synthetic_fields() does not guarantee returned fields to
  be updatable

Status in neutron:
  Fix Released

Bug description:
  While revising [1], I discovered an issue with
  _get_changed_synthetic_fields(): it does not guarantee that returned
  fields are updatable.

  How to reproduce:
   Set a breakpoint in [2] and then run 
neutron.tests.unit.objects.test_ports.DistributedPortBindingIfaceObjTestCase.test_update_updates_from_db_object,
 the returned fields are
  -> return fields
  (Pdb) fields
  {'host': u'c2753a12ec', 'port_id': 'ae5700cd-f872-4694-bf36-92b919b0d3bf'}
  where 'host' and 'port_id' are not updatable.

  [1] https://review.openstack.org/#/c/544206/
  [2] 
https://github.com/openstack/neutron/blob/master/neutron/objects/base.py#L696
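The guarantee the patch adds amounts to intersecting the changed fields
with the object's declared updatable set; a minimal sketch (logic
illustrative, field values taken from the pdb output above):

```python
def changed_updatable_fields(changed, updatable):
    """Keep only the changed fields that the object declares updatable --
    the guarantee the patch adds (illustrative, not neutron's code)."""
    return {k: v for k, v in changed.items() if k in updatable}

changed = {"host": "c2753a12ec",
           "port_id": "ae5700cd-f872-4694-bf36-92b919b0d3bf"}
# 'host' and 'port_id' are not updatable on DistributedPortBinding, so
# both must be filtered out before the update is built.
filtered = changed_updatable_fields(changed, updatable={"status"})
```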

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1750353/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp