[Yahoo-eng-team] [Bug 1947813] Re: add host to aggregate api does not support concurrent calls

2024-04-25 Thread Michael Sherman
*** This bug is a duplicate of bug 1542491 ***
https://bugs.launchpad.net/bugs/1542491

** This bug has been marked a duplicate of bug 1542491
   Scheduler update_aggregates race causes incorrect aggregate information

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1947813

Title:
  add host to aggregate api does not support concurrent calls

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  When different hosts are added to one aggregate at the same time, the
  earlier host is overwritten by the later call.
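
  A hedged sketch of how the race can be reproduced from the CLI (aggregate
  and host names are illustrative; the symptom is that only one of the two
  hosts ends up in the aggregate):

  openstack aggregate create agg1
  # Fire two add-host calls at the same aggregate concurrently.
  openstack aggregate add host agg1 compute-1 &
  openstack aggregate add host agg1 compute-2 &
  wait
  # With the race present, this may list only one of the two hosts.
  openstack aggregate show agg1 -c hosts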

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1947813/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2045415] Re: ovn-octavia-provider lacks a sync script like Neutron

2023-12-01 Thread Michael Johnson
Marking the Octavia project as invalid. The OVN provider is a neutron
project and not under the Octavia team.

** Changed in: octavia
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2045415

Title:
  ovn-octavia-provider lacks a sync script like Neutron

Status in neutron:
  New
Status in octavia:
  Invalid

Bug description:
  Neutron has neutron-ovn-db-sync-util, but the Octavia ovn-octavia-provider
  does not have an equivalent, so in case of discrepancies (e.g. OVN NB DB
  entries were removed manually or the whole database was re-provisioned)
  there is no way to bring Octavia and the OVN NB database back in sync.
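
  For comparison, the Neutron utility referenced above is commonly invoked as
  follows (config file paths and the repair mode reflect typical deployments
  and may differ in yours):

  neutron-ovn-db-sync-util \
      --config-file /etc/neutron/neutron.conf \
      --config-file /etc/neutron/plugins/ml2/ml2_conf.ini \
      --ovn-neutron_sync_mode repair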

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2045415/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2023414] [NEW] Devices attached to running instances get reordered

2023-06-09 Thread Michael Quiniola
Public bug reported:

Openstack Focal/Ussuri
Libvirt

When a device (network or disk) is attached to a running instance and the
instance is then shut off (via the OS or Nova), the re-render of the XML file
reorders the devices. Ubuntu/Linux has the ability to match the network
interface to the correct device (when configured properly) but Windows does
not. Upon shutdown and start of these instances, the instance follows the
device enumeration order and the OS then attaches the wrong network
configuration to what it thinks is the correct interface.

Steps to reproduce:
1) Start an instance
2) Add another Network Interface to that instance while it is running.
3) Shutdown the instance
4) Start the instance again and observe the devices in the instance.
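
A hedged CLI sketch of these steps (image, flavor, and network names are
illustrative):

# Boot with one NIC, hot-attach a second, then stop/start and compare ordering.
openstack server create --image ubuntu --flavor m1.small --network net-a vm-1
openstack server add network vm-1 net-b
openstack server stop vm-1
openstack server start vm-1
openstack server show vm-1 -c addresses   # compare with the guest's view of its NICs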

On Windows machines this immediately causes network connection issues as
the wrong configuration is being used on the wrong device.

We have not tested this with Nova/VMWare.

Per @krenshaw:

"The PCI slots are being reordered when Nova rebuilds the VM after any
sort of hard stop (openstack server stop, evacuate, etc). This causes
both the MAC interchange and disk offline issues.

The reason this occurs is that Nova redefines the VM after stop events,
up to and including a hard reboot[0]. When this occurs, the VM is
regenerated with all currently attached devices, making them sequential
within the device type.

This causes reordering when an instance has had volumes and/or networks
attached and detached, as devices that are attached after boot are added
at the end of the list of PCI slots. On rebuild, these move to PCI slots
in sequential order, regardless of the attach/detach order.

Having checked the Nova code, Nova doesn't store PCI information for
"regular" non-PCI-passthrough devices. This includes NICs and volumes.
Adding this capability would be a feature request with no guarantee of
implementation."


We (@setuid @krenshaw) believe the cause is the metadata that nova passes to
libvirt when it re-renders the XML file.
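
One hedged way to observe the reordering from the hypervisor side (the libvirt
domain and server names are illustrative, and this assumes the domain stays
defined across the stop):

virsh dumpxml instance-0000001a > before.xml
openstack server stop test-vm && openstack server start test-vm
virsh dumpxml instance-0000001a > after.xml
# Compare the PCI slot assignments of the attached devices.
diff before.xml after.xml | grep "slot="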

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2023414

Title:
  Devices attached to running instances get reordered

Status in OpenStack Compute (nova):
  New

Bug description:
  Openstack Focal/Ussuri
  Libvirt

  When a device (network or disk) is attached to a running instance and the
  instance is then shut off (via the OS or Nova), the re-render of the XML
  file reorders the devices. Ubuntu/Linux has the ability to match the
  network interface to the correct device (when configured properly) but
  Windows does not. Upon shutdown and start of these instances, the instance
  follows the device enumeration order and the OS then attaches the wrong
  network configuration to what it thinks is the correct interface.

  Steps to reproduce:
  1) Start an instance
  2) Add another Network Interface to that instance while it is running.
  3) Shutdown the instance
  4) Start the instance again and observe the devices in the instance.

  On Windows machines this immediately causes network connection issues
  as the wrong configuration is being used on the wrong device.

  We have not tested this with Nova/VMWare.

  Per @krenshaw:

  "The PCI slots are being reordered when Nova rebuilds the VM after any
  sort of hard stop (openstack server stop, evacuate, etc). This causes
  both the MAC interchange and disk offline issues.

  The reason this occurs is that Nova redefines the VM after stop
  events, up to and including a hard reboot[0]. When this occurs, the VM
  is regenerated with all currently attached devices, making them
  sequential within the device type.

  This causes reordering when an instance has had volumes and/or
  networks attached and detached, as devices that are attached after
  boot are added at the end of the list of PCI slots. On rebuild, these
  move to PCI slots in sequential order, regardless of the attach/detach
  order.

  Having checked the Nova code, Nova doesn't store PCI information for
  "regular" non-PCI-passthrough devices. This includes NICs and volumes.
  Adding this capability would be a feature request with no guarantee of
  implementation."

  
  We (@setuid @krenshaw) believe the cause is the metadata that nova passes to
  libvirt when it re-renders the XML file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2023414/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2014226] [NEW] cloud-init crashes with IPv6 routes

2023-03-31 Thread Michael Camilli
Public bug reported:


I have static routes specified for two networks, and during cloud-init an error 
occurs as it tries to make use of NETMASK1.

  # Network 2
  eth1:
addresses: # List of IP[v4,v6] addresses to assign to this interface
  - 2001:db8:abcd:abce:fe::1000/96

routes: # List of static routes for this interface
  - to: 2001:db8:abcd:abce:fe::0/96
via: 2001:db8:abcd:bbce:fe::2

  # Network 3
  eth2:
addresses: # List of IP[v4,v6] addresses to assign to this interface
  - 2001:db8:abcd:abcf:fe::1000/96

routes: # List of static routes for this interface
  - to: 2001:db8:abcd:abcf:fe::0/96
via: 2001:db8:abcd:bbcf:fe::2

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/cloudinit/cmd/main.py", line 761, in 
status_wrapper
ret = functor(name, args)
  File "/usr/lib/python3.6/site-packages/cloudinit/cmd/main.py", line 433, in 
main_init
init.apply_network_config(bring_up=bring_up_interfaces)
  File "/usr/lib/python3.6/site-packages/cloudinit/stages.py", line 926, in 
apply_network_config
netcfg, bring_up=bring_up
  File "/usr/lib/python3.6/site-packages/cloudinit/distros/__init__.py", line 
233, in apply_network_config
self._write_network_state(network_state)
  File "/usr/lib/python3.6/site-packages/cloudinit/distros/__init__.py", line 
129, in _write_network_state
renderer.render_network_state(network_state)
  File "/usr/lib/python3.6/site-packages/cloudinit/net/sysconfig.py", line 
1011, in render_network_state
base_sysconf_dir, network_state, self.flavor, templates=templates
  File "/usr/lib/python3.6/site-packages/cloudinit/net/sysconfig.py", line 
1002, in _render_sysconfig
contents[cpath] = iface_cfg.routes.to_string(proto)
  File "/usr/lib/python3.6/site-packages/cloudinit/net/sysconfig.py", line 199, 
in to_string
netmask_value = str(self._conf["NETMASK" + index])
KeyError: 'NETMASK1'
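
A minimal sketch of the failure, not cloud-init's actual renderer code: the
sysconfig route writer looks up NETMASK<i> for every ADDRESS<i>, but an IPv6
route carries a prefix length rather than a dotted netmask, so the key is
missing:

conf = {
    "ADDRESS1": "2001:db8:abcd:abcf:fe::0",   # values taken from the routes above
    "GATEWAY1": "2001:db8:abcd:bbcf:fe::2",
}
index = "1"
netmask_value = str(conf["NETMASK" + index])  # KeyError: 'NETMASK1', as in the traceback
# A defensive lookup avoids the crash, e.g.:
netmask_value = conf.get("NETMASK" + index, "")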

Additional Info:
1. Using KVM on a private server
2. See the configuration details above that cause the issue. Note that in the
documentation I could only find an example of an IPv4 route, so it would be
helpful to add an IPv6 example as well.

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Attachment added: "cloud-init.tar.gz"
   
https://bugs.launchpad.net/bugs/2014226/+attachment/5659716/+files/cloud-init.tar.gz

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/2014226

Title:
  cloud-init crashes with IPv6 routes

Status in cloud-init:
  New

Bug description:
  
  I have static routes specified for two networks, and during cloud-init an 
error occurs as it tries to make use of NETMASK1.

# Network 2
eth1:
  addresses: # List of IP[v4,v6] addresses to assign to this interface
- 2001:db8:abcd:abce:fe::1000/96

  routes: # List of static routes for this interface
- to: 2001:db8:abcd:abce:fe::0/96
  via: 2001:db8:abcd:bbce:fe::2

# Network 3
eth2:
  addresses: # List of IP[v4,v6] addresses to assign to this interface
- 2001:db8:abcd:abcf:fe::1000/96

  routes: # List of static routes for this interface
- to: 2001:db8:abcd:abcf:fe::0/96
  via: 2001:db8:abcd:bbcf:fe::2

  Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/cloudinit/cmd/main.py", line 761, in 
status_wrapper
  ret = functor(name, args)
File "/usr/lib/python3.6/site-packages/cloudinit/cmd/main.py", line 433, in 
main_init
  init.apply_network_config(bring_up=bring_up_interfaces)
File "/usr/lib/python3.6/site-packages/cloudinit/stages.py", line 926, in 
apply_network_config
  netcfg, bring_up=bring_up
File "/usr/lib/python3.6/site-packages/cloudinit/distros/__init__.py", line 
233, in apply_network_config
  self._write_network_state(network_state)
File "/usr/lib/python3.6/site-packages/cloudinit/distros/__init__.py", line 
129, in _write_network_state
  renderer.render_network_state(network_state)
File "/usr/lib/python3.6/site-packages/cloudinit/net/sysconfig.py", line 
1011, in render_network_state
  base_sysconf_dir, network_state, self.flavor, templates=templates
File "/usr/lib/python3.6/site-packages/cloudinit/net/sysconfig.py", line 
1002, in _render_sysconfig
  contents[cpath] = iface_cfg.routes.to_string(proto)
File "/usr/lib/python3.6/site-packages/cloudinit/net/sysconfig.py", line 
199, in to_string
  netmask_value = str(self._conf["NETMASK" + index])
  KeyError: 'NETMASK1'

  Additional Info:
  1. Using KVM on a private server
  2. See the configuration details above that cause the issue. Note that in
  the documentation I could only find an example of an IPv4 route, so it would
  be helpful to add an IPv6 example as well.

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 2011454] Re: TypeError: load() missing 1 required positional argument: 'Loader'

2023-03-14 Thread Michael Hudson-Doyle
*** This bug is a duplicate of bug 2009746 ***
https://bugs.launchpad.net/bugs/2009746

I think this is a bug in cloud-init; it looks like it is not compatible
with PyYAML 6.
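
For reference, a minimal illustration of the incompatibility (PyYAML 6 made
the Loader argument of yaml.load() mandatory; the file name is illustrative):

import yaml

with open("cloud.cfg") as f:
    # cfg = yaml.load(f)  # works on PyYAML < 6, fails on 6.0 with exactly the
    #                     # TypeError in this bug's title
    cfg = yaml.safe_load(f)                     # works on both 5.x and 6.x
    # or: cfg = yaml.load(f, Loader=yaml.SafeLoader)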

** Also affects: cloud-init
   Importance: Undecided
   Status: New

** This bug has been marked a duplicate of bug 2009746
   dpkg-reconfigure cloud-init: yaml.load errors during MAAS deployment of
Ubuntu 23.04 (Lunar)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/2011454

Title:
  TypeError: load() missing 1 required positional argument: 'Loader'

Status in cloud-init:
  New
Status in curtin:
  New

Bug description:
  I'm seeing failures deploying lunar to an arm64 server, curtin
  2.1-0ubuntu1~22.04.1, MAAS 3.3:

 Running command ['unshare', '--help'] with allowed return codes [0] 
(capture=True)
  Running command ['unshare', '--fork', '--pid', '--', 'chroot', 
'/tmp/tmp6un94l9v/target', 'lsb_release', '--all'] with allowed return codes 
[0] (capture=True)
  Running command ['unshare', '--fork', '--pid', '--', 'chroot', 
'/tmp/tmp6un94l9v/target', 'dpkg', '--print-architecture'] with allowed return 
codes [0] (capture=True)
  got primary mirror: None
  got security mirror: None
  Apt Mirror info: {'PRIMARY': 'http://ports.ubuntu.com/ubuntu-ports', 
'SECURITY': 'http://ports.ubuntu.com/ubuntu-ports', 'MIRROR': 
'http://ports.ubuntu.com/ubuntu-ports'}
  Applying debconf selections
  Running command ['unshare', '--fork', '--pid', '--', 'chroot', 
'/tmp/tmp6un94l9v/target', 'debconf-set-selections'] with allowed return codes 
[0] (capture=True)
  Running command ['unshare', '--fork', '--pid', '--', 'chroot', 
'/tmp/tmp6un94l9v/target', 'dpkg-query', '--list'] with allowed return codes 
[0] (capture=True)
  unconfiguring cloud-init
  cleaning cloud-init config from: 
['/tmp/tmp6un94l9v/target/etc/cloud/cloud.cfg.d/90_dpkg.cfg']
  Running command ['unshare', '--fork', '--pid', '--', 'chroot', 
'/tmp/tmp6un94l9v/target', 'dpkg-reconfigure', '--frontend=noninteractive', 
'cloud-init'] with allowed return codes [0] (capture=True)
  finish: 
cmd-install/stage-curthooks/builtin/cmd-curthooks/writing-apt-config: FAIL: 
configuring apt configuring apt
  finish: cmd-install/stage-curthooks/builtin/cmd-curthooks: FAIL: 
curtin command curthooks
  Traceback (most recent call last):
File "/curtin/curtin/commands/main.py", line 202, in main
  ret = args.func(args)
^^^
File "/curtin/curtin/commands/curthooks.py", line 1886, in curthooks
  builtin_curthooks(cfg, target, state)
File "/curtin/curtin/commands/curthooks.py", line 1692, in 
builtin_curthooks
  do_apt_config(cfg, target)
File "/curtin/curtin/commands/curthooks.py", line 97, in 
do_apt_config
  apt_config.handle_apt(apt_cfg, target)
File "/curtin/curtin/commands/apt_config.py", line 73, in handle_apt
  apply_debconf_selections(cfg, target)
File "/curtin/curtin/commands/apt_config.py", line 167, in 
apply_debconf_selections
  dpkg_reconfigure(need_reconfig, target=target)
File "/curtin/curtin/commands/apt_config.py", line 133, in 
dpkg_reconfigure
  util.subp(['dpkg-reconfigure', '--frontend=noninteractive'] +
File "/curtin/curtin/util.py", line 275, in subp
  return _subp(*args, **kwargs)
 ^^
File "/curtin/curtin/util.py", line 139, in _subp
  raise ProcessExecutionError(stdout=out, stderr=err,
  curtin.util.ProcessExecutionError: Unexpected error while running 
command.
  Command: ['unshare', '--fork', '--pid', '--', 'chroot', 
'/tmp/tmp6un94l9v/target', 'dpkg-reconfigure', '--frontend=noninteractive', 
'cloud-init']
  Exit code: 1
  Reason: -
  Stdout: ''
  Stderr: Traceback (most recent call last):
File "", line 23, in 
  TypeError: load() missing 1 required positional argument: 
'Loader'
  
  Unexpected error while running command.
  Command: ['unshare', '--fork', '--pid', '--', 'chroot', 
'/tmp/tmp6un94l9v/target', 'dpkg-reconfigure', '--frontend=noninteractive', 
'cloud-init']
  Exit code: 1
  Reason: -
  Stdout: ''
  Stderr: Traceback (most recent call last):
File "", line 23, in 
  TypeError: load() missing 1 required positional argument: 
'Loader'
  
  
  Stderr: ''

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/2011454/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : 

[Yahoo-eng-team] [Bug 1990987] [NEW] keystone-manage segmentation fault on CentOS 9 Stream

2022-09-27 Thread Michael Johnson
Public bug reported:

When running wallaby devstack on a fresh build of CentOS 9 Stream,
keystone-manage causes a segmentation fault and stops the install.

python3-3.9.13-3.el9.x86_64

commit a9e81626c5e9dac897759c5f66c7ae1b4efa3c6d (HEAD -> stable/wallaby, 
origin/stable/wallaby)
Merge: 5633be211f edb8bcb029
Author: Zuul 
Date:   Wed Sep 7 02:21:04 2022 +

Merge "reenable greendns in nova." into stable/wallaby

[16313.919417] keystone-manage[105312]: segfault at 7bc20d57dec9 ip 
7fc20d351679 sp 7fff8cdba3f0 error 4 in 
libpython3.9.so.1.0[7fc20d2a4000+1b5000]
[16313.919431] Code: 83 ec 08 48 8b 5f 10 48 83 eb 01 78 2c 4d 39 f4 75 3f 0f 
1f 80 00 00 00 00 49 8b 47 18 48 8b 2c d8 48 85 ed 74 e1 48 8b 55 08  82 a9 
00 00 00 40 75 3e 48 83 eb 01 73 e0 31 c0 48 83 c4 08 5b

/opt/stack/devstack/lib/keystone: line 575: 105312 Segmentation fault
(core dumped) $KEYSTONE_BIN_DIR/keystone-manage bootstrap --bootstrap-
username admin --bootstrap-password "$ADMIN_PASSWORD" --bootstrap-
project-name admin --bootstrap-role-name admin --bootstrap-service-name
keystone --bootstrap-region-id "$REGION_NAME" --bootstrap-admin-url
"$KEYSTONE_AUTH_URI" --bootstrap-public-url "$KEYSTONE_SERVICE_URI"

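One generic way to get a usable backtrace from a crash like this, assuming
systemd-coredump is enabled and python3 debug symbols are available (standard
tooling, nothing devstack-specific):

coredumpctl list keystone-manage
coredumpctl gdb keystone-manage
# inside gdb:
#   (gdb) thread apply all bt
#   (gdb) py-bt          # requires the python3 gdb extensions
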
** Affects: keystone
 Importance: Undecided
 Status: New

** Project changed: nova => keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1990987

Title:
  keystone-manage segmentation fault on CentOS 9 Stream

Status in OpenStack Identity (keystone):
  New

Bug description:
  When running wallaby devstack on a fresh build of CentOS 9 Stream,
  keystone-manage causes a segmentation fault and stops the install.

  python3-3.9.13-3.el9.x86_64

  commit a9e81626c5e9dac897759c5f66c7ae1b4efa3c6d (HEAD -> stable/wallaby, 
origin/stable/wallaby)
  Merge: 5633be211f edb8bcb029
  Author: Zuul 
  Date:   Wed Sep 7 02:21:04 2022 +

  Merge "reenable greendns in nova." into stable/wallaby

  [16313.919417] keystone-manage[105312]: segfault at 7bc20d57dec9 ip 
7fc20d351679 sp 7fff8cdba3f0 error 4 in 
libpython3.9.so.1.0[7fc20d2a4000+1b5000]
  [16313.919431] Code: 83 ec 08 48 8b 5f 10 48 83 eb 01 78 2c 4d 39 f4 75 3f 0f 
1f 80 00 00 00 00 49 8b 47 18 48 8b 2c d8 48 85 ed 74 e1 48 8b 55 08  82 a9 
00 00 00 40 75 3e 48 83 eb 01 73 e0 31 c0 48 83 c4 08 5b

  /opt/stack/devstack/lib/keystone: line 575: 105312 Segmentation fault
  (core dumped) $KEYSTONE_BIN_DIR/keystone-manage bootstrap --bootstrap-
  username admin --bootstrap-password "$ADMIN_PASSWORD" --bootstrap-
  project-name admin --bootstrap-role-name admin --bootstrap-service-
  name keystone --bootstrap-region-id "$REGION_NAME" --bootstrap-admin-
  url "$KEYSTONE_AUTH_URI" --bootstrap-public-url
  "$KEYSTONE_SERVICE_URI"

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1990987/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1988069] [NEW] neutron-dhcp-agent fails when small tenant network mtu is set

2022-08-29 Thread Michael Sherman
Public bug reported:

High level description: 
When a user creates a tenant network with a very small MTU (in our case 70), 
neutron-dhcp-agent stops updating the dnsmasq configuration, causing DHCP 
issues for all networks.

Pre-conditions:
Neutron is using the openvswitch, baremetal, and networking-generic-switch 
mechanism drivers.
A physical network named `physnet1` is configured, with MTU=9000

Step-by-step reproduction steps:
As an admin user, run:

# Create "normal" network and subnet
openstack network create --provider-network-type vlan 
--provider-physical-network physnet1 --mtu 1500 test-net-1500
openstack subnet create --subnet-range 10.100.10.0/24 --dhcp --network 
test-net-1500 test-subnet-1500

# Create "small MTU" network and subnet
openstack network create --provider-network-type vlan 
--provider-physical-network physnet1 --mtu 70 test-net-70
openstack subnet create --subnet-range 10.100.11.0/24 --dhcp --network 
test-net-70 test-subnet-70

# attempt to launch an instance on the "normal" network
openstack server create --image Ubuntu --flavor Baremetal --network 
test-net-1500

Expected output: what did you hope to see?
We expected to see neutron-dhcp-agent update the dnsmasq configuration, which 
would then serve requests from the instances.

* Actual output: did the system silently fail (in this case log traces are 
useful)?
OpenStack commands complete successfully, but the instance never receives a
response to its DHCP requests. The neutron-dhcp-agent logs show:
https://paste.opendev.org/show/b4r0XCu5KpguM72bnh0u/

Version:
  ** OpenStack version "stable/xena", hash 
bc1dd6939d197d15799aaf252049f76442866c21
  ** Linux distro, kernel. Ubuntu 20.04
  ** Containers built with Kolla, and deployed via Kolla-Ansible

* Environment: 
Single node deployment, all services (core, networking, database, etc.) on one 
node.
All compute-nodes are baremetal via Ironic.

* Perceived severity: is this a blocker for you?
High, as non-admin users can trigger a DHCP outage affecting all users.
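
A hedged diagnostic sketch for the affected network (the UUID is a
placeholder); it checks the DHCP agent's namespace and the dnsmasq state that
stopped being refreshed:

NET_ID=<network-uuid>
ip netns exec qdhcp-$NET_ID ip link show   # tap device MTU follows the network MTU
ps aux | grep "dnsmasq.*$NET_ID"           # is dnsmasq still running for this network?
cat /var/lib/neutron/dhcp/$NET_ID/opts     # last options rendered for dnsmasq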

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1988069

Title:
  neutron-dhcp-agent fails when small tenant network mtu is set

Status in neutron:
  New

Bug description:
  High level description: 
  When a user creates a tenant network with a very small MTU (in our case 70), 
neutron-dhcp-agent stops updating the dnsmasq configuration, causing DHCP 
issues for all networks.

  Pre-conditions:
  Neutron is using the openvswitch, baremetal, and networking-generic-switch 
mechanism drivers.
  A physical network named `physnet1` is configured, with MTU=9000

  Step-by-step reproduction steps:
  As an admin user, run:

  # Create "normal" network and subnet
  openstack network create --provider-network-type vlan 
--provider-physical-network physnet1 --mtu 1500 test-net-1500
  openstack subnet create --subnet-range 10.100.10.0/24 --dhcp --network 
test-net-1500 test-subnet-1500

  # Create "small MTU" network and subnet
  openstack network create --provider-network-type vlan 
--provider-physical-network physnet1 --mtu 70 test-net-70
  openstack subnet create --subnet-range 10.100.11.0/24 --dhcp --network 
test-net-70 test-subnet-70

  # attempt to launch an instance on the "normal" network
  openstack server create --image Ubuntu --flavor Baremetal --network 
test-net-1500

  Expected output: what did you hope to see?
  We expected to see neutron-dhcp-agent update the dnsmasq configuration, which 
would then serve requests from the instances.

  * Actual output: did the system silently fail (in this case log traces are 
useful)?
  OpenStack commands complete successfully, but the instance never receives a
  response to its DHCP requests. The neutron-dhcp-agent logs show:
  https://paste.opendev.org/show/b4r0XCu5KpguM72bnh0u/

  Version:
** OpenStack version "stable/xena", hash 
bc1dd6939d197d15799aaf252049f76442866c21
** Linux distro, kernel. Ubuntu 20.04
** Containers built with Kolla, and deployed via Kolla-Ansible

  * Environment: 
  Single node deployment, all services (core, networking, database, etc.) on 
one node.
  All compute-nodes are baremetal via Ironic.

  * Perceived severity: is this a blocker for you?
  High, as non-admin users can trigger a DHCP outage affecting all users.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1988069/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1872124] Re: Please integrate ubuntu-drivers --gpgpu into Ubuntu Server

2022-08-25 Thread Michael Hudson-Doyle
The version of subiquity in 22.04.1 supports this now.

** Changed in: subiquity
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1872124

Title:
  Please integrate ubuntu-drivers --gpgpu into Ubuntu Server

Status in cloud-init:
  Incomplete
Status in MAAS:
  Incomplete
Status in subiquity:
  Fix Released
Status in ubuntu-drivers-common package in Ubuntu:
  New
Status in ubuntu-meta package in Ubuntu:
  New

Bug description:
  Could subiquity provide an option in the UI to install and execute
  ubuntu-drivers-common on the target? The use case I'm interested in is
  an "on-rails" installation of Nvidia drivers for servers being
  installed for deep learning workloads.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1872124/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1977524] Re: Wrong redirect after deleting zone from Zone Overview pane

2022-06-08 Thread Michael Johnson
** Also affects: designate-dashboard
   Importance: Undecided
   Status: New

** No longer affects: designate

** Changed in: designate-dashboard
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1977524

Title:
  Wrong redirect after deleting zone from Zone Overview pane

Status in Designate Dashboard:
  Confirmed
Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When deleting a zone from Zones -> specific zone -> Overview pane I get a
  "page does not exist" error.
  After the notification that the zone is being removed, the website redirects
  to /dashboard/dashboard/project/dnszones, which has a duplicated dashboard
  path.
  When deleting from the zones list view everything works fine.

  Tested on an Ussuri environment, but the code seems to be unchanged in newer
  releases.
  I've tried to apply the bugfixes for reloading the zones/floating-ip panes
  but with no effect for this case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate-dashboard/+bug/1977524/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1977524] Re: Wrong redirect after deleting zone from Zone Overview pane

2022-06-03 Thread Michael Johnson
** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1977524

Title:
  Wrong redirect after deleting zone from Zone Overview pane

Status in Designate:
  New
Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When deleting a zone from Zones -> specific zone -> Overview pane I get a
  "page does not exist" error.
  After the notification that the zone is being removed, the website redirects
  to /dashboard/dashboard/project/dnszones, which has a duplicated dashboard
  path.
  When deleting from the zones list view everything works fine.

  Tested on an Ussuri environment, but the code seems to be unchanged in newer
  releases.
  I've tried to apply the bugfixes for reloading the zones/floating-ip panes
  but with no effect for this case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate/+bug/1977524/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1259292] Re: Some tests use assertEqual(observed, expected), the argument order is wrong

2022-05-03 Thread Michael Johnson
** Changed in: python-designateclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1259292

Title:
  Some tests use assertEqual(observed, expected), the argument order is
  wrong

Status in Astara:
  Fix Released
Status in Bandit:
  Fix Released
Status in Barbican:
  Fix Released
Status in Blazar:
  Fix Released
Status in Ceilometer:
  Invalid
Status in Cinder:
  Fix Released
Status in congress:
  Fix Released
Status in daisycloud-core:
  New
Status in Designate:
  Fix Released
Status in OpenStack Backup/Restore and DR (Freezer):
  In Progress
Status in Glance:
  Fix Released
Status in glance_store:
  Fix Released
Status in Higgins:
  New
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Magnum:
  Fix Released
Status in OpenStack Shared File Systems Service (Manila):
  Fix Released
Status in Mistral:
  Fix Released
Status in Murano:
  Fix Released
Status in networking-calico:
  Fix Released
Status in networking-infoblox:
  In Progress
Status in networking-l2gw:
  Fix Released
Status in networking-sfc:
  Fix Released
Status in quark:
  In Progress
Status in OpenStack Compute (nova):
  Won't Fix
Status in os-brick:
  Fix Released
Status in PBR:
  Fix Released
Status in pycadf:
  Fix Released
Status in python-barbicanclient:
  Fix Released
Status in python-ceilometerclient:
  Invalid
Status in python-cinderclient:
  Fix Released
Status in python-designateclient:
  Fix Released
Status in Glance Client:
  Fix Released
Status in python-mistralclient:
  Fix Released
Status in python-solumclient:
  Fix Released
Status in Python client library for Zaqar:
  Fix Released
Status in Rally:
  In Progress
Status in Sahara:
  Fix Released
Status in Solum:
  Fix Released
Status in sqlalchemy-migrate:
  In Progress
Status in SWIFT:
  In Progress
Status in tacker:
  Fix Released
Status in tempest:
  Invalid
Status in zaqar:
  Fix Released

Bug description:
  The test cases will produce a confusing error message if the tests
  ever fail, so this is worth fixing.
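
  A minimal illustration of the convention being enforced, using a
  hypothetical test with the standard library's unittest:

  import unittest

  def double(x):            # stand-in for the code under test
      return 2 * x

  class ExampleTest(unittest.TestCase):
      def test_argument_order(self):
          observed = double(21)
          # Wrong: self.assertEqual(observed, 42) -- a failure message would
          # then label the actual value as the expectation, which is the
          # confusion this bug describes. Convention: expected first,
          # observed second.
          self.assertEqual(42, observed)

  if __name__ == "__main__":
      unittest.main()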

To manage notifications about this bug go to:
https://bugs.launchpad.net/astara/+bug/1259292/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1970679] [NEW] neutron-tempest-plugin-designate-scenario cross project job is failing on OVN

2022-04-27 Thread Michael Johnson
Public bug reported:

The cross-project neutron-tempest-plugin-designate-scenario job is
failing during the Designate gate runs due to an OVN failure.

+ lib/neutron_plugins/ovn_agent:start_ovn:698 :   wait_for_sock_file 
/var/run/openvswitch/ovnnb_db.sock
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:173 :   local count=0
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=1
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 1 -gt 5 ']'
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=2
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 2 -gt 5 ']'
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=3
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 3 -gt 5 ']'
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=4
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 4 -gt 5 ']'
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=5
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 5 -gt 5 ']'
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=6
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 6 -gt 5 ']'
+ lib/neutron_plugins/ovn_agent:wait_for_sock_file:178 :   die 178 'Socket 
/var/run/openvswitch/ovnnb_db.sock not found'
+ functions-common:die:264 :   local exitcode=0
[Call Trace]
./stack.sh:1284:start_ovn_services
/opt/stack/devstack/lib/neutron-legacy:516:start_ovn
/opt/stack/devstack/lib/neutron_plugins/ovn_agent:698:wait_for_sock_file
/opt/stack/devstack/lib/neutron_plugins/ovn_agent:178:die
[ERROR] /opt/stack/devstack/lib/neutron_plugins/ovn_agent:178 Socket 
/var/run/openvswitch/ovnnb_db.sock not found
exit_trap: cleaning up child processes

An example job run is here:
https://zuul.opendev.org/t/openstack/build/b014e50e018d426b9367fd3219ed489e
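
The helper driving that loop can be read back out of the xtrace above; a
hedged sketch (not the verbatim devstack source; die comes from devstack's
functions-common):

function wait_for_sock_file {
    local count=0
    while [ ! -S "$1" ]; do
        sleep 1
        count=$((count + 1))
        if [ $count -gt 5 ]; then
            die $LINENO "Socket $1 not found"
        fi
    done
}

So the job fails whenever ovnnb_db.sock has not appeared within roughly five
seconds of OVN being started.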

** Affects: neutron
 Importance: Critical
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1970679

Title:
  neutron-tempest-plugin-designate-scenario cross project job is failing
  on OVN

Status in neutron:
  New

Bug description:
  The cross-project neutron-tempest-plugin-designate-scenario job is
  failing during the Designate gate runs due to an OVN failure.

  + lib/neutron_plugins/ovn_agent:start_ovn:698 :   wait_for_sock_file 
/var/run/openvswitch/ovnnb_db.sock
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:173 :   local count=0
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 1 -gt 5 ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=2
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 2 -gt 5 ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=3
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 3 -gt 5 ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=4
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 4 -gt 5 ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 

[Yahoo-eng-team] [Bug 1915611] [NEW] 500 error on openstack server create

2021-02-13 Thread Michael Potter
Public bug reported:

Description
===

I used the "openstack server create" command on a two-node test cluster.

I verified the neutron and nova user and MySQL passwords and network
connectivity, disabled AppArmor, and loaded the geneve kernel module.


Steps to reproduce
==
run the command:

root@controller:~# openstack server create --flavor m1.tiny --image
cirros   --nic net-id=provider --security-group default   --key-name
mykey test01


Expected result
===
New server instance created. 


Actual result
=
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-4655d135-6509-46e4-b918-27c31ff705a3)


Environment
===
Debian 10.7
repo: buster-ussuri-backports
nova-common 2:21.1.1-1~bpo10+1
QEMU hypervisor
Neutron option 2 w/OpenVSwitch
Disk storage
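
The traceback below bottoms out in a keystoneauth Unauthorized (HTTP 401)
raised while nova builds the server, which commonly points at a mismatch
between the service credentials in nova.conf and what is registered in
keystone. A hypothetical sketch of the section usually involved (all values
are placeholders, not taken from this report):

[neutron]
auth_url = http://controller:5000/v3
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS
region_name = RegionOne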


Logs & Configs
==
nova-api.log

2021-02-13 09:05:28.744 1426 INFO nova.api.openstack.wsgi 
[req-a78ac65e-a3ff-4a7d-b9dd-b2082e319be0 24f68ea810674751a3f3f81641402153 
677c973f08a74a4d85368d3bfae359a7 - default default] HTTP exception thrown: 
Flavor m1.tiny could not be found.
2021-02-13 09:05:28.746 1426 INFO nova.api.openstack.requestlog 
[req-a78ac65e-a3ff-4a7d-b9dd-b2082e319be0 24f68ea810674751a3f3f81641402153 
677c973f08a74a4d85368d3bfae359a7 - default default] 172.29.236.13 "GET 
/v2.1/flavors/m1.tiny" status: 404 len: 80 microversion: 2.1 time: 0.817156
[pid: 1426|app: 0|req: 10/37] 172.29.236.13 () {32 vars in 626 bytes} [Sat Feb 
13 09:05:27 2021] GET /v2.1/flavors/m1.tiny => generated 80 bytes in 853 msecs 
(HTTP/1.1 404) 7 headers in 339 bytes (1 switches on core 0)
2021-02-13 09:05:28.812 1427 INFO nova.api.openstack.requestlog 
[req-b06b3fda-52fe-4ebf-bb91-9583eea28eec 24f68ea810674751a3f3f81641402153 
677c973f08a74a4d85368d3bfae359a7 - default default] 172.29.236.13 "GET 
/v2.1/flavors" status: 200 len: 187 microversion: 2.1 time: 0.056631
[pid: 1427|app: 0|req: 10/38] 172.29.236.13 () {32 vars in 610 bytes} [Sat Feb 
13 09:05:28 2021] GET /v2.1/flavors => generated 187 bytes in 65 msecs 
(HTTP/1.1 200) 7 headers in 317 bytes (1 switches on core 0)
2021-02-13 09:05:28.870 1425 INFO nova.api.openstack.requestlog 
[req-e9b74b79-3c78-4393-b363-5ac72ecaa096 24f68ea810674751a3f3f81641402153 
677c973f08a74a4d85368d3bfae359a7 - default default] 172.29.236.13 "GET 
/v2.1/flavors/0" status: 200 len: 354 microversion: 2.1 time: 0.046618
[pid: 1425|app: 0|req: 10/39] 172.29.236.13 () {32 vars in 614 bytes} [Sat Feb 
13 09:05:28 2021] GET /v2.1/flavors/0 => generated 354 bytes in 55 msecs 
(HTTP/1.1 200) 7 headers in 317 bytes (1 switches on core 0)
2021-02-13 09:05:33.879 1428 ERROR nova.api.openstack.wsgi 
[req-42b91a28-eaff-47aa-a7af-0d07b2ba768d 24f68ea810674751a3f3f81641402153 
677c973f08a74a4d85368d3bfae359a7 - default default] Unexpected exception in API 
method: keystoneauth1.exceptions.http.Unauthorized: The request you have made 
requires authentication. (HTTP 401) (Request-ID: 
req-cf6d651d-9e14-45f7-a87c-4fd8f970046c)
2021-02-13 09:05:33.879 1428 ERROR nova.api.openstack.wsgi Traceback (most 
recent call last):
2021-02-13 09:05:33.879 1428 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python3/dist-packages/nova/api/openstack/wsgi.py", line 671, in 
wrapped
2021-02-13 09:05:33.879 1428 ERROR nova.api.openstack.wsgi return f(*args, 
**kwargs)
2021-02-13 09:05:33.879 1428 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python3/dist-packages/nova/api/validation/__init__.py", line 110, in 
wrapper
2021-02-13 09:05:33.879 1428 ERROR nova.api.openstack.wsgi return 
func(*args, **kwargs)
2021-02-13 09:05:33.879 1428 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python3/dist-packages/nova/api/validation/__init__.py", line 110, in 
wrapper
2021-02-13 09:05:33.879 1428 ERROR nova.api.openstack.wsgi return 
func(*args, **kwargs)
2021-02-13 09:05:33.879 1428 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python3/dist-packages/nova/api/validation/__init__.py", line 110, in 
wrapper
2021-02-13 09:05:33.879 1428 ERROR nova.api.openstack.wsgi return 
func(*args, **kwargs)
2021-02-13 09:05:33.879 1428 ERROR nova.api.openstack.wsgi   [Previous line 
repeated 9 more times]
2021-02-13 09:05:33.879 1428 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python3/dist-packages/nova/api/openstack/compute/servers.py", line 
697, in create
2021-02-13 09:05:33.879 1428 ERROR nova.api.openstack.wsgi **create_kwargs)
2021-02-13 09:05:33.879 1428 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python3/dist-packages/nova/hooks.py", line 154, in inner
2021-02-13 09:05:33.879 1428 ERROR nova.api.openstack.wsgi rv = f(*args, 
**kwargs)
2021-02-13 09:05:33.879 1428 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python3/dist-packages/nova/compute/api.py", line 1997, in create
2021-02-13 09:05:33.879 1428 ERROR nova.api.openstack.wsgi 
requested_hypervisor_hostname=requested_hypervisor_hostname)
2021-02-13 

[Yahoo-eng-team] [Bug 1915460] [NEW] no way to suppress host key info on console

2021-02-11 Thread Michael Hudson-Doyle
Public bug reported:

cc_keys_to_console does not have any way to prevent the keys being
written to the console (beyond deleting write-ssh-key-fingerprints,
which while actually ok for my use case is gross). I propose adding a
"no_keys_to_console" config option, defaulting to False, that suppresses
the output (modelled on what cc_ssh_authkey_fingerprints does).
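
A hedged sketch of how the proposed knob might look in user data, next to the
existing option it is modelled on (no_keys_to_console did not exist at the
time of this report; the name is only the one proposed above):

#cloud-config
# Existing cc_ssh_authkey_fingerprints option:
no_ssh_fingerprints: true
# Proposed (hypothetical at the time of writing):
no_keys_to_console: true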

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1915460

Title:
  no way to suppress host key info on console

Status in cloud-init:
  New

Bug description:
  cc_keys_to_console does not have any way to prevent the keys being
  written to the console (beyond deleting write-ssh-key-fingerprints,
  which while actually ok for my use case is gross). I propose adding a
  "no_keys_to_console" config option, defaulting to False, that
  suppresses the output (modelled on what cc_ssh_authkey_fingerprints
  does).

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1915460/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1915077] [NEW] genisoimage may be going away

2021-02-08 Thread Michael Hudson-Doyle
Public bug reported:

It seems that cdrkit, which is where genisoimage comes from, is dead
upstream and is likely to be removed from debian:
https://lists.debian.org/debian-cloud/2021/02/msg00011.html

Plenty of cloud-init docs and tutorials refer to genisoimage to create seed
ISOs; it may be time to find something else.
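
For context, the genisoimage invocation those docs typically show, together
with one possible drop-in replacement via xorriso's mkisofs emulation (option
spellings follow the respective man pages; treat this as a sketch, not tested
guidance):

# Today's common recipe for a NoCloud seed ISO:
genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data

# Roughly equivalent with xorriso:
xorriso -as mkisofs -o seed.iso -V cidata -J -r user-data meta-data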

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1915077

Title:
  genisoimage may be going away

Status in cloud-init:
  New

Bug description:
  It seems that cdrkit, which is where genisoimage comes from, is dead
  upstream and is likely to be removed from debian:
  https://lists.debian.org/debian-cloud/2021/02/msg00011.html

  Plenty of cloud-init docs and tutorials refer to genisoimage to create seed
  ISOs; it may be time to find something else.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1915077/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1902276] [NEW] libvirtd going into a tight loop causing instances to not transition to ACTIVE

2020-10-30 Thread Michael Johnson
Public bug reported:

Description
===
This is current master branch (wallaby) of OpenStack.

We have seen this regularly, but it's intermittent.

We are seeing nova instances that do not transition to ACTIVE inside
five minutes. Investigating this led us to find that libvirtd seems to
be going into a tight loop on an instance delete.

The 136MB log is here:
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_c77/759973/3/check/octavia-v2
-dsvm-scenario/c77fe63/controller/logs/libvirt/libvirtd_log.txt

The overall job logs are here: 
https://zuul.opendev.org/t/openstack/build/c77fe63a94ef4298872ad5f40c5df7d4/logs

When running the Octavia scenario test suite, we occasionally see nova
instances fail to become ACTIVE in a timely manner, causing timeouts and
failures. In investigating this issue we found the libvirtd log was
136MB.

Most of the file is full of this repeating:
2020-10-28 23:45:06.330+: 20852: debug : qemuMonitorIO:767 : Error on 
monitor internal error: End of file from qemu monitor
2020-10-28 23:45:06.330+: 20852: debug : qemuMonitorIO:788 : Triggering EOF 
callback
2020-10-28 23:45:06.330+: 20852: debug : qemuProcessHandleMonitorEOF:301 : 
Received EOF on 0x7f6278014ca0 'instance-0001'
2020-10-28 23:45:06.330+: 20852: debug : qemuProcessHandleMonitorEOF:305 : 
Domain is being destroyed, EOF is expected

Here is a snippet for the lead in to the repeated lines:
http://paste.openstack.org/show/799559/

It appears to be a tight loop, repeating many times per second.

Eventually it does stop and things seem to go back to normal in nova.

Here is the snippet of the end of the loop in the log:
http://paste.openstack.org/show/799560/

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1902276

Title:
  libvirtd going into a tight loop causing instances to not transition
  to ACTIVE

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  This is current master branch (wallaby) of OpenStack.

  We have seen this regularly, but it's intermittent.

  We are seeing nova instances that do not transition to ACTIVE inside
  five minutes. Investigating this led us to find that libvirtd seems to
  be going into a tight loop on an instance delete.

  The 136MB log is here:
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_c77/759973/3/check/octavia-v2
  -dsvm-scenario/c77fe63/controller/logs/libvirt/libvirtd_log.txt

  The overall job logs are here: 
  
https://zuul.opendev.org/t/openstack/build/c77fe63a94ef4298872ad5f40c5df7d4/logs

  When running the Octavia scenario test suite, we occasionally see nova
  instances fail to become ACTIVE in a timely manner, causing timeouts
  and failures. In investigating this issue we found the libvirtd log
  was 136MB.

  Most of the file is full of this repeating:
  2020-10-28 23:45:06.330+: 20852: debug : qemuMonitorIO:767 : Error on 
monitor internal error: End of file from qemu monitor
  2020-10-28 23:45:06.330+: 20852: debug : qemuMonitorIO:788 : Triggering 
EOF callback
  2020-10-28 23:45:06.330+: 20852: debug : qemuProcessHandleMonitorEOF:301 
: Received EOF on 0x7f6278014ca0 'instance-0001'
  2020-10-28 23:45:06.330+: 20852: debug : qemuProcessHandleMonitorEOF:305 
: Domain is being destroyed, EOF is expected

  Here is a snippet for the lead in to the repeated lines:
  http://paste.openstack.org/show/799559/

  It appears to be a tight loop, repeating many times per second.

  Eventually it does stop and things seem to go back to normal in nova.

  Here is the snippet of the end of the loop in the log:
  http://paste.openstack.org/show/799560/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1902276/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1898553] [NEW] nova-compute doesn't start due to libvirtd --listen restriction

2020-10-05 Thread Michael
Public bug reported:

Hello,

After upgrading from OpenStack Train to Ussuri, running on Ubuntu 18.04,
the nova-compute service doesn't start: the libvirtd service fails to start
because the --listen option is enabled in the /etc/default/libvirtd file,
and so nova-compute cannot reach a socket that doesn't exist.

I have found another bug report and patch that claim to have fixed this
issue, but it is not fixed, at least not in the packaged version of
OpenStack on Ubuntu 18.04
(https://bugs.launchpad.net/puppet-nova/+bug/1880619).

Is nova-compute able to use another mechanism to connect to libvirtd, and
thus not need the --listen option?

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1898553

Title:
  nova-compute doesn't start due to libvirtd --listen restriction

Status in OpenStack Compute (nova):
  New

Bug description:
  Hello,

  After upgrading from OpenStack Train to Ussuri, running on Ubuntu 18.04,
  the nova-compute service doesn't start: the libvirtd service fails to
  start because the --listen option is enabled in the /etc/default/libvirtd
  file, and so nova-compute cannot reach a socket that doesn't exist.

  I have found another bug report and patch that claim to have fixed this
  issue, but it is not fixed, at least not in the packaged version of
  OpenStack on Ubuntu 18.04
  (https://bugs.launchpad.net/puppet-nova/+bug/1880619).

  Is nova-compute able to use another mechanism to connect to libvirtd, and
  thus not need the --listen option?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1898553/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1894136] [NEW] [OVN Octavia Provider] OVN provider fails during listener delete

2020-09-03 Thread Michael Johnson
Public bug reported:

The OVN provider is consistently failing during a listener delete as
part of the member API tempest test tear down with a 'filedescriptor out
of range in select()' error.
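
Independent of the provider code, a minimal sketch of why select() raises this
error: descriptors numbered at or above FD_SETSIZE (normally 1024) cannot be
watched with select(), while poll() has no such limit. The limit-raising step
assumes the process's hard RLIMIT_NOFILE is above 1024:

import resource
import select
import socket

# Raise the soft fd limit so a descriptor above 1023 can actually be created.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))

# Burn descriptors until a new socket lands at fd >= 1024.
keep = []
while True:
    s = socket.socket()
    keep.append(s)
    if s.fileno() >= 1024:
        break

try:
    select.select([keep[-1]], [], [], 0)
except ValueError as exc:
    print(exc)                     # filedescriptor out of range in select()

p = select.poll()
p.register(keep[-1], select.POLLIN)
p.poll(0)                          # fine: poll() is not limited by FD_SETSIZE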

o-api logs snippet:

Sep 03 15:44:05.171297 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn 
[None req-9201aee8-9a5b-460c-bf8b-c6408d20aec7 tempest-MemberAPITest-903346660 
tempest-MemberAPITest-903346660] OVS database connection to OVN_Northbound 
failed with error: 'filedescriptor out of range in select()'. Verify that the 
OVS and OVN services are available and that the 'ovn_nb_connection' and 
'ovn_sb_connection' configuration options are correct.: ValueError: 
filedescriptor out of range in select()
Sep 03 15:44:05.171297 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn 
Traceback (most recent call last):
Sep 03 15:44:05.171297 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn   
File 
"/opt/stack/ovn-octavia-provider/ovn_octavia_provider/ovsdb/impl_idl_ovn.py", 
line 61, in start_connection
Sep 03 15:44:05.171297 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn
 self.ovsdb_connection.start()
Sep 03 15:44:05.171297 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn   
File 
"/usr/local/lib/python3.6/dist-packages/ovsdbapp/backend/ovs_idl/connection.py",
 line 79, in start
Sep 03 15:44:05.171297 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn
 idlutils.wait_for_change(self.idl, self.timeout)
Sep 03 15:44:05.171297 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn   
File 
"/usr/local/lib/python3.6/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", 
line 201, in wait_for_change
Sep 03 15:44:05.171297 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn
 ovs_poller.block()
Sep 03 15:44:05.171297 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn   
File "/usr/local/lib/python3.6/dist-packages/ovs/poller.py", line 231, in block
Sep 03 15:44:05.171297 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn
 events = self.poll.poll(self.timeout)
Sep 03 15:44:05.171297 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn   
File "/usr/local/lib/python3.6/dist-packages/ovs/poller.py", line 140, in poll
Sep 03 15:44:05.171297 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn
 timeout)
Sep 03 15:44:05.171297 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn 
ValueError: filedescriptor out of range in select()
Sep 03 15:44:05.171297 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR ovn_octavia_provider.ovsdb.impl_idl_ovn 
Sep 03 15:44:05.172746 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR octavia.api.drivers.driver_factory [None 
req-9201aee8-9a5b-460c-bf8b-c6408d20aec7 tempest-MemberAPITest-903346660 
tempest-MemberAPITest-903346660] Unable to load provider driver ovn due to: OVS 
database connection to OVN_Northbound failed with error: 'filedescriptor out of 
range in select()'. Verify that the OVS and OVN services are available and that 
the 'ovn_nb_connection' and 'ovn_sb_connection' configuration options are 
correct.: ovn_octavia_provider.ovsdb.impl_idl_ovn.OvsdbConnectionUnavailable: 
OVS database connection to OVN_Northbound failed with error: 'filedescriptor 
out of range in select()'. Verify that the OVS and OVN services are available 
and that the 'ovn_nb_connection' and 'ovn_sb_connection' configuration options 
are correct.
Sep 03 15:44:05.175074 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: ERROR wsme.api [None 
req-9201aee8-9a5b-460c-bf8b-c6408d20aec7 tempest-MemberAPITest-903346660 
tempest-MemberAPITest-903346660] Server-side error: "Provider 'ovn' was not 
found.". Detail:
Sep 03 15:44:05.175074 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: Traceback (most recent call last):
Sep 03 15:44:05.175074 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]:   File 
"/opt/stack/ovn-octavia-provider/ovn_octavia_provider/ovsdb/impl_idl_ovn.py", 
line 61, in start_connection
Sep 03 15:44:05.175074 ubuntu-bionic-inap-mtl01-0019624710 
devstack@o-api.service[10087]: self.ovsdb_connection.start()
Sep 03 15:44:05.175074 ubuntu-bionic-inap-mtl01-0019624710 

[Yahoo-eng-team] [Bug 1886116] [NEW] slaac no longer works on IPv6 tenant subnets

2020-07-02 Thread Michael Johnson
Public bug reported:

Nova instances no longer get an IPv6 address using slaac on tenant
subnets.

Using a standard devstack install with SERVICE_IP_VERSION="6" added, master
(Victoria).

[ml2]
tenant_network_types = vxlan
extension_drivers = port_security
mechanism_drivers = openvswitch,linuxbridge
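
For reference, the network and subnet shown below were created roughly like
this (a hedged sketch; the names and subnet range are taken from the output
below, other options may have differed):

openstack network create lb-mgmt-net
openstack subnet create --ip-version 6 \
  --ipv6-ra-mode slaac --ipv6-address-mode slaac \
  --subnet-range fd00:0:0:42::/64 \
  --network lb-mgmt-net lb-mgmt-subnet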


network:
+---+--+
| Field | Value|
+---+--+
| admin_state_up| UP   |
| availability_zone_hints   |  |
| availability_zones| nova |
| created_at| 2020-07-02T22:55:51Z |
| description   |  |
| dns_domain| None |
| id| e8258754-6a0b-40ea-abf6-c55b39845f62 |
| ipv4_address_scope| None |
| ipv6_address_scope| None |
| is_default| None |
| is_vlan_transparent   | None |
| location  | cloud='', project.domain_id='default',   |
|   | project.domain_name=,|
|   | project.id='08c84a34e4c34dacb3abbfe840edf6e3',   |
|   | project.name='admin', region_name='RegionOne',   |
|   | zone=|
| mtu   | 1450 |
| name  | lb-mgmt-net  |
| port_security_enabled | True |
| project_id| 08c84a34e4c34dacb3abbfe840edf6e3 |
| provider:network_type | vxlan|
| provider:physical_network | None |
| provider:segmentation_id  | 2|
| qos_policy_id | None |
| revision_number   | 2|
| router:external   | Internal |
| segments  | None |
| shared| False|
| status| ACTIVE   |
| subnets   | 2f17a970-09b1-410d-89de-c75b1e5f6eef |
| tags  |  |
| updated_at| 2020-07-02T22:55:52Z |
+---+--+

Subnet:
+--+---+
| Field| Value |
+--+---+
| allocation_pools | fd00:0:0:42::2-fd00::42::::   |
| cidr | fd00:0:0:42::/64  |
| created_at   | 2020-07-02T22:55:52Z  |
| description  |   |
| dns_nameservers  |   |
| dns_publish_fixed_ip | None  |
| enable_dhcp  | True  |
| gateway_ip   | fd00:0:0:42:: |
| host_routes  |   |
| id   | 2f17a970-09b1-410d-89de-c75b1e5f6eef  |
| ip_version   | 6 |
| ipv6_address_mode| slaac |
| ipv6_ra_mode | slaac |
| location | cloud='', project.domain_id='default',|
|  | project.domain_name=, |
|  | project.id='08c84a34e4c34dacb3abbfe840edf6e3',|
|  | project.name='admin', region_name='RegionOne', zone=  |
| name | lb-mgmt-subnet|
| network_id   | 

[Yahoo-eng-team] [Bug 1869155] Re: When installing with subiquity, the generated network config uses the macaddress keyword on s390x (where MAC addresses are not necessarily stable across reboots)

2020-05-07 Thread Michael Hudson-Doyle
** Changed in: subiquity
   Status: New => Fix Released

** Changed in: initramfs-tools (Ubuntu)
   Status: New => Fix Released

** Changed in: ubuntu-z-systems
   Status: New => Fix Released

** Changed in: cloud-init
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1869155

Title:
  When installing with subiquity, the generated network config uses the
  macaddress keyword on s390x (where MAC addresses are not necessarily
  stable across reboots)

Status in cloud-init:
  Invalid
Status in subiquity:
  Fix Released
Status in Ubuntu on IBM z Systems:
  Fix Released
Status in initramfs-tools package in Ubuntu:
  Fix Released

Bug description:
  While performing a subiquity focal installation on an s390x LPAR (where the 
LPAR is connected to a VLAN trunk) I saw a section like this:
 match:
  macaddress: 02:28:0b:00:00:53
  So the macaddress keyword is used, but on several s390x machine generations
  MAC addresses are not necessarily stable and unique across reboots.
  (z14 GA2 and newer systems have in the meantime a modified firmware that
  ensures that MAC addresses are stable and unique across reboots, but for
  z14 GA1 and older systems, incl. the z13 that I used, this is not the case
  - and a backport of the firmware modification is very unlikely)

  The configuration that I found is this:

  $ cat /etc/netplan/50-cloud-init.yaml
  # This file is generated from information provided by the datasource. Changes
  # to it will not persist across an instance reboot. To disable cloud-init's
  # network configuration capabilities, write a file
  # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
  # network: {config: disabled}
  network:
  ethernets:
  enc600:
  addresses:
  - 10.245.236.26/24
  gateway4: 10.245.236.1
  match:
  macaddress: 02:28:0b:00:00:53
  nameservers:
  addresses:
  - 10.245.236.1
  set-name: enc600
  version: 2

  (This is a spin-off of ticket LP 1868246.)

  It's understood that the initial idea for the MAC addresses was to have a 
unique identifier, but
  I think with the right tooling (ip, ifconfig, ethtool or even the 
network-manager UI) you can even change MAC addresses today on other platforms.

  Nowadays interface names are based on their underlying physical
  device/address (here in this case '600' or to be precise '0600' -
  leading '0's are removed), which makes the interface and its name
  already quite unique - since it is not possible to have two devices
  (in one system) with the exact same address.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1869155/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1869967] Re: subiquity->cloud-init generates netplan yaml telling user not to edit it

2020-05-07 Thread Michael Hudson-Doyle
** Changed in: subiquity
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1869967

Title:
  subiquity->cloud-init generates netplan yaml telling user not to edit
  it

Status in cloud-init:
  Invalid
Status in subiquity:
  Fix Released

Bug description:
  As seen in , users who install with subiquity end up
  with a /etc/cloud/cloud.cfg.d/50-curtin-networking.cfg that persists
  on the target system, plus an /etc/netplan/50-cloud-init.yaml that
  tells users not to edit it without taking steps to disable cloud-init.

  I don't think this is what we want.  I think a subiquity install
  should unambiguously treat cloud-init as a one-shot at installation,
  and leave the user afterwards with config files that can be directly
  edited without fear of cloud-init interfering; and the yaml files
  generated by cloud-init on subiquity installs should therefore also
  not include this scary language:

  # This file is generated from information provided by the datasource.  Changes
  # to it will not persist across an instance reboot.  To disable cloud-init's
  # network configuration capabilities, write a file
  # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
  # network: {config: disabled}

  But we need to figure out how to fix this between subiquity and cloud-
  init.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1869967/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1868246] Re: No network after subiquity LPAR installation on s390x with VLAN

2020-04-14 Thread Michael Hudson-Doyle
** Changed in: initramfs-tools (Ubuntu)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1868246

Title:
  No network after subiquity LPAR installation on s390x with VLAN

Status in cloud-init:
  Invalid
Status in subiquity:
  Fix Released
Status in Ubuntu on IBM z Systems:
  Fix Released
Status in initramfs-tools package in Ubuntu:
  Fix Released

Bug description:
  I tried a subiquity LPAR installation today using the latest ISO (March 19)
  that includes the latest 20.03 subiquity.
  The installation itself completed fine, but after the post-install reboot the
  system didn't have an active network - please note that the LPAR is connected
  to a VLAN.

  $ ip a
  1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host
 valid_lft forever preferred_lft forever
  2: encc000:  mtu 1500 qdisc noop state DOWN group default qlen 1000
  link/ether a2:8d:91:85:12:e3 brd ff:ff:ff:ff:ff:ff
  3: enP1p0s0:  mtu 1500 qdisc noop state DOWN group default qlen 1000
  link/ether 82:0c:2d:0c:b8:70 brd ff:ff:ff:ff:ff:ff
  4: enP1p0s0d1:  mtu 1500 qdisc noop state DOWN group default qlen 1000
  link/ether 82:0c:2d:0c:b8:71 brd ff:ff:ff:ff:ff:ff
  5: enP2p0s0:  mtu 1500 qdisc noop state DOWN group default qlen 1000
  link/ether 82:0c:2d:0c:b7:00 brd ff:ff:ff:ff:ff:ff
  6: enP2p0s0d1:  mtu 1500 qdisc noop state DOWN group default qlen 1000
  link/ether 82:0c:2d:0c:b7:01 brd ff:ff:ff:ff:ff:ff

  Wanting to have a look at the netplan config it turned out that there is no 
yaml file:
  $ ls -l /etc/netplan/
  total 0  

  Adding one manually and applying it worked fine.

  So it looks like the installer does not properly generate or copy a
  01-netcfg.yaml to /etc/netplan.

  Please see below the entire steps as well as a compressed file with
  the entire content of /var/log

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1868246/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1834875] Re: cloud-init growpart race with udev

2020-04-08 Thread Michael Hudson-Doyle
** No longer affects: linux-azure (Ubuntu)

** No longer affects: systemd (Ubuntu)

** Also affects: cloud-utils (Ubuntu Eoan)
   Importance: Undecided
   Status: New

** Also affects: cloud-initramfs-tools (Ubuntu Eoan)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1834875

Title:
  cloud-init growpart race with udev

Status in cloud-init:
  Incomplete
Status in cloud-utils:
  Fix Committed
Status in cloud-initramfs-tools package in Ubuntu:
  Fix Released
Status in cloud-utils package in Ubuntu:
  Fix Released
Status in cloud-initramfs-tools source package in Eoan:
  New
Status in cloud-utils source package in Eoan:
  New

Bug description:
  On Azure, it happens regularly (20-30%), that cloud-init's growpart
  module fails to extend the partition to full size.

  Such as in this example:

  

  2019-06-28 12:24:18,666 - util.py[DEBUG]: Running command ['growpart', 
'--dry-run', '/dev/sda', '1'] with allowed return codes [0] (shell=False, 
capture=True)
  2019-06-28 12:24:19,157 - util.py[DEBUG]: Running command ['growpart', 
'/dev/sda', '1'] with allowed return codes [0] (shell=False, capture=True)
  2019-06-28 12:24:19,726 - util.py[DEBUG]: resize_devices took 1.075 seconds
  2019-06-28 12:24:19,726 - handlers.py[DEBUG]: finish: 
init-network/config-growpart: FAIL: running config-growpart with frequency 
always
  2019-06-28 12:24:19,727 - util.py[WARNING]: Running module growpart () failed
  2019-06-28 12:24:19,727 - util.py[DEBUG]: Running module growpart () failed
  Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 812, in 
_run_modules
  freq=freq)
File "/usr/lib/python3/dist-packages/cloudinit/cloud.py", line 54, in run
  return self._runners.run(name, functor, args, freq, clear_on_fail)
File "/usr/lib/python3/dist-packages/cloudinit/helpers.py", line 187, in run
  results = functor(*args)
File "/usr/lib/python3/dist-packages/cloudinit/config/cc_growpart.py", line 
351, in handle
  func=resize_devices, args=(resizer, devices))
File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 2521, in 
log_time
  ret = func(*args, **kwargs)
File "/usr/lib/python3/dist-packages/cloudinit/config/cc_growpart.py", line 
298, in resize_devices
  (old, new) = resizer.resize(disk, ptnum, blockdev)
File "/usr/lib/python3/dist-packages/cloudinit/config/cc_growpart.py", line 
159, in resize
  return (before, get_size(partdev))
File "/usr/lib/python3/dist-packages/cloudinit/config/cc_growpart.py", line 
198, in get_size
  fd = os.open(filename, os.O_RDONLY)
  FileNotFoundError: [Errno 2] No such file or directory: 
'/dev/disk/by-partuuid/a5f2b49f-abd6-427f-bbc4-ba5559235cf3'

  

  @rcj suggested this is a race with udev. This seems to only happen on
  Cosmic and later.
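
  A minimal sketch (an assumption, not cloud-init's actual fix) of the kind of
  retry that would tolerate the udev race: keep trying to open the partition
  device node for a short grace period instead of failing on the first
  FileNotFoundError.

  import os
  import time

  def open_when_ready(path, timeout=5.0, interval=0.2):
      # Open `path` read-only, retrying while udev has not (re)created it yet.
      deadline = time.monotonic() + timeout
      while True:
          try:
              return os.open(path, os.O_RDONLY)
          except FileNotFoundError:
              if time.monotonic() >= deadline:
                  raise
              time.sleep(interval)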

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1834875/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1871239] [NEW] ovn-octavia-provider is not using load balancing algorithm source-ip-port

2020-04-06 Thread Michael Johnson
Public bug reported:

When using the ovn-octavia-provider, OVN is not honoring the
SOURCE_IP_PORT pool load balancing algorithm. The ovn-octavia-provider
only supports the SOURCE_IP_PORT load balancing algorithm.

The following test was created for the SOURCE_IP_PORT algorithm in tempest:
octavia_tempest_plugin.tests.scenario.v2.test_traffic_ops.TrafficOperationsScenarioTest.test_source_ip_port_tcp_traffic

Available in this patch: https://review.opendev.org/#/c/714004/

The test run shows that OVN is randomly distributing the connections
from the same source IP and port across the backend member servers. One
server is configured to return '1' and the other '5'.

Loadbalancer response totals: {'1': 12, '5': 8}

It should instead be seeing a result of:

Loadbalancer response totals: {'1': 20}
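
For illustration only (this is not OVN or Octavia code), SOURCE_IP_PORT means
the backend member is derived deterministically from the client's source IP
and port, so repeated connections from the same ip:port pair must always land
on the same member:

import hashlib

def pick_member(members, src_ip, src_port):
    # Hash the (source IP, source port) tuple to a stable member index.
    key = "{}:{}".format(src_ip, src_port).encode()
    index = int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % len(members)
    return members[index]

members = ["192.0.2.10", "192.0.2.11"]
# Same source ip:port -> same member, every time.
assert pick_member(members, "198.51.100.7", 40000) == \
    pick_member(members, "198.51.100.7", 40000)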

The attached files provide:

ovn-provider.pcap -- A pcap file capturing the test run.
ovn-tempest-output.txt -- The tempest console output.
tempest.log -- The tempest framework log from the test run.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn-octavia-provider

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1871239

Title:
  ovn-octavia-provider is not using load balancing algorithm source-ip-
  port

Status in neutron:
  New

Bug description:
  When using the ovn-octavia-provider, OVN is not honoring the
  SOURCE_IP_PORT pool load balancing algorithm. The ovn-octavia-provider
  only supports the SOURCE_IP_PORT load balancing algorithm.

  The following test was created for the SOURCE_IP_PORT algorithm in tempest:
  
octavia_tempest_plugin.tests.scenario.v2.test_traffic_ops.TrafficOperationsScenarioTest.test_source_ip_port_tcp_traffic

  Available in this patch: https://review.opendev.org/#/c/714004/

  The test run shows that OVN is randomly distributing the connections
  from the same source IP and port across the backend member servers.
  One server is configured to return '1' and the other '5'.

  Loadbalancer response totals: {'1': 12, '5': 8}

  It should instead be seeing a result of:

  Loadbalancer response totals: {'1': 20}

  The attached files provide:

  ovn-provider.pcap -- A pcap file capturing the test run.
  ovn-tempest-output.txt -- The tempest console output.
  tempest.log -- The tempest framework log from the test run.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1871239/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1870002] Re: The operating_status value of loadbalancer is abnormal

2020-04-01 Thread Michael Johnson
Octavia tracks bugs and RFEs in the new OpenStack Storyboard and not launchpad.
https://storyboard.openstack.org/#!/project/openstack/octavia
Please open your bug in Storyboard for the Octavia team.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1870002

Title:
  The operating_status value of loadbalancer is abnormal

Status in neutron:
  Invalid

Bug description:
  Summary of problems:
  1. One loadbalancer contains multiple pools and listeners; as long as the
  operating_status of any pool is ERROR, the operating_status of the
  loadbalancer is ERROR (sketched below).
  2. The operating_status of a listener is inconsistent with that of its pool
  and loadbalancer.
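
  An illustrative sketch of the aggregation described in point 1 (not Octavia
  code): any pool in ERROR drives the load balancer's operating_status to
  ERROR.

  def aggregate_operating_status(pool_statuses):
      # A single ERROR pool is enough to mark the whole LB as ERROR.
      return "ERROR" if "ERROR" in pool_statuses else "ONLINE"

  assert aggregate_operating_status(["ONLINE", "ERROR", "ONLINE"]) == "ERROR"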

  1. Loadbalancer contains multiple pools and listeners:

  openstack loadbalancer show a6c134fa-eb05-47e3-b760-ae5ca7117996
  +-+--+
  | Field   | Value|
  +-+--+
  | admin_state_up  | True |
  | created_at  | 2020-03-23T03:36:15  |
  | description |  |
  | flavor_id   | a3de1882-8ace-4df7-9979-ce11153f912c |
  | id  | a6c134fa-eb05-47e3-b760-ae5ca7117996 |
  | listeners   | c0759d56-4eb1-4404-a805-ff196c8fa9ad |
  | | f6b8eee5-2a0a-4ff3-b339-be9489511f1c |
  | | 0fbf23f7-5b3d-48d8-b417-e7b770fb949f |
  | | 4b067982-2cb2-47cc-856b-ab65307f2ba5 |
  | name| gengjie-lvs  |
  | operating_status| ERROR|
  | pools   | 3ba5de47-3276-4687-aa27-9344d348cdda |
  | | 407eaff9-b90e-4cde-a254-04f3047b270f |
  | | 73edd2f9-78ea-4cd6-a20f-d02664dd4654 |
  | | bf07f027-9793-44e4-b307-495b3273a1ae |
  | | d479dba7-a7d2-4631-8eb0-0300800708a2 |
  | project_id  | bb0cfc0da1cd46f09ac3c3ed9781c6a8 |
  | provider| amphora  |
  | provisioning_status | ACTIVE   |
  | updated_at  | 2020-04-01T02:07:43  |
  | vip_address | 192.168.0.170|
  | vip_network_id  | 3d22ec75-5b4e-43d7-86bd-480d07c0784b |
  | vip_port_id | 518304bc-41d3-4ac6-bc5a-328c5c2a0674 |
  | vip_qos_policy_id   | None |
  | vip_subnet_id   | 2f55d6f6-ba8b-4390-8679-9338f94afe3e |
  +-+--+
  2. As long as the operating_status of any pool is ERROR, the operating_status
  of the loadbalancer is ERROR:

  openstack loadbalancer pool show 3ba5de47-3276-4687-aa27-9344d348cdda
  +--+--+
  | Field| Value|
  +--+--+
  | admin_state_up   | True |
  | created_at   | 2020-03-27T08:32:41  |
  | description  |  |
  | healthmonitor_id | d6a78953-a5a5-49dd-b780-e28c6bf9f16e |
  | id   | 3ba5de47-3276-4687-aa27-9344d348cdda |
  | lb_algorithm | LEAST_CONNECTIONS|
  | listeners| c0759d56-4eb1-4404-a805-ff196c8fa9ad |
  | loadbalancers| a6c134fa-eb05-47e3-b760-ae5ca7117996 |
  | members  | 5cff4fa5-39c0-4f8b-8c9c-bfb53ea7d028 |
  | name | ysy-test-01  |
  | operating_status | ERROR|
  | project_id   | bb0cfc0da1cd46f09ac3c3ed9781c6a8 |
  | protocol | HTTP |
  | provisioning_status  | ACTIVE   |
  | session_persistence  | None |
  | updated_at   | 2020-03-31T11:56:30  |
  | tls_container_ref| None |
  | ca_tls_container_ref | None |
  | crl_container_ref| None |
  | tls_enabled  | False|
  +--+--+
  openstack loadbalancer pool show 407eaff9-b90e-4cde-a254-04f3047b270f
  +--+--+
  | Field| Value|
  +--+--+
  | admin_state_up   | True |
  | created_at   | 2020-03-30T07:18:09   

[Yahoo-eng-team] [Bug 1869155] Re: When installing with subiquity, the generated network config uses the macaddress keyword on s390x (where MAC addresses are not necessarily stable across reboots)

2020-03-31 Thread Michael Hudson-Doyle
Pretty sure it's initramfs-tools that is putting the mac addresses in
the netplan. That probably needs to grow a little platform-dependent
behaviour around this.

** Also affects: initramfs-tools (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1869155

Title:
  When installing with subiquity, the generated network config uses the
  macaddress keyword on s390x (where MAC addresses are not necessarily
  stable across reboots)

Status in cloud-init:
  Incomplete
Status in subiquity:
  New
Status in Ubuntu on IBM z Systems:
  New
Status in initramfs-tools package in Ubuntu:
  New

Bug description:
  While performing a subiquity focal installation on an s390x LPAR (where the 
LPAR is connected to a VLAN trunk) I saw a section like this:
 match:
  macaddress: 02:28:0b:00:00:53
  So the macaddress keyword is used, but on several s390x machine generations
  MAC addresses are not necessarily stable and unique across reboots.
  (z14 GA2 and newer systems have in the meantime a modified firmware that
  ensures that MAC addresses are stable and unique across reboots, but for
  z14 GA1 and older systems, incl. the z13 that I used, this is not the case
  - and a backport of the firmware modification is very unlikely)

  The configuration that I found is this:

  $ cat /etc/netplan/50-cloud-init.yaml
  # This file is generated from information provided by the datasource. Changes
  # to it will not persist across an instance reboot. To disable cloud-init's
  # network configuration capabilities, write a file
  # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
  # network: {config: disabled}
  network:
  ethernets:
  enc600:
  addresses:
  - 10.245.236.26/24
  gateway4: 10.245.236.1
  match:
  macaddress: 02:28:0b:00:00:53
  nameservers:
  addresses:
  - 10.245.236.1
  set-name: enc600
  version: 2

  (This is a spin-off of ticket LP 1868246.)

  It's understood that the initial idea for the MAC addresses was to have a 
unique identifier, but
  I think with the right tooling (ip, ifconfig, ethtool or even the 
network-manager UI) you can even change MAC addresses today on other platforms.

  Nowadays interface names are based on their underlying physical
  device/address (here in this case '600' or to be precise '0600' -
  leading '0's are removed), which makes the interface and its name
  already quite unique - since it is not possible to have two devices
  (in one system) with the exact same address.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1869155/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1863190] [NEW] Server group anti-affinity no longer works

2020-02-13 Thread Michael Johnson
Public bug reported:

Server group anti-affinity is no longer working, at least in the simple
case. I am able to boot two VMs in an anti-affinity server group on a
devstack that only has one compute instance. Previously this would fail
and/or require soft-anti-affinity enabled.

$ openstack host list
+---+---+--+
| Host Name | Service   | Zone |
+---+---+--+
| devstack2 | scheduler | internal |
| devstack2 | conductor | internal |
| devstack2 | conductor | internal |
| devstack2 | compute   | nova |
+---+---+--+

$ openstack compute service list
+++---+--+-+---++
| ID | Binary | Host  | Zone | Status  | State | Updated At 
|
+++---+--+-+---++
|  3 | nova-scheduler | devstack2 | internal | enabled | up| 
2020-02-14T00:59:15.00 |
|  6 | nova-conductor | devstack2 | internal | enabled | up| 
2020-02-14T00:59:16.00 |
|  1 | nova-conductor | devstack2 | internal | enabled | up| 
2020-02-14T00:59:19.00 |
|  3 | nova-compute   | devstack2 | nova | enabled | up| 
2020-02-14T00:59:17.00 |
+++---+--+-+---++

$ openstack server list
+--+--++---+-++
| ID   | Name   
  | Status | Networks  | Image  
 | Flavor |
+--+--++---+-++
| a44febef-330c-4db5-b220-959cbbff8f8c | 
amphora-1bc97ddb-80da-446a-bce3-0c867c1fc258 | ACTIVE | 
lb-mgmt-net=192.168.0.58; public=172.24.4.200 | amphora-x64-haproxy | 
m1.amphora |
| de776347-0cf4-47d5-bb37-17fb37d79f2e | 
amphora-433abe98-fd8e-4e4f-ac11-4f76bbfc7aba | ACTIVE | 
lb-mgmt-net=192.168.0.199; public=172.24.4.11 | amphora-x64-haproxy | 
m1.amphora |
+--+--++---+-++

$ openstack server group show ddbc8544-c664-4da4-8fd8-32f6bd01e960
+--++
| Field| Value  
|
+--++
| id   | ddbc8544-c664-4da4-8fd8-32f6bd01e960   
|
| members  | a44febef-330c-4db5-b220-959cbbff8f8c, 
de776347-0cf4-47d5-bb37-17fb37d79f2e |
| name | octavia-lb-cc40d031-6ce9-475f-81b4-0a6096178834
|
| policies | anti-affinity  
|
+--++

Steps to reproduce:
1. Boot a devstack.
2. Create an anti-affinity server group.
2. Boot two VMs in that server group.

Expected Behavior:

The second VM boot should fail with an error similar to "not enough
hosts"

Actual Behavior:

The second VM boots with no error. The two instances in the server group
are on the same host.
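
An illustrative sketch of the expected behavior (not nova's actual
ServerGroupAntiAffinityFilter): a host should only pass scheduling if it runs
no other member of the anti-affinity group, so a single-compute devstack
cannot host both VMs.

def host_passes(instances_on_host, group_members):
    # Reject the host as soon as it already runs any member of the group.
    return not (set(instances_on_host) & set(group_members))

# One compute host already running a group member: the second boot must fail.
assert host_passes(["vm-a"], ["vm-a", "vm-b"]) is False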

Environment:
Nova version (current Ussuri): 0d3aeb0287a0619695c9b9e17c2dec49099876a5
commit 0d3aeb0287a0619695c9b9e17c2dec49099876a5 (HEAD -> master, origin/master, 
origin/HEAD)
Merge: 1fcd74730d 65825ebfbd
Author: Zuul 
Date:   Thu Feb 13 14:25:10 2020 +

Merge "Make RBD imagebackend flatten method idempotent"

Fresh devstack install; however, I have another devstack from August that
is also showing this behavior.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1863190

Title:
  Server group anti-affinity no longer works

Status in OpenStack Compute (nova):
  New

Bug description:
  Server group anti-affinity is no longer working, at least in the
  simple case. I am able to boot two VMs in an anti-affinity server
  group on a devstack that only has one compute instance. Previously
  this would fail and/or require soft-anti-affinity enabled.

  $ openstack host list
  +---+---+--+
  | Host Name | Service   | Zone |
  +---+---+--+
  | devstack2 | scheduler | internal |
  | devstack2 | conductor | internal |
  | devstack2 | conductor | internal |
  | devstack2 | compute   | nova |
  

[Yahoo-eng-team] [Bug 1857439] Re: Tempest test of add_remove_fixed_ip fails on API under wsgi

2019-12-25 Thread Michael Polenchuk
*** This bug is a duplicate of bug 1834758 ***
https://bugs.launchpad.net/bugs/1834758

** This bug has been marked a duplicate of bug 1834758
   Race condition in 
tempest.api.compute.servers.test_attach_interfaces.AttachInterfacesUnderV243Test.
 test_add_remove_fixed_ip

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1857439

Title:
  Tempest test of add_remove_fixed_ip fails on API under wsgi

Status in neutron:
  New

Bug description:
  
  Description:
  Neutron is installed by means of Helm into dedicated containers,
  i.e. neutron-api under Apache mod_wsgi and rpc-server as an eventlet process.
  For debug purpose one replica was set for these services.
  Run of 
tempest.api.compute.servers.test_attach_interfaces.AttachInterfacesUnderV243Test.test_add_remove_fixed_ip
 throws exception (not 100% reproducible but very often):

  Traceback (most recent call last):
File 
"/var/lib/openstack/lib/python3.6/site-packages/tempest/common/utils/__init__.py",
 line 89, in wrapper
  return f(*func_args, **func_kwargs)
File 
"/var/lib/openstack/lib/python3.6/site-packages/tempest/api/compute/servers/test_attach_interfaces.py",
 line 366, in test_add_remove_fixed_ip
  'Timed out while waiting for IP count to increase.')
  tempest.lib.exceptions.TimeoutException: Request timed out
  Details: Timed out while waiting for IP count to increase.

  Debugging reveals the os_primary.servers_client.list_addresses() function
  gets updates with some delay (~3-5 sec), therefore the original_ip_count
  variable is set to an incorrect value [1].
  There is no such behaviour under classic neutron-server process.

  
  Version:
* OpenStack version is Stein (neutron 14.0.4.dev52 build from stable/stein)
* Ubuntu 18.04.2 LTS

  [1]
  
https://github.com/openstack/tempest/blob/master/tempest/api/compute/servers/test_attach_interfaces.py#L372

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1857439/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1857439] [NEW] Tempest test of add_remove_fixed_ip fails on API under wsgi

2019-12-24 Thread Michael Polenchuk
Public bug reported:


Description:
Neutron is installed by means of Helm into dedicated containers,
i.e. neutron-api under Apache mod_wsgi and rpc-server as an eventlet process.
For debug purpose one replica was set for these services.
Run of 
tempest.api.compute.servers.test_attach_interfaces.AttachInterfacesUnderV243Test.test_add_remove_fixed_ip
 throws exception (not 100% reproducible but very often):

Traceback (most recent call last):
  File 
"/var/lib/openstack/lib/python3.6/site-packages/tempest/common/utils/__init__.py",
 line 89, in wrapper
return f(*func_args, **func_kwargs)
  File 
"/var/lib/openstack/lib/python3.6/site-packages/tempest/api/compute/servers/test_attach_interfaces.py",
 line 366, in test_add_remove_fixed_ip
'Timed out while waiting for IP count to increase.')
tempest.lib.exceptions.TimeoutException: Request timed out
Details: Timed out while waiting for IP count to increase.

Debugging reveals the os_primary.servers_client.list_addresses() function gets
updates with some delay (~3-5 sec), therefore the original_ip_count variable is
set to an incorrect value [1].
There is no such behaviour under classic neutron-server process.
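
A hedged sketch of one way the test could avoid the race (an assumption, not
the tempest fix): record original_ip_count only once the reported address
count has stopped changing for a few seconds. `count_ips` stands in for
however the test counts the server's addresses.

import time

def settled_ip_count(count_ips, settle=5.0, interval=1.0):
    # Return the address count once it has been stable for `settle` seconds.
    last = count_ips()
    stable_since = time.monotonic()
    while time.monotonic() - stable_since < settle:
        current = count_ips()
        if current != last:
            last = current
            stable_since = time.monotonic()
        time.sleep(interval)
    return last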


Version:
  * OpenStack version is Stein (neutron 14.0.4.dev52 build from stable/stein)
  * Ubuntu 18.04.2 LTS

[1]
https://github.com/openstack/tempest/blob/master/tempest/api/compute/servers/test_attach_interfaces.py#L372

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1857439

Title:
  Tempest test of add_remove_fixed_ip fails on API under wsgi

Status in neutron:
  New

Bug description:
  
  Description:
  Neutron is installed by means of Helm into dedicated containers,
  i.e. neutron-api under Apache mod_wsgi and rpc-server as an eventlet process.
  For debug purpose one replica was set for these services.
  Run of 
tempest.api.compute.servers.test_attach_interfaces.AttachInterfacesUnderV243Test.test_add_remove_fixed_ip
 throws exception (not 100% reproducible but very often):

  Traceback (most recent call last):
File 
"/var/lib/openstack/lib/python3.6/site-packages/tempest/common/utils/__init__.py",
 line 89, in wrapper
  return f(*func_args, **func_kwargs)
File 
"/var/lib/openstack/lib/python3.6/site-packages/tempest/api/compute/servers/test_attach_interfaces.py",
 line 366, in test_add_remove_fixed_ip
  'Timed out while waiting for IP count to increase.')
  tempest.lib.exceptions.TimeoutException: Request timed out
  Details: Timed out while waiting for IP count to increase.

  Debugging reveals the os_primary.servers_client.list_addresses() function
  gets updates with some delay (~3-5 sec), therefore the original_ip_count
  variable is set to an incorrect value [1].
  There is no such behaviour under classic neutron-server process.

  
  Version:
* OpenStack version is Stein (neutron 14.0.4.dev52 build from stable/stein)
* Ubuntu 18.04.2 LTS

  [1]
  
https://github.com/openstack/tempest/blob/master/tempest/api/compute/servers/test_attach_interfaces.py#L372

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1857439/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1853637] [NEW] Assign floating IP to port owned by another tenant is not override-able with RBAC policy

2019-11-22 Thread Michael Johnson
Public bug reported:

In neutron/db/l3_db.py:

def _internal_fip_assoc_data(self, context, fip, tenant_id):
    """Retrieve internal port data for floating IP.

    Retrieve information concerning the internal port where
    the floating IP should be associated to.
    """
    internal_port = self._core_plugin.get_port(context, fip['port_id'])
    if internal_port['tenant_id'] != tenant_id and not context.is_admin:
        port_id = fip['port_id']
        msg = (_('Cannot process floating IP association with '
                 'Port %s, since that port is owned by a '
                 'different tenant') % port_id)
        raise n_exc.BadRequest(resource='floatingip', msg=msg)

This code does not allow operators to use RBAC policy to override the ability
to assign floating IPs to ports owned by another tenant. It also does
not allow members of the advsvc role to take this action.

This code should be fixed to use the standard neutron RBAC and allow the
advsvc role to take this action.
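
A hedged sketch of the direction suggested above (not a merged neutron patch):
keep the owner check, but let admins and holders of the advsvc role through,
so the decision can ultimately be expressed via policy. The helper name and
the use of context.roles are assumptions for illustration.

def may_associate_foreign_port(context, port_tenant_id, request_tenant_id):
    # Same tenant: always allowed.
    if port_tenant_id == request_tenant_id:
        return True
    # Cross-tenant: allow admins and the advsvc service role.
    return context.is_admin or 'advsvc' in getattr(context, 'roles', [])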

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1853637

Title:
  Assign floating IP to port owned by another tenant is not override-
  able with RBAC policy

Status in neutron:
  New

Bug description:
  In neutron/db/l3_db.py:

  def _internal_fip_assoc_data(self, context, fip, tenant_id):
      """Retrieve internal port data for floating IP.

      Retrieve information concerning the internal port where
      the floating IP should be associated to.
      """
      internal_port = self._core_plugin.get_port(context, fip['port_id'])
      if internal_port['tenant_id'] != tenant_id and not context.is_admin:
          port_id = fip['port_id']
          msg = (_('Cannot process floating IP association with '
                   'Port %s, since that port is owned by a '
                   'different tenant') % port_id)
          raise n_exc.BadRequest(resource='floatingip', msg=msg)

  This code does not allow operators to override the ability to assign
  floating IPs to ports on another tenant using RBAC policy. It also
  does not allow members of the advsvc role to take this action.

  This code should be fixed to use the standard neutron RBAC and allow
  the advsvc role to take this action.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1853637/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1833311] [NEW] Token not decoded in SSO callback template

2019-06-18 Thread Michael Carpenter
Public bug reported:

In
https://github.com/openstack/keystone/blob/stable/stein/keystone/api/auth.py#L108
the token is not decoded and therefore is rendered in the SSO callback
template as bytes. See example below for how to recreate.

>>> import string
>>> template = string.Template("""
... http://www.w3.org/1999/xhtml;>
...   
... Keystone WebSSO redirect
...   
...   
...  
...Please wait...
...
...
...
...  
...
...  
...  
...window.onload = function() {
...  document.forms['sso'].submit();
...}
...  
...   
... """)
>>> subs = {"host": b"myhost", "token": b"mytoken"}
>>> template.substitute(subs)
'\nhttp://www.w3.org/1999/xhtml;>\n  \n
Keystone WebSSO redirect\n  \n  \n \n   Please 
wait...\n   \n   \n   \n \n   \n  
   \n \n   window.onload = 
function() {\n document.forms[\'sso\'].submit();\n   }\n 
\n  \n'
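
A self-contained sketch of the fix this implies (an assumption, not the actual
keystone patch): decode bytes to text before substitution, so the rendered
page carries the token itself rather than a b'...' literal.

import string

template = string.Template("host=$host token=$token")  # stand-in for the SSO page
subs = {"host": b"myhost", "token": b"mytoken"}
# Decode any bytes values to str so the template renders plain text.
subs = {key: value.decode("utf-8") if isinstance(value, (bytes, bytearray)) else value
        for key, value in subs.items()}
print(template.substitute(subs))  # -> host=myhost token=mytoken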

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1833311

Title:
  Token not decoded in SSO callback template

Status in OpenStack Identity (keystone):
  New

Bug description:
  In
  
https://github.com/openstack/keystone/blob/stable/stein/keystone/api/auth.py#L108
  the token is not decoded and therefore is rendered in the SSO callback
  template as bytes. See example below for how to recreate.

  >>> import string
  >>> template = string.Template("""
  ... http://www.w3.org/1999/xhtml;>
  ...   
  ... Keystone WebSSO redirect
  ...   
  ...   
  ...  
  ...Please wait...
  ...
  ...
  ...
  ...  
  ...
  ...  
  ...  
  ...window.onload = function() {
  ...  document.forms['sso'].submit();
  ...}
  ...  
  ...   
  ... """)
  >>> subs = {"host": b"myhost", "token": b"mytoken"}
  >>> template.substitute(subs)
  '\nhttp://www.w3.org/1999/xhtml;>\n  \n
Keystone WebSSO redirect\n  \n  \n \n   Please 
wait...\n   \n   \n   \n \n   \n  
   \n \n   window.onload = 
function() {\n document.forms[\'sso\'].submit();\n   }\n 
\n  \n'

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1833311/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1831613] Re: Deletion of Lbaas-listener is successfull even when it is part of Lbaas pool

2019-06-05 Thread Michael Johnson
neutron-lbaas is not a neutron project. This patch has been moved to the
neutron-lbaas storyboard in story:
https://storyboard.openstack.org/#!/story/2005827

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1831613

Title:
  Deletion of Lbaas-listener is successfull  even when it is part of
  Lbaas pool

Status in neutron:
  Invalid

Bug description:
  Description -> Deletion of a loadbalancer listener is successful even when
  it is attached to an LBaaS pool. After deletion of the listener, when the
  user creates a new listener, the neutron CLI does not support adding the new
  listener to the existing LBaaS pool.

  User impact of deletion -> the loadbalancer stops working if the user is
  able to delete the listener accidentally.

  Step to reproduce the scenario->

  neutron lbaas-loadbalancer-create --name lb-15 public-subnet
  neutron lbaas-listener-create --name listener-15-1 --loadbalancer lb-15 
--protocol HTTP --protocol-port 80 --connection-limit 1
  neutron lbaas-pool-create --name pool-15 --lb-algorithm  ROUND_ROBIN  
--listener listener-15-1  --protocol HTTP
  neutron lbaas-healthmonitor-create --name health-15 --delay 5 --max-retries 4 
--timeout 3 --type PING --pool pool-15
  neutron lbaas-listener-delete 

  Create a listener again and try to add it to the existing pool; neither the
  CLI nor Horizon supports this operation.

  Expected output -> Two approaches to consider.

  1. If deletion of a listener is possible, then addition of a listener
  should also be allowed.

  2. Alternatively, if a listener is a mandatory field for pool creation,
  then, like the other mandatory fields, deletion of the LBaaS listener
  should throw an error.

  version of openstack -> stable stein
  linux ubuntu -> 18.04


  
  Reason why it is needed: since a listener is mandatory when creating a pool,
  deletion of the listener should not be allowed without deleting the pool.
   root@vmware:~/vio6.0# neutron lbaas-pool-create --name lb-pool2 
--lb-algorithm ROUND_ROBIN --protocol HTTP --insecure
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  /usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py:857: 
InsecureRequestWarning: Unverified HTTPS request is being made. Adding 
certificate verification is strongly advised. See: 
https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning)
  /usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py:857: 
InsecureRequestWarning: Unverified HTTPS request is being made. Adding 
certificate verification is strongly advised. See: 
https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning)
  At least one of --listener or --loadbalancer must be specified.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1831613/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1822382] Re: DBDeadlock for INSERT INTO resourcedeltas

2019-03-29 Thread Michael Johnson
Looking at this deeper, it appears neutron did properly retry this DB
action and the instance connection issue may be unrelated. Marking this
invalid.
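
For context, a hedged sketch of the retry pattern oslo.db offers for exactly
this error class (the decorator exists in oslo.db; the function below is a
made-up illustration, not the neutron code path from the traceback):

from oslo_db import api as oslo_db_api

@oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
def reserve_port_quota(context, reservation_id, amount):
    # The INSERT INTO resourcedeltas would run inside this retried unit,
    # so a DBDeadlock is retried transparently instead of bubbling up.
    pass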

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1822382

Title:
  DBDeadlock for INSERT INTO resourcedeltas

Status in neutron:
  Invalid

Bug description:
  Recently we started seeing instances fail to become reachable in the
  Octavia tempest jobs. This is intermittent, but recurring. This may be
  related to other DBDeadlock bugs recently reported for quotas, but
  since the SQL is different here I am reporting it.

  This is on Master/Train.

  Summary of the error in q-svc:

  Mar 29 20:04:12.816598 ubuntu-xenial-rax-dfw-0004550340 neutron-
  server[11470]: ERROR oslo_db.sqlalchemy.exc_filters
  oslo_db.exception.DBDeadlock: (pymysql.err.InternalError) (1213,
  'Deadlock found when trying to get lock; try restarting transaction')
  [SQL: 'INSERT INTO resourcedeltas (resource, reservation_id, amount)
  VALUES (%(resource)s, %(reservation_id)s, %(amount)s)'] [parameters:
  {'reservation_id': '4f198b7d-ac31-42bb-98bd-686c830322ab', 'resource':
  'port', 'amount': 1}] (Background on this error at:
  http://sqlalche.me/e/2j85)

  Full traceback from q-svc for once occurrence:

  Mar 29 20:04:12.790909 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters 
[req-56d6484e-b182-4dbd-8bb9-8db4ceb3c38a 
req-ddb65494-cdaf-4dec-ab19-84efbede0da7 admin admin] DBAPIError exception 
wrapped from (pymysql.err.InternalError) (1305, 'SAVEPOINT sa_savepoint_9 does 
not exist') [SQL: 'ROLLBACK TO SAVEPOINT sa_savepoint_9'] (Background on this 
error at: http://sqlalche.me/e/2j85): pymysql.err.InternalError: (1305, 
'SAVEPOINT sa_savepoint_9 does not exist')
  Mar 29 20:04:12.791441 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters Traceback (most 
recent call last):
  Mar 29 20:04:12.791889 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/sqlalchemy/engine/base.py", line 1193, 
in _execute_context
  Mar 29 20:04:12.792380 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters context)
  Mar 29 20:04:12.792860 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/sqlalchemy/engine/default.py", line 
507, in do_execute
  Mar 29 20:04:12.793296 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters 
cursor.execute(statement, parameters)
  Mar 29 20:04:12.793872 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/pymysql/cursors.py", line 165, in 
execute
  Mar 29 20:04:12.794320 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters result = 
self._query(query)
  Mar 29 20:04:12.794743 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/pymysql/cursors.py", line 321, in _query
  Mar 29 20:04:12.795219 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters conn.query(q)
  Mar 29 20:04:12.795668 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/pymysql/connections.py", line 860, in 
query
  Mar 29 20:04:12.796102 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters 
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  Mar 29 20:04:12.796505 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/pymysql/connections.py", line 1061, in 
_read_query_result
  Mar 29 20:04:12.796904 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters result.read()
  Mar 29 20:04:12.797336 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/pymysql/connections.py", line 1349, in 
read
  Mar 29 20:04:12.797730 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters first_packet = 
self.connection._read_packet()
  Mar 29 20:04:12.798022 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/pymysql/connections.py", line 1018, in 
_read_packet
  Mar 29 20:04:12.798305 ubuntu-xenial-rax-dfw-0004550340 
neutron-server[11470]: ERROR 

[Yahoo-eng-team] [Bug 1822382] [NEW] DBDeadlock for INSERT INTO resourcedeltas

2019-03-29 Thread Michael Johnson
Public bug reported:

Recently we started seeing instances fail to become reachable in the
Octavia tempest jobs. This is intermittent, but recurring. This may be
related to other DBDeadlock bugs recently reported for quotas, but since
the SQL is different here I am reporting it.

This is on Master/Train.

Summary of the error in q-svc:

Mar 29 20:04:12.816598 ubuntu-xenial-rax-dfw-0004550340 neutron-
server[11470]: ERROR oslo_db.sqlalchemy.exc_filters
oslo_db.exception.DBDeadlock: (pymysql.err.InternalError) (1213,
'Deadlock found when trying to get lock; try restarting transaction')
[SQL: 'INSERT INTO resourcedeltas (resource, reservation_id, amount)
VALUES (%(resource)s, %(reservation_id)s, %(amount)s)'] [parameters:
{'reservation_id': '4f198b7d-ac31-42bb-98bd-686c830322ab', 'resource':
'port', 'amount': 1}] (Background on this error at:
http://sqlalche.me/e/2j85)

Full traceback from q-svc for once occurrence:

Mar 29 20:04:12.790909 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters [req-56d6484e-b182-4dbd-8bb9-8db4ceb3c38a 
req-ddb65494-cdaf-4dec-ab19-84efbede0da7 admin admin] DBAPIError exception 
wrapped from (pymysql.err.InternalError) (1305, 'SAVEPOINT sa_savepoint_9 does 
not exist') [SQL: 'ROLLBACK TO SAVEPOINT sa_savepoint_9'] (Background on this 
error at: http://sqlalche.me/e/2j85): pymysql.err.InternalError: (1305, 
'SAVEPOINT sa_savepoint_9 does not exist')
Mar 29 20:04:12.791441 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters Traceback (most recent call last):
Mar 29 20:04:12.791889 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/sqlalchemy/engine/base.py", line 1193, 
in _execute_context
Mar 29 20:04:12.792380 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters context)
Mar 29 20:04:12.792860 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/sqlalchemy/engine/default.py", line 
507, in do_execute
Mar 29 20:04:12.793296 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters cursor.execute(statement, parameters)
Mar 29 20:04:12.793872 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/pymysql/cursors.py", line 165, in 
execute
Mar 29 20:04:12.794320 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters result = self._query(query)
Mar 29 20:04:12.794743 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/pymysql/cursors.py", line 321, in _query
Mar 29 20:04:12.795219 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters conn.query(q)
Mar 29 20:04:12.795668 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/pymysql/connections.py", line 860, in 
query
Mar 29 20:04:12.796102 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters self._affected_rows = 
self._read_query_result(unbuffered=unbuffered)
Mar 29 20:04:12.796505 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/pymysql/connections.py", line 1061, in 
_read_query_result
Mar 29 20:04:12.796904 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters result.read()
Mar 29 20:04:12.797336 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/pymysql/connections.py", line 1349, in 
read
Mar 29 20:04:12.797730 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters first_packet = 
self.connection._read_packet()
Mar 29 20:04:12.798022 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/pymysql/connections.py", line 1018, in 
_read_packet
Mar 29 20:04:12.798305 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters packet.check_error()
Mar 29 20:04:12.798600 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.5/dist-packages/pymysql/connections.py", line 384, in 
check_error
Mar 29 20:04:12.798894 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR oslo_db.sqlalchemy.exc_filters err.raise_mysql_exception(self._data)
Mar 29 20:04:12.799224 ubuntu-xenial-rax-dfw-0004550340 neutron-server[11470]: 
ERROR 

[Yahoo-eng-team] [Bug 1812225] [NEW] Firewall-as-a-Service (FWaaS) in neutron docs for Rocky still refer to plans for Ocata

2019-01-17 Thread Michael Schuh
Public bug reported:


This bug tracker is for errors with the documentation, use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:

- [x] This doc is inaccurate in this way: 
The page refers to plans to enable firewall groups v2 for Ocata. The
table at the end, esp. the entry with the two asterisks, indicates that the
feature is yet to be implemented. Is this still true for Rocky?

---
Release: 13.0.3.dev28 on 2019-01-11 04:05
SHA: cdcfce3b82e57cf66efe12bacef2992c95fb86d9
Source: 
https://git.openstack.org/cgit/openstack/neutron/tree/doc/source/admin/fwaas.rst
URL: https://docs.openstack.org/neutron/rocky/admin/fwaas.html

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1812225

Title:
  Firewall-as-a-Service (FWaaS) in neutron docs for Rocky still refer to
  plans for Ocata

Status in neutron:
  New

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [x] This doc is inaccurate in this way: 
  The page refers to plans to enable firewall groups v2 for Ocata. The
  table at the end, esp. the entry with the two asterisks, indicates that the
  feature is yet to be implemented. Is this still true for Rocky?

  ---
  Release: 13.0.3.dev28 on 2019-01-11 04:05
  SHA: cdcfce3b82e57cf66efe12bacef2992c95fb86d9
  Source: 
https://git.openstack.org/cgit/openstack/neutron/tree/doc/source/admin/fwaas.rst
  URL: https://docs.openstack.org/neutron/rocky/admin/fwaas.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1812225/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1811455] [NEW] QoS plugin fails if network is not found

2019-01-11 Thread Michael Johnson
 'qos_policy_id'

It appears that the qos_plugin is always assuming it will get a network object 
back for ports:
neutron/services/qos/qos_plugin.py: L97

# Note(lajoskatona): handle the case when the port inherits qos-policy
# from the network.
if not qos_policy:
    net = network_object.Network.get_object(
        context.get_admin_context(), id=port_res['network_id'])
    if net.qos_policy_id:
        qos_policy = policy_object.QosPolicy.get_network_policy(
            context.get_admin_context(), net.id)

I think this needs to be updated to handle the case that a network is
not returned.
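
A minimal sketch of the guard being suggested (an assumption, not the merged
fix): tolerate a network that has been deleted out from under the port instead
of dereferencing None. The helper signature is invented for illustration.

def network_qos_policy(get_network, get_network_policy, context, network_id):
    net = get_network(context, id=network_id)
    if net is None:
        # Network is gone (e.g. deleted concurrently): no policy to inherit.
        return None
    if not net.qos_policy_id:
        return None
    return get_network_policy(context, net.id)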

** Affects: neutron
 Importance: Undecided
 Assignee: Michael Johnson (johnsom)
 Status: In Progress


** Tags: qos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1811455

Title:
  QoS plugin fails if network is not found

Status in neutron:
  In Progress

Bug description:
  Master neutron (Stein):
  We are intermittently seeing gate failures with a q-svc exception:

  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server [None 
req-9e4027ef-c0b5-4d46-99be-1a1da640c506 None None] Exception during message 
handling: AttributeError: 'NoneType' object has no attribute 'qos_policy_id'
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server Traceback (most recent 
call last):
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.6/dist-packages/oslo_messaging/rpc/server.py", line 
166, in _process_incoming
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.6/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
265, in dispatch
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.6/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
194, in _do_dispatch
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server result = func(ctxt, 
**new_args)
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server   File 
"/opt/stack/neutron/neutron/api/rpc/handlers/dhcp_rpc.py", line 146, in 
get_active_networks_info
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server ports = 
plugin.get_ports(context, filters=filters)
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.6/dist-packages/neutron_lib/db/api.py", line 233, in 
wrapped
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server return method(*args, 
**kwargs)
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.6/dist-packages/neutron_lib/db/api.py", line 140, in 
wrapped
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server setattr(e, 
'_RETRY_EXCEEDED', True)
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.6/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server self.force_reraise()
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.6/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.6/dist-packages/six.py", line 693, in reraise
  Jan 11 01:12:04.832940 ubuntu-bionic-ovh-bhs1-0001629903 
neutron-server[15297]: ERROR oslo_messaging.rpc.server raise value
  J

[Yahoo-eng-team] [Bug 1808072] [NEW] there is a window between user being created and ssh_pwauth being honoured

2018-12-11 Thread Michael Hudson-Doyle
Public bug reported:

I booted an instance locally and managed to log in over ssh using a
password despite ssh_pwauth being false. Turns out that this was because
the user was created around two minutes before sshd_config was updated.
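
For reference, a minimal cloud-config that exercises this window could look
like the following (the user name and password hash are made-up placeholders,
not taken from the original report):

#cloud-config
# ssh_pwauth: false should disable password logins, but there is a window
# between user creation and the sshd_config rewrite where they still work.
ssh_pwauth: false
users:
  - name: testuser            # hypothetical user for illustration
    lock_passwd: false
    # placeholder hash, e.g. generated with `mkpasswd -m sha-512`
    passwd: "$6$examplesalt$examplehash"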

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1808072

Title:
  there is a window between user being created and ssh_pwauth being
  honoured

Status in cloud-init:
  New

Bug description:
  I booted an instance locally and managed to log in over ssh using a
  password despite ssh_pwauth being false. Turns out that this was
  because the user was created around two minutes before sshd_config was
  updated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1808072/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1799779] [NEW] LXD module installs the wrong ZFS package if it's missing

2018-10-24 Thread Michael Skalka
Public bug reported:

When using the LXD module, cloud-init will attempt to install ZFS if it
does not exist on the target system. However, instead of installing the
`zfsutils-linux` package, it attempts to install `zfs`, resulting in an
error.

This was captured from a MAAS deployed server however the bug is
platform agnostic.

###
ubuntu@node10ob68:~$ cloud-init --version
/usr/bin/cloud-init 18.3-9-g2e62cb8a-0ubuntu1~18.04.2

### 
less /var/log/cloud-init.log
...
2018-10-24 19:23:54,255 - util.py[DEBUG]: apt-install [eatmydata apt-get 
--option=Dpkg::Options::=--force-confold 
--option=Dpkg::options::=--force-unsafe-io --assume-yes --quiet install zfs] 
took 0.302 seconds
2018-10-24 19:23:54,255 - cc_lxd.py[WARNING]: failed to install packages 
['zfs']: Unexpected error while running command.
Command: ['eatmydata', 'apt-get', '--option=Dpkg::Options::=--force-confold', 
'--option=Dpkg::options::=--force-unsafe-io', '--assume-yes', '--quiet', 
'install', 'zfs']
Exit code: 100
...
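
A sketch of the kind of fix this implies in the LXD module, mapping the
generic name to the Ubuntu package that actually provides it (illustrative
only, not the actual cc_lxd.py code):

# Sketch: translate the generic "zfs" backend name into the package that
# provides it on Ubuntu, so apt-get installs zfsutils-linux rather than
# the non-existent "zfs" package.
PACKAGE_ALIASES = {'zfs': 'zfsutils-linux'}

def resolve_packages(names):
    """Map generic package names to their Ubuntu package names."""
    return [PACKAGE_ALIASES.get(name, name) for name in names]

# e.g. resolve_packages(['zfs']) -> ['zfsutils-linux']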

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1799779

Title:
  LXD module installs the wrong ZFS package if it's missing

Status in cloud-init:
  New

Bug description:
  When using the LXD module, cloud-init will attempt to install ZFS if it
  does not exist on the target system. However, instead of installing the
  `zfsutils-linux` package, it attempts to install `zfs`, resulting in an
  error.

  This was captured from a MAAS deployed server however the bug is
  platform agnostic.

  ###
  ubuntu@node10ob68:~$ cloud-init --version
  /usr/bin/cloud-init 18.3-9-g2e62cb8a-0ubuntu1~18.04.2

  ### 
  less /var/log/cloud-init.log
  ...
  2018-10-24 19:23:54,255 - util.py[DEBUG]: apt-install [eatmydata apt-get 
--option=Dpkg::Options::=--force-confold 
--option=Dpkg::options::=--force-unsafe-io --assume-yes --quiet install zfs] 
took 0.302 seconds
  2018-10-24 19:23:54,255 - cc_lxd.py[WARNING]: failed to install packages 
['zfs']: Unexpected error while running command.
  Command: ['eatmydata', 'apt-get', '--option=Dpkg::Options::=--force-confold', 
'--option=Dpkg::options::=--force-unsafe-io', '--assume-yes', '--quiet', 
'install', 'zfs']
  Exit code: 100
  ...

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1799779/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1793845] [NEW] Federation Protocol saml2 fails on Rocky

2018-09-21 Thread Michael Rice
Public bug reported:

In previous releases when setting up federation one could do the
following:

openstack federation protocol create saml2 --mapping mymapping
--identity-provider myidp

Then in the keystone.conf you could add:

[auth]
methods = password,token,saml2
saml2 = keystone.auth.plugins.mapped.Mapped


That is not the case on Rocky. This will give you a 500 with the following 
error:
stevedore.named [-] Could not load keystone.auth.plugins.mapped.Mapped

To work around this issue I had to delete my mapping called "saml2",
remake it named "mapped", then update the horizon and apache configs
accordingly. Then in the keystone.conf file I had to remove the
"methods" line and the "saml2" line. Once I restarted apache,
federation worked as expected.

I'm not sure if this is a bug or if the way I was doing it before was
hanging around as legacy from when "saml2" had been removed, but I
couldn't find anything release-notes-wise about the change, and the docs
examples still reference "saml2"...
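
Roughly, the workaround amounts to recreating the federation protocol under
the name "mapped" so that the default auth methods apply (a sketch only;
mymapping and myidp are the placeholders used above):

# recreate the federation protocol under the name "mapped"
openstack federation protocol create mapped --mapping mymapping --identity-provider myidp

# and in keystone.conf, drop the custom lines so the defaults apply:
#   [auth]
#   methods = password,token,saml2                  <- removed
#   saml2 = keystone.auth.plugins.mapped.Mapped     <- removed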

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1793845

Title:
  Federation Protocol saml2 fails on Rocky

Status in OpenStack Identity (keystone):
  New

Bug description:
  In previous releases when setting up federation one could do the
  following:

  openstack federation protocol create saml2 --mapping mymapping
  --identity-provider myidp

  Then in the keystone.conf you could add:

  [auth]
  methods = password,token,saml2
  saml2 = keystone.auth.plugins.mapped.Mapped

  
  That is not the case on Rocky. This will give you a 500 with the following 
error:
  stevedore.named [-] Could not load keystone.auth.plugins.mapped.Mapped

  To work around this issue I had to delete my mapping called "saml2",
  remake it named "mapped", then update the horizon and apache configs
  accordingly. Then in the keystone.conf file I had to remove the
  "methods" line and the "saml2" line. Once I restarted apache,
  federation worked as expected.

  I'm not sure if this is a bug or if the way I was doing it before was
  hanging around as legacy from when "saml2" had been removed, but I
  couldn't find anything release-notes-wise about the change, and the
  docs examples still reference "saml2"...

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1793845/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1780376] Re: Queens neutron broken with recent L3 removal from neutron-lib.constants

2018-07-13 Thread Michael Johnson
This issue only applies to master, where qa/infra has removed zuul-cloner
and is now relying on requirements/upper-constraints.

So from Boden's comments it sounds like this is a broken
requirements/upper-constraints entry for neutron/neutron-lib.

I will add the requirements team to the bug.

** Also affects: openstack-requirements
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1780376

Title:
  Queens neutron broken with recent L3 removal from neutron-
  lib.constants

Status in neutron:
  Confirmed
Status in OpenStack Global Requirements:
  New

Bug description:
  This patch: https://github.com/openstack/neutron-
  lib/commit/ec829f9384547864aebb56390da8e17df7051aac breaks neutron in
  the current global requirements setup. Current GR with the new
  versioning pulls queens neutron and the 1.17.0 neutron-lib. Since L3
  was removed from neutron-lib.constants, queens neutron fails at the
  reference in neutron/plugins/common/constants.py.

  I'm not sure if L3 should be put back, queens neutron patched, or the
  global requirements setup where it's pulling different versions of
  neutron and neutron-lib needs to be fixed.

  Steps to reproduce:
  Checkout neutron-lbaas and run tox -e py27
  Zuul seems to be pulling the right versions, local does not due to the GR 
constraints.

  Failed to import test module: neutron_lbaas.tests.unit.agent.test_agent
  Traceback (most recent call last):
File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
  module = self._get_module_from_name(name)
File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
  __import__(name)
File "neutron_lbaas/tests/unit/agent/test_agent.py", line 19, in 
  from neutron_lbaas.agent import agent
File "neutron_lbaas/agent/agent.py", line 26, in 
  from neutron_lbaas.agent import agent_manager as manager
File "neutron_lbaas/agent/agent_manager.py", line 17, in 
  from neutron.agent import rpc as agent_rpc
File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/agent/rpc.py",
 line 27, in 
  from neutron.agent import resource_cache
File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/agent/resource_cache.py",
 line 20, in 
  from neutron.api.rpc.callbacks.consumer import registry as registry_rpc
File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/api/rpc/callbacks/consumer/registry.py",
 line 15, in 
  from neutron.api.rpc.callbacks import resource_manager
File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/api/rpc/callbacks/resource_manager.py",
 line 21, in 
  from neutron.api.rpc.callbacks import resources
File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/api/rpc/callbacks/resources.py",
 line 15, in 
  from neutron.objects import network
File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/objects/network.py",
 line 21, in 
  from neutron.db.models import segment as segment_model
File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/db/models/segment.py",
 line 24, in 
  from neutron.extensions import segment
File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/extensions/segment.py",
 line 26, in 
  from neutron.api import extensions
File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/api/extensions.py",
 line 32, in 
  from neutron.plugins.common import constants as const
File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/plugins/common/constants.py",
 line 28, in 
  'router': constants.L3,
  AttributeError: 'module' object has no attribute 'L3'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1780376/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1780376] [NEW] Queens neutron broken with recent L3 removal from neutron-lib.constants

2018-07-05 Thread Michael Johnson
Public bug reported:

This patch: https://github.com/openstack/neutron-
lib/commit/ec829f9384547864aebb56390da8e17df7051aac breaks neutron in
the current global requirements setup. Current GR with the new
versioning pulls queens neutron and the 1.17.0 neutron-lib. Since L3 was
removed from neutron-lib.constants, queens neutron fails at the
reference in neutron/plugins/common/constants.py.

I'm not sure if L3 should be put back, queens neutron patched, or the
global requirements setup where it's pulling different versions of
neutron and neutron-lib needs to be fixed.
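
If patching queens neutron were the chosen route, a minimal compatibility
shim could look like this (a sketch only; the fallback string 'L3' is an
assumption about the removed constant's value, not taken from neutron-lib):

# Hypothetical shim for neutron/plugins/common/constants.py, tolerating
# neutron-lib versions with and without the L3 constant.
from neutron_lib import constants

ALLOWED_SERVICES = {
    'router': getattr(constants, 'L3', 'L3'),
}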

Steps to reproduce:
Checkout neutron-lbaas and run tox -e py27
Zuul seems to be pulling the right versions, local does not due to the GR 
constraints.

Failed to import test module: neutron_lbaas.tests.unit.agent.test_agent
Traceback (most recent call last):
  File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
module = self._get_module_from_name(name)
  File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
__import__(name)
  File "neutron_lbaas/tests/unit/agent/test_agent.py", line 19, in 
from neutron_lbaas.agent import agent
  File "neutron_lbaas/agent/agent.py", line 26, in 
from neutron_lbaas.agent import agent_manager as manager
  File "neutron_lbaas/agent/agent_manager.py", line 17, in 
from neutron.agent import rpc as agent_rpc
  File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/agent/rpc.py",
 line 27, in 
from neutron.agent import resource_cache
  File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/agent/resource_cache.py",
 line 20, in 
from neutron.api.rpc.callbacks.consumer import registry as registry_rpc
  File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/api/rpc/callbacks/consumer/registry.py",
 line 15, in 
from neutron.api.rpc.callbacks import resource_manager
  File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/api/rpc/callbacks/resource_manager.py",
 line 21, in 
from neutron.api.rpc.callbacks import resources
  File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/api/rpc/callbacks/resources.py",
 line 15, in 
from neutron.objects import network
  File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/objects/network.py",
 line 21, in 
from neutron.db.models import segment as segment_model
  File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/db/models/segment.py",
 line 24, in 
from neutron.extensions import segment
  File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/extensions/segment.py",
 line 26, in 
from neutron.api import extensions
  File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/api/extensions.py",
 line 32, in 
from neutron.plugins.common import constants as const
  File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/neutron/plugins/common/constants.py",
 line 28, in 
'router': constants.L3,
AttributeError: 'module' object has no attribute 'L3'

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1780376

Title:
  Queens neutron broken with recent L3 removal from neutron-
  lib.constants

Status in neutron:
  New

Bug description:
  This patch: https://github.com/openstack/neutron-
  lib/commit/ec829f9384547864aebb56390da8e17df7051aac breaks neutron in
  the current global requirements setup. Current GR with the new
  versioning pulls queens neutron and the 1.17.0 neutron-lib. Since L3
  was removed from neutron-lib.constants, queens neutron fails at the
  reference in neutron/plugins/common/constants.py.

  I'm not sure if L3 should be put back, queens neutron patched, or the
  global requirements setup where it's pulling different versions of
  neutron and neutron-lib needs to be fixed.

  Steps to reproduce:
  Checkout neutron-lbaas and run tox -e py27
  Zuul seems to be pulling the right versions, local does not due to the GR 
constraints.

  Failed to import test module: neutron_lbaas.tests.unit.agent.test_agent
  Traceback (most recent call last):
File 
"/home/michjohn/projects/migration/neutron-lbaas/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
  module = self._get_module_from_name(name)
  

[Yahoo-eng-team] [Bug 1778199] [NEW] Migrate Volume dialogue box not displaying full destination host

2018-06-22 Thread michael-mcaleer
Public bug reported:

When viewing the 'Destination Host' options in the 'Migrate Volume'
dialogue box it is not possible to see the entire hostname, making it
very difficult to migrate to the correct host. This will become even
tougher if a longer hostname and/or back-end name is used, as it would
not be possible to see the rest of the pool details pertaining to
specific pool settings.

This drop down box would benefit from a dynamically sized drop down menu
which adjusts to the size of the destination host value.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "Screenshot"
   
https://bugs.launchpad.net/bugs/1778199/+attachment/5155468/+files/MigrateVolumeUIBug.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1778199

Title:
  Migrate Volume dialogue box not displaying full destination host

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When viewing the 'Destination Host' options in the 'Migrate Volume'
  dialogue box it is not possible to see the entire hostname, making it
  very difficult to migrate to the correct host. This will become even
  tougher if a longer hostname and/or back-end name is used, as it would
  not be possible to see the rest of the pool details pertaining to
  specific pool settings.

  This drop down box would benefit from a dynamically sized drop down
  menu which adjusts to the size of the destination host value.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1778199/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1767028] Re: loadbalancer can't create with chinese character name

2018-04-30 Thread Michael Johnson
Marking invalid here to move the bug over to the neutron-lbaas
storyboard.

https://storyboard.openstack.org/#!/story/2001946

** Changed in: neutron
   Status: New => Invalid

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1767028

Title:
  loadbalancer can't create with chinese character name

Status in octavia:
  Invalid

Bug description:
  When creating a load balancer with a Chinese-character name, problems
  arise because the name is written into the haproxy configuration and
  the Chinese characters cannot be written correctly.
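
  The traceback below shows the rendered configuration (a unicode string
  containing the name) being written through a byte-oriented file object. A
  sketch of the kind of fix, encoding explicitly before the write
  (illustrative only, not the actual neutron helper):

  # Sketch: write the rendered haproxy config as UTF-8 bytes so a
  # non-ASCII load balancer name survives the write.
  import os
  import tempfile

  def replace_file(file_name, data):
      """Atomically replace file_name with data, encoding text as UTF-8."""
      if not isinstance(data, bytes):
          data = data.encode('utf-8')
      base_dir = os.path.dirname(os.path.abspath(file_name))
      with tempfile.NamedTemporaryFile('wb', dir=base_dir, delete=False) as tmp:
          tmp.write(data)
      os.rename(tmp.name, file_name)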

  - version of Neutron server and Neutron LBaaS plugin are both mitaka
  - cat /var/log/neutron/lbaasv2-agent.log

  ……
  2018-04-26 17:08:28.115 2128890 INFO neutron.common.config [-] 
/usr/bin/neutron-lbaasv2-agent version 0.0.1.dev14379
  2018-04-26 17:08:30.985 2128890 WARNING oslo_config.cfg 
[req-ef0cef5b-d415-4a90-a953-616cb938bfb2 - - - - -] Option "quota_items" from 
group "QUOTAS" is deprecated for removal.  Its value may be silently ignored in 
the future.
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager 
[req-482029a2-2d4a-410a-9d24-5ec3eb7722fd 673c04fcbf374619af91d09eed27ed6f 
e1a0b669b61744ff867274586ef6a968 - - -] Create loadbalancer 
31822d01-d425-456b-8376-4853d820ab1d failed on device driver haproxy_ns
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager 
Traceback (most recent call last):
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager   
File "/usr/lib/python2.7/site-packages/neutron_lbaas/agent/agent_manager.py", 
line 283, in create_loadbalancer
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager 
driver.loadbalancer.create(loadbalancer, ha_info=ha_info)
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager   
File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 433, in create
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager 
self.refresh(loadbalancer, ha_info=ha_info)
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager   
File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 423, in refresh
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager 
if (not self.driver.deploy_instance(loadbalancer, ha_info=ha_info) and
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager   
File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 
271, in inner
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager 
return f(*args, **kwargs)
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager   
File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 201, in deploy_instance
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager 
self.create(loadbalancer, ha_info=ha_info)
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager   
File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 251, in create
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager 
self._spawn(loadbalancer)
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager   
File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 406, in _spawn
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager 
haproxy_base_dir)
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager   
File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/jinja_cfg.py",
 line 93, in save_config
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager 
n_utils.replace_file(conf_path, config_str)
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager   
File "/usr/lib/python2.7/site-packages/neutron/common/utils.py", line 535, in 
replace_file
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager   
File "/usr/lib64/python2.7/socket.py", line 316, in write
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager 
data = str(data) # XXX Should really reject non-string non-buffers
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager 
UnicodeEncodeError: 'ascii' codec can't encode characters in position 20-21: 
ordinal not in range(128)
  2018-04-26 17:11:19.533 2128890 ERROR neutron_lbaas.agent.agent_manager

  - command outputs
  # neutron lbaas-loadbalancer-create 0f45f8d1-7a50-4e4f-93f0-22bdf1e9a4fc 
--name 测试
  Created a new loadbalancer:
  +-+--+
  | Field   | Value

[Yahoo-eng-team] [Bug 1328939] Re: Setting instance default_ephemeral_device in Ironic driver should be more intelligent

2018-02-05 Thread Michael Turek
This wishlist bug has been open more than a year without any activity.
I'm going to move it to "Opinion / Wishlist", which is an easily-
obtainable queue of older requests that have come in. This bug can be
reopened (set back to "New") if someone decides to work on this.

** Changed in: ironic
   Status: Triaged => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1328939

Title:
  Setting instance default_ephemeral_device in Ironic driver should be
  more intelligent

Status in Ironic:
  Opinion
Status in OpenStack Compute (nova):
  Opinion

Bug description:
  The instance default_ephemeral_device value needs to be set within the
  nova driver to the partition where the ephemeral partition is created.
  We currently hard code this value to /dev/sda1 to duplicate the old
  nova-bm behavior. While this makes things work for TripleO [1], we
  should do something smarter to determine the true partition value to
  set (e.g., a Cirros image value should be /dev/vda1).

  We could consider using something like udev by-label names (e.g.,
  /dev/disk/by-label/NNN). This obviously adds a requirement on udev.

  [1] https://bugs.launchpad.net/ironic/+bug/1324286

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1328939/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1739516] [NEW] networking comes up before hostname is set

2017-12-20 Thread Michael Hudson-Doyle
Public bug reported:

When I boot with libvirt a disk image that has been installed with
subiquity which has the workaround for bug 1737630 applied, i.e.
networkd starts automatically, I cannot ping the VM by hostname from the
host.

I think this is because the networking has come up before the hostname
is set, so the hostname is not sent along with the DHCP request to
libvirt's dnsmasq and so that dnsmasq cannot answer lookups for the
hostname. If I run "netplan apply" on the vm, enough things are
apparently restarted that DHCP happens again and I can ping the vm by
hostname from the host.

I'm not completely sure I have diagnosed this correctly and certainly
don't know how to fix it.

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1739516

Title:
  networking comes up before hostname is set

Status in cloud-init:
  New

Bug description:
  When I boot with libvirt a disk image that has been installed with
  subiquity which has the workaround for bug 1737630 applied, i.e.
  networkd starts automatically, I cannot ping the VM by hostname from
  the host.

  I think this is because the networking has come up before the hostname
  is set, so the hostname is not sent along with the DHCP request to
  libvirt's dnsmasq and so that dnsmasq cannot answer lookups for the
  hostname. If I run "netplan apply" on the vm, enough things are
  apparently restarted that DHCP happens again and I can ping the vm by
  hostname from the host.

  I'm not completely sure I have diagnosed this correctly and certainly
  don't know how to fix it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1739516/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1737630] [NEW] cloud-init's netplan rendering does not do anything that starts networkd

2017-12-11 Thread Michael Hudson-Doyle
Public bug reported:

Currently if an instance ends up using cloud-init's netplan support with
the networkd backend, networkd is never started and so networking doesn't
come up. The fix is probably to call "netplan apply" rather than
"netplan generate".

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1737630

Title:
  cloud-init's netplan rendering does not do anything that starts
  networkd

Status in cloud-init:
  New

Bug description:
  Currently if an instance ends up using cloud-init's netplan support
  with the networkd backend, networkd is never started and so networking
  doesn't come up. The fix is probably to call "netplan apply" rather
  than "netplan generate".

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1737630/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1734167] [NEW] DNS doesn't work in no-cloud as launched by ubuntu

2017-11-23 Thread Michael Lyle
Public bug reported:

I use no-cloud to test the kernel in CI (I am maintainer of the bcache
subsystem), and have been running it successfully under 16.04 cloud
images from qemu, using a qemu command that includes:

-smbios "type=1,serial=ds=nocloud-
net;s=https://raw.githubusercontent.com/mlyle/mlyle/master/cloud-
metadata/linuxtst/"

As documented here:

http://cloudinit.readthedocs.io/en/latest/topics/datasources/nocloud.html

Under the new 17.10 cloud images, this doesn't work: the network comes
up, but name resolution doesn't work-- /etc/resolv.conf is a symlink to
a nonexistent file at this point of the boot and systemd-resolved is not
running.  When I manually hack /etc/resolv.conf in the cloud image to
point to 4.2.2.1 it works fine.

I don't know if nameservice not working is by design, but it seems like
it should work.  The documentation states:

"With ds=nocloud-net, the seedfrom value must start with http://,
https:// or ftp://"

And https is not going to work for a raw IP address.

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1734167

Title:
  DNS doesn't work in no-cloud as launched by ubuntu

Status in cloud-init:
  New

Bug description:
  I use no-cloud to test the kernel in CI (I am maintainer of the bcache
  subsystem), and have been running it successfully under 16.04 cloud
  images from qemu, using a qemu command that includes:

  -smbios "type=1,serial=ds=nocloud-
  net;s=https://raw.githubusercontent.com/mlyle/mlyle/master/cloud-
  metadata/linuxtst/"

  As documented here:

  http://cloudinit.readthedocs.io/en/latest/topics/datasources/nocloud.html

  Under the new 17.10 cloud images, this doesn't work: the network comes
  up, but name resolution doesn't work-- /etc/resolv.conf is a symlink
  to a nonexistent file at this point of the boot and systemd-resolved
  is not running.  When I manually hack /etc/resolv.conf in the cloud
  image to point to 4.2.2.1 it works fine.

  I don't know if nameservice not working is by design, but it seems
  like it should work.  The documentation states:

  "With ds=nocloud-net, the seedfrom value must start with http://,
  https:// or ftp://"

  And https is not going to work for a raw IP address.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1734167/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1726651] [NEW] any netplan config for wifi devices should not be world readable

2017-10-23 Thread Michael Hudson-Doyle
Public bug reported:

Currently, as near as I can tell, curtin writes netplan config to a
world readable file in /etc/cloud/ and cloud-init writes it to a world
readable file in /etc/netplan. But if there are any wpa2 psks in the
config they should be put in a 0600 file.

This doesn't really make any sense for actual clouds, but subiquity
should be able to get this right.

One way to do this would be for cloud-init to check through the provided
config and put wifis in a separate file or another would be for there to
be a way to direct cloud-init to write different parts of the netplan
config to different files and a way to set the modes of those files
(neither of which appears to be possible today), and for curtin to make
use of that. I don't really care :)
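
A sketch of the first option, cloud-init splitting the wifi definitions into
a separate root-only file (the paths and the split logic here are assumptions
for illustration, not existing cloud-init behaviour):

# Sketch: keep wpa2 PSKs out of the world-readable netplan file.
import os
import yaml

def write_netplan(netplan_cfg,
                  public_path='/etc/netplan/50-cloud-init.yaml',
                  secret_path='/etc/netplan/90-wifis.yaml'):
    network = dict(netplan_cfg.get('network', {}))
    wifis = network.pop('wifis', None)
    with open(public_path, 'w') as f:
        yaml.safe_dump({'network': network}, f)
    os.chmod(public_path, 0o644)
    if wifis:
        with open(secret_path, 'w') as f:
            yaml.safe_dump({'network': {'version': 2, 'wifis': wifis}}, f)
        os.chmod(secret_path, 0o600)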

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Affects: curtin
 Importance: Undecided
 Status: New

** Also affects: curtin
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1726651

Title:
  any netplan config for wifi devices should not be world readable

Status in cloud-init:
  New
Status in curtin:
  New

Bug description:
  Currently, as near as I can tell, curtin writes netplan config to a
  world readable file in /etc/cloud/ and cloud-init writes it to a world
  readable file in /etc/netplan. But if there are any wpa2 psks in the
  config they should be put in a 0600 file.

  This doesn't really make any sense for actual clouds, but subiquity
  should be able to get this right.

  One way to do this would be for cloud-init to check through the
  provided config and put wifis in a separate file or another would be
  for there to be a way to direct cloud-init to write different parts of
  the netplan config to different files and a way to set the modes of
  those files (neither of which appears to be possible today), and for
  curtin to make use of that. I don't really care :)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1726651/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1199536] Re: Move dict test matchers into testtools

2017-10-16 Thread Michael Turek
** Changed in: ironic
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1199536

Title:
  Move dict test matchers into testtools

Status in Ironic:
  Fix Released
Status in OpenStack Identity (keystone):
  Invalid
Status in OpenStack Compute (nova):
  Won't Fix
Status in oslo-incubator:
  Won't Fix
Status in testtools:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Triaged

Bug description:
  Reduce code duplication by pulling DictKeysMismatch, DictMismatch and
  DictMatches from glanceclient/tests/matchers.py into a library (e.g.
  testtools)
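
  Now that this is Fix Released in testtools, its dict matchers cover the use
  case; a small usage sketch (the test data is made up):

  from testtools import TestCase
  from testtools.matchers import ContainsDict, Equals, KeysEqual, MatchesDict

  class TestDictMatchers(TestCase):
      def test_image_dict(self):
          observed = {'id': 'abc123', 'status': 'active'}
          self.assertThat(observed, KeysEqual('id', 'status'))
          self.assertThat(observed, ContainsDict({'status': Equals('active')}))
          self.assertThat(observed,
                          MatchesDict({'id': Equals('abc123'),
                                       'status': Equals('active')}))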

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1199536/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1722416] [NEW] Init failures on 17.1 with CentOS 7.4 and OpenStack

2017-10-09 Thread Michael J Burling
Public bug reported:

1. OpenStack Juno (very old, indeed)

2. 17.1+17.g45d361cb

3. Sending this simple configuration forward utilizing the puppet module:
#cloud-config
puppet:
  conf:
main:
  use_srv_records: true
  srv_domain: stp-1.redbrickhealth.net
  pluginsource: puppet:///plugins
  pluginfactsource: puppet:///pluginfacts
agent:
  runinterval: 3600
  report: true
  pluginsync: true
  environment: teng3989

4. (Not exactly the same output you're looking for, but getting the information 
is a little more difficult without cloud-init succeeding in this environment;
instead, here is the cloud provisioner log with the cloud-init-relevant lines).
 Starting Initial cloud-init job (pre-networking)...
[3.269920] cloud-init[516]: Cloud-init v. 17.1 running 'init-local' at Mon, 
09 Oct 2017 20:23:39 +. Up 3.24 seconds.
[  OK  ] Started Initial cloud-init job (pre-networking).
 Starting Initial cloud-init job (metadata service crawler)...
[6.494546] cloud-init[869]: Cloud-init v. 17.1 running 'init' at Mon, 09 
Oct 2017 20:23:43 +. Up 6.45 seconds.
[6.543879] cloud-init[869]: ci-info: Net device 
info
[6.545178] cloud-init[869]: ci-info: 
[6.546254] cloud-init[869]: ci-info: Route IPv4 
info
[6.547351] cloud-init[869]: ci-info: 
[   12.855065] cloud-init[869]: 2017-10-09 20:23:49,607 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [2/120s]: 
request error [('Connection aborted.', error(113, 'No route to host'))]
[   15.861045] cloud-init[869]: 2017-10-09 20:23:52,612 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [5/120s]: 
request error [('Connection aborted.', error(113, 'No route to host'))]
[   18.867033] cloud-init[869]: 2017-10-09 20:23:55,618 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [8/120s]: 
request error [('Connection aborted.', error(113, 'No route to host'))]
[   21.873300] cloud-init[869]: 2017-10-09 20:23:58,625 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [11/120s]: 
request error [('Connection aborted.', error(113, 'No route to host'))]
[   24.879074] cloud-init[869]: 2017-10-09 20:24:01,630 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [14/120s]: 
request error [('Connection aborted.', error(113, 'No route to host'))]
[   27.885123] cloud-init[869]: 2017-10-09 20:24:04,637 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [17/120s]: 
request error [('Connection aborted.', error(113, 'No route to host'))]
[   30.891068] cloud-init[869]: 2017-10-09 20:24:07,643 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [20/120s]: 
request error [('Connection aborted.', error(113, 'No route to host'))]
[   35.904337] cloud-init[869]: 2017-10-09 20:24:12,656 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [25/120s]: 
request error [('Connection aborted.', error(113, 'No route to host'))]
[   38.913860] cloud-init[869]: 2017-10-09 20:24:15,665 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [28/120s]: 
request error [('Connection aborted.', error(113, 'No route to host'))]
[   41.920961] cloud-init[869]: 2017-10-09 20:24:18,672 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [31/120s]: 
request error [('Connection aborted.', error(113, 'No route to host'))]
[   44.927096] cloud-init[869]: 2017-10-09 20:24:21,679 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [35/120s]: 
request error [('Connection aborted.', error(113, 'No route to host'))]
[   50.938985] cloud-init[869]: 2017-10-09 20:24:27,690 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [41/120s]: 
request error [('Connection aborted.', error(113, 'No route to host'))]
[   53.944671] cloud-init[869]: 2017-10-09 20:24:30,696 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [44/120s]: 
request error [('Connection aborted.', error(113, 'No route to host'))]
[   59.956920] cloud-init[869]: 2017-10-09 20:24:36,708 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [50/120s]: 
request error [('Connection aborted.', error(113, 'No route to host'))]
[   62.962710] cloud-init[869]: 2017-10-09 20:24:39,714 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [53/120s]: 
request error [('Connection aborted.', error(113, 

[Yahoo-eng-team] [Bug 1718356] Re: Include default config files in python wheel

2017-09-21 Thread Michael Johnson
Correct, our policy is in code and we don't use paste.  Marking invalid.

** Changed in: octavia
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1718356

Title:
  Include default config files in python wheel

Status in Barbican:
  In Progress
Status in Cinder:
  In Progress
Status in Designate:
  In Progress
Status in Fuxi:
  New
Status in Glance:
  In Progress
Status in OpenStack Heat:
  In Progress
Status in Ironic:
  In Progress
Status in OpenStack Identity (keystone):
  In Progress
Status in kuryr-libnetwork:
  New
Status in Magnum:
  In Progress
Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  In Progress
Status in octavia:
  Invalid
Status in openstack-ansible:
  New
Status in Sahara:
  In Progress
Status in OpenStack DBaaS (Trove):
  In Progress
Status in Zun:
  New

Bug description:
  The projects which deploy OpenStack from source or using python wheels
  currently have to either carry templates for api-paste, policy and
  rootwrap files or need to source them from git during deployment. This
  results in some rather complex mechanisms which could be radically
  simplified by simply ensuring that all the same files are included in
  the built wheel.

  A precedence for this has already been set in neutron [1], glance [2]
  and designate [3] through the use of the data_files option in the
  files section of setup.cfg.

  [1] 
https://github.com/openstack/neutron/blob/d3c393ff6b5fbd0bdaabc8ba678d755ebfba08f7/setup.cfg#L24-L39
  [2] 
https://github.com/openstack/glance/blob/02cd5cba70a8465a951cb813a573d390887174b7/setup.cfg#L20-L21
  [3] 
https://github.com/openstack/designate/blob/25eb143db04554d65efe2e5d60ad3afa6b51d73a/setup.cfg#L30-L37

  This bug will be used for a cross-project implementation of patches to
  normalise the implementation across the OpenStack projects. Hopefully
  the result will be a consistent implementation across all the major
  projects.
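
  For reference, the pattern the projects above use is pbr's data_files
  support in setup.cfg; a hedged sketch (the paths and file names are
  illustrative and vary per project):

  [files]
  data_files =
      etc/myservice =
          etc/api-paste.ini
          etc/rootwrap.conf
      etc/myservice/rootwrap.d = etc/rootwrap.d/*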

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1718356/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1716945] [NEW] Install and configure (Red Hat) in glance: missing DB steps

2017-09-13 Thread Michael Burk
Public bug reported:


This bug tracker is for errors with the documentation, use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:

- [x] This doc is inaccurate in this way: Under Prerequisites, the database 
setup shows how to connect to the db, and then it skips to the CLI step that creates the 
glance user. Compare to Ocata doc:
https://docs.openstack.org/ocata/install-guide-rdo/glance-install.html#install-and-configure-components
- [ ] This is a doc addition request.
- [ ] I have a fix to the document that I can paste below including example: 
input and output. 

If you have a troubleshooting or support issue, use the following
resources:

 - Ask OpenStack: http://ask.openstack.org
 - The mailing list: http://lists.openstack.org
 - IRC: 'openstack' channel on Freenode

---
Release: 15.0.0.0rc2.dev25 on 'Wed Aug 23 03:33:04 2017, commit 9820166'
SHA: 982016670fe908e5d7026714b115e63b7c31b46b
Source: 
https://git.openstack.org/cgit/openstack/glance/tree/doc/source/install/install-rdo.rst
URL: https://docs.openstack.org/glance/pike/install/install-rdo.html
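
For reference, the prerequisite steps this report describes as missing follow
the same pattern as the linked Ocata guide; roughly (GLANCE_DBPASS is a
placeholder to replace with a real password):

$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';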

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1716945

Title:
  Install and configure (Red Hat) in glance: missing DB steps

Status in Glance:
  New

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [x] This doc is inaccurate in this way: Under Prerequisites, the database 
setup shows how to connect to the db, and then it skips to the CLI step that creates the 
glance user. Compare to Ocata doc:
  
https://docs.openstack.org/ocata/install-guide-rdo/glance-install.html#install-and-configure-components
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 15.0.0.0rc2.dev25 on 'Wed Aug 23 03:33:04 2017, commit 9820166'
  SHA: 982016670fe908e5d7026714b115e63b7c31b46b
  Source: 
https://git.openstack.org/cgit/openstack/glance/tree/doc/source/install/install-rdo.rst
  URL: https://docs.openstack.org/glance/pike/install/install-rdo.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1716945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1716797] [NEW] Verify operation in keystone: step 1 has already been done

2017-09-12 Thread Michael Burk
Public bug reported:


This bug tracker is for errors with the documentation, use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:

On page https://docs.openstack.org/keystone/pike/install/keystone-
verify-obs.html,

- [x] This doc is inaccurate in this way: Step one apparently has already been 
done; there is no occurrence of "admin_token_auth" in 
/etc/keystone/keystone-paste.ini
- [ ] This is a doc addition request.
- [ ] I have a fix to the document that I can paste below including example: 
input and output. 

If you have a troubleshooting or support issue, use the following
resources:

 - Ask OpenStack: http://ask.openstack.org
 - The mailing list: http://lists.openstack.org
 - IRC: 'openstack' channel on Freenode

---
Release: 12.0.0.0rc3.dev2 on 2017-08-26 22:01
SHA: 5a9aeefff06678d790d167b6dac752677f02edf9
Source: 
https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/keystone-verify-obs.rst
URL: https://docs.openstack.org/keystone/pike/install/keystone-verify-obs.html

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: doc

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1716797

Title:
  Verify operation in keystone: step 1 has already been done

Status in OpenStack Identity (keystone):
  New

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  On page https://docs.openstack.org/keystone/pike/install/keystone-
  verify-obs.html,

  - [x] This doc is inaccurate in this way: Step one apparently has already 
been done; there is no occurrence of "admin_token_auth" in 
/etc/keystone/keystone-paste.ini
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 12.0.0.0rc3.dev2 on 2017-08-26 22:01
  SHA: 5a9aeefff06678d790d167b6dac752677f02edf9
  Source: 
https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/keystone-verify-obs.rst
  URL: https://docs.openstack.org/keystone/pike/install/keystone-verify-obs.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1716797/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1716792] [NEW] Install and configure in keystone, Pike: nav button wrong

2017-09-12 Thread Michael Burk
Public bug reported:


This bug tracker is for errors with the documentation, use the following as a 
template and remove or add fields as you see fit. Convert [ ] into [x] to check 
boxes:

On page https://docs.openstack.org/keystone/pike/install/index-rdo.html,

- [x] This doc is inaccurate in this way: "Forward" button goes to Verify 
section, but should go to "Create a domain, projects, users, and roles" 
(https://docs.openstack.org/keystone/pike/install/keystone-users.html)
- [ ] This is a doc addition request.
- [ ] I have a fix to the document that I can paste below including example: 
input and output. 

If you have a troubleshooting or support issue, use the following
resources:

 - Ask OpenStack: http://ask.openstack.org
 - The mailing list: http://lists.openstack.org
 - IRC: 'openstack' channel on Freenode

---
Release: 12.0.0.0rc3.dev2 on 2017-08-26 22:01
SHA: 5a9aeefff06678d790d167b6dac752677f02edf9
Source: 
https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/keystone-install-rdo.rst
URL: https://docs.openstack.org/keystone/pike/install/keystone-install-rdo.html

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: doc

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1716792

Title:
  Install and configure in keystone, Pike: nav button wrong

Status in OpenStack Identity (keystone):
  New

Bug description:
  
  This bug tracker is for errors with the documentation, use the following as a 
template and remove or add fields as you see fit. Convert [ ] into [x] to check 
boxes:

  On page https://docs.openstack.org/keystone/pike/install/index-
  rdo.html,

  - [x] This doc is inaccurate in this way: "Forward" button goes to Verify 
section, but should go to "Create a domain, projects, users, and roles" 
(https://docs.openstack.org/keystone/pike/install/keystone-users.html)
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 12.0.0.0rc3.dev2 on 2017-08-26 22:01
  SHA: 5a9aeefff06678d790d167b6dac752677f02edf9
  Source: 
https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/keystone-install-rdo.rst
  URL: 
https://docs.openstack.org/keystone/pike/install/keystone-install-rdo.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1716792/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469498] Re: LbaasV2 session persistence- Create and update

2017-09-12 Thread Michael Johnson
** Project changed: neutron => python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1469498

Title:
  LbaasV2 session persistence- Create and update

Status in python-neutronclient:
  New

Bug description:
  When we create an LBaaS pool with session persistence, it is configured OK

  neutron lbaas-pool-create --session-persistence type=HTTP_COOKIE  
--lb-algorithm LEAST_CONNECTIONS --listener 
4658a507-dccc-41f9-87d7-913d31cab3a1 --protocol HTTP 
  Created a new pool:
  +-++
  | Field   | Value  |
  +-++
  | admin_state_up  | True   |
  | description ||
  | healthmonitor_id||
  | id  | a626dc28-0126-48f7-acd3-f486827a89c1   |
  | lb_algorithm| LEAST_CONNECTIONS  |
  | listeners   | {"id": "4658a507-dccc-41f9-87d7-913d31cab3a1"} |
  | members ||
  | name||
  | protocol| HTTP   |
  | session_persistence | {"cookie_name": null, "type": "HTTP_COOKIE"}   |
  | tenant_id   | ae0954b9cf0c438e99211227a7f3f937   |

  BUT, when we create a pool without session persistence and update it
  to do session persistence, the action is different and not user
  friendly.

  [root@puma09 ~(keystone_redhat)]# neutron lbaas-pool-create --lb-algorithm 
LEAST_CONNECTIONS --listener 4658a507-dccc-41f9-87d7-913d31cab3a1 --protocol 
HTTP 
  Created a new pool:
  +-++
  | Field   | Value  |
  +-++
  | admin_state_up  | True   |
  | description ||
  | healthmonitor_id||
  | id  | b9048a69-461a-4503-ba6b-8a2df281f804   |
  | lb_algorithm| LEAST_CONNECTIONS  |
  | listeners   | {"id": "4658a507-dccc-41f9-87d7-913d31cab3a1"} |
  | members ||
  | name||
  | protocol| HTTP   |
  | session_persistence ||
  | tenant_id   | ae0954b9cf0c438e99211227a7f3f937   |
  +-++
  [root@puma09 ~(keystone_redhat)]# neutron lbaas-pool-update 
b9048a69-461a-4503-ba6b-8a2df281f804 --session-persistence type=HTTP_COOKIE
  name 'HTTP_COOKIE' is not defined
  [root@puma09 ~(keystone_redhat)]# 


  we need to configure it in the following way- 
  neutron lbaas-pool-update b9048a69-461a-4503-ba6b-8a2df281f804 
--session-persistence type=dict type=HTTP_COOKIE
  Updated pool: b9048a69-461a-4503-ba6b-8a2df281f804

  The config and update should be done in the same way.

  Kilo+ rhel 7.1
  openstack-neutron-common-2015.1.0-10.el7ost.noarch
  python-neutron-lbaas-2015.1.0-5.el7ost.noarch
  openstack-neutron-openvswitch-2015.1.0-10.el7ost.noarch
  python-neutronclient-2.4.0-1.el7ost.noarch
  openstack-neutron-lbaas-2015.1.0-5.el7ost.noarch
  python-neutron-fwaas-2015.1.0-3.el7ost.noarch
  openstack-neutron-fwaas-2015.1.0-3.el7ost.noarch
  python-neutron-2015.1.0-10.el7ost.noarch
  openstack-neutron-2015.1.0-10.el7ost.noarch
  openstack-neutron-ml2-2015.1.0-10.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1469498/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1687366] Re: Radware LBaaS v2 driver should have config to skip SSL certificates verification

2017-09-12 Thread Michael Johnson
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1687366

Title:
  Radware LBaaS v2 driver should have config to skip SSL certificates
  verification

Status in neutron:
  Fix Released

Bug description:
  Radware LBaaS v2 driver communicates with Radware's back-end system over 
HTTPS.
  Since this back-end system is internal (VA on openstack compute node), 
usually self-signed certificates are used.

  If python's default behavior is to verify SSL certificates, and no valid 
certificates exist, HTTPS communication will be halted.
  Starting from releases 2.7.9/3.4.3, python verifies SSL certificates by 
default.

  This enhancement adds a new configuration parameter for the driver
  which will turn the SSL certificates verification OFF in case when
  it's ON.
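
  A sketch of what such an option can look like with oslo.config (illustrative
  only; the option and group names are assumptions, not the actual Radware
  driver code):

  from oslo_config import cfg

  OPTS = [
      cfg.BoolOpt('ssl_verify_certificates',
                  default=True,
                  help='Verify the SSL certificate of the Radware back-end; '
                       'set to False when it uses a self-signed certificate.'),
  ]
  cfg.CONF.register_opts(OPTS, group='radwarev2')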

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1687366/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1556342] Re: Able to create pool with different protocol than listener protocol

2017-09-12 Thread Michael Johnson
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1556342

Title:
  Able to create pool with different protocol than listener protocol

Status in octavia:
  In Progress

Bug description:
  When creating a pool with a different protocol than the listener protocol, a pool is created even though the protocols are not compatible.
  Previously, no such pool would be displayed in neutron lbaas-pool-list since the protocols are not compatible.

  
  Initial state
  $ neutron lbaas-loadbalancer-list
  
  +--------------------------------------+------+-------------+---------------------+----------+
  | id                                   | name | vip_address | provisioning_status | provider |
  +--------------------------------------+------+-------------+---------------------+----------+
  | bf449f65-633d-4859-b417-28b35f4eaea2 | lb1  | 10.0.0.3    | ERROR               | octavia  |
  | c6bf0765-47a9-49d9-a2f2-dd3f1ea81a5c | lb2  | 10.0.0.13   | ACTIVE              | octavia  |
  | e1210b03-f440-4bc1-84ca-9ba70190854f | lb3  | 10.0.0.16   | ACTIVE              | octavia  |
  +--------------------------------------+------+-------------+---------------------+----------+

  $ neutron lbaas-listener-list
  
  +--------------------------------------+--------------------------------------+-------+----------+---------------+----------------+
  | id                                   | default_pool_id                      | name  | protocol | protocol_port | admin_state_up |
  +--------------------------------------+--------------------------------------+-------+----------+---------------+----------------+
  | 4cda881c-9209-42ac-9c97-e1bfab0300b2 | 6be3dae1-f159-4e0e-94ea-5afdfdab05fc | list2 | HTTP     |            80 | True           |
  +--------------------------------------+--------------------------------------+-------+----------+---------------+----------------+

  $ neutron lbaas-pool-list
  +--+---+--++
  | id   | name  | protocol | admin_state_up |
  +--+---+--++
  | 6be3dae1-f159-4e0e-94ea-5afdfdab05fc | pool2 | HTTP | True   |
  +--+---+--++

  
  Create new listener with TCP protocol 
  $ neutron lbaas-listener-create --name list3 --loadbalancer lb3 --protocol TCP --protocol-port 22
  Created a new listener:
  +---++
  | Field | Value  |
  +---++
  | admin_state_up| True   |
  | connection_limit  | -1 |
  | default_pool_id   ||
  | default_tls_container_ref ||
  | description   ||
  | id| 9574801a-675b-4784-baf0-410d1a1fd941   |
  | loadbalancers | {"id": "e1210b03-f440-4bc1-84ca-9ba70190854f"} |
  | name  | list3  |
  | protocol  | TCP|
  | protocol_port | 22 |
  | sni_container_refs||
  | tenant_id | b24968d717804ffebd77803fce24b5a4   |
  +---++

  Create pool with HTTP protocol instead of TCP
  $ neutron lbaas-pool-create --name pool3 --lb-algorithm ROUND_ROBIN --listener list3 --protocol HTTP
  Listener protocol TCP and pool protocol HTTP are not compatible.

  Pool list shows pool3 even though the protocols are not compatible and the pool should not have been created:
  $ neutron lbaas-pool-list
  +--+---+--++
  | id   | name  | protocol | admin_state_up |
  +--+---+--++
  | 6be3dae1-f159-4e0e-94ea-5afdfdab05fc | pool2 | HTTP | True   |
  | 7e6fbe67-60b0-40cd-afdd-44cddd8c60a1 | pool3 | HTTP | True   |
  +--+---+--++

  In MySQL, the pool table in the octavia DB shows no pool3.
  

[Yahoo-eng-team] [Bug 1653086] Re: Hit internal server error in lb creation with no subnets network

2017-09-12 Thread Michael Johnson
Neutron-lbaas is no longer a neutron project, so removing neutron from
the affected project.

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1653086

Title:
  Hit internal server error in lb creation with no subnets network

Status in octavia:
  Fix Released

Bug description:
  Currently, lbaas supports creating a loadbalancer with a vip-network.
  But if there isn't a subnet on this vip-network, the Neutron server
  will hit an internal error.

  2016-12-29 16:57:47.182 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
  2016-12-29 16:57:47.182 TRACE neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
  2016-12-29 16:57:47.182 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/base.py", line 526, in do_create
  2016-12-29 16:57:47.182 TRACE neutron.api.v2.resource     return obj_creator(request.context, **kwargs)
  2016-12-29 16:57:47.182 TRACE neutron.api.v2.resource   File "/opt/stack/neutron-lbaas/neutron_lbaas/services/loadbalancer/plugin.py", line 362, in create_loadbalancer
  2016-12-29 16:57:47.182 TRACE neutron.api.v2.resource     allocate_vip=not driver.load_balancer.allocates_vip)
  2016-12-29 16:57:47.182 TRACE neutron.api.v2.resource   File "/opt/stack/neutron-lbaas/neutron_lbaas/db/loadbalancer/loadbalancer_dbv2.py", line 332, in create_loadbalancer
  2016-12-29 16:57:47.182 TRACE neutron.api.v2.resource     vip_address, vip_network_id)
  2016-12-29 16:57:47.182 TRACE neutron.api.v2.resource   File "/opt/stack/neutron-lbaas/neutron_lbaas/db/loadbalancer/loadbalancer_dbv2.py", line 155, in _create_port_for_load_balancer
  2016-12-29 16:57:47.182 TRACE neutron.api.v2.resource     lb_db.vip_address = fixed_ip['ip_address']
  2016-12-29 16:57:47.182 TRACE neutron.api.v2.resource TypeError: 'NoneType' object has no attribute '__getitem__'
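
  A minimal sketch (not the neutron-lbaas code) of the guard
  _create_port_for_load_balancer needs: when the vip-network has no subnet,
  the created port has no fixed_ips and the assignment in the trace above
  fails with the TypeError.

    from neutron_lib import exceptions as n_exc  # assumed import, for illustration

    def _assign_vip_from_port(lb_db, port):
        fixed_ips = port.get('fixed_ips') or []
        if not fixed_ips:
            raise n_exc.BadRequest(
                resource='loadbalancer',
                msg='VIP network has no subnet to allocate a VIP address from')
        lb_db.vip_address = fixed_ips[0]['ip_address']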

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1653086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667259] Re: one more pool is created for a loadbalancer

2017-09-12 Thread Michael Johnson
As noted above, this was fixed in ocata.

Also, this didn't get updated as LBaaS is no longer part of neutron and
bugs are now tracked in the Octavia storyboard.

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1667259

Title:
  one more pool is created for a loadbalancer

Status in OpenStack Heat:
  Won't Fix
Status in neutron:
  Fix Released

Bug description:
  One more pool is created when creating a load balancer with two pools.
  That extra pool doesn't have complete information but is related to that
  loadbalancer, which causes a failure when deleting the loadbalancer.

  heat resource-list lbvd
  WARNING (shell) "heat resource-list" is deprecated, please use "openstack 
stack resource list" instead
  
+---+--+---+-+--+
  | resource_name | physical_resource_id | resource_type
 | resource_status | updated_time |
  
+---+--+---+-+--+
  | listener  | 12dfe005-80e0-4439-a4f8-1333f688e73b | 
OS::Neutron::LBaaS::Listener  | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  | listener2 | 26ba1151-3d4b-4732-826b-7f318800070d | 
OS::Neutron::LBaaS::Listener  | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  | loadbalancer  | 3a5bfa24-220c-4316-9c3d-57dd9c13feb8 | 
OS::Neutron::LBaaS::LoadBalancer  | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  | monitor   | 241bc328-4c9b-4f58-a34a-4e25ed7431ea | 
OS::Neutron::LBaaS::HealthMonitor | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  | monitor2  | 6592b768-f3be-4ff9-bbf4-2c30b94f98e2 | 
OS::Neutron::LBaaS::HealthMonitor | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  | pool  | 41652d9a-d0fe-4743-9e5f-2dfe98b19f3d | 
OS::Neutron::LBaaS::Pool  | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  | pool2 | fae40172-7f16-4b1a-93f0-877d404fe466 | 
OS::Neutron::LBaaS::Pool  | CREATE_COMPLETE | 2017-02-23T09:38:58Z |
  
+---+--+---+-+--+

  
  neutron lbaas-pool-list | grep lbvd
  | 095c94b8-8c18-443f-9ce9-3d34e94f0c81 | lbvd-pool-ujtp6ddt4g6o  | HTTP  | True |
  | 41652d9a-d0fe-4743-9e5f-2dfe98b19f3d | lbvd-pool-ujtp6ddt4g6o  | HTTP  | True |
  | fae40172-7f16-4b1a-93f0-877d404fe466 | lbvd-pool2-kn7rlwltbdxh | HTTPS | True |

  
  neutron lbaas-pool-show 095c94b8-8c18-443f-9ce9-3d34e94f0c81
  +-++
  | Field  | Value  |
  +-++
  | admin_state_up  | True  |
  | description||
  | healthmonitor_id||
  | id  | 095c94b8-8c18-443f-9ce9-3d34e94f0c81  |
  | lb_algorithm| ROUND_ROBIN|
  | listeners  ||
  | loadbalancers  | {"id": "3a5bfa24-220c-4316-9c3d-57dd9c13feb8"} |
  | members||
  | name| lbvd-pool-ujtp6ddt4g6o|
  | protocol| HTTP  |
  | session_persistence ||
  | tenant_id  | 3dcf8b12327c460a966c1c1d4a6e2887  |
  +-++

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1667259/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495430] Re: delete lbaasv2 can't delete lbaas namespace automatically.

2017-09-12 Thread Michael Johnson
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495430

Title:
  delete lbaasv2 can't delete lbaas namespace automatically.

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive kilo series:
  Triaged
Status in Ubuntu Cloud Archive mitaka series:
  Triaged
Status in Ubuntu Cloud Archive newton series:
  Triaged
Status in Ubuntu Cloud Archive ocata series:
  Fix Released
Status in octavia:
  Fix Released
Status in neutron-lbaas package in Ubuntu:
  Fix Released
Status in neutron-lbaas source package in Xenial:
  Triaged
Status in neutron-lbaas source package in Yakkety:
  Triaged
Status in neutron-lbaas source package in Zesty:
  Fix Released

Bug description:
  Trying lbaas v2 in my environment, I found lots of orphan lbaas namespaces. Looking back at the code, the lbaas instance is undeployed when a listener is deleted, and everything is deleted except the namespace.
  However, in the method that deletes the loadbalancer, the namespace is deleted automatically.
  The behavior is not consistent; the namespace should be deleted when deleting a listener too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1495430/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1711068] Re: lbaas listener update does not work when specifying listener name

2017-09-12 Thread Michael Johnson
LBaaS is no longer part of neutron.  LBaaS bugs should be submitted to
the Octavia project on Storyboard.

Mitaka is now EOL and the neutron client is deprecated.  If the issue
still existing in Newton or a non-EOL release of neutron client, please
re-open this bug against python-neutronclient.

** Project changed: neutron => python-neutronclient

** Tags removed: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1711068

Title:
  lbaas listener update does not work when specifying listener name

Status in python-neutronclient:
  New

Bug description:
  On MITAKA.

  When trying to update an LBaaS listener by specifying its name as the
  identifier, the following is received: Unable to find listener with id


  Updating with id works as expected.

  Example:

  radware@devstack131:~$ neutron lbaas-listener-list
  
  +--------------------------------------+--------------------------------------+-----------------+----------+---------------+----------------+
  | id                                   | default_pool_id                      | name            | protocol | protocol_port | admin_state_up |
  +--------------------------------------+--------------------------------------+-----------------+----------+---------------+----------------+
  | cc2ddbd9-038d-4ff0-81bd-346ba9a47e23 | 3d379d02-3476-4d03-8e0f-3383102ff8f9 | RADWARE_ANOTHER | HTTP     |            80 | True           |
  | 2491490d-12bc-4d4b-9744-bb8464e01672 |                                      | RADWAREV2       | HTTP     |            80 | True           |
  +--------------------------------------+--------------------------------------+-----------------+----------+---------------+----------------+
  radware@devstack131:~$ neutron lbaas-listener-update --name=RADWAREV2 RADWAREV2
  Unable to find listener with id 'RADWAREV2'
  radware@devstack131:~$ neutron lbaas-listener-update --name=RADWAREV2 2491490d-12bc-4d4b-9744-bb8464e01672
  Updated listener: 2491490d-12bc-4d4b-9744-bb8464e01672
  radware@devstack131:~$

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1711068/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1699613] Re: LBaaS v2 agent security groups not filtering

2017-09-12 Thread Michael Johnson
LBaaS is no longer part of neutron and future bugs should be reported in
the Octavia project in Storyboard.

Mitaka is now EOL so this bug will be closed out.  If it is still
occurring in a non-EOL release, please re-open this bug in Storyboard
under the neutron-lbaas project under Octavia.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1699613

Title:
  LBaaS v2 agent security groups not filtering

Status in neutron:
  Invalid

Bug description:
  Greetings:

  Current environment details:

  - Mitaka with LBaaS v2 agent configured.
  - Deployed via Openstack Ansible
  - Neutron Linuxbridge
  - Ubuntu 14.04.5 LTS

  We had followed documentation at https://docs.openstack.org/mitaka/networking-guide/config-lbaas.html to secure traffic to the VIP.

  We created two security groups.

  1) SG-allowToVIP: We didn't want to open it globally, so we limited ingress HTTP access to certain IPs. This SG was applied to the VIP port.
  2) SG-allowLB: ingress HTTP from the VIP address. This SG was applied to the pool member(s). The idea behind this was that the web server (the load-balanced pool member) will always see traffic coming from the VIP.

  The end result is that we can access the VIP from any source IP, and any
  rule applied to the security group (SG-allowToVIP) is ignored.

  We have verified the following:
  - Appropriate SG is applied properly to each port
  - When we look at the iptables-save for the VIP port, we are seeing the rules 
originating from the SG but they are not working.
  - When we look at the iptables-save for the pool-member(s), we are seeing the 
rules originating from the SG and they are working.

  The only time we were able to block traffic to the VIP was to edit the
  iptables rules for the LBaaS agent which is not practical obviously,
  but we were just experimenting.

  I will provide detailed output - after I clean it up.

  Thanks in advance

  Luke

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1699613/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1713424] Re: [RFE] Support proxy protocol enablement in Neutron LBaaS API

2017-09-12 Thread Michael Johnson
LBaaS is no longer part of neutron.  LBaaS issues should be reported in
storyboard under the Octavia project.

That said, this is available in Octavia.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1713424

Title:
  [RFE] Support proxy protocol enablement in Neutron LBaaS API

Status in neutron:
  Invalid

Bug description:
  Problem: servers behind a TCP load balancer, as provisioned using the
  Neutron LBaaS API, can't determine the source IP of a TCP connection.
  Instead they will always see the load balancer IP as origin of
  requests. This makes troubleshooting client connection issues using
  logs gathered behind a LB very hard and often impossible.

  Solution: the PROXY protocol has been introduced to forward the
  missing information across a load balancer:

  http://www.haproxy.org/download/1.8/doc/proxy-protocol.txt

  A number of backend services can make use of it, such as Nginx

  https://www.nginx.com/resources/admin-guide/proxy-protocol/

  but also Apache, Squid, Undertow. Proxy protocol is also supported by
  Amazon ELB since 2013.

  As HAproxy, the implementation behind the Neutron LBaaS API, does
  already offer native support, this RFE is about its enablement using
  the LBaaS API and corresponding Heat resources.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1713424/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1713424] [NEW] [RFE] Support proxy protocol enablement in Neutron LBaaS API

2017-08-28 Thread Michael Steffens
Public bug reported:

Problem: servers behind a TCP load balancer, as provisioned using the
Neutron LBaaS API, can't determine the source IP of a TCP connection.
Instead they will always see the load balancer IP as origin of requests.
This makes troubleshooting client connection issues using logs gathered
behind a LB very hard and often impossible.

Solution: the PROXY protocol has been introduced to forward the missing
information across a load balancer:

http://www.haproxy.org/download/1.8/doc/proxy-protocol.txt

A number of backend services can make use of it, such as Nginx

https://www.nginx.com/resources/admin-guide/proxy-protocol/

but also Apache, Squid, Undertow. Proxy protocol is also supported by
Amazon ELB since 2013.

As HAproxy, the implementation behind the Neutron LBaaS API, does
already offer native support, this RFE is about its enablement using the
LBaaS API and corresponding Heat resources.
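
A minimal sketch, illustrative only, of what a backend gains once the PROXY
protocol is enabled: HAproxy prepends a v1 header line to the TCP stream, so
the real client address survives the load balancer.

    def parse_proxy_v1(header_line):
        # e.g. "PROXY TCP4 203.0.113.7 198.51.100.10 51544 80\r\n"
        parts = header_line.strip().split(' ')
        if parts[0] != 'PROXY':
            raise ValueError('not a PROXY protocol v1 header')
        _, family, src_ip, dst_ip, src_port, dst_port = parts
        return {'family': family,
                'client_ip': src_ip, 'client_port': int(src_port),
                'lb_ip': dst_ip, 'lb_port': int(dst_port)}

    print(parse_proxy_v1('PROXY TCP4 203.0.113.7 198.51.100.10 51544 80\r\n'))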

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1713424

Title:
  [RFE] Support proxy protocol enablement in Neutron LBaaS API

Status in neutron:
  New

Bug description:
  Problem: servers behind a TCP load balancer, as provisioned using the
  Neutron LBaaS API, can't determine the source IP of a TCP connection.
  Instead they will always see the load balancer IP as origin of
  requests. This makes troubleshooting client connection issues using
  logs gathered behind a LB very hard and often impossible.

  Solution: the PROXY protocol has been introduced to forward the
  missing information across a load balancer:

  http://www.haproxy.org/download/1.8/doc/proxy-protocol.txt

  A number of backend services can make use of it, such as Nginx

  https://www.nginx.com/resources/admin-guide/proxy-protocol/

  but also Apache, Squid, Undertow. Proxy protocol is also supported by
  Amazon ELB since 2013.

  As HAproxy, the implementation behind the Neutron LBaaS API, does
  already offer native support, this RFE is about its enablement using
  the LBaaS API and corresponding Heat resources.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1713424/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1711215] [NEW] Default neutron-ns-metadata-proxy threads setting is too low (newton)

2017-08-16 Thread Michael Johnson
Public bug reported:

In the older neutron-ns-metadata-proxy, in the newton release, the
number of threads is fixed at 100.  This is a drop from the previous
default setting of 1000 as a side effect of changing the number of wsgi
threads [1].

This is causing failures at sites with a large number of instances using 
deployment tools (instance cloud-init logs):
2017-08-01 15:44:36,773 - url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [119/120s]: 
request error [HTTPConnectionPool(host='169.254.169.254', port=80): Request 
timed out. (timeout=17.0)]
2017-08-01 15:44:37,775 - DataSourceEc2.py[CRITICAL]: Giving up on md from 
['http://169.254.169.254/2009-04-04/meta-data/instance-id'] after 120 seconds

Setting the value, for neutron-ns-metadata-proxy only, back up to 1000 resolves 
this issue.
It should also be noted that in the Ocata forward version of the 
neutron-ns-metadata-proxy the default value is 1024 [2].

I am going to propose a patch for stable/newton that sets the default
thread count for the neutron-ns-metadata-proxy back up to 1000.

[1] 
https://github.com/openstack/neutron/blob/master/releasenotes/notes/config-wsgi-pool-size-a4c06753b79fee6d.yaml
[2] 
https://github.com/openstack/neutron/blob/master/neutron/agent/metadata/driver.py#L44

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1711215

Title:
  Default neutron-ns-metadata-proxy threads setting is too low (newton)

Status in neutron:
  New

Bug description:
  In the older neutron-ns-metadata-proxy, in the newton release, the
  number of threads is fixed at 100.  This is a drop from the previous
  default setting of 1000 as a side effect of changing the number of
  wsgi threads [1].

  This is causing failures at sites with a large number of instances using 
deployment tools (instance cloud-init logs):
  2017-08-01 15:44:36,773 - url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [119/120s]: 
request error [HTTPConnectionPool(host='169.254.169.254', port=80): Request 
timed out. (timeout=17.0)]
  2017-08-01 15:44:37,775 - DataSourceEc2.py[CRITICAL]: Giving up on md from 
['http://169.254.169.254/2009-04-04/meta-data/instance-id'] after 120 seconds

  Setting the value, for neutron-ns-metadata-proxy only, back up to 1000 
resolves this issue.
  It should also be noted that in the Ocata forward version of the 
neutron-ns-metadata-proxy the default value is 1024 [2].

  I am going to propose a patch for stable/newton that sets the default
  thread count for the neutron-ns-metadata-proxy back up to 1000.

  [1] 
https://github.com/openstack/neutron/blob/master/releasenotes/notes/config-wsgi-pool-size-a4c06753b79fee6d.yaml
  [2] 
https://github.com/openstack/neutron/blob/master/neutron/agent/metadata/driver.py#L44

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1711215/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1703602] [NEW] Wrong snapshot usage info

2017-07-11 Thread Michael Dovgal
Public bug reported:

On the snapshot creation page we can see the count of used snapshots, which is based on the tenant's quotas.
But currently we see the volumes' usage instead of the snapshots'.
The reason: the line at [0] always uses the volumes data instead of the snapshots data.
We already have the necessary info in the snapshot_limit template here [1], but we don't use it.

[0] -
https://github.com/openstack/horizon/blob/29a6ed4cc06ef9cbadee311c947fe19308a387ed/openstack_dashboard/dashboards/project/volumes/templates/volumes/_limits.html#L42

[1] -
https://github.com/openstack/horizon/blob/29a6ed4cc06ef9cbadee311c947fe19308a387ed/openstack_dashboard/dashboards/project/volumes/templates/volumes/_snapshot_limits.html#L24-L30

** Affects: horizon
 Importance: Undecided
 Assignee: Michael Dovgal (mdovgal)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1703602

Title:
  Wrong snapshot usage info

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  On the snapshot creation page we can see the count of used snapshots, which is based on the tenant's quotas.
  But currently we see the volumes' usage instead of the snapshots'.
  The reason: the line at [0] always uses the volumes data instead of the snapshots data.
  We already have the necessary info in the snapshot_limit template here [1], but we don't use it.

  [0] -
  
https://github.com/openstack/horizon/blob/29a6ed4cc06ef9cbadee311c947fe19308a387ed/openstack_dashboard/dashboards/project/volumes/templates/volumes/_limits.html#L42

  [1] -
  
https://github.com/openstack/horizon/blob/29a6ed4cc06ef9cbadee311c947fe19308a387ed/openstack_dashboard/dashboards/project/volumes/templates/volumes/_snapshot_limits.html#L24-L30

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1703602/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1703584] [NEW] Get rid of redundant cinder api calls

2017-07-11 Thread Michael Dovgal
Public bug reported:

While executing the tenant_limit_usages quotas function here [0] we get
information about cinder volumes/snapshots/gigabytes usage, like this:

{u'maxTotalBackupGigabytes': 1000,
 u'maxTotalBackups': 10,
 u'maxTotalSnapshots': 10,
 u'maxTotalVolumeGigabytes': 1000,
 u'maxTotalVolumes': 10,
 u'totalBackupGigabytesUsed': 1,
 u'totalBackupsUsed': 1,
 u'totalGigabytesUsed': 8,
 u'totalSnapshotsUsed': 1,
 u'totalVolumesUsed': 6
}

After that, here [1] we try to get the same information we've already got and add it one more time to the limits dict here [2].
Also, these calls slow down the application, so we need to get rid of them.

[0] -
https://github.com/openstack/horizon/blob/29a6ed4cc06ef9cbadee311c947fe19308a387ed/openstack_dashboard/usage/quotas.py#L489

https://github.com/openstack/horizon/blob/29a6ed4cc06ef9cbadee311c947fe19308a387ed/openstack_dashboard/usage/quotas.py#L490-L491
[2] - 
https://github.com/openstack/horizon/blob/29a6ed4cc06ef9cbadee311c947fe19308a387ed/openstack_dashboard/usage/quotas.py#L497-L499

** Affects: horizon
 Importance: Undecided
 Assignee: Michael Dovgal (mdovgal)
 Status: In Progress

** Description changed:

  During executing tenant_limit_usages quotas function here [0] we get
  information about cinder volumes/snapshots/gigabites usage. Like this:
  
  {u'maxTotalBackupGigabytes': 1000,
-  u'maxTotalBackups': 10,
-  u'maxTotalSnapshots': 10,
-  u'maxTotalVolumeGigabytes': 1000,
-  u'maxTotalVolumes': 10,
-  u'totalBackupGigabytesUsed': 1,
-  u'totalBackupsUsed': 1,
-  u'totalGigabytesUsed': 8,
-  u'totalSnapshotsUsed': 1,
-  u'totalVolumesUsed': 6
+  u'maxTotalBackups': 10,
+  u'maxTotalSnapshots': 10,
+  u'maxTotalVolumeGigabytes': 1000,
+  u'maxTotalVolumes': 10,
+  u'totalBackupGigabytesUsed': 1,
+  u'totalBackupsUsed': 1,
+  u'totalGigabytesUsed': 8,
+  u'totalSnapshotsUsed': 1,
+  u'totalVolumesUsed': 6
  }
  
- After it here [1] we trying to get the same information as we've already got.
+ After it here [1] we trying to get the same information as we've already got 
and add it one more time to limits dict here [2]
  Also these calls slows down the application, so we need to get rid of them.
  
- 
- [0] - 
https://github.com/openstack/horizon/blob/29a6ed4cc06ef9cbadee311c947fe19308a387ed/openstack_dashboard/usage/quotas.py#L489
+ [0] -
+ 
https://github.com/openstack/horizon/blob/29a6ed4cc06ef9cbadee311c947fe19308a387ed/openstack_dashboard/usage/quotas.py#L489
  
  
https://github.com/openstack/horizon/blob/29a6ed4cc06ef9cbadee311c947fe19308a387ed/openstack_dashboard/usage/quotas.py#L490-L491
+ [2] - 
https://github.com/openstack/horizon/blob/29a6ed4cc06ef9cbadee311c947fe19308a387ed/openstack_dashboard/usage/quotas.py#L497-L499

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1703584

Title:
   Get rid of redundant cinder api calls

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  While executing the tenant_limit_usages quotas function here [0] we get
  information about cinder volumes/snapshots/gigabytes usage, like this:

  {u'maxTotalBackupGigabytes': 1000,
   u'maxTotalBackups': 10,
   u'maxTotalSnapshots': 10,
   u'maxTotalVolumeGigabytes': 1000,
   u'maxTotalVolumes': 10,
   u'totalBackupGigabytesUsed': 1,
   u'totalBackupsUsed': 1,
   u'totalGigabytesUsed': 8,
   u'totalSnapshotsUsed': 1,
   u'totalVolumesUsed': 6
  }

  After that, here [1] we try to get the same information we've already got and add it one more time to the limits dict here [2].
  Also, these calls slow down the application, so we need to get rid of them.

  [0] -
  
https://github.com/openstack/horizon/blob/29a6ed4cc06ef9cbadee311c947fe19308a387ed/openstack_dashboard/usage/quotas.py#L489

  
https://github.com/openstack/horizon/blob/29a6ed4cc06ef9cbadee311c947fe19308a387ed/openstack_dashboard/usage/quotas.py#L490-L491
  [2] - 
https://github.com/openstack/horizon/blob/29a6ed4cc06ef9cbadee311c947fe19308a387ed/openstack_dashboard/usage/quotas.py#L497-L499
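
  A minimal sketch of the idea, assuming the limits dict shown above comes
  from cinder.tenant_absolute_limits(); the helper name is illustrative:

    def cinder_usages_from_limits(limits):
        """Reuse the maxTotal*/total*Used keys instead of extra API calls."""
        return {
            'volumes': {'used': limits.get('totalVolumesUsed', 0),
                        'quota': limits.get('maxTotalVolumes', 0)},
            'gigabytes': {'used': limits.get('totalGigabytesUsed', 0),
                          'quota': limits.get('maxTotalVolumeGigabytes', 0)},
            'snapshots': {'used': limits.get('totalSnapshotsUsed', 0),
                          'quota': limits.get('maxTotalSnapshots', 0)},
        }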

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1703584/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1700578] [NEW] Error during tenant_quota_usages function calls @memoized cache

2017-06-26 Thread Michael Dovgal
Public bug reported:

The tenant_quota_usages method here [0] accepts a list as an input parameter.
Because a list is not an immutable object, it can't be cached by the @memoized
decorator, yet the method is still wrapped by it.
Every call of this function triggers the UnhashableKeyWarning here [1].

[0] -
https://github.com/openstack/horizon/blob/359467b4013bb4f89a6a1309e6eda89459288986/openstack_dashboard/usage/quotas.py#L442

[1] -
https://github.com/openstack/horizon/blob/4570b4cd7813c5b5d559a87c715f4ee6e6f1f63d/horizon/utils/memoized.py#L88

** Affects: horizon
 Importance: Undecided
 Assignee: Michael Dovgal (mdovgal)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Michael Dovgal (mdovgal)

** Summary changed:

- Error during function calls memoized cache
+ Error during tenant_quota_usages function calls @memoized cache

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1700578

Title:
  Error during tenant_quota_usages function calls @memoized cache

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The tenant_quota_usages method here [0] accepts a list as an input parameter.
  Because a list is not an immutable object, it can't be cached by the @memoized
  decorator, yet the method is still wrapped by it.
  Every call of this function triggers the UnhashableKeyWarning here [1].

  [0] -
  
https://github.com/openstack/horizon/blob/359467b4013bb4f89a6a1309e6eda89459288986/openstack_dashboard/usage/quotas.py#L442

  [1] -
  
https://github.com/openstack/horizon/blob/4570b4cd7813c5b5d559a87c715f4ee6e6f1f63d/horizon/utils/memoized.py#L88
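
  A minimal sketch of the usual fix, with illustrative names rather than the
  horizon code: normalize the list to a hashable tuple before it reaches the
  memoized cache key.

    from horizon.utils.memoized import memoized

    @memoized
    def _tenant_quota_usages(request, targets):
        # the real implementation would gather usages here; the point is
        # only that (request, targets) is now a hashable cache key
        return {'targets': targets}

    def tenant_quota_usages(request, targets=None):
        return _tenant_quota_usages(request, tuple(targets or ()))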

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1700578/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1697462] [NEW] Snapshot update TemplateDoesNotExist error

2017-06-12 Thread Michael Dovgal
Public bug reported:

When following the link
env_id/dashboard/project/snapshots/bdc20667-5821-4f10-8c0e-1aaa707a80c1/update
there is a TemplateDoesNotExist error:


Error during template rendering

In template
/opt/stack/horizon/openstack_dashboard/dashboards/project/snapshots/templates/snapshots/update.html,
error at line 6

{% include 'project/volumes/snapshots/_update.html' %}

** Affects: horizon
 Importance: High
 Assignee: Michael Dovgal (mdovgal)
 Status: In Progress


** Tags: ocata-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1697462

Title:
  Snapshot update TemplateDoesNotExist error

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When following the link
  env_id/dashboard/project/snapshots/bdc20667-5821-4f10-8c0e-1aaa707a80c1/update
  there is a TemplateDoesNotExist error:

  
  Error during template rendering

  In template
  
/opt/stack/horizon/openstack_dashboard/dashboards/project/snapshots/templates/snapshots/update.html,
  error at line 6

  {% include 'project/volumes/snapshots/_update.html' %}

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1697462/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1694537] [NEW] Instance creation fails with SSL, keystone v3

2017-05-30 Thread Michael Skalka
Public bug reported:

We can create volumes, networks, etc in an Ocata deployment using SSL,
but launching an instance fails with the following error in horizon:
https://pastebin.canonical.com/189552/ and an associated error in nova-
cloud-controller's apache2 nova-placement error log:
https://pastebin.canonical.com/189547/

This seems to be a communication issue between the nova scheduler and
the nova placement api.

Steps to remedy taken so far:
- Clearing the rabbitmq queue
- Bouncing the rabbitmq services
- Bouncing the apache2 services on nova-c-c and keystone

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1694537

Title:
  Instance creation fails with SSL, keystone v3

Status in OpenStack Compute (nova):
  New

Bug description:
  We can create volumes, networks, etc in an Ocata deployment using SSL,
  but launching an instance fails with the following error in horizon:
  https://pastebin.canonical.com/189552/ and an associated error in
  nova-cloud-controller's apache2 nova-placement error log:
  https://pastebin.canonical.com/189547/

  This seems to be a communication issue between the nova scheduler and
  the nova placement api.

  Steps to remedy taken so far:
  - Clearing the rabbitmq queue
  - Bouncing the rabbitmq services
  - Bouncing the apache2 services on nova-c-c and keystone

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1694537/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1673754] Re: LBaaSv2: Cannot delete loadbalancer in PENDING_CREATE

2017-03-17 Thread Michael Johnson
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1673754

Title:
  LBaaSv2: Cannot delete loadbalancer in PENDING_CREATE

Status in octavia:
  New

Bug description:
  If all neutron-lbaasv2-agents are down at a given time, you could get
  some loadbalancers stuck in PENDING_CREATE.

  With CLI tools it is impossible to delete these resources:

  (neutron) lbaas-loadbalancer-delete 5173ac41-194d-4d0c-b833-657b728c469d
  Invalid state PENDING_CREATE of loadbalancer resource 
5173ac41-194d-4d0c-b833-657b728c469d
  Neutron server returns request_ids: 
['req-970a6338-b0d0-4bcc-9108-ec94360b45e2']

  Even deleting this loadbalancer from the database with mysql commands
  does not clean it completely.

  The VIP port also needs to be deleted, which is not possible with the API because I get:
  (neutron) port-delete 264f6125-ef0f-46ab-84b6-79da2d00eb28
  Port 264f6125-ef0f-46ab-84b6-79da2d00eb28 cannot be deleted directly via the 
port API: has device owner neutron:LOADBALANCERV2.
  Neutron server returns request_ids: 
['req-ed4718e0-e8a3-4c53-9416-de97cc73230f']

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1673754/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1608980] Re: Remove MANIFEST.in as it is not explicitly needed by PBR

2017-03-13 Thread Michael Johnson
** Changed in: neutron-lbaas-dashboard
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1608980

Title:
  Remove MANIFEST.in as it is not explicitly needed by PBR

Status in anvil:
  Invalid
Status in craton:
  Fix Released
Status in DragonFlow:
  Fix Released
Status in ec2-api:
  Fix Released
Status in gce-api:
  Fix Released
Status in Glance:
  Fix Released
Status in Karbor:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystoneauth:
  Fix Released
Status in keystonemiddleware:
  Fix Released
Status in Kosmos:
  Fix Released
Status in Magnum:
  Fix Released
Status in masakari:
  Fix Released
Status in networking-midonet:
  Fix Released
Status in networking-odl:
  Fix Released
Status in neutron:
  Fix Released
Status in Neutron LBaaS Dashboard:
  Fix Released
Status in octavia:
  Fix Released
Status in os-vif:
  Fix Released
Status in python-searchlightclient:
  In Progress
Status in OpenStack Search (Searchlight):
  Fix Released
Status in Solum:
  Fix Released
Status in Swift Authentication:
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress
Status in Tricircle:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Fix Released
Status in watcher:
  Fix Released
Status in Zun:
  Fix Released

Bug description:
PBR does not explicitly require MANIFEST.in, so it can be removed.

  
  Snippet from: http://docs.openstack.org/infra/manual/developers.html

  Manifest

  Just like AUTHORS and ChangeLog, why keep a list of files you wish to
  include when you can find many of these in git. MANIFEST.in generation
  ensures almost all files stored in git, with the exception of
  .gitignore, .gitreview and .pyc files, are automatically included in
  your distribution. In addition, the generated AUTHORS and ChangeLog
  files are also included. In many cases, this removes the need for an
  explicit ‘MANIFEST.in’ file

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1608980/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603458] Re: Cannot Delete loadbalancers due to undeleteable pools

2017-03-13 Thread Michael Johnson
Marking this invalid as you can delete a pool via horizon.  Did you
remember to delete the health monitor first?

I agree that in the future we could enable the cascade delete feature in
horizon with a warning, but that would be an RFE and not the bug as
reported.  Closing this as invalid as you can in fact delete pools via
the neutron-lbaas-dashboard.

** Changed in: horizon
   Status: New => Invalid

** Changed in: neutron-lbaas-dashboard
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1603458

Title:
  Cannot Delete loadbalancers due to undeleteable pools

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in neutron:
  Invalid
Status in Neutron LBaaS Dashboard:
  Invalid

Bug description:
  To delete an LBaaSv2 loadbalancer, you must remove all the members
  from the pool, then delete the pool, then delete the listener, then
  you can delete the loadbalancer. Currently in Horizon you can do all
  of those except delete the pool. Since you can't delete the pool, you
  can't delete the listener, and therefore can't delete the
  loadbalancer.

  Either deleting the listener should trigger the pool delete too (since
  they're 1:1) or the Horizon Wizard for Listener should have a delete
  pool capability.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1603458/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1672345] Re: Loadbalancer V2 ports are not serviced by DVR

2017-03-13 Thread Michael Johnson
This is a neutron DVR bug and not an LBaaS/Octavia bug.  It may be a
duplicate of existing DVR bugs.

** Project changed: octavia => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1672345

Title:
  Loadbalancer V2 ports are not serviced by DVR

Status in neutron:
  New

Bug description:
  I reported #1629539, which was on Mitaka/LBaaSv1, but I'm seeing the
  exact same behaviour on Newton/LBaaSv2.

  There's apparently a fix (for Kilo) in #1493809. There's also #1494003
  (a duplicate of #1493809), which has a lot of debug output and
  apparently a way to reproduce.

  When I reinstalled my Openstack setup from Debian GNU/Linux Sid/Mitaka
  to Debian GNU/Linux Jessie/Newton, I started out with a non-
  distributed router (no DVR). The LBaaS v1 _and_ v2 worked just fine
  there. But as soon as I enabled/set up DVR, they stopped working.

  I'm unsure of what information would be required, but "ask and it will
  be supplied".

  The problem I'm seeing is that the FIP of the LB responds, but not
  the VIP.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1672345/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1661303] [NEW] neutron-ns-metadata-proxy process failing under python3.5

2017-02-02 Thread Michael Johnson
Public bug reported:

When running under python 3.5, we are seeing the neutron-ns-metadata-
proxy fail repeatedly on Ocata RC1 master.

This is causing instances to fail to boot under a python3.5 devstack.

A gate example is here:
http://logs.openstack.org/99/407099/25/check/gate-rally-dsvm-py35-neutron-neutron-ubuntu-xenial/4741e0d/logs/screen-q-l3.txt.gz?level=ERROR#_2017-02-02_11_41_52_029

2017-02-02 11:41:52.029 29906 ERROR neutron.agent.linux.external_process
[-] metadata-proxy for router with uuid
79af72b9-6b17-4864-8088-5dc96b9271df not found. The process should not
have died

Running this locally I see the debug output of the configuration
settings and it immediately exits with no error output.

To reproduce:
Stack a fresh devstack with the "USE_PYTHON3=True" setting in your localrc 
(NOTE: There are other python3x devstack bugs that may reconfigure your host in 
bad ways once you do this.  Plan to only stack with this setting on a throw 
away host or one you plan to use for Python3.x going forward)

Once this devstack is up and running, set up a neutron network and subnet,
then boot a cirros instance on that new subnet.

Check the cirros console.log to see that it cannot find a metadata
datasource (due to this change disabling configdrive:
https://github.com/openstack-dev/devstack/commit/7682ea88a6ab8693b215646f16748dbbc2476cc4).

Check the q-l3.txt log to see the repeated "The process should not have
died" messages.

You will also note that the cirros instance did not receive its ssh
keys and requires password login due to the missing datasource.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1661303

Title:
  neutron-ns-metadata-proxy process failing under python3.5

Status in neutron:
  New

Bug description:
  When running under python 3.5, we are seeing the neutron-ns-metadata-
  proxy fail repeatedly on Ocata RC1 master.

  This is causing instances to fail to boot under a python3.5 devstack.

  A gate example is here:
  
http://logs.openstack.org/99/407099/25/check/gate-rally-dsvm-py35-neutron-neutron-ubuntu-xenial/4741e0d/logs/screen-q-l3.txt.gz?level=ERROR#_2017-02-02_11_41_52_029

  2017-02-02 11:41:52.029 29906 ERROR
  neutron.agent.linux.external_process [-] metadata-proxy for router
  with uuid 79af72b9-6b17-4864-8088-5dc96b9271df not found. The process
  should not have died

  Running this locally I see the debug output of the configuration
  settings and it immediately exits with no error output.

  To reproduce:
  Stack a fresh devstack with the "USE_PYTHON3=True" setting in your localrc 
(NOTE: There are other python3x devstack bugs that may reconfigure your host in 
bad ways once you do this.  Plan to only stack with this setting on a throw 
away host or one you plan to use for Python3.x going forward)

  Once this devstack is up and running, set up a neutron network and
  subnet, then boot a cirros instance on that new subnet.

  Check the cirros console.log to see that it cannot find a metadata
  datasource (due to this change disabling configdrive:
  https://github.com/openstack-dev/devstack/commit/7682ea88a6ab8693b215646f16748dbbc2476cc4).

  Check the q-l3.txt log to see the repeated "The process should not
  have died" messages.

  You will also note that the cirros instance did not receive its ssh
  keys and requires password login due to the missing datasource.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1661303/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1661086] [NEW] Failed to plug VIF VIFBridge

2017-02-01 Thread Michael Johnson
Public bug reported:

I did a fresh restack/reclone this morning and can no longer boot up a
cirros instance.

Nova client returns:

| fault| {"message": "Failure running
os_vif plugin plug method: Failed to plug VIF
VIFBridge(active=False,address=fa:16:3e:6f:0e:84,bridge_name='qbrd3377ad5-43',has_traffic_filtering=True,id=d3377ad5-4397
-474c-b6dd-e29b9a52d277,network=Network(d2582e93-d6e5-43ac-be7b-
fe3fc3c7", "code": 500, "details": "  File
\"/opt/stack/nova/nova/compute/manager.py\", line 1780, in
_do_build_and_run_instance |

pip list:
nova (15.0.0.0b4.dev77, /opt/stack/nova)
os-vif (1.4.0)

n-cpu.log shows:
2017-02-01 11:13:32.880 DEBUG nova.network.os_vif_util 
[req-17c8b4e4-2197-4205-aed3-007d0f2837e4 admin admin] Converted object 
VIFBridge(active=False,address=fa:16:3e:6f:0e:84,bridge_name='qbrd3377ad5-43',has_traffic_filtering=True,id=d3377ad5-4397-474c-b6dd-e29b9a52d277,network=Network(d2582e93-d6e5-43ac-be7b-fe3fc3c7e593),plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=False,vif_name='tapd3377ad5-43')
 from (pid=69603) nova_to_osvif_vif 
/opt/stack/nova/nova/network/os_vif_util.py:425
2017-02-01 11:13:32.880 DEBUG os_vif [req-17c8b4e4-2197-4205-aed3-007d0f2837e4 
admin admin] Unplugging vif 
VIFBridge(active=False,address=fa:16:3e:6f:0e:84,bridge_name='qbrd3377ad5-43',has_traffic_filtering=True,id=d3377ad5-4397-474c-b6dd-e29b9a52d277,network=Network(d2582e93-d6e5-43ac-be7b-fe3fc3c7e593),plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=False,vif_name='tapd3377ad5-43')
 from (pid=69603) unplug 
/usr/local/lib/python2.7/dist-packages/os_vif/__init__.py:112
2017-02-01 11:13:32.881 DEBUG oslo.privsep.daemon [-] privsep: 
request[139935485013840]: (3, b'vif_plug_ovs.linux_net.delete_bridge', 
('qbrd3377ad5-43', b'qvbd3377ad5-43'), {}) from (pid=69603) out_of_band 
/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:194
2017-02-01 11:13:32.881 DEBUG oslo.privsep.daemon [-] privsep: Exception during 
request[139935485013840]: a bytes-like object is required, not 'str' from 
(pid=69603) out_of_band 
/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:194
2017-02-01 11:13:32.881 DEBUG oslo.privsep.daemon [-] privsep: 
reply[139935485013840]: (5, 'builtins.TypeError', ("a bytes-like object is 
required, not 'str'",)) from (pid=69603) out_of_band 
/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:194
2017-02-01 11:13:32.882 ERROR os_vif [req-17c8b4e4-2197-4205-aed3-007d0f2837e4 
admin admin] Failed to unplug vif 
VIFBridge(active=False,address=fa:16:3e:6f:0e:84,bridge_name='qbrd3377ad5-43',has_traffic_filtering=True,id=d3377ad5-4397-474c-b6dd-e29b9a52d277,network=Network(d2582e93-d6e5-43ac-be7b-fe3fc3c7e593),plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=False,vif_name='tapd3377ad5-43')
2017-02-01 11:13:32.882 TRACE os_vif Traceback (most recent call last):
2017-02-01 11:13:32.882 TRACE os_vif   File 
"/usr/local/lib/python2.7/dist-packages/os_vif/__init__.py", line 113, in unplug
2017-02-01 11:13:32.882 TRACE os_vif plugin.unplug(vif, instance_info)
2017-02-01 11:13:32.882 TRACE os_vif   File 
"/usr/local/lib/python2.7/dist-packages/vif_plug_ovs/ovs.py", line 216, in 
unplug
2017-02-01 11:13:32.882 TRACE os_vif self._unplug_bridge(vif, instance_info)
2017-02-01 11:13:32.882 TRACE os_vif   File 
"/usr/local/lib/python2.7/dist-packages/vif_plug_ovs/ovs.py", line 192, in 
_unplug_bridge
2017-02-01 11:13:32.882 TRACE os_vif 
linux_net.delete_bridge(vif.bridge_name, v1_name)
2017-02-01 11:13:32.882 TRACE os_vif   File 
"/usr/local/lib/python2.7/dist-packages/oslo_privsep/priv_context.py", line 
205, in _wrap
2017-02-01 11:13:32.882 TRACE os_vif return self.channel.remote_call(name, 
args, kwargs)
2017-02-01 11:13:32.882 TRACE os_vif   File 
"/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py", line 186, in 
remote_call
2017-02-01 11:13:32.882 TRACE os_vif exc_type = 
importutils.import_class(result[1])
2017-02-01 11:13:32.882 TRACE os_vif   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/importutils.py", line 30, in 
import_class
2017-02-01 11:13:32.882 TRACE os_vif __import__(mod_str)
2017-02-01 11:13:32.882 TRACE os_vif ImportError: No module named builtins
2017-02-01 11:13:32.882 TRACE os_vif

Full n-cpu.log is attached.
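
A minimal sketch of the type mismatch visible in the privsep request above,
where delete_bridge() receives one native string and one bytes object;
normalizing both names to text before the call is the usual defensive fix
(illustrative only, not the os-vif patch):

    from oslo_utils import encodeutils

    def normalized_bridge_args(bridge_name, v1_name):
        # e.g. ('qbrd3377ad5-43', b'qvbd3377ad5-43') -> two text strings, so
        # the privsep daemon never has to serialize a mixed str/bytes pair
        return (encodeutils.safe_decode(bridge_name),
                encodeutils.safe_decode(v1_name))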

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1661086

Title:
  Failed to plug VIF VIFBridge

Status in OpenStack Compute (nova):
  New

Bug description:
  I did a fresh restack/reclone this morning and can no longer boot up a
  cirros instance.

  Nova client returns:

  | fault| {"message": "Failure running
  os_vif plugin plug method: Failed to plug VIF
  

[Yahoo-eng-team] [Bug 1654887] Re: Upgrade to 3.6.0 causes AttributeError: 'SecurityGroup' object has no attribute 'keys'

2017-01-08 Thread Michael Johnson
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1654887

Title:
  Upgrade to 3.6.0 causes AttributeError: 'SecurityGroup' object has no
  attribute 'keys'

Status in neutron:
  New
Status in python-openstackclient:
  New

Bug description:
  When running the command:

  openstack security group create foo

  Under version 3.5.0 of python-openstackclient the command succeeds,
  but after doing a pip install --upgrade python-openstackclient to
  version 3.6.0 I get the following error:

  'SecurityGroup' object has no attribute 'keys'

  Neutron successfully created the security group.

  Running with --debug shows:

  Using http://172.21.21.125:9696/v2.0 as public network endpoint
  REQ: curl -g -i -X POST http://172.21.21.125:9696/v2.0/security-groups -H 
"User-Agent: openstacksdk/0.9.12 keystoneauth1/2.16.0 python-requests/2.12.4 
CPython/2.7.6" -H "Content-Type: application/json" -H "X-Auth-Token: 
{SHA1}d46c48cdee00c9eefd4216b492f9b56e762749bc" -d '{"security_group": {"name": 
"foo", "description": "foo"}}'
  http://172.21.21.125:9696 "POST /v2.0/security-groups HTTP/1.1" 201 1302
  RESP: [201] Content-Type: application/json Content-Length: 1302 
X-Openstack-Request-Id: req-9bea5358-8341-4064-b7ea-54edd8e4fd53 Date: Sun, 08 
Jan 2017 20:27:07 GMT Connection: keep-alive
  RESP BODY: {"security_group": {"description": "foo", "tenant_id": 
"f0c5bc260c06423893b791890715a337", "created_at": "2017-01-08T20:27:07Z", 
"updated_at": "2017-01-08T20:27:07Z", "security_group_rules": [{"direction": 
"egress", "protocol": null, "description": null, "port_range_max": null, 
"updated_at": "2017-01-08T20:27:07Z", "revision_number": 1, "id": 
"fc82f0ef-df78-4b46-9b9e-96d71b5b34b4", "remote_group_id": null, 
"remote_ip_prefix": null, "created_at": "2017-01-08T20:27:07Z", 
"security_group_id": "b11e40a0-aed2-464e-851e-6901afa0f845", "tenant_id": 
"f0c5bc260c06423893b791890715a337", "port_range_min": null, "ethertype": 
"IPv4", "project_id": "f0c5bc260c06423893b791890715a337"}, {"direction": 
"egress", "protocol": null, "description": null, "port_range_max": null, 
"updated_at": "2017-01-08T20:27:07Z", "revision_number": 1, "id": 
"3e363162-93bf-49c4-9d00-203ffe1dd4ef", "remote_group_id": null, 
"remote_ip_prefix": null, "created_at": "2017-01-08T20:27:07Z", 
"security_group_id": "b
 11e40a0-aed2-464e-851e-6901afa0f845", "tenant_id": 
"f0c5bc260c06423893b791890715a337", "port_range_min": null, "ethertype": 
"IPv6", "project_id": "f0c5bc260c06423893b791890715a337"}], "revision_number": 
1, "project_id": "f0c5bc260c06423893b791890715a337", "id": 
"b11e40a0-aed2-464e-851e-6901afa0f845", "name": "foo"}}

  'SecurityGroup' object has no attribute 'keys'
  Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/cliff/app.py", line 400, in 
run_subcommand
  result = cmd.run(parsed_args)
File "/usr/local/lib/python2.7/dist-packages/osc_lib/command/command.py", 
line 41, in run
  return super(Command, self).run(parsed_args)
File "/usr/local/lib/python2.7/dist-packages/cliff/display.py", line 112, 
in run
  column_names, data = self.take_action(parsed_args)
File 
"/usr/local/lib/python2.7/dist-packages/openstackclient/network/common.py", 
line 188, in take_action
  self.app.client_manager.network, parsed_args)
File 
"/usr/local/lib/python2.7/dist-packages/openstackclient/network/v2/security_group.py",
 line 145, in take_action_network
  display_columns, property_columns = _get_columns(obj)
File 
"/usr/local/lib/python2.7/dist-packages/openstackclient/network/v2/security_group.py",
 line 77, in _get_columns
  columns = list(item.keys())
File "/usr/local/lib/python2.7/dist-packages/openstack/resource2.py", line 
309, in __getattribute__
  return object.__getattribute__(self, name)
  AttributeError: 'SecurityGroup' object has no attribute 'keys'
  clean_up CreateSecurityGroup: 'SecurityGroup' object has no attribute 'keys'
  Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/osc_lib/shell.py", line 135, 
in run
  ret_val = super(OpenStackShell, self).run(argv)
File "/usr/local/lib/python2.7/dist-packages/cliff/app.py", line 279, in run
  result = self.run_subcommand(remainder)
File "/usr/local/lib/python2.7/dist-packages/osc_lib/shell.py", line 180, 
in run_subcommand
  ret_value = super(OpenStackShell, self).run_subcommand(argv)
File "/usr/local/lib/python2.7/dist-packages/cliff/app.py", line 400, in 
run_subcommand
  result = cmd.run(parsed_args)
File "/usr/local/lib/python2.7/dist-packages/osc_lib/command/command.py", 
line 41, in run
  return super(Command, self).run(parsed_args)
File "/usr/local/lib/python2.7/dist-packages/cliff/display.py", line 112, 
in run
  column_names, data = 

[Yahoo-eng-team] [Bug 1649527] [NEW] nova creates an invalid ethernet/bridge interface definition in virsh xml

2016-12-13 Thread Michael Henkel
Public bug reported:

Description
===

https://github.com/openstack/nova/blob/master/nova/virt/libvirt/designer.py#L61
sets the script path of an ethernet interface to ""

https://github.com/openstack/nova/blob/master/nova/virt/libvirt/config.py#L1228
checks script for None. As it is not none but a string it adds an empty 
script path to the ethernet interface definition in the virsh xml

Steps to reproduce
==

nova generated virsh:

[root@overcloud-novacompute-0 heat-admin]# cat 2.xml |grep tap -A5 -B3

  
  
  
  
  
  
  


XML validation:

[root@overcloud-novacompute-0 heat-admin]# virt-xml-validate 2.xml
Relax-NG validity error : Extra element devices in interleave
2.xml:59: element devices: Relax-NG validity error : Element domain failed to 
validate content
2.xml fails to validate

removing the <script/> element, the XML validation succeeds:

[root@overcloud-novacompute-0 heat-admin]# cat 1.xml |grep tap -A5 -B2

  
  
  
  
  
  

[root@overcloud-novacompute-0 heat-admin]# virt-xml-validate 1.xml
1.xml validates

Point is that libvirt <2.0.0 is more tolerant. libvirt 2.0.0 throws a segfault:
 
Dec  9 13:30:32 comp1 kernel: libvirtd[1048]: segfault at 8 ip 7fc9ff09e1c3 
sp 7fc9edfef1d0 error 4 in libvirt.so.0.2000.0[7fc9fef4b000+352000]
Dec  9 13:30:32 comp1 journal: End of file while reading data: Input/output 
error
Dec  9 13:30:32 comp1 systemd: libvirtd.service: main process exited, 
code=killed, status=11/SEGV
Dec  9 13:30:32 comp1 systemd: Unit libvirtd.service entered failed state.
Dec  9 13:30:32 comp1 systemd: libvirtd.service failed.
Dec  9 13:30:32 comp1 systemd: libvirtd.service holdoff time over, scheduling 
restart.
Dec  9 13:30:32 comp1 systemd: Starting Virtualization daemon...
Dec  9 13:30:32 comp1 systemd: Started Virtualization daemon. 

Expected result
===
VM can be started.
Instead of checking for None, config.py should check for an empty string before
adding the script path.
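
A minimal sketch of that guard (helper and element names are assumptions for
illustration, not the actual nova code):

# Sketch: emit the script element only for a non-empty path, so the empty
# string set by designer.py no longer produces an empty <script/> element.
from lxml import etree

def append_interface_script(dev, script_path):
    if script_path:                      # skips both None and ''
        dev.append(etree.Element("script", path=script_path))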


Actual result
=
VM doesn't start

Environment
===
OSP10/Newton, libvirt 2.0.0

** Affects: nova
 Importance: Undecided
     Assignee: Michael  Henkel (mhenkel-3)
 Status: New

** Summary changed:

- nova creates and invalid ethernet interface definition in virsh xml
+ nova creates an invalid ethernet interface definition in virsh xml

** Summary changed:

- nova creates an invalid ethernet interface definition in virsh xml
+ nova creates an invalid ethernet/bridge interface definition in virsh xml

** Changed in: nova
 Assignee: (unassigned) => Michael  Henkel (mhenkel-3)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1649527

Title:
  nova creates an invalid ethernet/bridge interface definition in virsh
  xml

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===

  
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/designer.py#L61
  sets the script path of an ethernet interface to ""

  
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/config.py#L1228
  checks script for None. As it is not none but a string it adds an empty 
  script path to the ethernet interface definition in the virsh xml

  Steps to reproduce
  ==

  nova generated virsh:

  [root@overcloud-novacompute-0 heat-admin]# cat 2.xml |grep tap -A5 -B3
  







  

  XML validation:

  [root@overcloud-novacompute-0 heat-admin]# virt-xml-validate 2.xml
  Relax-NG validity error : Extra element devices in interleave
  2.xml:59: element devices: Relax-NG validity error : Element domain failed to 
validate content
  2.xml fails to validate

  removing the <script/> element, the XML validation succeeds:

  [root@overcloud-novacompute-0 heat-admin]# cat 1.xml |grep tap -A5 -B2
  






  
  [root@overcloud-novacompute-0 heat-admin]# virt-xml-validate 1.xml
  1.xml validates

  Point is that libvirt <2.0.0 is more tolerant. libvirt 2.0.0 throws a 
segfault:
   
  Dec  9 13:30:32 comp1 kernel: libvirtd[1048]: segfault at 8 ip 
7fc9ff09e1c3 sp 7fc9edfef1d0 error 4 in 
libvirt.so.0.2000.0[7fc9fef4b000+352000]
  Dec  9 13:30:32 comp1 journal: End of file while reading data: Input/output 
error
  Dec  9 13:30:32 comp1 systemd: libvirtd.service: main process exited, 
code=killed, status=11/SEGV
  Dec  9 13:30:32 comp1 systemd: Unit libvirtd.service entered failed state.
  Dec  9 13:30:32 comp1 systemd: libvirtd.service failed.
  Dec  9 13:30:32 comp1 systemd: libvirtd.service holdoff time over, scheduling 
restart.
  Dec  9 13:30:32 comp1 systemd: Starting Virtualization daemon...
  Dec  9 13:30:32 comp1 systemd: Started Virtualization daemon. 

  Expected result
  ==

[Yahoo-eng-team] [Bug 1626093] Re: LBaaSV2: listener deletion causes LB port to be Detached "forever"

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Importance: Undecided => Low

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1626093

Title:
  LBaaSV2: listener deletion causes LB port to be Detached "forever"

Status in octavia:
  New

Bug description:
  Case 1:
  Create a LBaaSV2 LB with a listener. Remove listener. Port Detached. Add 
listener. Nothing happens.

  Case 2:
  Create a LBaaSV2 LB with a listener. Add another listener. Remove one of the 
two. Port Detached.

  This is merely an annoyance.

  neutron port-show shows nothing for device_id and device_owner.
  In Horizon shows as Detached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1626093/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1602974] Re: [stable/liberty] LBaaS v2 haproxy: need a way to find status of listener

2016-12-05 Thread Michael Johnson
Is this a duplicate of https://bugs.launchpad.net/octavia/+bug/1632054 ?

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1602974

Title:
  [stable/liberty] LBaaS v2 haproxy: need a way to find status of
  listener

Status in octavia:
  Incomplete

Bug description:
  Currently we don't have an option to check the status of a listener. Below
  is the output of a listener, with no status shown.

  root@runner:~# neutron lbaas-listener-show 
8c0e0289-f85d-4539-8970-467a45a5c191
  +---++
  | Field | Value  |
  +---++
  | admin_state_up| True   |
  | connection_limit  | -1 |
  | default_pool_id   ||
  | default_tls_container_ref ||
  | description   ||
  | id| 8c0e0289-f85d-4539-8970-467a45a5c191   |
  | loadbalancers | {"id": "bda96c0a-0167-45ab-8772-ba92bc0f2d00"} |
  | name  | test-lb-http   |
  | protocol  | HTTP   |
  | protocol_port | 80 |
  | sni_container_refs||
  | tenant_id | ce1d087209c64df4b7e8007dc35def22   |
  +---++
  root@runner:~#

  The problem arises when we try to configure the listener and pool back to
  back without any delay: the pool create fails, saying the listener is not
  ready.

  The workaround is to add a 3-second delay between listener and pool
  creation.
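
  Rather than a fixed sleep, a more robust workaround is to poll the parent
  load balancer until it leaves PENDING_* (a sketch; assumes the neutron
  client supports cliff's -f value -c output options):

  until [ "$(neutron lbaas-loadbalancer-show test-lb -f value -c provisioning_status)" = "ACTIVE" ]; do
      sleep 1
  done
  neutron lbaas-pool-create --name test-lb-pool-http --lb-algorithm ROUND_ROBIN \
      --listener test-lb-http --protocol HTTP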

  Logs:

  root@runner:~# neutron lbaas-loadbalancer-create --name test-lb vn-subnet; 
neutron lbaas-listener-create --name test-lb-http --loadbalancer test-lb 
--protocol HTTP --protocol-port 80; neutron lbaas-pool-create --name 
test-lb-pool-http  --lb-algorithm ROUND_ROBIN --listener test-lb-http  
--protocol HTTP
  Created a new loadbalancer:
  +-+--+
  | Field   | Value|
  +-+--+
  | admin_state_up  | True |
  | description |  |
  | id  | 3ed2ff4a-4d87-46da-8e5b-265364dd6861 |
  | listeners   |  |
  | name| test-lb  |
  | operating_status| OFFLINE  |
  | provider| haproxy  |
  | provisioning_status | PENDING_CREATE   |
  | tenant_id   | ce1d087209c64df4b7e8007dc35def22 |
  | vip_address | 20.0.0.62|
  | vip_port_id | 4c33365e-64b9-428f-bc0b-bce6c08c9b20 |
  | vip_subnet_id   | 63cbeccd-6887-4dda-b4d2-b7503bce870a |
  +-+--+
  Created a new listener:
  +---++
  | Field | Value  |
  +---++
  | admin_state_up| True   |
  | connection_limit  | -1 |
  | default_pool_id   ||
  | default_tls_container_ref ||
  | description   ||
  | id| 90260465-934a-44a4-a289-208e5af74cf5   |
  | loadbalancers | {"id": "3ed2ff4a-4d87-46da-8e5b-265364dd6861"} |
  | name  | test-lb-http   |
  | protocol  | HTTP   |
  | protocol_port | 80 |
  | sni_container_refs||
  | tenant_id | ce1d087209c64df4b7e8007dc35def22   |
  +---++
  Invalid state PENDING_UPDATE of loadbalancer resource 
3ed2ff4a-4d87-46da-8e5b-265364dd6861
  root@runner:~#

  
  Neutron:

  : 

[Yahoo-eng-team] [Bug 1464241] Re: Lbaasv2 command logs not seen

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Importance: Undecided => High

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464241

Title:
  Lbaasv2 command logs not seen

Status in octavia:
  New

Bug description:
  I am testing incorrect and correct lbaasv2 deletion. Even if a command
  fails, nothing is written to /var/log/neutron/lbaasv2-agent.log.

  However, the lbaas (not lbaasv2) log is being updated with information and
  contains errors.

  2015-06-11 03:03:34.352 21274 WARNING neutron.openstack.common.loopingcall 
[-] task > run outlasted interval by 50.10 sec
  2015-06-11 03:04:34.366 21274 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager [-] Unable to retrieve 
ready devices
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager Traceback (most recent 
call last):
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/agent/agent_manager.py",
 line 152, in sync_state
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager ready_instances = 
set(self.plugin_rpc.get_ready_devices())
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/agent/agent_api.py",
 line 36, in get_ready_devices
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager return 
cctxt.call(self.context, 'get_ready_devices', host=self.host)
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 156, in 
call
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager retry=self.retry)
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 90, in 
_send
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager timeout=timeout, 
retry=retry)
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
350, in send
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager retry=retry)
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
339, in _send
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager result = 
self._waiter.wait(msg_id, timeout)
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
243, in wait
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager message = 
self.waiters.get(msg_id, timeout=timeout)
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
149, in get
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager 'to message ID %s' 
% msg_id)
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager MessagingTimeout: Timed 
out waiting for a reply to message ID 73130a6bb5444f259dbf810cfb1003b3
  2015-06-11 03:04:34.366 21274 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager

  
  Configure an lbaasv2 setup: loadbalancer, listener, member, pool, healthmonitor.

  Then check the lbaasv2 and lbaas logs:
   /var/log/neutron/lbaasv2-agent.log
   /var/log/neutron/lbaasv-agent.log


  lbaasv2
  kilo
  rhel7.1 
  openstack-neutron-lbaas-2015.1.0-3.el7ost.noarch
  python-neutron-lbaas-2015.1.0-3.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1464241/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1622946] Re: lbaas with haproxy backend creates the lbaas namespace without the members' subnet

2016-12-05 Thread Michael Johnson
Can you provide your lbaas agent logs?

** Changed in: neutron
   Importance: Undecided => High

** Project changed: neutron => octavia

** Changed in: octavia
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622946

Title:
  lbaas with haproxy backend creates the lbaas namespace without the
  members' subnet

Status in octavia:
  Incomplete

Bug description:
  When creating a new loadbalancer with haproxy, and the VIP and member
  subnets are different, the created lbaas namespace contains only the
  VIP subnet, so the members are unreachable.

  E.g.:
  neutron lbaas-loadbalancer-show 8e1c193a-ab63-4a1a-bc39-c663f2f9a0ee
  .
  .
  .
  | vip_subnet_id   | 23655977-d29f-4917-a519-de27951fde89   |

  neutron lbaas-member-list d3ebda43-53f8-4118-b4db-999c021c9680

  | 4fe79d5e-a517-4e4f-a145-3c80b414be08 |  | 192.168.168.8 |
  22 |  1 | 0a4a1f3e-43cb-4f9c-9d51-c71f0c231a3e | True   |

  Note that the two subnets are different.
  The created haproxy config is OK:
  .
  .
  .
  frontend 6821edd8-54ab-4fba-90e5-94831fcd0ec0
  option tcplog
  bind 10.97.37.1:22
  mode tcp

  backend d3ebda43-53f8-4118-b4db-999c021c9680
  mode tcp
  balance source
  timeout check 20
  server 4fe79d5e-a517-4e4f-a145-3c80b414be08 192.168.168.8:22 weight 1 
check inter 10s fall 3

  But the namespace is not:
  ip netns exec qlbaas-8e1c193a-ab63-4a1a-bc39-c663f2f9a0ee ip addr
  1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host 
 valid_lft forever preferred_lft forever
  2: ns-f56b5f8d-ef@if11:  mtu 1500 qdisc 
noqueue state UP group default qlen 1000
  link/ether fa:16:3e:82:9d:9a brd ff:ff:ff:ff:ff:ff link-netnsid 0
  inet 10.97.37.1/25 brd 10.97.37.127 scope global ns-f56b5f8d-ef
 valid_lft forever preferred_lft forever
  inet6 fe80::f816:3eff:fe82:9d9a/64 scope link 
 valid_lft forever preferred_lft forever

  
  The member subnet is missing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1622946/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624097] Re: Neutron LBaaS CLI quota show includes l7policy and doesn't include member

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Importance: Undecided => Medium

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624097

Title:
  Neutron LBaaS CLI quota show includes l7policy and doesn't include
  member

Status in octavia:
  In Progress
Status in python-openstackclient:
  Fix Released

Bug description:
  When running devstack and executing "neutron quota-show" it lists an
  l7 policy quota, but does not show a member quota.  However, the help
  message for "neutron quota-update" includes a member quota, but not an
  l7 policy quota.  The show command should not have the l7 policy
  quota, but should have the member quota.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1624097/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1632054] Re: Heat engine doesn't detect lbaas listener failures

2016-12-05 Thread Michael Johnson
The neutron project with the lbaas tag was for neutron-lbaas, but now that
we have merged the projects, I am removing neutron, as it is all under the
octavia project now.

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1632054

Title:
  Heat engine doesn't detect lbaas listener failures

Status in heat:
  Triaged
Status in octavia:
  Triaged

Bug description:
  Please refer to the mailing list for comments from other developers:
  https://openstack.nimeyo.com/97427/openstack-neutron-octavia-doesnt-detect-listener-failures

  I am trying to use heat to launch lb resources with Octavia as backend. The
  template I used is from
  
https://github.com/openstack/heat-templates/blob/master/hot/lbaasv2/lb_group.yaml
  .

  Following are a few observations:

  1. Even though the Listener was created with ERROR status, heat will still go
  ahead and mark it Creation Complete. In the heat code, it only checks
  whether the root Loadbalancer status changes from PENDING_UPDATE to ACTIVE,
  and the Loadbalancer status will be changed to ACTIVE anyway, regardless of
  the Listener's status.

  2. As the heat engine doesn't know about the Listener's creation failure, it
  will continue to create the Pool/Member/Healthmonitor on top of a Listener
  which doesn't actually exist. This causes a few undefined behaviors. As a
  result, those LBaaS resources in ERROR state cannot be cleaned up with
  either the normal neutron or heat API.

  3. The bug is introduced here:
  https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/neutron/lbaas/listener.py#L188.
  It only checks the provisioning status of the root loadbalancer.
  However, the listener itself has its own provisioning status, which may
  go into ERROR (see the sketch after this list).

  4. The same scenario applies not only to the listener but also to the pool,
  member, healthmonitor, etc.: basically every LBaaS resource except the
  loadbalancer.
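
  A hedged sketch of the extra check item 3 points at: walk the LBaaS v2
  status tree (structure as shown by 'neutron lbaas-loadbalancer-status'
  elsewhere in this archive) and treat ERROR on any descendant as a failure,
  instead of looking only at the root loadbalancer. How the tree is fetched
  is left out, and the names are illustrative, not the actual heat code.

    # Sketch: True only when every node in the status tree is ACTIVE;
    # raises as soon as any node reports ERROR.
    def lbaas_tree_is_active(status_tree):
        def walk(node):
            status = node.get('provisioning_status')
            if status == 'ERROR':
                raise RuntimeError('%s is in ERROR' % node.get('id', 'resource'))
            yield status
            for key in ('listeners', 'pools', 'members', 'l7policies'):
                for child in node.get(key, []):
                    for nested in walk(child):
                        yield nested
            monitor = node.get('healthmonitor')
            if monitor:
                for nested in walk(monitor):
                    yield nested
        return all(status in (None, 'ACTIVE')
                   for status in walk(status_tree['loadbalancer']))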

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1632054/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585250] Re: Statuses not shown for non-"loadbalancer" LBaaS objects on CLI

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Importance: Undecided => Medium

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585250

Title:
  Statuses not shown for non-"loadbalancer" LBaaS objects on CLI

Status in octavia:
  In Progress

Bug description:
  There is no indication on the CLI that creating an LBaaSv2 object
  (other than a "loadbalancer") has failed...

  stack@openstack:~$ neutron lbaas-listener-create --name MyListener1 
--loadbalancer MyLB1 --protocol HTTP --protocol-port 80
  Created a new listener:
  +---++
  | Field | Value  |
  +---++
  | admin_state_up| True   |
  | connection_limit  | -1 |
  | default_pool_id   ||
  | default_tls_container_ref ||
  | description   ||
  | id| 5ca664d6-3a3a-4369-821d-e36c87ff5dc2   |
  | loadbalancers | {"id": "549982d9-7f52-48ac-a4fe-a905c872d71d"} |
  | name  | MyListener1|
  | protocol  | HTTP   |
  | protocol_port | 80 |
  | sni_container_refs||
  | tenant_id | 22000d943c5341cd88d27bd39a4ee9cd   |
  +---++

  There is no indication of any issue here, and lbaas-listener-show
  produces the same output.  However, in reality, the listener is in an
  error state...

  mysql> select * from lbaas_listeners;
  
+--+--+-+-+--+---+--+--+-++-+--+--+
  | tenant_id| id   | 
name| description | protocol | protocol_port | connection_limit | 
loadbalancer_id  | default_pool_id | admin_state_up | 
provisioning_status | operating_status | default_tls_container_id |
  
+--+--+-+-+--+---+--+--+-++-+--+--+
  | 22000d943c5341cd88d27bd39a4ee9cd | 5ca664d6-3a3a-4369-821d-e36c87ff5dc2 | 
MyListener1 | | HTTP |80 |   -1 | 
549982d9-7f52-48ac-a4fe-a905c872d71d | NULL|  1 | ERROR 
  | OFFLINE  | NULL |
  
+--+--+-+-+--+---+--+--+-++-+--+--+
  1 row in set (0.00 sec)

  
  How is a CLI user who doesn't have access to the Neutron DB supposed to know 
an error has occurred (other than "it doesn't work", obviously)?
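
  One option available today is the status-tree call (the same command quoted
  later in this archive), which does expose the child provisioning_status
  without database access:

    neutron lbaas-loadbalancer-status MyLB1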

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1585250/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1618559] Re: LBaaS v2 healthmonitor wrong status detection

2016-12-05 Thread Michael Johnson
Are you still having this issue?  I cannot reproduce it on my devstack.

If you can reproduce this, can you provide the commands you used to
setup the load balancer (all of the steps), the output of neutron net-
list, the output of neutron subnet-list, and the output of "sudo ip
netns"?


** Changed in: neutron
   Status: New => Incomplete

** Changed in: neutron
   Importance: Undecided => High

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1618559

Title:
  LBaaS v2 healthmonitor wrong status detection

Status in octavia:
  Incomplete

Bug description:
  Summary:
  After enabling the health monitor, the loadbalancer returns
  HTTP/1.0 503 Service Unavailable for every request.

  I have a loadbalancer with VIP 10.123.21.15, an HTTP listener, a pool, and a
  member with IP 10.123.21.12.

  I check the status of the web server with:
  curl -I -X GET http://10.123.21.15/owncloud/status.php 
  ...
  HTTP/1.1 200 OK

  But when I add healthmonitor:
  neutron lbaas-healthmonitor-create \
--delay 5 \
--max-retries 2 \
--timeout 10 \
--type HTTP \
--url-path /owncloud/status.php \
--pool owncloud-app-lb-http-pool

  neutron lbaas-healthmonitor-show 
  +++
  | Field  | Value  |
  +++
  | admin_state_up | True   |
  | delay  | 5  |
  | expected_codes | 200|
  | http_method| GET|
  | id | cf3cc795-ab1f-44c7-a521-799281e1ff64   |
  | max_retries| 2  |
  | name   ||
  | pools  | {"id": "edcd43a2-41ad-4dd7-809d-10d3e45a08a7"} |
  | tenant_id  | b5d8bbe7742540c2b9b2e1b324ea854e   |
  | timeout| 10 |
  | type   | HTTP   |
  | url_path   | /owncloud/status.php   |
  +++

  I expect:
  curl -I -X GET http://10.123.21.15/owncloud/status.php 
  ...
  HTTP/1.1 200 OK

  But the result is:
  curl -I -X GET http://10.123.21.15/owncloud/status.php
  ...
  HTTP/1.0 503 Service Unavailable

  Direct request to member:
  curl -I -X GET http://10.123.21.12/owncloud/status.php 
  ...
  HTTP/1.1 200 OK

  The neutron logs contain no ERRORs.
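
  One common cause (an assumption on my part, not confirmed by this report) is
  the health check itself failing: haproxy probes /owncloud/status.php with an
  HTTP/1.0 request and, if the vhost rejects it or answers with an unexpected
  code, marks the member DOWN, so the frontend returns 503. A hedged sketch of
  the check stanza the agent typically renders (modeled on the haproxy config
  quoted earlier in this archive, not taken from this host):

  backend owncloud-app-lb-http-pool
      mode http
      balance roundrobin
      option httpchk GET /owncloud/status.php
      http-check expect rstatus 200
      server member-1 10.123.21.12:80 weight 1 check inter 5s fall 2

  Re-running the same request from inside the qlbaas namespace (ip netns exec
  qlbaas-<lb-id> curl -I http://10.123.21.12/owncloud/status.php) usually shows
  whether the member or the check is at fault.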

  Some details about the configuration:

  I have 3 controllers. Installed by Fuel with l3 population and DVR enabled.
  lbaas_agent.ini
  interface_driver=openvswitch

  neutron lbaas-loadbalancer-status owncloud-app-lb
  {
  "loadbalancer": {
  "name": "owncloud-app-lb", 
  "provisioning_status": "ACTIVE", 
  "listeners": [
  {
  "name": "owncloud-app-lb-http", 
  "provisioning_status": "ACTIVE", 
  "pools": [
  {
  "name": "owncloud-app-lb-http-pool", 
  "provisioning_status": "ACTIVE", 
  "healthmonitor": {
  "provisioning_status": "ACTIVE", 
  "type": "HTTP", 
  "id": "cf3cc795-ab1f-44c7-a521-799281e1ff64", 
  "name": ""
  }, 
  "members": [
  {
  "name": "", 
  "provisioning_status": "ACTIVE", 
  "address": "10.123.21.12", 
  "protocol_port": 80, 
  "id": "8a588ed1-8818-44b2-80df-90debee59720", 
  "operating_status": "ONLINE"
  }
  ], 
  "id": "edcd43a2-41ad-4dd7-809d-10d3e45a08a7", 
  "operating_status": "ONLINE"
  }
  ], 
  "l7policies": [], 
  "id": "7521308a-15d1-4898-87c8-8f1ed4330b6c", 
  "operating_status": "ONLINE"
  }
  ], 
  "pools": [
  {
  "name": "owncloud-app-lb-http-pool", 
  "provisioning_status": "ACTIVE", 
  "healthmonitor": {
  "provisioning_status": "ACTIVE", 
  "type": "HTTP", 
  "id": "cf3cc795-ab1f-44c7-a521-799281e1ff64", 
  "name": ""
  

[Yahoo-eng-team] [Bug 1627393] Re: Neuton-LBaaS and Octavia out of synch if TLS container secret ACLs not set up correctly

2016-12-05 Thread Michael Johnson
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1627393

Title:
  Neuton-LBaaS and Octavia out of synch if TLS container secret ACLs not
  set up correctly

Status in octavia:
  New

Bug description:
  I'm hoping this is something that will go away with the neutron-lbaas
  and Octavia merge.

  Create a self-signed certificate like so:

  openssl genrsa -des3 -out self-signed_encrypted.key 2048
  openssl rsa -in self-signed_encrypted.key -out self-signed.key
  openssl req -new -x509 -days 365 -key self-signed.key -out self-signed.crt

  As the admin user, grant the demo user the ability to create cloud
  resources on the demo project:

  openstack role add --project demo --user demo creator

  Now, become the demo user:

  source ~/devstack/openrc demo demo

  As the demo user, upload the self-signed certificate to barbican:

  openstack secret store --name='test_cert' --payload-content-type='text/plain' 
--payload="$(cat self-signed.crt)"
  openstack secret store --name='test_key' --payload-content-type='text/plain' 
--payload="$(cat self-signed.key)"
  openstack secret container create --name='test_tls_container' 
--type='certificate' --secret="certificate=$(openstack secret list | awk '/ 
test_cert / {print $2}')" --secret="private_key=$(openstack secret list | awk 
'/ test_key / {print $2}')"

  As the demo user, grant access to the above secrets BUT NOT THE
  CONTAINER to the 'admin' user. In my test, the admin user has ID:
  02c0db7c648c4714971219ae81817ba7

  openstack acl user add -u 02c0db7c648c4714971219ae81817ba7 $(openstack secret 
list | awk '/ test_cert / {print $2}')
  openstack acl user add -u 02c0db7c648c4714971219ae81817ba7 $(openstack secret 
list | awk '/ test_key / {print $2}')

  Now, as the demo user, attempt to deploy a neutron-lbaas listener
  using the secret container above:

  neutron lbaas-loadbalancer-create --name lb1 private-subnet
  neutron lbaas-listener-create --loadbalancer lb1 --protocol-port 443 
--protocol TERMINATED_HTTPS --name listener1 
--default-tls-container=$(openstack secret container list | awk '/ 
test_tls_container / {print $2}')

  The neutron-lbaas command succeeds, but the Octavia deployment fails
  since it can't access the secret container.

  This is fixed if you remember to grant access to the TLS container to
  the admin user like so:

  openstack acl user add -u 02c0db7c648c4714971219ae81817ba7 $(openstack
  secret container list | awk '/ test_tls_container / {print $2}')

  In any case, neutron-lbaas and Octavia should have similar failure
  scenarios if the permissions aren't set up exactly right.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1627393/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624145] Re: Octavia should ignore project_id on API create commands (except load_balancer)

2016-12-05 Thread Michael Johnson
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624145

Title:
  Octavia should ignore project_id on API create commands (except
  load_balancer)

Status in octavia:
  New

Bug description:
  Right now, the Octavia API allows the specification of the project_id
  on the create commands for the following objects:

  listener
  health_monitor
  member
  pool

  However, all of these objects should be inheriting their project_id
  from the ancestor load_balancer object. Allowing the specification of
  project_id when we create these objects could lead to a situation
  where the descendant object's project_id is different from said
  object's ancestor load_balancer project_id.

  We don't want to break our API's backward compatibility for at least
  two release cycles, so for now we should simply ignore this parameter
  if specified (and get it from the load_balancer object in the database
  directly), and insert TODO notes in the API code to remove the ability
  to specify project_id after a certain openstack release.
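
  A minimal sketch of that behaviour (names are illustrative, not the actual
  Octavia API code):

    # Sketch: drop any client-supplied project_id on child-object create and
    # inherit it from the parent load balancer looked up in the database.
    def normalize_child_project_id(child_body, parent_load_balancer):
        body = dict(child_body)
        body.pop('project_id', None)   # ignored for now; TODO: reject in a later release
        body['project_id'] = parent_load_balancer['project_id']
        return body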

  We should also update the Octavia driver in neutron_lbaas to stop
  specifying the project_id on descendant object creation.

  This bug is related to https://bugs.launchpad.net/octavia/+bug/1624113

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1624145/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596162] Re: lbaasv2:Member can be created with the same ip as vip in loadbalancer

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Importance: Undecided => Low

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1596162

Title:
  lbaasv2:Member can be created with the same ip as vip in loadbalancer

Status in octavia:
  In Progress

Bug description:
  Create a loadbalancer:
  [root@CrtlNode247 ~(keystone_admin)]# neutron lbaas-loadbalancer-show 
ebe0a748-7797-44fa-be09-1890ca2f5c1f
  +-++
  | Field   | Value  |
  +-++
  | admin_state_up  | True   |
  | description ||
  | id  | ebe0a748-7797-44fa-be09-1890ca2f5c1f   |
  | listeners   | {"id": "3cfe5262-7e25-4433-a342-93eb118049f9"} |
  | | {"id": "a7c014d4-8c57-43ee-aeab-539847a37f43"} |
  | | {"id": "794efa5b-1e5d-4182-857a-6d8415973007"} |
  | | {"id": "6b64350e-335f-4aa5-b2dd-e86adcdbc0b3"} |
  | name| lb1|
  | operating_status| ONLINE |
  | provider| zxveglb|
  | provisioning_status | ACTIVE |
  | tenant_id   | 6403670bcb0f45cba4cb732a9a936da4   |
  | vip_address | 193.168.1.200  |
  | vip_port_id | f401e0ae-2537-4018-9252-742c16fc22ef   |
  | vip_subnet_id   | 73bee51e-7ea3-44ea-8d98-cf778cd171e0   |
  +-++

  The VIP address is 193.168.1.200.
  Then create a listener and pool.
  Then create a member whose IP is set to the same 193.168.1.200:
  [root@CrtlNode247 ~(keystone_admin)]# neutron lbaas-member-create --subnet 
73bee51e-7ea3-44ea-8d98-cf778cd171e0 --address 193.168.1.200 --protocol-port 80 
pool1
  Created a new member:
  ++--+
  | Field  | Value|
  ++--+
  | address| 193.168.1.200|
  | admin_state_up | True |
  | id | e377f7a5-e2d8-493d-ad61-c2ab25ed7c0b |
  | protocol_port  | 80   |
  | subnet_id  | 73bee51e-7ea3-44ea-8d98-cf778cd171e0 |
  | tenant_id  | 6403670bcb0f45cba4cb732a9a936da4 |
  | weight | 1|
  ++--+
  The command succeeds, even though the member address duplicates the VIP.
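
  A hedged sketch of the validation this report seems to ask for (illustrative
  only, not the actual neutron-lbaas code):

    # Sketch: refuse a member whose address equals the parent LB's VIP address.
    def validate_member_address(member_address, loadbalancer):
        if member_address == loadbalancer['vip_address']:
            raise ValueError('member address %s matches the load balancer VIP; '
                             'refusing to create the member' % member_address)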

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1596162/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583955] Re: provisioning_status of loadbalancer is always PENDING_UPDATE when following these steps

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Importance: Undecided => Medium

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583955

Title:
  provisioning_status of loadbalancer is always PENDING_UPDATE  when
  following these steps

Status in octavia:
  New

Bug description:
  The issue is in the kilo branch.

  following these steps:
  1. update admin_state_up of loadbalancer to False
  2. restart lbaas agent
  3. update admin_state_up of loadbalancer to True

  then the provisioning_status of loadbalancer is always PENDING_UPDATE

  The agent log shows:
  2013-11-20 12:33:54.358 12601 ERROR oslo_messaging.rpc.dispatcher 
[req-add12f1f-f693-4f0b-9eae-5204d8a50a3f ] Exception during message handling: 
An unknown exception occurred.
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, 
in _do_dispatch
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/agent/agent_manager.py", line 
282, in update_loadbalancer
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher driver 
= self._get_driver(loadbalancer.id)
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/agent/agent_manager.py", line 
168, in _get_driver
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher raise 
DeviceNotFoundOnAgent(loadbalancer_id=loadbalancer_id)
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher 
DeviceNotFoundOnAgent: An unknown exception occurred.
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1583955/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1584209] Re: Neutron-LBaaS v2: PortID should be returned with Loadbalancer resource (API)

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Status: In Progress => Incomplete

** Changed in: neutron
   Importance: Undecided => Wishlist

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1584209

Title:
  Neutron-LBaaS v2: PortID should be returned with Loadbalancer resource
  (API)

Status in octavia:
  Incomplete

Bug description:
  When creating a new loadbalancer with LBaaS v2 (Octavia provider), I would
  like to create a floating IP attached to the loadbalancer's VIP port.
  Currently I have to look up the port ID based on the IP address associated
  with the loadbalancer.  It would greatly simplify the workflow if the port
  ID were returned by the loadbalancer API, similar to the VIP API in LBaaS v1.
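
  Until then, a hedged sketch of the current workaround (placeholders in angle
  brackets; commands mirror the CLI used throughout this archive):

    # Look up the VIP port by its fixed IP, then attach a floating IP to it.
    VIP_PORT=$(neutron port-list --fixed-ips ip_address=<VIP_ADDRESS> -f value -c id)
    neutron floatingip-create <EXTERNAL_NET>
    neutron floatingip-associate <FLOATINGIP_ID> "$VIP_PORT"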

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1584209/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1551282] Re: devstack launches extra instance of lbaas agent

2016-12-05 Thread Michael Johnson
This was finished here: https://review.openstack.org/#/c/358255/

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551282

Title:
  devstack launches extra instance of lbaas agent

Status in neutron:
  Fix Released

Bug description:
  When using the lbaas devstack plugin, two lbaas agents are launched:
  one by devstack neutron-legacy, and another by the neutron-lbaas devstack plugin.

  enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas
  ENABLED_SERVICES+=,q-lbaas

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551282/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1552119] Re: NSxv LBaaS stats error

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1552119

Title:
  NSxv LBaaS stats error

Status in neutron:
  Fix Released

Bug description:
  - OpenStack Kilo (2015.1.1-1)
  - NSXv 6.2.1

  I see the following errors in neutron.log after enabling LBaaS:

  
  2016-03-02 07:36:19.145 27350 INFO neutron.wsgi 
[req-28324239-c925-4602-91c3-24378466d8ae ] 192.168.0.2 - - [02/Mar/2016 
07:36:19] "GET /v2.0/lb/pools/ba3c7e8a-81bf-4459-ad85-224b9f92594f/stats.json 
HTTP/1.1" 500 378 2.441363
  2016-03-02 07:36:19.176 27349 INFO neutron.wsgi [-] (27349) accepted 
('192.168.0.2', 54704)
  2016-03-02 07:36:21.740 27349 ERROR neutron.api.v2.resource 
[req-94a3960b-b01f-4665-a733-1621d7f7cbfa ] stats failed
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 83, in 
resource
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/oslo_db/api.py", line 131, in wrapper
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource return 
f(*args, **kwargs)
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 209, in 
_handle_action
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource return 
getattr(self._plugin, name)(*arg_list, **kwargs)
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/plugin.py",
 line 336, in stats
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource stats_data = 
driver.stats(context, pool_id)
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/drivers/vmware/edge_driver.py",
 line 199, in stats
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource return 
self._nsxv_driver.stats(context, pool_id, pool_mapping)
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/vmware_nsx/neutron/plugins/vmware/vshield/edge_loadbalancer_driver.py",
 line 786, in stats
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource pools_stats = 
lb_stats.get('pool', [])
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource AttributeError: 
'tuple' object has no attribute 'get'
  2016-03-02 07:36:21.740 27349 TRACE neutron.api.v2.resource
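
  The traceback suggests the vshield client handed back a (headers, body)
  tuple where the driver expected only the body. A hedged sketch of the kind
  of guard that avoids the AttributeError (illustrative, not the actual
  vmware_nsx fix):

    # Sketch: tolerate both "body" and "(headers, body)" return shapes.
    def extract_pool_stats(lb_stats):
        if isinstance(lb_stats, tuple):
            _headers, lb_stats = lb_stats
        return lb_stats.get('pool', [])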

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1552119/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1468457] Re: Invalid Tempest tests cause A10 CI to fail

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Importance: Undecided => Medium

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1468457

Title:
  Invalid Tempest tests cause A10 CI to fail

Status in octavia:
  New

Bug description:
  The following tests will not pass in A10's CI due to what appear to be 
incorrect tests.
  
neutron_lbaas.tests.tempest.v2.api.test_pools_admin.TestPools.test_create_pool_for_another_tenant[smoke]
  
neutron_lbaas.tests.tempest.v2.api.test_pools_admin.TestPools.test_create_pool_missing_tenant_id_for_admin[smoke]
  
neutron_lbaas.tests.tempest.v2.api.test_pools_admin.TestPools.test_create_pool_missing_tenant_id_for_other_tenant[smoke]
  
neutron_lbaas.tests.tempest.v2.api.test_pools_admin.TestPools.test_create_pool_using_empty_tenant_field[smoke]

  --
  I'm creating this bug so I have one to reference when I @skip the tests per 
dougwig.

  The empty tenant ID tests need to be modified to expect an error
  condition, but this is not possible as Neutron's request handling
  fills in missing tenant IDs with the tenant ID of the logged in user.
  This is an error condition and should be handled as such.  Fixing it
  in the request handling is going to require fixes in a lot more places
  in Neutron, I believe.  I'll look for other similar tests that would
  expose such functionality.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1468457/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464229] Re: LbaasV2 Health monitor status

2016-12-05 Thread Michael Johnson
Currently you can view the health status by using the load balancer
status API/command.

neutron lbaas-loadbalancer-status lb1

I am setting this to wishlist as I think there is a valid point that the
show commands should include the operating status.

** Changed in: neutron
   Importance: Undecided => Wishlist

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464229

Title:
  LbaasV2 Health monitor status

Status in octavia:
  In Progress

Bug description:
  lbaasv2 healthmonitor:

  We have no way to see if an LBaaSv2 health monitor succeeded or failed.
  Additionally, we have no way to see if a VM in an lbaasv2 pool is up or down
  (from an LBaaSv2 point of view).

  neutron lbaas-pool-show should show the HealthMonitor status for VMs.

  kilo
  rhel7.1
  python-neutron-2015.1.0-1.el7ost.noarch
  openstack-neutron-openvswitch-2015.1.0-1.el7ost.noarch
  python-neutronclient-2.4.0-1.el7ost.noarch
  openstack-neutron-2015.1.0-1.el7ost.noarch
  openstack-neutron-ml2-2015.1.0-1.el7ost.noarch
  openstack-neutron-lbaas-2015.1.0-3.el7ost.noarch
  openstack-neutron-fwaas-2015.1.0-3.el7ost.noarch
  openstack-neutron-common-2015.1.0-1.el7ost.noarch
  python-neutron-lbaas-2015.1.0-3.el7ost.noarch
  python-neutron-fwaas-2015.1.0-3.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1464229/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495430] Re: delete lbaasv2 can't delete lbaas namespace automatically.

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Importance: Undecided => High

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495430

Title:
  delete lbaasv2 can't delete lbaas namespace automatically.

Status in octavia:
  In Progress

Bug description:
  I tried LBaaS v2 in my environment and found lots of orphan lbaas namespaces.
  Looking at the code, the lbaas instance is undeployed when the listener is
  deleted: everything is removed except the namespace.
  However, when the loadbalancer itself is deleted, the namespace is removed
  automatically.
  The behavior is not consistent; the namespace should also be deleted when the
  listener is deleted.
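
  Until the agent does this itself, a hedged sketch of a manual cleanup (only
  for namespaces whose load balancer is already gone; the qlbaas- prefix
  matches the namespaces shown elsewhere in this archive):

    ip netns list | grep '^qlbaas-'      # candidate orphans
    ip netns pids qlbaas-<LB_ID>         # should print nothing (no haproxy left)
    ip netns delete qlbaas-<LB_ID>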

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1495430/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1498130] Re: LBaaSv2: Can't delete the Load balancer and also dependant entities if the load balancer provisioning_status is in PENDING_UPDATE

2016-12-05 Thread Michael Johnson
Marking this as invalid, as it is by design that actions are not allowed on load
balancers in PENDING_* states.
PENDING_* means an action against that load balancer (DELETE or UPDATE) is
already in progress.

As for load balancers getting stuck in a PENDING_* state, many bugs have been
cleaned up for that situation.  If you find a situation that leads to a load
balancer stuck in a PENDING_* state, please report that as a new bug.
Operators can clear load balancers stuck in PENDING_* by manually updating the
database record for the resource, as sketched below.
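
For reference, a hedged example of that kind of manual update (table and column
names assumed from the schema excerpts elsewhere in this archive; back up the
database and adapt the UUID before running anything like this):

  -- Sketch: release a load balancer stuck in a PENDING_* state.
  UPDATE lbaas_loadbalancers
     SET provisioning_status = 'ERROR'
   WHERE id = 'ab8f76ec-236f-4f4c-b28e-cd7bfee48cd2'
     AND provisioning_status IN ('PENDING_CREATE', 'PENDING_UPDATE', 'PENDING_DELETE');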

** Project changed: neutron => octavia

** Changed in: octavia
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498130

Title:
  LBaaSv2: Can't  delete the Load balancer and also dependant entities
  if the load balancer provisioning_status is  in PENDING_UPDATE

Status in octavia:
  Invalid

Bug description:
  If the load balancer provisioning_status is PENDING_UPDATE, you cannot
  delete the loadbalancer or its dependent entities such as the listener
  or pool.

   neutron -v lbaas-listener-delete 6f9fdf3a-4578-4e3e-8b0b-f2699608b7e6
  DEBUG: keystoneclient.session REQ: curl -g -i -X GET 
http://9.197.47.200:5000/v2.0 -H "Accept: application/json" -H "User-Agent: 
python-keystoneclient"
  DEBUG: keystoneclient.session RESP: [200] content-length: 338 vary: 
X-Auth-Token connection: keep-alive date: Mon, 21 Sep 2015 18:35:55 GMT 
content-type: application/json x-openstack-request-id: 
req-952f21b0-81bf-4e0f-a6c8-b3fc13ac4cd2
  RESP BODY: {"version": {"status": "stable", "updated": 
"2014-04-17T00:00:00Z", "media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": 
[{"href": "http://9.197.47.200:5000/v2.0/;, "rel": "self"}, {"href": 
"http://docs.openstack.org/;, "type": "text/html", "rel": "describedby"}]}}

  DEBUG: neutronclient.neutron.v2_0.lb.v2.listener.DeleteListener 
run(Namespace(id=u'6f9fdf3a-4578-4e3e-8b0b-f2699608b7e6', 
request_format='json'))
  DEBUG: keystoneclient.auth.identity.v2 Making authentication request to 
http://9.197.47.200:5000/v2.0/tokens
  DEBUG: keystoneclient.session REQ: curl -g -i -X GET 
http://9.197.47.200:9696/v2.0/lbaas/listeners.json?fields=id=6f9fdf3a-4578-4e3e-8b0b-f2699608b7e6
 -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}9ea944020f06fa79f4b6db851dbd9e69aca65d58"
  DEBUG: keystoneclient.session RESP: [200] date: Mon, 21 Sep 2015 18:35:56 GMT 
connection: keep-alive content-type: application/json; charset=UTF-8 
content-length: 346 x-openstack-request-id: 
req-fd7ee22b-f776-4ebd-94c6-7548a5aff362
  RESP BODY: {"listeners": [{"protocol_port": 100, "protocol": "TCP", 
"description": "", "sni_container_ids": [], "admin_state_up": true, 
"loadbalancers": [{"id": "ab8f76ec-236f-4f4c-b28e-cd7bfee48cd2"}], 
"default_tls_container_id": null, "connection_limit": 100, "default_pool_id": 
null, "id": "6f9fdf3a-4578-4e3e-8b0b-f2699608b7e6", "name": "listener100"}]}

  DEBUG: keystoneclient.session REQ: curl -g -i -X DELETE 
http://9.197.47.200:9696/v2.0/lbaas/listeners/6f9fdf3a-4578-4e3e-8b0b-f2699608b7e6.json
 -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}9ea944020f06fa79f4b6db851dbd9e69aca65d58"
  DEBUG: keystoneclient.session RESP:
  DEBUG: neutronclient.v2_0.client Error message: {"NeutronError": {"message": 
"Invalid state PENDING_UPDATE of loadbalancer resource 
ab8f76ec-236f-4f4c-b28e-cd7bfee48cd2", "type": "StateInvalid", "detail": ""}}
  ERROR: neutronclient.shell Invalid state PENDING_UPDATE of loadbalancer 
resource ab8f76ec-236f-4f4c-b28e-cd7bfee48cd2
  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/neutronclient/shell.py", line 766, 
in run_subcommand
  return run_command(cmd, cmd_parser, sub_argv)
File "/usr/lib/python2.7/site-packages/neutronclient/shell.py", line 101, 
in run_command
  return cmd.run(known_args)
File 
"/usr/lib/python2.7/site-packages/neutronclient/neutron/v2_0/__init__.py", line 
581, in run
  obj_deleter(_id)
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
102, in with_params
  ret = self.function(instance, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
932, in delete_listener
  return self.delete(self.lbaas_listener_path % (lbaas_listener))
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
289, in delete
  headers=headers, params=params)
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
270, in retry_request
  headers=headers, params=params)
File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 
211, in do_request
  self._handle_fault_response(status_code, replybody)
File 

[Yahoo-eng-team] [Bug 1440285] Re: When neutron lbaas agent is not running, 'neutron lb*’ commands must display an error instead of "404 Not Found"

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Importance: Undecided => Low

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1440285

Title:
  When neutron lbaas agent is not running, 'neutron lb*’ commands must
  display an error instead of "404 Not Found"

Status in octavia:
  Confirmed

Bug description:
  When neutron lbaas agent is not running, all the ‘neutron lb*’
  commands display "404 Not Found". This makes the user think that
  something is wrong with the lbaas agent (when it is not even
  running!).

  Instead, when neutron lbaas agent is not running, an error like
  “Neutron Load Balancer Agent not running” must be displayed so the
  user knows that the lbaas agent must be started first.

  The ‘ps’ command below shows that the neutron lbaas agent is not
  running.

  $ ps aux | grep lb
  $

  $ neutron lb-healthmonitor-list
  404 Not Found
  The resource could not be found.

  $ neutron lb-member-list
  404 Not Found
  The resource could not be found.

  $ neutron lb-pool-list
  404 Not Found
  The resource could not be found.

  $ neutron lb-vip-list
  404 Not Found
  The resource could not be found.

  $ neutron lbaas-healthmonitor-list
  404 Not Found
  The resource could not be found.

  $ neutron lbaas-listener-list
  404 Not Found
  The resource could not be found.

  $ neutron lbaas-loadbalancer-list
  404 Not Found
  The resource could not be found.

  $ neutron lbaas-pool-list
  404 Not Found
  The resource could not be found.

  $ neutron --version
  2.3.11

  =

  Below are the neutron verbose messages that show "404 Not Found".

  $ neutron -v lb-healthmonitor-list
  DEBUG: keystoneclient.session REQ: curl -g -i -X GET 
http://192.168.122.205:5000/v2.0/ -H "Accept: application/json" -H "User-Agent: 
python-keystoneclient"
  DEBUG: keystoneclient.session RESP: [200] content-length: 341 vary: 
X-Auth-Token keep-alive: timeout=5, max=100 server: Apache/2.4.7 (Ubuntu) 
connection: Keep-Alive date: Sat, 04 Apr 2015 04:37:54 GMT content-type: 
application/json x-openstack-request-id: 
req-95c6d1e1-02a7-4077-8ed2-0cb4f574a397
  RESP BODY: {"version": {"status": "stable", "updated": 
"2014-04-17T00:00:00Z", "media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": 
[{"href": "http://192.168.122.205:5000/v2.0/;, "rel": "self"}, {"href": 
"http://docs.openstack.org/;, "type": "text/html", "rel": "describedby"}]}}

  DEBUG: stevedore.extension found extension EntryPoint.parse('table = 
cliff.formatters.table:TableFormatter')
  DEBUG: stevedore.extension found extension EntryPoint.parse('csv = 
cliff.formatters.commaseparated:CSVLister')
  DEBUG: stevedore.extension found extension EntryPoint.parse('yaml = 
clifftablib.formatters:YamlFormatter')
  DEBUG: stevedore.extension found extension EntryPoint.parse('json = 
clifftablib.formatters:JsonFormatter')
  DEBUG: stevedore.extension found extension EntryPoint.parse('html = 
clifftablib.formatters:HtmlFormatter')
  DEBUG: neutronclient.neutron.v2_0.lb.healthmonitor.ListHealthMonitor 
get_data(Namespace(columns=[], fields=[], formatter='table', max_width=0, 
page_size=None, quote_mode='nonnumeric', request_format='json', 
show_details=False, sort_dir=[], sort_key=[]))
  DEBUG: keystoneclient.auth.identity.v2 Making authentication request to 
http://192.168.122.205:5000/v2.0/tokens
  DEBUG: keystoneclient.session REQ: curl -g -i -X GET 
http://192.168.122.205:9696/v2.0/lb/health_monitors.json -H "User-Agent: 
python-neutronclient" -H "Accept: application/json" -H "X-Auth-Token: 
{SHA1}23f2a54d0348e6bfc5364565ece4baf2e2148fa8"
  DEBUG: keystoneclient.session RESP:
  DEBUG: neutronclient.v2_0.client Error message: 404 Not Found

  The resource could not be found.

  ERROR: neutronclient.shell 404 Not Found

  The resource could not be found.

  Traceback (most recent call last):
    File "/usr/local/lib/python2.7/dist-packages/neutronclient/shell.py", line 
760, in run_subcommand
  return run_command(cmd, cmd_parser, sub_argv)
    File "/usr/local/lib/python2.7/dist-packages/neutronclient/shell.py", line 
100, in run_command
  return cmd.run(known_args)
    File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/common/command.py", line 
29, in run
  return super(OpenStackCommand, self).run(parsed_args)
    File "/usr/local/lib/python2.7/dist-packages/cliff/display.py", line 91, in 
run
  column_names, data = self.take_action(parsed_args)
    File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/common/command.py", line 
35, in take_action
  return self.get_data(parsed_args)
    File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/__init__.py",
 line 691, in get_data
  data = self.retrieve_list(parsed_args)
    File 

[Yahoo-eng-team] [Bug 1426248] Re: lbaas v2 member create should not require subnet_id

2016-12-05 Thread Michael Johnson
** Changed in: neutron
   Importance: Undecided => Wishlist

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1426248

Title:
  lbaas v2 member create should not require subnet_id

Status in octavia:
  Incomplete

Bug description:
  subnet_id on a member is currently required.  It should be optional;
  if not provided, it can be assumed the member can be reached by
  the load balancer (through the loadbalancer's VIP subnet).

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1426248/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603458] Re: Cannot Delete loadbalancers due to undeleteable pools

2016-12-05 Thread Michael Johnson
I agree with Brandon here, this is an lbaas-dashboard issue, so marking
the neutron side invalid.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1603458

Title:
  Cannot Delete loadbalancers due to undeleteable pools

Status in OpenStack Dashboard (Horizon):
  New
Status in neutron:
  Invalid
Status in Neutron LBaaS Dashboard:
  New

Bug description:
  To delete an LBaaSv2 loadbalancer, you must remove all the members
  from the pool, then delete the pool, then delete the listener, then
  you can delete the loadbalancer. Currently in Horizon you can do all
  of those except delete the pool. Since you can't delete the pool, you
  can't delete the listener, and therefore can't delete the
  loadbalancer.

  Either deleting the listener should trigger the pool delete too (since
  they're 1:1) or the Horizon Wizard for Listener should have a delete
  pool capability.
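
  For comparison, the CLI teardown order that works today (a sketch with
  placeholder names, using the lbaas v2 commands seen throughout this archive):

    # Delete children before parents: members, then pool, then listener, then LB.
    neutron lbaas-member-delete <MEMBER_ID> <POOL>
    neutron lbaas-pool-delete <POOL>
    neutron lbaas-listener-delete <LISTENER>
    neutron lbaas-loadbalancer-delete <LOADBALANCER>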

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1603458/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

