[Bug 1852221] Re: ovs-vswitchd needs to be forced to reconfigure after adding protocols to bridges

2020-05-31 Thread Edward Hope-Morley
** Description changed:

+ [Impact]
+ When the neutron native ovs driver creates bridges, it will sometimes
+ apply/modify the supported OpenFlow protocols on that bridge. The Open
+ vSwitch versions shipped with Train and Ussuri don't support this, which
+ results in OpenFlow protocol mismatches when neutron performs operations
+ on that bridge. The patch we are backporting here ensures that all
+ protocol versions are set on the bridge at the point of create/init.
+ 
+ [Test Case]
+  * deploy OpenStack Train
+  * go to a compute host and do: sudo ovs-ofctl -O OpenFlow14 dump-flows br-int
+  * ensure you do not see "negotiation failed" errors
+ 
+ [Regression Potential]
+  * this patch ensures that newly created Neutron OVS bridges have
+ OpenFlow 1.0, 1.3 and 1.4 set on them. Neutron already supports these
+ versions, so no change in behaviour is expected. The patch will not
+ impact bridges that already exist (and so will not fix them either if
+ they are affected).
+ 
+ --
+ 
  As part of programming OpenvSwitch, Neutron will add to the set of
  protocols that bridges support [0].
  
  However, the Open vSwitch `ovs-vswitchd` process does not appear to
  always update its perspective of which protocol versions it should
  support for bridges:
  
  # ovs-ofctl -O OpenFlow14 dump-flows br-int
  2019-11-12T12:52:56Z|1|vconn|WARN|unix:/var/run/openvswitch/br-int.mgmt: version negotiation failed (we support version 0x05, peer supports version 0x01)
  ovs-ofctl: br-int: failed to connect to socket (Broken pipe)
  
  # systemctl restart ovsdb-server
  # ovs-ofctl -O OpenFlow14 dump-flows br-int
   cookie=0x84ead4b79da3289a, duration=1.576s, table=0, n_packets=0, n_bytes=0, priority=65535,vlan_tci=0x0fff/0x1fff actions=drop
   cookie=0x84ead4b79da3289a, duration=1.352s, table=0, n_packets=0, n_bytes=0, priority=5,in_port="int-br-ex",dl_dst=fa:16:3f:69:2e:c6 actions=goto_table:4
  ...
  (Success)
  
  The restart of the `ovsdb-server` process above will make `ovs-vswitchd`
  reassess its configuration.
  
- 
- 0: https://github.com/openstack/neutron/blob/0fa7e74ebb386b178d36ae684ff04f03bdd6cb0d/neutron/agent/common/ovs_lib.py#L281
+ 0:
+ https://github.com/openstack/neutron/blob/0fa7e74ebb386b178d36ae684ff04f03bdd6cb0d/neutron/agent/common/ovs_lib.py#L281
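
For reference, the protocol setting that neutron applies can be reproduced by hand with ovs-vsctl. A sketch (bridge name br-int assumed, run on a compute host):

```shell
# Check which OpenFlow versions the bridge currently advertises.
ovs-vsctl get bridge br-int protocols
# Set the versions neutron expects (this mirrors what the backported
# patch does at bridge create/init time).
ovs-vsctl set bridge br-int protocols=OpenFlow10,OpenFlow13,OpenFlow14
# OpenFlow 1.4 negotiation should now succeed.
ovs-ofctl -O OpenFlow14 dump-flows br-int
```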

** Patch added: "lp1852221-eoan-train.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1852221/+attachment/5379057/+files/lp1852221-eoan-train.debdiff

** Also affects: openvswitch (Ubuntu Eoan)
   Importance: Undecided
   Status: New

** Also affects: openvswitch (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Changed in: openvswitch (Ubuntu Focal)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1852221

Title:
  ovs-vswitchd needs to be forced to reconfigure after adding protocols
  to bridges

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1852221/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1852221] Re: ovs-vswitchd needs to be forced to reconfigure after adding protocols to bridges

2020-05-31 Thread Edward Hope-Morley
** Changed in: cloud-archive/ussuri
   Status: New => Fix Released

[Bug 1852221] Re: ovs-vswitchd needs to be forced to reconfigure after adding protocols to bridges

2020-05-31 Thread Edward Hope-Morley
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/train
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New

[Bug 1662324] Re: linux bridge agent disables ipv6 before adding an ipv6 address

2020-05-26 Thread Edward Hope-Morley
** Also affects: cloud-archive/mitaka
   Importance: Undecided
   Status: New

[Bug 1804261] Re: Ceph OSD units requires reboot if they boot before vault (and if not unsealed with 150s)

2020-04-20 Thread Edward Hope-Morley
Bionic/Queens is currently blocked on a potential regression in bug
1871820

[Bug 1863014] Re: skip trying to decrypt device if it already exists

2020-04-16 Thread Edward Hope-Morley
eoan-proposed verified using [Test Case] with output:

root@juju-773c15-lp1863014-eoan-10:/home/ubuntu# apt-cache policy vaultlocker
vaultlocker:
  Installed: 1.0.6-0ubuntu0.19.10.1
  Candidate: 1.0.6-0ubuntu0.19.10.1
  Version table:
 *** 1.0.6-0ubuntu0.19.10.1 500
500 http://archive.ubuntu.com/ubuntu eoan-proposed/universe amd64 Packages
100 /var/lib/dpkg/status
 1.0.4-0ubuntu0.19.10.1 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu eoan-updates/universe amd64 Packages
 1.0.3-0ubuntu2 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu eoan/universe amd64 Packages
root@juju-773c15-lp1863014-eoan-10:/home/ubuntu# systemctl restart vaultlocker-decrypt@76a2e3b7-0977-4dcd-a0c9-ec036259bac1.service
root@juju-773c15-lp1863014-eoan-10:/home/ubuntu# journalctl -u system-vaultlocker\x2ddecrypt.slice
-- Logs begin at Wed 2020-04-15 17:26:55 UTC, end at Thu 2020-04-16 12:58:40 UTC. --
-- No entries --
root@juju-773c15-lp1863014-eoan-10:/home/ubuntu# grep vault /var/log/syslog
Apr 16 12:58:38 juju-773c15-lp1863014-eoan-10 systemd[1]: Created slice system-vaultlocker\x2ddecrypt.slice.
Apr 16 12:58:38 juju-773c15-lp1863014-eoan-10 systemd[1]: Starting vaultlocker retrieve: 76a2e3b7-0977-4dcd-a0c9-ec036259bac1...
Apr 16 12:58:39 juju-773c15-lp1863014-eoan-10 sh[28985]: INFO:vaultlocker.shell:Checking if /dev/mapper/crypt-76a2e3b7-0977-4dcd-a0c9-ec036259bac1 exists.
Apr 16 12:58:39 juju-773c15-lp1863014-eoan-10 sh[28985]: INFO:vaultlocker.shell:Skipping setup of 76a2e3b7-0977-4dcd-a0c9-ec036259bac1 because it already exists.
Apr 16 12:58:39 juju-773c15-lp1863014-eoan-10 systemd[1]: vaultlocker-decrypt@76a2e3b7-0977-4dcd-a0c9-ec036259bac1.service: Succeeded.
Apr 16 12:58:39 juju-773c15-lp1863014-eoan-10 systemd[1]: Started vaultlocker retrieve: 76a2e3b7-0977-4dcd-a0c9-ec036259bac1.


** Tags removed: verification-needed verification-needed-eoan
** Tags added: sts-sru-needed verification-done verification-done-eoan

[Bug 1868557] Re: vaultlocker spins indefinitely if it starts before dns configured

2020-04-16 Thread Edward Hope-Morley
eoan-proposed verified using [Test Case] with output:

root@juju-773c15-lp1863014-eoan-10:/home/ubuntu# apt-cache policy vaultlocker
vaultlocker:
  Installed: 1.0.6-0ubuntu0.19.10.1
  Candidate: 1.0.6-0ubuntu0.19.10.1
  Version table:
 *** 1.0.6-0ubuntu0.19.10.1 500
500 http://archive.ubuntu.com/ubuntu eoan-proposed/universe amd64 Packages
100 /var/lib/dpkg/status
 1.0.4-0ubuntu0.19.10.1 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu eoan-updates/universe amd64 Packages
 1.0.3-0ubuntu2 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu eoan/universe amd64 Packages
root@juju-773c15-lp1863014-eoan-10:/home/ubuntu# journalctl -u system-vaultlocker\x2ddecrypt.slice
-- Logs begin at Wed 2020-04-15 17:26:55 UTC, end at Thu 2020-04-16 12:51:32 UTC. --
-- No entries --
root@juju-773c15-lp1863014-eoan-10:/home/ubuntu# mount| grep crypt
/dev/mapper/crypt-76a2e3b7-0977-4dcd-a0c9-ec036259bac1 on /var/lib/nova/instances type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)


** Tags removed: verification-needed verification-needed-eoan
** Tags added: verification-done verification-done-eoan

** Tags added: sts-sru-needed

[Bug 1799737] Re: l3 agent external_network_bridge broken with ovs

2020-03-27 Thread Edward Hope-Morley
It does feel like the code change in
https://review.opendev.org/#/c/564825/10/neutron/agent/l3/router_info.py
could be reverted, though, since it only affects the legacy config and
is also breaking it.

[Bug 1799737] Re: l3 agent external_network_bridge broken with ovs

2020-03-27 Thread Edward Hope-Morley
@axino I assume the environment you have that is using
external_network_bridge/external_network_id is quite old and was
originally deployed with a version older than Queens? Using these
options to configure external networks is deprecated, and since at least
Juno we have used bridge_mappings for this purpose (and to allow more
than one external network). There is an annoying quirk here though (and
perhaps this is why you have not switched): with the old approach the
network will likely not have a provider name (in the db), and therefore
migrating it as-is to a bridge_mappings-style config will break the
network (unless perhaps one can be set manually in the database).
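
A minimal sketch of the bridge_mappings way of wiring an external network, for comparison (names such as physnet1, br-ex and ext-net are placeholders, not taken from the environment discussed above):

```shell
# In the OVS agent config
# (/etc/neutron/plugins/ml2/openvswitch_agent.ini):
#
#   [ovs]
#   bridge_mappings = physnet1:br-ex
#
# Create the provider bridge the mapping refers to:
ovs-vsctl --may-exist add-br br-ex
# Then create the external network against that physical network:
openstack network create --external \
    --provider-network-type flat \
    --provider-physical-network physnet1 ext-net
```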

[Bug 1799737] Re: l3 agent external_network_bridge broken with ovs

2020-03-27 Thread Edward Hope-Morley
@slaweq the bug description says this issue was observed in Queens which
is currently under Extended Maintenance so presumably still eligible for
fixes if there is sufficient consensus on their criticality and enough
people to review. We also need to consider upgrades from Q -> R -> S
where people are still using this config.

[Bug 1863014] Re: skip trying to decrypt device if it already exists

2020-03-23 Thread Edward Hope-Morley
** Also affects: vaultlocker (Ubuntu Eoan)
   Importance: Undecided
   Status: New

** Also affects: vaultlocker (Ubuntu Focal)
   Importance: Medium
   Status: Fix Released

** Also affects: vaultlocker (Ubuntu Bionic)
   Importance: Undecided
   Status: New

[Bug 1868557] Re: vaultlocker spins indefinitely if it starts before dns configured

2020-03-23 Thread Edward Hope-Morley
** Also affects: vaultlocker (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: vaultlocker (Ubuntu)
   Status: New => In Progress

** Changed in: vaultlocker (Ubuntu)
 Assignee: (unassigned) => Edward Hope-Morley (hopem)

** Also affects: vaultlocker (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: vaultlocker (Ubuntu Focal)
   Importance: Undecided
 Assignee: Edward Hope-Morley (hopem)
   Status: In Progress

** Also affects: vaultlocker (Ubuntu Eoan)
   Importance: Undecided
   Status: New

[Bug 1867676] Re: Fetching by secret container doesn't raises 404 exception

2020-03-18 Thread Edward Hope-Morley
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

[Bug 1857126] Re: [SRU] Setting up external gateway on the router brings all ports of this router Down and errors "Router is not compatible with this agent" [bionic-stein]

2020-03-10 Thread Edward Hope-Morley
** Tags added: sts

[Bug 1776622] Re: snapd updates on focal never finish installing. Can't install any other updates.

2020-03-04 Thread Edward Hope-Morley
I can confirm I hit this with a fresh install of Focal Desktop today.

[Bug 1863704] Re: wrongly used a string type as int value for CEPH_VOLUME_SYSTEMD_TRIES and CEPH_VOLUME_SYSTEMD_INTERVAL

2020-02-28 Thread Edward Hope-Morley
** Also affects: cloud-archive/rocky
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/train
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/queens
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/stein
   Importance: Undecided
   Status: New

[Bug 1863704] Re: wrongly used a string type as int value for CEPH_VOLUME_SYSTEMD_TRIES and CEPH_VOLUME_SYSTEMD_INTERVAL

2020-02-18 Thread Edward Hope-Morley
@taodd can you please tell me which releases of Ubuntu Ceph this patch
already exists in, and which releases you are targeting this SRU at? You
have set Bionic, but is it already in Focal, Eoan, etc.?

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

[Bug 1838607] Re: vaultlocker service fails when some interface are DOWN with NO-CARRIER

2020-02-14 Thread Edward Hope-Morley
@mfo thanks, yeah, the piece of info that was missing for me is that the
interfaces need to be down AND have a netplan configuration in order for
the issue to trigger, which in my repro was not the case.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1838607

Title:
  vaultlocker service fails when some interface are DOWN with NO-CARRIER

To manage notifications about this bug go to:
https://bugs.launchpad.net/bionic-backports/+bug/1838607/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1838607] Re: vaultlocker service fails when some interface are DOWN with NO-CARRIER

2020-01-31 Thread Edward Hope-Morley
I'm trying to understand why I do not see this issue. I have several
interfaces DOWN and vaultlocker does not have this issue on boot:

root@chespin:~# ip a s| grep ": eno"
2: eno1:  mtu 1500 qdisc mq state DOWN group default qlen 1000
3: eno2:  mtu 1500 qdisc mq state DOWN group default qlen 1000
4: eno3:  mtu 1500 qdisc mq state DOWN group default qlen 1000
5: eno4:  mtu 1500 qdisc mq state DOWN group default qlen 1000
6: eno49:  mtu 9000 qdisc mq master br-eno49 state UP group default qlen 1000
7: eno50:  mtu 1500 qdisc mq state UP group default qlen 1000
root@chespin:~# dpkg -l| grep vaultlocker
ii  vaultlocker   1.0.3-0ubuntu1.18.10.1~ubuntu18.04.1    all  Secure storage of dm-crypt keys in Hashicorp Vault
root@chespin:~# grep "Dependency failed" /var/log/syslog*
root@chespin:~#

It also appears you are using a VM, so I wonder if that somehow impacts
your issue. The only other issue with vaultlocker on boot that I am
aware of is bug 1804261, where it can time out reaching the vault API,
but that is a different problem.

[Bug 1782922] Re: LDAP: changing user_id_attribute bricks group mapping

2020-01-20 Thread Edward Hope-Morley
Hi @dorina-t, this patch is already released in Bionic (Queens) and is
ready to be released for the Xenial Queens UCA, so let's ping
@corey.bryant to see if he can get it released.

[Bug 1723030] Re: Under certain conditions check_rules is very sluggish

2020-01-13 Thread Edward Hope-Morley
** No longer affects: cloud-archive/mitaka

** No longer affects: python-oslo.policy (Ubuntu Xenial)

[Bug 1826114] Re: Errors creating users and projects

2020-01-07 Thread Edward Hope-Morley
** Also affects: horizon (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu Disco)
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu Eoan)
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/train
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/stein
   Importance: Undecided
   Status: New

[Bug 1850754] Re: ceph-volume lvm list is O(n^2)

2020-01-06 Thread Edward Hope-Morley
** Tags added: sts-sru-needed

[Bug 1804261] Re: Ceph OSD units requires reboot if they boot before vault (and if not unsealed with 150s)

2019-12-11 Thread Edward Hope-Morley
** Also affects: ceph (Ubuntu Focal)
   Importance: High
 Assignee: dongdong tao (taodd)
   Status: New

** Also affects: ceph (Ubuntu Disco)
   Importance: Undecided
   Status: New

** Also affects: ceph (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: ceph (Ubuntu Eoan)
   Importance: Undecided
   Status: New

[Bug 1804261] Re: Ceph OSD units requires reboot if they boot before vault (and if not unsealed with 150s)

2019-12-11 Thread Edward Hope-Morley
Side note, I was initially unable to manually recover because I was
restarting the wrong ceph-volume service:

root@cephtest:~# systemctl -a| grep ceph-volume
  ceph-volume@bbfc0235-f8fd-458b-9c3d-21803b72f4bc.service        loaded activating start start Ceph Volume activation: bbfc0235-f8fd-458b-9c3d-21803b72f4bc
  ceph-volume@lvm-2-bbfc0235-f8fd-458b-9c3d-21803b72f4bc.service  loaded inactive   dead        Ceph Volume activation: lvm-2-bbfc0235-f8fd-458b-9c3d-21803b72f4bc

i.e. there are two, and it is the lvm* one that needs restarting (I
tried to restart the other, which didn't work).
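
In other words, the manual recovery amounts to restarting the lvm-prefixed unit (sketch, using the uuid from the output above):

```shell
# Restart the lvm-prefixed activation unit; the plain
# ceph-volume@<uuid> unit is not the one that activates the OSD.
systemctl restart ceph-volume@lvm-2-bbfc0235-f8fd-458b-9c3d-21803b72f4bc.service
```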

** Changed in: charm-ceph-osd
 Assignee: dongdong tao (taodd) => (unassigned)

** Changed in: charm-ceph-osd
   Status: Triaged => Invalid

** Changed in: charm-ceph-osd
   Importance: High => Undecided

** Changed in: ceph (Ubuntu)
   Importance: Undecided => High

** Changed in: ceph (Ubuntu)
 Assignee: (unassigned) => dongdong tao (taodd)

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/queens
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/rocky
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/train
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/stein
   Importance: Undecided
   Status: New

[Bug 1804261] Re: Ceph OSD units requires reboot if they boot before vault (and if not unsealed with 150s)

2019-12-10 Thread Edward Hope-Morley
** Also affects: ceph (Ubuntu)
   Importance: Undecided
   Status: New

[Bug 1840465] Re: [SRU] Fails to list security groups if one or more exists without rules

2019-12-09 Thread Edward Hope-Morley
** Tags removed: verification-needed
** Tags added: verification-done

[Bug 1847544] Re: backport: S3 policy evaluated incorrectly

2019-12-03 Thread Edward Hope-Morley
** Also affects: cloud-archive/queens
   Importance: Undecided
   Status: New

[Bug 1848286] Re: octavia is not reporting metrics like lbaasv2

2019-11-25 Thread Edward Hope-Morley
There are rocky and stein backport submissions @
https://review.opendev.org/#/q/Ib6e78438c3da0e22d93f720f00cdeadf0ed7a91f

[Bug 1815101] Re: [master] Restarting systemd-networkd breaks keepalived, heartbeat, corosync, pacemaker (interface aliases are restarted)

2019-10-31 Thread Edward Hope-Morley
** Tags added: sts

[Bug 1838109] Re: civetweb does not allow tuning of maximum socket connections

2019-10-30 Thread Edward Hope-Morley
Since the others are already released I'll mark them as
verification-done as well.

** Tags removed: verification-needed verification-rocky-needed verification-stein-needed
** Tags added: verification-done verification-rocky-done verification-stein-done

[Bug 1838109] Re: civetweb does not allow tuning of maximum socket connections

2019-10-30 Thread Edward Hope-Morley
xenial-queens verified

test output:

root@juju-662fa0-xq-sru-test-7:~# sed -i -r 's/(rgw frontends = civetweb port=70).*/\1 max_connections=1000/g' /etc/ceph/ceph.conf
root@juju-662fa0-xq-sru-test-7:~# systemctl restart ceph-radosgw@rgw.`hostname`
root@juju-662fa0-xq-sru-test-7:~# lsof -i :70
root@juju-662fa0-xq-sru-test-7:~# grep max_conn /var/log/ceph/ceph-client.rgw.juju-662fa0-xq-sru-test-7.log
2019-10-30 11:11:22.085139 7f8c89268000  0 civetweb: 0x55e1294108e0: max_connections value "1000" is invalid
2019-10-30 11:11:23.446174 7f961780d000  0 civetweb: 0x55d4401128e0: max_connections value "1000" is invalid
2019-10-30 11:11:25.497004 7f3569786000  0 civetweb: 0x555c414398e0: max_connections value "1000" is invalid
root@juju-662fa0-xq-sru-test-7:~# sed -i -r 's/(rgw frontends = civetweb port=70).*/\1 max_connections=1000/g' /etc/ceph/ceph.conf
root@juju-662fa0-xq-sru-test-7:~# systemctl restart ceph-radosgw@rgw.`hostname`
root@juju-662fa0-xq-sru-test-7:~# lsof -i :70
COMMAND  PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
radosgw 7663 ceph   37u  IPv4  25171  0t0  TCP *:gopher (LISTEN)
root@juju-662fa0-xq-sru-test-7:~# grep max_conn /var/log/ceph/ceph-client.rgw.juju-662fa0-xq-sru-test-7.log
2019-10-30 11:11:22.085139 7f8c89268000  0 civetweb: 0x55e1294108e0: max_connections value "1000" is invalid
2019-10-30 11:11:23.446174 7f961780d000  0 civetweb: 0x55d4401128e0: max_connections value "1000" is invalid
2019-10-30 11:11:25.497004 7f3569786000  0 civetweb: 0x555c414398e0: max_connections value "1000" is invalid
2019-10-30 11:11:26.860549 7fbbe5e42000  0 civetweb: 0x55b8810748e0: max_connections value "1000" is invalid
2019-10-30 11:11:28.070096 7f093f962000  0 civetweb: 0x55c44a9ac8e0: max_connections value "1000" is invalid
2019-10-30 11:12:15.341958 7f14baf35000  1 mgrc service_daemon_register rgw.juju-662fa0-xq-sru-test-7 metadata {arch=x86_64,ceph_version=ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable),cpu=Intel Xeon E312xx (Sandy Bridge, IBRS update),distro=ubuntu,distro_description=Ubuntu 16.04.6 LTS,distro_version=16.04,frontend_config#0=civetweb port=70 max_connections=1000,frontend_type#0=civetweb,hostname=juju-662fa0-xq-sru-test-7,kernel_description=#195-Ubuntu
 SMP Tue Oct 1 09:35:25 UTC 
2019,kernel_version=4.4.0-166-generic,mem_swap_kb=0,mem_total_kb=2047984,num_handles=1,os=Linux,pid=7663,zone_id=a3fe50ec-b375-4b10-ba6a-8e3bb8a15e89,zone_name=default,zonegroup_id=c35bd079-a42d-4540-acd1-bf77ba2ee5c8,zonegroup_name=default}


** Tags removed: verification-queens-needed
** Tags added: verification-queens-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1838109

Title:
  civetweb does not allow tuning of maximum socket connections

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceph/+bug/1838109/+subscriptions

[Bug 1838109] Re: civetweb does not allow tuning of maximum socket connections

2019-10-30 Thread Edward Hope-Morley
Presumably we will need a way for the charm to apply this setting if
needs be so adding charm-ceph-radosgw.

** Also affects: charm-ceph-radosgw
   Importance: Undecided
   Status: New

** Changed in: charm-ceph-radosgw
   Importance: Undecided => Medium

** Changed in: charm-ceph-radosgw
Milestone: None => 20.01

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1838109

Title:
  civetweb does not allow tuning of maximum socket connections

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceph/+bug/1838109/+subscriptions

[Bug 1847822] Re: CephFS authorize fails with unknown cap type

2019-10-21 Thread Edward Hope-Morley
** Tags added: sts-sru-needed

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1847822

Title:
  CephFS authorize fails with unknown cap type

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1847822/+subscriptions

[Bug 1843085] Re: Backport of zero-length gc chain fixes to Luminous

2019-10-14 Thread Edward Hope-Morley
** Tags added: sts-sru-needed

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1843085

Title:
  Backport of zero-length gc chain fixes to Luminous

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1843085/+subscriptions

[Bug 1840465] Re: Fails to list security groups if one or more exists without rules

2019-10-03 Thread Edward Hope-Morley
** Also affects: horizon (Ubuntu Eoan)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1840465

Title:
  Fails to list security groups if one or more exists without rules

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1840465/+subscriptions

[Bug 1840465] Re: Fails to list security groups if one or more exists without rules

2019-10-03 Thread Edward Hope-Morley
** Also affects: horizon (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu Disco)
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu Bionic)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1840465

Title:
  Fails to list security groups if one or more exists without rules

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1840465/+subscriptions

[Bug 1815101] Re: [master] Restarting systemd-networkd breaks keepalived, heartbeat, corosync, pacemaker (interface aliases are restarted)

2019-09-28 Thread Edward Hope-Morley
** Also affects: charm-keepalived
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1815101

Title:
  [master] Restarting systemd-networkd breaks keepalived, heartbeat,
  corosync, pacemaker (interface aliases are restarted)

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-keepalived/+bug/1815101/+subscriptions

[Bug 1815101] Re: [master] Restarting systemd-networkd breaks keepalived, heartbeat, corosync, pacemaker (interface aliases are restarted)

2019-09-28 Thread Edward Hope-Morley
Thanks Rafael/Christian,

I see that all those patches are in 243 and Eoan is currently on 242
(albeit -6, but I don't think any are already backported) so we'll need to
get this backported all the way down to Bionic.

max@power:~/git/systemd$ _c=( 7da377e 95355a2 db51778 c98d78d 1e49885 )
max@power:~/git/systemd$ for c in ${_c[@]}; do git tag --contains $c| egrep -v 
"\-rc";  done| sort -u
v243

Do we have a feel for if/when the keepalived fix(es) will be
backportable to B (1.x) as well? Since those fixes already exist in
Disco (2.0.10) it might be easier to start with those?

I will add the charm-keepalived to this LP since it will need support
for the networkd/netplan fix once that is available.
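The `git tag --contains` check above (which release tag first picked up a set of commits) can be reproduced generically; a minimal self-contained sketch using a throwaway repo with hypothetical commits and a hypothetical `v243` tag, not the systemd tree itself:

```shell
# Build a throwaway repo: two commits, with the second one tagged v243.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=a@b -c user.name=t commit -q --allow-empty -m 'fix 1'
c1=$(git rev-parse HEAD)
git -c user.email=a@b -c user.name=t commit -q --allow-empty -m 'fix 2'
git tag v243

# For each commit of interest, list the release tags whose history
# contains it, drop release candidates, then de-duplicate: the single
# surviving tag is the first release carrying all the commits.
_c=( "$c1" )
for c in "${_c[@]}"; do git tag --contains "$c" | grep -Ev -- '-rc'; done | sort -u
```

Because `v243` contains `fix 1` in its ancestry, the loop prints `v243`, mirroring the output Edward saw for the systemd patches.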

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1815101

Title:
  [master] Restarting systemd-networkd breaks keepalived, heartbeat,
  corosync, pacemaker (interface aliases are restarted)

To manage notifications about this bug go to:
https://bugs.launchpad.net/netplan/+bug/1815101/+subscriptions

[Bug 1832265] Re: py3: inconsistent encoding of token fields

2019-09-18 Thread Edward Hope-Morley
Disco verified:

Test output:

ubuntu@hopem-bastion:~/stsstack-bundles/openstack$ openstack user list --domain 
userdomain
+--+--+
| ID   | Name |
+--+--+
| 556fcbee7ba012ac2d452f4d7c63b38f2e943b5920ee55104224cabbd73384e3 | Jane Doe |
| 05a92b81187802919b4f0c5bbcb8d42106924aa1cc5ac1670cff66339a82351a | John Doe |
+--+--+
ubuntu@hopem-bastion:~/stsstack-bundles/openstack$ juju ssh keystone/0 -- 'dpkg 
-l | grep keystone'
ii  keystone   2:15.0.0-0ubuntu1.1 all  
OpenStack identity service - Daemons
ii  keystone-common2:15.0.0-0ubuntu1.1 all  
OpenStack identity service - Common files
ii  python3-keystone   2:15.0.0-0ubuntu1.1 all  
OpenStack identity service - Python 3 library
ii  python3-keystoneauth1  3.13.1-0ubuntu1 all  
authentication library for OpenStack Identity - Python 3.x
ii  python3-keystoneclient 1:3.19.0-0ubuntu1   all  
client library for the OpenStack Keystone API - Python 3.x
ii  python3-keystonemiddleware 6.0.0-0ubuntu1  all  
Middleware for OpenStack Identity (Keystone) - Python 3.x
Connection to 10.5.0.38 closed.

** Tags removed: verification-needed-disco
** Tags added: verification-done-disco

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1832265

Title:
  py3: inconsistent encoding of token fields

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-keystone-ldap/+bug/1832265/+subscriptions

[Bug 1782922] Re: LDAP: changing user_id_attribute bricks group mapping

2019-09-18 Thread Edward Hope-Morley
** Tags added: sts-sru-needed

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1782922

Title:
  LDAP: changing user_id_attribute bricks group mapping

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1782922/+subscriptions

[Bug 1819074] Re: Keepalived < 2.0.x in Ubuntu 18.04 LTS not compatible with systemd-networkd

2019-09-13 Thread Edward Hope-Morley
Looks like this has been fixed in keepalived 2.x (detection of missing
vip) - https://github.com/acassen/keepalived/issues/836 - but the patch
is embedded with a whole load others that were merged at once so might
be hard to backport.

** Bug watch added: github.com/acassen/keepalived/issues #836
   https://github.com/acassen/keepalived/issues/836

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1819074

Title:
  Keepalived < 2.0.x in Ubuntu 18.04 LTS  not compatible with systemd-
  networkd

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1819074/+subscriptions

[Bug 1657256] Re: Percona crashes when doing a a 'larger' update

2019-09-09 Thread Edward Hope-Morley
** Tags removed: sts-sru-needed
** Tags added: sts-sru-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1657256

Title:
  Percona crashes when doing a a 'larger' update

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-test-infra/+bug/1657256/+subscriptions

[Bug 1668771] Re: [SRU] systemd-resolved negative caching for extended period of time

2019-09-09 Thread Edward Hope-Morley
** Tags removed: sts-sru-needed
** Tags added: sts-sru-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1668771

Title:
  [SRU] systemd-resolved negative caching for extended period of time

To manage notifications about this bug go to:
https://bugs.launchpad.net/systemd/+bug/1668771/+subscriptions

[Bug 1840347] Re: Ceph 12.2.12 restarts services during upgrade

2019-09-09 Thread Edward Hope-Morley
** Tags added: sts-sru-needed

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1840347

Title:
  Ceph 12.2.12  restarts services during upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1840347/+subscriptions

[Bug 1633120] Re: [SRU] Nova scheduler tries to assign an already-in-use SRIOV QAT VF to a new instance

2019-09-09 Thread Edward Hope-Morley
** Tags removed: sts-sru-needed
** Tags added: sts-sru-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1633120

Title:
  [SRU] Nova scheduler tries to assign an already-in-use SRIOV QAT VF to
  a new instance

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1633120/+subscriptions

[Bug 1821594] Re: [SRU] Error in confirm_migration leaves stale allocations and 'confirming' migration state

2019-09-09 Thread Edward Hope-Morley
** Tags removed: sts-sru-needed
** Tags added: sts-sru-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1821594

Title:
  [SRU] Error in confirm_migration leaves stale allocations and
  'confirming' migration state

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1821594/+subscriptions

[Bug 1751923] Re: _heal_instance_info_cache periodic task bases on port list from nova db, not from neutron server

2019-08-29 Thread Edward Hope-Morley
** Changed in: nova (Ubuntu Disco)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1751923

Title:
  _heal_instance_info_cache periodic task bases on port list from nova
  db, not from neutron server

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1751923/+subscriptions

[Bug 1751923] Re: _heal_instance_info_cache periodic task bases on port list from nova db, not from neutron server

2019-08-29 Thread Edward Hope-Morley
** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/rocky
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/queens
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/stein
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Disco)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/stein
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1751923

Title:
  _heal_instance_info_cache periodic task bases on port list from nova
  db, not from neutron server

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1751923/+subscriptions

[Bug 1768824] Re: [SRU] service_statuses table running full in Designate database

2019-08-27 Thread Edward Hope-Morley
Xenial Queens verified using [Test Case]

Test Output:

root@juju-c01a91-lp1768824-sru-1:~# mysql -h$host -u${service} -p$passwd 
${service} -e'select count(*) from service_statuses where 
service_name="pool_manager";'
+--+
| count(*) |
+--+
|1 |
+--+
root@juju-c01a91-lp1768824-sru-1:~# ts=`date '+%Y-%m-%d %H:%M:%S'`
root@juju-c01a91-lp1768824-sru-1:~# svc_host=`mysql -B -h$host -u${service} 
-p$passwd ${service} -e 'select hostname from service_statuses where 
service_name="pool_manager";'| tail -n 1`
root@juju-c01a91-lp1768824-sru-1:~# mysql -h$host -u${service} -p$passwd 
${service} -e "insert into service_statuses values ('1234', '$ts', '$ts', 
'pool_manager', '$svc_host', '$ts', 'UP', '{}', '{}');"
ERROR 1062 (23000) at line 1: Duplicate entry 
'pool_manager-juju-c01a91-lp1768824-sru-1' for key 'unique_service_status'
root@juju-c01a91-lp1768824-sru-1:~# mysql -h$host -u${service} -p$passwd 
${service} -e "select * from INFORMATION_SCHEMA.TABLE_CONSTRAINTS where 
CONSTRAINT_TYPE='UNIQUE';"| grep service_statuses
def designate   unique_service_status   designate   
service_statusesUNIQUE


** Description changed:

  [Impact]
  This patch is required to prevent pool-manager from creating unbounded 
amounts of status logs in the service_statuses table triggered by having > 1 
log in there.
  
  [Test Case]
  * deploy openstack queens with designate
  * mysql> select count(*) from service_statuses where 
service_name="pool_manager"; should return 1
  * try to add an extra entry:
  
- ts=date '+%Y-%m-%d %H:%M:%S'
+ ts=`date '+%Y-%m-%d %H:%M:%S'`
  svc_host=`mysql -B -h$host -u${service} -p$passwd ${service} -e 'select 
hostname from service_statuses where service_name="pool_manager";'| tail -n 1`
  mysql -h$host -u${service} -p$passwd ${service} -e "insert into 
service_statuses values ('1234', '$ts', '$ts', 'pool_manager', '$svc_host', 
'$ts', 'UP', '{}', '{}');"
  
  * this should fail since the hostname/servicename columns should now be a 
unique constraint
  * can also check this with:
  
  mysql -h$host -u${service} -p$passwd ${service} -e "select * from
  INFORMATION_SCHEMA.TABLE_CONSTRAINTS where CONSTRAINT_TYPE='UNIQUE';"|
  grep service_statuses
  
- [Regression Potential] 
+ [Regression Potential]
  if the table already has multiple records for pool_manager in the 
service_statuses table, it will be necessary to (manually) delete all but one 
record in order for the upgrade to succeed.
  
  
  Hi,
  
  The service_statuses table in Designate database is running full of
  records in our deployment:
  
  MariaDB [designate]> select count(*) from service_statuses;
  
  +--+
  | count(*) |
  +--+
  | 24474342 |
  +--+
  1 row in set (7 min 19.09 sec)
  
  We got millions of rows in just couple of month. The problem is that the same 
services running on the same hosts create new record (instead of updating 
existing) during status report to Designate.
  This is how it looks in DB:
  
  MariaDB [designate]> select * from service_statuses;
  
+--+-+-+--++-++---+--+
  | id   | created_at  | updated_at 
 | service_name | hostname   | heartbeated_at  | 
status | stats | capabilities |
  
+--+-+-+--++-++---+--+
  | 0dde2b5f228549d5995cb0338841bd50 | 2018-05-02 12:06:03 | NULL   
 | producer | designate-producer-855855776-cr8d9 | 2018-05-02 12:06:03 | UP 
| {}| {}   |
  | 0e311d3000d8403d97066eba619490a3 | 2018-05-02 12:05:14 | NULL   
 | api  | designate-api-2042646259-6090v | 2018-05-02 12:05:13 | UP 
| {}| {}   |
  | 168448cd97cd428ea19318243570482c | 2018-05-02 12:05:48 | NULL   
 | producer | designate-producer-855855776-cr8d9 | 2018-05-02 12:05:48 | UP 
| {}| {}   |
  | 1685d7f80d8c4f75b052680e5e2f40ae | 2018-05-02 12:05:59 | NULL   
 | api  | designate-api-2042646259-6090v | 2018-05-02 12:05:58 | UP 
| {}| {}   |
  | 192275eb33854b4091b981b0c32d04f7 | 2018-05-02 12:05:41 | NULL   
 | worker   | designate-worker-3446544-7fzqx | 2018-05-02 12:05:35 | UP 
| {}| {}   |
  | 1e465011f21f47f096b54005675e8011 | 2018-05-02 12:05:25 | NULL   
 | mdns | designate-mdns-4198843580-lw6s2| 2018-05-02 12:05:25 | UP 
| {}| {}   |
  | 22e0ab87b3cd4228bc191e49923d13ba | 2018-05-02 12:05:58 | NULL   
 | producer | designate-producer-855855776-cr8d9 | 

[Bug 1768824] Re: [SRU] service_statuses table running full in Designate database

2019-08-14 Thread Edward Hope-Morley
Bionic Queens verified using [Test Case]

Test Output:

root@juju-01d5ff-lp1768824-sru-5:~# dpkg -l| grep designate
ii  designate-agent 1:6.0.1-0ubuntu1.2  
all  OpenStack DNS as a Service - agent
ii  designate-api   1:6.0.1-0ubuntu1.2  
all  OpenStack DNS as a Service - API server
ii  designate-central   1:6.0.1-0ubuntu1.2  
all  OpenStack DNS as a Service - central daemon
ii  designate-common1:6.0.1-0ubuntu1.2  
all  OpenStack DNS as a Service - common files
ii  designate-mdns  1:6.0.1-0ubuntu1.2  
all  OpenStack DNS as a Service - mdns
ii  designate-pool-manager  1:6.0.1-0ubuntu1.2  
all  OpenStack DNS as a Service - pool manager
ii  designate-sink  1:6.0.1-0ubuntu1.2  
all  OpenStack DNS as a Service - sink
ii  designate-zone-manager  1:6.0.1-0ubuntu1.2  
all  OpenStack DNS as a Service - zone manager
ii  python-designate1:6.0.1-0ubuntu1.2  
all  OpenStack DNS as a Service - Python libs
ii  python-designateclient  2.9.0-0ubuntu1  
all  client library for the OpenStack Designate API - Python 2.7
root@juju-01d5ff-lp1768824-sru-5:~# mysql -h$host -u${service} -p$passwd 
${service} -e'select count(*) from service_statuses where 
service_name="pool_manager"'
mysql: [Warning] Using a password on the command line interface can be insecure.
+--+
| count(*) |
+--+
|1 |
+--+
root@juju-01d5ff-lp1768824-sru-5:~# ts=`date '+%Y-%m-%d %H:%M:%S'`
root@juju-01d5ff-lp1768824-sru-5:~# svc_host=`mysql -B -h$host -u${service} 
-p$passwd ${service} -e 'select hostname from service_statuses where 
service_name="pool_manager";'| tail -n 1`
mysql: [Warning] Using a password on the command line interface can be insecure.
root@juju-01d5ff-lp1768824-sru-5:~# mysql -h$host -u${service} -p$passwd 
${service} -e "insert into service_statuses values ('1234', '$ts', '$ts', 
'pool_manager', '$svc_host', '$ts', 'UP', '{}', '{}');"
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 1062 (23000) at line 1: Duplicate entry 
'pool_manager-juju-01d5ff-lp1768824-sru-5' for key 'unique_service_status'
root@juju-01d5ff-lp1768824-sru-5:~# mysql -h$host -u${service} -p$passwd 
${service} -e "select * from INFORMATION_SCHEMA.TABLE_CONSTRAINTS where 
CONSTRAINT_TYPE='UNIQUE';"| grep service_statuses
mysql: [Warning] Using a password on the command line interface can be insecure.
def designate   unique_service_status   designate   
service_statusesUNIQUE
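The `Duplicate entry ... for key 'unique_service_status'` error above is the new unique constraint on the service_name/hostname pair doing its job. A minimal sketch of the same behaviour, using Python's sqlite3 in place of MySQL and with the table reduced to the relevant columns (column subset is illustrative, not the full designate schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE service_statuses (
        id TEXT PRIMARY KEY,
        service_name TEXT,
        hostname TEXT,
        status TEXT,
        -- Mirrors designate's unique_service_status constraint.
        CONSTRAINT unique_service_status UNIQUE (service_name, hostname)
    )
""")
conn.execute(
    "INSERT INTO service_statuses VALUES ('1', 'pool_manager', 'host-a', 'UP')")

# A second record for the same service/host pair is rejected instead of
# accumulating, which is what bounds the table's growth.
try:
    conn.execute(
        "INSERT INTO service_statuses VALUES ('2', 'pool_manager', 'host-a', 'UP')")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
```

Status updates for an existing service would then be UPDATEs (or upsert-style inserts) rather than fresh rows, matching the single-row count seen in the verification output.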


** Tags removed: verification-needed-bionic
** Tags added: verification-done-bionic

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1768824

Title:
  [SRU] service_statuses table running full in Designate database

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1768824/+subscriptions

[Bug 1816468] Re: [SRU] Acceleration cinder - glance with ceph not working

2019-08-07 Thread Edward Hope-Morley
Bionic + Rocky verified (nova) using [Test Case]

Test Output:

root@juju-7ad500-lp1816468-sru-rocky-11:~# dpkg -l| grep nova-compute
ii  nova-compute 2:18.2.1-0ubuntu1~cloud3   
 all  OpenStack Compute - compute node base
ii  nova-compute-kvm 2:18.2.1-0ubuntu1~cloud3   
 all  OpenStack Compute - compute node (KVM)
ii  nova-compute-libvirt 2:18.2.1-0ubuntu1~cloud3   
 all  OpenStack Compute - compute node libvirt support

root@juju-7ad500-lp1816468-sru-rocky-0:~# rbd -p nova info 
30503caf-e7a0-44d0-886d-36f92690ab88_disk| grep parent
parent: glance/c981d74b-ef06-40e6-9e4c-199b99a49e82@snap


** Tags removed: verification-needed verification-rocky-needed
** Tags added: verification-done verification-rocky-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1816468

Title:
  [SRU] Acceleration cinder - glance with ceph not working

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1816468/+subscriptions

[Bug 1816468] Re: [SRU] Acceleration cinder - glance with ceph not working

2019-08-06 Thread Edward Hope-Morley
** Tags removed: verification-rocky-failed
** Tags added: verification-rocky-needed

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1816468

Title:
  [SRU] Acceleration cinder - glance with ceph not working

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1816468/+subscriptions

[Bug 1816468] Re: [SRU] Acceleration cinder - glance with ceph not working

2019-08-06 Thread Edward Hope-Morley
Bionic + Stein verified (nova) using [Test Case]

Test Output:

root@juju-7f1874-lp1816468-sru-stein-0:~# rbd -p nova info 
2923df76-c175-4862-a61a-71207390b4cb_disk| grep parent
parent: glance/2ea4abff-a978-4b99-b717-5a3c8932b6f6@snap


** Tags removed: verification-stein-needed
** Tags added: verification-stein-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1816468

Title:
  [SRU] Acceleration cinder - glance with ceph not working

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1816468/+subscriptions

[Bug 1816468] Re: [SRU] Acceleration cinder - glance with ceph not working

2019-08-06 Thread Edward Hope-Morley
Disco verified (nova) using [Test Case]

Test Output:

root@juju-935137-lp1816468-sru-disco-0:~# rbd -p nova info 
f507dc2b-fc6b-421c-be22-2fde84826099_disk| grep parent
parent: glance/f0d95291-aaf7-4e8f-829d-515ca55ba874@snap

** Tags removed: verification-needed-disco
** Tags added: verification-done-disco

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1816468

Title:
  [SRU] Acceleration cinder - glance with ceph not working

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1816468/+subscriptions

[Bug 1633120] Re: [SRU] Nova scheduler tries to assign an already-in-use SRIOV QAT VF to a new instance

2019-08-01 Thread Edward Hope-Morley
Mitaka not backportable so abandoning:

$ git-deps -e mitaka-eol 5c5a6b93a07b0b58f513396254049c17e2883894^!
c2c3b97259258eec3c98feabde3b411b519eae6e

$ git-deps -e mitaka-eol c2c3b97259258eec3c98feabde3b411b519eae6e^!
a023c32c70b5ddbae122636c26ed32e5dcba66b2
74fbff88639891269f6a0752e70b78340cf87e9a
e83842b80b73c451f78a4bb9e7bd5dfcebdefcab
1f259e2a9423a4777f79ca561d5e6a74747a5019
b01187eede3881f72addd997c8fd763ddbc137fc
49d9433c62d74f6ebdcf0832e3a03e544b1d6c83


** Changed in: cloud-archive/mitaka
   Status: Triaged => Won't Fix

** Changed in: nova (Ubuntu Xenial)
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1633120

Title:
  [SRU] Nova scheduler tries to assign an already-in-use SRIOV QAT VF to
  a new instance

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1633120/+subscriptions

[Bug 1816468] Re: [SRU] Acceleration cinder - glance with ceph not working

2019-07-29 Thread Edward Hope-Morley
Hi @vvsorokin, can I get you to check something? If you upgrade to this
new package but you had previously stored an image using the old broken
package, then you will need to manually modify the db entry for that
image in the glance db to get it to work again (after upgrading). There
is a bit more info in https://bugs.launchpad.net/cloud-archive/+bug/1816721.
You also need to upgrade glance. So basically to fix this problem you
need to:

 * upgrade cinder
 * upgrade glance
 * check glance db for incorrectly formatted image local locations in the 
image_locations table (see [Test Case] in 1816721)

Alternatively to messing with the db you could just delete and re-upload
your glance image and that should give you the correct value in the
database.
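For reference, the underlying py3 issue (per this bug's [Impact] description) is that librados' `get_fsid()` returns a byte string while the fsid parsed from the glance image location is a str, so the equality check that gates the copy-on-write clone silently fails. A minimal sketch of the mismatch and the decode fix, with a hypothetical fsid value rather than the actual cinder code:

```python
# Hypothetical fsid; under py2 both sides were str and compared equal.
fsid_from_librados = b"8e476e0e-4d89-4d0b-8eb5-c5c4f1f12a55"  # bytes on py3
fsid_from_glance_url = "8e476e0e-4d89-4d0b-8eb5-c5c4f1f12a55"  # str

# On py3 bytes never compare equal to str, so the "same cluster?" check
# fails and cinder falls back to a full download/convert/upload cycle.
assert fsid_from_librados != fsid_from_glance_url

# Decoding before comparing restores the intended match.
assert fsid_from_librados.decode("utf-8") == fsid_from_glance_url
```

This is why the verification steps above check for a `parent:` line in `rbd info`: a parent snapshot proves the clone path (and hence the fixed comparison) was taken.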

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1816468

Title:
  [SRU] Acceleration cinder - glance with ceph not working

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1816468/+subscriptions

[Bug 1633120] Re: [SRU] Nova scheduler tries to assign an already-in-use SRIOV QAT VF to a new instance

2019-07-23 Thread Edward Hope-Morley
Xenial Ocata verified using [Test Case]

Test output: https://pastebin.ubuntu.com/p/5gnDJBz5J4/

** Tags removed: verification-ocata-needed
** Tags added: verification-ocata-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1633120

Title:
  [SRU] Nova scheduler tries to assign an already-in-use SRIOV QAT VF to
  a new instance

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1633120/+subscriptions

[Bug 1816468] Re: [SRU] Acceleration cinder - glance with ceph not working

2019-07-22 Thread Edward Hope-Morley
Hi @sil2100, I've added the SRU template and test case; sorry I forgot to
add it before.

** Summary changed:

- Acceleration cinder - glance with ceph not working
+ [SRU] Acceleration cinder - glance with ceph not working

** Description changed:

- When using cinder, glance with ceph, in a code is support for creating
- volumes from images INSIDE ceph environment as copy-on-write volume.
- This option is saving space in ceph cluster, and increase speed of
- instance spawning because volume is created directly in ceph.   <= THIS
- IS NOT WORKING IN PY3
+ [Impact]
+ For >= rocky (i.e. if using py3 packages) librados.cluster.get_fsid() is 
returning a binary string which means that the fsid can't be matched against a 
string version of the same value from glance when deciding whether to use an 
image that is stored in Ceph.
+ 
+ [Test Case]
+ * deploy openstack rocky (using py3 packages)
+ * deploy ceph and use for glance backend
+ * set
+ /etc/glance/glance-api.conf:show_multiple_locations = True
+ /etc/glance/glance-api.conf:show_image_direct_url = True
+ * upload image to glance
+ * attempt to boot an instance using this image
+ * confirm that instance booted properly and check that the image it booted 
from is a cow clone of the glance image by doing the following in ceph:
+ 
+ rbd -p nova info | grep parent:
+ 
+ * confirm that you see "parent: glance/@snap"
+ 
+ [Regression Potential]
+ None expected
+ 
+ [Other Info]
+ None expected.
+ 
+ 
+ When using cinder and glance with ceph, the code supports creating 
volumes from images INSIDE the ceph environment as copy-on-write volumes. This 
option saves space in the ceph cluster and increases the speed of instance 
spawning because the volume is created directly in ceph.   <= THIS IS NOT 
WORKING IN PY3
  
  If this function is not enabled, the image is copied to the compute host,
  converted, used to create a volume, and then uploaded to ceph (which is,
  of course, time consuming).
  
  The problem is that even when glance-cinder acceleration is turned on, the
  code executes as if it were disabled, i.e. the same as above: copy the
  image, create a volume, upload to ceph... BUT it should create a
  copy-on-write volume inside ceph internally. <= THIS IS A BUG IN PY3
  
  Glance config ( controller ):
  
  [DEFAULT]
  show_image_direct_url = true   <= this has to be set to true to 
reproduce issue
  workers = 7
  transport_url = rabbit://openstack:openstack@openstack-db
  [cors]
  [database]
  connection = mysql+pymysql://glance:Eew7shai@openstack-db:3306/glance
  [glance_store]
  stores = file,rbd
  default_store = rbd
  filesystem_store_datadir = /var/lib/glance/images
  rbd_store_pool = images
  rbd_store_user = images
  rbd_store_ceph_conf = /etc/ceph/ceph.conf
  [image_format]
  [keystone_authtoken]
  auth_url = http://openstack-ctrl:35357
  project_name = service
  project_domain_name = default
  username = glance
  user_domain_name = default
  password = Eew7shai
  www_authenticate_uri = http://openstack-ctrl:5000
  auth_uri = http://openstack-ctrl:35357
  cache = swift.cache
  region_name = RegionOne
  auth_type = password
  [matchmaker_redis]
  [oslo_concurrency]
  lock_path = /var/lock/glance
  [oslo_messaging_amqp]
  [oslo_messaging_kafka]
  [oslo_messaging_notifications]
  [oslo_messaging_rabbit]
  [oslo_messaging_zmq]
  [oslo_middleware]
  [oslo_policy]
  [paste_deploy]
  flavor = keystone
  [store_type_location_strategy]
  [task]
  [taskflow_executor]
  [profiler]
  enabled = true
  trace_sqlalchemy = true
  hmac_keys = secret
  connection_string = redis://127.0.0.1:6379
  trace_wsgi_transport = True
  trace_message_store = True
  trace_management_store = True
  
- Cinder conf (controller) : 
- root@openstack-controller:/tmp# cat /etc/cinder/cinder.conf | grep -v '^#' | 
awk NF 
+ Cinder conf (controller) :
+ root@openstack-controller:/tmp# cat /etc/cinder/cinder.conf | grep -v '^#' | 
awk NF
  [DEFAULT]
  my_ip = 192.168.10.15
  glance_api_servers = http://openstack-ctrl:9292
  auth_strategy = keystone
  enabled_backends = rbd
  osapi_volume_workers = 7
  debug = true
  transport_url = rabbit://openstack:openstack@openstack-db
  [backend]
  [backend_defaults]
  rbd_pool = volumes
  rbd_user = volumes1
  rbd_secret_uuid = b2efeb49-9844-475b-92ad-5df4a3e1300e
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  [barbican]
  [brcd_fabric_example]
  [cisco_fabric_example]
  [coordination]
  [cors]
  [database]
  connection = mysql+pymysql://cinder:EeRe3ahx@openstack-db:3306/cinder
  [fc-zone-manager]
  [healthcheck]
  [key_manager]
  [keystone_authtoken]
  auth_url = http://openstack-ctrl:35357
  project_name = service
  project_domain_name = default
  username = cinder
  user_domain_name = default
  password = EeRe3ahx
  www_authenticate_uri = http://openstack-ctrl:5000
  auth_uri = http://openstack-ctrl:35357
  cache = swift.cache
  region_name = RegionOne
  auth_type = password
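The [Impact] above boils down to a py3 bytes-vs-str mismatch; a minimal
illustrative sketch (`fsids_match` is a hypothetical helper, not the actual
glance_store code, and the fsid values are made up):

```python
# Minimal sketch of the py3 issue described above: librados returned the
# cluster fsid as bytes, so comparing it to the str fsid parsed from a
# glance location URL always failed.

def fsids_match(cluster_fsid, image_fsid):
    """Compare fsids, normalising bytes (py3 librados) to str."""
    if isinstance(cluster_fsid, bytes):
        cluster_fsid = cluster_fsid.decode('utf-8')
    return cluster_fsid == image_fsid

# Under py3, bytes never compare equal to str, so the unpatched check
# silently rejected the Ceph-backed image:
assert b'374de550-a32a' != '374de550-a32a'
assert fsids_match(b'374de550-a32a', '374de550-a32a')
```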
  

[Bug 1821594] Re: [SRU] Error in confirm_migration leaves stale allocations and 'confirming' migration state

2019-07-15 Thread Edward Hope-Morley
** Changed in: nova/stein
   Status: Fix Committed => Fix Released

** Changed in: cloud-archive/stein
   Status: Fix Committed => Fix Released

** Changed in: nova (Ubuntu Disco)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1821594

Title:
  [SRU] Error in confirm_migration leaves stale allocations and
  'confirming' migration state

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1821594/+subscriptions


[Bug 1722584] Re: [SRU] Return traffic from metadata service may get dropped by hypervisor due to wrong checksum

2019-07-12 Thread Edward Hope-Morley
Xenial Queens verified using [Test Case]

Test output:

root@juju-2587ed-lp1722584-sru-5:~# dpkg -l | grep neutron-l3-agent
ii  neutron-l3-agent 2:12.0.6-0ubuntu2~cloud0   
all  Neutron is a virtual network service for Openstack - l3 agent
root@juju-2587ed-lp1722584-sru-5:~# sudo ip netns exec 
qrouter-120cf0e7-5349-4896-ac54-1035ca92c1b0 iptables -t mangle -S| grep 
'\--sport 9697 -j CHECKSUM --checksum-fill'
root@juju-2587ed-lp1722584-sru-5:~# 


** Tags removed: verification-needed verification-queens-needed
** Tags added: verification-done verification-queens-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1722584

Title:
  [SRU] Return traffic from metadata service may get dropped by
  hypervisor due to wrong checksum

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1722584/+subscriptions


[Bug 1768824] Re: [SRU] service_statuses table running full in Designate database

2019-07-12 Thread Edward Hope-Morley
** Patch added: "lp1768824-bionic.debdiff"
   
https://bugs.launchpad.net/cloud-archive/+bug/1768824/+attachment/5276673/+files/lp1768824-bionic.debdiff

** Tags removed: verification-needed verification-needed-bionic
** Tags added: verification-failed verification-failed-bionic

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1768824

Title:
  [SRU] service_statuses table running full in Designate database

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1768824/+subscriptions


[Bug 1768824] Re: [SRU] service_statuses table running full in Designate database

2019-07-12 Thread Edward Hope-Morley
Bionic proposed package failed to build because I forgot to remove a
dependency from the unit tests in that patch prior to attaching the
debdiff. Apologies for that. I have tested a new patch with this
dependency removed and will attach it now. Please go ahead and replace
the current bionic-proposed build with this one. Thanks.

** Patch removed: "lp1768824-bionic-queens.debdiff"
   
https://bugs.launchpad.net/cloud-archive/+bug/1768824/+attachment/5268554/+files/lp1768824-bionic-queens.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1768824

Title:
  [SRU] service_statuses table running full in Designate database

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1768824/+subscriptions


[Bug 1722584] Re: [SRU] Return traffic from metadata service may get dropped by hypervisor due to wrong checksum

2019-07-12 Thread Edward Hope-Morley
Bionic (Queens) verified using [Test Case]

Test output:

root@juju-eabc2c-lp1722584-sru-5:~# dpkg -l| grep neutron-l3-agent
ii  neutron-l3-agent2:12.0.6-0ubuntu2   
all  Neutron is a virtual network service for Openstack - l3 agent
root@juju-eabc2c-lp1722584-sru-5:~# sudo ip netns exec 
qrouter-7952203a-0305-433c-8dc4-a7c6af1beb26 iptables -t mangle -S| grep 
'\--sport 9697 -j CHECKSUM --checksum-fill'
root@juju-eabc2c-lp1722584-sru-5:~# 

** Tags removed: verification-needed-bionic
** Tags added: verification-done-bionic

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1722584

Title:
  [SRU] Return traffic from metadata service may get dropped by
  hypervisor due to wrong checksum

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1722584/+subscriptions


[Bug 1722584] Re: [SRU] Return traffic from metadata service may get dropped by hypervisor due to wrong checksum

2019-07-11 Thread Edward Hope-Morley
Xenial+Rocky verified using [Test Case]

Test output:

root@juju-7320a7-lp1722584-sru-5:~# dpkg -l| grep neutron-l3-agent
ii  neutron-l3-agent 2:13.0.3-0ubuntu2~cloud0   
 all  Neutron is a virtual network service for Openstack - l3 agent
root@juju-7320a7-lp1722584-sru-5:~# ip netns exec 
qrouter-be78d1c5-1fed-4853-8428-ca228153c669 iptables -t mangle -S| grep 
'\--sport 9697 -j CHECKSUM --checksum-fill'
root@juju-7320a7-lp1722584-sru-5:~# 


** Tags removed: verification-rocky-needed
** Tags added: verification-rocky-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1722584

Title:
  [SRU] Return traffic from metadata service may get dropped by
  hypervisor due to wrong checksum

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1722584/+subscriptions


[Bug 1722584] Re: [SRU] Return traffic from metadata service may get dropped by hypervisor due to wrong checksum

2019-07-11 Thread Edward Hope-Morley
Cosmic verified with [Test Case]

Test output:

root@juju-c36d25-lp1722584-sru-5:~# dpkg -l| grep neutron-l3-agent
ii  neutron-l3-agent 2:13.0.3-0ubuntu2  
all  Neutron is a virtual network service for Openstack - l3 agent
root@juju-c36d25-lp1722584-sru-5:~# ip netns exec 
qrouter-01a2b5a1-582e-420f-8838-b7928436797f iptables -t mangle -S| grep 9697
root@juju-c36d25-lp1722584-sru-5:~# 


** Tags removed: verification-needed-cosmic
** Tags added: verification-done-cosmic

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1722584

Title:
  [SRU] Return traffic from metadata service may get dropped by
  hypervisor due to wrong checksum

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1722584/+subscriptions


[Bug 1722584] Re: [SRU] Return traffic from metadata service may get dropped by hypervisor due to wrong checksum

2019-07-11 Thread Edward Hope-Morley
oops sorry ^^ should be stein not rocky

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1722584

Title:
  [SRU] Return traffic from metadata service may get dropped by
  hypervisor due to wrong checksum

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1722584/+subscriptions


[Bug 1722584] Re: [SRU] Return traffic from metadata service may get dropped by hypervisor due to wrong checksum

2019-07-11 Thread Edward Hope-Morley
Xenial+Stein verified using [Test Case]

Test output:

root@juju-92f0c2-lp1722584-sru-5:~# dpkg -l| grep neutron-l3-agent
ii neutron-l3-agent 2:14.0.2-0ubuntu1~cloud0 all Neutron is a virtual network 
service for Openstack - l3 agent
root@juju-92f0c2-lp1722584-sru-5:~# sudo ip netns exec 
qrouter-32ea60b2-ac9b-4a16-8933-63818eb71568 iptables -t mangle -S| grep 9697
root@juju-92f0c2-lp1722584-sru-5:~#


** Tags removed: verification-rocky-done verification-stein-needed
** Tags added: verification-rocky-needed verification-stein-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1722584

Title:
  [SRU] Return traffic from metadata service may get dropped by
  hypervisor due to wrong checksum

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1722584/+subscriptions


[Bug 1722584] Re: [SRU] Return traffic from metadata service may get dropped by hypervisor due to wrong checksum

2019-07-11 Thread Edward Hope-Morley
Xenial+Rocky verified using [Test Case]

Test output:

root@juju-92f0c2-lp1722584-sru-5:~# dpkg -l|  grep neutron-l3-agent
ii  neutron-l3-agent 2:14.0.2-0ubuntu1~cloud0   
 all  Neutron is a virtual network service for Openstack - l3 agent
root@juju-92f0c2-lp1722584-sru-5:~# sudo ip netns exec 
qrouter-32ea60b2-ac9b-4a16-8933-63818eb71568 iptables -t mangle -S| grep 9697
root@juju-92f0c2-lp1722584-sru-5:~# 


** Tags removed: verification-rocky-needed
** Tags added: verification-rocky-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1722584

Title:
  [SRU] Return traffic from metadata service may get dropped by
  hypervisor due to wrong checksum

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1722584/+subscriptions


[Bug 1816721] Re: [SRU] Python3 librados incompatibility

2019-07-10 Thread Edward Hope-Morley
Cosmic verification test output ^^

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1816721

Title:
  [SRU] Python3 librados incompatibility

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1816721/+subscriptions


[Bug 1816721] Re: [SRU] Python3 librados incompatibility

2019-07-10 Thread Edward Hope-Morley
root@juju-b1c912-lp1816721-cosmic-sru-5:~# dpkg -l| grep glance-api
ii  glance-api  2:17.0.0-0ubuntu4.1 all 
 OpenStack Image Registry and Delivery Service - API
root@juju-b1c912-lp1816721-cosmic-sru-5:~# mysql -h$host -u${service} -p$passwd 
${service} -e'select * from image_locations;'
        id: 1
  image_id: 26e83cd0-1692-4516-85c2-7c994330e0e4
     value: rbd://374de550-a32a-11e9-b0f4-fa163e55087b/glance/26e83cd0-1692-4516-85c2-7c994330e0e4/snap
created_at: 2019-07-10 19:33:25
updated_at: 2019-07-10 19:33:25
deleted_at: NULL
   deleted: 0
 meta_data: {}
    status: active

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1816721

Title:
  [SRU] Python3 librados incompatibility

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1816721/+subscriptions


[Bug 1722584] Re: [SRU] Return traffic from metadata service may get dropped by hypervisor due to wrong checksum

2019-07-10 Thread Edward Hope-Morley
Disco verified using [Test Case]

Test output:

root@juju-7eb0ae-lp1722584-sru-5:~# dpkg -l| grep neutron-l3-agent
ii  neutron-l3-agent 2:14.0.2-0ubuntu1   
all  Neutron is a virtual network service for Openstack - l3 agent
root@juju-7eb0ae-lp1722584-sru-5:~# sudo ip netns exec 
qrouter-88cd871f-b3a5-4ee9-8c53-cc6a5f2eb9d1 iptables -t mangle -S| grep 9697
root@juju-7eb0ae-lp1722584-sru-5:~#

** Tags removed: verification-needed-disco
** Tags added: verification-done-disco

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1722584

Title:
  [SRU] Return traffic from metadata service may get dropped by
  hypervisor due to wrong checksum

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1722584/+subscriptions


[Bug 1816721] Re: [SRU] Python3 librados incompatibility

2019-07-09 Thread Edward Hope-Morley
Xenial+Rocky verified using [Test Case]

** Tags removed: verification-needed verification-rocky-needed
** Tags added: verification-done verification-rocky-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1816721

Title:
  [SRU] Python3 librados incompatibility

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1816721/+subscriptions


[Bug 1816721] Re: [SRU] Python3 librados incompatibility

2019-07-08 Thread Edward Hope-Morley
Cosmic verified using [Test Case]

** Tags removed: verification-needed-cosmic
** Tags added: verification-done-cosmic

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1816721

Title:
  [SRU] Python3 librados incompatibility

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1816721/+subscriptions


[Bug 1821594] Re: [SRU] Error in confirm_migration leaves stale allocations and 'confirming' migration state

2019-07-08 Thread Edward Hope-Morley
Marking Disco/Stein as Fix Committed since it is the same fix as in bug 1831754

** Changed in: cloud-archive/stein
   Status: Triaged => Fix Committed

** Changed in: nova (Ubuntu Disco)
   Status: Triaged => Fix Committed

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1821594

Title:
  [SRU] Error in confirm_migration leaves stale allocations and
  'confirming' migration state

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1821594/+subscriptions


[Bug 1633120] Re: [SRU] Nova scheduler tries to assign an already-in-use SRIOV QAT VF to a new instance

2019-07-04 Thread Edward Hope-Morley
** Summary changed:

- Nova scheduler tries to assign an already-in-use SRIOV QAT VF to a new 
instance
+ [SRU] Nova scheduler tries to assign an already-in-use SRIOV QAT VF to a new 
instance

** Description changed:

+ [Impact]
+ This patch is required to prevent nova from accidentally marking pci_device 
allocations as deleted when it incorrectly reads the passthrough whitelist.
+ 
+ [Test Case]
+ * deploy openstack (any version that supports sriov)
+ * single compute configured for sriov with at least once device in 
pci_passthrough_whitelist
+ * create a vm and attach sriov port
+ * remove device from pci_passthrough_whitelist and restart nova-compute
+ * check that pci_devices allocations have not been marked as deleted
+ 
+ [Regression Potential]
+ None anticipated
+ 
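The final [Test Case] step can be checked with a query along these lines (a
sketch only; the table and column names are assumed from the nova schema
quoted later in this thread, and credentials will differ per deployment):

```shell
# Sketch only: verify that pci_devices allocations were not soft-deleted
# after removing the device from the whitelist and restarting nova-compute.
QUERY='SELECT address, status, deleted FROM pci_devices WHERE deleted != 0'
if command -v mysql >/dev/null 2>&1; then
    # Expect no rows: allocations must not be marked deleted after restart
    mysql -u root nova -e "$QUERY"
else
    echo "mysql client not available; run the query against the nova DB"
fi
```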
  Upon trying to create VM instance (say A) with one QAT VF, it fails with the 
following error: “Requested operation is not valid: PCI device 
:88:04.7 is in use by driver QEMU, domain instance-0081”. Please note 
that PCI device :88:04.7 is already assigned to another VM (say B). 
We have installed the openstack-mitaka release on a CentOS 7 system. It has two 
Intel QAT devices. There are 32 VF devices available per QAT/DH895xCC device. 
Out of 64 VFs, only 8 VFs are allocated (to VM instances) and the rest should be 
available.
- But the nova scheduler tries to assign an already-in-use SRIOV VF to a new 
instance and instance fails. It appears that the nova database is not tracking 
which VF's have already been taken. But if I shut down VM B instance, then 
other instance VM A boots up and vice-versa. Note that, both the VM instances 
cannot run simultaneously because of the aforesaid issue. 
+ But the nova scheduler tries to assign an already-in-use SRIOV VF to a new 
instance and instance fails. It appears that the nova database is not tracking 
which VF's have already been taken. But if I shut down VM B instance, then 
other instance VM A boots up and vice-versa. Note that, both the VM instances 
cannot run simultaneously because of the aforesaid issue.
  
  We should always be able to create as many instances with the requested
  PCI devices as there are available VFs.
  
  Please feel free to let me know if additional information is needed. Can
  anyone please suggest why it tries to assign same PCI device which has
  been assigned already? Is there any way to resolve this issue? Thank you
  in advance for your support and help.
  
  [root@localhost ~(keystone_admin)]# lspci -d:435
  83:00.0 Co-processor: Intel Corporation DH895XCC Series QAT
  88:00.0 Co-processor: Intel Corporation DH895XCC Series QAT
  [root@localhost ~(keystone_admin)]#
  
- 
  [root@localhost ~(keystone_admin)]# lspci -d:443 | grep "QAT Virtual 
Function" | wc -l
  64
  [root@localhost ~(keystone_admin)]#
-  
-  
+ 
  [root@localhost ~(keystone_admin)]# mysql -u root nova -e "SELECT 
hypervisor_hostname, address, instance_uuid, status FROM pci_devices JOIN 
compute_nodes oncompute_nodes.id=compute_node_id" | grep :88:04.7
  localhost                   :88:04.7  e10a76f3-e58e-4071-a4dd-7a545e8000de  allocated
  localhost                   :88:04.7  c3dbac90-198d-4150-ba0f-a80b912d8021  allocated
  localhost                   :88:04.7  c7f6adad-83f0-4881-b68f-6d154d565ce3  allocated
  localhost.nfv.benunets.com  :88:04.7  0c3c11a5-f9a4-4f0d-b120-40e4dde843d4  allocated
  [root@localhost ~(keystone_admin)]#
-  
+ 
  [root@localhost ~(keystone_admin)]# grep -r 
e10a76f3-e58e-4071-a4dd-7a545e8000de /etc/libvirt/qemu
  /etc/libvirt/qemu/instance-0081.xml:  
e10a76f3-e58e-4071-a4dd-7a545e8000de
  /etc/libvirt/qemu/instance-0081.xml:  e10a76f3-e58e-4071-a4dd-7a545e8000de
  /etc/libvirt/qemu/instance-0081.xml:  
  /etc/libvirt/qemu/instance-0081.xml:  
  /etc/libvirt/qemu/instance-0081.xml:  
  [root@localhost ~(keystone_admin)]#
  [root@localhost ~(keystone_admin)]# grep -r 
0c3c11a5-f9a4-4f0d-b120-40e4dde843d4 /etc/libvirt/qemu
  /etc/libvirt/qemu/instance-00ab.xml:  
0c3c11a5-f9a4-4f0d-b120-40e4dde843d4
  /etc/libvirt/qemu/instance-00ab.xml:  0c3c11a5-f9a4-4f0d-b120-40e4dde843d4
  /etc/libvirt/qemu/instance-00ab.xml:  
  /etc/libvirt/qemu/instance-00ab.xml:  
  /etc/libvirt/qemu/instance-00ab.xml:  
  [root@localhost ~(keystone_admin)]#
-  
- On the controller, , it appears there are duplicate PCI device entries in the 
Database:
-  
+ 
+ On the controller, it appears there are duplicate PCI device entries
+ in the database:
+ 
  MariaDB [nova]> select hypervisor_hostname,address,count(*) from pci_devices 
JOIN compute_nodes on compute_nodes.id=compute_node_id group by 
hypervisor_hostname,address having count(*) > 1;
  +---------------------+----------+----------+
  | hypervisor_hostname | address  | count(*) |
  +---------------------+----------+----------+
  | localhost  | 

[Bug 1695876] Re: German Documentation file displays incorrect CUPS version

2019-06-24 Thread Edward Hope-Morley
** Tags removed: sts-sru-needed

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1695876

Title:
  German Documentation file displays incorrect CUPS version

To manage notifications about this bug go to:
https://bugs.launchpad.net/cups/+bug/1695876/+subscriptions


[Bug 1833079] Re: backport librbd librados py3 string encoding fixes

2019-06-24 Thread Edward Hope-Morley
The linked patch has actually been reverted upstream (my fault for not
spotting that) and the following patch appears to be the one we actually
want -
https://github.com/ceph/ceph/commit/005f19eff0b4a92647e5847e306718be76704432

This new patch has a much narrower scope (it just fixes get_fsid), but since
that is the only problem we are currently observing I think it's sufficient
for our requirements.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1833079

Title:
  backport librbd librados py3 string encoding fixes

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1833079/+subscriptions


[Bug 1722584] Re: [SRU] Return traffic from metadata service may get dropped by hypervisor due to wrong checksum

2019-06-19 Thread Edward Hope-Morley
** Description changed:

  [Impact]
  Prior addition of code to add checksum rules was found to cause problems with 
newer kernels. Patch subsequently reverted so this request is to backport those 
patches to the ubuntu archives.
  
  [Test Case]
  * deploy openstack (>= queens)
  * create router/network/instance (dvr=false,l3ha=false)
  * go to router ns on neutron-gateway and check that the following returns 
nothing
  sudo ip netns exec qrouter- iptables -t mangle -S| grep '\--sport 9697 -j 
CHECKSUM --checksum-fill'
  
  [Regression Potential]
- The original issue is no longer fixed once this patch is reverted.
+ Backporting the revert patch will mean that routers created with this patch 
will no longer have a checksum rule added for metadata tcp packets. The 
original patch added a rule that turned out not to be the fix for the root 
issue and was subsequently found to cause problems with kernels < 4.19 since it 
was never intended for gso tcp packets to have their checksum verified using 
this type of rule. So, removal of this rule (by addition of the revert patch) 
is not intended to change behaviour at all. The only potential side-effect is 
that rules that were already created will not be cleaned up (until node reboot 
or router recreate) and in an L3HA config you could end up with some router 
instances having the rule and some not depending on whether they were created 
before or after the patch was included.
  
  [Other Info]
  This revert patch does not remove rules added by the original patch so manual 
cleanup of those old rules is required.
  
  -
  We have a problem with the metadata service not being responsive, when the 
proxied in the router namespace on some of our networking nodes after upgrading 
to Ocata (Running on CentOS 7.4, with the RDO packages).
  
  Instance routes traffic to 169.254.169.254 to it's default gateway.
  Default gateway is an OpenStack router in a namespace on a networking node.
  
  - Traffic gets sent from the guest,
  - to the router,
  - iptables routes it to the metadata proxy service,
  - response packet gets routed back, leaving the namespace
  - Hypervisor gets the packet in
  - Checksum of packet is wrong, and the packet gets dropped before putting it 
on the bridge
  
  Based on the following bug https://bugs.launchpad.net/openstack-
  ansible/+bug/1483603, we found that adding the following iptable rule in
  the router namespace made this work again: 'iptables -t mangle -I
  POSTROUTING -p tcp --sport 9697 -j CHECKSUM --checksum-fill'
  
  (NOTE: The rule from the 1st comment to the bug did solve access to the
  metadata service, but the lack of precision introduced other problems
  with the network)
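The [Other Info] note above means stale rules have to be removed by hand; a
hedged sketch of that cleanup (router namespace naming is assumed to follow
the `qrouter-<uuid>` convention shown in this bug; adjust to your deployment):

```shell
# Hedged cleanup sketch for the stale CHECKSUM rules mentioned in
# [Other Info]: iterate over router namespaces and delete the rule the
# original (now reverted) patch inserted, if it is present.
for ns in $(ip netns list 2>/dev/null | awk '/^qrouter-/ {print $1}'); do
    # -D deletes the exact rule the original patch added
    ip netns exec "$ns" iptables -t mangle -D POSTROUTING \
        -p tcp --sport 9697 -j CHECKSUM --checksum-fill 2>/dev/null || true
done
```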

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1722584

Title:
  [SRU] Return traffic from metadata service may get dropped by
  hypervisor due to wrong checksum

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1722584/+subscriptions


[Bug 1833079] Re: backport librbd librados py3 string encoding fixes

2019-06-18 Thread Edward Hope-Morley
** Description changed:

+ [Impact]
+ Requesting a backport of the py3 encoding fixes to Mimic so that consumers of 
that code don't get strings returned with invalid encodings when run under py3.
+ 
+ [Test Case]
+ 
+ sudo apt install -y python3-rados
+ cat << EOF | python3
+ import rados
+ name = 'ceph'
+ conf = '/var/lib/charm/cinder-ceph/ceph.conf'
+ user = 'cinder-ceph'
+ client = rados.Rados(rados_id=user,
+   clustername=name,
+   conffile=conf)
+ client.connect()
+ fsid = client.get_fsid()
+ if type(fsid) == str:
+ print("value={} has correct type={}".format(fsid, type(fsid)))
+ else:
+ print("value={} has incorrect type={} (expect str)".format(fsid, 
type(fsid)))
+ EOF
+ 
+ [Regression Potential]
+ none expected
+ 
+ -
+ 
  These fixes relate to issues we have found in Openstack [1][2] when
  running under Python 3. They are fixed upstream by commit [3] which is
  available in nautilus and beyond so we would like to backport to mimic
  and luminous.
  
  [1] https://bugs.launchpad.net/nova/+bug/1816468
  [2] https://bugs.launchpad.net/glance-store/+bug/1816721
  [3] 
https://github.com/ceph/ceph/commit/c36d0f1a7de4668eb81075e4a94846cf81fc30cd

** Description changed:

  [Impact]
  Requesting a backport of the py3 encoding fixes to Mimic so that consumers of 
that code don't get strings returned with invalid encodings when run under py3.
  
  [Test Case]
  
  sudo apt install -y python3-rados
  cat << EOF | python3
  import rados
  name = 'ceph'
- conf = '/var/lib/charm/cinder-ceph/ceph.conf'
- user = 'cinder-ceph'
+ conf = '/etc/ceph/ceph.conf'
+ user = 'a-user'
  client = rados.Rados(rados_id=user,
-   clustername=name,
-   conffile=conf)
+  clustername=name,
+  conffile=conf)
  client.connect()
  fsid = client.get_fsid()
  if type(fsid) == str:
- print("value={} has correct type={}".format(fsid, type(fsid)))
+ print("value={} has correct type={}".format(fsid, type(fsid)))
  else:
- print("value={} has incorrect type={} (expect str)".format(fsid, 
type(fsid)))
+ print("value={} has incorrect type={} (expect str)".format(fsid, 
type(fsid)))
  EOF
  
  [Regression Potential]
  none expected
  
  -
  
  These fixes relate to issues we have found in Openstack [1][2] when
  running under Python 3. They are fixed upstream by commit [3] which is
  available in nautilus and beyond so we would like to backport to mimic
  and luminous.
  
  [1] https://bugs.launchpad.net/nova/+bug/1816468
  [2] https://bugs.launchpad.net/glance-store/+bug/1816721
  [3] https://github.com/ceph/ceph/commit/c36d0f1a7de4668eb81075e4a94846cf81fc30cd
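The encoding problem behind this backport can be illustrated without a live Ceph cluster. The following is a minimal, hypothetical sketch in plain Python (the function names and the fsid value are invented for illustration; this is not the actual librados binding code): a pre-fix style API hands back raw bytes under Python 3, and the fix amounts to decoding once at the binding boundary so callers receive str.

```python
# Hypothetical stand-ins for the binding behaviour; not the real librados API.

def get_fsid_before_fix():
    # Pre-fix: the C-level result is returned undecoded, so under
    # Python 3 callers receive bytes where they expect str.
    return b"0d8f2a1c-9c1e-4b7a-8f32-b54c3f1e2d6a"

def get_fsid_after_fix():
    # Post-fix: decode once at the binding boundary so callers get str.
    return get_fsid_before_fix().decode("utf-8")

if __name__ == "__main__":
    for fn in (get_fsid_before_fix, get_fsid_after_fix):
        value = fn()
        print("{}: value={!r} type={}".format(
            fn.__name__, value, type(value).__name__))
```

The [Test Case] above performs the same str-vs-bytes check against the real python3-rados bindings.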

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1833079

Title:
  backport librbd librados py3 string encoding fixes

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1833079/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1833079] Re: backport librbd librados py3 string encoding fixes

2019-06-18 Thread Edward Hope-Morley
** Tags added: py3


[Bug 1833079] [NEW] backport librbd librados py3 string encoding fixes

2019-06-17 Thread Edward Hope-Morley
Public bug reported:

These fixes relate to issues we have found in OpenStack [1][2] when
running under Python 3. They are fixed upstream by commit [3], which is
available in Nautilus and beyond, so we would like to backport them to
Mimic and Luminous.

[1] https://bugs.launchpad.net/nova/+bug/1816468
[2] https://bugs.launchpad.net/glance-store/+bug/1816721
[3] https://github.com/ceph/ceph/commit/c36d0f1a7de4668eb81075e4a94846cf81fc30cd

** Affects: cloud-archive
 Importance: Undecided
 Status: New

** Affects: cloud-archive/rocky
 Importance: Undecided
 Status: New

** Affects: cloud-archive/stein
 Importance: Undecided
 Status: New

** Affects: cloud-archive/train
 Importance: Undecided
 Status: New

** Affects: ceph (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: ceph (Ubuntu Cosmic)
 Importance: Undecided
 Status: New

** Affects: ceph (Ubuntu Disco)
 Importance: Undecided
 Status: New

** Affects: ceph (Ubuntu Eoan)
 Importance: Undecided
 Status: New

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/rocky
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/stein
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/train
   Importance: Undecided
   Status: New

** Also affects: ceph (Ubuntu Eoan)
   Importance: Undecided
   Status: New

** Also affects: ceph (Ubuntu Cosmic)
   Importance: Undecided
   Status: New

** Also affects: ceph (Ubuntu Disco)
   Importance: Undecided
   Status: New


[Bug 1722584] Re: [SRU] Return traffic from metadata service may get dropped by hypervisor due to wrong checksum

2019-06-13 Thread Edward Hope-Morley
** Tags added: sts sts-sru-needed


[Bug 1821594] Re: [SRU] Error in confirm_migration leaves stale allocations and 'confirming' migration state

2019-06-12 Thread Edward Hope-Morley
** Summary changed:

- Error in confirm_migration leaves stale allocations and 'confirming' migration state
+ [SRU] Error in confirm_migration leaves stale allocations and 'confirming' migration state

** Changed in: nova (Ubuntu Eoan)
   Status: New => Fix Committed

** Changed in: cloud-archive/train
   Status: New => Fix Committed


[Bug 1821594] Re: Error in confirm_migration leaves stale allocations and 'confirming' migration state

2019-06-12 Thread Edward Hope-Morley
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/queens
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/stein
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/train
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/rocky
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Eoan)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Cosmic)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Disco)
   Importance: Undecided
   Status: New

** Tags added: sts-sru-needed


[Bug 1722584] Re: [SRU] Return traffic from metadata service may get dropped by hypervisor due to wrong checksum

2019-06-11 Thread Edward Hope-Morley
IIUC the kernel commit that actually mitigates the impact of this issue
is the one that landed in
https://github.com/torvalds/linux/commit/10568f6c5761db24249c610c94d6e44d5505a0ba
which is available from 4.19 onwards.


[Bug 1798184] Re: [SRU] PY3: python3-ldap does not allow bytes for DN/RDN/field names

2019-06-11 Thread Edward Hope-Morley
** Tags added: py3


[Bug 1832210] Re: incorrect decode of log prefix under python 3

2019-06-11 Thread Edward Hope-Morley
** Tags added: py3


[Bug 1816468] Re: Acceleration cinder - glance with ceph not working

2019-06-07 Thread Edward Hope-Morley
** Changed in: cinder (Ubuntu Eoan)
   Status: Triaged => Fix Released

** No longer affects: cinder (Ubuntu Eoan)

** Changed in: cinder (Ubuntu Disco)
   Status: Triaged => Fix Released


[Bug 1816721] Re: [SRU] Python3 librados incompatibility

2019-06-07 Thread Edward Hope-Morley
** Also affects: python-glance-store (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: python-glance-store (Ubuntu Cosmic)
   Importance: Undecided
   Status: New

** Also affects: python-glance-store (Ubuntu Disco)
   Importance: Undecided
   Status: New

** Changed in: python-glance-store (Ubuntu Disco)
   Status: New => Fix Released


[Bug 1816468] Re: Acceleration cinder - glance with ceph not working

2019-06-07 Thread Edward Hope-Morley
glance stable/rocky backport submitted in bug 1816721


[Bug 1816468] Re: Acceleration cinder - glance with ceph not working

2019-06-07 Thread Edward Hope-Morley
@melwitt sorry i didn't refresh the page before commenting so missed
that you added your comment. So yeah, its just a matter of getting the
various bits backported to R (and the nova part landed and backported to
S and R)


[Bug 1816468] Re: Acceleration cinder - glance with ceph not working

2019-06-07 Thread Edward Hope-Morley
Looks like the glance issue was fixed using a separate bug -
https://bugs.launchpad.net/glance-store/+bug/1816721 so at least that's
done (but needs backporting to Rocky)


[Bug 1816468] Re: Acceleration cinder - glance with ceph not working

2019-06-06 Thread Edward Hope-Morley
As an aside, I just tested the nova patch in a local deployment and it
isn't actually sufficient to resolve the problem, because the value
stored by glance is also invalid -
https://pastebin.ubuntu.com/p/mj9gFpSBMv/

I'll have a look to see if I can get the glance value stripped of
invalid characters when doing the comparison, but otherwise I'll have to
patch glance as well.


[Bug 1828259] Re: [rocky][19.04] Upgrading a deployment from Queens to Rocky resulted in purging of neutron-l3-agent package

2019-06-06 Thread Edward Hope-Morley
Ok, this was my mistake: I accidentally caused the neutron-openvswitch
charm to be installed on the same host as the neutron-gateway (and the
n-ovs charm uninstalls the l3-agent when DVR is not used). So this is
not a bug after all.


[Bug 1828259] Re: [rocky][19.04] Upgrading a deployment from Queens to Rocky resulted in purging of neutron-l3-agent package

2019-06-06 Thread Edward Hope-Morley
I seem to have just hit this on a fresh install of bionic+rocky:

root@crustle:~# dpkg -l| egrep "keepalived|neutron-l3"
ii  keepalived  1:1.3.9-1ubuntu0.18.04.2  amd64  Failover and monitoring daemon for LVS clusters

root@crustle:~# tail -n 1 /var/log/neutron/neutron-l3-agent.log
2019-06-05 23:43:07.487 385426 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" released by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3/dist-packages/oslo_concurrency/lockutils.py:285

root@crustle:~# grep l3-agent /var/log/apt/history.log | grep -vi install
Commandline: apt-get --assume-yes purge keepalived neutron-l3-agent
Purge: keepalived:amd64 (1:1.3.9-1ubuntu0.18.04.2), neutron-l3-agent:amd64 (2:13.0.2-0ubuntu3.2~cloud0)

root@crustle:~# grep l3-agent /var/log/dpkg.log | grep -v status
2019-06-05 23:18:41 install neutron-l3-agent:all  2:13.0.2-0ubuntu3.2~cloud0
2019-06-05 23:20:03 configure neutron-l3-agent:all 2:13.0.2-0ubuntu3.2~cloud0
2019-06-05 23:43:14 remove neutron-l3-agent:all 2:13.0.2-0ubuntu3.2~cloud0
2019-06-05 23:43:17 purge neutron-l3-agent:all 2:13.0.2-0ubuntu3.2~cloud0


[Bug 1828259] Re: [rocky][19.04] Upgrading a deployment from Queens to Rocky resulted in purging of neutron-l3-agent package

2019-06-06 Thread Edward Hope-Morley
using stable (19.04) charms ^^


[Bug 1825882] Re: [SRU] Virsh disk attach errors silently ignored

2019-06-05 Thread Edward Hope-Morley
** Tags added: sts-sru-needed

** Tags added: sts


[Bug 1768824] Re: [SRU] service_statuses table running full in Designate database

2019-06-04 Thread Edward Hope-Morley
** Tags added: sts sts-sru-needed


[Bug 1768824] Re: [SRU] service_statuses table running full in Designate database

2019-06-03 Thread Edward Hope-Morley
** Patch removed: "lp1606741-queens.debdiff"
   
https://bugs.launchpad.net/designate/+bug/1768824/+attachment/5268542/+files/lp1606741-rocky.debdiff

** Patch added: "lp1768824-bionic-queens.debdiff"
   
https://bugs.launchpad.net/designate/+bug/1768824/+attachment/5268554/+files/lp1768824-bionic-queens.debdiff


<    1   2   3   4   5   6   7   8   9   10   >