[Bug 1942623] [NEW] [Wishlist] systemctl CLI UI should allow glob expansion throughout the service name

2021-09-03 Thread Drew Freiberger
Public bug reported:

When attempting to craft a command to query and/or restart all services
matching a name like 'something-*-else', such as 'neutron-*-agent' to match
[neutron-l3-agent, neutron-openvswitch-agent, and neutron-dhcp-agent],
I've found that globs in the middle of the string return no services.
The only supported glob seems to be a * at the end of the service name.
Single-character globbing with ? is also not honored (see the last
example below).

To provide an example, I expect each of these commands to return
dbus.socket and dbus.service, but only 'systemctl status dbu*' produces
the expected output.

drew@grimoire:~$ systemctl status dbus
● dbus.service - D-Bus System Message Bus
 Loaded: loaded (/lib/systemd/system/dbus.service; static)
 Active: active (running) since Mon 2021-08-30 23:41:27 CDT; 3 days ago
TriggeredBy: ● dbus.socket
   Docs: man:dbus-daemon(1)
   Main PID: 1357 (dbus-daemon)
  Tasks: 1 (limit: 57290)
 Memory: 4.9M
 CGroup: /system.slice/dbus.service
 └─1357 @dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only

Warning: some journal files were not opened due to insufficient permissions.
drew@grimoire:~$ systemctl status *bus
drew@grimoire:~$ systemctl status '*bus'
drew@grimoire:~$ systemctl status 'dbu*'
● dbus.service - D-Bus System Message Bus
 Loaded: loaded (/lib/systemd/system/dbus.service; static)
 Active: active (running) since Mon 2021-08-30 23:41:27 CDT; 3 days ago
TriggeredBy: ● dbus.socket
   Docs: man:dbus-daemon(1)
   Main PID: 1357 (dbus-daemon)
  Tasks: 1 (limit: 57290)
 Memory: 5.0M
 CGroup: /system.slice/dbus.service
 └─1357 @dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only

Warning: some journal files were not opened due to insufficient permissions.
● dbus.socket - D-Bus System Message Bus Socket
 Loaded: loaded (/lib/systemd/system/dbus.socket; static)
 Active: active (running) since Mon 2021-08-30 23:41:27 CDT; 3 days ago
   Triggers: ● dbus.service
 Listen: /run/dbus/system_bus_socket (Stream)
drew@grimoire:~$ systemctl status 'db*s'
drew@grimoire:~$ systemctl status 'dbu?'
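
In the meantime, a workaround sketch that approximates mid-string
globbing by filtering the loaded unit list in the shell (assumes GNU
grep and xargs; adjust the pattern as needed):

systemctl list-units --type=service --all --no-legend \
  | grep -oE 'neutron-[^[:space:]]*-agent\.service' \
  | xargs -r systemctl status

The same pipeline works for restart by swapping the final verb.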

** Affects: systemd (Ubuntu)
 Importance: Undecided
 Status: New


[Bug 1783203] Re: Upgrade to RabbitMQ 3.6.10 causes beam lockup in clustered deployment

2021-07-06 Thread Drew Freiberger
We should revisit whether this is still an issue with the Focal version
of RabbitMQ.


[Bug 1828534] Re: [19.04][Queens -> Rocky] Upgrading to Rocky resulted in "Services not running that should be: designate-producer"

2021-05-27 Thread Drew Freiberger
I also experienced this on 2 of 3 units of designate running 21.01
charms upgrading from cloud:bionic-stein to cloud:bionic-train with
action-managed-upgrade=false.

Interestingly, the unit that did not exhibit this race was not the
leader.

I've got a sosreport available if anyone is interested.


[Bug 1912847] Re: ovs-vswitchd should consider lack of DPDK support fatal

2021-04-23 Thread Drew Freiberger
Added monitoring elements to charmhelpers here:
https://github.com/juju/charm-helpers/pull/601


[Bug 1912847] Re: ovs-vswitchd should consider lack of DPDK support fatal

2021-04-23 Thread Drew Freiberger
Added a PR for layer:ovn, which will later need to be re-synced into
charm-ovn-chassis.

https://github.com/openstack-charmers/charm-layer-ovn/pull/47

** Also affects: charm-helpers
   Importance: Undecided
   Status: New


[Bug 1912847] Re: ovs-vswitchd should consider lack of DPDK support fatal

2021-04-20 Thread Drew Freiberger
Adding OVS-related charms, which should be instrumented with an
additional NRPE check to alert on incompatible interfaces configured in
the switch.


[Bug 1912847] Re: ovs-vswitchd should consider lack of DPDK support fatal

2021-04-20 Thread Drew Freiberger
Here's a reproducer:

juju deploy ubuntu # onto a vm or metal, can't repro with lxd
juju ssh ubuntu/0 sudo apt-get install openvswitch-switch -y
juju ssh ubuntu/0 'sudo ovs-vsctl add-br br0; sudo ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 type=dpdk options:dpdk-devargs=:01:00.0'

Error output from the add-port command is:

ovs-vsctl: Error detected while setting up 'dpdk-p0': could not open network device dpdk-p0 (Address family not supported by protocol).  See ovs-vswitchd log for details.
ovs-vsctl: The default log directory is "/var/log/openvswitch".


You can then query 'ovs-vsctl show' and see:

Bridge br0
    Port dpdk-p0
        Interface dpdk-p0
            type: dpdk
            options: {dpdk-devargs=":01:00.0"}
            error: "could not open network device dpdk-p0 (Address family not supported by protocol)"
    Port br0
        Interface br0
            type: internal


ovs-vsctl -f csv list Interface | grep 'not supported'

provides output that could also be alerted on as an active
configuration issue, rather than alerting on the presence of the log entry.

$ sudo ovs-vsctl -f csv list Interface | grep 'not supported'
556792cf-733a-48fd-aeb1-9423c68e354e,[],{},{},[],[],[],[],[],[],[],[],"""could not open network device dpdk-p0 (Address family not supported by protocol)""",{},[],0,0,[],[],[],[],{},[],[],[],[],dpdk-p0,-1,[],"{dpdk-devargs="":01:00.0""}",{},{},{},dpdk
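
A minimal Nagios-style check built on that query (a sketch of the
proposed instrumentation, not the actual charmhelpers code):

#!/bin/sh
# Alert if any OVS Interface row reports an unsupported-configuration error.
errors=$(ovs-vsctl -f csv list Interface | grep -c 'not supported')
if [ "$errors" -gt 0 ]; then
    echo "CRITICAL: $errors OVS interface(s) report unsupported configuration"
    exit 2
fi
echo "OK: no OVS interface configuration errors"
exit 0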


** Also affects: charm-neutron-openvswitch
   Importance: Undecided
   Status: New

** Also affects: charm-neutron-gateway
   Importance: Undecided
   Status: New

** Also affects: charm-ovn-chassis
   Importance: Undecided
   Status: New


[Bug 1907686] Re: ovn: instance unable to retrieve metadata

2021-04-20 Thread Drew Freiberger
Verified that the bionic-ussuri-proposed python3-openvswitch package
resolved the issue in a production environment.

** Tags removed: verification-ussuri-needed
** Tags added: verification-ussuri-done


[Bug 1912847] Re: ovs-vswitchd should consider lack of DPDK support fatal

2021-04-19 Thread Drew Freiberger
** Merge proposal linked:
   https://code.launchpad.net/~afreiberger/charm-prometheus-grok-exporter/+git/charm-prometheus-grok-exporter/+merge/401421


[Bug 50093] Re: Some sysctls are ignored on boot

2021-04-06 Thread Drew Freiberger
Still seeing this in bionic 18.04.3 with the following kernel and procps
package.

ii  procps  2:3.3.12-3ubuntu1.2  amd64  /proc file system utilities

kernel 5.4.0-62-generic


[Bug 1885430] Re: [Bionic/Stein] Ceilometer-agent fails to collect metrics after restart

2021-03-10 Thread Drew Freiberger
After investigating a bit more, I also find that libvirtd.service was
started at 03:19, and it appears the pollster error traces back to this
code failing to connect to libvirt:
https://github.com/openstack/ceilometer/blob/c0632ae9e0f2eecbf0da9578c88f7c85727244f8/ceilometer/compute/virt/libvirt/inspector.py#L201

"Should-start" is a misnomer.  I believe the proper directive from
systemd.unit(5) would be "Wants=" (together with "After=" for ordering).
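
For illustration, a drop-in along those lines (the unit name
ceilometer-agent-compute.service is an assumption; substitute the
actual agent unit):

sudo mkdir -p /etc/systemd/system/ceilometer-agent-compute.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/ceilometer-agent-compute.service.d/override.conf
[Unit]
Wants=libvirtd.service
After=libvirtd.service
EOF
sudo systemctl daemon-reload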


[Bug 1259760] Re: Spice console isn't working when ssl_only=True is set

2021-02-02 Thread Drew Freiberger
This is a valid bug for cloud:xenial-queens UCA pocket.

I've tested that the fix for this issue is included in the bionic
repositories in version 0.1.7-2ubuntu1.

spice-html5    0.1.7-2ubuntu1

I'm requesting this to be backported into the xenial-queens cloud
archive.

** Also affects: charm-nova-cloud-controller
   Importance: Undecided
   Status: New


[Bug 1906280] Re: [SRU] Add support for disabling mlockall() calls in ovs-vswitchd

2021-01-28 Thread Drew Freiberger
I'm also seeing this affect neutron-gateway on focal, with the
config-changed hook hanging at:

ovs-vsctl -- --may-exist add-br br-int -- set bridge br-int external-ids:charm-neutron-gateway=managed

This is during LMA charm testing, which is performed on the LXD
provider at the moment.

** Also affects: charm-neutron-gateway
   Importance: Undecided
   Status: New


[Bug 1904988] Re: [SRU] set defaults to be sslv23 not tlsv1

2020-12-17 Thread Drew Freiberger
Will this fix also address TLS 1.2 enablement on noVNC console proxies,
or is it only valid for Spice consoles?  Since it's a websockify
update, I'm assuming it should work for both.


[Bug 1804261] Re: Ceph OSD units requires reboot if they boot before vault (and if not unsealed with 150s)

2020-09-29 Thread Drew Freiberger
The bionic-ussuri package has the retries set to 1.  My machine start
time to vault unseal time was about 18 hours.  We should have this set
to heal for up to 5 days after machine start.

I'm almost wondering if vaultlocker-decrypt also needs its retries
increased.

Here's a workaround I've found for anyone experiencing this
operationally:

After unsealing the vault, loop through ceph-osd units with the
following two loops to decrypt and start the LVM volumes for ceph-osd
services to startup:

for i in $(ls /etc/systemd/system/multi-user.target.wants/vaultlocker-decrypt@* | cut -d/ -f6); do sudo systemctl start $i; done
for i in $(ls /etc/systemd/system/multi-user.target.wants/ceph-volume@* | cut -d/ -f6); do sudo systemctl start $i; done


[Bug 1804261] Re: Ceph OSD units requires reboot if they boot before vault (and if not unsealed with 150s)

2020-09-29 Thread Drew Freiberger
This is still an issue with bionic-ussuri ceph
15.2.3-0ubuntu0.20.04.2~cloud0


[Bug 1226855] Re: Cannot use open-iscsi inside LXC container

2020-08-10 Thread Drew Freiberger
Adding BootStack to watch this bug, as we are taking ownership of
charm-iscsi-connector, which would ideally be tested within LXD
confinement but requires a VM or metal for functional tests.


[Bug 1861941] Re: bcache by-uuid links disappear after mounting bcache0

2020-07-17 Thread Drew Freiberger
I'm having similar issues to this bug and those described by Dmitrii in
https://bugs.launchpad.net/charm-ceph-osd/+bug/1883585 specifically
comment #2 and the last comment.

It appears that if I run 'udevadm trigger --subsystem-match=block', I
get my by-dname devices for bcaches, but if I run udevadm trigger
without a subsystem match, something triggered after the block
subsystem removes the links.

Here are the runs with --verbose to show what appears to be getting
probed on each run:

https://pastebin.ubuntu.com/p/VPvSKRfGt4/

This is with 5.3.0-62 kernel on Bionic.

I also have the core and canonical-livepatch snaps installed, as did
the environment where Dmitrii ran into this.


[Bug 1861941] Re: bcache by-uuid links disappear after mounting bcache0

2020-07-17 Thread Drew Freiberger
Relevant package revisions for comment #50:
bcache-tools  1.0.8-2build1
snapd         2.45.1+18.04.2
systemd       237-3ubuntu10.41
udev          237-3ubuntu10.41

and snaps:
Name VersionRev   Tracking   Publisher   Notes
canonical-livepatch  9.5.5  95latest/stable  canonical✓  -
core 16-2.45.2  9665  latest/stable  canonical✓  core


[Bug 1828534] Re: [19.04][Queens -> Rocky] Upgrading to Rocky resulted in "Services not running that should be: designate-producer"

2020-05-19 Thread Drew Freiberger
Adding project charm-memcached per @chris.macnaughton's comment #24.

That charm could grow a cache-relation-changed hook to respond to
requests from other charms for encoding updates/service recycles.

** Also affects: charm-memcached
   Importance: Undecided
   Status: New


[Bug 1847280] Re: Include osc-placement plugin in openstack client package and snap

2019-12-11 Thread Drew Freiberger
Subscribing field-medium, as this is required for working around bugs
like #1821594.  While there is a workaround, it is not feasible in
environments that block pythonhosted.org/pip repositories at their
firewall.


[Bug 1847280] [NEW] Include osc-placement plugin in openstack client package and snap

2019-10-08 Thread Drew Freiberger
Public bug reported:

During troubleshooting of migration errors caused by placement API
database allocations diverging from actual nova allocation utilization
(resulting from bugs like lp#1821594), we need to be able to run
commands such as:

"openstack resource provider allocation delete $UUID"

The openstack resource provider command sets are provided by the osc-
placement plugin.

The current workaround is to create a venv with openstackclient and the
osc-placement plugin:

virtualenv -p python3 osc
source osc/bin/activate
pip3 install osc-placement openstackclient
./osc/bin/openstack resource provider.

It would be helpful for these commands to be included in the UCA and
the snap for OpenStack Queens and newer.

For reference: https://docs.openstack.org/osc-placement/queens/cli/index.html#resource-provider-allocation-delete
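
For illustration, once the plugin is packaged the flow would be ($UUID
is a placeholder for the consumer/server UUID):

openstack resource provider allocation show $UUID
openstack resource provider allocation delete $UUID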

** Affects: python-openstackclient (Ubuntu)
 Importance: Undecided
 Status: New


[Bug 1832265] Re: py3: inconsistent encoding of token fields

2019-09-10 Thread Drew Freiberger
I've updated the rocky/stein tags to verification-rocky/stein-done as
requested, per my comment #36.

** Tags removed: verification-rocky-needed verification-stein-needed
** Tags added: verification-done-rocky verification-done-stein

** Tags removed: verification-done-rocky verification-done-stein
** Tags added: verification-rocky-done verification-stein-done


[Bug 1832265] Re: py3: inconsistent encoding of token fields

2019-08-07 Thread Drew Freiberger
@kingttx  You will only need to add this on all of the keystone charm's
units.  Then run apt-get upgrade to install the updated keystone*
packages, and lastly run 'service apache2 restart' to reload the wsgi
scripts.

We have tested both bionic-rocky-proposed and bionic-stein-proposed
with success.


[Bug 1834213] Re: After kernel upgrade, nf_conntrack_ipv4 module unloaded, no IP traffic to instances

2019-07-26 Thread Drew Freiberger
Oddly, this did not happen on all hosts with this kernel version; it
was pseudo-random, hitting roughly 30-40% of them.  There must be
another variable at play.


[Bug 1832766] Re: LDAP group_members_are_ids = false fails in Rocky/Stein

2019-06-17 Thread Drew Freiberger
It seems that most of our clouds use LDAP backends which also use
group_members_are_ids = false, but we do not specify a
user_id_attribute.  This then falls back to the default of using the
"CN" attribute.

It just so happens that Active Directory entities are labeled with
cn=$sAMAccountName,$user_tree_dn, so the default code, which assumes
user_id_attribute is the first field of the DN, works in most
production cases.

I suggest renaming the title to "Specifying a user_id_attribute that is
not the distinguished name's primary key field in the LDAP database
fails when referencing group members if group_members_are_ids = false".


[Bug 1832766] Re: LDAP group_members_are_ids = false fails in Rocky/Stein

2019-06-17 Thread Drew Freiberger
@petevg, as for the field being passed in, it does contain a UID;
however, UID == user_name for keystone.  $user_id_attribute (uidNumber)
is the field keystone should be looking up within the DN's record to
equate to user_id in this code context.


[Bug 1832766] Re: LDAP group_members_are_ids = false fails in Rocky/Stein

2019-06-17 Thread Drew Freiberger
We do not believe it is a good idea in this production cloud to change
user_id_attribute to uid, as the user mapping table has already stored
the uidNumbers as the user_id_attribute, and this would lead to
database inconsistency unless we wiped the user table from the
database.

user_id_attribute is supposed to be like the passwd database's UID
field, and user_name_attribute is supposed to be your login, like
"dfreiberger".

Please see the documentation regarding posixAccount affinity for these
variables in the configuration guide:
https://docs.openstack.org/keystone/pike/admin/identity-integrate-with-ldap.html

The keystone LDAP integration documentation clearly states that it
expects a full DN in the group_member_attribute if
group_members_are_ids = false.  This means that the code must
dereference the DN uid=drew,ou=users,dc=mysite,dc=com and return the
user_id_attribute field whenever the function needs to reference the
user_id field.
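
For reference, the shape of the settings under discussion in a keystone
domain config (illustrative values only, not this cloud's actual
configuration):

[ldap]
user_tree_dn = ou=users,dc=mysite,dc=com
user_id_attribute = uidNumber
user_name_attribute = uid
group_member_attribute = member
group_members_are_ids = false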


[Bug 1832265] Re: py3: inconsistent encoding of token fields

2019-06-14 Thread Drew Freiberger
Positive outcome for initial testing of the ppa:james-page/rocky patch.
Contacting other users of this cloud for confirmation.


[Bug 1798184] Re: [SRU] PY3: python3-ldap does not allow bytes for DN/RDN/field names

2019-06-11 Thread Drew Freiberger
James, Corey,

UCA needs updating to provide 14.1.0 packages to bionic-rocky clouds.

ubuntu@juju-3e01e3-24-lxd-3:~$ sudo apt-cache policy keystone
keystone:
  Installed: 2:14.0.1-0ubuntu3~cloud0
  Candidate: 2:14.0.1-0ubuntu3~cloud0
  Version table:
 *** 2:14.0.1-0ubuntu3~cloud0 500
        500 http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-updates/rocky/main amd64 Packages
        100 /var/lib/dpkg/status
     2:13.0.2-0ubuntu1 500
        500 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages
     2:13.0.0-0ubuntu1 500
        500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages


[Bug 1798184] Re: [SRU] PY3: python3-ldap does not allow bytes for DN/RDN/field names

2019-06-11 Thread Drew Freiberger
Sorry for my assumption there.  I see that it was backported into the
UCA as 14.0.1-0ubuntu3.

I should note that this does not fix all keystone-ldap functionality
for rocky.

Please review lp#1832265 for additional code paths which may need
similar patching, as this cloud is running 14.0.1-0ubuntu3.


[Bug 1668771] Re: systemd-resolved negative caching for extended period of time

2019-05-23 Thread Drew Freiberger
This affects bionic OpenStack cloud environments when os-*-hostname is
configured for keystone and the keystone entry is temporarily deleted
from upstream DNS, or the upstream DNS fails and provides no record for
the lookup of keystone.endpoint.domain.com.

We then have to flush all caches across the cloud once the DNS issue is
resolved, rather than auto-healing after 60 seconds as we would if we
were running nscd with negative-ttl set to 60 seconds.

Ultimately, a settable negative TTL would be ideal, and the ability to
not cache negative hits at all would also be useful.  The only
workarounds for now are to not use caches or to operationally flush
caches as needed.
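
For the record, the manual flush we run today is (systemd-resolve is
the bionic-era client; newer releases use resolvectl flush-caches):

sudo systemd-resolve --flush-caches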


[Bug 1825843] Re: systemd issues with bionic-rocky causing nagios alert and can't restart daemon

2019-04-22 Thread Drew Freiberger
Workaround to get a temporary fix into the radosgw init script.  Note:
this will potentially restart all your rgw units at once; you may want
to run one unit at a time.

juju run --application ceph-radosgw 'perl -pi -e "s/^PREFIX=.*/PREFIX=client.rgw./" /etc/init.d/radosgw; systemctl daemon-reload; systemctl restart radosgw'
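
To do one unit at a time instead, target units individually (the unit
number is an example):

juju run --unit ceph-radosgw/0 'perl -pi -e "s/^PREFIX=.*/PREFIX=client.rgw./" /etc/init.d/radosgw; systemctl daemon-reload; systemctl restart radosgw'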


[Bug 1819453] Re: keystone-ldap TypeError: cannot concatenate 'str' and 'NoneType' object

2019-03-19 Thread Drew Freiberger
This is only confirmed on xenial Ocata.


When querying the domain, as keystone loops through the users returned
from the all-users LDAP query, it tries to create mappings in keystone
for any new users.

https://github.com/openstack/keystone/blob/stable/ocata/keystone/identity/core.py#L599

This hits the method
keystone.identity.mapping_backends.sql.create_id_mapping().  If the
hash of the domain and the user data already exists in id_mapping, it
tosses the exception:

https://github.com/openstack/keystone/blob/stable/ocata/keystone/identity/mapping_backends/sql.py#L80

It then tries to fall back to querying the public_id of the existing
local_entity, which doesn't exist, and hence returns None.  However, if
it would just return the public_id that was tossed as a duplicate at
this line, it could work around the issue.

https://github.com/openstack/keystone/blob/stable/ocata/keystone/identity/mapping_backends/sql.py#L80

This is the duplicate being detected; why not just return that
duplicate ID rather than a reverse lookup of a potentially non-existent
object?


Basically, this customer deletes entries from LDAP, then we delete them
from the local_user and user tables, and we sometimes forget to remove
them from the id_mapping table as well.  This is done manually because
there's no way to delete a keystone user without the user still
existing in the LDAP backend (best practice being to disable the user's
accountActive flag and leave them in LDAP).

So, operator error while working around one bug is creating what
appears to be a new bug when the LDAP user is recreated.

When we queried the id_mapping table, we found 402 entries that no
longer have a matching entry for the domain in the nonlocal_user or
user tables.  These 402 entries could not be re-created as new LDAP
users.

To reproduce:

1. Create an LDAP domain with user foo and query the openstack domain
   so user foo gets a user entry in keystone.
2. Remove user foo from the user and nonlocal_user tables in the mysql
   database, leaving the entry in the id_mapping table.
3. Try to query the domain (openstack user list --domain ); user foo
   should cause a traceback when keystone tries to recreate the
   id_mapping.

Ultimately, I believe we have to clean up the id_mapping table;
however, I believe the invalid assumption at the line below is still
worth discussion:
https://github.com/openstack/keystone/blob/stable/ocata/keystone/identity/mapping_backends/sql.py#L81


[Bug 1819453] Re: keystone-ldap TypeError: cannot concatenate 'str' and 'NoneType' object

2019-03-19 Thread Drew Freiberger
Here's a query I used to determine that we have entries in the
id_mapping table that don't have a matching local_entity in the
user/nonlocal_user tables:

select * from id_mapping
 where public_id not in (select id_mapping.public_id
                           from id_mapping
                           join user on id_mapping.public_id = user.id);
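
A cleanup sketch built on the same join, for discussion only (verify
the SELECT output and back up the database before running any DELETE):

delete from id_mapping where public_id not in (select user.id from user);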


[Bug 1813007] Re: Unable to install new flows on compute nodes when having broken security group rules

2019-01-23 Thread Drew Freiberger
Note: this is a Xenial Queens install.


[Bug 1813007] Re: Unable to install new flows on compute nodes when having broken security group rules

2019-01-23 Thread Drew Freiberger
Found that https://paste.ubuntu.com/p/s5Z4DNJspV/ has gone missing.
A new copy is here: https://pastebin.ubuntu.com/p/g7Q3nFmhWN/

The thing to note is that both rules have the same remote_group_id,
40ee2790-282f-4c6a-8f00-d5ee0b8b66d7, and one has
port_range_min/max=None (in the code, None is replaced with 1 for
port_range_min and 65535 for port_range_max).

The code then translated this remote_group_id into id 38, which in the
cur_conj.remove(conj_id) call ended up failing to delete due to there
being two different entities with the key 38, hence the KeyError.  If I
recall correctly from looking at this last week, cur_conj is an array
of tuples of ('', conj_id).


[Bug 1772671] Re: Kernel produces empty lines in /proc/PID/status

2018-08-24 Thread Drew Freiberger
Never mind.  I see the patch is a kernel fix... I will upgrade my host.


[Bug 1772671] Re: Kernel produces empty lines in /proc/PID/status

2018-08-24 Thread Drew Freiberger
This needs to be backported to trusty for users of
linux-image-generic-lts-xenial.


[Bug 1759787] Re: Very inaccurate TSC clocksource with kernel 4.13 on selected CPUs

2018-07-25 Thread Drew Freiberger
** Tags added: canonical-bootstack


[Bug 1757277] Re: soft lockup from bcache leading to high load and lockup on trusty

2018-04-19 Thread Drew Freiberger
Joseph,

I'm currently testing a 4.15.0-13 kernel from the xenial-16.04-edge
path on these hosts.  I just had the issue exhibit itself before the
kernel change, so we should know within a couple of days whether this
helps.  Unfortunately, the logs for this system beyond those already
shared are not publicly available.


[Bug 1764848] Re: Upgrade to ca-certificates to 20180409 causes ca-certificates.crt to be removed if duplicate certs found

2018-04-17 Thread Drew Freiberger
Perhaps a proper fix is for ubuntu-sso-client to release a new
python-ubuntu-sso-client package in bionic that doesn't include this
UbuntuOne-Go_Daddy_Class_2_CA.pem, now that the ca-certificates package
ships the CA.

However, I'd still like to see duplicate certs not cause
ca-certificates.crt to be deleted.

** Also affects: ubuntu-sso-client
   Importance: Undecided
   Status: New


[Bug 1764848] [NEW] Upgrade to ca-certificates to 20180409 causes ca-certificates.crt to be removed if duplicate certs found

2018-04-17 Thread Drew Freiberger
Public bug reported:

The certificate
/usr/share/ca-certificates/mozilla/Go_Daddy_Class_2_CA.crt in the
ca-certificates package conflicts with
/etc/ssl/certs/UbuntuOne-Go_Daddy_Class_2_CA.pem from the
python-ubuntu-sso-client package.

This results in the postinst trigger for ca-certificates to remove the
/etc/ssl/certs/ca-certificates.crt file.  This happens because the
postinst trigger runs update-ca-certificates --fresh.

If I run update-ca-certificates without the --fresh flag, the conflict
is a non-issue and the ca-certificates.crt file is restored.
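
For anyone who hits this, recovery is just that rebuild without the
flag (run as root):

sudo update-ca-certificates

which tolerates the duplicate and regenerates
/etc/ssl/certs/ca-certificates.crt.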

If I understand some of the postinst code correctly, --fresh should only
be run if called directly or if upgrading from a ca-certificates version
older than 2011.

I'm running bionic with the daily -updates channel and ran into this
this morning due to the release of ca-certificates version 20180409.

** Affects: ubuntu-sso-client
 Importance: Undecided
 Status: New

** Affects: ca-certificates (Ubuntu)
 Importance: Undecided
 Status: New


[Bug 1757277] [NEW] soft lockup from bcache leading to high load and lockup on trusty

2018-03-20 Thread Drew Freiberger
Public bug reported:

I have an environment with Dell R630 servers with RAID controllers with
two virtual disks and 22 passthrough devices.  2 SAS SSDs and 20 HDDs
are set up in 2 bcache cache sets, with the resulting 20 mounted xfs
filesystems running on bcache backing an 11-node swift cluster (one
zone has 1 fewer node).  Two of the zones have these nodes as described
above, and they appear to be exhibiting soft lockups in the bcache
thread of the kernel, causing other kernel threads to go into i/o
blocking state and keeping processes on any bcache device from making
progress.  Disk access to the virtual disks mounted without bcache is
still possible when this lockup occurs.

https://pastebin.ubuntu.com/p/mtn47QqBJ3/

There are several soft lockup messages found in dmesg, and many of the
dump stacks are locked inside bch_writeback_thread():

static int bch_writeback_thread(void *arg)
{
        [...]
        while (!kthread_should_stop()) {
                down_write(&dc->writeback_lock);
                [...]
        }

One core dump is found when kswapd is doing reclaim on the xfs inode
cache:

__xfs_iflock(
        struct xfs_inode *ip)
{
        do {
                prepare_to_wait_exclusive(wq, &wait.wait, TASK_UNINTERRUPTIBLE);
                if (xfs_isiflocked(ip))
                        io_schedule();
        } while (!xfs_iflock_nowait(ip));


- Possible fix commits:

1). 9baf30972b55 bcache: fix for gc and write-back race
https://www.spinics.net/lists/linux-bcache/msg04713.html


- Related discussions:

1). Re: [PATCH] md/bcache: Fix a deadlock while calculating writeback rate
https://www.spinics.net/lists/linux-bcache/msg04617.html

2). Re: hang during suspend to RAM when bcache cache device is attached
https://www.spinics.net/lists/linux-bcache/msg04636.html

We are running trusty/mitaka swift storage on these nodes with 4.4.0-111
kernel (linux-image-generic-lts-xenial).

** Affects: linux (Ubuntu)
 Importance: Undecided
 Status: Incomplete


** Tags: canonical-bootstack trusty


[Bug 1750875] [NEW] charm build dereferences symlinks in built charm

2018-02-21 Thread Drew Freiberger
Public bug reported:

Running charm 2.2.2/charm-tools 2.2.3 via the latest stable snap, I
find that the symlinks in the hooks and actions directories of my
charms end up dereferenced, such that the resulting charm has X copies
of the file that was originally symlinked to from entries such as
hooks/config-changed or actions/pause.

As an example:

Charm Source:

drew@grimoire:~/src/charm-hacluster$ ls -l hooks
total 156
drwxr-xr-x 6 drew drew  4096 Oct 18 23:11 charmhelpers
lrwxrwxrwx 1 drew drew     8 Jan 31  2017 config-changed -> hooks.py
lrwxrwxrwx 1 drew drew     8 Jan 31  2017 hanode-relation-changed -> hooks.py
lrwxrwxrwx 1 drew drew     8 Jan 31  2017 hanode-relation-joined -> hooks.py
lrwxrwxrwx 1 drew drew     8 Jan 31  2017 ha-relation-changed -> hooks.py
lrwxrwxrwx 1 drew drew     8 Jan 31  2017 ha-relation-joined -> hooks.py
-rwxrwxr-x 1 drew drew 17449 Dec 22 16:23 hooks.py
-rwxr-xr-x 1 drew drew   421 Oct 18 23:11 install
lrwxrwxrwx 1 drew drew     8 Oct 18 23:11 install.real -> hooks.py
-rw-r--r-- 1 drew drew  2318 Jan 31  2017 maas.py
lrwxrwxrwx 1 drew drew     8 Jan 31  2017 nrpe-external-master-relation-changed -> hooks.py
lrwxrwxrwx 1 drew drew     8 Jan 31  2017 nrpe-external-master-relation-joined -> hooks.py
-rw-r--r-- 1 drew drew  6336 Oct 18 23:11 pcmk.py
lrwxrwxrwx 1 drew drew     8 Jan 31  2017 start -> hooks.py
lrwxrwxrwx 1 drew drew     8 Jan 31  2017 stop -> hooks.py
lrwxrwxrwx 1 drew drew     8 Jan 31  2017 upgrade-charm -> hooks.py
-rw-r--r-- 1 drew drew 30003 Oct 18 23:11 utils.py

Charm build output:
drew@grimoire:~/src/charm-hacluster$ ls -l /home/drew/src/charms/builds/hacluster/hooks
total 420
drwxr-xr-x 6 drew drew  4096 Feb 21 11:32 charmhelpers
-rwxrwxr-x 1 drew drew 17449 Dec 22 16:23 config-changed
-rwxrwxr-x 1 drew drew 17449 Dec 22 16:23 hanode-relation-changed
-rwxrwxr-x 1 drew drew 17449 Dec 22 16:23 hanode-relation-joined
-rwxrwxr-x 1 drew drew 17449 Dec 22 16:23 ha-relation-changed
-rwxrwxr-x 1 drew drew 17449 Dec 22 16:23 ha-relation-joined
-rwxrwxr-x 1 drew drew 17449 Dec 22 16:23 hooks.py
-rwxr-xr-x 1 drew drew   421 Oct 18 23:11 install
-rwxrwxr-x 1 drew drew 17449 Dec 22 16:23 install.real
-rw-r--r-- 1 drew drew  2318 Jan 31  2017 maas.py
-rwxrwxr-x 1 drew drew 17449 Dec 22 16:23 nrpe-external-master-relation-changed
-rwxrwxr-x 1 drew drew 17449 Dec 22 16:23 nrpe-external-master-relation-joined
-rw-r--r-- 1 drew drew  6336 Oct 18 23:11 pcmk.py
-rwxrwxr-x 1 drew drew 17449 Dec 22 16:23 start
-rwxrwxr-x 1 drew drew 17449 Dec 22 16:23 stop
-rwxrwxr-x 1 drew drew 17449 Dec 22 16:23 upgrade-charm
-rw-r--r-- 1 drew drew 30003 Oct 18 23:11 utils.py


This both makes charm authoring/debugging more difficult and adds unnecessary 
size to the built charm.
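
A quick way to demonstrate the regression with the trees listed above:

find ~/src/charm-hacluster/hooks -maxdepth 1 -type l | wc -l          # symlinks in the source
find ~/src/charms/builds/hacluster/hooks -maxdepth 1 -type l | wc -l  # 0 after charm build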

** Affects: charm-tools (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: canonical-bootstack


[Bug 1740892] Re: corosync upgrade on 2018-01-02 caused pacemaker to fail

2018-01-03 Thread Drew Freiberger
So, it was NOT a pacemaker restart; it was pacemaker attempting its
standard dropped-connection reconnect to the restarted corosync with an
incompatible library for CPG API access.


[Bug 1740892] Re: corosync upgrade on 2018-01-02 caused pacemaker to fail

2018-01-03 Thread Drew Freiberger
@nacc:

The error condition was that when corosync restarted, pacemaker
disconnected (as is normal) and then tried reconnecting, but on
reconnect it ran into this error:

error: pcmk_cpg_dispatch: Connection to the CPG API failed: Library error (2)

So, pacemaker is trying to re-handshake with the revived corosync, and
when it does, the API fails due to a library error.  Given that it's
the CPG API and the libcpg4 package was updated, I'd guess that a patch
added to the new libcpg4 was incompatible with the previous version of
libcpg4 that was memory-linked into the running pacemaker binary.  Once
we restarted the dead pacemaker service, pacemaker loaded the new
library and was able to connect to the CPG API as normal.

I don't know whether that's a library failure or a change to the CPG
API that was not version-compatible with the version of libcpg4 in use
from whenever the now-dead pacemaker had been started.

The issue occurred in trusty and xenial clouds across the Mitaka and
Ocata cloud archives.

** Changed in: corosync (Ubuntu)
   Status: Incomplete => In Progress


[Bug 1736742] Re: New Prometheus snap refresh breaks exporter

2017-12-06 Thread Drew Freiberger
** Summary changed:

- New Prometheus snap refresh brakes exporter
+ New Prometheus snap refresh breaks exporter


[Bug 1724529] Re: ceph health output flips between OK and WARN all the time

2017-11-30 Thread Drew Freiberger
Also of interest to anyone running into this: you can check the active
daemon's injected args with:

ceph --admin-daemon /var/run/ceph/ceph-mon* config show | grep


ex:
ceph --admin-daemon /var/run/ceph/ceph-mon* config show | grep mon_pg_warn_max_per_osd
    "mon_pg_warn_max_per_osd": "300",


[Bug 1724529] Re: ceph health output flips between OK and WARN all the time

2017-11-30 Thread Drew Freiberger
We are seeing this same flapping on another cloud.  One node had
rebooted yesterday when the HEALTH_WARN flapping began; this cloud is
running the trusty/mitaka cloud archive 10.2.7 ceph package.  That
server is giving the health_warn about too many pgs.  Another server,
rebooted 7 days ago, was not giving the warning, and a third server was
still running a ceph-mon from March at 10.2.3.  We had to kill that
ceph-mon (as /etc/init.d/ceph restart mon did not work), and it is now
running the 10.2.7 mon.  "ceph tell osd.* version" shows some OSDs
running 10.2.6 and some running 10.2.7.  After a restart of the third
mon (up for 10 days; this also required the kill command), the error is
no longer flapping.

It seems the /etc/init.d/ceph command is not properly allowing mon
restarts (on the ceph charm, not the ceph-mon charm) when OSDs are
present (though I haven't tested without OSDs present).  Having to kill
the process with a standard signal to get it to recycle is odd.
Perhaps it's being blocked by an issue on the init daemon config side.

I'm guessing what actually happened is that someone did a "ceph tell
mon.*" to ignore the PG counts, and then the restarts caused the
setting to be dropped.  This may be something to re-open against the
ceph and ceph-mon charms to allow config options for ceph health_warn
settings, or we can close this bug and open another.

The flapping makes so much more sense in the context of a "ceph tell
mon.*" having been run in the past.

We've got notes in a related case on another cloud to work around this
with the config-flags setting in the charm, but I would love to see
more of these operational monitoring settings exposed by the charm
directly rather than relying on config-flags.

Here's the command to change on the live ceph-mons:
- ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd=900'

Here's the command to configure the juju ceph charm to persist the setting:
- juju set ceph config-flags='{osd: {"mon pg warn max per osd": 900}}'


[Bug 1732575] [NEW] GSpice-CRITICAL **: send_key: assertion 'scancode != 0' failed over ssh-tunnelled X display back to Wayland

2017-11-15 Thread Drew Freiberger
Public bug reported:

I have two hosts running artful 17.10 with up-to-date packages.  My
"source" desktop is running Wayland, and my destination desktop is
running Xorg due to an nvidia card.

From my source desktop, I'm able to run virt-viewer -c
qemu+ssh://remote-artful/system, connect to the remote hypervisor and VM
console, and type on the keyboard.

From my source desktop, I'm unable to type into the virtual machine if I run:
ssh -X remote-artful
and then run virt-viewer on the remote-artful host, displaying back to the
ssh-tunnelled Wayland display.

If I switch my source desktop from Wayland to Xorg, I can type into the
VM with the ssh -X remote-artful -> virt-viewer command.
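
Condensed, the two paths look like this (hostname as above; the second
form runs virt-viewer over the same ssh -X tunnel):

  virt-viewer -c qemu+ssh://remote-artful/system   # direct: keystrokes reach the guest
  ssh -X remote-artful virt-viewer                 # tunnelled back to Wayland: send_key fails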

The errors scrolling in the remote virt-viewer's STDERR are:

drew@remote-artful:~$ virt-viewer

(virt-viewer:6895): GSpice-WARNING **: PulseAudio context failed Connection refused

(virt-viewer:6895): GSpice-WARNING **: pa_context_connect() failed: Connection refused

(virt-viewer:6895): vnc-keymap-WARNING **: Unknown keycode mapping '(unnamed)'.
Please report to gtk-vnc-l...@gnome.org
including the following information:

  - Operating system
  - GDK build
  - X11 Server
  - xprop -root
  - xdpyinfo


(virt-viewer:6895): Gtk-WARNING **: Allocating size to SpiceDisplay
0x55be15b0c350 without calling gtk_widget_get_preferred_width/height().
How does the code know the size to allocate?

(virt-viewer:6895): GSpice-CRITICAL **: send_key: assertion 'scancode != 0' failed

[previous message repeated 7 more times]


The info requested for the gtk-vnc-l...@gnome.org report:
http://pastebin.ubuntu.com/25971653/

** Affects: virt-viewer (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1732575

Title:
  GSpice-CRITICAL **: send_key: assertion 'scancode != 0' failed over
  ssh-tunnelled X display back to Wayland

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/virt-viewer/+bug/1732575/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1719770] Re: hypervisor stats issue after charm removal if nova-compute service not disabled first

2017-10-26 Thread Drew Freiberger
I'm still working on reproducing this.  While attempting reproduction, I
had an environment with 3 hosts, a dummy ubuntu charm on each, and then
added nova-compute to all 3.  I removed the nova-compute unit from the
third host and still saw stats for it in hypervisor-stats.  There may be
some cleanup missing in the charm-nova-compute relation-departed hooks
to disable/remove the service.

http://pastebin.ubuntu.com/25824195/

I think to reproduce, it might require a full remove-machine, then
add-unit ubuntu --to the new machine, then add-unit nova-compute --to
the same machine.
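
A sketch of that suspected sequence (the machine id and placeholder are
illustrative, not from the original report):

  juju remove-machine 7
  juju add-unit ubuntu --to <re-added-machine-id>
  juju add-unit nova-compute --to <re-added-machine-id>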

Ed (~dosaboy) may be working on this reproduction.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1719770

Title:
  hypervisor stats issue after charm removal if nova-compute service not
  disabled first

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-compute/+bug/1719770/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1727063] Re: Pacemaker package upgrades stop but fail to start pacemaker resulting in HA outage

2017-10-26 Thread Drew Freiberger
>From a high level, it appears that invoke-rc.d script used for
compatibility falls back to checking for /etc/rcX.d symlinks for a
"policy" check if there is no $POLICYHELPER installed.  Perhaps the
actual shortcoming is not having the policy-rc.d installed to prefer
systemd over init.d on Xenial.
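
A minimal sketch of such a helper, assuming it is installed as
/usr/sbin/policy-rc.d (mode 0755) and simplifying the real invoke-rc.d
calling convention, which may pass options before the initscript id
(exit 0 allows an action, exit 101 forbids it):

  #!/bin/sh
  # Hypothetical policy helper: defer to systemd's enabled/disabled
  # state instead of the /etc/rcX.d S/K symlinks.
  service="$1"
  action="$2"
  case "$action" in
    start|restart)
      # Allow start/restart only when the unit is enabled in systemd.
      if systemctl is-enabled --quiet "${service}.service" 2>/dev/null; then
        exit 0     # allowed
      fi
      exit 101     # forbidden
      ;;
    *)
      exit 0       # allow stop, status, etc.
      ;;
  esac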

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1727063

Title:
  Pacemaker package upgrades stop but fail to start pacemaker resulting
  in HA outage

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-hacluster/+bug/1727063/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1727063] Re: Pacemaker package upgrades stop but fail to start pacemaker resulting in HA outage

2017-10-25 Thread Drew Freiberger
Re: init-system-helpers, I noticed the oddity of the init script on a
systemd system as well, and found that there's a systemd hack in
/lib/lsb/init-functions.d/40-systemd that allows for multi-startup
compatibility.  I believe invoke-rc.d should check the systemd
"enabled/disabled" state instead of just the S/K links in /etc/rcX.d.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1727063

Title:
  Pacemaker package upgrades stop but fail to start pacemaker resulting
  in HA outage

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-hacluster/+bug/1727063/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1719770] Re: hypervisor stats issue after charm removal if nova-compute service not disabled first

2017-09-27 Thread Drew Freiberger
In models.py, both in Mitaka and in master, I've found that the relation
between ComputeNode and Service is using the following join in the
Service context:

primaryjoin='and_(Service.host == Instance.host,'
'Service.binary == "nova-compute",'
'Instance.deleted == 0)',

In my case, I've redeployed a deleted node with the same hostname
(Service.host), so this join relates a deleted ComputeNode.host entry to
the non-deleted Service.host entry.
If I look at both my compute_nodes and services tables, it appears they
should potentially be joined on the "id" field, rather than the "host"
field, at least for this specific query, but this potentially breaks the
Service object relation model for other query contexts such as instances
running on a hypervisor.
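
As a hedged illustration of the stale join, assuming the standard nova
schema (compute_nodes and services tables with soft-delete "deleted"
columns; database name and credentials will vary per deployment):

  # List deleted compute_nodes rows that still match a live nova-compute
  # service purely by hostname:
  mysql nova -e "
    SELECT cn.id, cn.host, cn.deleted AS cn_deleted, s.id AS service_id
    FROM compute_nodes cn
    JOIN services s ON s.host = cn.host AND s.binary = 'nova-compute'
    WHERE s.deleted = 0 AND cn.deleted != 0;"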

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1719770

Title:
  hypervisor stats issue after charm removal if nova-compute service not
  disabled first

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-compute/+bug/1719770/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1591411] Re: systemd-logind must be restarted every ~1000 SSH logins to prevent a ~25 second delay

2017-09-18 Thread Drew Freiberger
Trusty versions of packages affected (I see there is a systemd update
229-4ubuntu19.  Does this include the backported fixes from v230/v231
mentioned in comment #4?):

ii  dbus                    1.10.6-1ubuntu3.1  amd64  simple interprocess messaging system (daemon and utilities)
ii  libdbus-1-3:amd64       1.10.6-1ubuntu3.1  amd64  simple interprocess messaging system (library)
ii  libdbus-glib-1-2:amd64  0.106-1            amd64  simple interprocess messaging system (GLib-based shared library)
ii  python3-dbus            1.2.0-3            amd64  simple interprocess messaging system (Python 3 interface)
ii  libpam-systemd:amd64    229-4ubuntu12      amd64  system and service manager - PAM module
ii  libsystemd0:amd64       229-4ubuntu12      amd64  systemd utility library
ii  python3-systemd         231-2build1        amd64  Python 3 bindings for systemd
ii  systemd                 229-4ubuntu12      amd64  system and service manager
ii  systemd-sysv            229-4ubuntu12      amd64  system and service manager - SysV links

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1591411

Title:
  systemd-logind must be restarted every ~1000 SSH logins to prevent a
  ~25 second delay

To manage notifications about this bug go to:
https://bugs.launchpad.net/dbus/+bug/1591411/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1591411] Re: systemd-logind must be restarted every ~1000 SSH logins to prevent a ~25 second delay

2017-09-18 Thread Drew Freiberger
** Tags added: canonical-bootstack canonical-is

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1591411

Title:
  systemd-logind must be restarted every ~1000 SSH logins to prevent a
  ~25 second delay

To manage notifications about this bug go to:
https://bugs.launchpad.net/dbus/+bug/1591411/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1591411] Re: systemd-logind must be restarted every ~1000 SSH logins to prevent a ~25 second delay

2017-09-18 Thread Drew Freiberger
Can we get this backported to trusty?

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1591411

Title:
  systemd-logind must be restarted every ~1000 SSH logins to prevent a
  ~25 second delay

To manage notifications about this bug go to:
https://bugs.launchpad.net/dbus/+bug/1591411/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1624997] Re: live-migration fails because of " Host key verification failed"

2017-03-30 Thread Drew Freiberger
James,

I just filed a related bug which narrows this down: it happens if
ceph-osd configures its ssh key exchange before the nova-compute unit is
added to the host.

https://bugs.launchpad.net/charm-nova-compute/+bug/1677707

This is also a Xenial Mitaka cloud; these nodes were added to a
juju-deployer environment (Bootstack) manually by adding the ubuntu charm
to provision the host, then ceph-osd via add-unit, then nova-compute via
add-unit, as sketched below.
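
A sketch of the ordering that triggers it (machine id illustrative):

  juju add-unit ubuntu --to 12          # provision the host
  juju add-unit ceph-osd --to 12        # ceph-osd sets up its ssh key exchange first
  juju add-unit nova-compute --to 12    # nova-compute lands after the keys exist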

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1624997

Title:
  live-migration fails because of " Host key verification failed"

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-compute/+bug/1624997/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1620578] Re: Screen contents "flickering" after screen turned back on

2017-03-22 Thread Drew Freiberger
I am also seeing this on yakkety with three displays.  As I move the
mouse, the background flickers on two or three of the monitors; it tends
toward the lower-right more than the upper-left.  A right-click on the
desktop (bringing up the desktop sub-menu) clears the error immediately
for me until the next sleep/wake cycle.  This does not happen when I
only use my internal laptop screen.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1620578

Title:
  Screen contents "flickering" after screen turned back on

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/unity/+bug/1620578/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs