Public bug reported:
When attempting to craft a command to query and/or restart all services
matching a pattern like 'something-*-else', such as 'neutron-*-agent' to match
[neutron-l3-agent, neutron-openvswitch-agent, and neutron-dhcp-agent],
I've found that no services are returned when the glob falls in the middle of
a service name.
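For illustration, this is the kind of query I mean (a sketch assuming
systemd's systemctl; the unit names are only examples):
  # a trailing glob matches as expected
  systemctl list-units 'neutron-*'
  # but a glob in the middle of the name returns no units
  systemctl list-units 'neutron-*-agent'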
We should revisit whether this is still an issue with the Focal version of
RabbitMQ.
--
I also experienced this on 2 of 3 units of designate running 21.01
charms upgrading from cloud:bionic-stein to cloud:bionic-train with
action-managed-upgrade=false.
Interestingly, the unit that did not exhibit this race was not the
leader.
I've got an SOSreport available if interested.
--
Added monitoring elements to charmhelpers here:
https://github.com/juju/charm-helpers/pull/601
--
Added a PR for layer:ovn, which will later need to be re-synced into
charm-ovn-chassis.
https://github.com/openstack-charmers/charm-layer-ovn/pull/47
** Also affects: charm-helpers
Importance: Undecided
Status: New
--
Adding OVS-related charms which should be instrumented with an additional
NRPE check to alert on incompatible interfaces configured in the switch.
--
Here's a reproducer:
  juju deploy ubuntu   # onto a VM or metal; can't repro with lxd
  juju ssh ubuntu/0 sudo apt-get install openvswitch-switch -y
  juju ssh ubuntu/0 'sudo ovs-vsctl add-br br0; sudo ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 type=dpdk options:dpdk-devargs=:01:00.0'
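To observe the failure after the add-port (a sketch; exact messages vary by
OVS version):
  sudo ovs-vsctl show                      # check whether dpdk-p0 reports an error
  sudo journalctl -u ovs-vswitchd | tail   # look for DPDK-related errors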
Verified that the bionic-ussuri-proposed python3-openvswitch package resolved
the issue in a production environment.
** Tags removed: verification-ussuri-needed
** Tags added: verification-ussuri-done
--
** Merge proposal linked:
https://code.launchpad.net/~afreiberger/charm-prometheus-grok-exporter/+git/charm-prometheus-grok-exporter/+merge/401421
--
Still seeing this in bionic 18.04.3 with the following kernel and procps
package:
  ii  procps  2:3.3.12-3ubuntu1.2  amd64  /proc file system utilities
  kernel 5.4.0-62-generic
--
After investigating a bit more, I also find that libvirtd.service was
started at 03:19, and it appears the pollster error leads back to the code
below failing to connect to libvirt:
https://github.com/openstack/ceilometer/blob/c0632ae9e0f2eecbf0da9578c88f7c85727244f8/ceilometer/compute/virt/libvirt/inspector
This is a valid bug for the cloud:xenial-queens UCA pocket.
I've tested that the fix for this issue is included in the bionic
repositories in version 0.1.7-2ubuntu1:
  spice-html5  0.1.7-2ubuntu1
I'm requesting this be backported into the xenial-queens cloud
archive.
** Also affe
I'm also seeing this affecting neutron-gateway on focal, with the
config-changed hook hanging at:
  ovs-vsctl -- --may-exist add-br br-int -- set bridge br-int external-ids:charm-neutron-gateway=managed
This is during LMA charm testing, which is performed on the LXD provider at
the moment.
** Also affects
Will this fix also address TLS 1.2 enablement on noVNC console proxies, or is
it only valid for SPICE consoles? Since it's a websockify update, I'm
assuming it should work for both.
--
The bionic-ussuri package has the retries set to 1. The time from machine
start to vault unseal was about 18 hours for me. We should have this set to
heal for up to 5 days after machine start.
I'm almost wondering if vaultlocker-decrypt also needs its retries
increased.
Here's a workaround I've fo
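For illustration only, one generic way to widen a unit's retry window is a
systemd drop-in (the unit name below is an assumption, not taken from the
charm):
  # hypothetical override letting the decrypt unit retry every 2 minutes indefinitely
  sudo mkdir -p /etc/systemd/system/vaultlocker-decrypt@.service.d
  printf '[Unit]\nStartLimitIntervalSec=0\n[Service]\nRestart=on-failure\nRestartSec=120\n' | \
      sudo tee /etc/systemd/system/vaultlocker-decrypt@.service.d/override.conf
  sudo systemctl daemon-reload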
This is still an issue with bionic-ussuri ceph
15.2.3-0ubuntu0.20.04.2~cloud0
--
Adding Bootstack to watch this bug, as we are taking ownership of
charm-iscsi-connector, which would be ideal to test within LXD confinement
but requires a VM or metal for functional tests.
--
I'm having similar issues to this bug and those described by Dmitrii in
https://bugs.launchpad.net/charm-ceph-osd/+bug/1883585, specifically
comment #2 and the last comment.
It appears that if I run 'udevadm trigger --subsystem-match=block', I
get my by-dname devices for bcaches, but if I run udeva
Relevant package revisions for comment #50:
  bcache-tools  1.0.8-2build1
  snapd         2.45.1+18.04.2
  systemd       237-3ubuntu10.41
  udev          237-3ubuntu10.41
and snaps:
  Name  Version
Adding project charm-memcached per @chris.macnaughton's comment #24.
That charm could grow a cache-relation-changed hook to respond to requests
from other charms for encoding updates or a service recycle.
** Also affects: charm-memcached
Importance: Undecided
Status: New
--
Subscribing field-medium, as this is required for working around bugs
like #1821594. While there's a workaround, it is not feasible in
environments whose firewalls block pythonhosted.org/pip repositories.
--
Public bug reported:
During troubleshooting of migration errors caused by placement API database
allocations diverging from actual nova allocation utilization (resulting
from bugs like lp#1821594), we need to be able to run commands such as:
  "openstack resource provider allocation delete $UUID
I've updated the rocky/stein tags to verification-rocky/stein-done as
requested per my comment #36.
** Tags removed: verification-rocky-needed verification-stein-needed
** Tags added: verification-done-rocky verification-done-stein
** Tags removed: verification-done-rocky verification-done-stein
@kingttx: You will only need to add this on all of the keystone charm's
units. Then run apt-get upgrade to install the updated keystone*
packages, and lastly run 'service apache2 restart' to reload
the WSGI scripts.
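A sketch of running those steps across every keystone unit at once (assuming
the fix source is already configured on each unit):
  juju run --application keystone 'apt-get update && apt-get -y upgrade && service apache2 restart'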
We have tested both bionic-rocky-proposed and bionic-stein-proposed with
Oddly, this did not happen on all hosts with this kernel version; it was
pseudo-random, affecting roughly 30-40% of hosts. There must be another
variable at play.
--
It seems that most of our clouds use LDAP backends, which also use
group_members_are_ids = false, but we do not specify a
user_id_attribute. This then falls back to the default of using the
"CN" attribute.
It just so happens that Active Directory entities are labeled with
cn=$sAMAccountName,$user
@petevg, as for the field being passed in, it does contain a UID;
however, UID == user_name for keystone. $user_id_attribute (uidNumber)
is the field keystone should be looking for within the DN's record to
equate to user_id in the code context.
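For reference, a sketch of the relevant domain-specific keystone config
(file path and values illustrative; the option names are stock keystone
[ldap] settings):
  # e.g. /etc/keystone/domains/keystone.<domain>.conf
  [ldap]
  user_id_attribute = uidNumber
  group_members_are_ids = false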
--
We do not believe it is a good idea in this production cloud to change
the user_id_attribute to uid, as the user mapping table has already
stored the uidNumbers as the user_id_attribute, and this would lead to
database inconsistency unless we wiped the user table from the database.
user_id_attri
Positive outcome for initial testing of the ppa:james-page/rocky patch.
Contacting other users of this cloud for confirmation.
--
James, Corey,
The UCA needs updating to provide 14.1.0 packages to bionic-rocky clouds.
  ubuntu@juju-3e01e3-24-lxd-3:~$ sudo apt-cache policy keystone
  keystone:
    Installed: 2:14.0.1-0ubuntu3~cloud0
    Candidate: 2:14.0.1-0ubuntu3~cloud0
    Version table:
   *** 2:14.0.1-0ubuntu3~cloud0 500
          500 ht
Sorry for my assumption there. I see that it was backported into the
UCA as 14.0.1-0ubuntu3.
I should add that this does not fix all keystone-ldap functionality
for rocky.
Please review lp#1832265 for additional code paths which may need
similar patching, as this cloud is running 14.0
This affects bionic OpenStack cloud environments when os-*-hostname is
configured for keystone and the keystone entry is temporarily deleted
from upstream DNS, or upstream DNS fails, providing no record for the
lookup of keystone.endpoint.domain.com.
We then have to flush all caches across the
Workaround to get a temporary fix into the radosgw init script. Note: this
will potentially restart all your rgw units at once; you may want to run
one unit at a time.
  juju run --application ceph-radosgw 'perl -pi -e "s/^PREFIX=.*/PREFIX=client.rgw./" /etc/init.d/radosgw; systemctl daemon-reload; syste
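To do one unit at a time instead, target units individually (the unit
number here is illustrative):
  juju run --unit ceph-radosgw/0 'perl -pi -e "s/^PREFIX=.*/PREFIX=client.rgw./" /etc/init.d/radosgw; systemctl daemon-reload'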
This is only confirmed on Xenial Ocata.
When querying the domain, as keystone loops through the users returned from
the all-users query of LDAP, it tries to create mappings in keystone for any
new users.
https://github.com/openstack/keystone/blob/stable/ocata/keystone/identity/core.py#L599
This hits the
Here's a query I used to determine we have entries in the id_mapping table
that don't have a matching local_entity in the user/nonlocal_user
tables:
  select * from id_mapping
   where public_id not in (select id_mapping.public_id
                             from id_mapping
                             join user on id_mapping.public_id = user.id);
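A possible cleanup based on the same predicate (illustrative only; back up
the database first, and note this does not account for the nonlocal_user
table):
  delete from id_mapping
   where public_id not in (select id from user);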
--
Note: this is a Xenial Queens install.
--
Found that https://paste.ubuntu.com/p/s5Z4DNJspV/ has gone missing.
New copy here: https://pastebin.ubuntu.com/p/g7Q3nFmhWN/
The thing to note is that both have the same remote_group_id,
40ee2790-282f-4c6a-8f00-d5ee0b8b66d7, and one has
ports_range_min/max=None (which in the code None is replaced with 1 on
Never mind. I see the patch is a kernel fix... I will upgrade my host.
--
This needs to be backported to trusty for users of the
linux-image-generic-lts-xenial kernel.
--
** Tags added: canonical-bootstack
--
Joseph,
I'm currently testing a 4.15.0-13 kernel from the xenial-16.04-edge path on
these hosts. The issue exhibited just before the kernel change, so we
should know within a couple of days if the new kernel helps. Unfortunately,
logs for this system beyond those already shared are not publicly available.
--
Perhaps a proper fix is for ubuntu-sso-client to release a new
python-ubuntu-sso-client package in bionic that doesn't include this
UbuntuOne-Go_Daddy_Class_2_CA.pem, now that the ca-certificates package has
the CA. However, I'd still like to see duplicate certs not causing
ca-certificates.crt to be
Public bug reported:
The certificate /usr/share/ca-certificates/mozilla/Go_Daddy_Class_2_CA.crt
in package ca-certificates conflicts with
/etc/ssl/certs/UbuntuOne-Go_Daddy_Class_2_CA.pem from package
python-ubuntu-sso-client.
This results in the postinst trigger for ca-certificates to remov
Public bug reported:
I have an environment with Dell R630 servers with RAID controllers with
two virtual disks and 22 passthrough devices. 2 SAS SSDs and 20 HDDs are
set up in 2 bcache cachesets, with the resulting 20 mounted XFS filesystems
running bcache backending an 11-node swift cluster (one zone h
Public bug reported:
Running charm 2.2.2/charm-tools 2.2.3 via the latest stable snap, I find
that the symlinks in the hooks and actions directories of my charms end
up dereferenced, such that the resulting charm has X copies of the file
that was originally symlinked to things such as hooks/config-change
So, it was NOT a pacemaker restart; it was pacemaker trying to perform a
standard dropped-connection reconnect using an incompatible library for
CPG API access to the restarted corosync.
--
@nacc:
The error condition was that when corosync restarted, pacemaker
disconnected (as is normal) and then tried reconnecting, but on reconnect
it ran into this error:
  error: pcmk_cpg_dispatch: Connection to the CPG API failed: Library error (2)
So, pacemaker is trying to re-handshake with t
** Summary changed:
- New Prometheus snap refresh brakes exporter
+ New Prometheus snap refresh breaks exporter
--
Also of interest to anyone running into this: you can check an active
daemon's injected args with:
  ceph --admin-daemon /var/run/ceph/ceph-mon* config show | grep <setting>
For example:
  ceph --admin-daemon /var/run/ceph/ceph-mon* config show | grep mon_pg_warn_max_per_osd
    "mon_pg_warn_max_per_osd": "300",
--
We are seeing this same flapping on another cloud. One node had rebooted
yesterday, when the HEALTH_WARN flapping began, running the 10.2.7 ceph
package from the trusty/mitaka cloud archive.
That server is giving the HEALTH_WARN on "too many PGs".
The other server, rebooted 7 days ago, was not giving the warning,
Public bug reported:
I have two hosts running artful 17.10 with up-to-date packages. My
"source" desktop is running Wayland, and my destination desktop is
running Xorg due to an nvidia card.
From my source desktop, I'm able to run
virt-viewer -c qemu+ssh://remote-artful/system and connect to the
I'm still working on reproducing. While attempting reproduction, I had
an environment with 3 hosts and dummy ubuntu charms on each, then added
nova-compute to 3. I removed the nova-compute unit from the third host
and still saw stats for it in hypervisor-stats. There may be some
cleanup missi
From a high level, it appears that the invoke-rc.d script used for
compatibility falls back to checking for /etc/rcX.d symlinks as a
"policy" check if there is no $POLICYHELPER installed. Perhaps the
actual shortcoming is not having a policy-rc.d installed that prefers
systemd over init.d on Xenial.
Re: init-system-helpers, I noticed the oddity of the init script on a
systemd system as well, and found that there's a systemd hack in
/lib/lsb/init-functions.d/40-systemd that allows for multi-startup
compatibility. I believe invoke-rc.d should check the systemd
"enabled/disabled" state instead of ju
In models.py, both in Mitaka and in master, I've found that the relation
between ComputeNode and Service uses the following join in the
Service context:
  primaryjoin='and_(Service.host == Instance.host,'
              'Service.binary == "nova-compute",'
              'Instan
Trusty versions of the affected packages (I see there is a systemd update
229-4ubuntu19. Does this include the backported fixes from v230/v231
mentioned in comment #4?):
  ii  dbus  1.10.6-1ubuntu3.1  amd64  simple interprocess messaging system
** Tags added: canonical-bootstack canonical-is
--
Can we get this backported to trusty?
--
James,
I just filed a related bug which narrows this down to happening when
ceph-osd configures ssh key exchanges before the nova-compute unit is
added to the host.
https://bugs.launchpad.net/charm-nova-compute/+bug/1677707
This is also a Xenial Mitaka cloud; these nodes were added to a juju-dep
I am also seeing this on yakkety with three displays. As I move the
mouse, the background flickers on two or three of the monitors; it tends
more toward the lower-right than the upper-left. A right-click on the
desktop (bringing up the desktop sub-menu) clears the error immediately
for me until the slee