Also added charm-rabbitmq-server, where this error is clearer.
rabbitmq-server fails at https://github.com/openstack/charm-rabbitmq-server/blob/master/hooks/install#L9
because it runs a "dpkg" command before "apt update" has completed
successfully on the container. The install eventually fails with
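The ordering problem can be sketched as follows (a minimal illustration only, not the charm's actual hook code; the retry helper and retry counts are my own assumptions):

```shell
#!/bin/sh
# Sketch, not the charm's real hook: retry a command until it succeeds, so
# that any dpkg/apt-get install call only runs once "apt-get update" has
# completed (cloud-init may still be populating the package lists on a
# freshly started container).
retry() {
  cmd=$1 tries=$2 n=0
  until $cmd; do
    n=$((n + 1))
    [ "$n" -ge "$tries" ] && return 1
    sleep 1
  done
}
# The real hook would then do something like:
#   retry "apt-get update" 5 && apt-get install -y rabbitmq-server
retry true 3 && echo "update ok"
```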
Public bug reported:
Hi,
Using the manual provider on top of VMware.
Juju: 2.9.14
VM: Focal
Containers: Bionic
I've noticed that, in a given Focal host, 2x Bionic containers may have
different behavior: one successfully completes cloud-init whereas the
other fails. The failed container errors at "a
Hi, here is the output of "apt policy python3-neutron":
https://pastebin.ubuntu.com/p/8xt5mdMkVg/
It shows: 16.4.0-0ubuntu2 as installed.
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1885169
Title:
Hi,
We've run a test with the -proposed version and our tests were
successful.
Test scenario:
Run the Rally test "VMTasks.boot_runcommand_delete" 4 times with a concurrency of 2
This fails in our current environment without the patch because some VMs are
unreachable for several minutes.
Once we've
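For reference, the scenario above would correspond to a Rally (v1 format) task file along these lines; this is a sketch, and the "args" values (flavor, image, command, username) are hypothetical placeholders. Only "times" and "concurrency" come from our run:

```json
{
  "VMTasks.boot_runcommand_delete": [
    {
      "args": {
        "flavor": {"name": "m1.small"},
        "image": {"name": "cirros"},
        "command": {"script_inline": "uptime", "interpreter": "/bin/sh"},
        "username": "cirros"
      },
      "runner": {
        "type": "constant",
        "times": 4,
        "concurrency": 2
      }
    }
  ]
}
```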
I ran a test with both arping and iputils-arping. With the "arping"
package it indeed works fine, just as @hopem commented previously.
However, with the "iputils-arping" package I see the same issue:
$ sudo arping -A -I -c 1 -w 1.5 127.0.0.1
arping: invalid ar
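The two packages ship different implementations, and the iputils one parses the -w deadline as an integer. One workaround sketch (the rounding helper below is my own, not part of either tool) is to round the timeout up to a whole second before invoking arping:

```shell
#!/bin/sh
# Sketch: iputils-arping rejects fractional "-w" deadlines, so round the
# timeout up to a whole number of seconds before passing it along.
timeout="1.5"
# POSIX shell has no floating-point arithmetic; use awk to round up.
timeout_int=$(printf '%s\n' "$timeout" | awk '{ printf "%d", ($1 == int($1)) ? $1 : int($1) + 1 }')
echo "$timeout_int"
# arping would then be invoked with the integer value, e.g.:
#   arping -A -c 1 -w "$timeout_int" <address>
```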
** Also affects: neutron (Ubuntu)
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/1885169
Title:
Some arping version only accept integer number as -w argument
Hi, I downgraded the masakari units to Bionic and re-ran the same test
scenario. I can also see that notifications eventually fail and do not
get stuck in "running". Adding the tag for bionic as well.
--
** Tags added: verification-done-bionic
--
https://bugs.launchpad.net/bugs/1889765
Title:
Notification processing failure causes notifications to get stuck
Hi @james-page, I've run the package from focal-proposed:
*** 9.0.0-0ubuntu0.20.04.5 500
500 http://archive.ubuntu.com/ubuntu focal-proposed/main amd64 Packages
And I can see that notifications no longer get stuck in "running",
but move to "failed" when an error happens. That fixe
I can reproduce this issue with the following scenario:
1) Create a segment and add the node
2) Create enough VMs so that Masakari takes some time on the evacuation (e.g. 30 VMs)
3) Reboot the node
4) Masakari starts the migration of the VMs
5) The node eventually comes back and nova-compute is up & run
** Also affects: masakari (Ubuntu)
Importance: Undecided
Status: New
--
Public bug reported:
Running OpenStack Ussuri + Focal with Cinder HPE 3PAR on Fibre Channel.
When trying to attach volumes or create a volume from an image, it fails with:
https://pastebin.ubuntu.com/p/mwPZhYK5c4/
That is a Cinder API-level error, but it happens on the client side of the
communic
Public bug reported:
Hi,
I've tested this on both 20.03 and 20.06.
Looking into ovn-architecture.xml:
https://github.com/ovn-org/ovn/blob/master/ovn-architecture.7.xml#L2530
It states that once RBAC is enabled, ovn-controllers will have access to some
of the tables and that is hardcoded within
Using Focal desktop, and I am seeing the same issue.
I was seeing tracker-store and tracker-miner-fs take an entire core on my
notebook.
Tried "tracker reset --hard" and a reboot; that did not work for me.
The only effective solution was to disable the search engine in "Settings".
--
** Tags added: cpe-onsite
--
https://bugs.launchpad.net/bugs/1880959
Title:
Rules from the policy directory files are not reapplied after changes
to the primary policy file
Hi Tim, I don't think patching kubernetes-master will resolve this
issue, mainly because the issue happens on the kubernetes-workers.
We were seeing PVCs failing with: "No VM found". When we looked into the
logs, Kubernetes was actually trying to learn about its
kubernetes-workers through SystemUUID.
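For anyone checking their workers: the SystemUUID that the kubelet reports is read from the node's DMI product UUID, so it can be compared across workers directly (a sketch; the sysfs path is the standard location, and the fallback message is my own):

```shell
#!/bin/sh
# The kubelet's SystemUUID comes from the DMI product UUID. On cloned VMs
# this value can be identical across nodes, which breaks per-node lookups
# such as the vSphere cloud provider's VM search.
uuid=$(cat /sys/class/dmi/id/product_uuid 2>/dev/null || echo "unavailable")
echo "SystemUUID: $uuid"
```

Running this on each worker and comparing the values shows whether the nodes are actually distinguishable by SystemUUID.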
Hi, has the fix for Bionic been backported to stable?
--
https://bugs.launchpad.net/bugs/1832082
Title:
bnx2x driver causes 100% CPU load
This script *should* trigger the issue on the Bionic GA kernel:
https://pastebin.ubuntu.com/p/WdKGbMWnM6/
Try it with both the GA and HWE Bionic kernels; the commit in HWE should trigger it.
--
I was looking into kernel commits and I came across this:
https://github.com/torvalds/linux/commit/fadd94e05c02afec7b70b0b14915624f1782f578
As far as I understand, it deals with the issue of a manual device
detach during a writeback clean-up causing a deadlock. The timeline
makes sense
Public bug reported:
freeipmi: 1.4.11
Ubuntu Bionic
MAAS 2.5.2
Using ipmipower with opensesspriv work-around enabled to power up/down my
servers.
However, due to my network connectivity, some packets may face higher
latency in delivery.
I can see in tcpdump that the messaging goes as follows:
1) ipmipow
Well, I've found out why the charm is not complaining: systemctl marks
the unit as SUCCESS despite the failures in journalctl.
Also, running strace on this, I see:
close(3)                                = 0
stat("/etc/swift/proxy-server.conf", {st_mode=S_IFREG|0644, st_size=2797, ...}) = 0
openat(AT_FDCW
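In other words, "success" from systemd cannot be trusted for this unit; the journal has to be checked directly. A small sketch of that check (the journal excerpt below is an illustrative made-up sample, not the real log):

```shell
#!/bin/sh
# systemd marks an LSB-generated unit as SUCCESS when the init script exits 0,
# even if the daemon it forked then logged a traceback. So grep the journal
# rather than trusting the unit status. The excerpt is a made-up sample.
journal_excerpt='Jan 01 00:00:00 host swift-proxy[123]: Traceback (most recent call last):
Jan 01 00:00:00 host swift-proxy[123]:   ImportError: cannot import name ...'
count=$(printf '%s\n' "$journal_excerpt" | grep -c 'Traceback')
echo "tracebacks found: $count"
```

Against a live system the equivalent would be something like `journalctl -u swift-proxy | grep -c Traceback`.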
Also, despite those issues, the charm is reporting "Ready/Active" in its
status.
--
https://bugs.launchpad.net/bugs/1817055
Title:
Broken dependency when setting swift-proxy on Bionic
To keep it registered: I've set up swift-proxy on Bionic containers
(same configs as described in the main text of this bug).
After deploying, I ran:
juju run --application swift-proxy "add-apt-repository
ppa:james-page/bug1817055"
juju run --application swift-proxy "apt update"
juju run --appl
The problem disappears when switching to your PPA. However, I still see
a traceback in journalctl for swift-proxy:
https://pastebin.ubuntu.com/p/RFzk7qRQ6J/
The process shows:
● swift-proxy.service - LSB: Swift proxy server
   Loaded: loaded (/etc/init.d/swift-proxy; generated)
   Active: active (
@raharper
Please refer to: https://bugs.launchpad.net/curtin/+bug/1815018
I am adding all the info I've collected to that bug report instead.
--
I'm facing this bug on a 33-server OpenStack deployment using xenial-queens.
Curtin version: 18.1-17-gae48e86f-0ubuntu1~16.04.1
As per comment #14, the fix seems to have been applied in version 17.1.
Therefore, I can confirm that the problem still persists.
The failure is intermittent, meaning generally