[Bug 1883112] Re: rbd-target-api crashes with python TypeError
+1 makes sense. Thanks for doing this validation @chris.macnaughton
[Bug 1907250] Re: [focal] charm becomes blocked with workload-status "Failed to connect to MySQL"
I've filed https://bugs.launchpad.net/charm-mysql-router/+bug/1973177 to track this separately.
[Bug 1907250] Re: [focal] charm becomes blocked with workload-status "Failed to connect to MySQL"
One of the causes of a charm going into a "Failed to connect to MySQL" state is that a connection to the database failed when the db-router charm attempted to restart the db-router service. Currently the charm only retries the connection in response to a single return code from MySQL: 2013, which is "Message: Lost connection to MySQL server during query" *1. However, if the connection fails to be established in the first place then the error returned is 2003, "Can't connect to MySQL server on...". A retry sketch covering both codes is below.

*1 https://dev.mysql.com/doc/mysql-errors/8.0/en/client-error-reference.html
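For illustration, a minimal sketch of a retry that treats both error codes as transient. The helper names and retry policy here are assumptions, not the charm's actual code:

    import mysql.connector
    from tenacity import retry, retry_if_exception, stop_after_attempt, wait_fixed

    # 2013: lost connection during query; 2003: can't connect at all
    RETRYABLE_ERRNOS = {2013, 2003}

    def _is_retryable(exc):
        return (isinstance(exc, mysql.connector.Error)
                and exc.errno in RETRYABLE_ERRNOS)

    @retry(retry=retry_if_exception(_is_retryable),
           stop=stop_after_attempt(5), wait=wait_fixed(10))
    def connect_to_mysql(**connect_args):
        return mysql.connector.connect(**connect_args)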
[Bug 1969775] [NEW] rbd-target-api crashes with `blacklist removal failed`
Public bug reported:

[Impact]

 * ceph-iscsi on Focal talking to a Pacific or later Ceph cluster

 * rbd-target-api service fails to start if there is a blocklist entry for the unit.

 * When the rbd-target-api service starts it checks whether any of the IP addresses on the machine it is running on are listed as blocked. If there are entries it tries to remove them. When it issues the block removal command it checks stdout from the removal command looking for the string `un-blacklisting`. However, from Pacific onward a successful unblocking returns `un-blocklisting` instead (https://github.com/ceph/ceph/commit/dfd01d765304ed8783cef613930e65980d9aee23). A spelling-tolerant check is sketched after this report.

[Test Plan]

If an existing ceph-iscsi deployment is available then skip to step 3.

1) Deploy the bundle below (tested with the OpenStack provider).

series: focal
applications:
  ceph-iscsi:
    charm: cs:ceph-iscsi
    num_units: 2
  ceph-osd:
    charm: ch:ceph-osd
    num_units: 3
    storage:
      osd-devices: 'cinder,10G'
    options:
      osd-devices: '/dev/test-non-existent'
      source: yoga
    channel: latest/edge
  ceph-mon:
    charm: ch:ceph-mon
    num_units: 3
    options:
      monitor-count: '3'
      source: yoga
    channel: latest/edge
relations:
- - 'ceph-mon:client'
  - 'ceph-iscsi:ceph-client'
- - 'ceph-osd:mon'
  - 'ceph-mon:osd'

2) Connect to a ceph-iscsi unit:

juju ssh -m zaza-a1d88053ab85 ceph-iscsi/0

3) Stop rbd-target-api via systemd to make the test case clearer:

sudo systemctl stop rbd-target-api

4) Add 2 blocklist entries for this unit (due to another issue, the ordering of the output from `osd blacklist ls` matters, which can make reproduction of this bug intermittent. To avoid this, add two entries, which ensures there is always an entry for this node in the list of blocklist entries to be removed).

sudo ceph -n client.ceph-iscsi --conf /etc/ceph/iscsi/ceph.conf osd blacklist add $(hostname --all-ip-addresses | awk '{print $1}'):0/1
sudo ceph -n client.ceph-iscsi --conf /etc/ceph/iscsi/ceph.conf osd blacklist add $(hostname --all-ip-addresses | awk '{print $1}'):0/2
sudo ceph -n client.ceph-iscsi --conf /etc/ceph/iscsi/ceph.conf osd blacklist ls
listed 2 entries
172.20.0.135:0/2 2022-02-23T11:14:54.850352+
172.20.0.135:0/1 2022-02-23T11:14:52.502592+

5) Attempt to start the service:

sudo /usr/bin/python3 /usr/bin/rbd-target-api

At this point the process should be running in the foreground, but instead it dies. The log from the service will have an entry like:

2022-04-21 12:35:21,695 CRITICAL [gateway.py:51:ceph_rm_blacklist()] - blacklist removal failed. Run 'ceph -n client.ceph-iscsi --conf /etc/ceph/iscsi/ceph.conf osd blacklist rm 172.20.0.156:0/1'

[Where problems could occur]

 * Problems could occur with the service starting, as this blocklist check is done at startup.

 * Blocklist entries could fail to be removed.

This issue is very similar to Bug #1883112

** Affects: ceph-iscsi (Ubuntu)
     Importance: Undecided
         Status: New
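For illustration, a minimal sketch of a spelling-tolerant version of the check. The function name and command handling are assumptions; the actual upstream fix may differ:

    import subprocess

    def ceph_rm_blocklist(entry):
        cmd = ('ceph -n client.ceph-iscsi --conf /etc/ceph/iscsi/ceph.conf '
               'osd blacklist rm {}'.format(entry))
        # check_output returns bytes on python3, so decode before matching
        result = subprocess.check_output(cmd, shell=True,
                                         stderr=subprocess.STDOUT).decode('utf-8')
        # Pacific renamed "blacklist" to "blocklist", so accept either spelling
        return any(s in result for s in ('un-blacklisting', 'un-blocklisting',
                                         "isn't blacklisted", "isn't blocklisted"))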
[Bug 1909399] Re: Exception during removal of OSD blacklist entries
*** This bug is a duplicate of bug 1883112 ***
    https://bugs.launchpad.net/bugs/1883112

** This bug has been marked a duplicate of bug 1883112
   rbd-target-api crashes with python TypeError
[Bug 1968586] [NEW] apparmor rules block socket and log creation
Public bug reported:

While testing using OpenStack, guests failed to launch and these denied messages were logged:

[ 8307.089627] audit: type=1400 audit(1649684291.592:109): apparmor="DENIED" operation="mknod" profile="swtpm" name="/run/libvirt/qemu/swtpm/11-instance-000b-swtpm.sock" pid=141283 comm="swtpm" requested_mask="c" denied_mask="c" fsuid=117 ouid=117
[10363.999211] audit: type=1400 audit(1649686348.455:115): apparmor="DENIED" operation="open" profile="swtpm" name="/var/log/swtpm/libvirt/qemu/instance-000e-swtpm.log" pid=184479 comm="swtpm" requested_mask="ac" denied_mask="ac" fsuid=117 ouid=117

Adding

  /run/libvirt/qemu/swtpm/* rwk,
  /var/log/swtpm/libvirt/qemu/* rwk,

to /etc/apparmor.d/usr.bin.swtpm and reloading the profile seems to fix the issue.

(Note: This is very similar to existing Bug #1968335)

** Affects: swtpm (Ubuntu)
     Importance: Undecided
         Status: New
[Bug 1965280] Re: rbd-target-api will not start AttributeError: 'Context' object has no attribute 'wrap_socket'
** Patch added: "ceph-iscsi-deb.diff"
   https://bugs.launchpad.net/ubuntu/+source/ceph-iscsi/+bug/1965280/+attachment/5569987/+files/ceph-iscsi-deb.diff
[Bug 1883112] Re: rbd-target-api crashes with python TypeError
Verification on impish failed due to https://bugs.launchpad.net/ubuntu/+source/ceph-iscsi/+bug/1965280
[Bug 1965280] [NEW] rbd-target-api will not start AttributeError: 'Context' object has no attribute 'wrap_socket'
Public bug reported:

The rbd-target-api fails to start on Ubuntu Impish (21.10) and later. This appears to be caused by a werkzeug package version check in rbd-target-api. The check is used to decide whether to use an OpenSSL.SSL.Context or an ssl.SSLContext. The code comment suggests that ssl.SSLContext is used for werkzeug 0.9 so that TLSv1.2 can be used. It is also worth noting that support for OpenSSL.SSL.Context was dropped in werkzeug 0.10.

The intention of the check appears to be to use OpenSSL.SSL.Context if the version of werkzeug is below 0.9 and ssl.SSLContext otherwise. However, when rbd-target-api checks the werkzeug version it only looks at the minor version number. Ubuntu Impish ships werkzeug 1.0.1, which has a minor version number of 0, so rbd-target-api uses an OpenSSL.SSL.Context, which werkzeug no longer supports. This causes:

# /usr/bin/rbd-target-api
 * Serving Flask app 'rbd-target-api' (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
Traceback (most recent call last):
  File "/usr/bin/rbd-target-api", line 3022, in <module>
    main()
  File "/usr/bin/rbd-target-api", line 2952, in main
    app.run(host=settings.config.api_host,
  File "/usr/lib/python3/dist-packages/flask/app.py", line 922, in run
    run_simple(t.cast(str, host), port, self, **options)
  File "/usr/lib/python3/dist-packages/werkzeug/serving.py", line 1010, in run_simple
    inner()
  File "/usr/lib/python3/dist-packages/werkzeug/serving.py", line 950, in inner
    srv = make_server(
  File "/usr/lib/python3/dist-packages/werkzeug/serving.py", line 782, in make_server
    return ThreadedWSGIServer(
  File "/usr/lib/python3/dist-packages/werkzeug/serving.py", line 708, in __init__
    self.socket = ssl_context.wrap_socket(self.socket, server_side=True)
AttributeError: 'Context' object has no attribute 'wrap_socket'

Reported upstream here: https://github.com/ceph/ceph-iscsi/issues/255

** Affects: ceph-iscsi (Ubuntu)
     Importance: Undecided
         Status: New
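A minimal sketch of the kind of comparison that goes wrong here (illustrative, not the exact upstream code):

    import werkzeug

    # Broken: comparing only the minor component, so "1.0.1" (minor == 0)
    # is treated as older than 0.9 and the pyOpenSSL context is chosen.
    minor = int(werkzeug.__version__.split('.')[1])
    use_pyopenssl = minor < 9   # True for 1.0.1 -- wrong

    # Safer: compare the full (major, minor) tuple.
    major, minor = (int(x) for x in werkzeug.__version__.split('.')[:2])
    use_pyopenssl = (major, minor) < (0, 9)   # False for 1.0.1 -- correct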
[Bug 1883112] Re: rbd-target-api crashes with python TypeError
Tested successfully on focal with 3.4-0ubuntu2.1. Tested with the ceph-iscsi charm's functional tests, which were previously failing.

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.4 LTS
Release:        20.04
Codename:       focal

$ apt-cache policy ceph-iscsi
ceph-iscsi:
  Installed: 3.4-0ubuntu2.1
  Candidate: 3.4-0ubuntu2.1
  Version table:
 *** 3.4-0ubuntu2.1 500
        500 http://archive.ubuntu.com/ubuntu focal-proposed/universe amd64 Packages
        100 /var/lib/dpkg/status
     3.4-0ubuntu2 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu focal/universe amd64 Packages

** Tags removed: verification-needed-focal
** Tags added: verification-done-focal
[Bug 1883112] Re: rbd-target-api crashes with python TypeError
** Patch added: "gw-deb.diff"
   https://bugs.launchpad.net/ubuntu/+source/ceph-iscsi/+bug/1883112/+attachment/5569162/+files/gw-deb.diff
[Bug 1883112] Re: rbd-target-api crashes with python TypeError
Thank you for the update, Robie. I proposed the deb diff based on the fix that had landed upstream because I (wrongly) thought that was what the SRU policy required. I think it makes more sense to go for the minimal fix you suggest.
[Bug 1883112] Re: rbd-target-api crashes with python TypeError
** Description changed:

[Impact]

 * rbd-target-api service fails to start if there is a blocklist entry for the unit, making the service unavailable.

 * When the rbd-target-api service starts it checks whether any of the IP addresses on the machine it is running on are listed as blocked. If there are entries it tries to remove them. In the process of removing the entries the code attempts to test whether a string is in the result of a subprocess.check_output call. This would have worked in Python 2, but with Python 3 a bytes-like object is returned and the check now throws a TypeError. This fix, taken from upstream, removes the `in` check and replaces it with a try/except. A sketch of this change follows the description.

[Test Plan]

If an existing ceph-iscsi deployment is available then skip to step 3.

1) Deploy the bundle below (tested with the OpenStack provider).

series: focal
applications:
  ceph-iscsi:
    charm: cs:ceph-iscsi
    num_units: 2
  ceph-osd:
    charm: ch:ceph-osd
    num_units: 3
    storage:
      osd-devices: 'cinder,10G'
    options:
      osd-devices: '/dev/test-non-existent'
    channel: latest/edge
  ceph-mon:
    charm: ch:ceph-mon
    num_units: 3
    options:
      monitor-count: '3'
    channel: latest/edge
relations:
- - 'ceph-mon:client'
  - 'ceph-iscsi:ceph-client'
- - 'ceph-osd:mon'
  - 'ceph-mon:osd'

2) Connect to a ceph-iscsi unit:

juju ssh -m zaza-a1d88053ab85 ceph-iscsi/0

3) Stop rbd-target-api via systemd to make the test case clearer:

sudo systemctl stop rbd-target-api

4) Add 2 blocklist entries for this unit (due to another issue, the ordering of the output from `osd blacklist ls` matters, which can make reproduction of this bug intermittent. To avoid this, add two entries, which ensures there is always an entry for this node in the list of blocklist entries to be removed).

sudo ceph -n client.ceph-iscsi --conf /etc/ceph/iscsi/ceph.conf osd blacklist add $(hostname --all-ip-addresses | awk '{print $1}'):0/1
sudo ceph -n client.ceph-iscsi --conf /etc/ceph/iscsi/ceph.conf osd blacklist add $(hostname --all-ip-addresses | awk '{print $1}'):0/2
sudo ceph -n client.ceph-iscsi --conf /etc/ceph/iscsi/ceph.conf osd blacklist ls
listed 2 entries
172.20.0.135:0/2 2022-02-23T11:14:54.850352+
172.20.0.135:0/1 2022-02-23T11:14:52.502592+

5) Attempt to start the service:

sudo /usr/bin/python3 /usr/bin/rbd-target-api
Traceback (most recent call last):
  File "/usr/bin/rbd-target-api", line 2952, in <module>
    main()
  File "/usr/bin/rbd-target-api", line 2862, in main
    osd_state_ok = ceph_gw.osd_blacklist_cleanup()
  File "/usr/lib/python3/dist-packages/ceph_iscsi_config/gateway.py", line 111, in osd_blacklist_cleanup
    rm_ok = self.ceph_rm_blacklist(blacklist_entry.split(' ')[0])
  File "/usr/lib/python3/dist-packages/ceph_iscsi_config/gateway.py", line 46, in ceph_rm_blacklist
    if ("un-blacklisting" in result) or ("isn't blacklisted" in result):
TypeError: a bytes-like object is required, not 'str'

[Where problems could occur]

 * Problems could occur with the service starting, as this blocklist check is done at startup.

 * Blocklist entries could fail to be removed.

Old bug description:

$ lsb_release -rd
Description: Ubuntu 20.04 LTS
Release: 20.04

$ dpkg -S /usr/lib/python3/dist-packages/ceph_iscsi_config/gateway.py
ceph-iscsi: /usr/lib/python3/dist-packages/ceph_iscsi_config/gateway.py

$ apt-cache policy ceph-iscsi
ceph-iscsi:
  Installed: 3.4-0ubuntu2
  Candidate: 3.4-0ubuntu2
  Version table:
 *** 3.4-0ubuntu2 500
        500 http://de.archive.ubuntu.com/ubuntu focal/universe amd64 Packages
        500 http://de.archive.ubuntu.com/ubuntu focal/universe i386 Packages
        100 /var/lib/dpkg/status

On second startup after a reboot, rbd-target-api crashes with a TypeError:

Traceback (most recent call last):
  File "/usr/bin/rbd-target-api", line 2952, in <module>
    main()
  File "/usr/bin/rbd-target-api", line 2862, in main
    osd_state_ok = ceph_gw.osd_blacklist_cleanup()
  File "/usr/lib/python3/dist-packages/ceph_iscsi_config/gateway.py", line 110, in osd_blacklist_cleanup
    rm_ok = self.ceph_rm_blacklist(blacklist_entry.split(' ')[0])
  File "/usr/lib/python3/dist-packages/ceph_iscsi_config/gateway.py", line 46, in ceph_rm_blacklist
    if ("un-blacklisting" in result) or ("isn't blacklisted" in result):
TypeError: a bytes-like object is required, not 'str'
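For illustration, a sketch of the shape of that change (paraphrased, not the literal upstream patch):

    import subprocess

    # Before: scrape stdout, which is bytes on python3, so the `in` test
    # against a str raises "TypeError: a bytes-like object is required":
    #   result = subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
    #   if ("un-blacklisting" in result) or ("isn't blacklisted" in result):
    #       return True

    # After: let the exit status decide; no stdout scraping needed
    def ceph_rm_blacklist(cmd):
        try:
            subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
            return True
        except subprocess.CalledProcessError:
            return False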
[Bug 1883112] Re: rbd-target-api crashes with python TypeError
** Patch added: "deb.diff"
   https://bugs.launchpad.net/ubuntu/+source/ceph-iscsi/+bug/1883112/+attachment/5562748/+files/deb.diff
[Bug 1883112] Re: rbd-target-api crashes with python TypeError
** Changed in: ceph-iscsi (Ubuntu)
       Status: New => Confirmed
[Bug 1954306] Re: Action `remove-instance` works but appears to fail
s/The issue appears when using the mysql to/The issue appears when using the mysql shell to/
[Bug 1954306] Re: Action `remove-instance` works but appears to fail
I don't think this is a charm bug. The issue appears when using the mysql to remove a node from the cluster. From what I can see you cannot persist group_replication_force_members, and it is correctly unset, so the error being reported seems wrong: https://pastebin.ubuntu.com/p/sx6ZB3rs6r/

root@juju-1f04f3-zaza-90b9e082f2aa-2:/var/lib/juju/agents/unit-mysql-innodb-cluster-1/charm# /snap/bin/mysqlsh
Cannot set LC_ALL to locale en_US.UTF-8: No such file or directory
MySQL Shell 8.0.23

Copyright (c) 2016, 2021, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.

mysql-py> shell.connect('clusteruser:d2Z27kpxZmJ826tSVWL6SVV4LYZhZwwryHtM@172.20.0.111')
Creating a session to 'clusteruser@172.20.0.111'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 1644 (X protocol)
Server version: 8.0.27-0ubuntu0.20.04.1 (Ubuntu)
No default schema selected; type \use <schema> to set one.

mysql-py []> cluster = dba.get_cluster('jujuCluster')
mysql-py []> cluster.remove_instance('clusteruser@172.20.0.166', {'force': False})
The instance will be removed from the InnoDB cluster. Depending on the instance
being the Seed or not, the Metadata session might become invalid. If so, please
start a new session to the Metadata Storage R/W instance.

Instance '172.20.0.166:3306' is attempting to leave the cluster...
ERROR: Instance '172.20.0.166:3306' failed to leave the cluster: Variable 'group_replication_force_members' is a non persistent variable
Traceback (most recent call last):
  File "<string>", line 1, in <module>
mysqlsh.DBError: MySQL Error (1238): Cluster.remove_instance: Variable 'group_replication_force_members' is a non persistent variable

mysql-py []> \sql show variables like 'group_replication_force_members';
+----------------------------------+-------+
| Variable_name                    | Value |
+----------------------------------+-------+
| group_replication_force_members  |       |
+----------------------------------+-------+
1 row in set (0.0086 sec)
[Bug 1954306] Re: Action `remove-instance` works but appears to fail
** Also affects: mysql-8.0 (Ubuntu)
   Importance: Undecided
       Status: New

** Changed in: charm-mysql-innodb-cluster
       Status: New => Invalid
[Bug 1944080] Re: [fan-network] Race-condition between "apt update" and dhcp request causes cloud-init error
Perhaps I'm missing something, but this does not seem to be a bug in the rabbitmq-server charm. It may be easier to observe there, but the root cause is elsewhere.

** Changed in: charm-rabbitmq-server
       Status: New => Invalid
[Bug 1946787] Re: [SRU] Fix inconsistent encoding secret encoding
Tested successfully on focal victoria using 1:11.0.0-0ubuntu1~cloud1. I created an encrypted volume and attached it to a VM.

cinder type-create LUKS
cinder encryption-type-create --cipher aes-xts-plain64 --key_size 512 --control_location front-end LUKS nova.volume.encryptors.luks.LuksEncryptor
cinder create --volume-type LUKS --poll --name testvol 1
openstack keypair show guests || openstack keypair create --public-key ~/.ssh/id_rsa_guests.pub guests
openstack flavor create --id 8 --ram 1024 --disk 8 --vcpus 1 --public m1.ly
openstack server create --image bionic --flavor m1.ly --network private --key-name guests --wait test3
openstack floating ip create ext_net
openstack server add floating ip test3 172.20.0.235
openstack server add volume --device /dev/vdb test3 testvol

cinder list
WARNING:cinderclient.shell:API version 3.64 requested,
WARNING:cinderclient.shell:downgrading to 3.62 based on server support.
+--------------------------------------+--------+---------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Name    | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+---------+------+-------------+----------+--------------------------------------+
| 7ea1296e-a478-4aea-ade0-49f00034b58b | in-use | testvol | 1    | LUKS        | false    | e1b2c025-0ede-4330-9129-80f6c281ac4d |
+--------------------------------------+--------+---------+------+-------------+----------+--------------------------------------+

cinder show 7ea1296e-a478-4aea-ade0-49f00034b58b
WARNING:cinderclient.shell:API version 3.64 requested,
WARNING:cinderclient.shell:downgrading to 3.62 based on server support.
+--------------------------------+------------------------------------------+
| Property                       | Value                                    |
+--------------------------------+------------------------------------------+
| attached_servers               | ['e1b2c025-0ede-4330-9129-80f6c281ac4d'] |
| attachment_ids                 | ['c4410464-ff27-4234-9f5f-c5a7b094463b'] |
| availability_zone              | nova                                     |
| bootable                       | false                                    |
| cluster_name                   | None                                     |
| consistencygroup_id            | None                                     |
| created_at                     | 2021-11-02T11:23:28.00                   |
| description                    | None                                     |
| encrypted                      | True                                     |
| group_id                       | None                                     |
| id                             | 7ea1296e-a478-4aea-ade0-49f00034b58b     |
| metadata                       |                                          |
| migration_status               | None                                     |
| multiattach                    | False                                    |
| name                           | testvol                                  |
| os-vol-host-attr:host          | juju-4766ac-zaza-f0e92451c718-11@LVM#LVM |
| os-vol-mig-status-attr:migstat | None                                     |
| os-vol-mig-status-attr:name_id | None                                     |
| os-vol-tenant-attr:tenant_id   | 92c507c64e5b47d886e68b0a874499e6         |
| provider_id                    | None                                     |
| replication_status             | None                                     |
| service_uuid                   | 4e51ffb9-c259-4647-9a9a-d0adb19d0f6d     |
| shared_targets                 | False                                    |
| size                           | 1                                        |
| snapshot_id                    | None                                     |
| source_volid                   | None                                     |
| status                         | in-use                                   |
| updated_at                     | 2021-11-02T11:40:16.00                   |
| user_id                        | 0f41207ddcfd4bd5ab8ac694c772b709         |
| volume_type                    | LUKS                                     |
+--------------------------------+------------------------------------------+

** Tags removed: verification-victoria-needed
** Tags added: verification-victoria-done
[Bug 1946787] Re: [SRU] Fix inconsistent encoding secret encoding
Tested successfully on focal wallaby using 2:12.0.0-0ubuntu2~cloud0. I created an encrypted volume and attached it to a VM.

cinder type-create LUKS
cinder encryption-type-create --cipher aes-xts-plain64 --key_size 512 --control_location front-end LUKS nova.volume.encryptors.luks.LuksEncryptor
cinder create --volume-type LUKS --poll --name testvol 1
openstack keypair show guests || openstack keypair create --public-key ~/.ssh/id_rsa_guests.pub guests
openstack flavor create --id 8 --ram 1024 --disk 8 --vcpus 1 --public m1.ly
openstack server create --image bionic --flavor m1.ly --network private --key-name guests --wait test3
openstack floating ip create ext_net
openstack server add floating ip test3 172.20.0.207
openstack server add volume --device /dev/vdb test3 testvol

cinder list
+--------------------------------------+--------+---------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Name    | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+---------+------+-------------+----------+--------------------------------------+
| ebf6c7d9-aac4-440e-b29f-c4ddd6a3e544 | in-use | testvol | 1    | LUKS        | false    | 6c47befa-4b32-4d87-9a03-c23e26ed9255 |
+--------------------------------------+--------+---------+------+-------------+----------+--------------------------------------+

cinder show testvol
+--------------------------------+------------------------------------------+
| Property                       | Value                                    |
+--------------------------------+------------------------------------------+
| attached_servers               | ['6c47befa-4b32-4d87-9a03-c23e26ed9255'] |
| attachment_ids                 | ['c6653494-c23e-4312-a441-f86eba08794f'] |
| availability_zone              | nova                                     |
| bootable                       | false                                    |
| cluster_name                   | None                                     |
| consistencygroup_id            | None                                     |
| created_at                     | 2021-11-01T18:15:41.00                   |
| description                    | None                                     |
| encrypted                      | True                                     |
| encryption_key_id              | dde779f5-ad06-45e8-979c-37dd3cea8505     |
| group_id                       | None                                     |
| id                             | ebf6c7d9-aac4-440e-b29f-c4ddd6a3e544     |
| metadata                       |                                          |
| migration_status               | None                                     |
| multiattach                    | False                                    |
| name                           | testvol                                  |
| os-vol-host-attr:host          | juju-9ce866-zaza-17f25c1dd768-11@LVM#LVM |
| os-vol-mig-status-attr:migstat | None                                     |
| os-vol-mig-status-attr:name_id | None                                     |
| os-vol-tenant-attr:tenant_id   | 9a7dac3a794f42f79bf32707ebbffb5f         |
| provider_id                    | None                                     |
| replication_status             | None                                     |
| service_uuid                   | 86a123a1-3845-4099-8b37-52cec2a787de     |
| shared_targets                 | False                                    |
| size                           | 1                                        |
| snapshot_id                    | None                                     |
| source_volid                   | None                                     |
| status                         | in-use                                   |
| updated_at                     | 2021-11-01T18:33:38.00                   |
| user_id                        | 6f0383a710674745aaffbf083c101f52         |
| volume_type                    | LUKS                                     |
| volume_type_id                 | 25408c30-0ffc-4584-99cd-dc834962bab7     |
+--------------------------------+------------------------------------------+

** Tags removed: verification-wallaby-needed
** Tags added: verification-wallaby-done
[Bug 1946787] Re: [SRU] Fix inconsistent encoding secret encoding
Tested successfully on hirsute using 2:12.0.0-0ubuntu2. I created an encrypted volume and attached it to a VM.

cinder type-create LUKS
cinder encryption-type-create --cipher aes-xts-plain64 --key_size 512 --control_location front-end LUKS nova.volume.encryptors.luks.LuksEncryptor
cinder create --volume-type LUKS --poll --name testvol 1
openstack keypair show guests || openstack keypair create --public-key ~/.ssh/id_rsa_guests.pub guests
openstack flavor create --id 8 --ram 1024 --disk 8 --vcpus 1 --public m1.ly
openstack server create --image bionic --flavor m1.ly --network private --key-name guests --wait test3
openstack floating ip create ext_net
openstack server add floating ip test3 172.20.0.207
openstack server add volume --device /dev/vdb test3 testvol

cinder list
+--------------------------------------+--------+---------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Name    | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+---------+------+-------------+----------+--------------------------------------+
| 67564b48-54b7-47bf-ac95-d701b455cb7d | in-use | testvol | 1    | LUKS        | false    | 6c43fed1-a195-47d8-b5a9-dc7fd166bf58 |
+--------------------------------------+--------+---------+------+-------------+----------+--------------------------------------+

cinder show testvol
+--------------------------------+------------------------------------------+
| Property                       | Value                                    |
+--------------------------------+------------------------------------------+
| attached_servers               | ['6c43fed1-a195-47d8-b5a9-dc7fd166bf58'] |
| attachment_ids                 | ['f0c3ed24-2973-407a-b6f6-afcef999ed43'] |
| availability_zone              | nova                                     |
| bootable                       | false                                    |
| cluster_name                   | None                                     |
| consistencygroup_id            | None                                     |
| created_at                     | 2021-11-01T16:38:32.00                   |
| description                    | None                                     |
| encrypted                      | True                                     |
| encryption_key_id              | c6079e38-fe86-4e16-aee0-09d07fdfc719     |
| group_id                       | None                                     |
| id                             | 67564b48-54b7-47bf-ac95-d701b455cb7d     |
| metadata                       |                                          |
| migration_status               | None                                     |
| multiattach                    | False                                    |
| name                           | testvol                                  |
| os-vol-host-attr:host          | juju-86a900-zaza-c440171f601b-11@LVM#LVM |
| os-vol-mig-status-attr:migstat | None                                     |
| os-vol-mig-status-attr:name_id | None                                     |
| os-vol-tenant-attr:tenant_id   | 6485c947c61046c99b88e8f5f3bcae9a         |
| provider_id                    | None                                     |
| replication_status             | None                                     |
| service_uuid                   | 5a4cf232-59a0-4cd9-8d3f-badd74e9a5e8     |
| shared_targets                 | False                                    |
| size                           | 1                                        |
| snapshot_id                    | None                                     |
| source_volid                   | None                                     |
| status                         | in-use                                   |
| updated_at                     | 2021-11-01T17:27:36.00                   |
| user_id                        | d16ea8b7d0d542d8b2f36f6a121434bc         |
| volume_type                    | LUKS                                     |
| volume_type_id                 | 2bfe04b8-3e70-412f-a348-f6f5ff359991     |
+--------------------------------+------------------------------------------+

** Tags removed: verification-needed-hirsute
** Tags added: verification-done-hirsute
[Bug 1943863] Re: DPDK instances are failing to start: Failed to bind socket to /run/libvirt-vhost-user/vhu3ba44fdc-7c: No such file or directory
https://github.com/openstack-charmers/charm-layer-ovn/pull/52

** Also affects: neutron
   Importance: Undecided
       Status: New

** No longer affects: neutron

** No longer affects: neutron (Ubuntu)

** Also affects: charm-layer-ovn
   Importance: Undecided
       Status: New

** Changed in: charm-layer-ovn
       Status: New => Confirmed

** Changed in: charm-layer-ovn
   Importance: Undecided => High

** Changed in: charm-layer-ovn
     Assignee: (unassigned) => Liam Young (gnuoy)
[Bug 1944424] Re: AppArmor causing HA routers to be in backup state on wallaby-focal
** Changed in: charm-neutron-gateway
     Assignee: (unassigned) => Liam Young (gnuoy)

** Changed in: charm-neutron-gateway
   Importance: Undecided => High
[Bug 1944424] Re: AppArmor causing HA routers to be in backup state on wallaby-focal
** Changed in: charm-neutron-gateway
       Status: Invalid => Confirmed

** Changed in: neutron (Ubuntu)
       Status: Confirmed => Invalid
[Bug 1944424] Re: AppArmor causing HA routers to be in backup state on wallaby-focal
A patch was introduced [0] "..which sets the backup gateway device link down by default. When the VRRP sets the master state in one host, the L3 agent state change procedure will do link up action for the gateway device.". This change causes an issue when using keepalived 2.X (focal+), which is fixed by patch [1], which adds a new 'no_track' option to all VIPs and routes in keepalived's config file. Patch [1], which fixed keepalived 2.X, broke keepalived 1.X.

[0] https://review.opendev.org/c/openstack/neutron/+/707406
[1] https://review.opendev.org/c/openstack/neutron/+/721799
[2] https://review.opendev.org/c/openstack/neutron/+/745641
[3] https://review.opendev.org/c/openstack/neutron/+/757620

** Also affects: neutron (Ubuntu)
   Importance: Undecided
       Status: New

** Changed in: neutron (Ubuntu)
       Status: New => Confirmed

** Changed in: charm-neutron-gateway
       Status: Confirmed => Invalid
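For context, a minimal keepalived 2.x config excerpt of the kind of output patch [1] produces. This is illustrative only; the interface names and addresses are made up, not taken from neutron:

    vrrp_instance VR_1 {
        state BACKUP
        interface ha-1234abcd
        virtual_router_id 1
        virtual_ipaddress {
            # no_track stops keepalived tracking the (deliberately downed)
            # gateway device, so the backup does not flap into FAULT state
            10.0.0.10/24 dev qg-5678efab no_track
        }
        virtual_routes {
            0.0.0.0/0 via 10.0.0.1 dev qg-5678efab no_track
        }
    }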
[Bug 1943863] Re: DPDK instances are failing to start: Failed to bind socket to /run/libvirt-vhost-user/vhu3ba44fdc-7c: No such file or directory
** Also affects: neutron (Ubuntu)
   Importance: Undecided
       Status: New

** Changed in: neutron (Ubuntu)
       Status: New => Invalid
[Bug 1789177] Re: RabbitMQ fails to synchronize exchanges under high load
I have tested the rocky scenario that was failing for me: Trilio on Train + OpenStack on Rocky. The Trilio functional test to snapshot a server failed without the fix and passed once python3-oslo.messaging 8.1.0-0ubuntu1~cloud2.2 was installed and services were restarted.

** Tags removed: verification-rocky-needed
** Tags added: verification-rocky-done
[Bug 1917485] [NEW] Adding RBAC role to connection does not affect existing connections
Public bug reported:

It seems that updating the role attribute of a connection has no effect on existing connections. For example, when investigating another bug I needed to disable RBAC, but to get that to take effect I needed to either restart the southbound listener or the ovn-controller.

fwiw these are the steps I took to disable rbac (excluding the restart):

# ovn-sbctl find connection
_uuid               : a3b68994-4376-4506-81eb-e23d15641305
external_ids        : {}
inactivity_probe    : 6
is_connected        : false
max_backoff         : []
other_config        : {}
read_only           : false
role                : ""
status              : {}
target              : "pssl:16642"

_uuid               : ee53c2b6-ed8b-4b21-9825-a4ecaf2bdc95
external_ids        : {}
inactivity_probe    : 6
is_connected        : false
max_backoff         : []
other_config        : {}
read_only           : false
role                : ovn-controller
status              : {}
target              : "pssl:6642"

# ovn-sbctl set connection ee53c2b6-ed8b-4b21-9825-a4ecaf2bdc95 role='""'
# ovn-sbctl find connection
_uuid               : a3b68994-4376-4506-81eb-e23d15641305
external_ids        : {}
inactivity_probe    : 6
is_connected        : false
max_backoff         : []
other_config        : {}
read_only           : false
role                : ""
status              : {}
target              : "pssl:16642"

_uuid               : ee53c2b6-ed8b-4b21-9825-a4ecaf2bdc95
external_ids        : {}
inactivity_probe    : 6
is_connected        : false
max_backoff         : []
other_config        : {}
read_only           : false
role                : ""
status              : {}
target              : "pssl:6642"
[Bug 1917475] [NEW] RBAC Permissions too strict for Port_Binding table
Public bug reported:

When using OpenStack Ussuri with OVN 20.03 and adding a floating IP address to a port, the ovn-controller on the hypervisor repeatedly reports:

2021-03-02T10:33:35.517Z|35359|ovsdb_idl|WARN|transaction error: {"details":"RBAC rules for client \"juju-eab186-zaza-d26c8c079cc7-11.project.serverstack\" role \"ovn-controller\" prohibit modification of table \"Port_Binding\".","error":"permission error"}
2021-03-02T10:33:35.518Z|35360|main|INFO|OVNSB commit failed, force recompute next time.

This seems to be because the ovn-controller needs to update the virtual_parent attribute of the port binding *2, but that attribute is not included in the list of permissions allowed by the ovn-controller role *1.

*1 https://github.com/ovn-org/ovn/blob/aa8ef5588c119fa8615d78288a7db7e3df2d6fbe/northd/ovn-northd.c#L11331-L11332
*2 https://pastebin.ubuntu.com/p/4CfcxgDgdm/

Disabling RBAC by changing the role to "" and stopping and starting the southbound db listener results in the port being immediately updated, and the floating IP can be accessed.

** Affects: ovn (Ubuntu)
     Importance: Undecided
         Status: New
[Bug 1896603] Re: ovn-octavia-provider: Cannot create listener due to alowed_cidrs validation
I have tested the package in victoria proposed (0.3.0-0ubuntu2) and it passed. I verified it by deploying the octavia charm and running its focal victoria functional tests, which create an ovn loadbalancer and check it is functional. The log of the test run is here:

https://openstack-ci-reports.ubuntu.com/artifacts/test_charm_pipeline_func_smoke/openstack/charm-octavia/775364/4/22201/consoleText.test_charm_func_smoke_21480.txt

** Tags removed: verification-victoria-needed
** Tags added: verification-victoria-done
[Bug 1896603] Re: ovn-octavia-provider: Cannot create listener due to alowed_cidrs validation
I have tested the package in groovy proposed (0.3.0-0ubuntu2) and it passed. I verified it by deploying the octavia charm and running its groovy victoria functional tests, which create an ovn loadbalancer and check it is functional. The log of the test run is here:

https://openstack-ci-reports.ubuntu.com/artifacts/test_charm_pipeline_func_smoke/openstack/charm-octavia/775364/4/22201/consoleText.test_charm_func_smoke_21480.txt

** Tags removed: verification-needed-groovy
** Tags added: verification-done-groovy
[Bug 1896603] Re: ovn-octavia-provider: Cannot create listener due to alowed_cidrs validation
https://code.launchpad.net/~gnuoy/ubuntu/+source/ovn-octavia-provider/+git/ovn-octavia-provider/+merge/397023
[Bug 1896603] Re: ovn-octavia-provider: Cannot create listener due to alowed_cidrs validation
** Description changed:

- Kuryr-Kubernetes tests running with ovn-octavia-provider started to fail
- with "Provider 'ovn' does not support a requested option: OVN provider
- does not support allowed_cidrs option" showing up in the o-api logs.
+ [Impact]

- We've tracked that to check [1] getting introduced. Apparently it's
- broken and makes the request explode even if the property isn't set at
- all. Please take a look at output from python-openstackclient [2] where
- body I used is just '{"listener": {"loadbalancer_id": "faca9a1b-30dc-
- 45cb-80ce-2ab1c26b5521", "protocol": "TCP", "protocol_port": 80,
- "admin_state_up": true}}'.
+ * Users cannot add listeners to an Octavia loadbalancer if it was created using the ovn provider
+ * This makes the ovn provider unusable in Victoria and will force people to use the more painful alternative of using the Amphora driver

- Also this is all over your gates as well, see o-api log [3]. Somehow
- ovn-octavia-provider tests skip 171 results there, so that's why it's
- green.
+ [Test Case]

- [1] https://opendev.org/openstack/ovn-octavia-provider/src/branch/master/ovn_octavia_provider/driver.py#L142
- [2] http://paste.openstack.org/show/798197/
- [3] https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_4ba/751085/7/gate/ovn-octavia-provider-v2-dsvm-scenario/4bac575/controller/logs/screen-o-api.txt
+ $ openstack loadbalancer create --provider ovn --vip-subnet-id f92fa6ca-0f29-4b61-aeb6-db052caceff5 --name test-lb
+ $ openstack loadbalancer show test-lb -c provisioning_status (Repeat until it shows as ACTIVE)
+ $ openstack loadbalancer listener create --name listener1 --protocol TCP --protocol-port 80 test-lb
+ Provider 'ovn' does not support a requested option: OVN provider does not support allowed_cidrs option (HTTP 501) (Request-ID: req-52a10944-951d-4414-8441-fe743444ed7c)
+
+ Alternatively run the focal-victoria-ha-ovn functional test in the octavia charm
+
+ [Where problems could occur]
+
+ * Problems would be isolated to the management of octavia loadbalancers within an openstack cloud. Specifically the patch fixes the checking of the allowed_cidr option when a listener is created or updated.
+
+ [Other Info]
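For illustration, a sketch of the failure mode the validation fix addresses (an assumed shape, not the actual driver code): the check must treat an unset or empty option as "not requested" rather than rejecting every listener create/update:

    from octavia_lib.api.drivers import data_models, exceptions

    def _check_allowed_cidrs(allowed_cidrs):
        # allowed_cidrs arrives as data_models.Unset when the caller never
        # sent it; only a real, non-empty list should trigger the error.
        if allowed_cidrs and not isinstance(allowed_cidrs, data_models.UnsetType):
            raise exceptions.UnsupportedOptionError(
                user_fault_string='OVN provider does not support '
                                  'allowed_cidrs option',
                operator_fault_string='OVN provider does not support '
                                      'allowed_cidrs option')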
[Bug 1896603] Re: ovn-octavia-provider: Cannot create listener due to alowed_cidrs validation
** Also affects: ovn-octavia-provider (Ubuntu)
   Importance: Undecided
       Status: New
[Bug 1904199] Re: [groovy-victoria] "gwcli /iscsi-targets/ create ..." fails with 1, GatewayError
I have tested focal and groovy, and it is only happening on groovy. I have not tried Hirsute.
[Bug 1904199] Re: [groovy-victoria] "gwcli /iscsi-targets/ create ..." fails with 1, GatewayError
I don't think this is a charm issue. It looks like an incompatibility between ceph-iscsi and python3-werkzeug in groovy.

# /usr/bin/rbd-target-api
 * Serving Flask app "rbd-target-api" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
Traceback (most recent call last):
  File "/usr/bin/rbd-target-api", line 2952, in <module>
    main()
  File "/usr/bin/rbd-target-api", line 2889, in main
    app.run(host=settings.config.api_host,
  File "/usr/lib/python3/dist-packages/flask/app.py", line 990, in run
    run_simple(host, port, self, **options)
  File "/usr/lib/python3/dist-packages/werkzeug/serving.py", line 1052, in run_simple
    inner()
  File "/usr/lib/python3/dist-packages/werkzeug/serving.py", line 996, in inner
    srv = make_server(
  File "/usr/lib/python3/dist-packages/werkzeug/serving.py", line 847, in make_server
    return ThreadedWSGIServer(
  File "/usr/lib/python3/dist-packages/werkzeug/serving.py", line 766, in __init__
    self.socket = ssl_context.wrap_socket(sock, server_side=True)
AttributeError: 'Context' object has no attribute 'wrap_socket'

** Also affects: ceph-iscsi (Ubuntu)
   Importance: Undecided
       Status: New

** Changed in: charm-ceph-iscsi
       Status: New => Invalid
[Bug 1882900] Re: Missing sqlalchemy-utils dep on ussuri
Yep, that's the traceback I'm seeing. The charm shows:

2020-06-10 12:45:57 ERROR juju-log amqp:40: Hook error:
Traceback (most recent call last):
  File "/var/lib/juju/agents/unit-masakari-0/.venv/lib/python3.6/site-packages/charms/reactive/__init__.py", line 74, in main
    bus.dispatch(restricted=restricted_mode)
  File "/var/lib/juju/agents/unit-masakari-0/.venv/lib/python3.6/site-packages/charms/reactive/bus.py", line 390, in dispatch
    _invoke(other_handlers)
  File "/var/lib/juju/agents/unit-masakari-0/.venv/lib/python3.6/site-packages/charms/reactive/bus.py", line 359, in _invoke
    handler.invoke()
  File "/var/lib/juju/agents/unit-masakari-0/.venv/lib/python3.6/site-packages/charms/reactive/bus.py", line 181, in invoke
    self._action(*args)
  File "/var/lib/juju/agents/unit-masakari-0/charm/reactive/masakari_handlers.py", line 50, in init_db
    charm_class.db_sync()
  File "/var/lib/juju/agents/unit-masakari-0/.venv/lib/python3.6/site-packages/charms_openstack/charm/core.py", line 849, in db_sync
    subprocess.check_call(self.sync_cmd)
  File "/usr/lib/python3.6/subprocess.py", line 311, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['masakari-manage', '--config-file', '/etc/masakari/masakari.conf', 'db', 'sync']' returned non-zero exit status 1.

2020-06-10 12:45:57 DEBUG amqp-relation-changed Traceback (most recent call last):
2020-06-10 12:45:57 DEBUG amqp-relation-changed   File "/var/lib/juju/agents/unit-masakari-0/charm/hooks/amqp-relation-changed", line 22, in <module>
2020-06-10 12:45:57 DEBUG amqp-relation-changed     main()
2020-06-10 12:45:57 DEBUG amqp-relation-changed   File "/var/lib/juju/agents/unit-masakari-0/.venv/lib/python3.6/site-packages/charms/reactive/__init__.py", line 74, in main
2020-06-10 12:45:57 DEBUG amqp-relation-changed     bus.dispatch(restricted=restricted_mode)
2020-06-10 12:45:57 DEBUG amqp-relation-changed   File "/var/lib/juju/agents/unit-masakari-0/.venv/lib/python3.6/site-packages/charms/reactive/bus.py", line 390, in dispatch
2020-06-10 12:45:57 DEBUG amqp-relation-changed     _invoke(other_handlers)
2020-06-10 12:45:57 DEBUG amqp-relation-changed   File "/var/lib/juju/agents/unit-masakari-0/.venv/lib/python3.6/site-packages/charms/reactive/bus.py", line 359, in _invoke
2020-06-10 12:45:57 DEBUG amqp-relation-changed     handler.invoke()
2020-06-10 12:45:57 DEBUG amqp-relation-changed   File "/var/lib/juju/agents/unit-masakari-0/.venv/lib/python3.6/site-packages/charms/reactive/bus.py", line 181, in invoke
2020-06-10 12:45:57 DEBUG amqp-relation-changed     self._action(*args)
2020-06-10 12:45:57 DEBUG amqp-relation-changed   File "/var/lib/juju/agents/unit-masakari-0/charm/reactive/masakari_handlers.py", line 50, in init_db
2020-06-10 12:45:57 DEBUG amqp-relation-changed     charm_class.db_sync()
2020-06-10 12:45:57 DEBUG amqp-relation-changed   File "/var/lib/juju/agents/unit-masakari-0/.venv/lib/python3.6/site-packages/charms_openstack/charm/core.py", line 849, in db_sync
2020-06-10 12:45:57 DEBUG amqp-relation-changed     subprocess.check_call(self.sync_cmd)
2020-06-10 12:45:57 DEBUG amqp-relation-changed   File "/usr/lib/python3.6/subprocess.py", line 311, in check_call
2020-06-10 12:45:57 DEBUG amqp-relation-changed     raise CalledProcessError(retcode, cmd)
2020-06-10 12:45:57 DEBUG amqp-relation-changed subprocess.CalledProcessError: Command '['masakari-manage', '--config-file', '/etc/masakari/masakari.conf', 'db', 'sync']' returned non-zero exit status 1.

And a manual run of masakari-manage returns:

root@juju-656c93-zaza-74a8633f51ae-9:~# masakari-manage --config-file /etc/masakari/masakari.conf db sync
2020-06-10 12:59:29.604 6755 INFO migrate.versioning.api [-] 5 -> 6...
2020-06-10 12:59:29.606 6755 INFO masakari.engine.driver [-] Loading masakari notification driver 'taskflow_driver'
2020-06-10 12:59:29.681 6755 INFO keyring.backend [-] Loading Gnome
2020-06-10 12:59:29.695 6755 INFO keyring.backend [-] Loading Google
2020-06-10 12:59:29.697 6755 INFO keyring.backend [-] Loading Windows (alt)
2020-06-10 12:59:29.699 6755 INFO keyring.backend [-] Loading file
2020-06-10 12:59:29.700 6755 INFO keyring.backend [-] Loading keyczar
2020-06-10 12:59:29.700 6755 INFO keyring.backend [-] Loading multi
2020-06-10 12:59:29.701 6755 INFO keyring.backend [-] Loading pyfs
Invalid input received: No module named 'sqlalchemy_utils'
[Bug 1882900] Re: Missing sqlalchemy-utils dep on ussuri
It seems sqlalchemy-utils may have been removed recently in error:
https://git.launchpad.net/ubuntu/+source/masakari/tree/debian/changelog?id=4d933765965f3d02cd68c696cc69cf53b7c6390d#n3
[Bug 1882900] [NEW] Missing sqlalchemy-utils dep on ussuri
Public bug reported:

The package seems to be missing a dependency on sqlalchemy-utils *1. The issue shows itself when running masakari-manage with the new 'taskflow' section enabled *2. I saw this with bionic ussuri but I assume it affects focal too.

*1 https://opendev.org/openstack/masakari/src/branch/stable/ussuri/requirements.txt#L29
*2 https://review.opendev.org/734450

** Affects: masakari (Ubuntu)
     Importance: Undecided
         Status: New

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1882900 Title: Missing sqlalchemy-utils dep on ussuri To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/masakari/+bug/1882900/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
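A quick way to confirm the missing module on an affected unit, and a workaround until the packaging dependency is restored (a sketch; python3-sqlalchemy-utils is the usual Ubuntu package name, check your archive):

$ python3 -c "import sqlalchemy_utils"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'sqlalchemy_utils'
$ sudo apt install python3-sqlalchemy-utils
$ python3 -c "import sqlalchemy_utils" && echo ok
ok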
[Bug 1874719] Re: Focal deploy creates a 'node1' node
Having looked into it further, it seems to be the name of the node that has changed.

juju deploy cs:bionic/ubuntu bionic-ubuntu
juju deploy cs:focal/ubuntu focal-ubuntu
juju run --unit bionic-ubuntu/0 "sudo apt install --yes crmsh pacemaker"
juju run --unit focal-ubuntu/0 "sudo apt install --yes crmsh pacemaker"

$ juju run --unit focal-ubuntu/0 "sudo crm status"
Cluster Summary:
  * Stack: corosync
  * Current DC: node1 (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Fri Apr 24 15:03:52 2020
  * Last change: Fri Apr 24 15:02:20 2020 by hacluster via crmd on node1
  * 1 node configured
  * 0 resource instances configured

Node List:
  * Online: [ node1 ]

Full List of Resources:
  * No resources

$ juju run --unit bionic-ubuntu/0 "sudo crm status"
Stack: corosync
Current DC: juju-27f7a7-hatest2-0 (version 1.1.18-2b07d5c5a9) - partition WITHOUT quorum
Last updated: Fri Apr 24 15:04:05 2020
Last change: Fri Apr 24 15:00:43 2020 by hacluster via crmd on juju-27f7a7-hatest2-0

1 node configured
0 resources configured

Online: [ juju-27f7a7-hatest2-0 ]

No resources

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1874719 Title: Focal deploy creates a 'node1' node To manage notifications about this bug go to: https://bugs.launchpad.net/charm-hacluster/+bug/1874719/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1874719] [NEW] Focal deploy creates a 'node1' node
Public bug reported:

Zaza testing of masakari on focal failed because the test checks that all pacemaker nodes are online. This check failed due to the appearance of a new node called 'node1' which was marked as offline. I don't know where that node came from or what it is supposed to represent, but it seems like an unwanted change in behaviour.

** Affects: charm-hacluster
     Importance: Undecided
         Status: New

** Affects: pacemaker (Ubuntu)
     Importance: Undecided
         Status: New

** Also affects: pacemaker (Ubuntu)
   Importance: Undecided
       Status: New

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1874719 Title: Focal deploy creates a 'node1' node To manage notifications about this bug go to: https://bugs.launchpad.net/charm-hacluster/+bug/1874719/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1873741] Re: Using ceph as a backing store fails on ussuri
The source option was not set properly for the ceph application leading to the python rbd lib being way ahead of the ceph cluster. ** Changed in: charm-glance Assignee: Liam Young (gnuoy) => (unassigned) ** Changed in: charm-glance Status: New => Invalid ** Changed in: glance (Ubuntu) Status: New => Invalid -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1873741 Title: Using ceph as a backing store fails on ussuri To manage notifications about this bug go to: https://bugs.launchpad.net/charm-glance/+bug/1873741/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1873741] Re: Using ceph as a backing store fails on ussuri
** Also affects: glance (Ubuntu) Importance: Undecided Status: New -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1873741 Title: Using ceph as a backing store fails on ussuri To manage notifications about this bug go to: https://bugs.launchpad.net/charm-glance/+bug/1873741/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1864838] Re: Checks fail when creating an iscsi target
** Summary changed:

- rbd pool name is hardcoded
+ Checks fail when creating an iscsi target

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1864838 Title: skipchecks=true is needed when deployed on Ubuntu To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/ceph-iscsi/+bug/1864838/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1864838] [NEW] rbd pool name is hardcoded
Public bug reported: See https://docs.ceph.com/docs/master/rbd/iscsi-target-cli/ and the line: "If not using RHEL/CentOS or using an upstream or ceph-iscsi-test kernel, the skipchecks=true argument must be used. This will avoid the Red Hat kernel and rpm checks:" ** Affects: ceph-iscsi (Ubuntu) Importance: Undecided Status: New -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1864838 Title: rbd pool name is hardcoded To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/ceph-iscsi/+bug/1864838/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1861321] [NEW] ceilometer-collector fails to stop if cannot connect to message broker
Public bug reported:

ceilometer-collector fails to stop if it cannot connect to the message broker.

To reproduce (assuming amqp is running on localhost):

1) Comment out the 'oslo_messaging_rabbit' section from /etc/ceilometer/ceilometer.conf. This will trigger ceilometer-collector to look locally for a rabbit connection
2) Start ceilometer-collector
3) Observe errors like below in /var/log/ceilometer/ceilometer-collector.log

2020-01-29 18:28:35.848 11808 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server on 127.0.0.1:5672 is unreachable: [Errno 111] Connection refused. Trying again in 32 seconds.

4) Stop ceilometer-collector
5) Check if the ceilometer-collector processes have gone

Getting ceilometer from the cloud archive mitaka pocket.

# apt-cache policy ceilometer-collector
ceilometer-collector:
  Installed: 1:6.1.5-0ubuntu1~cloud0
  Candidate: 1:6.1.5-0ubuntu1~cloud0
  Version table:
 *** 1:6.1.5-0ubuntu1~cloud0 0
        500 http://ubuntu-cloud.archive.canonical.com/ubuntu/ trusty-updates/mitaka/main amd64 Packages
        100 /var/lib/dpkg/status
     2014.1.5-0ubuntu2 0
        500 http://nova.clouds.archive.ubuntu.com/ubuntu/ trusty-updates/main amd64 Packages
     2014.1.2-0ubuntu1.1 0
        500 http://security.ubuntu.com/ubuntu/ trusty-security/main amd64 Packages
     2014.1-0ubuntu1 0
        500 http://nova.clouds.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages

** Affects: ceilometer (Ubuntu)
     Importance: Undecided
         Status: New

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1861321 Title: ceilometer-collector fails to stop if cannot connect to message broker To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/ceilometer/+bug/1861321/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
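For what it's worth, the failure mode looks like a reconnect loop that never observes the stop request. A minimal standalone illustration of that shape (hypothetical code, not ceilometer's; it mimics the "Trying again in 32 seconds" loop):

import signal
import socket
import time

running = True

def handle_term(signum, frame):
    # the stop request arrives here...
    global running
    running = False

signal.signal(signal.SIGTERM, handle_term)

while True:
    # ...but the retry loop never checks 'running', so the process
    # keeps retrying the broker connection instead of exiting
    try:
        socket.create_connection(("127.0.0.1", 5672), timeout=1)
        break
    except OSError:
        time.sleep(32)

Run this with no broker on port 5672 and send it SIGTERM: the handler fires, but the process keeps retrying, which matches the stuck collector processes seen in step 5.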
[Bug 1854718] Re: Groups of swift daemons are all forced to use the same config
Sahid pointed out that swift-init will traverse a search path and start a daemon for every config file it finds, so no change to the init script is needed. Initial tests suggest this completely covers my use case. I will continue testing and report back. I will mark the bug as invalid for the moment. Thanks Sahid!

** Changed in: cloud-archive/mitaka
       Status: Triaged => Invalid

** Changed in: cloud-archive/ocata
       Status: Triaged => Invalid

** Changed in: cloud-archive/queens
       Status: Triaged => Invalid

** Changed in: cloud-archive/rocky
       Status: Triaged => Invalid

** Changed in: cloud-archive/stein
       Status: Triaged => Invalid

** Changed in: cloud-archive/train
       Status: Triaged => Invalid

** Changed in: cloud-archive/ussuri
       Status: Triaged => Invalid

** Changed in: swift (Ubuntu Xenial)
       Status: Triaged => Invalid

** Changed in: swift (Ubuntu Bionic)
       Status: Triaged => Invalid

** Changed in: swift (Ubuntu Disco)
       Status: Triaged => Invalid

** Changed in: swift (Ubuntu Eoan)
       Status: Triaged => Invalid

** Changed in: swift (Ubuntu Focal)
       Status: Triaged => Invalid

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1854718 Title: Groups of swift daemons are all forced to use the same config To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1854718/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
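For the record, the layout swift-init discovers in my tests looks like the sketch below; the per-instance "server.N" addressing is my reading of swift-init's behaviour with numbered config files:

/etc/swift/account-server/1.conf   <- local account server
/etc/swift/account-server/2.conf   <- replication account server

# starts one account-server daemon per config file found:
swift-init account-server start
# a single instance can be addressed by its config number:
swift-init account-server.2 status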
[Bug 1854718] Re: Groups of swift daemons are all forced to use the same config
Hi Sahid, In our deployment for swift global replication we have two account services, one for local traffic and one for replication:

# cat /etc/swift/account-server/1.conf
[DEFAULT]
bind_ip = 0.0.0.0
bind_port = 6002
workers = 1

[pipeline:main]
pipeline = recon account-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

[app:account-server]
use = egg:swift#account

[account-auditor]

[account-reaper]
#
# cat /etc/swift/account-server/2.conf
[DEFAULT]
bind_ip = 0.0.0.0
bind_port = 6012
workers = 1

[pipeline:main]
pipeline = recon account-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

[app:account-server]
use = egg:swift#account
replication_server = true
#

I believe these two config files are mutually exclusive as they have different values for the same key in both the 'DEFAULT' and 'app:account-server' sections.

Similarly, I believe the config file for the local account service is incompatible with the config file for the local container service.

# cat /etc/swift/account-server/1.conf
[DEFAULT]
bind_ip = 0.0.0.0
bind_port = 6002
workers = 1

[pipeline:main]
pipeline = recon account-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

[app:account-server]
use = egg:swift#account

[account-auditor]

[account-reaper]
#
# cat /etc/swift/container-server/1.conf
[DEFAULT]
bind_ip = 0.0.0.0
bind_port = 6001
workers = 1

[pipeline:main]
pipeline = recon container-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift

[app:container-server]
use = egg:swift#container
allow_versions = true

[container-updater]

[container-auditor]

I believe these two config files are mutually exclusive as they have different values for the same key in both the 'DEFAULT' and 'pipeline:main' sections.

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1854718 Title: Groups of swift daemons are all forced to use the same config To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1854718/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1854718] Re: Groups of swift daemons are all forced to use the same config
Hi Cory, the init script update is to support swift global replication. The upstream code and the proposed changes to the charm support the feature in mitaka, so ideally the support would go right back to trusty-mitaka. -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1854718 Title: Groups of swift daemons are all forced to use the same config To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/swift/+bug/1854718/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1854718] Re: Groups of swift daemons are all forced to use the same config
** Description changed: - On swift proxy servers there are three groups of services: account, + On swift storage servers there are three groups of services: account, container and object. Each of these groups is comprised of a number of services, for instance: server, auditor, replicator etc Each service has its own init script but all the services in a group are configured to use the same group config file eg swift-account, swift- account-auditor, swift-account-reaper & swift-account-replicator all use /etc/swift/account-server.conf. Obviously this causes a problem when different services need different config. In the case of a swift cluster performing global replication the replication server need " replication_server = true" where as the auditor needs "replication_server = false" -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1854718 Title: Groups of swift daemons are all forced to use the same config To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/swift/+bug/1854718/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1854718] [NEW] Groups of swift daemons are all forced to use the same config
Public bug reported:

On swift proxy servers there are three groups of services: account, container and object. Each of these groups is comprised of a number of services, for instance: server, auditor, replicator etc. Each service has its own init script, but all the services in a group are configured to use the same group config file, eg swift-account, swift-account-auditor, swift-account-reaper & swift-account-replicator all use /etc/swift/account-server.conf. Obviously this causes a problem when different services need different config. In the case of a swift cluster performing global replication, the replication server needs "replication_server = true" whereas the auditor needs "replication_server = false".

** Affects: swift (Ubuntu)
     Importance: Undecided
         Status: New

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1854718 Title: Groups of swift daemons are all forced to use the same config To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/swift/+bug/1854718/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1834565] Re: python 3.7: wrap_socket() got an unexpected keyword argument '_context'
I can confirm that the disco proposed repository fixes this issue. I ran the OpenStack team's mojo spec for disco stein, which fails due to this bug. I then reran the test with the charms configured to install from the disco proposed repository; the bug was fixed and the tests passed.

Log from the test: http://paste.ubuntu.com/p/brSgbmsDpB/

** Tags removed: verification-needed-disco
** Tags added: verification-done-disco

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1834565 Title: python 3.7: wrap_socket() got an unexpected keyword argument '_context' To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1834565/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp
Hi Christian, Thanks for your comments. I'm sure you spotted it but just to make it clear, the issue occurs with bonded and unbonded dpdk interfaces. I've emailed upstream here *1. Thanks Liam *1 https://mail.openvswitch.org/pipermail/ovs-discuss/2019-July/048997.html -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1833713 Title: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp To manage notifications about this bug go to: https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1833713/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp
** Changed in: dpdk (Ubuntu) Status: Invalid => New -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1833713 Title: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp To manage notifications about this bug go to: https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1833713/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp
Ubuntu: eoan
DPDK pkg: 18.11.1-3
OVS DPDK pkg: 2.11.0-0ubuntu2
Kernel: 5.0.0-20-generic

If a server has an ovs bridge with a dpdk device for external network access and a network namespace attached, then sending data out of the namespace fails if jumbo frames are enabled.

Setup:

root@node-licetus:~# uname -r
5.0.0-20-generic
root@node-licetus:~# ovs-vsctl show
523eab62-8d03-4445-a7ba-7570f5027ff6
    Bridge br-test
        Port "tap1"
            Interface "tap1"
                type: internal
        Port br-test
            Interface br-test
                type: internal
        Port "dpdk-nic1"
            Interface "dpdk-nic1"
                type: dpdk
                options: {dpdk-devargs=":03:00.0"}
    ovs_version: "2.11.0"
root@node-licetus:~# ovs-vsctl get Interface dpdk-nic1 mtu
9000
root@node-licetus:~# ip netns exec ns1 ip addr show tap1
12: tap1: mtu 9000 qdisc fq_codel state UNKNOWN group default qlen 1000
    link/ether 0a:dd:76:38:52:54 brd ff:ff:ff:ff:ff:ff
    inet 10.246.112.101/21 scope global tap1
       valid_lft forever preferred_lft forever
    inet6 fe80::8dd:76ff:fe38:5254/64 scope link
       valid_lft forever preferred_lft forever

* Using iperf to send data out of the netns fails:

root@node-licetus:~# ip netns exec ns1 iperf -c 10.246.114.29
Client connecting to 10.246.114.29, TCP port 5001
TCP window size: 325 KByte (default)
[  3] local 10.246.112.101 port 51590 connected with 10.246.114.29 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.3 sec   323 KBytes   257 Kbits/sec

root@node-hippalus:~# iperf -s -m
Server listening on TCP port 5001
TCP window size: 128 KByte (default)
root@node-hippalus:~#

* Switching the direction of flow and sending data into the namespace works:

root@node-licetus:~# ip netns exec ns1 iperf -s -m
Server listening on TCP port 5001
TCP window size: 128 KByte (default)
[  4] local 10.246.112.101 port 5001 connected with 10.246.114.29 port 59454
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  6.06 GBytes  5.20 Gbits/sec
[  4] MSS size 8948 bytes (MTU 8988 bytes, unknown interface)

root@node-hippalus:~# iperf -c 10.246.112.101
Client connecting to 10.246.112.101, TCP port 5001
TCP window size: 942 KByte (default)
[  3] local 10.246.114.29 port 59454 connected with 10.246.112.101 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  6.06 GBytes  5.20 Gbits/sec

* Using iperf to send data out of the netns after dropping the tap mtu works:

root@node-licetus:~# ip netns exec ns1 ip link set dev tap1 mtu 1500
root@node-licetus:~# ip netns exec ns1 iperf -c 10.246.114.29
Client connecting to 10.246.114.29, TCP port 5001
TCP window size: 845 KByte (default)
[  3] local 10.246.112.101 port 51594 connected with 10.246.114.29 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   508 MBytes   426 Mbits/sec

root@node-hippalus:~# iperf -s -m
Server listening on TCP port 5001
TCP window size: 128 KByte (default)
[  4] local 10.246.114.29 port 5001 connected with 10.246.112.101 port 51594
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.1 sec   508 MBytes   424 Mbits/sec
[  4] MSS size 1448 bytes (MTU 1500 bytes, ethernet)

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1833713 Title: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp To manage notifications about this bug go to: https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1833713/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp
Ubuntu: eoan
DPDK pkg: 18.11.1-3
OVS DPDK pkg: 2.11.0-0ubuntu2
Kernel: 5.0.0-20-generic

If two servers each have an ovs bridge with a dpdk device for external network access and a network namespace attached, then communication between taps in the namespaces fails if jumbo frames are enabled. If on one of the servers the external nic is switched so it is no longer managed by dpdk, then service is restored.

Server 1:

root@node-licetus:~# ovs-vsctl show
1fed66c2-b7af-477d-b035-0e1d78451f6e
    Bridge br-test
        Port br-test
            Interface br-test
                type: internal
        Port "tap1"
            Interface "tap1"
                type: internal
        Port "dpdk-nic1"
            Interface "dpdk-nic1"
                type: dpdk
                options: {dpdk-devargs=":03:00.0"}
    ovs_version: "2.11.0"
root@node-licetus:~# ovs-vsctl get Interface dpdk-nic1 mtu
9000
root@node-licetus:~# ip netns exec ns1 ip addr show tap1
11: tap1: mtu 9000 qdisc fq_codel state UNKNOWN group default qlen 1000
    link/ether 56:b1:9c:a3:de:81 brd ff:ff:ff:ff:ff:ff
    inet 10.246.112.101/21 scope global tap1
       valid_lft forever preferred_lft forever
    inet6 fe80::54b1:9cff:fea3:de81/64 scope link
       valid_lft forever preferred_lft forever

Server 2:

root@node-hippalus:~# ovs-vsctl show
cd383272-d341-4be8-b2ab-17ea8cb63ae6
    Bridge br-test
        Port "dpdk-nic1"
            Interface "dpdk-nic1"
                type: dpdk
                options: {dpdk-devargs=":03:00.0"}
        Port br-test
            Interface br-test
                type: internal
        Port "tap1"
            Interface "tap1"
                type: internal
    ovs_version: "2.11.0"
root@node-hippalus:~# ovs-vsctl get Interface dpdk-nic1 mtu
9000
root@node-hippalus:~# ip netns exec ns1 ip addr show tap1
11: tap1: mtu 9000 qdisc fq_codel state UNKNOWN group default qlen 1000
    link/ether a6:f2:d8:59:d5:7d brd ff:ff:ff:ff:ff:ff
    inet 10.246.112.102/21 scope global tap1
       valid_lft forever preferred_lft forever
    inet6 fe80::a4f2:d8ff:fe59:d57d/64 scope link
       valid_lft forever preferred_lft forever

Test:

root@node-licetus:~# ip netns exec ns1 iperf -s -m
Server listening on TCP port 5001
TCP window size: 128 KByte (default)

root@node-hippalus:~# ip netns exec ns1 iperf -c 10.246.112.101
Client connecting to 10.246.112.101, TCP port 5001
TCP window size: 325 KByte (default)
[  3] local 10.246.112.102 port 52848 connected with 10.246.112.101 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.4 sec   323 KBytes   256 Kbits/sec

* If the mtu of either tap device is dropped to 1500 then the tests pass:

root@node-licetus:~# ip netns exec ns1 ip link set dev tap1 mtu 9000
root@node-licetus:~# ip netns exec ns1 iperf -s -m
Server listening on TCP port 5001
TCP window size: 128 KByte (default)
[  4] local 10.246.112.101 port 5001 connected with 10.246.112.102 port 52850
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.1 sec   502 MBytes   418 Mbits/sec
[  4] MSS size 1448 bytes (MTU 1500 bytes, ethernet)

root@node-hippalus:~# ip netns exec ns1 ip link set dev tap1 mtu 1500
root@node-hippalus:~# ip netns exec ns1 iperf -c 10.246.112.101
Client connecting to 10.246.112.101, TCP port 5001
TCP window size: 748 KByte (default)
[  3] local 10.246.112.102 port 52850 connected with 10.246.112.101 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   502 MBytes   420 Mbits/sec

* If in server 2 the dpdk device is replaced with the same physical device, not managed by dpdk, then the jumbo frame tests pass:

root@node-hippalus:~# ls -dl /sys/devices/pci:00/:00:02.0/:03:00.0/net/enp3s0f0
drwxr-xr-x 6 root root 0 Jul 8 14:04 /sys/devices/pci:00/:00:02.0/:03:00.0/net/enp3s0f0
root@node-hippalus:~# ovs-vsctl show
cd383272-d341-4be8-b2ab-17ea8cb63ae6
    Bridge br-test
        Port "tap1"
            Interface "tap1"
                type: internal
        Port br-test
            Interface br-test
                type: internal
        Port "enp3s0f0"
            Interface "enp3s0f0"
    ovs_version: "2.11.0"
root@node-hippalus:~# ip netns exec ns1 ip addr show tap1
10: tap1: mtu 9000 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether ba:39:55:e2:b8:81 brd ff:ff:ff:ff:ff:ff
    inet 10.246.112.102/21 scope global tap1
       valid_lft forever preferred_lft forever
    inet6 fe80::b839:55ff:fee2:b881/64 scope link
[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp
At some point when I was attempting to simplify the test case I dropped setting the mtu on the dpdk devices via ovs so the above test is invalid. I've marked the bug against dpdk as invalid while I redo the tests. ** Changed in: dpdk (Ubuntu) Status: New => Invalid -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1833713 Title: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp To manage notifications about this bug go to: https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1833713/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1833713] Re: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp
Given the above I am going to mark this as affecting the dpdk package rather than the charm.

** Also affects: dpdk (Ubuntu)
   Importance: Undecided
       Status: New

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1833713 Title: Metadata is broken with dpdk bonding, jumbo frames and metadata from qdhcp To manage notifications about this bug go to: https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1833713/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1828534] Re: [19.04][Queens -> Rocky] Upgrading to Rocky resulted in "Services not running that should be: designate-producer"
I think this is a packaging bug ** Also affects: designate (Ubuntu) Importance: Undecided Status: New ** Changed in: charm-designate Status: Triaged => Invalid ** Changed in: charm-designate Assignee: Liam Young (gnuoy) => (unassigned) -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1828534 Title: [19.04][Queens -> Rocky] Upgrading to Rocky resulted in "Services not running that should be: designate-producer" To manage notifications about this bug go to: https://bugs.launchpad.net/charm-designate/+bug/1828534/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1832075] Re: [19.04][Queens -> Rocky] python3-pymysql is not installed before use
I haven't been able to reproduce this. Could you retry it? Also, could you confirm the version being upgraded to, as it's slightly unclear whether the error occurred on upgrade from Queens to Rocky (as the bug title says) or Rocky to Stein (as the bug description implies with "Setting up openstack-dashboard (3:15.0.0-0ubuntu1~cloud0) ..."). Thanks.

** Changed in: charm-openstack-dashboard
     Assignee: Liam Young (gnuoy) => (unassigned)

** Changed in: charm-openstack-dashboard
       Status: New => Incomplete

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1832075 Title: [19.04][Queens -> Rocky] python3-pymysql is not installed before use To manage notifications about this bug go to: https://bugs.launchpad.net/charm-openstack-dashboard/+bug/1832075/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1832075] Re: [19.04][Queens -> Rocky] python3-pymysql is not installed before use
** Changed in: charm-openstack-dashboard Assignee: (unassigned) => Liam Young (gnuoy) -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1832075 Title: [19.04][Queens -> Rocky] python3-pymysql is not installed before use To manage notifications about this bug go to: https://bugs.launchpad.net/charm-openstack-dashboard/+bug/1832075/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1805332] Re: [Swift backend] Upload image hit error: Unicode-objects must be encoded before hashing
The package from rocky-proposed worked for me. Version info below:

python3-glance-store:
  Installed: 0.26.1-0ubuntu2.1~cloud0
  Candidate: 0.26.1-0ubuntu2.1~cloud0
  Version table:
 *** 0.26.1-0ubuntu2.1~cloud0 500
        500 http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-proposed/rocky/main amd64 Packages
        100 /var/lib/dpkg/status
     0.26.1-0ubuntu2~cloud0 500
        500 http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-updates/rocky/main amd64 Packages
     0.23.0-0ubuntu1 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu bionic/universe amd64 Packages

Test output:

$ openstack image create --public --file /home/ubuntu/images/bionic-server-cloudimg-amd64.img bionic-test
500 Internal Server Error: The server has either erred or is incapable of performing the requested operation. (HTTP 500)

$ juju run --unit glance/0 "apt-cache policy python3-glance-store"
python3-glance-store:
  Installed: 0.26.1-0ubuntu2~cloud0
  Candidate: 0.26.1-0ubuntu2~cloud0
  Version table:
 *** 0.26.1-0ubuntu2~cloud0 500
        500 http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-updates/rocky/main amd64 Packages
        100 /var/lib/dpkg/status
     0.23.0-0ubuntu1 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu bionic/universe amd64 Packages

$ juju run --unit glance/0 "add-apt-repository cloud-archive:rocky-proposed --yes --update"
...
$ juju run --unit glance/0 "apt install --yes python3-glance-store; systemctl restart glance-api"
...
(clients) ubuntu@gnuoy-bastion2:~/branches/nova-compute$ juju run --unit glance/0 "apt-cache policy python3-glance-store"
python3-glance-store:
  Installed: 0.26.1-0ubuntu2.1~cloud0
  Candidate: 0.26.1-0ubuntu2.1~cloud0
  Version table:
 *** 0.26.1-0ubuntu2.1~cloud0 500
        500 http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-proposed/rocky/main amd64 Packages
        100 /var/lib/dpkg/status
     0.26.1-0ubuntu2~cloud0 500
        500 http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-updates/rocky/main amd64 Packages
     0.23.0-0ubuntu1 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu bionic/universe amd64 Packages

$ openstack image create --public --file /home/ubuntu/images/bionic-server-cloudimg-amd64.img bionic-test
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | c8994590c7d61dc68922e461686ef936                     |
| container_format | bare                                                 |
| created_at       | 2019-05-22T07:41:28Z                                 |
| disk_format      | raw                                                  |
| file             | /v2/images/788db968-ea48-4b4f-8c91-4e15d23dbe4c/file |
| id               | 788db968-ea48-4b4f-8c91-4e15d23dbe4c                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | bionic-test                                          |
| owner            | 3d4ca9d5799546bd852db00ee6d5d4c0
[Bug 1805332] Re: [Swift backend] Upload image hit error: Unicode-objects must be encoded before hashing
The cosmic package worked for me too. Version info below:

python3-glance-store:
  Installed: 0.26.1-0ubuntu2.1
  Candidate: 0.26.1-0ubuntu2.1
  Version table:
 *** 0.26.1-0ubuntu2.1 500
        500 http://archive.ubuntu.com/ubuntu cosmic-proposed/universe amd64 Packages
        100 /var/lib/dpkg/status
     0.26.1-0ubuntu2 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu cosmic/universe amd64 Packages

Test output:

$ juju run --unit glance/0 "apt-cache policy python3-glance-store"
python3-glance-store:
  Installed: 0.26.1-0ubuntu2
  Candidate: 0.26.1-0ubuntu2
  Version table:
 *** 0.26.1-0ubuntu2 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu cosmic/universe amd64 Packages
        100 /var/lib/dpkg/status

$ openstack image create --public --file /home/ubuntu/images/bionic-server-cloudimg-amd64.img bionic-test
500 Internal Server Error: The server has either erred or is incapable of performing the requested operation. (HTTP 500)

* Enable proposed, upgrade python3-glance-store and restart the glance-api service

$ juju run --unit glance/0 "apt-cache policy python3-glance-store"
python3-glance-store:
  Installed: 0.26.1-0ubuntu2.1
  Candidate: 0.26.1-0ubuntu2.1
  Version table:
 *** 0.26.1-0ubuntu2.1 500
        500 http://archive.ubuntu.com/ubuntu cosmic-proposed/universe amd64 Packages
        100 /var/lib/dpkg/status
     0.26.1-0ubuntu2 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu cosmic/universe amd64 Packages

$ openstack image create --public --file /home/ubuntu/images/bionic-server-cloudimg-amd64.img bionic-test
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | c8994590c7d61dc68922e461686ef936                     |
| container_format | bare                                                 |
| created_at       | 2019-05-22T07:10:26Z                                 |
| disk_format      | raw                                                  |
| file             | /v2/images/eca7aeb5-4c16-4bb2-ad9a-53acfb3c18ca/file |
| id               | eca7aeb5-4c16-4bb2-ad9a-53acfb3c18ca                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | bionic-test                                          |
| owner            | 6c8b914f26bc40d9aae58729b818e398                     |
| properties       | os_hash_algo='sha512', os_hash_value='be4993640deb7eb99b07667213b1fe3a9145df2c0ed5c72cf786a621fe64e93fb543cbb3fafa9a130988b684da432d2a55493c50e77a9dfe336e7ed996be92d9', os_hidden='False' |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size
[Bug 1805332] Re: [Swift backend] Upload image hit error: Unicode-objects must be encoded before hashing
The disco package worked for me too. Version info below:

# apt-cache policy python3-glance-store
python3-glance-store:
  Installed: 0.28.0-0ubuntu1.1
  Candidate: 0.28.0-0ubuntu1.1
  Version table:
 *** 0.28.0-0ubuntu1.1 500
        500 http://archive.ubuntu.com/ubuntu disco-proposed/main amd64 Packages
        100 /var/lib/dpkg/status
     0.28.0-0ubuntu1 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu disco/main amd64 Packages

** Tags removed: verification-needed-disco
** Tags added: verification-done-disco

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1805332 Title: [Swift backend] Upload image hit error: Unicode-objects must be encoded before hashing To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1805332/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1805332] Re: [Swift backend] Upload image hit error: Unicode-objects must be encoded before hashing
Looks good to me. Tested 0.28.0-0ubuntu1.1~cloud0 from cloud-archive:stein-proposed

$ openstack image create --public --file /home/ubuntu/images/bionic-server-cloudimg-amd64.img bionic-test
500 Internal Server Error: The server has either erred or is incapable of performing the requested operation. (HTTP 500)

$ juju run --unit glance/0 "add-apt-repository cloud-archive:stein-proposed --yes --update"
Reading package lists...
Building dependency tree...
Reading state information...
ubuntu-cloud-keyring is already the newest version (2018.09.18.1~18.04.0).
The following package was automatically installed and is no longer required:
  grub-pc-bin
Use 'apt autoremove' to remove it.
0 upgraded, 0 newly installed, 0 to remove and 17 not upgraded.
Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Hit:2 http://nova.clouds.archive.ubuntu.com/ubuntu bionic InRelease
Get:3 http://nova.clouds.archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Ign:4 http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-updates/stein InRelease
Ign:5 http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-proposed/stein InRelease
Get:6 http://nova.clouds.archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
Get:7 http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-updates/stein Release [7882 B]
Get:8 http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-proposed/stein Release [7884 B]
Get:9 http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-updates/stein Release.gpg [543 B]
Get:10 http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-proposed/stein Release.gpg [543 B]
Get:11 http://ubuntu-cloud.archive.canonical.com/ubuntu bionic-proposed/stein/main amd64 Packages [179 kB]
Fetched 448 kB in 1s (358 kB/s)
Reading package lists...

$ juju run --unit glance/0 "apt install --yes python3-glance-store; systemctl restart glance-api"
Reading package lists...
Building dependency tree...
Reading state information...
The following package was automatically installed and is no longer required:
  grub-pc-bin
Use 'apt autoremove' to
[Bug 1805332] Re: [Swift backend] Upload image hit error: Unicode-objects must be encoded before hashing
It does not appear to have been fixed upstream yet as this patch is still in place at master: https://github.com/openstack/glance_store/blob/master/glance_store/_drivers/swift/store.py#L1635 -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1805332 Title: [Swift backend] Upload image hit error: Unicode-objects must be encoded before hashing To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1805332/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1805332] Re: [Swift backend] Upload image hit error: Unicode-objects must be encoded before hashing
** Description changed: [Impact] If we upload a large image (larger than 1G), the glance_store will hit a Unicode error. To fix this a patch has been merged in upstream master and backported to stable rocky. [Test Case] + Deploy glance related to swift-proxy using the object-store relation. Then attempt to upload a large image (not cirros) + + $ openstack image create --public --file /home/ubuntu/images/bionic-server-cloudimg-amd64.img bionic-test + 500 Internal Server Error: The server has either erred or is incapable of performing the requested operation. (HTTP 500) + + If the patch is manually applied and glance-api restarted then the above + command succeeds. + In order to avoid regression of existing consumers, the OpenStack team will run their continuous integration test against the packages that are in -proposed. A successful run of all available tests will be required before the proposed packages can be let into -updates. The OpenStack team will be in charge of attaching the output summary of the executed tests. The OpenStack team members will not mark ‘verification-done’ until this has happened. [Regression Potential] In order to mitigate the regression potential, the results of the aforementioned tests are attached to this bug. [Discussion] n/a [Original Description] env: master branch, Glance using swift backend. We hit a strange error, if we upload a large image (larger than 1G), the glance_store will hit a error:Unicode-objects must be encoded before hashing. But if the image is small enough, the error won't happen. error log: https://www.irccloud.com/pastebin/jP3DapNy/ After dig into the code, it appears that when chunk reading the image data, the date piece may be non-byte, so the checksum.updating will raise the error. encoding the date piece to ensure it's byte can solve the problem. -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1805332 Title: [Swift backend] Upload image hit error: Unicode-objects must be encoded before hashing To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1805332/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
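The shape of the fix, as I read the merged patch, is to coerce any str chunk to bytes before feeding the checksum. A sketch of the pattern (not the exact upstream diff):

import hashlib

def update_checksum(checksum, chunk):
    # on python3 a chunk read back from swift can arrive as str;
    # hashlib's update() only accepts bytes, hence the error
    if isinstance(chunk, str):
        chunk = chunk.encode('utf-8')
    checksum.update(chunk)

checksum = hashlib.md5()
update_checksum(checksum, 'image data piece')  # would raise TypeError without the encode
print(checksum.hexdigest())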
[Bug 1825356] Re: libvirt silently fails to attach a cinder ceph volume
Hi koalinux, please can you provide the requested logs or remove the field-critical tag?

** Changed in: cloud-archive
       Status: New => Incomplete

** Changed in: ceph (Ubuntu)
       Status: New => Incomplete

** Changed in: libvirt (Ubuntu)
       Status: New => Incomplete

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1825356 Title: libvirt silently fails to attach a cinder ceph volume To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1825356/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1808951] Re: python3 + Fedora + SSL + wsgi nova deployment, nova api returns RecursionError: maximum recursion depth exceeded while calling a Python object
** Description changed: Description:- So while testing python3 with Fedora in [1], Found an issue while running nova-api behind wsgi. It fails with below Traceback:- 2018-12-18 07:41:55.364 26870 INFO nova.api.openstack.requestlog [req-e1af4808-ecd8-47c7-9568-a5dd9691c2c9 - - - - -] 127.0.0.1 "GET /v2.1/servers/detail?all_tenants=True&deleted=True" status: 500 len: 0 microversion: - time: 0.007297 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack [req-e1af4808-ecd8-47c7-9568-a5dd9691c2c9 - - - - -] Caught error: maximum recursion depth exceeded while calling a Python object: RecursionError: maximum recursion depth exceeded while calling a Python object 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack Traceback (most recent call last): 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack File "/usr/lib/python3.6/site-packages/nova/api/openstack/__init__.py", line 94, in __call__ 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack return req.get_response(self.application) 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack File "/usr/lib/python3.6/site-packages/webob/request.py", line 1313, in send 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack application, catch_exc_info=False) 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack File "/usr/lib/python3.6/site-packages/webob/request.py", line 1277, in call_application 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack app_iter = application(self.environ, start_response) 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack File "/usr/lib/python3.6/site-packages/webob/dec.py", line 129, in __call__ 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack resp = self.call_func(req, *args, **kw) 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack File "/usr/lib/python3.6/site-packages/webob/dec.py", line 193, in call_func 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack return self.func(req, *args, **kwargs) 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack File "/usr/lib/python3.6/site-packages/nova/api/openstack/requestlog.py", line 92, in __call__ 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack self._log_req(req, res, start) 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack File "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 220, in __exit__ 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack self.force_reraise() 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack File "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 196, in force_reraise 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack six.reraise(self.type_, self.value, self.tb) 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack File "/usr/lib/python3.6/site-packages/six.py", line 693, in reraise 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack raise value 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack File "/usr/lib/python3.6/site-packages/nova/api/openstack/requestlog.py", line 87, in __call__ 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack res = req.get_response(self.application) 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack File "/usr/lib/python3.6/site-packages/webob/request.py", line 1313, in send 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack application, catch_exc_info=False) 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack File "/usr/lib/python3.6/site-packages/webob/request.py", line 1277, in call_application 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack app_iter = application(self.environ, start_response) 2018-12-18 
07:41:55.364 26870 ERROR nova.api.openstack File "/usr/lib/python3.6/site-packages/webob/dec.py", line 143, in __call__ 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack return resp(environ, start_response) 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack File "/usr/lib/python3.6/site-packages/webob/dec.py", line 129, in __call__ 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack resp = self.call_func(req, *args, **kw) 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack File "/usr/lib/python3.6/site-packages/webob/dec.py", line 193, in call_func 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack return self.func(req, *args, **kwargs) 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack File "/usr/lib/python3.6/site-packages/osprofiler/web.py", line 112, in __call__ 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack return request.get_response(self.application) 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack File "/usr/lib/python3.6/site-packages/webob/request.py", line 1313, in send 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack application, catch_exc_info=False) 2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack File "/usr/lib/python3.6/site-pac
[Bug 1815844] Re: iscsi multipath dm-N device only used on first volume attachment
I don't think this is related to the charm, it looks like a bug in upstream nova. ** Also affects: nova (Ubuntu) Importance: Undecided Status: New ** No longer affects: nova (Ubuntu) ** Also affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1815844 Title: iscsi multipath dm-N device only used on first volume attachment To manage notifications about this bug go to: https://bugs.launchpad.net/charm-nova-compute/+bug/1815844/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1799406] Re: [SRU] Alarms fail on Rocky
** Changed in: charm-aodh Status: New => Invalid ** Changed in: oslo.i18n Status: New => Invalid -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1799406 Title: [SRU] Alarms fail on Rocky To manage notifications about this bug go to: https://bugs.launchpad.net/aodh/+bug/1799406/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1800601] Re: [SRU] Infinite recursion in Python 3
I have successfully run the mojo spec which was failing (specs/full_stack/next_openstack_upgrade/queens). This boots an instance on rocky which indirectly queries glance: https://pastebin.canonical.com/p/7sVjF6QSNm/ ** Tags removed: verification-rocky-needed ** Tags added: verification-rocky-done -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1800601 Title: [SRU] Infinite recursion in Python 3 To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1800601/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1800601] Re: [SRU] Infinite recursion in Python 3
** Description changed: Hi, When running unit tests under Python 3.7 when building the Rocky Debian package in Sid, I get a never ending recursion. Please see the Debian bug report: https://bugs.debian.org/911947 Basically, it's this: | File "/build/1st/glance-17.0.0/glance/domain/__init__.py", line 316, in keys | return dict(self).keys() | File "/build/1st/glance-17.0.0/glance/domain/__init__.py", line 316, in keys | return dict(self).keys() | File "/build/1st/glance-17.0.0/glance/domain/__init__.py", line 316, in keys | return dict(self).keys() | RecursionError: maximum recursion depth exceeded while calling a Python object - == Ubuntu SRU details == [Impact] - An infinite recursion error occurs when running Python 3.6 glance from rocky. This issue has also been seen when running python 3.7 unit tests. + An infinite recursion error occurs when running Python 3.6 glance from rocky. This issue has also been seen when running python 3.7 unit tests. + The error has also been seen in a Rocky deployment and causes the glance api service to return 500 errors. [Test Case] - + Deploy the glance charm on bionic then upgrade it to rocky by updating the + openstack-origin to cloud:bionic-rocky/proposed [Regression Potential] Fairly low. The patch is a minimal fix and will be fully exercised by the OpenStack charms team. -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1800601 Title: [SRU] Infinite recursion in Python 3 To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1800601/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
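For anyone wondering why that keys() implementation recurses: dict(self) itself calls keys() when built from a mapping, so the two call each other forever. A minimal standalone reproduction (hypothetical class, not the glance code):

from collections.abc import Mapping

class ImageProxy(Mapping):
    def __init__(self, data):
        self._data = data

    def __getitem__(self, key):
        return self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

    def keys(self):
        # dict() sees a mapping and calls its keys() to copy it,
        # which lands straight back here
        return dict(self).keys()

try:
    ImageProxy({'name': 'bionic-test'}).keys()
except RecursionError as e:
    print(e)  # maximum recursion depth exceeded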
[Bug 1800601] Re: [SRU] Infinite recursion in Python 3
Just to be clear, when I say I'm hitting it I mean I'm hitting it on a deployed system, not just in unit tests. -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1800601 Title: [SRU] Infinite recursion in Python 3 To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1800601/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1754508] Re: [next][queens] Horizon Errors out on first login or first access to a given dashboard unit: TypeError: coercing to Unicode: need string or buffer, NoneType found
Marking the charm bug as invalid in light of the packaging fix.

** Changed in: charm-openstack-dashboard
       Status: In Progress => Invalid

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1754508 Title: [next][queens] Horizon Errors out on first login or first access to a given dashboard unit: TypeError: coercing to Unicode: need string or buffer, NoneType found To manage notifications about this bug go to: https://bugs.launchpad.net/charm-openstack-dashboard/+bug/1754508/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1414925] Re: ntpd seems to add offsets instead of subtracting them
** Changed in: ntp (Ubuntu) Assignee: Liam Young (gnuoy) => (unassigned) -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1414925 Title: ntpd seems to add offsets instead of subtracting them To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/ntp/+bug/1414925/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1657305] Re: percona cluster getting wrong private ip
I'm going to mark this as invalid against nova-compute, as nova-compute no longer has a relation with percona (Icehouse+ I believe).

** Changed in: charm-nova-compute
       Status: Triaged => Invalid

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1657305 Title: percona cluster getting wrong private ip To manage notifications about this bug go to: https://bugs.launchpad.net/charm-nova-compute/+bug/1657305/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1720378] Re: Two processes can bind to the same port
kernel-bug-exists-upstream ** Changed in: linux (Ubuntu) Status: Incomplete => Confirmed -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1720378 Title: Two processes can bind to the same port To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1720378/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1720378] Re: Two processes can bind to the same port
I've retested with linux-image-4.4.9-040409-generic_4.4.9-040409.201605041832_amd64.deb but the issue seems to persist. -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1720378 Title: Two processes can bind to the same port To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1720378/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1720378] Re: Two processes can bind to the same port
Thanks jsalisbury. I'll try on another kernel now. The steps to reproduce on xenial:

sudo su -
apt install --yes apache2 haproxy
echo "
listen test
    bind *:8776
    bind :::8776
" > /etc/haproxy/haproxy.cfg
echo "
Listen 8776
DocumentRoot /var/www/html
" > /etc/apache2/sites-enabled/01-test.conf
systemctl restart haproxy
systemctl restart apache2
netstat -peanut | grep 8776

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1720378 Title: Two processes can bind to the same port To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1720378/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1720378] Re: Two processes can bind to the same port
Thanks for the suggestions. I will try with an upstream kernel and also add steps for reproducing -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1720378 Title: Two processes can bind to the same port To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1720378/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1720378] [NEW] Two processes can bind to the same port
Public bug reported:

On both xenial and zesty apache and haproxy seem to be able to bind to the same port:

# netstat -peanut | grep 8776
tcp    0  0 0.0.0.0:8776  0.0.0.0:*  LISTEN  0  76856  26190/haproxy
tcp6   0  0 :::8776       :::*       LISTEN  0  76749  26254/apache2
tcp6   0  0 :::8776       :::*       LISTEN  0  76857  26190/haproxy

I thought this should not be possible?

ProblemType: Bug
DistroRelease: Ubuntu 16.04
Package: linux-image-4.4.0-96-generic 4.4.0-96.119
ProcVersionSignature: Ubuntu 4.4.0-96.119-generic 4.4.83
Uname: Linux 4.4.0-96-generic x86_64
AlsaDevices:
 total 0
 crw-rw 1 root audio 116, 1 Sep 29 11:46 seq
 crw-rw 1 root audio 116, 33 Sep 29 11:46 timer
AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
ApportVersion: 2.20.1-0ubuntu2.10
Architecture: amd64
ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', '/dev/snd/timer'] failed with exit code 1:
CRDA: N/A
Date: Fri Sep 29 14:15:26 2017
Ec2AMI: ami-0193
Ec2AMIManifest: FIXME
Ec2AvailabilityZone: nova
Ec2InstanceType: m1.blue
Ec2Kernel: unavailable
Ec2Ramdisk: unavailable
IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
Lsusb:
 Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd
 Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
MachineType: OpenStack Foundation OpenStack Nova
PciMultimedia:
ProcEnviron:
 TERM=screen
 PATH=(custom, no user)
 LANG=en_US.UTF-8
 SHELL=/bin/bash
ProcFB:
ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-4.4.0-96-generic root=LABEL=cloudimg-rootfs ro console=tty1 console=ttyS0
RelatedPackageVersions:
 linux-restricted-modules-4.4.0-96-generic N/A
 linux-backports-modules-4.4.0-96-generic N/A
 linux-firmware N/A
RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
SourcePackage: linux
UpgradeStatus: No upgrade log present (probably fresh install)
dmi.bios.date: 04/01/2014
dmi.bios.vendor: SeaBIOS
dmi.bios.version: 1.10.1-1ubuntu1~cloud0
dmi.chassis.type: 1
dmi.chassis.vendor: QEMU
dmi.chassis.version: pc-i440fx-zesty
dmi.modalias: dmi:bvnSeaBIOS:bvr1.10.1-1ubuntu1~cloud0:bd04/01/2014:svnOpenStackFoundation:pnOpenStackNova:pvr15.0.2:cvnQEMU:ct1:cvrpc-i440fx-zesty:
dmi.product.name: OpenStack Nova
dmi.product.version: 15.0.2
dmi.sys.vendor: OpenStack Foundation

** Affects: linux (Ubuntu)
     Importance: Undecided
         Status: Confirmed

** Tags: amd64 apport-bug ec2-images uosci xenial

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1720378 Title: Two processes can bind to the same port To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1720378/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
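One kernel mechanism that can produce exactly this picture is SO_REUSEPORT (available since Linux 3.9): two sockets owned by the same uid may both listen on one port. Whether haproxy and apache actually set it here is an assumption I haven't verified, but the behaviour itself is easy to demonstrate:

import socket

def listener(port):
    s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    # with SO_REUSEPORT set on both sockets, the second bind succeeds
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("::", port))
    s.listen(1)
    return s

a = listener(8776)
b = listener(8776)  # no EADDRINUSE: both now show up in netstat on :::8776
print(a.getsockname(), b.getsockname())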
[Bug 1720215] Re: [artful] apachectl: Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:8776
Ok, so this has been broken in the charm for a while. The package-shipped vhost should be disabled by the charm, but due to a bug that is not happening. However, xenial and zesty both seem to allow apache to start when it has a conflicting port with haproxy. If haproxy is running and bound to 8776 on both IPv4 and IPv6, then apache starts and creates a duplicate bind to 8776 on the IPv6 address. Bug #1720378 -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1720215 Title: [artful] apachectl: Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:8776 To manage notifications about this bug go to: https://bugs.launchpad.net/charm-cinder/+bug/1720215/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
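A quick way to confirm the kernel-level behaviour independently of apache: while haproxy holds the port, attempt a plain IPv6 bind to 8776 from another process. A socket that does not set SO_REUSEPORT should be refused with EADDRINUSE, so a successful bind here reproduces the duplicate-bind symptom (a minimal probe sketch; the port number comes from the bug, everything else is illustrative):

    import socket

    s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    # deliberately NOT setting SO_REUSEPORT: this bind is expected to fail
    try:
        s.bind(("::", 8776))
        s.listen(5)
        print("bind succeeded - duplicate bind allowed")
    except OSError as exc:
        print("bind refused as expected:", exc)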
[Bug 1720215] Re: [artful] apachectl: Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:8776
** Changed in: charm-cinder
     Assignee: (unassigned) => Liam Young (gnuoy)

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1720215 Title: [artful] apachectl: Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:8776 To manage notifications about this bug go to: https://bugs.launchpad.net/charm-cinder/+bug/1720215/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1598208] Re: murano uses deprecated psutil.NUM_CPUS
** Also affects: murano (Ubuntu)
   Importance: Undecided
       Status: New

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1598208 Title: murano uses deprecated psutil.NUM_CPUS To manage notifications about this bug go to: https://bugs.launchpad.net/murano/+bug/1598208/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
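For reference, the deprecation named in the bug title: psutil 2.0 deprecated the module-level NUM_CPUS constant in favour of a function, so the murano fix amounts to the one-line swap sketched below (the actual call site in murano is not shown here):

    import psutil

    # old, deprecated since psutil 2.0 and later removed:
    #   ncpu = psutil.NUM_CPUS
    # replacement:
    ncpu = psutil.cpu_count()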
[Bug 1637138] Re: The trove-guest agent service does not start on xenial
I think this is fixed in yakkety. -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1637138 Title: The trove-guest agent service does not start on xenial To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1637138/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1637138] [NEW] The trove-guest agent service does not start on xenial
Public bug reported: When starting the trove guest agent on xenial it fails with:

2016-10-27 09:31:38.674 1366 CRITICAL root [-] NameError: global name '_LE' is not defined
2016-10-27 09:31:38.674 1366 ERROR root Traceback (most recent call last):
2016-10-27 09:31:38.674 1366 ERROR root   File "/usr/bin/trove-guestagent", line 10, in <module>
2016-10-27 09:31:38.674 1366 ERROR root     sys.exit(main())
2016-10-27 09:31:38.674 1366 ERROR root   File "/usr/lib/python2.7/dist-packages/trove/cmd/guest.py", line 48, in main
2016-10-27 09:31:38.674 1366 ERROR root     msg = (_LE("The guest_id parameter is not set. guest_info.conf "
2016-10-27 09:31:38.674 1366 ERROR root NameError: global name '_LE' is not defined
2016-10-27 09:31:38.674 1366 ERROR root

The import seems to be missing on xenial; upstream master has it: https://github.com/openstack/trove/blob/master/trove/cmd/guest.py#L27

** Affects: cloud-archive
   Importance: High
       Status: Triaged

** Affects: cloud-archive/mitaka
   Importance: High
       Status: Triaged

** Affects: cloud-archive/newton
   Importance: High
       Status: Triaged

** Affects: openstack-trove (Ubuntu)
   Importance: High
       Status: Triaged

** Affects: openstack-trove (Ubuntu Xenial)
   Importance: High
       Status: Triaged

** Affects: openstack-trove (Ubuntu Yakkety)
   Importance: High
       Status: Triaged

** Affects: openstack-trove (Ubuntu Zesty)
   Importance: High
       Status: Triaged

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1637138 Title: The trove-guest agent service does not start on xenial To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1637138/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
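The missing name is an i18n translation marker that guest.py uses at line 48 without importing. Judging from the linked upstream file, the fix is to restore the import below (module path taken from upstream trove master; not verified against the packaged xenial source):

    # at the top of trove/cmd/guest.py
    from trove.common.i18n import _LE  # log-message translation marker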
[Bug 1634475] [NEW] Files missing from package
Public bug reported: After the package is installed, some of the files that support the initialisation of the database seem to be missing, as does the policy.json. The files that are missing:

/usr/lib/python2.7/dist-packages/mistral/actions/openstack/mapping.json
/usr/lib/python2.7/dist-packages/mistral/resources (directory + contents)
/etc/mistral/policy.json

Without the DB files the schema initialisation fails:

# mistral-db-manage --config-file /etc/mistral/mistral.conf populate
Traceback (most recent call last):
  File "/usr/bin/mistral-db-manage", line 10, in <module>
    sys.exit(main())
  File "/usr/lib/python2.7/dist-packages/mistral/db/sqlalchemy/migration/cli.py", line 129, in main
    CONF.command.func(config, CONF.command.name)
  File "/usr/lib/python2.7/dist-packages/mistral/db/sqlalchemy/migration/cli.py", line 70, in do_populate
    action_manager.sync_db()
  File "/usr/lib/python2.7/dist-packages/mistral/services/action_manager.py", line 82, in sync_db
    register_action_classes()
  File "/usr/lib/python2.7/dist-packages/mistral/services/action_manager.py", line 128, in register_action_classes
    _register_dynamic_action_classes()
  File "/usr/lib/python2.7/dist-packages/mistral/services/action_manager.py", line 88, in _register_dynamic_action_classes
    actions = generator.create_actions()
  File "/usr/lib/python2.7/dist-packages/mistral/actions/openstack/action_generator/base.py", line 77, in create_actions
    mapping = get_mapping()
  File "/usr/lib/python2.7/dist-packages/mistral/actions/openstack/action_generator/base.py", line 45, in get_mapping
    MAPPING_PATH)).read())
IOError: [Errno 2] No such file or directory: '/usr/lib/python2.7/dist-packages/mistral/actions/openstack/mapping.json'

And without the policy.json clients cannot authorise.

To reproduce the db issue:

lxc launch ubuntu-daily:yakkety
lxc exec <container> bash
apt update
apt install mistral-api
mistral-db-manage --config-file /etc/mistral/mistral.conf populate

** Affects: mistral (Ubuntu)
   Importance: Undecided
       Status: New

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1634475 Title: Files missing from package To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/mistral/+bug/1634475/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
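A quick check for the files the report lists as missing, runnable on an affected install (paths copied from the report; nothing else is assumed):

    import os

    candidates = [
        "/usr/lib/python2.7/dist-packages/mistral/actions/openstack/mapping.json",
        "/usr/lib/python2.7/dist-packages/mistral/resources",
        "/etc/mistral/policy.json",
    ]

    # print PRESENT/MISSING for each path the bug says the package dropped
    for path in candidates:
        print(path, "->", "present" if os.path.exists(path) else "MISSING")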
[Bug 1604501] Re: ceph-osd fails to initialize when encrypt is enabled
** Changed in: ceph-osd (Juju Charms Collection)
    Milestone: 16.07 => 16.10

** Changed in: ceph (Juju Charms Collection)
    Milestone: 16.07 => 16.10

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1604501 Title: ceph-osd fails to initialize when encrypt is enabled To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1604501/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1578351] Re: mitaka ksclient fails to connect to v6 keystone
** Changed in: keystone (Juju Charms Collection)
       Status: Fix Committed => Fix Released

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1578351 Title: mitaka ksclient fails to connect to v6 keystone To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-archive/+bug/1578351/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1546565] Re: Ownership/Permissions of vhost_user sockets for openvswitch-dpdk make them unusable by libvirt/qemu/kvm
** Changed in: neutron-openvswitch (Juju Charms Collection)
       Status: Fix Committed => Fix Released

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1546565 Title: Ownership/Permissions of vhost_user sockets for openvswitch-dpdk make them unusable by libvirt/qemu/kvm To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/dpdk/+bug/1546565/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1488453] Re: Package postinst always fail on first install when using systemd
** No longer affects: hacluster (Juju Charms Collection) -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1488453 Title: Package postinst always fail on first install when using systemd To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/openhpi/+bug/1488453/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1558642] Re: Update charm to use new pause/resume helpers
** Changed in: cinder (Juju Charms Collection)
       Status: In Progress => Fix Committed

** Changed in: neutron-gateway (Ubuntu)
       Status: In Progress => Fix Committed

** Changed in: rabbitmq-server (Juju Charms Collection)
       Status: In Progress => Fix Committed

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1558642 Title: Update charm to use new pause/resume helpers To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/neutron-gateway/+bug/1558642/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1567272] Re: systemd claims libvirt-bin is dead on xenial
Yes, removing '-d' fixed it, thank you.

** Changed in: libvirt (Ubuntu)
       Status: New => Invalid

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1567272 Title: systemd claims libvirt-bin is dead on xenial To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1567272/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
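For anyone hitting the same symptom: with '-d' libvirtd daemonizes, so the process systemd started exits immediately and the unit (which, judging by the status output in the report below, has no forking Type set) is marked inactive (dead) even though the re-parented daemon keeps running; 'systemctl stop' then has no main PID to signal. A minimal sketch of the fix, assuming xenial's packaging reads $libvirtd_opts from /etc/default/libvirt-bin (path assumed, check your install):

    # /etc/default/libvirt-bin
    # drop '-d' so libvirtd stays in the foreground and systemd can track it
    libvirtd_opts=""

Then restart the unit: systemctl restart libvirt-bin.service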
[Bug 1567272] Re: systemd claims libvirt-bin is dead on xenial
** Description changed:

- On Xenial systemd is not reporting the state of libvirtd properly or
+ On Xenial, systemd is not reporting the state of libvirtd properly or
  shutting down on request.

  # pgrep libvirtd
  # systemctl start libvirt-bin.service
  # systemctl status libvirt-bin.service
  ● libvirt-bin.service - Virtualization daemon
     Loaded: loaded (/lib/systemd/system/libvirt-bin.service; enabled; vendor preset: enabled)
     Active: inactive (dead) since Thu 2016-04-07 08:05:15 UTC; 8s ago
       Docs: man:libvirtd(8)
             http://libvirt.org
    Process: 20385 ExecStart=/usr/sbin/libvirtd $libvirtd_opts (code=exited, status=0/SUCCESS)
   Main PID: 20385 (code=exited, status=0/SUCCESS)
      Tasks: 18 (limit: 512)
     Memory: 11.5M
        CPU: 356ms
     CGroup: /system.slice/libvirt-bin.service
             ├─14931 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leas
             ├─14932 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leas
             └─20387 /usr/sbin/libvirtd -d

  Apr 07 08:05:15 juju-trusty-machine-5 systemd[1]: Starting Virtualization daemon...
  Apr 07 08:05:15 juju-trusty-machine-5 systemd[1]: Started Virtualization daemon.
  Apr 07 08:05:15 juju-trusty-machine-5 dnsmasq[14931]: read /etc/hosts - 7 addresses
  Apr 07 08:05:15 juju-trusty-machine-5 dnsmasq[14931]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
  Apr 07 08:05:15 juju-trusty-machine-5 dnsmasq-dhcp[14931]: read /var/lib/libvirt/dnsmasq/default.hostsfile

  # pgrep libvirtd
  20387
  # systemctl stop libvirt-bin.service
  # pgrep libvirtd
  20387

  I would expect systemd to report libvirt-bin as "Active: active
  (running)" and to stop libvirtd on "systemctl stop libvirt-bin.service"

  libvirt-bin          1.3.1-1ubuntu6
  libvirt0:amd64       1.3.1-1ubuntu6
  nova-compute-libvirt 2:13.0.0~rc1-0ubuntu1

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1567272 Title: systemd claims libvirt-bin is dead on xenial To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1567272/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1567272] [NEW] systemd claims libvirt-bin is dead on xenial
Public bug reported: On Xenial, systemd is not reporting the state of libvirtd properly or shutting down on request.

# pgrep libvirtd
# systemctl start libvirt-bin.service
# systemctl status libvirt-bin.service
● libvirt-bin.service - Virtualization daemon
   Loaded: loaded (/lib/systemd/system/libvirt-bin.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Thu 2016-04-07 08:05:15 UTC; 8s ago
     Docs: man:libvirtd(8)
           http://libvirt.org
  Process: 20385 ExecStart=/usr/sbin/libvirtd $libvirtd_opts (code=exited, status=0/SUCCESS)
 Main PID: 20385 (code=exited, status=0/SUCCESS)
    Tasks: 18 (limit: 512)
   Memory: 11.5M
      CPU: 356ms
   CGroup: /system.slice/libvirt-bin.service
           ├─14931 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leas
           ├─14932 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leas
           └─20387 /usr/sbin/libvirtd -d

Apr 07 08:05:15 juju-trusty-machine-5 systemd[1]: Starting Virtualization daemon...
Apr 07 08:05:15 juju-trusty-machine-5 systemd[1]: Started Virtualization daemon.
Apr 07 08:05:15 juju-trusty-machine-5 dnsmasq[14931]: read /etc/hosts - 7 addresses
Apr 07 08:05:15 juju-trusty-machine-5 dnsmasq[14931]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Apr 07 08:05:15 juju-trusty-machine-5 dnsmasq-dhcp[14931]: read /var/lib/libvirt/dnsmasq/default.hostsfile

# pgrep libvirtd
20387
# systemctl stop libvirt-bin.service
# pgrep libvirtd
20387

I would expect systemd to report libvirt-bin as "Active: active (running)" and to stop libvirtd on "systemctl stop libvirt-bin.service"

libvirt-bin          1.3.1-1ubuntu6
libvirt0:amd64       1.3.1-1ubuntu6
nova-compute-libvirt 2:13.0.0~rc1-0ubuntu1

** Affects: libvirt (Ubuntu)
   Importance: Undecided
       Status: New

** Description changed:

- The systemd script is not reporting the state of libvirtd properly or
- shutting down on request.
+ On Xenial the systemd script is not reporting the state of libvirtd
+ properly or shutting down on request.

  # pgrep libvirtd
  # systemctl start libvirt-bin.service
  # systemctl status libvirt-bin.service
  ● libvirt-bin.service - Virtualization daemon
- Loaded: loaded (/lib/systemd/system/libvirt-bin.service; enabled; vendor preset: enabled)
- Active: inactive (dead) since Thu 2016-04-07 08:05:15 UTC; 8s ago
- Docs: man:libvirtd(8)
- http://libvirt.org
- Process: 20385 ExecStart=/usr/sbin/libvirtd $libvirtd_opts (code=exited, status=0/SUCCESS)
- Main PID: 20385 (code=exited, status=0/SUCCESS)
- Tasks: 18 (limit: 512)
- Memory: 11.5M
- CPU: 356ms
- CGroup: /system.slice/libvirt-bin.service
- ├─14931 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leas
- ├─14932 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leas
- └─20387 /usr/sbin/libvirtd -d
+    Loaded: loaded (/lib/systemd/system/libvirt-bin.service; enabled; vendor preset: enabled)
+    Active: inactive (dead) since Thu 2016-04-07 08:05:15 UTC; 8s ago
+      Docs: man:libvirtd(8)
+            http://libvirt.org
+   Process: 20385 ExecStart=/usr/sbin/libvirtd $libvirtd_opts (code=exited, status=0/SUCCESS)
+  Main PID: 20385 (code=exited, status=0/SUCCESS)
+     Tasks: 18 (limit: 512)
+    Memory: 11.5M
+       CPU: 356ms
+    CGroup: /system.slice/libvirt-bin.service
+            ├─14931 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leas
+            ├─14932 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leas
+            └─20387 /usr/sbin/libvirtd -d

  Apr 07 08:05:15 juju-trusty-machine-5 systemd[1]: Starting Virtualization daemon...
  Apr 07 08:05:15 juju-trusty-machine-5 systemd[1]: Started Virtualization daemon.
  Apr 07 08:05:15 juju-trusty-machine-5 dnsmasq[14931]: read /etc/hosts - 7 addresses
  Apr 07 08:05:15 juju-trusty-machine-5 dnsmasq[14931]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
  Apr 07 08:05:15 juju-trusty-machine-5 dnsmasq-dhcp[14931]: read /var/lib/libvirt/dnsmasq/default.hostsfile

  # pgrep libvirtd
  20387
  # systemctl stop libvirt-bin.service
  # pgrep libvirtd
  20387
-
- I would expect systemd to report libvirt-bin as "Active: active (running)" and to stop libvirtd on "systemctl stop libvirt-bin.service"
-
+ I would expect systemd to report libvirt-bin as "Active: active
+ (running)" and to stop libvirtd on "systemctl stop libvirt-bin.service"

  libvirt-bin 1.3.1-1ubuntu6
  libvirt0:amd64
[Bug 1558642] Re: Update charm to use new pause/resume helpers
** Changed in: neutron-gateway (Ubuntu)
       Status: New => In Progress

** Changed in: neutron-gateway (Ubuntu)
     Assignee: (unassigned) => Liam Young (gnuoy)

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1558642 Title: Update charm to use new pause/resume helpers To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/neutron-gateway/+bug/1558642/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1558642] Re: Update charm to use new pause/resume helpers
** Changed in: ceph-radosgw (Juju Charms Collection)
       Status: In Progress => Fix Committed

** Changed in: hacluster (Juju Charms Collection)
       Status: In Progress => Fix Committed

** Changed in: neutron-api (Juju Charms Collection)
       Status: In Progress => Fix Committed

** Changed in: nova-cloud-controller (Juju Charms Collection)
       Status: In Progress => Fix Committed

** Changed in: openstack-dashboard (Juju Charms Collection)
       Status: In Progress => Fix Committed

** Also affects: rabbitmq-server (Juju Charms Collection)
   Importance: Undecided
       Status: New

** Changed in: rabbitmq-server (Juju Charms Collection)
       Status: New => In Progress

** Changed in: rabbitmq-server (Juju Charms Collection)
     Assignee: (unassigned) => Liam Young (gnuoy)

** Changed in: rabbitmq-server (Juju Charms Collection)
   Importance: Undecided => Medium

** Changed in: rabbitmq-server (Juju Charms Collection)
    Milestone: None => 16.04

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1558642 Title: Update charm to use new pause/resume helpers To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/neutron-gateway/+bug/1558642/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1558642] Re: Update charm to use new pause/resume helpers
** Changed in: hacluster (Juju Charms Collection)
       Status: New => In Progress

** Also affects: cinder (Juju Charms Collection)
   Importance: Undecided
       Status: New

** Changed in: cinder (Juju Charms Collection)
       Status: New => In Progress

** Changed in: cinder (Juju Charms Collection)
   Importance: Undecided => Medium

** Changed in: cinder (Juju Charms Collection)
     Assignee: (unassigned) => Liam Young (gnuoy)

** Changed in: cinder (Juju Charms Collection)
    Milestone: None => 16.04

** Also affects: ceph-radosgw (Juju Charms Collection)
   Importance: Undecided
       Status: New

** Changed in: ceph-radosgw (Juju Charms Collection)
       Status: New => In Progress

** Changed in: ceph-radosgw (Juju Charms Collection)
   Importance: Undecided => Medium

** Changed in: ceph-radosgw (Juju Charms Collection)
     Assignee: (unassigned) => Liam Young (gnuoy)

** Changed in: ceph-radosgw (Juju Charms Collection)
    Milestone: None => 16.04

-- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1558642 Title: Update charm to use new pause/resume helpers To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/neutron-gateway/+bug/1558642/+subscriptions -- ubuntu-bugs mailing list ubuntu-bugs@lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs