[Bug 1886494] Re: sosreport does not correctly enable the maas plugin for a snap install

2020-07-06 Thread Nick Niehoff
Looks like the maas plugin in the package is old.

[Bug 1886494] [NEW] sosreport does not correctly enable the maas plugin for a snap install

2020-07-06 Thread Nick Niehoff
Public bug reported: Installing maas via snap on a focal machine (in my case an lxd container) following:
  sudo snap install maas-test-db
  sudo snap install maas
  sudo maas init region --database-uri maas-test-db:///
Then running sosreport, sosreport does not correctly enable the maas plugin.
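A quick way to check whether sos picks up the plugin on such a snap install (a sketch, assuming the sos 3.x command-line options; plugin naming may differ by sos version):
  # list plugins and see whether maas is shown as enabled or inactive
  sudo sosreport --list-plugins
  # try to run only the maas plugin non-interactively
  sudo sosreport -o maas --batch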

[Bug 1871820] Re: luminous: bluestore rocksdb max_background_compactions regression in 12.2.13

2020-05-26 Thread Nick Niehoff
** Tags added: sts

[Bug 1718761] Re: It's not possible to use OverlayFS (mount -t overlay) to stack directories on a ZFS volume

2020-04-21 Thread Nick Niehoff
I ran into this exactly as smoser described, using overlay in a container that is backed by zfs via lxd. I believe this will become more prevalent as people start to use zfs root with Focal 20.04.
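A minimal way to exercise the same failure inside such a container (a sketch; the directory names are placeholders, and the mount line is just the standard overlayfs form):
  mkdir -p /tmp/ovl/lower /tmp/ovl/upper /tmp/ovl/work /tmp/ovl/merged
  sudo mount -t overlay overlay \
    -o lowerdir=/tmp/ovl/lower,upperdir=/tmp/ovl/upper,workdir=/tmp/ovl/work \
    /tmp/ovl/merged
Per the bug report, this mount is expected to fail when the directories live on a ZFS-backed filesystem, while it succeeds on ext4-backed storage.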

[Bug 1873803] Re: Unable to create VMs with virt-install

2020-04-20 Thread Nick Niehoff
virt-install works when using the installer trees at:
  http://archive.ubuntu.com/ubuntu/dists/focal/main/installer-amd64/
  http://archive.ubuntu.com/ubuntu/dists/eoan/main/installer-amd64/
  http://archive.ubuntu.com/ubuntu/dists/bionic/main/installer-amd64/
This may be a workaround instead of using:
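A sketch of how that workaround might be applied (the guest name, sizing, and the use of --location here are my assumptions, not taken from the bug):
  virt-install --name focal-test --memory 2048 --vcpus 2 \
    --disk size=10 --graphics none \
    --location http://archive.ubuntu.com/ubuntu/dists/focal/main/installer-amd64/ \
    --extra-args 'console=ttyS0'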

[Bug 1871874] Re: lvremove occasionally fails on nodes with multiple volumes and curtin does not catch the failure

2020-04-10 Thread Nick Niehoff
Ryan, from the logs the concern is the "device or resource busy" message:
  Running command ['lvremove', '--force', '--force', 'vgk/sdklv'] with allowed return codes [0] (capture=False)
  device-mapper: remove ioctl on (253:5) failed: Device or resource busy
  Logical volume "sdklv"
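For anyone hitting the same error, a few diagnostics that can show what still holds the device open (a sketch; the 253:5 device number is taken from the log above):
  sudo dmsetup info -c                 # open counts for each device-mapper device
  ls /sys/dev/block/253:5/holders      # kernel-level holders of the busy dm device
  sudo fuser -vm /dev/dm-5             # userspace processes with the node open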

[Bug 1871874] Re: lvremove occasionally fails on nodes with multiple volumes and curtin does not catch the failure

2020-04-10 Thread Nick Niehoff
** Attachment added: "curtin-install-cfg.yaml" https://bugs.launchpad.net/ubuntu/+source/curtin/+bug/1871874/+attachment/5351450/+files/curtin-install-cfg.yaml

[Bug 1871874] Re: lvremove occasionally fails on nodes with multiple volumes and curtin does not catch the failure

2020-04-10 Thread Nick Niehoff
** Attachment added: "curtin-install.log" https://bugs.launchpad.net/ubuntu/+source/curtin/+bug/1871874/+attachment/5351448/+files/curtin-install.log

[Bug 1871874] Re: lvremove occasionally fails on nodes with multiple volumes and curtin does not catch the failure

2020-04-10 Thread Nick Niehoff
Ryan, We believe this is a bug as we expect curtin to wipe the disks. In this case it's failing to wipe the disks and occasionally that causes issues with our automation deploying ceph on those disks. This may be more of an issue with LVM and a race condition trying to wipe all of the
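If it is indeed a race, one possible mitigation to experiment with (an assumption on my part, not curtin's actual fix) is to let udev settle and retry the removal once before treating it as fatal; the VG/LV names here are the ones from the log earlier in this thread:
  sudo udevadm settle
  sudo lvremove --force --force vgk/sdklv || {
    sleep 2
    sudo udevadm settle
    sudo lvremove --force --force vgk/sdklv
  }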

[Bug 1871874] Re: lvremove occasionally fails on nodes with multiple volumes and curtin does not catch the failure

2020-04-09 Thread Nick Niehoff
I was able to reproduce this with a VM deployed by MAAS. I created a VM and added 26 disks to it using virsh (NOTE: I use zfs volumes for my disks):
  for i in {a..z}; do sudo zfs create -s -V 30G rpool/libvirt/maas-node-20$i; done
  for i in {a..z}; do virsh attach-disk maas-node-20
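For completeness, a plausible form of the attach loop (the target names and bus are my assumptions; only the zvol paths follow from the zfs command above):
  for i in {a..z}; do
    virsh attach-disk maas-node-20 /dev/zvol/rpool/libvirt/maas-node-20$i sd$i \
      --targetbus scsi --persistent
  done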

[Bug 1871874] [NEW] lvremove occasionally fails on nodes with multiple volumes and curtin does not catch the failure

2020-04-09 Thread Nick Niehoff
Public bug reported: For example:
  Wiping lvm logical volume: /dev/ceph-db-wal-dev-sdc/ceph-db-dev-sdi
  wiping 1M on /dev/ceph-db-wal-dev-sdc/ceph-db-dev-sdi at offsets [0, -1048576]
  using "lvremove" on ceph-db-wal-dev-sdc/ceph-db-dev-sdi
  Running command ['lvremove', '--force', '--force',

[Bug 1862226] Re: /usr/sbin/sss_obfuscate fails to run: ImportError: No module named pysss

2020-03-12 Thread Nick Niehoff
** Tags removed: verification-needed verification-needed-bionic
** Tags added: verification-done verification-done-bionic

[Bug 1862226] Re: /usr/sbin/sss_obfuscate fails to run: ImportError: No module named pysss

2020-03-12 Thread Nick Niehoff
I have tested this on bionic: the required dependencies are present, and it adds the correct parameters to the sssd.conf file as described in the man page.
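For reference, the path exercised looks roughly like this (the domain name is a placeholder; the two option names are the ones the sss_obfuscate man page documents it writes):
  sudo sss_obfuscate --domain LDAP
  # after entering the password, the [domain/LDAP] section of /etc/sssd/sssd.conf gains:
  #   ldap_default_authtok_type = obfuscated_password
  #   ldap_default_authtok = <obfuscated blob>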

[Bug 1866137] Re: /usr/share/initramfs-tools/hooks/plymouth hangs: no THEME_PATH

2020-03-04 Thread Nick Niehoff
I think this is a duplicate of https://bugs.launchpad.net/ubuntu/+source/plymouth/+bug/1865959. Installing http://archive.ubuntu.com/ubuntu/pool/main/p/plymouth/plymouth_0.9.4git20200109-0ubuntu3.3_amd64.deb seemed to fix the issue for me.
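A sketch of applying that workaround (the update-initramfs step is my addition, on the assumption that the hanging hook runs during initramfs generation):
  wget http://archive.ubuntu.com/ubuntu/pool/main/p/plymouth/plymouth_0.9.4git20200109-0ubuntu3.3_amd64.deb
  sudo dpkg -i plymouth_0.9.4git20200109-0ubuntu3.3_amd64.deb
  sudo update-initramfs -u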

[Bug 1862830] Re: Update sosreport to 3.9

2020-02-15 Thread Nick Niehoff
I have tested:
- vanilla Focal instance
- Focal with openvswitch
- bionic juju machine
- bionic openstack nova-cloud-controller (openstack plugins need some work)
- bionic ceph (new plugin additions look good)
- bionic maas 2.6 ... commissioning-results is not a valid MAAS command; plugin needs work

[Bug 1862850] [NEW] ceph-mds dependency

2020-02-11 Thread Nick Niehoff
Public bug reported: I have a small lab Nautilus ceph cluster; the 3 mon nodes are running mon, mds, mgr, and rgw services. I am using packages from the Ubuntu cloud archive on eoan. Yesterday I decided to install the ceph-mgr-dashboard package. When I did this several other packages were
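A way to see up front what such an install would pull in (a sketch using standard apt tooling, not something from the report):
  apt-get install --simulate ceph-mgr-dashboard   # dry run: lists the extra packages apt would install or remove
  apt-cache depends ceph-mgr-dashboard            # direct dependencies of the package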

[Bug 1858304] [NEW] ceph-mgr-dashboard package missing dependencies

2020-01-04 Thread Nick Niehoff
Public bug reported: After deploying Ceph Nautilus on Eoan I installed the ceph-mgr-dashboard package and tried to enable the dashboard with:
  sudo ceph mgr module enable dashboard
The following error is returned:
  Error ENOENT: module 'dashboard' reports that it cannot run on the active manager
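To dig into why the module refuses to run, the mgr module listing is usually the first stop (a sketch; in Nautilus the disabled-module entries include a can_run flag and an error string that typically names the missing Python dependency):
  sudo ceph mgr module ls | less
  sudo ceph versions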

[Bug 1858296] [NEW] ceph-deploy is not included in the Ubuntu Cloud Archives

2020-01-04 Thread Nick Niehoff
Public bug reported: There are issues deploying Ceph Nautilus with ceph-deploy 1.5.x, which is included in the normal bionic archives. So if I want to use ceph-deploy to create a Nautilus cluster I need to upgrade to at least disco to get ceph-deploy 2.0.1. Because we need to support Ceph
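One possible interim workaround (an assumption on my part, not the archive fix being requested here): install ceph-deploy 2.0.x from PyPI on bionic instead of the packaged 1.5.x:
  sudo apt-get install -y python3-pip
  pip3 install --user ceph-deploy==2.0.1
  ~/.local/bin/ceph-deploy --version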

[Bug 1582811] Re: update-grub produces broken grub.cfg with ZFS mirror volume

2019-10-24 Thread Nick Niehoff
In Eoan a workaround for this seems to be to add head -1 to the following line of /etc/grub.d/10_linux_zfs, changing:
  initrd_device=$(${grub_probe} --target=device "${boot_dir}")
to:
  initrd_device=$(${grub_probe} --target=device "${boot_dir}" | head -1)
This limits the initrd devices to 1; this is only a
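After editing the snippet, regenerating the config exercises the changed line (a verification step I am assuming, using the standard tooling):
  sudo update-grub
  grep -n 'initrd' /boot/grub/grub.cfg | head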

[Bug 1815212] Re: [Xenial][Bionic][SRU] Update pci.ids to version 2018.07.21

2019-02-11 Thread Nick Niehoff
[VERIFICATION BIONIC]
  # diff old.lspci new.lspci
  313c313
  < 00:1f.0 ISA bridge: Intel Corporation 9 Series Chipset Family H97 Controller
  ---
  > 00:1f.0 ISA bridge: Intel Corporation H97 Chipset LPC Controller
This machine:
  Manufacturer: Gigabyte Technology Co., Ltd.
  Product Name: H97N-WIFI

[Bug 1815212] Re: [Xenial][Bionic][SRU] Update pci.ids to version 2018.07.21

2019-02-11 Thread Nick Niehoff
[VERIFICATION BIONIC] Found no differences on:
  Manufacturer: Gigabyte Technology Co., Ltd.
  Product Name: B85M-D3H

[Bug 1815212] Re: [Xenial][Bionic][SRU] Update pci.ids to version 2018.07.21

2019-02-11 Thread Nick Niehoff
[VERIFICATION BIONIC] Tested using lspci -vvv
  # diff old.lspci new.lspci
  1582c1582
  < 0f:00.0 Network controller: Broadcom Limited BCM4360 802.11ac Wireless Network Adapter (rev 03)
  ---
  > 0f:00.0 Network controller: Broadcom Inc. and subsidiaries BCM4360 802.11ac Wireless Network Adapter
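For reference, the verification procedure sketched end to end (which binary package ships pci.ids on bionic is my assumption; adjust if it differs):
  lspci -vvv > old.lspci
  sudo apt-get install --only-upgrade pciutils
  lspci -vvv > new.lspci
  diff old.lspci new.lspci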

[Bug 1775195] Re: [sync][sru]sosreport v3.6

2018-11-09 Thread Nick Niehoff
I ran sosreport -a on bionic with maas 2.4, corosync, pacemaker and postgres all in a very unhappy state. The output looks good. During execution I received the following:
  [plugin:pacemaker] crm_from parameter 'True' is not a valid date: using default
A couple of minor notes from the output: