[Bug 1886494] Re: sosreport does not correctly enable the maas plugin for a snap install

2020-07-06 Thread Nick Niehoff
Looks like the maas plugin in the package is old.

[Bug 1886494] [NEW] sosreport does not correctly enable the maas plugin for a snap install

2020-07-06 Thread Nick Niehoff
Public bug reported:

Installing maas via snap on a focal machine (in my case an lxd container) as follows:

sudo snap install maas-test-db
sudo snap install maas
sudo maas init region --database-uri maas-test-db:///

Then, when running sosreport, it does not correctly enable the maas
plugin.  I tested with version 3.9.1-1ubuntu0.20.04.1 of sosreport and
also with upstream built from source.  Here are the results:

root@testing:/sos# dpkg -l | grep sosreport
ii  sosreport  3.9.1-1ubuntu0.20.04.1 amd64  Set of tools to gather troubleshooting data from a system
root@testing:/sos# sosreport -l | grep maas
 maas inactive   Ubuntu Metal-As-A-Service
root@testing:/sos# ./bin/sosreport -l | grep maas
 maas Ubuntu Metal-As-A-Service
 maas.profile-name The name with which you will later refer to this remote
 maas.url  The URL of the remote API
 maas.credentials  The credentials, also known as the API key

As you can see, the packaged version of sosreport lists the maas plugin
as inactive, but the version built from source enables it.
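
As a side note, a quick way to confirm which install method is present, and a possible interim workaround to force the plugin on with the packaged sosreport (the -e/--enable-plugins option should cover this, though treat it as a sketch rather than a tested fix):

snap list maas                            # shows the snap install
dpkg -l | grep maas                       # empty here, nothing installed from the archive
sudo sosreport -e maas -o maas --batch    # force the plugin on and collect only it, as a quick check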

** Affects: sosreport (Ubuntu)
 Importance: Undecided
 Assignee: Eric Desrochers (slashd)
 Status: In Progress

** Affects: sosreport (Ubuntu Bionic)
 Importance: Undecided
 Status: New

** Affects: sosreport (Ubuntu Focal)
 Importance: Undecided
 Status: New

** Affects: sosreport (Ubuntu Groovy)
 Importance: Undecided
 Assignee: Eric Desrochers (slashd)
 Status: In Progress


** Tags: seg sts

[Bug 1871820] Re: luminous: bluestore rocksdb max_background_compactions regression in 12.2.13

2020-05-26 Thread Nick Niehoff
** Tags added: sts

[Bug 1718761] Re: It's not possible to use OverlayFS (mount -t overlay) to stack directories on a ZFS volume

2020-04-21 Thread Nick Niehoff
I ran into this exactly as smoser described, using overlay in a
container that is backed by zfs via lxd.  I believe this will become
more prevalent as people start to use a zfs root with Focal 20.04.
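
For anyone trying to reproduce, this is roughly what I was doing; the directory names are arbitrary, the important part is that they live on the ZFS-backed container filesystem:

mkdir -p /root/ovl/{lower,upper,work,merged}
sudo mount -t overlay overlay \
    -o lowerdir=/root/ovl/lower,upperdir=/root/ovl/upper,workdir=/root/ovl/work \
    /root/ovl/merged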

[Bug 1873803] Re: Unable to create VMs with virt-install

2020-04-20 Thread Nick Niehoff
virt-install works when using the installer trees at:

http://archive.ubuntu.com/ubuntu/dists/focal/main/installer-amd64/
http://archive.ubuntu.com/ubuntu/dists/eoan/main/installer-amd64/
http://archive.ubuntu.com/ubuntu/dists/bionic/main/installer-amd64/

This may be a workaround instead of using:

http://archive.ubuntu.com/ubuntu/dists/bionic-updates/main/installer-amd64/
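
For reference, this is the sort of invocation I mean; the VM name and sizes are placeholders, the only relevant part is pointing --location at one of the installer trees above:

virt-install --name bionic-test --memory 2048 --vcpus 2 \
    --disk size=10 --graphics none \
    --location http://archive.ubuntu.com/ubuntu/dists/bionic/main/installer-amd64/ \
    --extra-args 'console=ttyS0,115200n8 serial'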

[Bug 1871874] Re: lvremove occasionally fails on nodes with multiple volumes and curtin does not catch the failure

2020-04-10 Thread Nick Niehoff
Ryan,
   From the logs, the concern is the "Device or resource busy" message:

Running command ['lvremove', '--force', '--force', 'vgk/sdklv'] with allowed return codes [0] (capture=False)
  device-mapper: remove ioctl on  (253:5) failed: Device or resource busy
  Logical volume "sdklv" successfully removed
Running command ['lvdisplay', '-C', '--separator', '=', '--noheadings', '-o', 'vg_name,lv_name'] with allowed return codes [0] (capture=True)

   Curtin does not fail and the node successfully deploys.  This is in an
integration lab, so these hosts (including the MAAS host) are stopped, MAAS is
reinstalled, and the systems are redeployed without ever being released or
given the option to wipe during a MAAS release.  MAAS then deploys Bionic on
these hosts thinking they are completely new systems, but in reality they
still have the old volumes configured.  MAAS configures the root disk but does
nothing to the other disks, which are provisioned later through other
automation.  The customer has correlated this to problems configuring ceph
after deployment.  I have requested further information about the exact state
of the system when it ends up in this case.
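
I have asked them to capture something along these lines right after a redeploy, to confirm whether the old volumes are still present (generic LVM/device-mapper commands, nothing curtin-specific):

sudo pvs
sudo vgs
sudo lvs -o +devices
sudo dmsetup ls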

[Bug 1871874] Re: lvremove occasionally fails on nodes with multiple volumes and curtin does not catch the failure

2020-04-10 Thread Nick Niehoff
** Attachment added: "curtin-install-cfg.yaml"
   
https://bugs.launchpad.net/ubuntu/+source/curtin/+bug/1871874/+attachment/5351450/+files/curtin-install-cfg.yaml

[Bug 1871874] Re: lvremove occasionally fails on nodes with multiple volumes and curtin does not catch the failure

2020-04-10 Thread Nick Niehoff
** Attachment added: "curtin-install.log"
   
https://bugs.launchpad.net/ubuntu/+source/curtin/+bug/1871874/+attachment/5351448/+files/curtin-install.log

[Bug 1871874] Re: lvremove occasionally fails on nodes with multiple volumes and curtin does not catch the failure

2020-04-10 Thread Nick Niehoff
Ryan,
   We believe this is a bug, as we expect curtin to wipe the disks.  In this
case it is failing to wipe the disks, and occasionally that causes issues with
our automation deploying ceph on those disks.  This may be more of an issue
with LVM, a race condition when wiping all of the disks sequentially, simply
because of the large number of disks/VGs/LVs.
 
   To clarify my previous testing: I was mistaken.  I thought MAAS used the
commissioning OS as the ephemeral OS to deploy from; this is not the case.
MAAS uses the specified deployment OS as the ephemeral image to deploy from.
Based on this, all of my previous testing was done with Bionic using the
4.15 kernel.  This shows it is a race condition somewhere, as sometimes this
error does not reproduce, and it was just a coincidence that I was changing
the commissioning OS.

   I tested this morning and was able to reproduce the issue with bionic 4.15
and xenial 4.4; however, I have yet to reproduce it using either the bionic
or xenial HWE kernels.

   I will upload the curtin logs and config from my reproducer now.

[Bug 1871874] Re: lvremove occasionally fails on nodes with multiple volumes and curtin does not catch the failure

2020-04-09 Thread Nick Niehoff
I was able to reproduce this with a VM deployed by MAAS.  I created a VM
and added 26 disks to it using virsh (NOTE: I use zfs volumes for my
disks):

for i in {a..z}; do sudo zfs create -s -V 30G rpool/libvirt/maas-node-20$i; done
for i in {a..z}; do virsh attach-disk maas-node-20 /dev/zvol/rpool/libvirt/maas-node-20$i sd$i --current --cache none --io native; done

Then in maas:

commission the machine to recognize all of the disks

machine_id=123abc
for i in {b..z}; do
  device_id=$(maas admin machine read $machine_id | jq ".blockdevice_set[] | select(.name == \"sd$i\") | .id")
  vgid=$(maas admin volume-groups create $machine_id name=vg$i block_devices=$device_id | jq '.id')
  maas admin volume-group create-logical-volume $machine_id $vgid name=sd${i}lv size=32208060416
done

You may need to change the size in the previous command.  I then
deployed the system twice with Bionic, with xenial as the commissioning
OS.  The second time I saw the "failed: Device or resource busy" errors.
I am using MAAS 2.7.

This reproduces easily with Xenial as the commissioning OS.
This does not reproduce using Xenial with the hwe kernel as the commissioning OS.
I cannot reproduce this using Bionic as the commissioning OS.

[Bug 1871874] [NEW] lvremove occasionally fails on nodes with multiple volumes and curtin does not catch the failure

2020-04-09 Thread Nick Niehoff
Public bug reported:

For example:

Wiping lvm logical volume: /dev/ceph-db-wal-dev-sdc/ceph-db-dev-sdi
wiping 1M on /dev/ceph-db-wal-dev-sdc/ceph-db-dev-sdi at offsets [0, -1048576]
using "lvremove" on ceph-db-wal-dev-sdc/ceph-db-dev-sdi
Running command ['lvremove', '--force', '--force', 'ceph-db-wal-dev-sdc/ceph-db-dev-sdi'] with allowed return codes [0] (capture=False)
device-mapper: remove ioctl on (253:14) failed: Device or resource busy
Logical volume "ceph-db-dev-sdi" successfully removed

On a node with 10 disks configured as follows:

/dev/sda2 /
/dev/sda1 /boot
/dev/sda3 /var/log
/dev/sda5 /var/crash
/dev/sda6 /var/lib/openstack-helm
/dev/sda7 /var
/dev/sdj1 /srv

sdb and sdc are used for BlueStore WAL and DB
sdd, sde, sdf: ceph OSDs, using sdb
sdg, sdh, sdi: ceph OSDs, using sdc

Across multiple servers this happens occasionally with various disks.
It looks like this may be a race condition, possibly in LVM, as curtin
wipes multiple volumes before lvremove fails.
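
When the "Device or resource busy" appears, something like the following should show which device-mapper node is still held open; dm-14 is assumed to correspond to the (253:14) pair in the log above:

sudo dmsetup info -c               # open count for every dm device
sudo dmsetup info -j 253 -m 14     # the (253:14) node from the log above
sudo fuser -vm /dev/dm-14          # anything still holding it open
sudo ls -l /dev/mapper/            # map dm names back to LV names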

** Affects: curtin (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: sts

[Bug 1862226] Re: /usr/sbin/sss_obfuscate fails to run: ImportError: No module named pysss

2020-03-12 Thread Nick Niehoff
** Tags removed: verification-needed verification-needed-bionic
** Tags added: verification-done verification-done-bionic

[Bug 1862226] Re: /usr/sbin/sss_obfuscate fails to run: ImportError: No module named pysss

2020-03-12 Thread Nick Niehoff
I have tested this on bionic: the required dependencies are there, and it
adds the correct parameters to the sssd.conf file, as described in the
man page.
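
For the record, roughly how I exercised it; the domain name is from my test sssd.conf, substitute your own:

sudo sss_obfuscate -d LDAP                      # prompts for the bind password
grep ldap_default_authtok /etc/sssd/sssd.conf   # the obfuscated password and authtok type land here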

[Bug 1866137] Re: /usr/share/initramfs-tools/hooks/plymouth hangs: no THEME_PATH

2020-03-04 Thread Nick Niehoff
I think this is a duplicate of:

https://bugs.launchpad.net/ubuntu/+source/plymouth/+bug/1865959

Installing
http://archive.ubuntu.com/ubuntu/pool/main/p/plymouth/plymouth_0.9.4git20200109-0ubuntu3.3_amd64.deb
seemed to fix the issue for me.
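
For completeness, what I did, more or less; the final update-initramfs run is just to confirm the hook no longer hangs:

wget http://archive.ubuntu.com/ubuntu/pool/main/p/plymouth/plymouth_0.9.4git20200109-0ubuntu3.3_amd64.deb
sudo dpkg -i plymouth_0.9.4git20200109-0ubuntu3.3_amd64.deb
sudo update-initramfs -u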

[Bug 1862830] Re: Update sosreport to 3.9

2020-02-15 Thread Nick Niehoff
I have tested:

vanilla Focal instance
Focal with openvswitch
bionic juju machine
bionic openstack nova-cloud-controller (openstack plugins need some work)
bionic ceph (new plugin additions look good)
bionic maas 2.6 ... commissioning-results is not a valid MAAS command; the plugin needs work

[Bug 1862850] [NEW] ceph-mds dependency

2020-02-11 Thread Nick Niehoff
Public bug reported:

I have a small lab nautilus ceph cluster; the 3 mon nodes are running
the mon, mds, mgr, and rgw services.  I am using packages from the Ubuntu
cloud archive on eoan.  Yesterday I decided to install the ceph-mgr-
dashboard package.  When I did this, several other packages were upgraded
at the same time:

python3-ceph-argparse:amd64 (14.2.2-0ubuntu3, 14.2.4-0ubuntu0.19.10.1)
libradosstriper1:amd64 (14.2.2-0ubuntu3, 14.2.4-0ubuntu0.19.10.1)
ceph-common:amd64 (14.2.2-0ubuntu3, 14.2.4-0ubuntu0.19.10.1)
ceph-mgr:amd64 (14.2.2-0ubuntu3, 14.2.4-0ubuntu0.19.10.1)
ceph-mon:amd64 (14.2.2-0ubuntu3, 14.2.4-0ubuntu0.19.10.1)
ceph-osd:amd64 (14.2.2-0ubuntu3, 14.2.4-0ubuntu0.19.10.1)
radosgw:amd64 (14.2.2-0ubuntu3, 14.2.4-0ubuntu0.19.10.1)
libcephfs2:amd64 (14.2.2-0ubuntu3, 14.2.4-0ubuntu0.19.10.1)
librbd1:amd64 (14.2.2-0ubuntu3, 14.2.4-0ubuntu0.19.10.1)
python3-rbd:amd64 (14.2.2-0ubuntu3, 14.2.4-0ubuntu0.19.10.1)
ceph-base:amd64 (14.2.2-0ubuntu3, 14.2.4-0ubuntu0.19.10.1)
librgw2:amd64 (14.2.2-0ubuntu3, 14.2.4-0ubuntu0.19.10.1)
ceph:amd64 (14.2.2-0ubuntu3, 14.2.4-0ubuntu0.19.10.1)
python3-rados:amd64 (14.2.2-0ubuntu3, 14.2.4-0ubuntu0.19.10.1)
python3-cephfs:amd64 (14.2.2-0ubuntu3, 14.2.4-0ubuntu0.19.10.1)
librados2:amd64 (14.2.2-0ubuntu3, 14.2.4-0ubuntu0.19.10.1)

Today I noticed that ceph-mds is having issues; when I tried to restart
the mds service I saw the following in the journal:

/usr/bin/ceph-mds: symbol lookup error: /usr/bin/ceph-mds: undefined symbol: _ZTVN4ceph6buffer5errorE

I then ran an apt upgrade and found that ceph-mds had NOT been upgraded
with the list above, hence the broken library linkage.  Somewhere there
is a missing dependency on ceph-mds for this upgrade.

Here is my apt history:
https://paste.ubuntu.com/p/PtbCzKzQWp/
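
Until the dependency is fixed, a rough way to spot the skew and pull ceph-mds back in line by hand (the mds unit name is whatever your mds id is, usually the hostname here):

dpkg -l 'ceph*' | awk '/^ii/ {print $2, $3}'   # look for packages still at 14.2.2 next to 14.2.4 ones
apt-cache policy ceph-mds
sudo apt install ceph-mds                      # brings it up to the same point release
sudo systemctl restart ceph-mds@$(hostname).service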

** Affects: ceph (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: seg

[Bug 1858304] [NEW] ceph-mgr-dashboard package missing dependencies

2020-01-04 Thread Nick Niehoff
Public bug reported:

After deploying Ceph Nautilus on Eoan I installed the ceph-mgr-dashboard
package and tried to enable the dashboard with:

sudo ceph mgr module enable dashboard

The following error is returned:

Error ENOENT: module 'dashboard' reports that it cannot run on the active manager daemon: No module named 'distutils.util' (pass --force to force enablement)

Investigating the ceph-mgr logs I found:

2020-01-05 00:23:13.698 7f74b423cd00 -1 mgr[py] Traceback (most recent call last):
  File "/usr/share/ceph/mgr/dashboard/__init__.py", line 38, in <module>
    from .module import Module, StandbyModule
  File "/usr/share/ceph/mgr/dashboard/module.py", line 26, in <module>
    from .services.sso import load_sso_db
  File "/usr/share/ceph/mgr/dashboard/services/sso.py", line 21, in <module>
    from ..tools import prepare_url_prefix
  File "/usr/share/ceph/mgr/dashboard/tools.py", line 11, in <module>
    from distutils.util import strtobool
ModuleNotFoundError: No module named 'distutils.util'

I then installed python3-distutils, which let me get further, but the
dashboard still wasn't starting, complaining that no cert was configured,
so I ran:

ceph config set mgr mgr/dashboard/ssl false

Now ceph status reports the error:
Module 'dashboard' has failed: No module named 'routes'

Investigating the logs I found:

2020-01-05 00:30:19.990 7f663bd26700 -1 log_channel(cluster) log [ERR] : Unhandled exception from module 'dashboard' while running on mgr.ceph-mon2: No module named 'routes'
2020-01-05 00:30:19.990 7f663bd26700 -1 dashboard.serve:
2020-01-05 00:30:19.990 7f663bd26700 -1 Traceback (most recent call last):
  File "/usr/share/ceph/mgr/dashboard/module.py", line 362, in serve
    mapper, parent_urls = generate_routes(self.url_prefix)
  File "/usr/share/ceph/mgr/dashboard/controllers/__init__.py", line 336, in generate_routes
    mapper = cherrypy.dispatch.RoutesDispatcher()
  File "/lib/python3/dist-packages/cherrypy/_cpdispatch.py", line 515, in __init__
    import routes
ModuleNotFoundError: No module named 'routes'

This was addressed by installing the python3-routes package.

Based on this I believe both python3-distutils and python3-routes should
be added to the dependencies of the ceph-mgr-dashboard package.
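
In the meantime, the full set of steps that got the dashboard running for me looks like this (the ssl=false step is only because this lab has no certificate configured):

sudo apt install ceph-mgr-dashboard python3-distutils python3-routes
sudo ceph config set mgr mgr/dashboard/ssl false   # lab only: no certificate configured
sudo ceph mgr module enable dashboard
sudo ceph mgr services                             # should now report a 'dashboard' endpoint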

** Affects: ceph (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: packaging

** Tags added: packaging

[Bug 1858296] [NEW] ceph-deploy is not included in the Ubuntu Cloud Archives

2020-01-04 Thread Nick Niehoff
Public bug reported:

There are issues deploying Ceph Nautilus with ceph-deploy 1.5.x, which is
what is included in the standard bionic archive.  So if I want to use
ceph-deploy to create a Nautilus cluster, I need to upgrade to at least
disco to get ceph-deploy 2.0.1.  Because we need to support Ceph Nautilus
on Bionic deployed with ceph-deploy, I suggest we build and include the
ceph-deploy package in the Ubuntu Cloud Archive, which is where the
supported Ceph Nautilus packages are.
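
To illustrate, this is the workflow I would like to be able to use on a Bionic admin host, assuming ceph-deploy were published in the UCA pocket that carries Nautilus (train, as far as I know):

sudo add-apt-repository cloud-archive:train
sudo apt update
sudo apt install ceph-deploy     # currently not available from the UCA, hence this bug
ceph-deploy --version            # expecting 2.0.x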

** Affects: cloud-archive
 Importance: Undecided
 Status: New

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** No longer affects: ceph-deploy (Ubuntu)

[Bug 1582811] Re: update-grub produces broken grub.cfg with ZFS mirror volume

2019-10-24 Thread Nick Niehoff
In Eoan a workaround for this seems to be to add head -1 to the
following line of /etc/grub.d/10_linux_zfs:

initrd_device=$(${grub_probe} --target=device "${boot_dir}")

to

initrd_device=$(${grub_probe} --target=device "${boot_dir}" | head -1)

This limits the initrd devices to one; note that this is only a
workaround for this bug.
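
After making the edit, regenerate and sanity-check the config:

sudo update-grub
grub-script-check /boot/grub/grub.cfg && echo "grub.cfg parses cleanly"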

[Bug 1815212] Re: [Xenial][Bionic][SRU] Update pci.ids to version 2018.07.21

2019-02-11 Thread Nick Niehoff
[VERIFICATION BIONIC]

# diff old.lspci new.lspci 
313c313
< 00:1f.0 ISA bridge: Intel Corporation 9 Series Chipset Family H97 Controller
---
> 00:1f.0 ISA bridge: Intel Corporation H97 Chipset LPC Controller

This machine:
Manufacturer: Gigabyte Technology Co., Ltd.
Product Name: H97N-WIFI
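
For anyone re-running the verification, the procedure is just the following (assuming the -proposed pocket with the updated pciutils is enabled; the file names are arbitrary):

lspci > old.lspci                          # with the current pci.ids
sudo apt install --only-upgrade pciutils
lspci > new.lspci
diff old.lspci new.lspci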

[Bug 1815212] Re: [Xenial][Bionic][SRU] Update pci.ids to version 2018.07.21

2019-02-11 Thread Nick Niehoff
[VERIFICATION BIONIC]

Found no differences on:
Manufacturer: Gigabyte Technology Co., Ltd.
Product Name: B85M-D3H

[Bug 1815212] Re: [Xenial][Bionic][SRU] Update pci.ids to version 2018.07.21

2019-02-11 Thread Nick Niehoff
[VERIFICATION BIONIC]

Tested using "lspci -vvv":

# diff old.lspci new.lspci 
1582c1582
< 0f:00.0 Network controller: Broadcom Limited BCM4360 802.11ac Wireless Network Adapter (rev 03)
---
> 0f:00.0 Network controller: Broadcom Inc. and subsidiaries BCM4360 802.11ac Wireless Network Adapter (rev 03)

I believe this is expected.  My motherboard on this machine:
Manufacturer: ASUSTeK COMPUTER INC.
Product Name: X99-DELUXE II

[Bug 1775195] Re: [sync][sru]sosreport v3.6

2018-11-09 Thread Nick Niehoff
I ran sosreport -a on bionic with maas 2.4, corosync, pacemaker, and
postgres all in a very unhappy state.  The output looks good.  During
execution I received the following:

[plugin:pacemaker] crm_from parameter 'True' is not a valid date: using default

A couple of minor notes from the output:

1.  In sos_commands/maas/maas-region-admin_dumpdata:

WARNING: The maas-region-admin command is deprecated and will be removed
in a future version. From now on please use 'maas-region' instead.

2.  In sos_commands/pacemaker it looks like the pcs commands were run
but pcs was not installed.

I don't believe any of these are issues, just notes to clean up at some
point.
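
For note 2, this is easy to confirm from the host and the unpacked report (paths relative to the extracted sosreport tarball):

dpkg -l pcs                      # pcs is not installed on this host
ls sos_commands/pacemaker/       # yet pcs command captures show up here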
