[Bug 1945874] Re: 21.10 beta, errors in 10-linux and 10_linux_zfs

2022-04-15 Thread Mason Loring Bliss
A quick update: I might get a chance to dig into this again. I recently
noted that the issue persists in 22.04, via the beta installer.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1945874

Title:
  21.10 beta, errors in 10-linux and 10_linux_zfs

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1945874/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1945874] Re: 21.10 beta, errors in 10-linux and 10_linux_zfs

2021-11-18 Thread Mason Loring Bliss
I haven't had a chance to dig deeper, but I just noticed this same issue in 
Focal Fossa.

If I get a chance to debug this I'll submit a patch here. I might get a 
chance over the next week, during Thanksgiving break.


[Bug 1906476] Re: PANIC at zfs_znode.c:335:zfs_znode_sa_init() // VERIFY(0 == sa_handle_get_from_db(zfsvfs->z_os, db, zp, SA_HDL_SHARED, &zp->z_sa_hdl)) failed

2021-10-27 Thread Mason Loring Bliss
(Or Ubuntu systems post-fix but with pools created while the bug was
active - and is there a fix possible, or is it "make a new pool"? Is
there a diagnostic possible to be sure either way?)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1906476

Title:
  PANIC at zfs_znode.c:335:zfs_znode_sa_init() // VERIFY(0 ==
  sa_handle_get_from_db(zfsvfs->z_os, db, zp, SA_HDL_SHARED,
  &zp->z_sa_hdl)) failed

To manage notifications about this bug go to:
https://bugs.launchpad.net/zfs/+bug/1906476/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1945874] Re: 21.10 beta, errors in 10-linux and 10_linux_zfs

2021-10-07 Thread Mason Loring Bliss
Didier,

That part didn't strike me as exceptional because the pool's already 
mounted, since we're running update-grub from the running system. It's
not available to be listed or imported again.

I'll want to read 10_linux_zfs in depth to see what it's doing, but if
it's depending on a list to come back from 'zpool import' it's not
going to get one in circumstances where the pool's already imported,
unless there's some critical concept confusing me.
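To illustrate the suspicion, here's a condensed mock of the import_pools flow from 10_linux_zfs (plain sh; the zpool function below is a stub I invented to stand in for the real command on a system where "tank" is already imported, so this is a sketch of the logic, not the actual script):

```shell
#!/bin/sh
# Stub standing in for the real zpool on a running system whose root
# pool "tank" is already imported. Hypothetical mock, not real output.
zpool() {
    case "$1" in
        list)   printf 'NAME  SIZE\ntank  100G\n' ;;
        import) echo "no pools available to import" >&2 ;;
    esac
}

# Condensed sketch of the import_pools logic.
initial_pools=$(zpool list | awk '{if (NR>1) print $1}')
zpool import -f -a -N 2>/dev/null || true
all_pools=$(zpool list | awk '{if (NR>1) print $1}')

imported_pools=""
for pool in $all_pools; do
    # Pools that were present before the import attempt are skipped,
    # so an already-imported root pool never lands in imported_pools.
    if echo "$initial_pools" | grep -wq "$pool"; then
        continue
    fi
    imported_pools="$imported_pools $pool"
done
echo "newly imported pools: '$imported_pools'"
```

The upshot matches the trace attached later in this bug: imported_pools comes back empty whenever the pool was imported before update-grub ran, which is always the case for the running root pool.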


[Bug 1945874] Re: 21.10 beta, errors in 10-linux and 10_linux_zfs

2021-10-04 Thread Mason Loring Bliss
A quick test shows the issue not cropping up if I use an install with 
inherited mountpoints in a more standard hierarchy. I haven't checked
out what's different.

tank/var/log /var/log zfs defaults 0 0
tank/tmp /tmp zfs defaults 0 0
/dev/md0 /boot ext4 defaults 0 1
/dev/mapper/swap none swap sw 0 0

Realistically, this probably makes it a not-super-high-priority bug
given how rare legacy mountpoints are and the fact that probably 25%
of the world population using them (me) has a workaround.

Ironically, the issue noted in 1945873 didn't show up with this build
either, for reasons that I haven't yet sussed out.


[Bug 1945874] Re: 21.10 beta, errors in 10-linux and 10_linux_zfs

2021-10-04 Thread Mason Loring Bliss
Sure. This is a mode I've been using lately where I'm using legacy 
mountpoints on datasets out of fstab. I suspect this would do the same 
thing with a traditional inherited mount hierarchy.


# cat /etc/fstab 
tank/zroot / zfs defaults 0 0
tank/home /home zfs defaults 0 0
tank/usr/src /usr/src zfs defaults 0 0
tank/var/mail /var/mail zfs defaults 0 0
tank/home/mason /home/mason zfs defaults 0 0
tank/var/log /var/log zfs defaults 0 0
tank/tmp /tmp zfs defaults 0 0
/dev/md0 /boot ext4 defaults 0 1
/dev/sda2 /boot/efi0 vfat defaults 0 1
/dev/sdb2 /boot/efi1 vfat defaults 0 1
/dev/mapper/swap none swap sw 0 0


# update-grub
Sourcing file `/etc/default/grub'
Sourcing file `/etc/default/grub.d/init-select.cfg'
Generating grub configuration file ...
+ set -e
+ prefix=/usr
+ datarootdir=/usr/share
+ ubuntu_recovery=1
+ quiet_boot=1
+ quick_boot=1
+ gfxpayload_dynamic=1
+ vt_handoff=1
+ . /usr/share/grub/grub-mkconfig_lib
+ prefix=/usr
+ exec_prefix=/usr
+ datarootdir=/usr/share
+ datadir=/usr/share
+ bindir=/usr/bin
+ sbindir=/usr/sbin
+ [ x/usr/share/grub = x ]
+ test x = x
+ grub_probe=/usr/sbin/grub-probe
+ test x = x
+ grub_file=/usr/bin/grub-file
+ test x = x
+ grub_mkrelpath=/usr/bin/grub-mkrelpath
+ which gettext
+ :
+ grub_tab= 
+ export TEXTDOMAIN=grub
+ export TEXTDOMAINDIR=/usr/share/locale
+ set -u
+ which zfs
+ 
+ imported_pools=
+ mktemp -d /tmp/zfsmnt.XX
+ MNTDIR=/tmp/zfsmnt.TssEDF
+ mktemp -d /tmp/zfstmp.XX
+ ZFSTMP=/tmp/zfstmp.v6r2Ln
+ uname -m
+ machine=x86_64
+ GENKERNEL_ARCH=x86_64
+ RC=0
+ trap on_exit EXIT INT QUIT ABRT PIPE TERM
+ GRUB_LINUX_ZFS_TEST=
+ import_pools
+ zpool list
+ awk {if (NR>1) print $1}
+ local initial_pools=tank
+ local all_pools=
+ local imported_pools=
+ local err=
+ set +e
+ zpool import -f -a -o cachefile=none -o readonly=on -N
+ err=no pools available to import
+ [ 0 -ne 0 ]
+ set -e
+ zpool list
+ awk {if (NR>1) print $1}
+ all_pools=tank
+ echo tank
+ grep -wq tank
+ continue
+ echo 
+ imported_pools=
+ bootlist /tmp/zfsmnt.TssEDF
+ local mntdir=/tmp/zfsmnt.TssEDF
+ local boot_list=
+ get_root_datasets
+ zpool list
+ awk {if (NR>1) print $1}
+ local pools=tank
+ zpool get -H altroot tank
+ awk {print $3}
+ local rel_pool_root=-
+ [ - = - ]
+ rel_pool_root=/
+ zfs list -H -o name,canmount,mountpoint -t filesystem
+ awk {print $1}
+ grep -E ^tank(\s|/[[:print:]]*\s)(on|noauto)\s/$
+ echo 
+ boot_list=
+ generate_grub_menu_metadata 
+ local bootlist=
+ get_machines_sorted 
+ local bootlist=
+ echo 
+ + awk {print $3}
sort -u
+ local machineids=
+ + sort -nr
awk {print $2}
+ menu_metadata=
+ generate_grub_menu 
+ local menu_metadata=
+ local last_section=
+ local main_dataset_name=
+ local main_dataset=
+ local have_zsys=
+ [ -z  ]
+ return
+ grub_menu=
+ [ -n  ]
+ on_exit
+ mountpoint -q /tmp/zfsmnt.TssEDF
+ true
+ rmdir /tmp/zfsmnt.TssEDF
+ rm -rf /tmp/zfstmp.v6r2Ln
+ exit 0
done


To reiterate, though, 10_linux does the right thing if it's allowed to 
probe for entries. Maybe a good solution would be to let it, and save a 
few lines and some complexity in 10_linux_zfs.
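Incidentally, the empty boot_list in the trace can be reproduced without ZFS installed at all: it comes down to the grep pattern only accepting datasets whose mountpoint is literally /, which a legacy-mounted root never has. A sketch with invented `zfs list -H -o name,canmount,mountpoint` output (tab-separated, as the real command emits; the dataset names are made up):

```shell
#!/bin/sh
# Invented `zfs list -H` output for a pool using legacy mountpoints
# managed from fstab, as in this report (fields are tab-separated).
legacy_layout='tank	on	legacy
tank/home	on	legacy'

# The same pool with a conventional inherited mountpoint hierarchy.
inherited_layout='tank	on	/
tank/home	on	/home'

# Root-dataset pattern from the trace above, with rel_pool_root=/.
pattern='^tank(\s|/[[:print:]]*\s)(on|noauto)\s/$'

find_root_datasets() {
    printf '%s\n' "$1" | grep -E "$pattern" | awk '{print $1}'
}

echo "legacy roots:    $(find_root_datasets "$legacy_layout")"
echo "inherited roots: $(find_root_datasets "$inherited_layout")"
```

With the legacy layout the pattern matches nothing, boot_list stays empty, and no menu entries are generated; with the inherited layout "tank" is found as a root dataset.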


[Bug 1945873] Re: vt.handoff=7 ~broken on VM

2021-10-04 Thread Mason Loring Bliss
Quick note, removing "splash" from the kernel command line mitigates the
issue.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1945873

Title:
  vt.handoff=7 ~broken on VM

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1945873/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1945873] Re: vt.handoff=7 ~broken on VM

2021-10-04 Thread Mason Loring Bliss
I just installed Impish Indri on my Thinkpad T420 and the same issue 
cropped up. This is notable in that it is a UEFI install on real 
hardware. I'd expected this was a glitch that would only show up on VMs.


[Bug 1945873] Re: vt.handoff=7 ~broken on VM

2021-10-03 Thread Mason Loring Bliss
** Attachment added: "in case it's useful: dpkg -l | awk '{print $2,$3}' > 
packages"
   
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1945873/+attachment/5530338/+files/packages


[Bug 1945873] Re: vt.handoff=7 ~broken on VM

2021-10-03 Thread Mason Loring Bliss
I've been lax in exercising this stuff. I tend to stick to Ubuntu LTS 
hypervisors (hence hardware), and my VMs are often other systems. I only
noticed this issue because I needed a VM test environment for the
unrelated ZFS root bug I was exploring. I haven't ruled out that it's an
artifact of my install method, although if it is, that itself might
indicate a missing explicit dependency somewhere. I'll try to get time 
to do a vanilla install using one of the shipped installers, and I'll
report back with results.


[Bug 1945874] Re: 21.10 beta, errors in 10-linux and 10_linux_zfs

2021-10-02 Thread Mason Loring Bliss
** Description changed:

- 
  In a custom install of Ubuntu 21.10 beta, both hardware and VM installs
  suffer from a bug in the grub.d/10_linux and 10_linux_zfs scripts. (For
  comparison, Debian Bullseye, running a similar version of grub, doesn't
  have this issue.)
  
  Unique to Ubuntu, there's this block in 10_linux:
  
- xzfs)
- # We have a more specialized ZFS handler, with multiple system in
- # 10_linux_zfs.
- if [ -e "`dirname $(readlink -f $0)`/10_linux_zfs" ]; then
-   echo "zoinks!" >> /tmp/foo
-   exit 0
- fi
+ xzfs)
+ # We have a more specialized ZFS handler, with multiple system in
+ # 10_linux_zfs.
+ if [ -e "`dirname $(readlink -f $0)`/10_linux_zfs" ]; then
+   echo "zoinks!" >> /tmp/foo
+   exit 0
+ fi
  
  This looks at the root filesystem, and if it's ZFS, it shunts kernel
  discovery and entry population off to 10_linux_zfs. This subsequent
  script assumes that the default/automated Ubuntu ZFS layout is in
  effect, and if it's not, the end result is that 10_linux doesn't add an
  entry because there is ZFS present, and 10_linux_zfs doesn't add a
  kernel stanza either, evidently because /boot isn't in a pool. (I
  haven't tracked the logic in 10_linux_zfs fully but given time pressure
  I wanted to get this bug in so someone could look at it.) With this
  combination of events, the resulting grub.cfg has no kernel stanzas at
  all, leaving the user at a grub> prompt. Manual configuration and
- booting from the prompt works from this point, but it's obvious not
+ booting from the prompt works from this point, but it's obviously not
  ideal.
  
  In testing, commenting out the "exit" in the code block noted above
  resulted in correct stanzas being generated, in this case with /boot
  being on ext4 atop MD-RAID1. Rather than exiting if the root is on ZFS,
  correct behaviour would occur in more cases if we check for /boot being
  on ZFS or not. A simple check (untested) might be:
  
- if ! grep -q '[[:space:]]/boot[[:space:]]' /etc/fstab; then
-   exit 0
- fi
+ if ! grep -q '[[:space:]]/boot[[:space:]]' /etc/fstab; then
+   exit 0
+ fi
  
  This doesn't check for 10_linux_zfs existing, but that check is perhaps
  redundant given that the file is installed alongside 10_linux and thus
  will always exist, as packaged. This instead checks to see if there's a
  /boot in /etc/fstab, which if present should indicate that /boot is not
  going to be found on a ZFS dataset. (Certainly, traditional filesystems
  can exist on zvols and legacy-mount datasets exist, and both of these
  can appear in fstab, but neither of those is possible for a working
  /boot.)

** Description changed:

  In a custom install of Ubuntu 21.10 beta, both hardware and VM installs
  suffer from a bug in the grub.d/10_linux and 10_linux_zfs scripts. (For
  comparison, Debian Bullseye, running a similar version of grub, doesn't
  have this issue.)
  
  Unique to Ubuntu, there's this block in 10_linux:
  
  xzfs)
  # We have a more specialized ZFS handler, with multiple system in
  # 10_linux_zfs.
  if [ -e "`dirname $(readlink -f $0)`/10_linux_zfs" ]; then
-   echo "zoinks!" >> /tmp/foo
    exit 0
  fi
  
  This looks at the root filesystem, and if it's ZFS, it shunts kernel
  discovery and entry population off to 10_linux_zfs. This subsequent
  script assumes that the default/automated Ubuntu ZFS layout is in
  effect, and if it's not, the end result is that 10_linux doesn't add an
  entry because there is ZFS present, and 10_linux_zfs doesn't add a
  kernel stanza either, evidently because /boot isn't in a pool. (I
  haven't tracked the logic in 10_linux_zfs fully but given time pressure
  I wanted to get this bug in so someone could look at it.) With this
  combination of events, the resulting grub.cfg has no kernel stanzas at
  all, leaving the user at a grub> prompt. Manual configuration and
  booting from the prompt works from this point, but it's obviously not
  ideal.
  
  In testing, commenting out the "exit" in the code block noted above
  resulted in correct stanzas being generated, in this case with /boot
  being on ext4 atop MD-RAID1. Rather than exiting if the root is on ZFS,
  correct behaviour would occur in more cases if we check for /boot being
  on ZFS or not. A simple check (untested) might be:
  
  if ! grep -q '[[:space:]]/boot[[:space:]]' /etc/fstab; then
    exit 0
  fi
  
  This doesn't check for 10_linux_zfs existing, but that check is perhaps
  redundant given that the file is installed alongside 10_linux and thus
  will always exist, as packaged. This instead checks to see if there's a
  /boot in /etc/fstab, which if present should indicate that /boot is not
  going to be found on a ZFS dataset. (Certainly, traditional filesystems
  can exist on zvols and legacy-mount

[Bug 1945873] Re: vt.handoff=7 ~broken on VM

2021-10-02 Thread Mason Loring Bliss
Additional detail in case it's useful:

Hypervisor is Debian Buster, virt-manager, libvirt/KVM, default Spice 
display, default QXL video. Legacy install, so no efifb.


[Bug 1945873] Re: vt.handoff=7 ~broken on VM

2021-10-02 Thread Mason Loring Bliss
** Description changed:

- In an install made with 21.10 beta, the default settings add vt.handoff=7 
- to the primary kernel command line in /boot/grub/grub.cfg. The effect of 
- this is that the virtual console is somewhat broken during boot. Prompts 
- for LUKS passphrases are hidden, and once the system is done booting, getty
- prompts do not show until the user navigates to another virtual console. 
- (This isn't possible until the system is booted, meaning it's not a 
- mitigation for accessing LUKS prompt. For that, a user can hit escape and 
- be presented with a partially-functional LUKS prompt.)
+ In an install made with 21.10 beta, the default settings add 
+ vt.handoff=7 to the primary kernel command line in /boot/grub/grub.cfg.
+ The effect of this is that the virtual console is somewhat broken during
+ boot. Prompts for LUKS passphrases are hidden, and once the system is 
+ done booting, getty prompts do not show until the user navigates to
+ another virtual console. (This isn't possible until the system is 
+ booted, meaning it's not a mitigation for accessing LUKS prompt. For
+ that, a user can hit escape and be presented with a partially-functional
+ LUKS prompt.)
  
- Attached is an example of a getty prompt not showing. This is after hitting 
- ESC to get a LUKS prompt so that disks could be unlocked, so that doesn't 
- help later boot.
+ Attached is an example of a getty prompt not showing. This is after 
+ hitting ESC to get a LUKS prompt so that disks could be unlocked, so
+ that doesn't help later boot.
  
- The system in this case is a debootstrap-installed ZFS-on-root with an ext4
- MD-RAID1 /boot.
+ The system in this case is a debootstrap-installed ZFS-on-root with an 
+ ext4 MD-RAID1 /boot.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1945873

Title:
  vt.handoff=7 ~broken on VM

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub/+bug/1945873/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1945874] [NEW] 21.10 beta, errors in 10-linux and 10_linux_zfs

2021-10-02 Thread Mason Loring Bliss
Public bug reported:


In a custom install of Ubuntu 21.10 beta, both hardware and VM installs
suffer from a bug in the grub.d/10_linux and 10_linux_zfs scripts. (For
comparison, Debian Bullseye, running a similar version of grub, doesn't
have this issue.)

Unique to Ubuntu, there's this block in 10_linux:

xzfs)
# We have a more specialized ZFS handler, with multiple system in
# 10_linux_zfs.
if [ -e "`dirname $(readlink -f $0)`/10_linux_zfs" ]; then
  echo "zoinks!" >> /tmp/foo
  exit 0
fi

This looks at the root filesystem, and if it's ZFS, it shunts kernel
discovery and entry population off to 10_linux_zfs. This subsequent
script assumes that the default/automated Ubuntu ZFS layout is in
effect, and if it's not, the end result is that 10_linux doesn't add an
entry because there is ZFS present, and 10_linux_zfs doesn't add a
kernel stanza either, evidently because /boot isn't in a pool. (I
haven't tracked the logic in 10_linux_zfs fully but given time pressure
I wanted to get this bug in so someone could look at it.) With this
combination of events, the resulting grub.cfg has no kernel stanzas at
all, leaving the user at a grub> prompt. Manual configuration and
booting from the prompt works from this point, but it's obviously not
ideal.

In testing, commenting out the "exit" in the code block noted above
resulted in correct stanzas being generated, in this case with /boot
being on ext4 atop MD-RAID1. Rather than exiting if the root is on ZFS,
correct behaviour would occur in more cases if we check for /boot being
on ZFS or not. A simple check (untested) might be:

if ! grep -q '[[:space:]]/boot[[:space:]]' /etc/fstab; then
  exit 0
fi

This doesn't check for 10_linux_zfs existing, but that check is perhaps
redundant given that the file is installed alongside 10_linux and thus
will always exist, as packaged. This instead checks to see if there's a
/boot in /etc/fstab, which if present should indicate that /boot is not
going to be found on a ZFS dataset. (Certainly, traditional filesystems
can exist on zvols and legacy-mount datasets exist, and both of these
can appear in fstab, but neither of those is possible for a working
/boot.)
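A hedged sketch of how that check behaves (untested in the grub-mkconfig context; the fstab contents here are invented for illustration):

```shell
#!/bin/sh
# Invented fstab for a ZFS-on-root machine with a separate ext4 /boot,
# matching the layout described in this report.
fstab_split_boot='tank/zroot / zfs defaults 0 0
/dev/md0 /boot ext4 defaults 0 1'

# Invented fstab for an all-ZFS layout with no separate /boot entry.
fstab_all_zfs='rpool/ROOT/ubuntu / zfs defaults 0 0'

# The proposed test: is there a /boot line in fstab?
has_boot_in_fstab() {
    printf '%s\n' "$1" | grep -q '[[:space:]]/boot[[:space:]]'
}

if has_boot_in_fstab "$fstab_split_boot"; then
    echo "separate /boot found: let 10_linux generate the entries"
fi
if ! has_boot_in_fstab "$fstab_all_zfs"; then
    echo "no /boot line: defer to 10_linux_zfs as before"
fi
```

Note that a /boot/efi line alone would not match, since the pattern requires whitespace immediately after /boot, which is the desired behaviour here.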

** Affects: grub (Ubuntu)
 Importance: Undecided
 Status: New

** Description changed:

+ 
  In a custom install of Ubuntu 21.10 beta, both hardware and VM installs
  suffer from a bug in the grub.d/10_linux and 10_linux_zfs scripts. (For
  comparison, Debian Bullseye, running a similar version of grub, doesn't
  have this issue.)
  
  Unique to Ubuntu, there's this block in 10_linux:
  
- xzfs) 
+ xzfs)
  # We have a more specialized ZFS handler, with multiple system in
  # 10_linux_zfs.
  if [ -e "`dirname $(readlink -f $0)`/10_linux_zfs" ]; then
echo "zoinks!" >> /tmp/foo
exit 0
  fi
  
- This looks at the root filesystem, and if it's ZFS, it shunts kernel 
- discovery and entry population off to 10_linux_zfs. This subsequent script 
- assumes that the default/automated Ubuntu ZFS layout is in effect, and if 
- it's not, the end result is that 10_linux doesn't add an entry because 
- there is ZFS present, and 10_linux_zfs doesn't add a kernel stanza either,
- evidently because /boot isn't in a pool. (I haven't tracked the logic in 
- 10_linux_zfs fully but given time pressure I wanted to get this bug in so
- someone could look at it.) With this combination of events, the resulting 
- grub.cfg has no kernel stanzas at all, leaving the user at a grub> prompt. 
- Manual configuration and booting from the prompt works from this point, but 
- it's obvious not ideal.
+ This looks at the root filesystem, and if it's ZFS, it shunts kernel
+ discovery and entry population off to 10_linux_zfs. This subsequent
+ script assumes that the default/automated Ubuntu ZFS layout is in
+ effect, and if it's not, the end result is that 10_linux doesn't add an
+ entry because there is ZFS present, and 10_linux_zfs doesn't add a
+ kernel stanza either, evidently because /boot isn't in a pool. (I
+ haven't tracked the logic in 10_linux_zfs fully but given time pressure
+ I wanted to get this bug in so someone could look at it.) With this
+ combination of events, the resulting grub.cfg has no kernel stanzas at
+ all, leaving the user at a grub> prompt. Manual configuration and
+ booting from the prompt works from this point, but it's obvious not
+ ideal.
  
- In testing, commenting out the "exit" in the code block noted above 
- resulted in correct stanzas being generated, in this case with /boot being 
- on ext4 atop MD-RAID1. Rather than exiting if the root is on ZFS, correct 
- behaviour would occur in more cases if we check for /boot being on ZFS or 
- not. A simple check (untested) might be:
-   
+ In testing, commenting out the "exit" in the code block noted above
+ resulted in correct stanzas being generated, in thi

[Bug 1945873] [NEW] vt.handoff=7 ~broken on VM

2021-10-02 Thread Mason Loring Bliss
Public bug reported:

In an install made with 21.10 beta, the default settings add vt.handoff=7 
to the primary kernel command line in /boot/grub/grub.cfg. The effect of 
this is that the virtual console is somewhat broken during boot. Prompts 
for LUKS passphrases are hidden, and once the system is done booting, getty
prompts do not show until the user navigates to another virtual console. 
(This isn't possible until the system is booted, meaning it's not a 
mitigation for accessing LUKS prompt. For that, a user can hit escape and 
be presented with a partially-functional LUKS prompt.)

Attached is an example of a getty prompt not showing. This is after hitting 
ESC to get a LUKS prompt so that disks could be unlocked, so that doesn't 
help later boot.

The system in this case is a debootstrap-installed ZFS-on-root with an ext4
MD-RAID1 /boot.

** Affects: grub (Ubuntu)
 Importance: Undecided
 Status: New

** Attachment added: "Example of screen where getty prompt should be showing"
   
https://bugs.launchpad.net/bugs/1945873/+attachment/5530170/+files/2021-10-02-143452_1024x835_scrot.png


[Bug 1853164] Re: systemd: /etc/dhcp/dhclient-enter-hooks.d/resolved error

2021-07-09 Thread Mason Loring Bliss
I've tested this, and under the noted conditions resolvconf no longer
has an issue updating /etc/resolv.conf. Thank you for your time and
attention to detail!

Unless someone else weighs in noting a problem, from my perspective it
seems like this bug can be closed now, as you've corrected the issue.
Thank you.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1853164

Title:
  systemd: /etc/dhcp/dhclient-enter-hooks.d/resolved error

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1853164/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1853164] Re: systemd: /etc/dhcp/dhclient-enter-hooks.d/resolved error

2021-07-09 Thread Mason Loring Bliss
Hey there. Thanks for your time on this. I'll try to supply positive
confirmation over the weekend.


[Bug 1886851] [NEW] irssi freezes on particular input - upstream bugs

2020-07-08 Thread Mason Loring Bliss
Public bug reported:

I've observed that irssi in Focal Fossa locks up randomly, requiring that 
the process be killed. It seems very likely that it is (or is related to) 
the following upstream bug report:

https://github.com/irssi/irssi/issues/1180
https://github.com/irssi/irssi/pull/1183

Also noted here:

https://irssi.org/NEWS/#v1-2-2

I depend on IRC for work so I moved away from Ubuntu on my IRC client
machine, which was unfortunate. I haven't been able to test the patches 
noted. It'd be great if Ubuntu pulled in the correction in whatever is the 
most reasonable way. I may get a chance to prepare and test a patched 
version of irssi before long, in which case I'll share feedback.

** Affects: irssi (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1886851

Title:
  irssi freezes on particular input - upstream bugs

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/irssi/+bug/1886851/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1868553] Re: libefi* integration breaks grub-install on MD devices

2020-06-24 Thread Mason Loring Bliss
FWIW, my prior comment was confusing. GRUB handles both
efibootmgr entries correctly on its own with this new
functionality.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1868553

Title:
  libefi* integration breaks grub-install on MD devices

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/curtin/+bug/1868553/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1868553] Re: libefi* integration breaks grub-install on MD devices

2020-06-05 Thread Mason Loring Bliss
Paride,

It was a limited-duration copy of the original text pasted into my previous
ticket comment, so it contained unblemished formatting but was otherwise 
identical content.

As an addendum, I tested 20.04 with a two-ESP system and I was able to 
specify both ESPs when I ran:

dpkg-reconfigure grub-efi-amd64

The resulting EFI boot entries were correct.


[Bug 1868553] Re: libefi* integration breaks grub-install on MD devices

2020-06-02 Thread Mason Loring Bliss
So, "dpkg-reconfigure grub-efi-amd64" now has a screen that matches what 
we'd get reconfiguring the old grub-pc:

 
 ┌──┤ Configuring grub-efi-amd64 ├───┐
 │ The grub-efi package is being upgraded. This menu allows you to select│
 │ which EFI system partions you'd like grub-install to be automatically │
 │ run for, if any.  │
 │   │
 │ Running grub-install automatically is recommended in most situations, to  │
 │ prevent the installed GRUB core image from getting out of sync with GRUB  │
 │ modules or grub.cfg.  │
 │   │
 │ GRUB EFI system partitions:   │
 │   │
 │[*] /dev/sda1 (199 MB; /boot/efi) on 120034 MB INTEL_SSDSC2BW12│
 │   │
 │   │
 │   │
 │   │
 └───┘
 
I'll test with a system with two ESPs later, but this ought to do the right 
thing. You'll need one entry for each, just as you would with an old-metadata
MD-RAID1 used as an ESP, but as vorlon's noting, this will be a little
safer in the face of UEFI firmware that writes stuff to the drives.

It'd be something like:

efibootmgr -c -d /dev/sda -L ubuntu0 -l '\EFI\UBUNTU\SHIMX64.EFI'
efibootmgr -c -d /dev/sdb -L ubuntu1 -l '\EFI\UBUNTU\SHIMX64.EFI'

This is a win, and I have no further desire for direct MD-RAID 1
support.


[Bug 1868553] Re: libefi* integration breaks grub-install on MD devices

2020-06-02 Thread Mason Loring Bliss
https://bpa.st/FQOQ

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1868553

Title:
  libefi* integration breaks grub-install on MD devices

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/curtin/+bug/1868553/+subscriptions


[Bug 1861359] Re: swap storms kills interactive use

2020-04-03 Thread Mason Loring Bliss
Reporter hasn't confirmed that it's corrected yet... "Fix committed"
seems premature.

** Changed in: linux (Ubuntu Focal)
   Status: Fix Committed => Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1861359

Title:
  swap storms kills interactive use

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1861359/+subscriptions


[Bug 1869716] Re: Removing libpango1.0-0 broke Minecraft Launcher

2020-03-30 Thread Mason Loring Bliss
Note: in Bionic - I haven't checked exhaustively - libpango1.0-0 is a
transitional package, and libpango-1.0-0 is the actual package. Perhaps Focal
Fossa should continue to ship the transitional package to keep Minecraft folk
from encountering this. I'm not sure what policy applies.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1869716

Title:
  Removing  libpango1.0-0  broke Minecraft Launcher

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pango1.0/+bug/1869716/+subscriptions


[Bug 1869716] Re: Removing libpango1.0-0 broke Minecraft Launcher

2020-03-30 Thread Mason Loring Bliss
And RikMills notes:

https://launchpad.net/ubuntu/+source/pango1.0/1.44.7-2ubuntu1

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1869716

Title:
  Removing  libpango1.0-0  broke Minecraft Launcher

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pango1.0/+bug/1869716/+subscriptions


[Bug 1869614] [NEW] Missing bits noted on 16.04 to 18.04 do-release-upgrade

2020-03-29 Thread Mason Loring Bliss
Public bug reported:

I've noted two things on a recent do-release-upgrade wherein I upgraded a 
system from 16.04 to 18.04. It largely went well, but after the upgrade I 
didn't see Canonical Livepatch status in my motd.

When I looked, I noted update-motd:amd64 missing, so I installed that, but 
that still didn't make my motd show Livepatch status. A bit of searching noted 
that /etc/cron.daily/ubuntu-advantage-tools runs to update 
/var/cache/ubuntu-advantage-tools/ubuntu-advantage-status.cache, and evidently 
that hadn't happened.

Updating that cache would be a very reasonable thing to do on boot as well,
not just daily out of cron - perhaps as part of the livepatch service
start-up, in addition to the cron job.
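As a sketch only - the unit name and wiring here are my assumptions, not
anything the package ships - a oneshot service could reuse the existing daily
script at boot:

```ini
# /etc/systemd/system/ua-status-cache.service (hypothetical name)
[Unit]
Description=Refresh ubuntu-advantage status cache for the motd
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
# Reuse the existing daily script rather than duplicating its logic.
ExecStart=/etc/cron.daily/ubuntu-advantage-tools

[Install]
WantedBy=multi-user.target
```

Something like that, enabled once, would keep the motd status from being
stale for up to a day after boot.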

It's as yet unclear why update-motd was missing. I didn't see spoor from the 
do-release-upgrade in /var/log/apt/history.log, which is where I'd have 
naively expected to see it.

** Affects: ubuntu-release-upgrader (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1869614

Title:
  Missing bits noted on 16.04 to 18.04 do-release-upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubuntu-release-upgrader/+bug/1869614/+subscriptions


[Bug 1853164] Re: systemd: /etc/dhcp/dhclient-enter-hooks.d/resolved error

2019-11-21 Thread Mason Loring Bliss
Interesting. But the code is still incorrect, with my patch correcting it, 
so I guess we'll see how it goes. I can either fix it locally or go elsewhere,
but I'm hoping it's simply fixed in the distribution.

Thanks for the pointer.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1853164

Title:
  systemd: /etc/dhcp/dhclient-enter-hooks.d/resolved error

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1853164/+subscriptions


[Bug 1853164] Re: systemd: /etc/dhcp/dhclient-enter-hooks.d/resolved error

2019-11-19 Thread Mason Loring Bliss
sdezial notes that there's a terser form. I use the long form out of
superstitious awe at the notion of a return code of zero being "true", even
though it always is, but this is shorter and equally correct:

if systemctl is-active systemd-resolved > /dev/null 2>&1; then
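The two spellings behave identically, since `if` tests the command's exit
status directly. A quick self-contained illustration - using `grep` as a
stand-in for `systemctl is-active`, purely an assumption for demonstration:

```shell
#!/bin/sh
# Long form: capture $? explicitly, then test it.
printf 'active\n' | grep -q active
if [ $? -eq 0 ]; then
    echo "long form: active"
fi

# Terse form: let `if` consume the exit status directly.
if printf 'active\n' | grep -q active; then
    echo "terse form: active"
fi
```

Both branches print, because both forms see the same zero exit status.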

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1853164

Title:
  systemd: /etc/dhcp/dhclient-enter-hooks.d/resolved error

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1853164/+subscriptions


[Bug 1853164] [NEW] systemd: /etc/dhcp/dhclient-enter-hooks.d/resolved error

2019-11-19 Thread Mason Loring Bliss
Public bug reported:

The functionality exists to allow users to revert to the traditional ifupdown 
package for network configuration. Alongside this, systemd's often-buggy 
resolver can be disabled. However, there's a logic error in the systemd-
supplied /etc/dhcp/dhclient-enter-hooks.d/resolved that prevents the system
from populating /etc/resolv.conf properly when systemd-resolved is disabled. 
The issue is here:

if [ -x /lib/systemd/systemd-resolved ] ; then

Instead of checking to see if the systemd-resolved service is enabled or 
active, which would be the correct behaviour, this checks for the existence of
a binary, assuming that if it exists it's supposed to be used.

I've not tested this in the absence of resolvconf, but if systemd-resolved 
isn't enabled, it's difficult to imagine this code wanting to run. I've tested 
this with resolvconf and ifupdown driving dhclient, and it corrects the 
behaviour that was broken with the introduction of systemd-resolved.

I'm attaching a patch, and am also including it here for easy access:

*** resolved.broken 2019-11-19 15:01:28.785588838 +
--- resolved    2019-11-19 15:08:06.519430073 +
***
*** 14,20 
  #   (D) = master script downs interface
  #   (-) = master script does nothing with this

! if [ -x /lib/systemd/systemd-resolved ] ; then
  # For safety, first undefine the nasty default make_resolv_conf()
  make_resolv_conf() { : ; }
  case "$reason" in
--- 14,21 
  #   (D) = master script downs interface
  #   (-) = master script does nothing with this

! systemctl is-active systemd-resolved > /dev/null 2>&1
! if [ $? -eq 0 ]; then
  # For safety, first undefine the nasty default make_resolv_conf()
  make_resolv_conf() { : ; }
  case "$reason" in

** Affects: systemd (Ubuntu)
 Importance: Undecided
 Status: New

** Patch added: "Check for active service, not existence of binary."
   
https://bugs.launchpad.net/bugs/1853164/+attachment/5306478/+files/systemd-resolved.patch

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1853164

Title:
  systemd: /etc/dhcp/dhclient-enter-hooks.d/resolved error

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1853164/+subscriptions


[Bug 1853164] Re: systemd: /etc/dhcp/dhclient-enter-hooks.d/resolved error

2019-11-19 Thread Mason Loring Bliss
-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1853164

Title:
  systemd: /etc/dhcp/dhclient-enter-hooks.d/resolved error

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1853164/+subscriptions


[Bug 1830096] Re: Firefox 67 in Ubuntu 18.10 thinks it's an older version

2019-05-31 Thread Mason Loring Bliss
For what it's worth, my kids encountered this on two desktops today,
moving from 16.04 to 18.04. As this would be a major issue for non-
technical folk I'd urge getting the fix out the door sooner rather than
later - especially on 18.04, to preserve LTS as being a safe choice.

The workaround I used here, prior to being shown this bug, was to let
Firefox make a new profile and to follow these instructions to copy over
all the critical files:

https://support.mozilla.org/en-US/kb/recovering-important-data-from-an-old-profile

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1830096

Title:
  Firefox 67 in Ubuntu 18.10 thinks it's an older version

To manage notifications about this bug go to:
https://bugs.launchpad.net/firefox/+bug/1830096/+subscriptions


[Bug 1779736] Re: umask ignored on NFSv4.2 mounts

2019-03-07 Thread Mason Loring Bliss
I can confirm that "zfs set acltype=posixacl foo/bar/" is an effective
workaround. It appears to be unset by default.

root@box /root# zfs set acltype=posixacl pool/srv/thing
root@box /root# zfs get acltype pool/srv
NAME  PROPERTY  VALUE SOURCE
pool/srv  acltype   off   default
root@box /root# zfs get acltype pool/srv/thing
NAMEPROPERTY  VALUE SOURCE
pool/srv/thing  acltype   posixacl  local

Thanks, Quentin.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1779736

Title:
  umask ignored on NFSv4.2 mounts

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1779736/+subscriptions


[Bug 1789130] Re: netplan apply doesn't activate NetworkManager

2018-08-27 Thread Mason Loring Bliss
I just tried again on a fresh install, and noted this in syslog:

Aug 27 21:55:14 penguin systemd-timesyncd[904]: Network configuration changed, trying to establish connection.
Aug 27 21:55:14 penguin systemd[1]: Starting Load/Save RF Kill Switch Status...
Aug 27 21:55:14 penguin systemd-rfkill[9744]: Failed to open device rfkill0: No such device
Aug 27 21:55:14 penguin systemd[1]: Started Load/Save RF Kill Switch Status.
Aug 27 21:55:14 penguin systemd-timesyncd[904]: Synchronized to time server 91.189.91.157:123 (ntp.ubuntu.com).
Aug 27 21:55:14 penguin kernel: [24668.459650] iwlwifi 0000:03:00.0: can't disable ASPM; OS doesn't have ASPM control
Aug 27 21:55:14 penguin kernel: [24668.460729] iwlwifi 0000:03:00.0: loaded firmware version 18.168.6.1 op_mode iwldvm
Aug 27 21:55:14 penguin kernel: [24668.460758] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEBUG disabled
Aug 27 21:55:14 penguin kernel: [24668.460760] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEBUGFS enabled
Aug 27 21:55:14 penguin kernel: [24668.460761] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEVICE_TRACING enabled
Aug 27 21:55:14 penguin kernel: [24668.460764] iwlwifi 0000:03:00.0: Detected Intel(R) Centrino(R) Advanced-N 6205 AGN, REV=0xB0
Aug 27 21:55:14 penguin kernel: [24668.489358] ieee80211 phy1: Selected rate control algorithm 'iwl-agn-rs'
Aug 27 21:55:14 penguin systemd-udevd[9759]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Aug 27 21:55:14 penguin kernel: [24668.492733] iwlwifi 0000:03:00.0 wlp3s0: renamed from wlan0
Aug 27 21:55:14 penguin systemd-networkd[922]: wlan0: Interface name change detected, wlan0 has been renamed to wlp3s0.
Aug 27 21:55:14 penguin systemd-timesyncd[904]: Network configuration changed, trying to establish connection.
Aug 27 21:55:14 penguin networkd-dispatcher[1123]: WARNING:Unknown index 4 seen, reloading interface list
Aug 27 21:55:14 penguin systemd-timesyncd[904]: Synchronized to time server 91.189.91.157:123 (ntp.ubuntu.com).
Aug 27 21:55:17 penguin ModemManager[9578]: <info>  Couldn't check support for device at '/sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0': not supported by any plugin


That said, if I manually start NetworkManager afterwards, it appears to inherit 
network handling unproblematically without a reboot.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789130

Title:
  netplan apply doesn't activate NetworkManager

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1789130/+subscriptions


[Bug 1789119] Re: conversion from netplan to ifupdown broken

2018-08-27 Thread Mason Loring Bliss
I'm going to go ahead and close this. I can't reproduce it, on VMs or on
the hardware where I encountered it. I must have fat-fingered something.
Sorry for the noise.


** Changed in: netplan.io (Ubuntu)
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789119

Title:
  conversion from netplan to ifupdown broken

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1789119/+subscriptions


[Bug 1789130] [NEW] netplan apply doesn't activate NetworkManager

2018-08-26 Thread Mason Loring Bliss
Public bug reported:

Starting with the Ubuntu Server install, I get netplan driving systemd-
networkd by default.

If I change /etc/netplan/01-netcfg.yaml to point to NetworkManager,
"netplan apply" doesn't fire up NetworkManager.

Attached screenshot illustrates what I see.

** Affects: netplan.io (Ubuntu)
 Importance: Undecided
 Status: New

** Attachment added: "2018-08-26-162427_1024x835_scrot.png"
   
https://bugs.launchpad.net/bugs/1789130/+attachment/5181058/+files/2018-08-26-162427_1024x835_scrot.png

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789130

Title:
  netplan apply doesn't activate NetworkManager

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1789130/+subscriptions


[Bug 1789119] [NEW] conversion from netplan to ifupdown broken

2018-08-26 Thread Mason Loring Bliss
Public bug reported:

A new install of Ubuntu 18.04 server suggests that traditional networking
can be restored by installing and configuring ifupdown.

I was able to install and configure ifupdown, but booting hung on an
unlimited wait for network, and this didn't stop until I removed the
netplan.io package while booted using rescue media.

Once I did that, the interface I'd configured came up, except for the
supplied nameserver, which was evidently ignored - the systemd-networkd
127.0.0.53 address remained as the server in /etc/resolv.conf.

** Affects: netplan.io (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789119

Title:
  conversion from netplan to ifupdown broken

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1789119/+subscriptions


[Bug 1789119] Re: conversion from netplan to ifupdown broken

2018-08-26 Thread Mason Loring Bliss
If it matters, the /etc/network/interfaces file I created was similar to
this:

auto br0
iface br0 inet static
address /24
gateway 
bridge_ports enp2s0
dns_nameserver 
dns_search 

I didn't think to configure lo, and conceivably that could have
mattered.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1789119

Title:
  conversion from netplan to ifupdown broken

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1789119/+subscriptions


[Bug 1782224] [NEW] Xenial fails to boot on a degraded array

2018-07-17 Thread Mason Loring Bliss
Public bug reported:

This is similar to what was reported as fixed in bug 1635049 and
described here:

https://askubuntu.com/questions/789953/how-to-enable-degraded-raid1-boot-in-16-04lts/798213

Automatic booting with a degraded array fails. One must wait for the
initramfs prompt and manually assemble with "mdadm -IRs" to continue.

I see mdadm 3.3-2ubuntu7.6 installed.

Reference to earlier correction: https://wiki.ubuntu.com/ReliableRaid
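For what it's worth, the knob Ubuntu used for this in earlier releases was a
conf.d snippet; I'm assuming, untested, that the Xenial initramfs hook still
honours it:

```
# /etc/initramfs-tools/conf.d/mdadm  (assumption: still honoured in Xenial)
BOOT_DEGRADED=true
```

followed by "update-initramfs -u" to bake it into the initramfs.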

** Affects: mdadm (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1782224

Title:
  Xenial fails to boot on a degraded array

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1782224/+subscriptions


[Bug 1578830] Re: gnome-terminal fails to launch after install of xenial

2018-03-19 Thread Mason Loring Bliss
Note: 1474927, 1578830, and 1652451 all seem to refer to the same issue. The 
error code noted is documented here:

https://wiki.gnome.org/Apps/Terminal/FAQ#Exit_status_8

This followed by a reboot or logging back in ought to resolve the issue:

sudo localectl set-locale LANG=en_US.utf8

If you've done something invasive with locales, you might need to do something 
more invasive to resolve the issue, as noted here:

https://askubuntu.com/questions/608330/problem-with-gnome-terminal-on-gnome-3-12-2/651169#651169

Thanks to timeless on Freenode for the link to the exit status.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1578830

Title:
  gnome-terminal fails to launch after install of xenial

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gnome-terminal/+bug/1578830/+subscriptions


[Bug 1474927] Re: gnome-terminal crashes with "Error constructing proxy for org.gnome.Terminal:/org/gnome/Terminal/Factory0: Error calling StartServiceByName for org.gnome.Terminal: GDBus.Error:org.fr

2018-03-19 Thread Mason Loring Bliss
Note: 1474927, 1578830, and 1652451 all seem to refer to the same issue. The 
error code noted is documented here:

https://wiki.gnome.org/Apps/Terminal/FAQ#Exit_status_8

This followed by a reboot or logging back in ought to resolve the issue:

sudo localectl set-locale LANG=en_US.utf8

If you've done something invasive with locales, you might need to do something 
more invasive to resolve the issue, as noted here:

https://askubuntu.com/questions/608330/problem-with-gnome-terminal-on-gnome-3-12-2/651169#651169

Thanks to timeless on Freenode for the link to the exit status.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1474927

Title:
  gnome-terminal crashes with "Error constructing proxy for
  org.gnome.Terminal:/org/gnome/Terminal/Factory0: Error calling
  StartServiceByName for org.gnome.Terminal:
  GDBus.Error:org.freedesktop.DBus.Error.Spawn.ChildExited: Process
  org.gnome.Terminal exited with status 8" immediately after start

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gnome-terminal/+bug/1474927/+subscriptions


[Bug 1652451] Re: Gnome Terminal doesn't launch

2018-03-19 Thread Mason Loring Bliss
Note: 1474927, 1578830, and 1652451 all seem to refer to the same issue. The 
error code noted is documented here:

https://wiki.gnome.org/Apps/Terminal/FAQ#Exit_status_8

This followed by a reboot or logging back in ought to resolve the issue:

sudo localectl set-locale LANG=en_US.utf8

If you've done something invasive with locales, you might need to do something 
more invasive to resolve the issue, as noted here:

https://askubuntu.com/questions/608330/problem-with-gnome-terminal-on-gnome-3-12-2/651169#651169

Thanks to timeless on Freenode for the link to the exit status.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1652451

Title:
  Gnome Terminal doesn't launch

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gnome-terminal/+bug/1652451/+subscriptions


[Bug 1554795] Re: timeout on restart or shutdown with LUKS root

2016-11-23 Thread Mason Loring Bliss
I'm curious about these two status changes:

Changed in systemd (Ubuntu):
status: Confirmed → Fix Released
Changed in initramfs-tools (Ubuntu):
status: Confirmed → Fix Committed

Would it be possible to have pointers to the commit(s) that fixed this?

Thanks!

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1554795

Title:
  timeout on restart or shutdown with LUKS root

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/initramfs-tools/+bug/1554795/+subscriptions


[Bug 1554795] Re: timeout on restart or shutdown with LUKS root

2016-03-22 Thread Mason Loring Bliss
I'm going to open a separate ticket for the typo/cosmetic issues noted,
so that this ticket can focus on the core. I'll note the ticket number
here once I've created it, which I'll do on the other side of my
commute.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1554795

Title:
  timeout on restart or shutdown with LUKS root

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/initramfs-tools/+bug/1554795/+subscriptions



[Bug 1554795] Re: timeout on restart or shutdown with LUKS root

2016-03-22 Thread Mason Loring Bliss
In addition to the typo, there may be an issue where the new setting I
poked in is used, but the hardcoded default is still displayed on-
screen. I've not been able to capture this on camera as yet, and I have
yet to set things up such that all the generated messages are stored for
later perusal, if that's in fact possible.

The lack of a shutdown-initrd seems to be the more critical issue, in
any event. The typos and possible erroneously displayed data on-screen
are cosmetic.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1554795

Title:
  timeout on restart or shutdown with LUKS root

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/initramfs-tools/+bug/1554795/+subscriptions



[Bug 1554795] Re: timeout on restart or shutdown with LUKS root

2016-03-21 Thread Mason Loring Bliss
I just confirmed that I see this on a laptop as well. Conveniently, the
laptop install is not using MD-RAID or ZFS - it's using LVM on LUKS on a
single disk, so it's a simpler case.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1554795

Title:
  timeout on restart or shutdown with LUKS root

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/initramfs-tools/+bug/1554795/+subscriptions



[Bug 1554795] Re: timeout on restart or shutdown with LUKS root

2016-03-20 Thread Mason Loring Bliss
Assuming this isn't fixed any time soon, I'm running with this:

mason@ogre /home/mason$ cat /etc/systemd/system.conf.d/expletive.conf 
# required singleton - high ceremony
[Manager]
#DefaultTimeoutStartSec=15s
DefaultTimeoutStopSec=15s

Note that while "Manager" is evidently the only possible section, declaring
it is required, even in config snippets under a system.conf.d/ directory.

To clarify previous notes:

1. I don't blame Canonical for systemd.
2. md1_crypt is swap and EXT4 /  → LVM → LUKS → MD-RAID1
3. luksroot1 and luksroot2 are /home and /usr/src → ZFS mirror on two LUKS

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1554795

Title:
  timeout on restart or shutdown with LUKS root

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/initramfs-tools/+bug/1554795/+subscriptions


[Bug 1554795] Re: timeout on restart or shutdown with LUKS root

2016-03-20 Thread Mason Loring Bliss
I just managed to time it right such that I caught the error on my screen,
and in doing so noticed a glaring typo that's likely indicative of the
overall code quality of the related software.

I'd be grateful for debugging tips. I hope Canonical's not going to ship
software that punishes LUKS users with these 90-second delays each
reboot or shutdown. I'd settle for having a way to shorten the timeout,
but systemd seems hopelessly opaque and I haven't found where this is
set as yet.

Anyway, thanks in advance for fixing this annoyance.

** Attachment added: "note careless typo in error message"
   
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1554795/+attachment/4605227/+files/IMG_20160320_040026.jpg

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1554795

Title:
  timeout on restart or shutdown with LUKS root

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/initramfs-tools/+bug/1554795/+subscriptions



[Bug 1554803] Re: apparmor: missing stub hardware directories

2016-03-09 Thread Mason Loring Bliss
That seems like a reasonable approach.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1554803

Title:
  apparmor: missing stub hardware directories

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/1554803/+subscriptions



[Bug 1554803] [NEW] apparmor throwing inexplicable errors

2016-03-08 Thread Mason Loring Bliss
Public bug reported:

● apparmor.service - LSB: AppArmor initialization
   Loaded: loaded (/etc/init.d/apparmor; bad; vendor preset: enabled)
   Active: failed (Result: exit-code) since Tue 2016-03-08 14:34:04 EST; 4h 
23min ago
 Docs: man:systemd-sysv-generator(8)
  Process: 2909 ExecStart=/etc/init.d/apparmor start (code=exited, status=123)

Mar 08 14:34:04 ogre apparmor[2909]: Skipping profile in 
/etc/apparmor.d/disable: usr.bin.firefox
Mar 08 14:34:04 ogre apparmor[2909]: AppArmor parser error for 
/etc/apparmor.d/usr.bin.webbrowser-app in 
/etc/apparmor.d/usr.bin.webbrowser-app at line 26: Could not open 
'/usr/share/apparmor/hardware/graphics.d'
Mar 08 14:34:04 ogre apparmor[2909]: Skipping profile in 
/etc/apparmor.d/disable: usr.bin.firefox
Mar 08 14:34:04 ogre apparmor[2909]: AppArmor parser error for 
/etc/apparmor.d/usr.bin.webbrowser-app in 
/etc/apparmor.d/usr.bin.webbrowser-app at line 26: Could not open 
'/usr/share/apparmor/hardware/graphics.d'
Mar 08 14:34:04 ogre apparmor[2909]: Skipping profile in 
/etc/apparmor.d/disable: usr.sbin.rsyslogd
Mar 08 14:34:04 ogre apparmor[2909]:...fail!
Mar 08 14:34:04 ogre systemd[1]: apparmor.service: Control process exited, 
code=exited status=123
Mar 08 14:34:04 ogre systemd[1]: Failed to start LSB: AppArmor initialization.
Mar 08 14:34:04 ogre systemd[1]: apparmor.service: Unit entered failed state.
Mar 08 14:34:04 ogre systemd[1]: apparmor.service: Failed with result 
'exit-code'.

/usr/share/apparmor/hardware/graphics.d doesn't in fact exist on my system.
I'm not sure what package would provide it, but it seems curious that
whatever package that is doesn't ship its relevant AppArmor bits. I'd have
expected the apparmor package not to complain about bits belonging to
packages that aren't installed.
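A sketch of the obvious workaround - creating the missing stub directory so
the parser's include succeeds. This assumes the parser only needs the
directory to exist; the prefix variable is only here so the sketch can be
exercised outside /usr (on a real system, run with the default prefix as
root):

```shell
#!/bin/sh
# Hypothetical workaround: create the stub directory the parser expects.
# Real target: /usr/share/apparmor/hardware/graphics.d
prefix="${PREFIX:-/usr/share/apparmor}"
mkdir -p "$prefix/hardware/graphics.d"
ls -d "$prefix/hardware/graphics.d"
```

After that, restarting the apparmor service should show whether the parse
error clears.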

ProblemType: Bug
DistroRelease: Ubuntu 16.04
Package: apparmor 2.10-3ubuntu2
ProcVersionSignature: Ubuntu 4.4.0-11.26-generic 4.4.4
Uname: Linux 4.4.0-11-generic x86_64
NonfreeKernelModules: zfs zunicode zcommon znvpair zavl nvidia_uvm 
nvidia_modeset nvidia
ApportVersion: 2.20-0ubuntu3
Architecture: amd64
Date: Tue Mar  8 18:53:31 2016
InstallationDate: Installed on 2016-02-24 (13 days ago)
InstallationMedia: Ubuntu-Server 16.04 LTS "Xenial Xerus" - Alpha amd64 
(20160219)
ProcEnviron:
 TERM=xterm-color
 PATH=(custom, no user)
 LANG=en_US.UTF-8
 SHELL=/bin/bash
ProcKernelCmdline: BOOT_IMAGE=/vmlinuz-4.4.0-11-generic.efi.signed 
root=/dev/mapper/hostname-root ro net.ifnames=0 biosdevname=0
SourcePackage: apparmor
Syslog: Mar  8 14:34:05 ogre dbus[3062]: [system] AppArmor D-Bus mediation is 
enabled
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: apparmor (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug xenial

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1554803

Title:
  apparmor throwing inexplicable errors

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/1554803/+subscriptions


[Bug 1554795] Re: timeout on restart or shutdown with LUKS root

2016-03-08 Thread Mason Loring Bliss
** Also affects: initramfs-tools (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1554795

Title:
  timeout on restart or shutdown with LUKS root

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/initramfs-tools/+bug/1554795/+subscriptions



[Bug 1554795] [NEW] timeout on restart or shutdown with LUKS root

2016-03-08 Thread Mason Loring Bliss
Public bug reported:

Using the server install ISO, it's possible to specify root on LUKS and
variations thereof - for instance, root on LUKS on MD-RAID, root on LVM
on LUKS on MD-RAID, and so forth. The installer does the right thing and
initramfs-tools does everything necessary to support booting this sort
of thing.

However, systemd gives a 90-second timeout on restart or shutdown,
presumably because it cannot dispose of the things beneath root.

It's wholly unclear to me where the 90-second timeout is specified,
should I wish to shorten it and reboot without the futile delay. More to
the point, there appears to be upstream infrastructure for handling this
kind of situation that Ubuntu doesn't implement at present.
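If the delay is in fact systemd's default unit stop timeout
(DefaultTimeoutStopSec, which defaults to 90s and matches what I see),
then a drop-in like the following would shorten it as a workaround. That
is a guess at the mechanism, not a fix for the underlying problem; the
file name and the 15s value are arbitrary, and the staging directory
below stands in for the real target, /etc/systemd/system.conf.d:

```shell
#!/bin/sh
# Workaround sketch (assumption: the 90s delay is DefaultTimeoutStopSec).
# Stages the drop-in in a scratch dir; deploying it for real means writing
# the same file under /etc/systemd/system.conf.d/ and rebooting.
dest="${DEST_DIR:-/tmp/system.conf.d}"
mkdir -p "$dest"
cat > "$dest/10-stop-timeout.conf" <<'EOF'
[Manager]
DefaultTimeoutStopSec=15s
EOF
cat "$dest/10-stop-timeout.conf"
```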

I was pointed at this:

https://www.freedesktop.org/wiki/Software/systemd/InitrdInterface/

However, Ubuntu seems not to have anything in its initramfs-tools to
facilitate "shutdown-initrd" functionality.
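My reading of that page is that at shutdown systemd pivots back into
/run/initramfs and executes a "shutdown" binary there, if one exists, so
the storage layers under / (LUKS, LVM, MD) can be torn down from outside
the root filesystem. A quick check for whether a given machine has that
escape hatch (again, my interpretation of the interface, not something
Ubuntu's tooling currently sets up):

```shell
#!/bin/sh
# Diagnostic sketch based on the freedesktop InitrdInterface page: systemd
# only returns to the initramfs at halt if /run/initramfs/shutdown exists
# and is executable; otherwise the stack under / is torn down blind.
check_shutdown_initrd() {
    if [ -x /run/initramfs/shutdown ]; then
        echo "shutdown-initrd available: systemd can return to it at halt"
    else
        echo "shutdown-initrd missing: layers under / are torn down blind"
    fi
}
check_shutdown_initrd
```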

I haven't tested this, but I suspect the same problem will exist for
folks running root on MD-RAID without LUKS as well. Either way, a
relatively common vanilla install forces 90-second timeouts on users,
which is unfortunate.

ProblemType: Bug
DistroRelease: Ubuntu 16.04
Package: systemd 229-2ubuntu1 [modified: 
usr/share/dbus-1/system-services/org.freedesktop.systemd1.service]
ProcVersionSignature: Ubuntu 4.4.0-11.26-generic 4.4.4
Uname: Linux 4.4.0-11-generic x86_64
NonfreeKernelModules: zfs zunicode zcommon znvpair zavl nvidia_uvm 
nvidia_modeset nvidia
ApportVersion: 2.20-0ubuntu3
Architecture: amd64
Date: Tue Mar  8 18:06:45 2016
InstallationDate: Installed on 2016-02-24 (13 days ago)
InstallationMedia: Ubuntu-Server 16.04 LTS "Xenial Xerus" - Alpha amd64 
(20160219)
Lsusb:
 Bus 002 Device 002: ID 1058:0820 Western Digital Technologies, Inc. My 
Passport Ultra (WDBMWV, WDBZFP)
 Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
 Bus 001 Device 003: ID 046d:c408 Logitech, Inc. Marble Mouse (4-button)
 Bus 001 Device 002: ID 046d:c318 Logitech, Inc. Illuminated Keyboard
 Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
MachineType: Gigabyte Technology Co., Ltd. To be filled by O.E.M.
ProcEnviron:
 TERM=xterm-color
 PATH=(custom, no user)
 XDG_RUNTIME_DIR=
 LANG=en_US.UTF-8
 SHELL=/bin/bash
ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-4.4.0-11-generic.efi.signed 
root=/dev/mapper/hostname-root ro net.ifnames=0 biosdevname=0
SourcePackage: systemd
SystemdDelta:
 [EXTENDED]   /lib/systemd/system/systemd-timesyncd.service → 
/lib/systemd/system/systemd-timesyncd.service.d/disable-with-time-daemon.conf
 [EXTENDED]   /lib/systemd/system/rc-local.service → 
/lib/systemd/system/rc-local.service.d/debian.conf
 
 2 overridden configuration files found.
UpgradeStatus: No upgrade log present (probably fresh install)
dmi.bios.date: 12/04/2015
dmi.bios.vendor: American Megatrends Inc.
dmi.bios.version: F1
dmi.board.asset.tag: To be filled by O.E.M.
dmi.board.name: X150M-PRO ECC-CF
dmi.board.vendor: Gigabyte Technology Co., Ltd.
dmi.board.version: x.x
dmi.chassis.asset.tag: To Be Filled By O.E.M.
dmi.chassis.type: 3
dmi.chassis.vendor: To Be Filled By O.E.M.
dmi.chassis.version: To Be Filled By O.E.M.
dmi.modalias: 
dmi:bvnAmericanMegatrendsInc.:bvrF1:bd12/04/2015:svnGigabyteTechnologyCo.,Ltd.:pnTobefilledbyO.E.M.:pvrTobefilledbyO.E.M.:rvnGigabyteTechnologyCo.,Ltd.:rnX150M-PROECC-CF:rvrx.x:cvnToBeFilledByO.E.M.:ct3:cvrToBeFilledByO.E.M.:
dmi.product.name: To be filled by O.E.M.
dmi.product.version: To be filled by O.E.M.
dmi.sys.vendor: Gigabyte Technology Co., Ltd.

** Affects: initramfs-tools (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: systemd (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug xenial

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1554795

Title:
  timeout on restart or shutdown with LUKS root

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/initramfs-tools/+bug/1554795/+subscriptions
