[Kernel-packages] [Bug 2051999] Re: Grub2 2.06 has upstream bug that results in Non-booting with ZFS after snapshot of bpool.

2024-02-04 Thread Richard Laager
Any chance this test needs a re-run of “grub-install”, not just “update-
grub” (as you would get from a reconfigure)?
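
Something like this is what I have in mind, as a sketch for a typical UEFI install (adjust --efi-directory and --bootloader-id for your layout):
sudo grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=ubuntu
sudo update-grub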

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/2051999

Title:
  Grub2 2.06 has upstream bug that results in Non-booting with ZFS after
  snapshot of bpool.

Status in grub2 package in Ubuntu:
  Confirmed
Status in grub2-unsigned package in Ubuntu:
  Confirmed
Status in zfs-linux package in Ubuntu:
  New

Bug description:
  There is an upstream bug in GRUB where, if you create snapshots of bpool, it
results in a non-booting system. The problem was found to be an upstream bug in
Grub2:
  https://savannah.gnu.org/bugs/index.php?64297

  Multiple Ubuntu 22.04.3 Users Affected:
  https://ubuntuforums.org/showthread.php?t=2494397=zfs+grub+bug
  https://ubuntuforums.org/showthread.php?t=2494957

  Brought up as an issue at OpenZFS:
  https://github.com/openzfs/zfs/issues/13873

  If you look at this comment
(https://github.com/openzfs/zfs/issues/13873#issuecomment-1892911836), it was
found that GNU Savannah released a fix for it in Grub2 2.12, here:
  https://git.savannah.gnu.org/cgit/grub.git/log/grub-core/fs/zfs/zfs.c

  Ubuntu Jammy 22.04.3 ships Grub2 2.06. We need to backport this patch
  to Grub2 2.06 so that users are not caught out by this bug on our
  currently supported LTS release.

  ProblemType: Bug
  DistroRelease: Ubuntu 22.04
  Package: grub-efi-amd64 2.06-2ubuntu14.4
  ProcVersionSignature: Ubuntu 6.2.0-39.40~22.04.1-generic 6.2.16
  Uname: Linux 6.2.0-39-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair nvidia_modeset 
nvidia
  ApportVersion: 2.20.11-0ubuntu82.5
  Architecture: amd64
  CasperMD5CheckResult: unknown
  CurrentDesktop: GNOME
  Date: Thu Feb  1 16:40:28 2024
  InstallationDate: Installed on 2021-09-23 (861 days ago)
  InstallationMedia: Ubuntu 20.04.3 LTS "Focal Fossa" - Release amd64 (20210819)
  SourcePackage: grub2-unsigned
  UpgradeStatus: Upgraded to jammy on 2022-08-17 (533 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/2051999/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 2018960] Re: linux-image-5.4.0-149-generic (regression): 0 at net/core/stream.c:212 sk_stream_kill_queues+0xcf/0xe0

2023-06-19 Thread Richard Laager
My experience so far lines up with the above: 148 good, 150 bad, 152
good.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-signed in Ubuntu.
https://bugs.launchpad.net/bugs/2018960

Title:
  linux-image-5.4.0-149-generic (regression): 0 at net/core/stream.c:212
  sk_stream_kill_queues+0xcf/0xe0

Status in linux-signed package in Ubuntu:
  Confirmed
Status in linux-signed-kvm package in Ubuntu:
  Confirmed

Bug description:
  After upgrading and rebooting this Ubuntu 20.04 LTS server (Ubuntu
  Focal), I noticed that it was suddenly getting a bunch of kernel log
  (dmesg) reports like:

  WARNING: CPU: 4 PID: 0 at net/core/stream.c:212
  sk_stream_kill_queues+0xcf/0xe0

  while investigating I determined that it is currently running the
  focal-proposed kernel (linux-image-5.4.0-149-generic), which it turns
  out was enabled for this server (clearly it seemed like a good idea at
  the time).

  I'm not expecting focal-proposed to be fixed as if it were a release
  package, but since I couldn't find any reports on Launchpad I figured
  I should let y'all know this focal-proposed package could do with some
  additional work before it's actually released :-)

  There have been at least 80 such reports in the last 5 hours since the
  server was rebooted, differing only by the CPU core and the process
  reported, although it seems the last one was a couple of hours ago, so
  I guess it's traffic dependent/timing dependent.

  ewen@naosr620:~$ uptime
   16:27:32 up  5:19,  1 user,  load average: 0.08, 0.14, 0.06
  ewen@naosr620:~$ dmesg -t | grep WARNING | sed 's/CPU: [0-9]*/CPU: N/; s/PID: 
[0-9]*/PID: N/;' | uniq -c
   88 WARNING: CPU: N PID: N at net/core/stream.c:212 
sk_stream_kill_queues+0xcf/0xe0
  ewen@naosr620:~$ 

  Ubuntu Release:

  ewen@naosr620:~$ lsb_release -rd
  Description:  Ubuntu 20.04.6 LTS
  Release:  20.04
  ewen@naosr620:~$ 

  
  Kernel/package version affected:

  ewen@naosr620:~$ uname -a
  Linux naosr620 5.4.0-149-generic #166-Ubuntu SMP Tue Apr 18 16:51:45 UTC 2023 
x86_64 x86_64 x86_64 GNU/Linux
  ewen@naosr620:~$ dpkg -l | grep linux-image | grep 149
  ii  linux-image-5.4.0-149-generic  5.4.0-149.166  
   amd64Signed kernel image generic
  ii  linux-image-generic5.4.0.149.147  
   amd64Generic Linux kernel image
  ewen@naosr620:~$ apt-cache policy linux-image-5.4.0-149-generic 
  linux-image-5.4.0-149-generic:
Installed: 5.4.0-149.166
Candidate: 5.4.0-149.166
Version table:
   *** 5.4.0-149.166 500
  500 https://mirror.fsmg.org.nz/ubuntu focal-proposed/main amd64 
Packages
  100 /var/lib/dpkg/status
  ewen@naosr620:~$ apt-cache policy linux-image-generic
  linux-image-generic:
Installed: 5.4.0.149.147
Candidate: 5.4.0.149.147
Version table:
   *** 5.4.0.149.147 500
  500 https://mirror.fsmg.org.nz/ubuntu focal-proposed/main amd64 
Packages
  100 /var/lib/dpkg/status
   5.4.0.148.146 500
  500 https://mirror.fsmg.org.nz/ubuntu focal-updates/main amd64 
Packages
  500 https://mirror.fsmg.org.nz/ubuntu focal-security/main amd64 
Packages
   5.4.0.26.32 500
  500 https://mirror.fsmg.org.nz/ubuntu focal/main amd64 Packages
  ewen@naosr620:~$ 
  ewen@naosr620:~$ apt-cache show linux-image-5.4.0-149-generic | grep Source:
  Source: linux-signed
  ewen@naosr620:~$ 

  
  Full example dmesg, including stack trace (they all seem to be WARNINGs, and 
other than filling dmesg / system logs the system "appears to be running okay", 
so I'm not going to rush another reboot now -- near end of business day):

  ewen@naosr620:~$ date
  Tue 09 May 2023 16:34:56 NZST
  ewen@naosr620:~$ dmesg -T | tail -100 | grep -B 150 "end trace" | grep -A 999 
"cut here"
  [Tue May  9 14:21:18 2023] [ cut here ]
  [Tue May  9 14:21:18 2023] WARNING: CPU: 10 PID: 0 at net/core/stream.c:212 
sk_stream_kill_queues+0xcf/0xe0
  [Tue May  9 14:21:18 2023] Modules linked in: mpt3sas raid_class 
scsi_transport_sas mptctl mptbase vhost_net vhost tap ip6t_REJECT 
nf_reject_ipv6 ip6table_mangle ip6table_nat ip6table_raw nf_log_ipv6 xt_recent 
ipt_REJECT nf_reject_ipv4 xt_hashlimit xt_addrtype xt_multiport xt_comment 
xt_conntrack xt_mark iptable_mangle xt_MASQUERADE iptable_nat xt_CT xt_tcpudp 
iptable_raw nfnetlink_log xt_NFLOG nf_log_ipv4 nf_log_common xt_LOG nf_nat_tftp 
nf_nat_snmp_basic nf_conntrack_snmp nf_nat_sip nf_nat_pptp nf_nat_irc 
ebtable_filter nf_nat_h323 ebtables nf_nat_ftp nf_nat_amanda ts_kmp 
ip6table_filter nf_conntrack_amanda nf_nat ip6_tables nf_conntrack_sane 
nf_conntrack_tftp nf_conntrack_sip nf_conntrack_pptp nf_conntrack_netlink 
nfnetlink nf_conntrack_netbios_ns nf_conntrack_broadcast nf_conntrack_irc 
nf_conntrack_h323 nf_conntrack_ftp nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 
iptable_filter bpfilter dell_rbu nls_iso8859_1 

[Kernel-packages] [Bug 1913342] Re: zfs.8 man page snapshot listing instructions are confusing

2021-01-26 Thread Richard Laager
listsnaps is an alias of listsnapshots, but you're right that it's on
the pool.

Can you file this upstream:
https://github.com/openzfs/zfs/issues/new/choose

If you want, you could take a stab at submitting a pull request. It's a
pretty simple sounding change. The repo is here:
https://github.com/openzfs/zfs The man pages are in the "man"
subdirectory.

For your "Extra Credit" piece, "zfs list -t filesystem,snapshot" shows
both filesystems and snapshots for everything. "zfs list -t snapshot
dataset" shows snapshots for the specified dataset. But if you combine
those together as "zfs list -t filesystem,snapshot dataset", you do not
get snapshots. However, "zfs list -t filesystem,snapshot -r dataset"
does show the snapshots. Whether that's a bug or not, I can't say. But
that's a more detailed explanation of that problem that will be helpful
if you file a bug report on that.
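
Concretely, the behavior I'm describing looks like this (dataset name is just an example):
zfs list -t filesystem,snapshot              # shows filesystems and snapshots for everything
zfs list -t snapshot tank/data               # shows snapshots of tank/data
zfs list -t filesystem,snapshot tank/data    # does NOT show the snapshots
zfs list -t filesystem,snapshot -r tank/data # does show the snapshots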

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1913342

Title:
  zfs.8 man page snapshot listing instructions are confusing

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  The zfs.8 man page describes in 3 places how to include snapshots in
  "zfs list":

  1) Snapshots are displayed if the listsnaps property is on (the default is 
off).
  2) Snapshots are displayed if the listsnaps property is on.  The default is 
off.  See zpool(8) for more information on pool properties.
  3) For example, specifying -t snapshot displays only snapshots.

  The first of these has twice now (I'm just learning zfs) sent me down
  a rabbit hole, looking for a zfs dataset property "listsnaps".  There
  is no such property (of datasets).

  I'm on version 0.8.3 of zfsutils-linux that has this man page, but
  when I look at https://zfsonlinux.org/manpages/0.8.6/man8/zfs.8.html
  it seems that the above is still true in 0.8.6

  Suggestion, to help us newbies reduce our learning time to list
  snapshots:

  Change both of the first 2 of the above 3 places in zfs.8 to:
  "
  "zfs list" displays snapshots only if either the "listsnapshots" property of 
the underlying zpool is on, or if the "zfs list -t snapshot" option is 
specified.

  Extra Credit:

  Just now it seems to me that the following command does _NOT_ list
  snapshots, on a dataset that has some snapshots ... so the zfs.8
  documentation that the "zfs list -t type" can be a "comma-separated
  list of types to display" seems incorrect:

  zfs list -t filesystem,snapshot myzfsdataset

  Whether this "Extra Credit" discrepancy is a bug in the documentation,
  or in the zfs command code, or newbie operator error, I'll leave as an
  exercise to the reader .

  ProblemType: Bug
  DistroRelease: Ubuntu 20.04
  Package: zfsutils-linux 0.8.3-1ubuntu12.5
  ProcVersionSignature: Ubuntu 5.4.0-64.72-generic 5.4.78
  Uname: Linux 5.4.0-64-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair nvidia_modeset 
nvidia
  ApportVersion: 2.20.11-0ubuntu27.14
  Architecture: amd64
  CasperMD5CheckResult: skip
  CurrentDesktop: XFCE
  Date: Tue Jan 26 13:41:16 2021
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.sudoers.d.zfs: [inaccessible: [Errno 13] Permission 
denied: '/etc/sudoers.d/zfs']

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1913342/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1854480] Re: zfs_arc_max not working anymore in zfs 0.8.1

2020-12-16 Thread Richard Laager
The limit in the code does seem to be 64 MiB. I'm not sure why this
isn't working. I am not even close to an expert on this part of OpenZFS,
so all I can suggest is to file a bug report upstream:
https://github.com/openzfs/zfs/issues/new
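
For debugging, it may help to confirm what the module actually has in effect (standard OpenZFS-on-Linux paths):
cat /sys/module/zfs/parameters/zfs_arc_max
awk '$1 == "c_max" {print $3}' /proc/spl/kstat/zfs/arcstats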

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1854480

Title:
  zfs_arc_max not working anymore in zfs 0.8.1

Status in zfs-linux package in Ubuntu:
  Fix Released

Bug description:
  In the past I could limit the size of L1ARC by specifying "options zfs
  zfs_arc_max=3221225472" in /etc/modprobe.d/zfs.conf. I tried even to
  fill /sys/module/zfs/parameters/zfs_arc_max directly, but none of
  those methods limits the size of L1ARC. It worked nicely in zfs 0.7.x.

  Nowadays I use a nvme drive with 3400 MB/s read throughput and I see
  not much difference between e.g booting the system from nvme and
  rebooting the system from L1ARC. Having a 96-99% hit rate for the
  L1ARC, I like to free up some memory, thus avoiding some delay of
  freeing memory from L1ARC while loading a VM.

  -
  UPDATE:

  The system only limits the L1ARC, if we directly after the login write
  the zfs_arc_max value to the file
  /sys/module/zfs/parameters/zfs_arc_max.
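
  A rough sketch of the two methods tried (the value is just an example):
  echo "options zfs zfs_arc_max=3221225472" | sudo tee /etc/modprobe.d/zfs.conf
  sudo update-initramfs -u    # presumably needed so the option also applies from the initramfs
  echo 3221225472 | sudo tee /sys/module/zfs/parameters/zfs_arc_max    # at runtime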

  

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: zfsutils-linux 0.8.1-1ubuntu14.1
  ProcVersionSignature: Ubuntu 5.3.0-23.25-generic 5.3.7
  Uname: Linux 5.3.0-23-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu8.2
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Fri Nov 29 06:19:15 2019
  InstallationDate: Installed on 2019-11-25 (3 days ago)
  InstallationMedia: Ubuntu 19.10 "Eoan Ermine" - Release amd64 (20191017)
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.sudoers.d.zfs: [inaccessible: [Errno 13] Permission 
denied: '/etc/sudoers.d/zfs']

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1854480/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1906542] Re: echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds says premission error, unable to reapair lost zfs pool data

2020-12-03 Thread Richard Laager
device_removal only works if you can import the pool normally. That is
what you should have used after you accidentally added the second disk
as another top-level vdev. Whatever you have done in the interim,
though, has resulted in the second device showing as FAULTED. Unless you
can fix that, device_removal is not an option. I had hoped that you just
had the second drive unplugged or something. But since the import is
showing "corrupted data" for the second drive, that's probably not what
happened.

This works for me on Ubuntu 20.04:
echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds

That setting does not exist on Ubuntu 18.04 (which you are running), so
I get the same "Permission denied" error (because bash is trying to
create that file, which you cannot do).

I now see this is an rpool. Is your plan to reinstall? With 18.04 or
20.04?

If 18.04, then:
1. Download the 20.04.1 live image. Write it to a USB disk and boot into that.
2. In the live environment, install the ZFS tools: sudo apt install 
zfsutils-linux
3. echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds
4. mkdir /old
5. Import the old pool renaming it to rpool-old and mount filesystems:
   zpool import -o readonly=on -N -R /old rpool rpool-old
   zfs mount rpool-old/ROOT/ubuntu
   zfs mount -a
6. Confirm you can access your data. Take another backup, if desired. If you 
don't have space to back it up besides the new/second disk, then read on...
7. Follow the 18.04 Root-on-ZFS HOWTO using (only) the second disk. Be very 
careful not to partition or zpool create the disk with your data!!! For 
example, partition the second disk for the mirror scenario. But obviously you 
can't do zpool create with "mirror" because you have only one disk.
8. Once the new system is installed (i.e. after step 6.2), but before
rebooting, copy data from /old to /mnt as needed.
9. Shut down. Disconnect the old disk. Boot up again.
10. Continue the install as normal.
11. When you are certain that everything is good, that the new disk is working
properly (maybe do a scrub), and that you have all your data, then you can connect
the old disk and do the zpool attach (ATTACH, not add) to attach the old disk
to the new pool as a mirror.

If 20.04, then I'd do this instead:
1. Unplug the disk with your data.
2. Follow the 20.04 Root-on-ZFS HOWTO using only the second disk. Follow the 
steps as if you were mirroring (since that is the ultimate goal) where 
possible. For example, partition the second disk for the mirror scenario. But 
obviously you can't do zpool create with "mirror" because you have only one 
disk.
3. Once the new, 20.04 system is working on the second disk and booting 
normally, connect the other, old drive. (This assumes you can connect it while 
the system is running.)
4. echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds
5. Import the old pool using its GUID renaming it to rpool-old and mount 
filesystems:
   zpool import -o readonly=on -N -R /mnt 5077426391014001687 rpool-old
   zfs mount rpool-old/ROOT/ubuntu
   zfs mount -a
6. Copy over data.
7. zpool export rpool-old
8. When you are certain that everything is good, that the new disk is working
properly (maybe do a scrub), and that you have all your data, then you can do the
zpool attach (ATTACH, not add) to attach the old disk to the new pool as a
mirror.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1906542

Title:
  echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds  says
  premission error, unable to reapair lost zfs pool data

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  root@jonipekka-desktop:~# echo 1 >> 
/sys/module/zfs/parameters/zfs_max_missing_tvds
  -bash: /sys/module/zfs/parameters/zfs_max_missing_tvds: Permission denied
  root@jonipekka-desktop:~#

  
  https://www.delphix.com/blog/openzfs-pool-import-recovery

  Import with missing top level vdevs

  The changes to the pool configuration logic have enabled another great
  improvement: the ability to import a pool with missing or faulted top-
  level vdevs. Since some data will almost certainly be missing, a pool
  with missing top-level vdevs can only be imported read-only, and the
  failmode is set to “continue” (failmode=continue means that when
  encountering errors the pool will continue running, as opposed to
  being suspended or panicking).

  To enable this feature, we’ve added a new global variable:
  zfs_max_missing_tvds, which defines how many missing top level vdevs
  we can tolerate before marking a pool as unopenable. It is set to 0 by
  default, and should be changed to other values only temporarily, while
  performing an extreme pool recovery.

  Here as an example we create a pool with two vdevs and write some data
  to a first dataset; we then add a third vdev and write some data to a
  second dataset. Finally we physically remove the new vdev (simulating,
  for instance, a device failure) and try to import the pool using the
  new feature.

[Kernel-packages] [Bug 1906542] Re: echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds says premission error, unable to reapair lost zfs pool data

2020-12-03 Thread Richard Laager
Why is the second disk missing? If you accidentally added it and ended
up with a striped pool, as long as both disks are connected, you can
import the pool normally. Then use the new device_removal feature to
remove the new disk from the pool.

If you've done something crazy like pulled the disk and wiped it, then
yeah, you're going to need to figure out how to import the pool read-
only. I don't have any advice on that piece.
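
As a rough sketch of that path (pool and device names are examples; this needs a ZFS version with the device_removal feature):
zpool import rpool
zpool remove rpool sdb     # remove the accidentally-added top-level vdev
zpool status rpool         # shows the evacuation progress, then the vdev as removed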

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1906542

Title:
  echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds  says
  premission error, unable to reapair lost zfs pool data

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  root@jonipekka-desktop:~# echo 1 >> 
/sys/module/zfs/parameters/zfs_max_missing_tvds
  -bash: /sys/module/zfs/parameters/zfs_max_missing_tvds: Permission denied
  root@jonipekka-desktop:~#

  
  https://www.delphix.com/blog/openzfs-pool-import-recovery

  Import with missing top level vdevs

  The changes to the pool configuration logic have enabled another great
  improvement: the ability to import a pool with missing or faulted top-
  level vdevs. Since some data will almost certainly be missing, a pool
  with missing top-level vdevs can only be imported read-only, and the
  failmode is set to “continue” (failmode=continue means that when
  encountering errors the pool will continue running, as opposed to
  being suspended or panicking).

  To enable this feature, we’ve added a new global variable:
  zfs_max_missing_tvds, which defines how many missing top level vdevs
  we can tolerate before marking a pool as unopenable. It is set to 0 by
  default, and should be changed to other values only temporarily, while
  performing an extreme pool recovery.

  Here as an example we create a pool with two vdevs and write some data
  to a first dataset; we then add a third vdev and write some data to a
  second dataset. Finally we physically remove the new vdev (simulating,
  for instance, a device failure) and try to import the pool using the
  new feature.

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: zfsutils-linux 0.7.5-1ubuntu16.10
  ProcVersionSignature: Ubuntu 4.15.0-126.129-generic 4.15.18
  Uname: Linux 4.15.0-126-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.9-0ubuntu7.20
  Architecture: amd64
  Date: Wed Dec  2 18:39:58 2020
  InstallationDate: Installed on 2020-12-02 (0 days ago)
  InstallationMedia: Ubuntu 18.04.1 LTS "Bionic Beaver" - Release amd64 
(20180725)
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1906542/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1899249] Re: OpenZFS writing stalls, under load

2020-10-12 Thread Richard Laager
You could shrink the DDT by making a copy of the files in place (with
dedup off) and deleting the old file. That only requires enough extra
space for a single file at a time. This assumes no snapshots.

If you need to preserve snapshots, another option would be to send|recv
a dataset at a time. If you have enough free space for a copy of the
largest dataset, this would work.
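
A rough sketch of the send|recv variant (dataset names are examples; verify the copy before destroying anything):
zfs snapshot -r tank/data@migrate
zfs send -R tank/data@migrate | zfs receive -o dedup=off tank/data.new
zfs destroy -r tank/data       # only after verifying tank/data.new
zfs rename tank/data.new tank/data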

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1899249

Title:
  OpenZFS writing stalls, under load

Status in zfs-linux package in Ubuntu:
  Incomplete

Bug description:
  Using a QNAP 4-drive USB enclosure, with a set of SSDs, on a Raspberry
  Pi 8GB. ZFS deduplication, and LZJB compression is enabled.

  This issue seems to occur, intermittently, after some time (happens
  with both SMB access, via Samba, and when interacting with the system,
  via SSH), and never previously occurred, until a few months ago, and I
  sometimes have to force a reboot of the system (at the cost of some
  data loss), in order to use it again.

  The "dmesg" log reports:

  [25375.911590] z_wr_iss_h  D0  2161  2 0x0028
  [25375.911606] Call trace:
  [25375.911627]  __switch_to+0x104/0x170
  [25375.911639]  __schedule+0x30c/0x7c0
  [25375.911647]  schedule+0x3c/0xb8
  [25375.911655]  io_schedule+0x20/0x58
  [25375.911668]  rq_qos_wait+0x100/0x178
  [25375.911677]  wbt_wait+0xb4/0xf0
  [25375.911687]  __rq_qos_throttle+0x38/0x50
  [25375.911700]  blk_mq_make_request+0x128/0x610
  [25375.911712]  generic_make_request+0xb4/0x2d8
  [25375.911722]  submit_bio+0x48/0x218
  [25375.911960]  vdev_disk_io_start+0x670/0x9f8 [zfs]
  [25375.912181]  zio_vdev_io_start+0xdc/0x2b8 [zfs]
  [25375.912400]  zio_nowait+0xd4/0x170 [zfs]
  [25375.912617]  vdev_mirror_io_start+0xa8/0x1b0 [zfs]
  [25375.912839]  zio_vdev_io_start+0x248/0x2b8 [zfs]
  [25375.913057]  zio_execute+0xac/0x110 [zfs]
  [25375.913096]  taskq_thread+0x2f8/0x570 [spl]
  [25375.913108]  kthread+0xfc/0x128
  [25375.913119]  ret_from_fork+0x10/0x1c
  [25375.913149] INFO: task txg_sync:2333 blocked for more than 120 seconds.
  [25375.919916]   Tainted: P C OE 5.4.0-1018-raspi #20-Ubuntu
  [25375.926848] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [25375.934835] txg_syncD0  2333  2 0x0028
  [25375.934850] Call trace:
  [25375.934869]  __switch_to+0x104/0x170
  [25375.934879]  __schedule+0x30c/0x7c0
  [25375.934887]  schedule+0x3c/0xb8
  [25375.934899]  schedule_timeout+0x9c/0x190
  [25375.934908]  io_schedule_timeout+0x28/0x48
  [25375.934946]  __cv_timedwait_common+0x1a8/0x1f8 [spl]
  [25375.934982]  __cv_timedwait_io+0x3c/0x50 [spl]
  [25375.935205]  zio_wait+0x130/0x2a0 [zfs]
  [25375.935423]  dsl_pool_sync+0x3fc/0x498 [zfs]
  [25375.935650]  spa_sync+0x538/0xe68 [zfs]
  [25375.935867]  txg_sync_thread+0x2c0/0x468 [zfs]
  [25375.935911]  thread_generic_wrapper+0x74/0xa0 [spl]
  [25375.935924]  kthread+0xfc/0x128
  [25375.935935]  ret_from_fork+0x10/0x1c
  [25375.936017] INFO: task zbackup:75339 blocked for more than 120 seconds.
  [25375.942780]   Tainted: P C OE 5.4.0-1018-raspi #20-Ubuntu
  [25375.949710] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [25375.957702] zbackup D0 75339   5499 0x
  [25375.957716] Call trace:
  [25375.957732]  __switch_to+0x104/0x170
  [25375.957742]  __schedule+0x30c/0x7c0
  [25375.957750]  schedule+0x3c/0xb8
  [25375.957789]  cv_wait_common+0x188/0x1b0 [spl]
  [25375.957823]  __cv_wait+0x30/0x40 [spl]
  [25375.958045]  zil_commit_impl+0x234/0xd30 [zfs]
  [25375.958263]  zil_commit+0x48/0x70 [zfs]
  [25375.958481]  zfs_create+0x544/0x7d0 [zfs]
  [25375.958698]  zpl_create+0xb8/0x178 [zfs]
  [25375.958711]  lookup_open+0x4ec/0x6a8
  [25375.958721]  do_last+0x260/0x8c0
  [25375.958730]  path_openat+0x84/0x258
  [25375.958739]  do_filp_open+0x84/0x108
  [25375.958752]  do_sys_open+0x180/0x2b0
  [25375.958763]  __arm64_sys_openat+0x2c/0x38
  [25375.958773]  el0_svc_common.constprop.0+0x80/0x218
  [25375.958781]  el0_svc_handler+0x34/0xa0
  [25375.958791]  el0_svc+0x10/0x2cc
  [25375.958801] INFO: task zbackup:95187 blocked for more than 120 seconds.
  [25375.965564]   Tainted: P C OE 5.4.0-1018-raspi #20-Ubuntu
  [25375.972492] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [25375.980479] zbackup D0 95187   5499 0x
  [25375.980493] Call trace:
  [25375.980514]  __switch_to+0x104/0x170
  [25375.980525]  __schedule+0x30c/0x7c0
  [25375.980536]  schedule+0x3c/0xb8
  [25375.980578]  cv_wait_common+0x188/0x1b0 [spl]
  [25375.980612]  __cv_wait+0x30/0x40 [spl]
  [25375.980834]  zil_commit_impl+0x234/0xd30 [zfs]
  [25375.981052]  zil_commit+0x48/0x70 [zfs]
  [25375.981280]  zfs_write+0xa3c/0xb90 [zfs]
  [25375.981498]  zpl_write_common_iovec+0xac/0x120 [zfs]
  

[Kernel-packages] [Bug 1899249] Re: OpenZFS writing stalls, under load

2020-10-12 Thread Richard Laager
Did you destroy and recreate the pool after disabling dedup? Otherwise
you still have the same dedup table and haven’t really accomplished
much.
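
You can check whether a dedup table is still present with something like (pool name is an example):
zpool status -D tank     # look for the "dedup: DDT entries ..." summary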

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1899249

Title:
  OpenZFS writing stalls, under load

Status in zfs-linux package in Ubuntu:
  Incomplete

Bug description:
  Using a QNAP 4-drive USB enclosure, with a set of SSDs, on a Raspberry
  Pi 8GB. ZFS deduplication, and LZJB compression is enabled.

  This issue seems to occur, intermittently, after some time (happens
  with both SMB access, via Samba, and when interacting with the system,
  via SSH), and never previously occurred, until a few months ago, and I
  sometimes have to force a reboot of the system (at the cost of some
  data loss), in order to use it again.

  The "dmesg" log reports:

  [25375.911590] z_wr_iss_h  D0  2161  2 0x0028
  [25375.911606] Call trace:
  [25375.911627]  __switch_to+0x104/0x170
  [25375.911639]  __schedule+0x30c/0x7c0
  [25375.911647]  schedule+0x3c/0xb8
  [25375.911655]  io_schedule+0x20/0x58
  [25375.911668]  rq_qos_wait+0x100/0x178
  [25375.911677]  wbt_wait+0xb4/0xf0
  [25375.911687]  __rq_qos_throttle+0x38/0x50
  [25375.911700]  blk_mq_make_request+0x128/0x610
  [25375.911712]  generic_make_request+0xb4/0x2d8
  [25375.911722]  submit_bio+0x48/0x218
  [25375.911960]  vdev_disk_io_start+0x670/0x9f8 [zfs]
  [25375.912181]  zio_vdev_io_start+0xdc/0x2b8 [zfs]
  [25375.912400]  zio_nowait+0xd4/0x170 [zfs]
  [25375.912617]  vdev_mirror_io_start+0xa8/0x1b0 [zfs]
  [25375.912839]  zio_vdev_io_start+0x248/0x2b8 [zfs]
  [25375.913057]  zio_execute+0xac/0x110 [zfs]
  [25375.913096]  taskq_thread+0x2f8/0x570 [spl]
  [25375.913108]  kthread+0xfc/0x128
  [25375.913119]  ret_from_fork+0x10/0x1c
  [25375.913149] INFO: task txg_sync:2333 blocked for more than 120 seconds.
  [25375.919916]   Tainted: P C OE 5.4.0-1018-raspi #20-Ubuntu
  [25375.926848] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [25375.934835] txg_syncD0  2333  2 0x0028
  [25375.934850] Call trace:
  [25375.934869]  __switch_to+0x104/0x170
  [25375.934879]  __schedule+0x30c/0x7c0
  [25375.934887]  schedule+0x3c/0xb8
  [25375.934899]  schedule_timeout+0x9c/0x190
  [25375.934908]  io_schedule_timeout+0x28/0x48
  [25375.934946]  __cv_timedwait_common+0x1a8/0x1f8 [spl]
  [25375.934982]  __cv_timedwait_io+0x3c/0x50 [spl]
  [25375.935205]  zio_wait+0x130/0x2a0 [zfs]
  [25375.935423]  dsl_pool_sync+0x3fc/0x498 [zfs]
  [25375.935650]  spa_sync+0x538/0xe68 [zfs]
  [25375.935867]  txg_sync_thread+0x2c0/0x468 [zfs]
  [25375.935911]  thread_generic_wrapper+0x74/0xa0 [spl]
  [25375.935924]  kthread+0xfc/0x128
  [25375.935935]  ret_from_fork+0x10/0x1c
  [25375.936017] INFO: task zbackup:75339 blocked for more than 120 seconds.
  [25375.942780]   Tainted: P C OE 5.4.0-1018-raspi #20-Ubuntu
  [25375.949710] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [25375.957702] zbackup D0 75339   5499 0x
  [25375.957716] Call trace:
  [25375.957732]  __switch_to+0x104/0x170
  [25375.957742]  __schedule+0x30c/0x7c0
  [25375.957750]  schedule+0x3c/0xb8
  [25375.957789]  cv_wait_common+0x188/0x1b0 [spl]
  [25375.957823]  __cv_wait+0x30/0x40 [spl]
  [25375.958045]  zil_commit_impl+0x234/0xd30 [zfs]
  [25375.958263]  zil_commit+0x48/0x70 [zfs]
  [25375.958481]  zfs_create+0x544/0x7d0 [zfs]
  [25375.958698]  zpl_create+0xb8/0x178 [zfs]
  [25375.958711]  lookup_open+0x4ec/0x6a8
  [25375.958721]  do_last+0x260/0x8c0
  [25375.958730]  path_openat+0x84/0x258
  [25375.958739]  do_filp_open+0x84/0x108
  [25375.958752]  do_sys_open+0x180/0x2b0
  [25375.958763]  __arm64_sys_openat+0x2c/0x38
  [25375.958773]  el0_svc_common.constprop.0+0x80/0x218
  [25375.958781]  el0_svc_handler+0x34/0xa0
  [25375.958791]  el0_svc+0x10/0x2cc
  [25375.958801] INFO: task zbackup:95187 blocked for more than 120 seconds.
  [25375.965564]   Tainted: P C OE 5.4.0-1018-raspi #20-Ubuntu
  [25375.972492] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
this message.
  [25375.980479] zbackup D0 95187   5499 0x
  [25375.980493] Call trace:
  [25375.980514]  __switch_to+0x104/0x170
  [25375.980525]  __schedule+0x30c/0x7c0
  [25375.980536]  schedule+0x3c/0xb8
  [25375.980578]  cv_wait_common+0x188/0x1b0 [spl]
  [25375.980612]  __cv_wait+0x30/0x40 [spl]
  [25375.980834]  zil_commit_impl+0x234/0xd30 [zfs]
  [25375.981052]  zil_commit+0x48/0x70 [zfs]
  [25375.981280]  zfs_write+0xa3c/0xb90 [zfs]
  [25375.981498]  zpl_write_common_iovec+0xac/0x120 [zfs]
  [25375.981726]  zpl_iter_write+0xe4/0x150 [zfs]
  [25375.981766]  new_sync_write+0x100/0x1a8
  [25375.981776]  __vfs_write+0x74/0x90
  [25375.981784]  vfs_write+0xe4/0x1c8
  [25375.981794]  ksys_write+0x78/0x100
  [25375.981803]  

[Kernel-packages] [Bug 1893900] Re: ModuleNotFoundError: No module named 'distutils.sysconfig'

2020-09-02 Thread Richard Laager
That sounds like a missing dependency on python3-distutils.

But unless you're running a custom kernel, Ubuntu is shipping the ZFS module 
now:
https://bugs.launchpad.net/ubuntu/+source/linux-raspi/+bug/1884110
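
If it really is just the missing package, something like this should clear those configure warnings (an untested guess):
sudo apt install python3-distutils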

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1893900

Title:
  ModuleNotFoundError: No module named 'distutils.sysconfig'

Status in zfs-linux package in Ubuntu:
  New
Status in zfs-linux source package in Groovy:
  New

Bug description:
  Building the ZFS DKMS module on a Raspberry Pi yields:

  checking for the distutils Python package... yes
  checking for Python include path... Traceback (most recent call last):
File "", line 1, in 
  ModuleNotFoundError: No module named 'distutils.sysconfig'
  Traceback (most recent call last):
File "", line 1, in 
  ModuleNotFoundError: No module named 'distutils.sysconfig'

  checking for Python library path... Traceback (most recent call last):
File "", line 4, in 
  ModuleNotFoundError: No module named 'distutils.sysconfig'
  Traceback (most recent call last):
File "", line 3, in 
  ModuleNotFoundError: No module named 'distutils.sysconfig'
  Traceback (most recent call last):
File "", line 2, in 
  ModuleNotFoundError: No module named 'distutils.sysconfig'
  Traceback (most recent call last):
File "", line 1, in 
  ModuleNotFoundError: No module named 'distutils.sysconfig'
  -L -lpython3.8
  checking for Python site-packages path... Traceback (most recent call last):
File "", line 1, in 
  ModuleNotFoundError: No module named 'distutils.sysconfig'

  checking python extra libraries... Traceback (most recent call last):
File "", line 1, in 
  ModuleNotFoundError: No module named 'distutils.sysconfig'

  checking python extra linking flags... Traceback (most recent call last):
File "", line 1, in 
  ModuleNotFoundError: No module named 'distutils.sysconfig'

  checking consistency of all components of python development environment... no
  checking whether to enable pyzfs: ... no

  The build completes so the error doesn't seem to be fatal.
  Full build log: 
https://launchpadlibrarian.net/495895366/buildlog_ubuntu-groovy-arm64.linux-raspi_5.8.0-1001.4_BUILDING.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1893900/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1888405] Re: zfsutils-linux: zfs-volume-wait.service fails with locked encrypted zvols

2020-08-01 Thread Richard Laager
I've posted this upstream (as a draft PR, pending testing) at:
https://github.com/openzfs/zfs/pull/10662

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1888405

Title:
  zfsutils-linux: zfs-volume-wait.service fails with locked encrypted
  zvols

Status in zfs-linux package in Ubuntu:
  Fix Released
Status in zfs-linux source package in Focal:
  Fix Committed
Status in zfs-linux source package in Groovy:
  Fix Released

Bug description:
  == SRU Justification Focal ==

  When an encrypted zvol is locked the zfs-volume-wait service does not
  start.  The /sbin/zvol_wait should not wait for links when the volume
  has property keystatus=unavailable.

  == Fix ==

  The attached patch in comment #1.

  == Test ==

  lock an encrypted zvol. without the fix the volume wait will block the
  boot. with the fix it is not blocked.

  == Regression Potential ==

  Limited to zvol wait - this change affects just the encrypted vols
  checking.

  -

  I was doing some experimentation with encrypted zvols and observed
  that the zfs-volume-wait.service systemd unit does not start if the
  encrypted zvol is locked.  The /sbin/zvol_wait should not wait for
  links when the volume has property keystatus=unavailable.  The
  attached patch seems to resolve the issue for me.

  # lsb_release -a
  No LSB modules are available.
  Distributor ID: Ubuntu
  Description:Ubuntu 20.04 LTS
  Release:20.04
  Codename:   focal

  # apt-cache policy zfsutils-linux
  zfsutils-linux:
    Installed: 0.8.3-1ubuntu12.1
    Candidate: 0.8.3-1ubuntu12.2
    Version table:
   0.8.3-1ubuntu12.2 500
  500 http://gb.archive.ubuntu.com/ubuntu focal-updates/main amd64 
Packages
   *** 0.8.3-1ubuntu12.1 100
  100 /var/lib/dpkg/status
   0.8.3-1ubuntu12 500
  500 http://gb.archive.ubuntu.com/ubuntu focal/main amd64 Packages

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1888405/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1888405] Re: zfsutils-linux: zfs-volume-wait.service fails with locked encrypted zvols

2020-08-01 Thread Richard Laager
Here is a completely untested patch that takes a different approach to
the same issue. If this works, it seems more suitable for upstreaming,
as the existing list_zvols seems to be the place where properties are
checked. Can either of you test this? If this looks good, I'll submit it
upstream.

** Patch added: "0001-zvol_wait-Ignore-locked-zvols.patch"
   
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1888405/+attachment/5397735/+files/0001-zvol_wait-Ignore-locked-zvols.patch
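
The idea, roughly, is to filter on the keystatus property; outside the script that looks like this (an illustration, not the actual patch):
zfs list -H -t volume -o name,keystatus | awk '$2 != "unavailable" {print $1}'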

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1888405

Title:
  zfsutils-linux: zfs-volume-wait.service fails with locked encrypted
  zvols

Status in zfs-linux package in Ubuntu:
  Fix Released
Status in zfs-linux source package in Focal:
  Fix Committed
Status in zfs-linux source package in Groovy:
  Fix Released

Bug description:
  == SRU Justification Focal ==

  When an encrypted zvol is locked the zfs-volume-wait service does not
  start.  The /sbin/zvol_wait should not wait for links when the volume
  has property keystatus=unavailable.

  == Fix ==

  The attached patch in comment #1.

  == Test ==

  lock an encrypted zvol. without the fix the volume wait will block the
  boot. with the fix it is not blocked.

  == Regression Potential ==

  Limited to zvol wait - this change affects just the encrypted vols
  checking.

  -

  I was doing some experimentation with encrypted zvols and observed
  that the zfs-volume-wait.service systemd unit does not start if the
  encrypted zvol is locked.  The /sbin/zvol_wait should not wait for
  links when the volume has property keystatus=unavailable.  The
  attached patch seems to resolve the issue for me.

  # lsb_release -a
  No LSB modules are available.
  Distributor ID: Ubuntu
  Description:Ubuntu 20.04 LTS
  Release:20.04
  Codename:   focal

  # apt-cache policy zfsutils-linux
  zfsutils-linux:
    Installed: 0.8.3-1ubuntu12.1
    Candidate: 0.8.3-1ubuntu12.2
    Version table:
   0.8.3-1ubuntu12.2 500
  500 http://gb.archive.ubuntu.com/ubuntu focal-updates/main amd64 
Packages
   *** 0.8.3-1ubuntu12.1 100
  100 /var/lib/dpkg/status
   0.8.3-1ubuntu12 500
  500 http://gb.archive.ubuntu.com/ubuntu focal/main amd64 Packages

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1888405/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1718761] Re: It's not possible to use OverlayFS (mount -t overlay) to stack directories on a ZFS volume

2020-07-05 Thread Richard Laager
See also this upstream PR: https://github.com/openzfs/zfs/pull/9414
and the one before it: https://github.com/openzfs/zfs/pull/8667

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1718761

Title:
  It's not possible to use OverlayFS (mount -t overlay) to stack
  directories on a ZFS volume

Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  -- Configuration
  # echo -e $(grep VERSION= /etc/os-release)\\nSIGNATURE=\"$(cat 
/proc/version_signature)\"
  VERSION="16.04.3 LTS (Xenial Xerus)"
  SIGNATURE="Ubuntu 4.10.0-35.39~16.04.1-generic 4.10.17"
  # dpkg --list | grep zfs
  ii  libzfs2linux0.6.5.6-0ubuntu18
  ii  zfs-doc 0.6.5.6-0ubuntu18
  ii  zfs-initramfs   0.6.5.6-0ubuntu18
  ii  zfs-zed 0.6.5.6-0ubuntu18
  ii  zfsutils-linux  0.6.5.6-0ubuntu18

  -- Fault: Creating an overlay of multiple directories on a ZFS volume 
does not work
  # df /var/tmp
  Filesystem Type 1K-blocks  Used Available Use% Mounted on
  tank07/var/tmp zfs  129916288   128 129916160   1% /var/tmp
  # mkdir /var/tmp/{lower,middle,upper,workdir,overlay}
  # mount -t overlay overlay 
-olowerdir=/var/tmp/middle:/var/tmp/lower,upperdir=/var/tmp/upper,workdir=/var/tmp/workdir
 /var/tmp/overlay
  mount: wrong fs type, bad option, bad superblock on overlay,
 missing codepage or helper program, or other error

 In some cases useful info is found in syslog - try
 dmesg | tail or so.
  # dmesg|tail -1
  [276328.438284] overlayfs: filesystem on '/var/tmp/upper' not supported as 
upperdir

  -- Control test 1: Creating an overlay of multiple directories on 
another filesystem works
  # df /tmp
  Filesystem Type  1K-blocks   Used Available Use% Mounted on
  tmpfs  tmpfs   1048576 133492915084  13% /tmp
  # mkdir /tmp/{lower,middle,upper,workdir,overlay}
  # mount -t overlay overlay 
-olowerdir=/tmp/middle:/tmp/lower,upperdir=/tmp/upper,workdir=/tmp/workdir 
/tmp/overlay
  # mount | grep overlay
  overlay on /tmp/overlay type overlay 
(rw,relatime,lowerdir=/tmp/middle:/tmp/lower,upperdir=/tmp/upper,workdir=/tmp/workdir)

  -- Control test 2: Creating an overlay using AuFS works on ZFS volume 
and elsewhere
  # mount -t aufs -obr=/tmp/lower:/tmp/middle:/tmp/upper:/tmp/workdir none 
/tmp/overlay
  # mount -t aufs 
-obr=/var/tmp/lower:/var/tmp/middle:/var/tmp/upper:/var/tmp/workdir none 
/var/tmp/overlay
  # mount | grep aufs
  none on /var/tmp/overlay type aufs (rw,relatime,si=9ead1ecae778b250)
  none on /tmp/overlay type aufs (rw,relatime,si=9ead1ec9257d1250)

  -- Remark
  While AuFS can be used as a workaround in the above scenario, AuFS in turn
will not work with [fuse.]glusterfs mounts (this has been documented
elsewhere). Given that OverlayFS is part of the (upstream) kernel and Ubuntu
now officially supports ZFS, the above should be fixed.
  --- 
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 Sep 18 16:12 seq
   crw-rw 1 root audio 116, 33 Sep 18 16:12 timer
  AplayDevices: Error: [Errno 2] No such file or directory
  ApportVersion: 2.20.1-0ubuntu2.10
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', 
'/dev/snd/timer'] failed with exit code 1:
  DistroRelease: Ubuntu 16.04
  IwConfig: Error: [Errno 2] No such file or directory
  Lspci: Error: [Errno 2] No such file or directory
  Lsusb: Error: [Errno 2] No such file or directory
  MachineType: Red Hat KVM
  NonfreeKernelModules: zfs zunicode zavl zcommon znvpair
  Package: linux (not installed)
  PciMultimedia:
   
  ProcFB: 0 cirrusdrmfb
  ProcKernelCmdLine: 
BOOT_IMAGE=/ROOT/ubuntu1604@/boot/vmlinuz-4.10.0-35-generic 
root=ZFS=tank07/ROOT/ubuntu1604 ro
  ProcVersionSignature: Ubuntu 4.10.0-35.39~16.04.1-generic 4.10.17
  RelatedPackageVersions:
   linux-restricted-modules-4.10.0-35-generic N/A
   linux-backports-modules-4.10.0-35-generic  N/A
   linux-firmware 1.157.12
  RfKill: Error: [Errno 2] No such file or directory
  Tags:  xenial
  Uname: Linux 4.10.0-35-generic x86_64
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups:
   
  _MarkForUpload: True
  dmi.bios.date: 04/01/2014
  dmi.bios.vendor: SeaBIOS
  dmi.bios.version: 1.9.1-5.el7_3.3
  dmi.chassis.type: 1
  dmi.chassis.vendor: Red Hat
  dmi.chassis.version: RHEL 7.3.0 PC (i440FX + PIIX, 1996)
  dmi.modalias: 
dmi:bvnSeaBIOS:bvr1.9.1-5.el7_3.3:bd04/01/2014:svnRedHat:pnKVM:pvrRHEL7.3.0PC(i440FX+PIIX,1996):cvnRedHat:ct1:cvrRHEL7.3.0PC(i440FX+PIIX,1996):
  dmi.product.name: KVM
  dmi.product.version: RHEL 7.3.0 PC (i440FX + PIIX, 1996)
  dmi.sys.vendor: Red Hat
  --- 
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 Sep 18 16:12 seq
   crw-rw 1 root audio 116, 33 Sep 18 16:12 timer
  

[Kernel-packages] [Bug 1875577] Re: Encrypted swap won't load on 20.04 with zfs root

2020-05-30 Thread Richard Laager
I have submitted this upstream:
https://github.com/openzfs/zfs/pull/10388

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1875577

Title:
  Encrypted swap won't load on 20.04 with zfs root

Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  root@eu1:/var/log# lsb_release -a
  No LSB modules are available.
  Distributor ID:   Ubuntu
  Description:  Ubuntu 20.04 LTS
  Release:  20.04
  Codename: focal

  root@eu1:/var/log# apt-cache policy cryptsetup
  cryptsetup:
Installed: (none)
Candidate: 2:2.2.2-3ubuntu2
Version table:
   2:2.2.2-3ubuntu2 500
  500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages

  OTHER BACKGROUND INFO:
  ==

  1. machine has 2 drives. each drive is partitioned into 2 partitions,
  zfs and swap

  
  2. Ubuntu 20.04 installed on ZFS root using debootstrap 
(debootstrap_1.0.118ubuntu1_all)

  
  3. The ZFS root pool is a 2 partition mirror (the first partition of each 
disk)

  
  4. /etc/crypttab is set up as follows:

  swap  
/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802914-part2
/dev/urandom   swap,cipher=aes-xts-plain64,size=256
  swap  
/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802933-part2
/dev/urandom   swap,cipher=aes-xts-plain64,size=256


  
  WHAT I EXPECTED
  ===

  I expected machine would reboot and have encrypted swap that used two
  devices under /dev/mapper


  WHAT HAPPENED INSTEAD
  =

  
  On reboot, swap setup fails with the following messages in /var/log/syslog:

  Apr 28 17:13:01 eu1 kernel: [5.360793] systemd[1]: cryptsetup.target: 
Found ordering cycle on systemd-cryptsetup@swap.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360795] systemd[1]: cryptsetup.target: 
Found dependency on systemd-random-seed.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360796] systemd[1]: cryptsetup.target: 
Found dependency on zfs-mount.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360797] systemd[1]: cryptsetup.target: 
Found dependency on zfs-load-module.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360798] systemd[1]: cryptsetup.target: 
Found dependency on cryptsetup.target/start
  Apr 28 17:13:01 eu1 kernel: [5.360799] systemd[1]: cryptsetup.target: Job 
systemd-cryptsetup@swap.service/start deleted to break ordering cycle starting 
with cryptsetup.target/start
  . . . . . .
  Apr 28 17:13:01 eu1 kernel: [5.361082] systemd[1]: Unnecessary job for 
/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802914-part2 was removed

  
  Also, /dev/mapper does not contain any swap devices:

  root@eu1:/var/log# ls -l /dev/mapper
  total 0
  crw--- 1 root root 10, 236 Apr 28 17:13 control
  root@eu1:/var/log#

  
  And top shows no swap:

  MiB Swap:  0.0 total,  0.0 free,  0.0 used.  63153.6 avail
  Mem

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1875577/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1779736] Re: umask ignored on NFSv4.2 mounts

2020-05-29 Thread Richard Laager
seth-arnold, the ZFS default is acltype=off, which means that ACLs are
disabled. (I don't think the NFSv4 ACL support in ZFS is wired up on
Linux.) It's not clear to me why this is breaking with ACLs off.
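
For reference, the property can be checked per dataset (dataset name is an example):
zfs get acltype tank/export     # default is "off"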

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1779736

Title:
  umask ignored on NFSv4.2 mounts

Status in linux package in Ubuntu:
  Confirmed
Status in nfs-utils package in Ubuntu:
  Confirmed
Status in zfs-linux package in Ubuntu:
  New

Bug description:
  After upgrading to kernel 4.15.0-24-generic (on Ubuntu 18.04 LTS)
  NFSv4.2 mounts ignore the umask when creating files and directories.
  Files get permissions 666 and directories get 777.  Therefore, a umask
  of 000 is seemingly being forced when creating files/directories in
  NFS mounts.  Mounting with noacl does not resolve the issue.

  How to replicate:

  1. Mount an NFS share (defaults to NFSv4.2)
  2. Ensure restrictive umask: umask 022
  3. Create directory: mkdir test_dir
  4. Create file: touch test_file
  5. List: ls -l

  The result will be:
  drwxrwxrwx 2 user user 2 Jul  2 12:16 test_dir
  -rw-rw-rw- 1 user user 0 Jul  2 12:16 test_file

  while the expected result would be
  drwxr-xr-x 2 user user 2 Jul  2 12:16 test_dir
  -rw-r--r-- 1 user user 0 Jul  2 12:16 test_file

  Bug does not occur when mounting with any of:
    vers=3
    vers=4.0
    vers=4.1

  I have a suspicion this is related to: 
https://tools.ietf.org/id/draft-ietf-nfsv4-umask-03.html
  But since the server does not have ACL's enabled, and mounting with noacl 
does not resolve the issue this is unexpected behavior.

  Both server and client are running kernel 4.15.0-24-generic on Ubuntu
  18.04 LTS.  NFS package versions are:

  nfs-kernel-server 1:1.3.4-2.1ubuntu5
  nfs-common 1:1.3.4-2.1ubuntu5

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1779736/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


Re: [Kernel-packages] [Bug 1881107] Re: zfs: backport AES-GCM performance accelleration

2020-05-28 Thread Richard Laager
There is another AES-GCM performance acceleration commit for systems
without MOVBE.

-- 
Richard

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1881107

Title:
  zfs: backport AES-GCM performance accelleration

Status in zfs-linux package in Ubuntu:
  Fix Released
Status in zfs-linux source package in Focal:
  In Progress
Status in zfs-linux source package in Groovy:
  Fix Released

Bug description:
  == SRU Justification ==

  Upstream commit 31b160f0a6c673c8f926233af2ed6d5354808393 contains AES-
  GCM acceleration changes that significantly improve encrypted
  performance.

  Tests on a memory backed pool show performance improvements of ~15-22%
  for AES-CCM writes, ~17-20% AES-CCM reads, 34-36% AES-GCM writes and
  ~79-80% AES-GCM reads on a Sandybridge x86-64 CPU, so this looks like
  a promising optimization that will benefit a lot of users.

  == The fix ==

  Backport of upstream 31b160f0a6c673c8f926233af2ed6d5354808393 - this
  is already backported in Groovy ZFS 0.8.3-1ubuntu13

  == Test case ==

  Run ZFS performance tests from ubuntu_performance_zfs_encryption
  ubuntu kernel team autotests. With the fix the encryption runs
  significantly faster, as noted earlier in the SRU justification.

  Also test with the 4 types of ZFS ubuntu autotests, should not fail
  any of these.

  == Regression Potential ==

  This fix alters the crypto engine and adds in new optimizations for
  CPUs that have capable instruction sets.  There is a risk that this
  new crypto code is erroneous.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1881107/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1872863] Re: QEMU/KVM display is garbled when booting from kernel EFI stub due to missing bochs-drm module

2020-05-11 Thread Richard Laager
I have confirmed that the fix in -proposed fixes the issue for me.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1872863

Title:
  QEMU/KVM display is garbled when booting from kernel EFI stub due to
  missing bochs-drm module

Status in kmod package in Ubuntu:
  Fix Released
Status in linux package in Ubuntu:
  Fix Released
Status in kmod source package in Bionic:
  Fix Committed
Status in linux source package in Bionic:
  Fix Committed

Bug description:
  BugLink: https://bugs.launchpad.net/bugs/1872863

  [Impact]

  A recent grub2 SRU, LP #1864533, now forces the kernel to boot via the
  kernel EFI stub whenever EFI is enabled. This causes problems for
  QEMU/KVM virtual machines which use the VGA=std video device, as the
  efifb driver yields an unreadable garbled screen. See the attached
  image.

  The correct framebuffer driver to use in this situation is bochs-drm,
  and modprobing it from a HWE kernel fixes the issues.

  bochs-drm is missing from Bionic since CONFIG_DRM_BOCHS was disabled
  in LP #1378648 due to bochs-drm causing problems in a PowerKVM
  machine. This problem appears to be fixed now, and bochs-drm has been
  re-enabled for Disco and up, in LP #1795857 and has been proven safe.

  This has also come up again in LP #1823152, as well as chatter on LP
  #1795857 to get this enabled on Bionic.

  The customer which is experiencing this issue cannot switch to VGA=qxl
  as a workaround, and must use VGA=std, hence I suggest we re-enable
  bochs-drm for Bionic.

  [Fix]

  I noticed on Focal, if you boot, the framebuffer is initially efifb:

  [ 0.603716] efifb: probing for efifb
  [ 0.603733] efifb: framebuffer at 0xc000, using 1876k, total 1875k
  [ 0.603735] efifb: mode is 800x600x32, linelength=3200, pages=1
  [ 0.603736] efifb: scrolling: redraw
  [ 0.603738] efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
  [ 0.604462] Console: switching to colour frame buffer device 100x37
  [ 0.605829] fb0: EFI VGA frame buffer device

  This soon changes to bochs-drm about a second later:

  [ 0.935988] bochs-drm :00:01.0: remove_conflicting_pci_framebuffers: bar 
0: 0xc000 -> 0xc0ff
  [ 0.937023] bochs-drm :00:01.0: remove_conflicting_pci_framebuffers: bar 
2: 0xc1c8c000 -> 0xc1c8cfff
  [ 0.937776] checking generic (c000 1d5000) vs hw (c000 100)
  [ 0.937776] fb0: switching to bochsdrmfb from EFI VGA
  [ 0.939085] Console: switching to colour dummy device 80x25
  [ 0.939117] bochs-drm :00:01.0: vgaarb: deactivate vga console
  [ 0.939210] [drm] Found bochs VGA, ID 0xb0c5.
  [ 0.939212] [drm] Framebuffer size 16384 kB @ 0xc000, mmio @ 0xc1c8c000.
  [ 0.941955] lpc_ich :00:1f.0: I/O space for GPIO uninitialized
  [ 0.942069] [TTM] Zone kernel: Available graphics memory: 2006780 KiB
  [ 0.942081] [TTM] Initializing pool allocator
  [ 0.942089] [TTM] Initializing DMA pool allocator
  [ 0.943026] virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 
GB/10.0 GiB)
  [ 0.944019] [drm] Found EDID data blob.
  [ 0.944162] [drm] Initialized bochs-drm 1.0.0 20130925 for :00:01.0 on 
minor 0
  [ 0.944979] fbcon: bochs-drmdrmfb (fb0) is primary device
  [ 0.947712] Console: switching to colour frame buffer device 128x48

  On bionic, the framebuffer never changes from efifb, since the bochs-
  drm kernel module is not built, and it is also present on the module
  banlist in /etc/modprobe.d/blacklist-framebuffer.conf

  bochs-drm needs to be enabled to be built in the kernel config, and
  removed from the module blacklist in kmod.

  [Testcase]

  Create a new QEMU/KVM virtual machine, I used virt-manager. Before you
  install the OS, check the box to modify settings before install. In
  the "Overview" tab, enable EFI by setting the firmware to "UEFI
  x86_64: /usr/share/OVMF/OVMF_CODE.secboot.fd".

  Set the video device to qxl while you install Bionic. Once the install
  is done, reboot, do a "apt update" and "apt upgrade", to ensure you
  have grub2 2.02-2ubuntu8.15 installed.

  Shut the VM down, and set the video device to VGA. Or VGA=std if you
  use the QEMU command line.

  Start the VM up, and the screen will be garbled. See attached picture.

  If you install my test packages, which are available here:

  https://launchpad.net/~mruffell/+archive/ubuntu/sf272653-test

  Instructions to install (on a bionic system):

  1) Enable bionic-proposed
  2) sudo apt-get update
  3) sudo apt install linux-image-unsigned-4.15.0-96-generic 
linux-modules-4.15.0-96-generic linux-modules-extra-4.15.0-96-generic 
linux-headers-4.15.0-96-generic linux-headers-4.15.0-96 libkmod2 kmod
  4) sudo reboot
  5) uname -rv
  4.15.0-96-generic #97+TEST272653v20200409b1-Ubuntu SMP Thu Apr 9 04:09:18 UTC 
2020

  If you reboot, the screen will be perfectly readable, since the bochs-
  drm driver will be in use.
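
  To confirm which driver ended up owning the framebuffer, something
  like the following should do (a quick sketch; the PCI slot 00:01.0 is
  taken from the log above and may differ on other machines):

  lsmod | grep -i bochs              # bochs_drm should be loaded
  sudo lspci -k -s 00:01.0           # should list bochs-drm as the kernel driver in use
  cat /proc/fb                       # should show the bochs-drm framebuffer, not "EFI VGA"
  dmesg | grep -iE 'bochs|efifb'     # shows the efifb -> bochs-drm handover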

  [Regression Potential]

  We are enabling a 

[Kernel-packages] [Bug 1875577] Re: Encrypted swap won't load on 20.04 with zfs root

2020-05-05 Thread Richard Laager
Can you share a bit more detail about how you have yours set up? What
does your partition table look like, what does the MD config look like,
what do you have in /etc/fstab for swap, etc.? I'm running into weird
issues with this configuration, separate from this bug.

@didrocks: I'll try to get this proposed upstream soon. If you beat me
to it, I won't complain. :)

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1875577

Title:
  Encrypted swap won't load on 20.04 with zfs root

Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  root@eu1:/var/log# lsb_release -a
  No LSB modules are available.
  Distributor ID:   Ubuntu
  Description:  Ubuntu 20.04 LTS
  Release:  20.04
  Codename: focal

  root@eu1:/var/log# apt-cache policy cryptsetup
  cryptsetup:
Installed: (none)
Candidate: 2:2.2.2-3ubuntu2
Version table:
   2:2.2.2-3ubuntu2 500
  500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages

  OTHER BACKGROUND INFO:
  ==

  1. machine has 2 drives. each drive is partitioned into 2 partitions,
  zfs and swap

  
  2. Ubuntu 20.04 installed on ZFS root using debootstrap 
(debootstrap_1.0.118ubuntu1_all)

  
  3. The ZFS root pool is a 2 partition mirror (the first partition of each 
disk)

  
  4. /etc/crypttab is set up as follows:

  swap  /dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802914-part2  /dev/urandom  swap,cipher=aes-xts-plain64,size=256
  swap  /dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802933-part2  /dev/urandom  swap,cipher=aes-xts-plain64,size=256


  
  WHAT I EXPECTED
  ===

  I expected machine would reboot and have encrypted swap that used two
  devices under /dev/mapper


  WHAT HAPPENED INSTEAD
  =

  
  On reboot, swap setup fails with the following messages in /var/log/syslog:

  Apr 28 17:13:01 eu1 kernel: [5.360793] systemd[1]: cryptsetup.target: 
Found ordering cycle on systemd-cryptsetup@swap.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360795] systemd[1]: cryptsetup.target: 
Found dependency on systemd-random-seed.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360796] systemd[1]: cryptsetup.target: 
Found dependency on zfs-mount.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360797] systemd[1]: cryptsetup.target: 
Found dependency on zfs-load-module.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360798] systemd[1]: cryptsetup.target: 
Found dependency on cryptsetup.target/start
  Apr 28 17:13:01 eu1 kernel: [5.360799] systemd[1]: cryptsetup.target: Job 
systemd-cryptsetup@swap.service/start deleted to break ordering cycle starting 
with cryptsetup.target/start
  . . . . . .
  Apr 28 17:13:01 eu1 kernel: [5.361082] systemd[1]: Unnecessary job for 
/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802914-part2 was removed

  
  Also, /dev/mapper does not contain any swap devices:

  root@eu1:/var/log# ls -l /dev/mapper
  total 0
  crw--- 1 root root 10, 236 Apr 28 17:13 control
  root@eu1:/var/log#

  
  And top shows no swap:

  MiB Swap:  0.0 total,  0.0 free,  0.0 used.  63153.6 avail
  Mem

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1875577/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1874519] Re: ZFS installation on Raspberry Pi is problematic

2020-05-05 Thread Richard Laager
I think it used to be the case that zfsutils-linux depended on zfs-dkms
which was then provided by the kernel packages. That seems like a way to
solve this. Given that dkms is for dynamic kernel modules, it was always
a bit weird to see the kernel providing that. It should probably be that
zfsutils-linux depends on zfs-dkms | zfs-module, and then the kernel
provides zfs-module.
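
In debian/control terms, that would look roughly like the following
(illustrative only; the exact field placement, package names, and the
kernel binary package that ships zfs.ko are for the maintainers to pick):

Package: zfsutils-linux
Depends: zfs-dkms | zfs-module, ${misc:Depends}

Package: linux-modules-X.Y.Z-generic
Provides: zfs-module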

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1874519

Title:
  ZFS installation on Raspberry Pi is problematic

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  Version: Ubuntu Server 20.04 - preinstalled 64-bit image for Raspberry
  Pi.

  ZFS on the Pi under 20.04 is currently a bit problematic. Upon issuing
  the command 'zpool status', I'm helpfully directed to install
  zfsutils-linux. When I do this, it complains that it cannot find the
  ZFS module, then errors out. Worse than that, the zfsutils-linux
  package does not depend on the zfs-dkms package, so it doesn't attempt
  to build the ZFS kernel modules automatically.

  The workaround is to install zfs-dkms, which builds the required
  kernel modules. (Once this has been done, the usual errors when
  installing the zfsutils-linux package, caused by there being no ZFS
  pools on the system, can be worked around by creating a zpool, then
  rerunning 'sudo apt install zfsutils-linux', as with previous versions
  of Ubuntu and Debian).
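
  Spelled out as commands, the workaround above is roughly (placeholder
  pool name and device; any spare disk or file-backed vdev will do):

  sudo apt install zfs-dkms            # builds zfs.ko for the running kernel
  sudo modprobe zfs
  sudo zpool create testpool /dev/sdX  # or a file-backed vdev for testing
  sudo apt install zfsutils-linux      # now configures without errors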

  I have not tested on other hardware platforms - this problem may also
  exist on other platforms where the user has not selected to install to
  ZFS.

  I have selected 'zfsutils' as the affected package, which is not the
  name of an actual current package, since launchpad won't let me submit
  the bug without selecting a package, however it's not clear to me that
  the problem is caused by that package.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1874519/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1875577] Re: Encrypted swap won't load on 20.04 with zfs root

2020-05-05 Thread Richard Laager
I didn't get a chance to test the patch. I'm running into unrelated
issues.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1875577

Title:
  Encrypted swap won't load on 20.04 with zfs root

Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  root@eu1:/var/log# lsb_release -a
  No LSB modules are available.
  Distributor ID:   Ubuntu
  Description:  Ubuntu 20.04 LTS
  Release:  20.04
  Codename: focal

  root@eu1:/var/log# apt-cache policy cryptsetup
  cryptsetup:
Installed: (none)
Candidate: 2:2.2.2-3ubuntu2
Version table:
   2:2.2.2-3ubuntu2 500
  500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages

  OTHER BACKGROUND INFO:
  ==

  1. machine has 2 drives. each drive is partitioned into 2 partitions,
  zfs and swap

  
  2. Ubuntu 20.04 installed on ZFS root using debootstrap 
(debootstrap_1.0.118ubuntu1_all)

  
  3. The ZFS root pool is a 2 partition mirror (the first partition of each 
disk)

  
  4. /etc/crypttab is set up as follows:

  swap  /dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802914-part2  /dev/urandom  swap,cipher=aes-xts-plain64,size=256
  swap  /dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802933-part2  /dev/urandom  swap,cipher=aes-xts-plain64,size=256


  
  WHAT I EXPECTED
  ===

  I expected machine would reboot and have encrypted swap that used two
  devices under /dev/mapper


  WHAT HAPPENED INSTEAD
  =

  
  On reboot, swap setup fails with the following messages in /var/log/syslog:

  Apr 28 17:13:01 eu1 kernel: [5.360793] systemd[1]: cryptsetup.target: 
Found ordering cycle on systemd-cryptsetup@swap.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360795] systemd[1]: cryptsetup.target: 
Found dependency on systemd-random-seed.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360796] systemd[1]: cryptsetup.target: 
Found dependency on zfs-mount.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360797] systemd[1]: cryptsetup.target: 
Found dependency on zfs-load-module.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360798] systemd[1]: cryptsetup.target: 
Found dependency on cryptsetup.target/start
  Apr 28 17:13:01 eu1 kernel: [5.360799] systemd[1]: cryptsetup.target: Job 
systemd-cryptsetup@swap.service/start deleted to break ordering cycle starting 
with cryptsetup.target/start
  . . . . . .
  Apr 28 17:13:01 eu1 kernel: [5.361082] systemd[1]: Unnecessary job for 
/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802914-part2 was removed

  
  Also, /dev/mapper does not contain any swap devices:

  root@eu1:/var/log# ls -l /dev/mapper
  total 0
  crw--- 1 root root 10, 236 Apr 28 17:13 control
  root@eu1:/var/log#

  
  And top shows no swap:

  MiB Swap:  0.0 total,  0.0 free,  0.0 used.  63153.6 avail
  Mem

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1875577/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1875577] Re: Encrypted swap won't load on 20.04 with zfs root

2020-05-04 Thread Richard Laager
This is a tricky one because all of the dependencies make sense in
isolation. Even if we remove the dependency added by that upstream
OpenZFS commit, given that modern systems use zfs-mount-generator,
systemd-random-seed.service is going to Require= and After= var-
lib.mount because of its RequiresMountsFor=/var/lib/systemd/random-seed.
The generated var-lib.mount will be After=zfs-import.target because you
can't mount a filesystem without importing the pool. And zfs-
import.target is After= the two zfs-import-* services. Those are after
cryptsetup.target, as you might be running your pool on top of LUKS.

Mostly side note: it does seem weird and unnecessary that zfs-load-
module.service has After=cryptsetup.target. We should probably remove
that. That is coming from debian/patches/2100-zfs-load-module.patch
(which is what provides zfs-load-module.service in its entirety).

One idea here would be to eliminate the After=cryptsetup.target from
zfs-import-{cache,scan}.service and require that someone add them via a
drop-in if they are running on LUKS. However, in that case, they'll run
into the same problem anyway. So that's not really a fix.
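
For reference, such a drop-in would be something like the sketch below
(and, as noted, it would not actually solve the cycle):

sudo mkdir -p /etc/systemd/system/zfs-import-cache.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/zfs-import-cache.service.d/luks.conf
[Unit]
After=cryptsetup.target
EOF
sudo systemctl daemon-reload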

Another option might be to remove the zfs-mount.service Before=systemd-
random-seed.service and effectively require the use of the mount
generator for Root-on-ZFS setups. That is what the Ubuntu installer does
and what the Root-on-ZFS HOWTO will use for 20.04 anyway. (I'm working
on it actively right now.) Then, modify zfs-mount-generator to NOT add
After=zfs-import.target (and likewise for Wants=) if the relevant pool is
already imported (and likewise for the zfs-load-key- services). Since
the rpool will already be imported by the time zfs-mount-generator runs,
that would be omitted.
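
One way to express that idea in the (shell) generator, paraphrased and
not the actual diff, would be a guard along these lines:

# Hypothetical check inside zfs-mount-generator for each dataset's pool:
if zpool list -H -o name "$pool" >/dev/null 2>&1; then
    # Pool already imported (e.g. the root pool): emit no ordering on zfs-import.target.
    after_import=""
else
    after_import="zfs-import.target"
fi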

I've attached an *untested* patch to that effect. I hope to test this
yet tonight as I test more Root-on-ZFS scenarios, but no promises.

** Patch added: "2150-fix-systemd-dependency-loops.patch"
   
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1875577/+attachment/5366544/+files/2150-fix-systemd-dependency-loops.patch

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1875577

Title:
  Encrypted swap won't load on 20.04 with zfs root

Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  root@eu1:/var/log# lsb_release -a
  No LSB modules are available.
  Distributor ID:   Ubuntu
  Description:  Ubuntu 20.04 LTS
  Release:  20.04
  Codename: focal

  root@eu1:/var/log# apt-cache policy cryptsetup
  cryptsetup:
Installed: (none)
Candidate: 2:2.2.2-3ubuntu2
Version table:
   2:2.2.2-3ubuntu2 500
  500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages

  OTHER BACKGROUND INFO:
  ==

  1. machine has 2 drives. each drive is partitioned into 2 partitions,
  zfs and swap

  
  2. Ubuntu 20.04 installed on ZFS root using debootstrap 
(debootstrap_1.0.118ubuntu1_all)

  
  3. The ZFS root pool is a 2 partition mirror (the first partition of each 
disk)

  
  4. /etc/crypttab is set up as follows:

  swap  /dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802914-part2  /dev/urandom  swap,cipher=aes-xts-plain64,size=256
  swap  /dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802933-part2  /dev/urandom  swap,cipher=aes-xts-plain64,size=256


  
  WHAT I EXPECTED
  ===

  I expected machine would reboot and have encrypted swap that used two
  devices under /dev/mapper


  WHAT HAPPENED INSTEAD
  =

  
  On reboot, swap setup fails with the following messages in /var/log/syslog:

  Apr 28 17:13:01 eu1 kernel: [5.360793] systemd[1]: cryptsetup.target: 
Found ordering cycle on systemd-cryptsetup@swap.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360795] systemd[1]: cryptsetup.target: 
Found dependency on systemd-random-seed.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360796] systemd[1]: cryptsetup.target: 
Found dependency on zfs-mount.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360797] systemd[1]: cryptsetup.target: 
Found dependency on zfs-load-module.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360798] systemd[1]: cryptsetup.target: 
Found dependency on cryptsetup.target/start
  Apr 28 17:13:01 eu1 kernel: [5.360799] systemd[1]: cryptsetup.target: Job 
systemd-cryptsetup@swap.service/start deleted to break ordering cycle starting 
with cryptsetup.target/start
  . . . . . .
  Apr 28 17:13:01 eu1 kernel: [5.361082] systemd[1]: Unnecessary job for 
/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802914-part2 was removed

  
  Also, /dev/mapper does not contain any swap devices:

  root@eu1:/var/log# ls -l /dev/mapper
  total 0
  crw--- 1 root root 10, 236 Apr 28 17:13 control
  root@eu1:/var/log#

  
  And top shows no swap:

  MiB Swap:  0.0 total,  0.0 free,  0.0 used.  63153.6 

[Kernel-packages] [Bug 1875577] Re: Encrypted swap won't load on 20.04 with zfs root

2020-05-04 Thread Richard Laager
John Gray: Everything else aside, you should mirror your swap instead of
striping it (which I think is what you're doing). With your current
setup, if a disk dies, your system will crash.
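
For the record, a mirrored-swap variant of that crypttab setup would
look roughly like this (placeholder device names, untested sketch):

sudo mdadm --create /dev/md0 --level=mirror --raid-devices=2 \
    /dev/disk/by-id/nvme-DISK1-part2 /dev/disk/by-id/nvme-DISK2-part2
# /etc/crypttab then references the single MD device:
#   swap  /dev/md0  /dev/urandom  swap,cipher=aes-xts-plain64,size=256
# and /etc/fstab uses /dev/mapper/swap as the swap device.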

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1875577

Title:
  Encrypted swap won't load on 20.04 with zfs root

Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  root@eu1:/var/log# lsb_release -a
  No LSB modules are available.
  Distributor ID:   Ubuntu
  Description:  Ubuntu 20.04 LTS
  Release:  20.04
  Codename: focal

  root@eu1:/var/log# apt-cache policy cryptsetup
  cryptsetup:
Installed: (none)
Candidate: 2:2.2.2-3ubuntu2
Version table:
   2:2.2.2-3ubuntu2 500
  500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages

  OTHER BACKGROUND INFO:
  ==

  1. machine has 2 drives. each drive is partitioned into 2 partitions,
  zfs and swap

  
  2. Ubuntu 20.04 installed on ZFS root using debootstrap 
(debootstrap_1.0.118ubuntu1_all)

  
  3. The ZFS root pool is a 2 partition mirror (the first partition of each 
disk)

  
  4. /etc/crypttab is set up as follows:

  swap  /dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802914-part2  /dev/urandom  swap,cipher=aes-xts-plain64,size=256
  swap  /dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802933-part2  /dev/urandom  swap,cipher=aes-xts-plain64,size=256


  
  WHAT I EXPECTED
  ===

  I expected machine would reboot and have encrypted swap that used two
  devices under /dev/mapper


  WHAT HAPPENED INSTEAD
  =

  
  On reboot, swap setup fails with the following messages in /var/log/syslog:

  Apr 28 17:13:01 eu1 kernel: [5.360793] systemd[1]: cryptsetup.target: 
Found ordering cycle on systemd-cryptsetup@swap.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360795] systemd[1]: cryptsetup.target: 
Found dependency on systemd-random-seed.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360796] systemd[1]: cryptsetup.target: 
Found dependency on zfs-mount.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360797] systemd[1]: cryptsetup.target: 
Found dependency on zfs-load-module.service/start
  Apr 28 17:13:01 eu1 kernel: [5.360798] systemd[1]: cryptsetup.target: 
Found dependency on cryptsetup.target/start
  Apr 28 17:13:01 eu1 kernel: [5.360799] systemd[1]: cryptsetup.target: Job 
systemd-cryptsetup@swap.service/start deleted to break ordering cycle starting 
with cryptsetup.target/start
  . . . . . .
  Apr 28 17:13:01 eu1 kernel: [5.361082] systemd[1]: Unnecessary job for 
/dev/disk/by-id/nvme-SAMSUNG_MZVLB1T0HALR-0_S3W6NX0M802914-part2 was removed

  
  Also, /dev/mapper does not contain any swap devices:

  root@eu1:/var/log# ls -l /dev/mapper
  total 0
  crw--- 1 root root 10, 236 Apr 28 17:13 control
  root@eu1:/var/log#

  
  And top shows no swap:

  MiB Swap:  0.0 total,  0.0 free,  0.0 used.  63153.6 avail
  Mem

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1875577/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1848496] Re: [zfs-root] "device-mapper: reload ioctl on osprober-linux-sdaX failed: Device or resource busy" against devices owned by ZFS

2020-05-04 Thread Richard Laager
brian-willoughby (and pranav.bhattarai):

The original report text confirms that "The exit code is 0, so update-
grub does not fail as a result." That matches my understanding (as
someone who has done a lot of ZFS installs maintaining the upstream
Root-on-ZFS HOWTO) that this is purely cosmetic.

If you're not actually running other operating systems, you can simply
remove the os-prober package to make the errors go away.
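
Concretely, either of these should make the messages go away (the second
keeps os-prober installed but skips it during update-grub):

sudo apt remove os-prober
# or:
echo 'GRUB_DISABLE_OS_PROBER=true' | sudo tee -a /etc/default/grub
sudo update-grub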

I'm not saying it shouldn't be fixed. But it's not actually breaking
your systems, right?

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1848496

Title:
  [zfs-root] "device-mapper: reload ioctl on osprober-linux-sdaX
  failed: Device or resource busy" against devices owned by ZFS

Status in os-prober package in Ubuntu:
  Confirmed
Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  On a 19.10 (pre-release, 20191016 daily) zfs installation, running
  'os-prober' (as 'update-grub' does by default) triggers the following
  error message:

  x@x:~$ sudo os-prober; echo $?
  device-mapper: reload ioctl on osprober-linux-sda5  failed: Device or 
resource busy
  Command failed.
  0
  x@x:~$

  The exit code is 0, so update-grub does not fail as a result.

  
  Partitions on the only available storage (automatically setup by installer):

  x@x:~$ sudo fdisk /dev/sda -l
  Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
  Disk model: VBOX HARDDISK   
  Units: sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 512 bytes / 512 bytes
  Disklabel type: gpt
  Disk identifier: AB7BECFB-5946-48B2-9C59-D2B1261E29A5

  Device   Start  End  Sectors  Size Type
  /dev/sda1 2048  1050623  1048576  512M EFI System
  /dev/sda2  1050624  1153023   102400   50M Linux filesystem
  /dev/sda3  1153024  2070527   917504  448M Linux swap
  /dev/sda4  2070528  6264831  41943042G Solaris boot
  /dev/sda5  6264832 20971486 147066557G Solaris root
  x@x:~$ 

  
  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: os-prober 1.74ubuntu2
  ProcVersionSignature: Ubuntu 5.3.0-18.19-generic 5.3.1
  Uname: Linux 5.3.0-18-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Thu Oct 17 14:19:59 2019
  InstallationDate: Installed on 2019-10-17 (0 days ago)
  InstallationMedia: Ubuntu 19.10 "Eoan Ermine" - Release amd64 (20191016.5)
  SourcePackage: os-prober
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/os-prober/+bug/1848496/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1857398] Re: ubiquity should support encryption by default with zfsroot, with users able to opt in to running change-key after install

2020-02-10 Thread Richard Laager
The AES-GCM performance improvements patch has been merged to master. This also 
included the changes to make encryption=on mean aes-256-gcm:
https://github.com/zfsonlinux/zfs/commit/31b160f0a6c673c8f926233af2ed6d5354808393
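
So, once a release picks that up, something like the following should
default to aes-256-gcm instead of aes-256-ccm (illustrative names):

sudo zfs create -o encryption=on -o keyformat=passphrase rpool/secure
sudo zfs get encryption rpool/secure   # expected to report aes-256-gcm with the new default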

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1857398

Title:
  ubiquity should support encryption by default with zfsroot, with users
  able to opt in to running change-key after install

Status in ubiquity package in Ubuntu:
  New
Status in zfs-linux package in Ubuntu:
  New

Bug description:
  zfs supports built-in encryption support, but the decision of whether
  a pool is encrypted or not must be made at pool creation time; it is
  possible to add encrypted datasets on top of an unencrypted pool but
  it is not possible to do an online change of a dataset (or a whole
  pool) to toggle encryption.

  We should therefore always install with encryption enabled on zfs
  systems, with a non-secret key by default, and allow the user to use
  'zfs change-key -o keylocation=prompt' after install to take ownership
  of the encryption and upgrade the security.
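
  For example, taking ownership after install could be as simple as the
  following (sketch; the actual encryption root depends on the
  installer's dataset layout):

  sudo zfs change-key -o keylocation=prompt -o keyformat=passphrase rpool/ROOT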

  This is also the simplest way to allow users to avoid having to choose
  between the security of full-disk encryption, and the advanced
  filesystem features of zfs since it requires no additional UX work in
  ubiquity.

  We should make sure that
  https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1857040 is fixed
  first in the kernel so that enabling zfs encryption does not impose an
  unreasonable performance penalty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1857398/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1862661] Re: zfs-mount.service and others fail inside unpriv containers

2020-02-10 Thread Richard Laager
What was the expected result? Are you expecting to be able to just
install ZFS in a container (but not use it)? Or are you expecting it to
actually work? The user space tools can’t do much of anything without
talking to the kernel.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1862661

Title:
  zfs-mount.service and others fail inside unpriv containers

Status in zfs-linux package in Ubuntu:
  Incomplete

Bug description:
  1)
  # lsb_release -rd
  Description:  Ubuntu Focal Fossa (development branch)
  Release:  20.04

  2)
  # apt-cache policy zfsutils-linux
  zfsutils-linux:
    Installed: (none)
    Candidate: 0.8.3-1ubuntu3
    Version table:
   0.8.3-1ubuntu3 500
  500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages

  3) apt install zfsutils-linux installs successfully
  4) apt install zfsutils-linux; echo $? == 0
  installs but apt returns error code due to zfs services failing to start 
successfully

  See "systemctl status zfs-mount.service" and "journalctl -xe" for details.
  invoke-rc.d: initscript zfs-mount, action "start" failed.
  ● zfs-mount.service - Mount ZFS filesystems
   Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Mon 2020-02-10 16:18:04 UTC; 23ms ago
     Docs: man:zfs(8)
  Process: 1672 ExecStart=/sbin/zfs mount -a (code=exited, status=1/FAILURE)
     Main PID: 1672 (code=exited, status=1/FAILURE)

  Feb 10 16:18:04 f2 systemd[1]: Starting Mount ZFS filesystems...
  Feb 10 16:18:04 f2 zfs[1672]: /dev/zfs and /proc/self/mounts are required.
  Feb 10 16:18:04 f2 zfs[1672]: Try running 'udevadm trigger' and 'mount -t 
proc proc /proc' as root.
  Feb 10 16:18:04 f2 systemd[1]: zfs-mount.service: Main process exited, 
code=exited, status=1/FAILURE
  Feb 10 16:18:04 f2 systemd[1]: zfs-mount.service: Failed with result 
'exit-code'.
  Feb 10 16:18:04 f2 systemd[1]: Failed to start Mount ZFS filesystems.

  I'm inside a LXD unpriv container.  By default, there are no
  permissions to mount proc, modprobe modules, etc.

  E: Sub-process /usr/bin/dpkg returned an error code (1)
  100

  
  ProblemType: Bug
  DistroRelease: Ubuntu 20.04
  Package: zfsutils-linux 0.8.3-1ubuntu3
  ProcVersionSignature: Ubuntu 5.4.0-9.12-generic 5.4.3
  Uname: Linux 5.4.0-9-generic x86_64
  ApportVersion: 2.20.11-0ubuntu16
  Architecture: amd64
  Date: Mon Feb 10 15:50:42 2020
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   LANG=C.UTF-8
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1862661/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1862661] Re: zfs-mount.service and others fail inside unpriv containers

2020-02-10 Thread Richard Laager
** Changed in: zfs-linux (Ubuntu)
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1862661

Title:
  zfs-mount.service and others fail inside unpriv containers

Status in zfs-linux package in Ubuntu:
  Incomplete

Bug description:
  1)
  # lsb_release -rd
  Description:  Ubuntu Focal Fossa (development branch)
  Release:  20.04

  2)
  # apt-cache policy zfsutils-linux
  zfsutils-linux:
    Installed: (none)
    Candidate: 0.8.3-1ubuntu3
    Version table:
   0.8.3-1ubuntu3 500
  500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages

  3) apt install zfsutils-linux installs successfully
  4) apt install zfsutils-linux; echo $? == 0
  installs but apt returns error code due to zfs services failing to start 
successfully

  See "systemctl status zfs-mount.service" and "journalctl -xe" for details.
  invoke-rc.d: initscript zfs-mount, action "start" failed.
  ● zfs-mount.service - Mount ZFS filesystems
   Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Mon 2020-02-10 16:18:04 UTC; 23ms ago
     Docs: man:zfs(8)
  Process: 1672 ExecStart=/sbin/zfs mount -a (code=exited, status=1/FAILURE)
     Main PID: 1672 (code=exited, status=1/FAILURE)

  Feb 10 16:18:04 f2 systemd[1]: Starting Mount ZFS filesystems...
  Feb 10 16:18:04 f2 zfs[1672]: /dev/zfs and /proc/self/mounts are required.
  Feb 10 16:18:04 f2 zfs[1672]: Try running 'udevadm trigger' and 'mount -t 
proc proc /proc' as root.
  Feb 10 16:18:04 f2 systemd[1]: zfs-mount.service: Main process exited, 
code=exited, status=1/FAILURE
  Feb 10 16:18:04 f2 systemd[1]: zfs-mount.service: Failed with result 
'exit-code'.
  Feb 10 16:18:04 f2 systemd[1]: Failed to start Mount ZFS filesystems.

  I'm inside a LXD unpriv container.  By default, there are no
  permissions to mount proc, modprobe modules, etc.

  E: Sub-process /usr/bin/dpkg returned an error code (1)
  100

  
  ProblemType: Bug
  DistroRelease: Ubuntu 20.04
  Package: zfsutils-linux 0.8.3-1ubuntu3
  ProcVersionSignature: Ubuntu 5.4.0-9.12-generic 5.4.3
  Uname: Linux 5.4.0-9-generic x86_64
  ApportVersion: 2.20.11-0ubuntu16
  Architecture: amd64
  Date: Mon Feb 10 15:50:42 2020
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   LANG=C.UTF-8
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1862661/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1862165] Re: /usr/local leak in /etc/default/zfs

2020-02-06 Thread Richard Laager
** Bug watch added: Github Issue Tracker for ZFS #9443
   https://github.com/zfsonlinux/zfs/issues/9443

** Also affects: zfs via
   https://github.com/zfsonlinux/zfs/issues/9443
   Importance: Unknown
   Status: Unknown

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1862165

Title:
  /usr/local leak in /etc/default/zfs

Status in Native ZFS for Linux:
  Unknown
Status in zfs-linux package in Ubuntu:
  New

Bug description:
  While updating my focal server this morning, I got a dpkg prompt about
  /etc/default/zfs. Inspecting the diff showed some paths changed from
  /etc to /usr/local/etc:

  ...
  Setting up zfsutils-linux (0.8.3-1ubuntu2) ...

 
  Installing new version of config file /etc/cron.d/zfsutils-linux ...

  Configuration file '/etc/default/zfs'
   ==> Modified (by you or by a script) since installation.
   ==> Package distributor has shipped an updated version.
     What would you like to do about it ?  Your options are:
      Y or I  : install the package maintainer's version
      N or O  : keep your currently-installed version
      D       : show the differences between the versions
      Z       : start a shell to examine the situation
   The default action is to keep your current version.
  *** zfs (Y/I/N/O/D/Z) [default=N] ? d
  
   # Full path to the ZFS cache file?
   # See "cachefile" in zpool(8).
  -# The default is "/etc/zfs/zpool.cache".
  -#ZPOOL_CACHE="/etc/zfs/zpool.cache"
  +# The default is "/usr/local/etc/zfs/zpool.cache".
  +#ZPOOL_CACHE="/usr/local/etc/zfs/zpool.cache"
   #
   # Setting ZPOOL_CACHE to an empty string ('') AND setting ZPOOL_IMPORT_OPTS 
to
  -# "-c /etc/zfs/zpool.cache" will _enforce_ the use of a cache file.
  +# "-c /usr/local/etc/zfs/zpool.cache" will _enforce_ the use of a cache file.
   # This is needed in some cases (extreme amounts of VDEVs, multipath etc).
   # Generally, the use of a cache file is usually not recommended on Linux
   # because it sometimes is more trouble than it's worth (laptops with external
   # devices or when/if device nodes changes names).
  -#ZPOOL_IMPORT_OPTS="-c /etc/zfs/zpool.cache"
  +#ZPOOL_IMPORT_OPTS="-c /usr/local/etc/zfs/zpool.cache"
   #ZPOOL_CACHE=""

To manage notifications about this bug go to:
https://bugs.launchpad.net/zfs/+bug/1862165/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1854982] Re: Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

2020-01-30 Thread Richard Laager
There does seem to be a real bug here. The problem is that we don’t know
if it is on the ZoL side or the FreeBSD side. The immediate failure is
that “zfs recv” on the FreeBSD side is failing to receive the stream. So
that is the best place to start figuring out why. If it turns out that
ZoL is generating an invalid stream, then we can take this to ZoL.
Accordingly, my main goal here is to help you produce the best possible
bug report for FreeBSD to help them troubleshoot. I don’t run FreeBSD,
so I can’t test this myself to produce a test case. If you can produce a
test case, with an example send stream that FreeBSD can’t receive, that
gives them the best chance of finding the root cause.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1854982

Title:
  Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

Status in zfs-linux package in Ubuntu:
  Incomplete

Bug description:
  After I tried to back-up my datapools from Ubuntu 19.10 to FreeBSD
  12.0 as I have done since June each week, I found out it did not work
  anymore. The regression occurred after I reinstalled Ubuntu on my new
  nvme drive. I also had to reorganize my own datapools/datasets,
  because they either moved to the nvme drive or they had to be located
  on 2 HDDs instead of 3. I had one datapool that still works, the
  datapool containing my archives and it is the datapool, that has NOT
  been reorganized. I tried for a whole long day to get the backup
  working again, but I failed. I compared the properties from datapool
  and dataset, but did not see any problem there. Only a lot of new
  features and properties not present before and not present in FreeBSD.
  I used FreeBSD, because I use for backup an old 32-bits Pentium.

  I have two complaints:
  - the Ubuntu upgrade did cost me the compatibility with FreeBSD. Open-ZFS? :(
  - the system transfers the dataset and at the end of a long transfer it 
decides to quit and the error messages are completely useless and self 
contradicting. 

  On the first try it says the dataset does exist and on the second try it says 
it does NOT exist. One of the two is completely wrong. Some consistency and 
some clearer error messages would be helpful for the user.
  See the following set of strange set error messages on two tries:

  root@VM-Host-Ryzen:/home/bertadmin# /sbin/zfs send -c dpool/dummy@191130 | 
ssh 192.168.1.100 zfs receive zroot/hp-data/dummy
  cannot receive new filesystem stream: destination 'zroot/hp-data/dummy' exists
  must specify -F to overwrite it
  root@VM-Host-Ryzen:/home/bertadmin# /sbin/zfs send -c dpool/dummy@191130 | 
ssh 192.168.1.100 zfs receive -F zroot/hp-data/dummy
  cannot receive new filesystem stream: dataset does not exist

  A 2nd subset of my backup is stored on the laptop and that still
  works. I also compared the properties with those of my laptop, that
  still has its original datapools of begin of the year. I aligned the
  properties of FreeBSD with those of my laptop, but it did not help.

  I attach the properties of the datapool and dataset from both FreeBSD
  and Ubuntu.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: zfsutils-linux 0.8.1-1ubuntu14.1
  ProcVersionSignature: Ubuntu 5.3.0-23.25-generic 5.3.7
  Uname: Linux 5.3.0-23-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu8.2
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Tue Dec  3 13:35:08 2019
  InstallationDate: Installed on 2019-11-30 (3 days ago)
  InstallationMedia: Ubuntu 19.10 "Eoan Ermine" - Release amd64 (20191017)
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.sudoers.d.zfs: [inaccessible: [Errno 13] Permission 
denied: '/etc/sudoers.d/zfs']

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1854982/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1854982] Re: Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

2020-01-30 Thread Richard Laager
The FreeBSD bug report:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=243730

Like I said, boiling this down to a test case would likely help a lot.
Refusing to do so and blaming the people giving you free software and
free support isn’t helpful.

** Bug watch added: bugs.freebsd.org/bugzilla/ #243730
   https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=243730

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1854982

Title:
  Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

Status in zfs-linux package in Ubuntu:
  Incomplete

Bug description:
  After I tried to back-up my datapools from Ubuntu 19.10 to FreeBSD
  12.0 as I have done since June each week, I found out it did not work
  anymore. The regression occurred after I reinstalled Ubuntu on my new
  nvme drive. I also had to reorganize my own datapools/datasets,
  because they either moved to the nvme drive or they had to be located
  on 2 HDDs instead of 3. I had one datapool that still works, the
  datapool containing my archives and it is the datapool, that has NOT
  been reorganized. I tried for a whole long day to get the backup
  working again, but I failed. I compared the properties from datapool
  and dataset, but did not see any problem there. Only a lot of new
  features and properties not present before and not present in FreeBSD.
  I used FreeBSD, because I use for backup an old 32-bits Pentium.

  I have two complaints:
  - the Ubuntu upgrade did cost me the compatibility with FreeBSD. Open-ZFS? :(
  - the system transfers the dataset and at the end of a long transfer it 
decides to quit and the error messages are completely useless and self 
contradicting. 

  On the first try it says the dataset does exist and on the second try it says 
it does NOT exist. One of the two is completely wrong. Some consistency and 
some clearer error messages would be helpful for the user.
  See the following set of strange set error messages on two tries:

  root@VM-Host-Ryzen:/home/bertadmin# /sbin/zfs send -c dpool/dummy@191130 | 
ssh 192.168.1.100 zfs receive zroot/hp-data/dummy
  cannot receive new filesystem stream: destination 'zroot/hp-data/dummy' exists
  must specify -F to overwrite it
  root@VM-Host-Ryzen:/home/bertadmin# /sbin/zfs send -c dpool/dummy@191130 | 
ssh 192.168.1.100 zfs receive -F zroot/hp-data/dummy
  cannot receive new filesystem stream: dataset does not exist

  A 2nd subset of my backup is stored on the laptop and that still
  works. I also compared the properties with those of my laptop, that
  still has its original datapools of begin of the year. I aligned the
  properties of FreeBSD with those of my laptop, but it did not help.

  I attach the properties of the datapool and dataset from both FreeBSD
  and Ubuntu.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: zfsutils-linux 0.8.1-1ubuntu14.1
  ProcVersionSignature: Ubuntu 5.3.0-23.25-generic 5.3.7
  Uname: Linux 5.3.0-23-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu8.2
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Tue Dec  3 13:35:08 2019
  InstallationDate: Installed on 2019-11-30 (3 days ago)
  InstallationMedia: Ubuntu 19.10 "Eoan Ermine" - Release amd64 (20191017)
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.sudoers.d.zfs: [inaccessible: [Errno 13] Permission 
denied: '/etc/sudoers.d/zfs']

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1854982/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1854982] Re: Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

2020-01-30 Thread Richard Laager
** Changed in: zfs-linux (Ubuntu)
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1854982

Title:
  Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

Status in zfs-linux package in Ubuntu:
  Incomplete

Bug description:
  After I tried to back-up my datapools from Ubuntu 19.10 to FreeBSD
  12.0 as I have done since June each week, I found out it did not work
  anymore. The regression occurred after I reinstalled Ubuntu on my new
  nvme drive. I also had to reorganize my own datapools/datasets,
  because they either moved to the nvme drive or they had to be located
  on 2 HDDs instead of 3. I had one datapool that still works, the
  datapool containing my archives and it is the datapool, that has NOT
  been reorganized. I tried for a whole long day to get the backup
  working again, but I failed. I compared the properties from datapool
  and dataset, but did not see any problem there. Only a lot of new
  features and properties not present before and not present in FreeBSD.
  I used FreeBSD, because I use for backup an old 32-bits Pentium.

  I have two complaints:
  - the Ubuntu upgrade did cost me the compatibility with FreeBSD. Open-ZFS? :(
  - the system transfers the dataset and at the end of a long transfer it 
decides to quit and the error messages are completely useless and self 
contradicting. 

  On the first try it says the dataset does exist and on the second try it says 
it does NOT exist. One of the two is completely wrong. Some consistency and 
some clearer error messages would be helpful for the user.
  See the following set of strange set error messages on two tries:

  root@VM-Host-Ryzen:/home/bertadmin# /sbin/zfs send -c dpool/dummy@191130 | 
ssh 192.168.1.100 zfs receive zroot/hp-data/dummy
  cannot receive new filesystem stream: destination 'zroot/hp-data/dummy' exists
  must specify -F to overwrite it
  root@VM-Host-Ryzen:/home/bertadmin# /sbin/zfs send -c dpool/dummy@191130 | 
ssh 192.168.1.100 zfs receive -F zroot/hp-data/dummy
  cannot receive new filesystem stream: dataset does not exist

  A 2nd subset of my backup is stored on the laptop and that still
  works. I also compared the properties with those of my laptop, that
  still has its original datapools of begin of the year. I aligned the
  properties of FreeBSD with those of my laptop, but it did not help.

  I attach the properties of the datapool and dataset from both FreeBSD
  and Ubuntu.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: zfsutils-linux 0.8.1-1ubuntu14.1
  ProcVersionSignature: Ubuntu 5.3.0-23.25-generic 5.3.7
  Uname: Linux 5.3.0-23-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu8.2
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Tue Dec  3 13:35:08 2019
  InstallationDate: Installed on 2019-11-30 (3 days ago)
  InstallationMedia: Ubuntu 19.10 "Eoan Ermine" - Release amd64 (20191017)
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.sudoers.d.zfs: [inaccessible: [Errno 13] Permission 
denied: '/etc/sudoers.d/zfs']

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1854982/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1854982] Re: Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

2020-01-29 Thread Richard Laager
In terms of a compact reproducer, does this work:

# Create a temp pool with large_dnode enabled:
truncate -s 1G lp1854982.img
sudo zpool create -d -o feature@large_dnode=enabled lp1854982 $(pwd)/lp1854982.img

# Create a dataset with dnodesize=auto
sudo zfs create -o dnodesize=auto lp1854982/ldn

# Create a send stream
sudo zfs snapshot lp1854982/ldn@snap
sudo zfs send lp1854982/ldn@snap > lp1854982-ldn.zfs

sudo zpool export lp1854982

cat lp1854982-ldn.zfs | ssh 192.168.1.100 zfs receive zroot/ldn

If that doesn't reproduce the problem, adjust it until it does. You were
using `zfs send -c`, so maybe that's it. You may need to enable more
pool features, etc.

But if this can be reproduced with an empty dataset on an empty pool,
the send stream file is 8.5K (and far less compressed). Attach the
script for reference and the send stream to a FreeBSD bug.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1854982

Title:
  Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  After I tried to back-up my datapools from Ubuntu 19.10 to FreeBSD
  12.0 as I have done since June each week, I found out it did not work
  anymore. The regression occurred after I reinstalled Ubuntu on my new
  nvme drive. I also had to reorganize my own datapools/datasets,
  because they either moved to the nvme drive or they had to be located
  on 2 HDDs instead of 3. I had one datapool that still works, the
  datapool containing my archives and it is the datapool, that has NOT
  been reorganized. I tried for a whole long day to get the backup
  working again, but I failed. I compared the properties from datapool
  and dataset, but did not see any problem there. Only a lot of new
  features and properties not present before and not present in FreeBSD.
  I used FreeBSD, because I use for backup an old 32-bits Pentium.

  I have two complaints:
  - the Ubuntu upgrade did cost me the compatibility with FreeBSD. Open-ZFS? :(
  - the system transfers the dataset and at the end of a long transfer it 
decides to quit and the error messages are completely useless and self 
contradicting. 

  On the first try it says the dataset does exist and on the second try it says 
it does NOT exist. One of the two is completely wrong. Some consistency and 
some clearer error messages would be helpful for the user.
  See the following set of strange set error messages on two tries:

  root@VM-Host-Ryzen:/home/bertadmin# /sbin/zfs send -c dpool/dummy@191130 | 
ssh 192.168.1.100 zfs receive zroot/hp-data/dummy
  cannot receive new filesystem stream: destination 'zroot/hp-data/dummy' exists
  must specify -F to overwrite it
  root@VM-Host-Ryzen:/home/bertadmin# /sbin/zfs send -c dpool/dummy@191130 | 
ssh 192.168.1.100 zfs receive -F zroot/hp-data/dummy
  cannot receive new filesystem stream: dataset does not exist

  A 2nd subset of my backup is stored on the laptop and that still
  works. I also compared the properties with those of my laptop, that
  still has its original datapools of begin of the year. I aligned the
  properties of FreeBSD with those of my laptop, but it did not help.

  I attach the properties of the datapool and dataset from both FreeBSD
  and Ubuntu.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: zfsutils-linux 0.8.1-1ubuntu14.1
  ProcVersionSignature: Ubuntu 5.3.0-23.25-generic 5.3.7
  Uname: Linux 5.3.0-23-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu8.2
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Tue Dec  3 13:35:08 2019
  InstallationDate: Installed on 2019-11-30 (3 days ago)
  InstallationMedia: Ubuntu 19.10 "Eoan Ermine" - Release amd64 (20191017)
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.sudoers.d.zfs: [inaccessible: [Errno 13] Permission 
denied: '/etc/sudoers.d/zfs']

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1854982/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1854982] Re: Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

2020-01-28 Thread Richard Laager
So, one of two things is true:
A) ZFS on Linux is generating the stream incorrectly.
B) FreeBSD is receiving the stream incorrectly.

I don't have a good answer as to how we might differentiate those two.
Filing a bug report with FreeBSD might be a good next step. But like I
said, a compact reproducer would go a long way.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1854982

Title:
  Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  After I tried to back-up my datapools from Ubuntu 19.10 to FreeBSD
  12.0 as I have done since June each week, I found out it did not work
  anymore. The regression occurred after I reinstalled Ubuntu on my new
  nvme drive. I also had to reorganize my own datapools/datasets,
  because they either moved to the nvme drive or they had to be located
  on 2 HDDs instead of 3. I had one datapool that still works, the
  datapool containing my archives and it is the datapool, that has NOT
  been reorganized. I tried for a whole long day to get the backup
  working again, but I failed. I compared the properties from datapool
  and dataset, but did not see any problem there. Only a lot of new
  features and properties not present before and not present in FreeBSD.
  I used FreeBSD, because I use for backup an old 32-bits Pentium.

  I have two complaints:
  - the Ubuntu upgrade did cost me the compatibility with FreeBSD. Open-ZFS? :(
  - the system transfers the dataset and at the end of a long transfer it 
decides to quit and the error messages are completely useless and self 
contradicting. 

  On the first try it says the dataset does exist and on the second try it says 
it does NOT exist. One of the two is completely wrong. Some consistency and 
some clearer error messages would be helpful for the user.
  See the following set of strange set error messages on two tries:

  root@VM-Host-Ryzen:/home/bertadmin# /sbin/zfs send -c dpool/dummy@191130 | 
ssh 192.168.1.100 zfs receive zroot/hp-data/dummy
  cannot receive new filesystem stream: destination 'zroot/hp-data/dummy' exists
  must specify -F to overwrite it
  root@VM-Host-Ryzen:/home/bertadmin# /sbin/zfs send -c dpool/dummy@191130 | 
ssh 192.168.1.100 zfs receive -F zroot/hp-data/dummy
  cannot receive new filesystem stream: dataset does not exist

  A 2nd subset of my backup is stored on the laptop and that still
  works. I also compared the properties with those of my laptop, that
  still has its original datapools of begin of the year. I aligned the
  properties of FreeBSD with those of my laptop, but it did not help.

  I attach the properties of the datapool and dataset from both FreeBSD
  and Ubuntu.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: zfsutils-linux 0.8.1-1ubuntu14.1
  ProcVersionSignature: Ubuntu 5.3.0-23.25-generic 5.3.7
  Uname: Linux 5.3.0-23-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu8.2
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Tue Dec  3 13:35:08 2019
  InstallationDate: Installed on 2019-11-30 (3 days ago)
  InstallationMedia: Ubuntu 19.10 "Eoan Ermine" - Release amd64 (20191017)
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.sudoers.d.zfs: [inaccessible: [Errno 13] Permission 
denied: '/etc/sudoers.d/zfs']

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1854982/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1854982] Re: Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

2020-01-28 Thread Richard Laager
The last we heard on this, FreeBSD was apparently not receiving the send
stream, even though it supports large_dnode:

https://zfsonlinux.topicbox.com/groups/zfs-
discuss/T187d60c7257e2eb6-M14bb2d52d4d5c230320a4f56/feature-
incompatibility-between-ubuntu-19-10-and-freebsd-12-0

That's really bizarre. If it supports large_dnode, it should be able to
receive that stream. Ideally, this needs more troubleshooting,
particularly on the receive side. "It said (dataset does not exist)
after a long transfer." is not particularly clear. I'd like to see a
copy-and-paste of the actual `zfs recv` output, at a minimum.

@BertN45, if you want to keep troubleshooting, a good next step would be
to boil this down to a reproducible test case. That is, create a list of
specific commands to create dataset and send it that demonstrates the
problem. That would help. We may need to flesh out the reproducer a bit
more, e.g. by creating a pool on sparse files with particular feature
flags.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1854982

Title:
  Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  After I tried to back-up my datapools from Ubuntu 19.10 to FreeBSD
  12.0 as I have done since June each week, I found out it did not work
  anymore. The regression occurred after I reinstalled Ubuntu on my new
  nvme drive. I also had to reorganize my own datapools/datasets,
  because they either moved to the nvme drive or they had to be located
  on 2 HDDs instead of 3. I had one datapool that still works, the
  datapool containing my archives and it is the datapool, that has NOT
  been reorganized. I tried for a whole long day to get the backup
  working again, but I failed. I compared the properties from datapool
  and dataset, but did not see any problem there. Only a lot of new
  features and properties not present before and not present in FreeBSD.
  I used FreeBSD, because I use for backup an old 32-bits Pentium.

  I have two complaints:
  - the Ubuntu upgrade did cost me the compatibility with FreeBSD. Open-ZFS? :(
  - the system transfers the dataset and at the end of a long transfer it 
decides to quit and the error messages are completely useless and self 
contradicting. 

  On the first try it says the dataset does exist and on the second try it says 
it does NOT exist. One of the two is completely wrong. Some consistency and 
some clearer error messages would be helpful for the user.
  See the following set of strange set error messages on two tries:

  root@VM-Host-Ryzen:/home/bertadmin# /sbin/zfs send -c dpool/dummy@191130 | 
ssh 192.168.1.100 zfs receive zroot/hp-data/dummy
  cannot receive new filesystem stream: destination 'zroot/hp-data/dummy' exists
  must specify -F to overwrite it
  root@VM-Host-Ryzen:/home/bertadmin# /sbin/zfs send -c dpool/dummy@191130 | 
ssh 192.168.1.100 zfs receive -F zroot/hp-data/dummy
  cannot receive new filesystem stream: dataset does not exist

  A 2nd subset of my backup is stored on the laptop and that still
  works. I also compared the properties with those of my laptop, that
  still has its original datapools of begin of the year. I aligned the
  properties of FreeBSD with those of my laptop, but it did not help.

  I attach the properties of the datapool and dataset from both FreeBSD
  and Ubuntu.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: zfsutils-linux 0.8.1-1ubuntu14.1
  ProcVersionSignature: Ubuntu 5.3.0-23.25-generic 5.3.7
  Uname: Linux 5.3.0-23-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu8.2
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Tue Dec  3 13:35:08 2019
  InstallationDate: Installed on 2019-11-30 (3 days ago)
  InstallationMedia: Ubuntu 19.10 "Eoan Ermine" - Release amd64 (20191017)
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.sudoers.d.zfs: [inaccessible: [Errno 13] Permission 
denied: '/etc/sudoers.d/zfs']

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1854982/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1850130] Re: zpools fail to import after reboot on fresh install of eoan

2020-01-22 Thread Richard Laager
I think there are multiple issues here. If it's just multipath, that
issue should be resolved by adding After=multipathd.service to zfs-
import-{cache,scan}.service.
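
If anyone affected wants to test that before a package change lands, a
drop-in override should do it (the drop-in file name is my own choice;
repeat for zfs-import-scan.service):

  # create a drop-in that orders pool import after multipathd
  mkdir -p /etc/systemd/system/zfs-import-cache.service.d
  printf '[Unit]\nAfter=multipathd.service\n' > /etc/systemd/system/zfs-import-cache.service.d/multipath.conf
  systemctl daemon-reload

Then reboot and see whether the pool comes up on its own.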

For other issues, I wonder if this is cache file related. I'd suggest
checking that the cache file exists (I expect it would), and then
looking at the cache file (e.g. strings /etc/zfs/zpool.cache | less). I
suspect the issue is that the cache file has only the rpool. I'm not
entirely sure why that is happening.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1850130

Title:
  zpools fail to import after reboot on fresh install of eoan

Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  Fresh installation of stock Ubuntu 19.10 Eoan with experimental root on ZFS.
  System has existing zpools with data.

  Installation is uneventful. First boot with no problems. Updates
  applied. No other changes from fresh installation. Reboot.

  External pool 'tank' imports with no errors. Reboot.

  External pool has failed to import on boot. In contrast bpool and
  rpool are ok. Manually re-import 'tank' with no issues. I can see both
  'tank' and its path in /dev/disk/by-id/ in /etc/zfs/zpool.cache.
  Reboot.

  'tank' has failed to import on boot. It is also missing from
  /etc/zfs/zpool.cache. Is it possible that the cache is being re-
  generated on reboot, and the newly imported pools are getting erased
  from it? I can re-import the pools again manually with no issues, but
  they don't persist between re-boots.

  Installing normally on ext4 this is not an issue and data pools import
  automatically on boot with no further effort.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1850130/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1850130] Re: zpools fail to import after reboot on fresh install of eoan

2020-01-22 Thread Richard Laager
@gustypants: Sorry, the other one is scan, not pool. Are you using a
multipath setup? Does the pool import fine if you do it manually once
booted?

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1850130

Title:
  zpools fail to import after reboot on fresh install of eoan

Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  Fresh installation of stock Ubuntu 19.10 Eoan with experimental root on ZFS.
  System has existing zpools with data.

  Installation is uneventful. First boot with no problems. Updates
  applied. No other changes from fresh installation. Reboot.

  External pool 'tank' imports with no errors. Reboot.

  External pool has failed to import on boot. In contrast bpool and
  rpool are ok. Manually re-import 'tank' with no issues. I can see both
  'tank' and its path in /dev/disk/by-id/ in /etc/zfs/zpool.cache.
  Reboot.

  'tank' has failed to import on boot. It is also missing from
  /etc/zfs/zpool.cache. Is it possible that the cache is being re-
  generated on reboot, and the newly imported pools are getting erased
  from it? I can re-import the pools again manually with no issues, but
  they don't persist between re-boots.

  Installing normally on ext4 this is not an issue and data pools import
  automatically on boot with no further effort.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1850130/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1860228] Re: addition of zfsutils-linux scrib every 2nd sunday

2020-01-19 Thread Richard Laager
zfs-linux (0.6.5.6-2) unstable; urgency=medium
...
  * Scrub all healthy pools monthly from Richard Laager

So Debian stretch, but not Ubuntu 16.04.

Deleting the file should be safe; dpkg should remember that you deleted it. It sounds
like you never deleted it, as you didn’t have it before this upgrade. So
it wasn’t an issue of the file coming back, just appearing for the first
time. Deleting and editing conffiles is a standard thing in Debian
systems.
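
If you want to see what dpkg has on record, something like this lists the
package's conffiles (each output line is the path plus the md5sum dpkg is
tracking):

  dpkg-query -W -f='${Conffiles}\n' zfsutils-linux | grep cron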

These days, we may want to convert this to a systemd timer/service pair
instead, which you could then disable/mask if you don’t want. Of course,
the initial conversion will cause the same complaint you have here:
something changed on upgrade and enabled a job you don’t want.
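
As a rough sketch only (the unit names and calendar spec are mine, not
anything the package ships today), such a pair could look like:

  # zfs-scrub.timer
  [Unit]
  Description=Monthly scrub of healthy ZFS pools
  [Timer]
  # second Sunday of the month at 00:24, mirroring the current cron job
  OnCalendar=Sun *-*-8,9,10,11,12,13,14 00:24:00
  Persistent=true
  [Install]
  WantedBy=timers.target

  # zfs-scrub.service
  [Unit]
  Description=Scrub all healthy ZFS pools
  [Service]
  Type=oneshot
  ExecStart=/usr/lib/zfs-linux/scrub

Anyone who doesn't want it could then run "systemctl disable --now
zfs-scrub.timer" (or mask it) rather than fighting a conffile.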

** Changed in: zfs-linux (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1860228

Title:
  addition of zfsutils-linux scrib every 2nd sunday

Status in zfs-linux package in Ubuntu:
  Invalid

Bug description:
  FULL REF: https://github.com/zfsonlinux/zfs/issues/9858 (initially
  submitted there mistakenly)

  ==System information==
  {{{
  Type  Version/Name
  Distribution Name Ubuntu
  Distribution Version  18.04.3 LTS
  Linux Kernel  4.15.0-73-generic
  Architecture  x86_64
  ZFS Version   0.7.5-1ubuntu16.6
  SPL Version   0.7.5-1ubuntu2
  Describe the problem you're observing
  }}}

  ==When did this file get added into zfsonlinux ecosystem?==

  {{{
  # cat /etc/cron.d/zfsutils-linux
  PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

  # Scrub the second Sunday of every month.
  24 0 8-14 * * root [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ] 
&& /usr/lib/zfs-linux/scrub
  }}}

  I've been a ZoL user for many years now, and have had my own cron setup 
tailored to distribute pool scrubs once per month, spread across the month to 
avoid system I/O overload on any one day of the month, like this:
  {{{
  # zfsmain scrub: 2:30AM 7th of every month
  30 02 7 * * /sbin/zpool scrub zfsmain

  # zstorage scrub: 2:30AM 5th of every month
  30 02 5 * * /sbin/zpool scrub zstorage

  # zmedia scrub: 1:00AM 14th of every month
  00 01 14 * * /sbin/zpool scrub zmedia

  # zstorage scrub: 2:30AM 21st of every month
  30 02 21 * * /sbin/zpool scrub ztank
  }}}

  However suddenly I noticed in an adhoc check of zpool status that ALL my 
pools were scrubbed on Sunday January 12th 2020!
  {{{
  # zpool status | grep -i "scrub"
    scan: scrub repaired 0B in 0h13m with 0 errors on Sun Jan 12 00:37:31 2020
    scan: scrub repaired 0B in 11h24m with 0 errors on Tue Jan 14 12:24:33 2020 
 <-- (one of my own since the Jan12 mega scrub)
    scan: scrub repaired 0B in 0h45m with 0 errors on Sun Jan 12 01:09:40 2020
    scan: scrub repaired 0B in 7h10m with 0 errors on Sun Jan 12 07:34:11 2020
  }}}
  this is NOT what I had configured, so I went digging and found that zfsutil 
cron file :(
  {{{
  # cat /usr/lib/zfs-linux/scrub
  #!/bin/sh -eu

  # Scrub all healthy pools.
  zpool list -H -o health,name 2>&1 | \
   awk 'BEGIN {FS="\t"} {if ($1 ~ /^ONLINE/) print $2}' | \
  while read pool
  do
   zpool scrub "$pool"
  done

  # cat /etc/cron.d/zfsutils-linux
  PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

  # Scrub the second Sunday of every month.
  24 0 8-14 * * root [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ] 
&& /usr/lib/zfs-linux/scrub
  }}}

  This MAY be a desirable default for some system admins, but having it
  suddenly appear IMO is at the same time undesirable for many.

  ==Describe how to reproduce the problem==

  * Having been a ZoL user sys admin for many years.
  * Be a decent sys admin and know the basics, you've configured your own 
cron/schedule for pool scrubbing per guidelines and individual needs.
  * Follow stable upgrade channels. (for me this is Ubuntu LTS 16.04, then 
18.04)
  * Suddenly after some upgrade version XX(?) your pools start scrubbing on the 
2nd Sunday of every month without your having configured it or asked for that.

  
  ==Expectations==

  Hoping we can:

   1. Confirm when and why this was rolled out to all systems by default (does 
the explanation make sense? is it really OK to do this? how was it 
communicated?)
   2. Ensure "how to disable" is documented and supported (i.e. if I just 
delete that cron file, will some future upgrade replace and re-enable it?)

  

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: zfsutils-linux 0.7.5-1ubuntu16.7
  ProcVersionSignature: Ubuntu 4.15.0-73.82-generic 4.15.18
  Uname: Linux 4.15.0-73-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.9-0ubuntu7.10
  Architecture: amd64
  Date: Sat Jan 18 13:41:29 2020
  InstallationDate: Installed on 2015-06-28 (1664 days ago)
  In

[Kernel-packages] [Bug 1860228] Re: addition of zfsutils-linux scrib every 2nd sunday

2020-01-18 Thread Richard Laager
This was added a LONG time ago. The interesting question here is: if you
previously deleted it, why did it come back? Had you deleted it though?
It sounds like you weren’t aware of this file.

You might want to edit it in place, even just to comment out the job.
That would force dpkg to give you a conffile merge prompt instead of
being able to silently put it back.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1860228

Title:
  addition of zfsutils-linux scrib every 2nd sunday

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  FULL REF: https://github.com/zfsonlinux/zfs/issues/9858 (initially
  submitted there mistakenly)

  ==System information==
  {{{
  Type  Version/Name
  Distribution Name Ubuntu
  Distribution Version  18.04.3 LTS
  Linux Kernel  4.15.0-73-generic
  Architecture  x86_64
  ZFS Version   0.7.5-1ubuntu16.6
  SPL Version   0.7.5-1ubuntu2
  Describe the problem you're observing
  }}}

  ==When did this file get added into zfsonlinux ecosystem?==

  {{{
  # cat /etc/cron.d/zfsutils-linux
  PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

  # Scrub the second Sunday of every month.
  24 0 8-14 * * root [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ] 
&& /usr/lib/zfs-linux/scrub
  }}}

  I've been a ZoL user for many years now, and have had my own cron setup 
tailored to distribute pool scrubs once per month, spread across the month to 
avoid system I/O overload on any one day of the month, like this:
  {{{
  # zfsmain scrub: 2:30AM 7th of every month
  30 02 7 * * /sbin/zpool scrub zfsmain

  # zstorage scrub: 2:30AM 5th of every month
  30 02 5 * * /sbin/zpool scrub zstorage

  # zmedia scrub: 1:00AM 14th of every month
  00 01 14 * * /sbin/zpool scrub zmedia

  # zstorage scrub: 2:30AM 21st of every month
  30 02 21 * * /sbin/zpool scrub ztank
  }}}

  However suddenly I noticed in an adhoc check of zpool status that ALL my 
pools were scrubbed on Sunday January 12th 2020!
  {{{
  # zpool status | grep -i "scrub"
    scan: scrub repaired 0B in 0h13m with 0 errors on Sun Jan 12 00:37:31 2020
    scan: scrub repaired 0B in 11h24m with 0 errors on Tue Jan 14 12:24:33 2020 
 <-- (one of my own since the Jan12 mega scrub)
    scan: scrub repaired 0B in 0h45m with 0 errors on Sun Jan 12 01:09:40 2020
    scan: scrub repaired 0B in 7h10m with 0 errors on Sun Jan 12 07:34:11 2020
  }}}
  this is NOT what I had configured, so I went digging and found that zfsutil 
cron file :(
  {{{
  # cat /usr/lib/zfs-linux/scrub
  #!/bin/sh -eu

  # Scrub all healthy pools.
  zpool list -H -o health,name 2>&1 | \
   awk 'BEGIN {FS="\t"} {if ($1 ~ /^ONLINE/) print $2}' | \
  while read pool
  do
   zpool scrub "$pool"
  done

  # cat /etc/cron.d/zfsutils-linux
  PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

  # Scrub the second Sunday of every month.
  24 0 8-14 * * root [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ] 
&& /usr/lib/zfs-linux/scrub
  }}}

  This MAY be a desirable default for some system admins, but having it
  suddenly appear IMO is at the same time undesirable for many.

  ==Describe how to reproduce the problem==

  * Having been a ZoL user sys admin for many years.
  * Be a decent sys admin and know the basics, you've configured your own 
cron/schedule for pool scrubbing per guidelines and individual needs.
  * Follow stable upgrade channels. (for me this is Ubuntu LTS 16.04, then 
18.04)
  * Suddenly after some upgrade version XX(?) your pools start scrubbing on the 
2nd Sunday of every month without your having configured it or asked for that.

  
  ==Expectations==

  Hoping we can:

   1. Confirm when and why this was rolled out to all systems by default (does 
the explanation make sense? is it really OK to do this? how was it 
communicated?)
   2. Ensure "how to disable" is documented and supported (i.e. if I just 
delete that cron file, will some future upgrade replace and re-enable it?)

  

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: zfsutils-linux 0.7.5-1ubuntu16.7
  ProcVersionSignature: Ubuntu 4.15.0-73.82-generic 4.15.18
  Uname: Linux 4.15.0-73-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.9-0ubuntu7.10
  Architecture: amd64
  Date: Sat Jan 18 13:41:29 2020
  InstallationDate: Installed on 2015-06-28 (1664 days ago)
  InstallationMedia: Ubuntu-Server 14.04.2 LTS "Trusty Tahr" - Release amd64 
(20150218.1)
  SourcePackage: zfs-linux
  UpgradeStatus: Upgraded to bionic on 2019-05-10 (253 days ago)
  modified.conffile..etc.sudoers.d.zfs: [inaccessible: [Errno 13] Permission 
denied: '/etc/sudoers.d/zfs']

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1860228/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : 

[Kernel-packages] [Bug 1860182] Re: zpool scrub malfunction after kernel upgrade

2020-01-17 Thread Richard Laager
Your original scrub took just under 4.5 hours. Have you let the second
scrub run anywhere near that long? If not, start there.

The new scrub code uses a two-phase approach. First it works through
metadata determining what (on-disk) blocks to scrub. Second, it does the
actual scrub. This allows ZFS to coalesce the blocks and do large,
sequential reads in the second phase. This dramatically speeds up the
total scrub time. In contrast, the original scrub code is doing a lot of
small, random reads.

You might just be seeing the first phase completing in 5 minutes, but
the second phase still needs to occur. Or, maybe it did part of the
first phase but hit the RAM limit and needed to start the second phase.

If you've let it run for 4.5 hours and it's still showing that status,
then I'd say something is wrong.
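
If you want to keep an eye on it in the meantime, something like

  watch -n 300 zpool status storagepool1

(pool name taken from your output) will show whether the counters are
still moving.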

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1860182

Title:
  zpool scrub malfunction after kernel upgrade

Status in zfs-linux package in Ubuntu:
  Triaged

Bug description:
  I ran a zpool scrub prior to upgrading my 18.04 to the latest HWE
  kernel (5.3.0-26-generic #28~18.04.1-Ubuntu) and it ran properly:

  eric@eric-8700K:~$ zpool status
pool: storagepool1
   state: ONLINE
scan: scrub repaired 1M in 4h21m with 0 errors on Fri Jan 17 07:01:24 2020
  config:

NAME  STATE READ WRITE CKSUM
storagepool1  ONLINE   0 0 0
  mirror-0ONLINE   0 0 0
ata-WDC_WD20EZRZ-00Z5HB0_WD-WCC4M3YFRVJ3  ONLINE   0 0 0
ata-ST2000DM001-1CH164_Z1E285A4   ONLINE   0 0 0
  mirror-1ONLINE   0 0 0
ata-WDC_WD20EZRZ-00Z5HB0_WD-WCC4M1DSASHD  ONLINE   0 0 0
ata-ST2000DM006-2DM164_Z4ZA3ENE   ONLINE   0 0 0


  I ran zpool scrub after upgrading the kernel and rebooting, and now it
  fails to work properly. It appeared to finish in about 5 minutes but
  did not, and says it is going slow:


  eric@eric-8700K:~$ sudo zpool status
pool: storagepool1
   state: ONLINE
scan: scrub in progress since Fri Jan 17 15:32:07 2020
1.89T scanned out of 1.89T at 589M/s, (scan is slow, no estimated time)
0B repaired, 100.00% done
  config:

NAME  STATE READ WRITE CKSUM
storagepool1  ONLINE   0 0 0
  mirror-0ONLINE   0 0 0
ata-WDC_WD20EZRZ-00Z5HB0_WD-WCC4M3YFRVJ3  ONLINE   0 0 0
ata-ST2000DM001-1CH164_Z1E285A4   ONLINE   0 0 0
  mirror-1ONLINE   0 0 0
ata-WDC_WD20EZRZ-00Z5HB0_WD-WCC4M1DSASHD  ONLINE   0 0 0
ata-ST2000DM006-2DM164_Z4ZA3ENE   ONLINE   0 0 0

  errors: No known data errors

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: zfsutils-linux 0.7.5-1ubuntu16.7
  ProcVersionSignature: Ubuntu 5.3.0-26.28~18.04.1-generic 5.3.13
  Uname: Linux 5.3.0-26-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.9-0ubuntu7.9
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Fri Jan 17 16:22:01 2020
  InstallationDate: Installed on 2018-03-07 (681 days ago)
  InstallationMedia: Ubuntu 17.10 "Artful Aardvark" - Release amd64 (20180105.1)
  SourcePackage: zfs-linux
  UpgradeStatus: Upgraded to bionic on 2018-08-02 (533 days ago)
  modified.conffile..etc.sudoers.d.zfs: [inaccessible: [Errno 13] Permission 
denied: '/etc/sudoers.d/zfs']

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1860182/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1857398] Re: ubiquity should support encryption by default with zfsroot, with users able to opt in to running change-key after install

2020-01-17 Thread Richard Laager
We discussed this at the January 7th OpenZFS Leadership meeting. The
notes and video recording are now available.

The meeting notes are in the running document here (see page 2 right now, or 
search for this Launchpad bug number):
https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoLIFZAnWHhV-BM/edit

The video recording is here; the link starts you at 15:45 when we start 
discussing this:
https://youtu.be/x9-wua_mzt0?t=945

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1857398

Title:
  ubiquity should support encryption by default with zfsroot, with users
  able to opt in to running change-key after install

Status in ubiquity package in Ubuntu:
  New
Status in zfs-linux package in Ubuntu:
  New

Bug description:
  zfs supports built-in encryption support, but the decision of whether
  a pool is encrypted or not must be made at pool creation time; it is
  possible to add encrypted datasets on top of an unencrypted pool but
  it is not possible to do an online change of a dataset (or a whole
  pool) to toggle encryption.

  We should therefore always install with encryption enabled on zfs
  systems, with a non-secret key by default, and allow the user to use
  'zfs change-key -o keylocation=prompt' after install to take ownership
  of the encryption and upgrade the security.

  This is also the simplest way to allow users to avoid having to choose
  between the security of full-disk encryption, and the advanced
  filesystem features of zfs since it requires no additional UX work in
  ubiquity.

  We should make sure that
  https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1857040 is fixed
  first in the kernel so that enabling zfs encryption does not impose an
  unreasonable performance penalty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1857398/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1857398] Re: ubiquity should support encryption by default with zfsroot, with users able to opt in to running change-key after install

2020-01-10 Thread Richard Laager
> It is not appropriate to require the user to type a password on every
> boot by default; this must be opt-in.

Agreed.

The installer should prompt (with a checkbox) for whether the user wants
encryption. It should default to off. If the user selects the checkbox,
prompt them for a passphrase. Set up encryption using that passphrase.
This is exactly how the installer behaves today for non-ZFS (e.g. using
LUKS). I'm proposing to extend that existing behavior to ZFS. This should
be trivial to implement; I'm not sure if we still have time for 20.04,
but I'd really love to see at least this much implemented now.

What should happen if the user leaves the encryption box unchecked?
Currently, they get no encryption, and that's what I'm proposing
initially. You'd like to improve that so that the user can later set a
passphrase without having to reformat their disk. I agree that's a
reasonable goal.

I think the blockers / potential blockers are:

1) `zfs change-key` does not overwrite the old wrapped master key on
disk, so it is accessible to forensic analysis. Given that the old
wrapping key is a known passphrase ("ubuntuzfs"), another way of looking
at this is that the master key is still on disk in what is, security-
wise, effectively plaintext. I (and other upstream ZFS developers) are
concerned about giving the user a false sense of security in this
situation. ZFS could overwrite the key on disk when changed. If/when
someone adds that enhancement to `zfs change-key`, then I think this
objection goes away. I don't see this being implemented in time for
20.04.

2) Is the performance acceptable? On older systems without AES-NI, there
is a noticeable impact, which I've seen myself. I recommended using AES-
NI support as the deciding factor here... if they have AES-NI, then
encrypt (with a known passphrase) even if the user didn't opt-in; if
they don't have AES-NI, then not opting-in means encryption is really
off. If that inconsistency is a problem, then ultimately Ubuntu just has
to decide one way or the other. Personally, I'm a big fan of encryption,
so I'm not going to be upset if the decision is that the performance
impact on older hardware is just something to accept.
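
(As an aside, a rough way to check a given machine is to look for the
"aes" CPU flag, e.g.:

  grep -qw aes /proc/cpuinfo && echo "AES-NI present" || echo "AES-NI absent"

though the installer would presumably do that detection less crudely.)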

> > I would recommend setting encryption=aes256-gcm instead of
> > encryption=on (which is aes256-ccm).
> 
> I think the right way to handle this is to change the behavior of
> zfs-linux so that encryption=on defaults to the recommended algorithm -

Agreed. I proposed this at the last OpenZFS Leadership meeting and there
is general agreement to do so. It does need a bit more discussion and
then implementation (which should be trivial).

> rather than hard-coding the algorithm selection in ubiquity, which is
> generally speaking a good recipe for bit rot.

Given that I'd like to see encryption land in 20.04, I think it would be
reasonable to set -o encryption=aes-256-gcm today and then change it
(e.g. for 20.10) to "on" once the default changes in OpenZFS.
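
For illustration only (the device path and property set here are my
invention, not what ubiquity would necessarily run), the relevant part of
pool creation would be roughly:

  # placeholder device path; the installer would supply the real partition
  zpool create -o ashift=12 \
      -O encryption=aes-256-gcm \
      -O keyformat=passphrase -O keylocation=prompt \
      -O canmount=off -O mountpoint=/ \
      rpool /dev/disk/by-id/nvme-EXAMPLE-part4

with the passphrase the installer collected fed to the prompt.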

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1857398

Title:
  ubiquity should support encryption by default with zfsroot, with users
  able to opt in to running change-key after install

Status in ubiquity package in Ubuntu:
  New
Status in zfs-linux package in Ubuntu:
  New

Bug description:
  zfs supports built-in encryption support, but the decision of whether
  a pool is encrypted or not must be made at pool creation time; it is
  possible to add encrypted datasets on top of an unencrypted pool but
  it is not possible to do an online change of a dataset (or a whole
  pool) to toggle encryption.

  We should therefore always install with encryption enabled on zfs
  systems, with a non-secret key by default, and allow the user to use
  'zfs change-key -o keylocation=prompt' after install to take ownership
  of the encryption and upgrade the security.

  This is also the simplest way to allow users to avoid having to choose
  between the security of full-disk encryption, and the advanced
  filesystem features of zfs since it requires no additional UX work in
  ubiquity.

  We should make sure that
  https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1857040 is fixed
  first in the kernel so that enabling zfs encryption does not impose an
  unreasonable performance penalty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1857398/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1857398] Re: ubiquity should support encryption by default with zfsroot, with users able to opt in to running change-key after install

2020-01-06 Thread Richard Laager
I've given this a lot of thought. For what it's worth, if it were my
decision, I would first put your time into making a small change to the
installer to get the "encryption on" case perfect, rather than the
proposal in this bug.

The installer currently has:

 O Erase disk and install Ubuntu
   Warning: ...
   [] Encrypt the new Ubuntu installation for security
  You will choose a security key in the next step.
   [] Use LVM with the new Ubuntu installation
  This will set up Logical Volume Management. It allows taking
  snapshots and easier partition resizing.
 O EXPERIMENTAL: Erase disk and use ZFS
   Warning: This will delete all your files on all operating systems.
   This is experimental and may cause data loss. Do not use on
   production systems.
 O Something else
   ...

I would move the ZFS option to be a peer of / alternative to LVM
instead:

 O Erase disk and install Ubuntu
   Warning: ...
   [] Encrypt the new Ubuntu installation for security
  You will choose a security key in the next step.
   Volume Management:
 O  None (Fixed Partitions)
 O  Logical Volume Manager (LVM)
LVM allows taking snapshots and easier partition resizing.
 O  EXPERIMENTAL: ZFS
ZFS allows taking snapshots and dynamically shares space between
filesystems.
Warning: This is experimental and may cause data loss. Do not use
on production systems.
 O Something else
   ...

This is a very straightforward UI change. The only new combination
introduced with this UI is encryption on + ZFS, which is what we want.
In that scenario, run the same passphrase prompting screen that is used
now for LUKS. Then pass the passphrase to `zpool create` (and use
encryption=aes-256-gcm for the reasons already discussed).

If the "always enable encryption" feature is to future-proof for people
who would otherwise choose "no encryption", that's worth considering,
but if it's an alternative to prompting them in the installer, I'm
personally opposed.

However, we do need to consider why they're turning off encryption. Are
they saying, "I don't want encryption ever (e.g. because of the
performance penalty)." or "I don't care about encryption right now." If
you always enable encryption, you are forcing encryption on them, which
has real performance impacts on older hardware. For example, I just
yesterday upgraded my personal server to use ZFS encryption, but made a
media dataset that is unencrypted. Sustained writes to the media dataset
are _at least_ twice as fast. With encryption, I was CPU bound. With it
off, I was not, so I suspect I could have written even faster. This
system is older and does not have AES-NI.

You mentioned spinning disks. Perhaps I misunderstood, but I don't know
why you'd be asking about spinning disks in particular. They are slower
than SSDs, so encryption is less likely to be a concern there, not more.
My server scenario involved spinning disks.

If the old wrapped master key were overwritten when changed _and_ the
system has AES-NI instructions, then I think it would be reasonable to
make "no encryption" turn on encryption anyway with a fixed passphrase.
This would achieve the goal of allowing encryption to be enabled later.
But I think that is second in priority to handling the "encryption on"
case.
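
In that model, taking ownership later is the step already described in the
bug report, roughly (rpool standing in for whatever the encryption root
is):

  zfs change-key -o keyformat=passphrase -o keylocation=prompt rpool

which rewraps the master key under the user's own passphrase but, per the
above, does not currently destroy the old wrapped copy on disk.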

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1857398

Title:
  ubiquity should support encryption by default with zfsroot, with users
  able to opt in to running change-key after install

Status in ubiquity package in Ubuntu:
  New
Status in zfs-linux package in Ubuntu:
  New

Bug description:
  zfs supports built-in encryption support, but the decision of whether
  a pool is encrypted or not must be made at pool creation time; it is
  possible to add encrypted datasets on top of an unencrypted pool but
  it is not possible to do an online change of a dataset (or a whole
  pool) to toggle encryption.

  We should therefore always install with encryption enabled on zfs
  systems, with a non-secret key by default, and allow the user to use
  'zfs change-key -o keylocation=prompt' after install to take ownership
  of the encryption and upgrade the security.

  This is also the simplest way to allow users to avoid having to choose
  between the security of full-disk encryption, and the advanced
  filesystem features of zfs since it requires no additional UX work in
  ubiquity.

  We should make sure that
  https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1857040 is fixed
  first in the kernel so that enabling zfs encryption does not impose an
  unreasonable performance penalty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1857398/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : 

[Kernel-packages] [Bug 1850130] Re: zpools fail to import after reboot on fresh install of eoan

2020-01-03 Thread Richard Laager
Try adding "After=multipathd.service" to zfs-import-cache.service and
zfs-import-pool.service. If that fixes it, then we should probably add
that upstream.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1850130

Title:
  zpools fail to import after reboot on fresh install of eoan

Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  Fresh installation of stock Ubuntu 19.10 Eoan with experimental root on ZFS.
  System has existing zpools with data.

  Installation is uneventful. First boot with no problems. Updates
  applied. No other changes from fresh installation. Reboot.

  External pool 'tank' imports with no errors. Reboot.

  External pool has failed to import on boot. In contrast bpool and
  rpool are ok. Manually re-import 'tank' with no issues. I can see both
  'tank' and its path in /dev/disk/by-id/ in /etc/zfs/zpool.cache.
  Reboot.

  'tank' has failed to import on boot. It is also missing from
  /etc/zfs/zpool.cache. Is it possible that the cache is being re-
  generated on reboot, and the newly imported pools are getting erased
  from it? I can re-import the pools again manually with no issues, but
  they don't persist between re-boots.

  Installing normally on ext4 this is not an issue and data pools import
  automatically on boot with no further effort.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1850130/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1857398] Re: ubiquity should support encryption by default with zfsroot, with users able to opt in to running change-key after install

2019-12-27 Thread Richard Laager
I put these questions to Tom Caputi, who wrote the ZFS encryption. The
quoted text below is what I asked him, and the unquoted text is his
response:

> 1. Does ZFS rewrite the wrapped/encrypted master key in place? If
>not, the old master key could be retrieved off disk, decrypted
>with the known passphrase, and used to decrypt at least
>_existing_ data.

1) No. This is definitely an attack vector (although a very minor
   one). At the time we had said that we would revisit the idea of
   overwriting old keys when TRIM was added. That was several years ago
   and TRIM is now in. I will talk to Brian about it after I am back
   from the holiday.

> 2. Does a "zfs change-key" create a new master key? If not, the old
>master key could be used to decrypt _new_ data as well, at least
>until the master key is rotated.

2) zfs change-key does not create a new master key. It simply re-wraps
   the existing master key. The master keys are never rotated. The key
   rotation is done by using the master keys to generate new keys.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1857398

Title:
  ubiquity should support encryption by default with zfsroot, with users
  able to opt in to running change-key after install

Status in ubiquity package in Ubuntu:
  New
Status in zfs-linux package in Ubuntu:
  New

Bug description:
  zfs supports built-in encryption support, but the decision of whether
  a pool is encrypted or not must be made at pool creation time; it is
  possible to add encrypted datasets on top of an unencrypted pool but
  it is not possible to do an online change of a dataset (or a whole
  pool) to toggle encryption.

  We should therefore always install with encryption enabled on zfs
  systems, with a non-secret key by default, and allow the user to use
  'zfs change-key -o keylocation=prompt' after install to take ownership
  of the encryption and upgrade the security.

  This is also the simplest way to allow users to avoid having to choose
  between the security of full-disk encryption, and the advanced
  filesystem features of zfs since it requires no additional UX work in
  ubiquity.

  We should make sure that
  https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1857040 is fixed
  first in the kernel so that enabling zfs encryption does not impose an
  unreasonable performance penalty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1857398/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1857398] Re: ubiquity should support encryption by default with zfsroot, with users able to opt in to running change-key after install

2019-12-26 Thread Richard Laager
I have come up with a potential security flaw with this design:

The user installs Ubuntu with this fixed passphrase. This is used to
derive the "user key", which is used to encrypt the "master key", which
is used to encrypt their data. The encrypted version of the master key
is obviously written to disk.

Later, the user changes their passphrase. This rewraps the master key
with a new user key (derived from the new/real passphrase). It writes
that to disk. But, I presume that does NOT overwrite the old wrapped key
in place on disk. I don't actually know this, but I am assuming so based
on the general design of ZFS being copy-on-write. As far as I know, only
uberblocks are rewritten in place.

Therefore, it is possible for some indeterminate amount of time to read
the old wrapped master key off the disk, which can be decrypted using
the known passphrase. This gives the master key, which can then be used
to decrypt the _existing_ data.

If the master key is not rotated when using zfs change-key, then _new_
data can also be read for some indefinite period of time. I'm not 100%
sure whether change-key changes the master key or only the user key.
From the man page, it sounds like it does change the master key. It
says, "...use zfs change-key to break an existing relationship, creating
a new encryption root..."

I'll try to get a more clueful answer on these points.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1857398

Title:
  ubiquity should support encryption by default with zfsroot, with users
  able to opt in to running change-key after install

Status in ubiquity package in Ubuntu:
  New
Status in zfs-linux package in Ubuntu:
  New

Bug description:
  zfs supports built-in encryption support, but the decision of whether
  a pool is encrypted or not must be made at pool creation time; it is
  possible to add encrypted datasets on top of an unencrypted pool but
  it is not possible to do an online change of a dataset (or a whole
  pool) to toggle encryption.

  We should therefore always install with encryption enabled on zfs
  systems, with a non-secret key by default, and allow the user to use
  'zfs change-key -o keylocation=prompt' after install to take ownership
  of the encryption and upgrade the security.

  This is also the simplest way to allow users to avoid having to choose
  between the security of full-disk encryption, and the advanced
  filesystem features of zfs since it requires no additional UX work in
  ubiquity.

  We should make sure that
  https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1857040 is fixed
  first in the kernel so that enabling zfs encryption does not impose an
  unreasonable performance penalty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1857398/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1857398] Re: ubiquity should support encryption by default with zfsroot, with users able to opt in to running change-key after install

2019-12-26 Thread Richard Laager
Here are some quick performance comparisons:
https://github.com/zfsonlinux/zfs/pull/9749#issuecomment-569132997

In summary, "the GCM run is approximately 1.15 times faster than the CCM
run. Please also note that this PR doesn't improve AES-CCM performance,
so if this gets merged, the speed difference will be much larger."

I would recommend setting encryption=aes256-gcm instead of encryption=on
(which is aes256-ccm).

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1857398

Title:
  ubiquity should support encryption by default with zfsroot, with users
  able to opt in to running change-key after install

Status in ubiquity package in Ubuntu:
  New
Status in zfs-linux package in Ubuntu:
  New

Bug description:
  zfs supports built-in encryption support, but the decision of whether
  a pool is encrypted or not must be made at pool creation time; it is
  possible to add encrypted datasets on top of an unencrypted pool but
  it is not possible to do an online change of a dataset (or a whole
  pool) to toggle encryption.

  We should therefore always install with encryption enabled on zfs
  systems, with a non-secret key by default, and allow the user to use
  'zfs change-key -o keylocation=prompt' after install to take ownership
  of the encryption and upgrade the security.

  This is also the simplest way to allow users to avoid having to choose
  between the security of full-disk encryption, and the advanced
  filesystem features of zfs since it requires no additional UX work in
  ubiquity.

  We should make sure that
  https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1857040 is fixed
  first in the kernel so that enabling zfs encryption does not impose an
  unreasonable performance penalty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1857398/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1857398] Re: ubiquity should support encryption by default with zfsroot, with users able to opt in to running change-key after install

2019-12-23 Thread Richard Laager
This is an interesting approach. I figured the installer should prompt
for encryption, and it probably still should, but if the performance
impact is minimal, this does have the nice property of allowing for
enabling encryption post-install.

It might be worthwhile (after merging the SIMD fixes) to benchmark
aes256-ccm (the default) vs encryption=aes-256-gcm. I think GCM seems to
be preferred, security wise, in various places (though I don't
immediately have references) and may be faster. There's also an upstream
PR in progress that significantly improves AES-GCM:
https://github.com/zfsonlinux/zfs/pull/9749

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1857398

Title:
  ubiquity should support encryption by default with zfsroot, with users
  able to opt in to running change-key after install

Status in ubiquity package in Ubuntu:
  New
Status in zfs-linux package in Ubuntu:
  New

Bug description:
  zfs supports built-in encryption support, but the decision of whether
  a pool is encrypted or not must be made at pool creation time; it is
  possible to add encrypted datasets on top of an unencrypted pool but
  it is not possible to do an online change of a dataset (or a whole
  pool) to toggle encryption.

  We should therefore always install with encryption enabled on zfs
  systems, with a non-secret key by default, and allow the user to use
  'zfs change-key -o keylocation=prompt' after install to take ownership
  of the encryption and upgrade the security.

  This is also the simplest way to allow users to avoid having to choose
  between the security of full-disk encryption, and the advanced
  filesystem features of zfs since it requires no additional UX work in
  ubiquity.

  We should make sure that
  https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1857040 is fixed
  first in the kernel so that enabling zfs encryption does not impose an
  unreasonable performance penalty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1857398/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1856408] Re: zfs-initramfs needs to set FRAMEBUFFER=y

2019-12-14 Thread Richard Laager
Should it set KEYMAP=y too, like cryptsetup does?

I've created a PR upstream and done some light testing:
https://github.com/zfsonlinux/zfs/pull/9723

Are you able to confirm that this fixes the issue wherever you were
seeing it?
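
For reference, the snippet in question is tiny; mirroring the cryptsetup
one, it would be something like:

  # /usr/share/initramfs-tools/conf-hooks.d/zfs  (path mirrors cryptsetup's)
  FRAMEBUFFER=y
  KEYMAP=y

followed by "update-initramfs -u" to rebuild.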

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1856408

Title:
  zfs-initramfs needs to set FRAMEBUFFER=y

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  The poorly-named 'FRAMEBUFFER' option in initramfs-tools controls
  whether the console_setup and plymouth scripts are included and used
  in the initramfs.  These are required for any initramfs which will be
  prompting for user input: console_setup because without it the user's
  configured keymap will not be set up, and plymouth because you are not
  guaranteed to have working video output in the initramfs without it
  (e.g. some nvidia+UEFI configurations with the default GRUB behavior).

  The zfs initramfs script may need to prompt the user for passphrases
  for encrypted zfs datasets, and we don't know definitively whether
  this is the case or not at the time the initramfs is constructed (and
  it's difficult to dynamically populate initramfs config variables
  anyway), therefore the zfs-initramfs package should just set
  FRAMEBUFFER=yes in a conf snippet the same way that the cryptsetup-
  initramfs package does (/usr/share/initramfs-tools/conf-
  hooks.d/cryptsetup).

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1856408/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1854982] Re: Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

2019-12-04 Thread Richard Laager
I received the email of your latest comment, but oddly I’m not seeing it
here.

Before you go to all the work to rebuild the system, I think you should
do some testing to determine exactly what thing is breaking the send
stream compatibility. From your comment about your laptop, it sounds
like you think it is large_dnode. It really shouldn’t be large_dnode
because you said you have that feature on the receive side.

I would suggest creating some file-backed pools with different features.
You can do that with something like:

truncate -s 1G test1.img
zpool create test1 $(pwd)/test1.img

To adjust the features, add -d to disable all features and then add
various -o feature@something=enabled.

To actually use large dnodes, I believe you also have to set
dnodesize=auto on a filesystem, with either “zfs create -o” or, for the
root dataset, “zpool create -O” at the time of creation.
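
Putting those pieces together (the feature list here is illustrative, not
exhaustive), a throwaway pool that exercises large dnodes would be:

  truncate -s 1G test1.img
  zpool create -d -o feature@large_dnode=enabled -O dnodesize=auto \
      test1 $(pwd)/test1.img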

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1854982

Title:
  Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  After I tried to back-up my datapools from Ubuntu 19.10 to FreeBSD
  12.0 as I have done since June each week, I found out it did not work
  anymore. The regression occurred after I reinstalled Ubuntu on my new
  nvme drive. I also had to reorganize my own datapools/datasets,
  because they either moved to the nvme drive or they had to be located
  on 2 HDDs instead of 3. I had one datapool that still works, the
  datapool containing my archives and it is the datapool, that has NOT
  been reorganized. I tried for a whole long day to get the backup
  working again, but I failed. I compared the properties from datapool
  and dataset, but did not see any problem there. Only a lot of new
  features and properties not present before and not present in FreeBSD.
  I used FreeBSD, because I use for backup an old 32-bits Pentium.

  I have two complaints:
  - the Ubuntu upgrade did cost me the compatibility with FreeBSD. Open-ZFS? :(
  - the system transfers the dataset and at the end of a long transfer it 
decides to quit and the error messages are completely useless and self 
contradicting. 

  On the first try it says the dataset does exist and on the second try it says
it does NOT exist. One of the two is completely wrong. Some consistency and 
some clearer error messages would be helpful for the user.
  See the following set of strange set error messages on two tries:

  root@VM-Host-Ryzen:/home/bertadmin# /sbin/zfs send -c dpool/dummy@191130 | 
ssh 192.168.1.100 zfs receive zroot/hp-data/dummy
  cannot receive new filesystem stream: destination 'zroot/hp-data/dummy' exists
  must specify -F to overwrite it
  root@VM-Host-Ryzen:/home/bertadmin# /sbin/zfs send -c dpool/dummy@191130 | 
ssh 192.168.1.100 zfs receive -F zroot/hp-data/dummy
  cannot receive new filesystem stream: dataset does not exist

  A 2nd subset of my backup is stored on the laptop and that still
  works. I also compared the properties with those of my laptop, that
  still has its original datapools of begin of the year. I aligned the
  properties of FreeBSD with those of my laptop, but it did not help.

  I attach the properties of the datapool and dataset from both FreeBSD
  and Ubuntu.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: zfsutils-linux 0.8.1-1ubuntu14.1
  ProcVersionSignature: Ubuntu 5.3.0-23.25-generic 5.3.7
  Uname: Linux 5.3.0-23-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu8.2
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Tue Dec  3 13:35:08 2019
  InstallationDate: Installed on 2019-11-30 (3 days ago)
  InstallationMedia: Ubuntu 19.10 "Eoan Ermine" - Release amd64 (20191017)
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.sudoers.d.zfs: [inaccessible: [Errno 13] Permission 
denied: '/etc/sudoers.d/zfs']

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1854982/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1854982] Re: Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

2019-12-03 Thread Richard Laager
I'm not sure if userobj_accounting and/or project_quota have
implications for send stream compatibility, but my hunch is that they do
not. large_dnode is documented as being an issue, but since your
receiver supports that, that's not it.

I'm not sure what the issue is, nor what a good next step would be. You
might ask on IRC (#zfsonlinux on FreeNode) or the zfs-discuss mailing
list. See: https://github.com/zfsonlinux/zfs/wiki/Mailing-Lists

Not that it helps now, but this will get somewhat better in the future,
as FreeBSD is switching to the current ZFS-on-Linux codebase (soon to be
renamed OpenZFS) as its upstream. So Linux and FreeBSD will have feature
parity, outside of the usual time lag of release cycles.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1854982

Title:
  Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  After I tried to back-up my datapools from Ubuntu 19.10 to FreeBSD
  12.0 as I have done since June each week, I found out it did not work
  anymore. The regression occurred after I reinstalled Ubuntu on my new
  nvme drive. I also had to reorganize my own datapools/datasets,
  because they either moved to the nvme drive or they had to be located
  on 2 HDDs instead of 3. I had one datapool that still works, the
  datapool containing my archives and it is the datapool, that has NOT
  been reorganized. I tried for a whole long day to get the backup
  working again, but I failed. I compared the properties from datapool
  and dataset, but did not see any problem there. Only a lot of new
  features and properties not present before and not present in FreeBSD.
  I used FreeBSD, because I use for backup an old 32-bits Pentium.

  I have two complaints:
  - the Ubuntu upgrade did cost me the compatibility with FreeBSD. Open-ZFS? :(
  - the system transfers the dataset and at the end of a long transfer it 
decides to quit and the error messages are completely useless and self 
contradicting. 

  On the first try it says the dataset does exist and on the second try it says
it does NOT exist. One of the two is completely wrong. Some consistency and 
some clearer error messages would be helpful for the user.
  See the following set of strange set error messages on two tries:

  root@VM-Host-Ryzen:/home/bertadmin# /sbin/zfs send -c dpool/dummy@191130 | 
ssh 192.168.1.100 zfs receive zroot/hp-data/dummy
  cannot receive new filesystem stream: destination 'zroot/hp-data/dummy' exists
  must specify -F to overwrite it
  root@VM-Host-Ryzen:/home/bertadmin# /sbin/zfs send -c dpool/dummy@191130 | 
ssh 192.168.1.100 zfs receive -F zroot/hp-data/dummy
  cannot receive new filesystem stream: dataset does not exist

  A 2nd subset of my backup is stored on the laptop and that still
  works. I also compared the properties with those of my laptop, that
  still has its original datapools of begin of the year. I aligned the
  properties of FreeBSD with those of my laptop, but it did not help.

  I attach the properties of the datapool and dataset from both FreeBSD
  and Ubuntu.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: zfsutils-linux 0.8.1-1ubuntu14.1
  ProcVersionSignature: Ubuntu 5.3.0-23.25-generic 5.3.7
  Uname: Linux 5.3.0-23-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu8.2
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Tue Dec  3 13:35:08 2019
  InstallationDate: Installed on 2019-11-30 (3 days ago)
  InstallationMedia: Ubuntu 19.10 "Eoan Ermine" - Release amd64 (20191017)
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.sudoers.d.zfs: [inaccessible: [Errno 13] Permission 
denied: '/etc/sudoers.d/zfs']

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1854982/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1854982] Re: Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

2019-12-03 Thread Richard Laager
This is probably an issue of incompatible pool features.  Check what you
have active on the Ubuntu side:

zpool get all | grep feature | grep active

Then compare that to the chart here:
http://open-zfs.org/wiki/Feature_Flags

There is an as-yet-unimplemented proposal upstream to create a features
“mask” to limit the features to those with broad cross-platform support.

If it’s not a features issue, I think there was some unintentional send
compatibility break. I don’t have the specifics or a bug number, but a
friend ran into a similar issue with 18.04 sending to 16.04.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1854982

Title:
  Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  After I tried to back up my datapools from Ubuntu 19.10 to FreeBSD
  12.0, as I have done each week since June, I found out it did not work
  anymore. The regression occurred after I reinstalled Ubuntu on my new
  nvme drive. I also had to reorganize my own datapools/datasets,
  because they either moved to the nvme drive or they had to be located
  on 2 HDDs instead of 3. I had one datapool that still works: the
  datapool containing my archives, and it is the one that has NOT
  been reorganized. I tried for a whole long day to get the backup
  working again, but I failed. I compared the properties from datapool
  and dataset, but did not see any problem there. Only a lot of new
  features and properties not present before and not present in FreeBSD.
  I use FreeBSD because I use an old 32-bit Pentium for backups.

  I have two complaints:
  - the Ubuntu upgrade cost me compatibility with FreeBSD. OpenZFS? :(
  - the system transfers the dataset and, at the end of a long transfer,
decides to quit, and the error messages are completely useless and
self-contradictory.

  On the first try it says the dataset does exist and on the second try it
says it does NOT exist. One of the two is completely wrong. Some consistency
and clearer error messages would be helpful for the user.
  See the following set of strange error messages from the two tries:

  root@VM-Host-Ryzen:/home/bertadmin# /sbin/zfs send -c dpool/dummy@191130 | 
ssh 192.168.1.100 zfs receive zroot/hp-data/dummy
  cannot receive new filesystem stream: destination 'zroot/hp-data/dummy' exists
  must specify -F to overwrite it
  root@VM-Host-Ryzen:/home/bertadmin# /sbin/zfs send -c dpool/dummy@191130 | 
ssh 192.168.1.100 zfs receive -F zroot/hp-data/dummy
  cannot receive new filesystem stream: dataset does not exist

  A 2nd subset of my backup is stored on the laptop and that still
  works. I also compared the properties with those of my laptop, that
  still has its original datapools of begin of the year. I aligned the
  properties of FreeBSD with those of my laptop, but it did not help.

  I attach the properties of the datapool and dataset from both FreeBSD
  and Ubuntu.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: zfsutils-linux 0.8.1-1ubuntu14.1
  ProcVersionSignature: Ubuntu 5.3.0-23.25-generic 5.3.7
  Uname: Linux 5.3.0-23-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu8.2
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Tue Dec  3 13:35:08 2019
  InstallationDate: Installed on 2019-11-30 (3 days ago)
  InstallationMedia: Ubuntu 19.10 "Eoan Ermine" - Release amd64 (20191017)
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.sudoers.d.zfs: [inaccessible: [Errno 13] Permission 
denied: '/etc/sudoers.d/zfs']

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1854982/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1847389] Re: Prevent bpool (or pools with /BOOT/) to be upgraded

2019-11-27 Thread Richard Laager
If the pool has an _active_ (and not "read-only compatible") feature
that GRUB does not understand, then GRUB will (correctly) refuse to load
the pool. Accordingly, you will be unable to boot.

Some features go active immediately, and others need you to enable some
filesystem-level feature or take some other action to go from enabled to
active. The features that are left disabled in the upstream Root-on-ZFS
HOWTO (that I manage) are disabled because GRUB does not support them.
At best, you never use them and it's fine. At worst, you make one active
and then you can't boot. Since you can't use them without breaking
booting, there is no point in having them enabled.
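
To see where a boot pool stands (using the pool name bpool from this report;
the exact feature list depends on your ZFS version), something like this works:

# Prints each feature and its state: disabled, enabled, or active.
# Only an active feature that GRUB lacks support for prevents booting.
zpool get all bpool | awk '$2 ~ /^feature@/ {print $2, $3}'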

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1847389

Title:
  Prevent bpool (or pools with /BOOT/) to be upgraded

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  The bpool status is confusing. Should I upgrade the pool, or is it on
  purpose that the bpool is like this? I do not like to see this warning
  after installing the system on ZFS from scratch.

  See screenshot

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1847389/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1852854] Re: Update of zfs-linux fails

2019-11-18 Thread Richard Laager
** Changed in: zfs-linux (Ubuntu)
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1852854

Title:
  Update of zfs-linux fails

Status in zfs-linux package in Ubuntu:
  Incomplete

Bug description:
  I use Virtualbox for Ubuntu 19.10. I have an installation with two disks, one
to boot from ZFS and one to boot from ext4. The latter also has ZFS
installed, but the update of ZFS failed on many "directories not empty"
messages. I'm 74, but still learning, and one day ago I learned that initramfs
could take care of the mount, so I changed all canmount=on to canmount=noauto.
Afterwards the update of ZFS succeeded :)
  When I booted again from the ZFS disk, the boot went OK, but I ended up in a
login loop. The login loop disappeared when I set canmount=on again for
the user dataset. But that dataset was named in one of the error messages when
the ZFS update failed :(

  So I see two issues to be solved for 19.10 and 20.04.
  - for rpool/ROOT use canmount=noauto instead of canmount=on
  - for rpool/USERDATA. Good luck, since canmount=on is needed for the ZFS 
system and canmount=off or noauto is needed for the other system. 

  Some time ago I created another user on that ZFS system, a user
  without a dataset of their own, and that user did not suffer from a login loop.
  That user can be found in /home/user2, user2 being a normal folder; I
  hope that can be used.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: zfsutils-linux 0.8.1-1ubuntu14.1
  ProcVersionSignature: Ubuntu 5.3.0-23.25-generic 5.3.7
  Uname: Linux 5.3.0-23-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu8.2
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Sat Nov 16 09:58:03 2019
  InstallationDate: Installed on 2019-04-26 (203 days ago)
  InstallationMedia: Ubuntu 19.10 "Eoan EANIMAL" - Alpha amd64 (20190425)
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.sudoers.d.zfs: [inaccessible: [Errno 13] Permission 
denied: '/etc/sudoers.d/zfs']

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1852854/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1852854] Re: Update of zfs-linux fails

2019-11-17 Thread Richard Laager
Which specific filesystems are failing to mount?

Typically, this situation occurs because something is misconfigured, so
the mount fails, so files end up inside what should otherwise be empty
mountpoint directories. Then, even once the original problem is fixed,
the non-empty directories prevent ZFS from mounting on them. We already
know you had such an underlying issue, so there is a high likelihood
that this is what is happening here.

I’m on mobile now, but try something like:
zfs get -r canmount,mountpoint,mounted POOLNAME
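
To go one step further (the pool and path names below are hypothetical
examples, not taken from this report):

zfs get -r canmount,mountpoint,mounted rpool
# For any dataset shown as mounted=no, check whether its mountpoint already
# contains files; a non-empty directory is what blocks the ZFS mount:
ls -lA /home/user1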

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1852854

Title:
  Update of zfs-linux fails

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  I use Virtualbox for Ubuntu 19.10. I have an installation with two disks, one
to boot from ZFS and one to boot from ext4. The latter also has ZFS
installed, but the update of ZFS failed on many "directories not empty"
messages. I'm 74, but still learning, and one day ago I learned that initramfs
could take care of the mount, so I changed all canmount=on to canmount=noauto.
Afterwards the update of ZFS succeeded :)
  When I booted again from the ZFS disk, the boot went OK, but I ended up in a
login loop. The login loop disappeared when I set canmount=on again for
the user dataset. But that dataset was named in one of the error messages when
the ZFS update failed :(

  So I see two issues to be solved for 19.10 and 20.04.
  - for rpool/ROOT use canmount=noauto instead of canmount=on
  - for rpool/USERDATA. Good luck, since canmount=on is needed for the ZFS 
system and canmount=off or noauto is needed for the other system. 

  Some time ago I created another user on that ZFS system, a user
  without a dataset of their own, and that user did not suffer from a login loop.
  That user can be found in /home/user2, user2 being a normal folder; I
  hope that can be used.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: zfsutils-linux 0.8.1-1ubuntu14.1
  ProcVersionSignature: Ubuntu 5.3.0-23.25-generic 5.3.7
  Uname: Linux 5.3.0-23-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu8.2
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Sat Nov 16 09:58:03 2019
  InstallationDate: Installed on 2019-04-26 (203 days ago)
  InstallationMedia: Ubuntu 19.10 "Eoan EANIMAL" - Alpha amd64 (20190425)
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.sudoers.d.zfs: [inaccessible: [Errno 13] Permission 
denied: '/etc/sudoers.d/zfs']

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1852854/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1852793] Re: Various problems related to "zfs mount -a

2019-11-15 Thread Richard Laager
> I think "zfs mount -a" should NOT try to mount datasets with
> mountpoint "/"

There is no need for this to be (confusingly, IMHO) special-cased in
zfs mount.

You should set canmount=noauto on your root filesystems (the ones with
mountpoint=/). The initramfs handles mounting the selected root
filesystem.
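
For example (dataset names here are hypothetical; substitute the datasets that
currently have mountpoint=/):

zfs set canmount=noauto rpool/ROOT/ubuntu
zfs set canmount=noauto vms/roots/mate
# the initramfs still mounts whichever root filesystem you actually boot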

** Changed in: zfs-linux (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1852793

Title:
  Various problems related to "zfs mount -a

Status in zfs-linux package in Ubuntu:
  Invalid

Bug description:
  I have been booting from ZFS since the beginning of the year. I have a multi-boot situation
with:
  - Ubuntu 19.10 with Virtualbox booting from ZFS
  - Ubuntu Mate 19.10 with QEMU/KVM booting from ZFS
  - Ubuntu 19.10 booting from ext4

  I have two problems with ZFS:
  - the last update of ZFS failed because the dataset "/" was not empty. Of
course it was not empty; it contained the second OS, e.g. Ubuntu Mate.
  - during startup my datapools were not mounted; that is a regression I have
had for approximately a month.

  I can solve both by changing the mountpoint for the other system from
  "/" to e.g. "/systems/roots/mate". Afterwards the update was
  executed without problems and the system rebooted with the datapools
  mounted as expected.

  I think "zfs mount -a" should either
  - NOT try to mount datasets with mountpoint "/", except their own system or
  - change the error into a warning and continue the mount process or
  - set the mountpoints in /boot/grub.cfg by changing etc/grub.d/10_linux_zfs. 
The advantage of the last one is, that the other OS is accessible by a file 
manager, exactly like in an ext4 multi-boot situations.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1852793/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1852406] Re: Double-escape in initramfs DECRYPT_CMD

2019-11-13 Thread Richard Laager
The fix here seems fine, given that you're going for minimal impact in
an SRU. I agree that the character restrictions are such that the pool
names shouldn't actually need to be escaped. That's not to say that I
would remove the _proper_ quoting of variables that currently exists
upstream, as it's good shell programming practice to always quote
variables.
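
For reference, a sketch of the change under discussion (the first line is the
current code quoted in the bug description below; the second is my reading of
"removing the quotation marks", not the literal patch):

# before: plymouth ends up invoking: zfs load-key 'rpool'
DECRYPT_CMD="${ZFS} load-key '${ENCRYPTIONROOT}'"
# after: the inner single quotes around the pool name are dropped
DECRYPT_CMD="${ZFS} load-key ${ENCRYPTIONROOT}"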

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1852406

Title:
  Double-escape in initramfs DECRYPT_CMD

Status in zfs-linux package in Ubuntu:
  Fix Released
Status in zfs-linux source package in Eoan:
  In Progress
Status in zfs-linux source package in Focal:
  Fix Released

Bug description:
  == SRU Justification, Eoan ==

  initramfs/scripts/zfs.in incorrectly quotes ${ENCRYPTIONROOT} on line
  414:

  DECRYPT_CMD="${ZFS} load-key '${ENCRYPTIONROOT}'"

  This is OK when the line is executed by shell, such as in line 430 or
  436, but when plymouth is used it results in plymouth executing "zfs
  load-key 'rpool'" - and zfs  is unable to find pool called "'rpool'".

  If I understand
  https://docs.oracle.com/cd/E23824_01/html/821-1448/gbcpt.html
  correctly zfs pool name is always 'shell-friendly', so removing the
  quotation marks would be a proper fix for that.

  == Fix ==

  One line fix as attached in https://bugs.launchpad.net/ubuntu/+source
  /zfs-linux/+bug/1852406/comments/1

  == Test ==

  Boot with encrypted data set with plymouth. Without the fix zfs is
  unable to find the root encrypted pool. With the fix this works.

  == Regression Potential ==

  This just affects the encrypted dataset that holds key for root
  dataset; currently this is causing issues because of the bug, so the
  risk of the fix outweighs the current situation where this is
  currently broken.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1852406/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1847628] Re: When using swap in ZFS, system stops when you start using swap

2019-10-15 Thread Richard Laager
> "com.sun:auto-snapshot=false" do we need to add that or does our zfs
not support it?

You do not need that. That is used by some snapshot tools, but Ubuntu is
doing its own zsys thing.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1847628

Title:
  When using swap in ZFS, system stops when you start using swap

Status in Release Notes for Ubuntu:
  New
Status in Native ZFS for Linux:
  Unknown
Status in ubiquity package in Ubuntu:
  Incomplete
Status in zfs-linux package in Ubuntu:
  Incomplete

Bug description:
  # Problem

  When using swap in ZFS, system stops when you start using swap.

  > stress --vm 100

  If you run swapoff first, only the OOM killer triggers and the system does not stop.

  # Environment

  jehos@MacBuntu:~$ lsb_release -a
  No LSB modules are available.
  Distributor ID: Ubuntu
  Description:Ubuntu Eoan Ermine (development branch)
  Release:19.10
  Codename:   eoan

  jehos@MacBuntu:~$ dpkg -l | grep zfs
  ii  libzfs2linux    0.8.1-1ubuntu13  amd64  OpenZFS filesystem library for Linux
  ii  zfs-initramfs   0.8.1-1ubuntu13  amd64  OpenZFS root filesystem capabilities for Linux - initramfs
  ii  zfs-zed         0.8.1-1ubuntu13  amd64  OpenZFS Event Daemon
  ii  zfsutils-linux  0.8.1-1ubuntu13  amd64  command-line tools to manage OpenZFS filesystems

  jehos@MacBuntu:~$ uname -a
  Linux MacBuntu 5.3.0-13-generic #14-Ubuntu SMP Tue Sep 24 02:46:08 UTC 2019 
x86_64 x86_64 x86_64 GNU/Linux

  jehos@MacBuntu:~$ zpool list
  NAME    SIZE   ALLOC  FREE   CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
  bpool   1.88G  66.1M  1.81G  -        -         -     3%   1.00x  ONLINE  -
  rpool   230G   124G   106G   -        -         9%    53%  1.00x  ONLINE  -

  jehos@MacBuntu:~$ zfs get all rpool/swap
  NAME        PROPERTY              VALUE                 SOURCE
  rpool/swap  type                  volume                -
  rpool/swap  creation              목 10월 10 15:56 2019   -
  rpool/swap  used                  2.13G                 -
  rpool/swap  available             98.9G                 -
  rpool/swap  referenced            72K                   -
  rpool/swap  compressratio         1.11x                 -
  rpool/swap  reservation           none                  default
  rpool/swap  volsize               2G                    local
  rpool/swap  volblocksize          4K                    -
  rpool/swap  checksum              on                    default
  rpool/swap  compression           zle                   local
  rpool/swap  readonly              off                   default
  rpool/swap  createtxg             34                    -
  rpool/swap  copies                1                     default
  rpool/swap  refreservation        2.13G                 local
  rpool/swap  guid                  18209330213704683244  -
  rpool/swap  primarycache          metadata              local
  rpool/swap  secondarycache        none                  local
  rpool/swap  usedbysnapshots       0B                    -
  rpool/swap  usedbydataset         72K                   -
  rpool/swap  usedbychildren        0B                    -
  rpool/swap  usedbyrefreservation  2.13G                 -
  rpool/swap  logbias               throughput            local
  rpool/swap  objsetid              393                   -
  rpool/swap  dedup                 off                   default
  rpool/swap  mlslabel              none                  default
  rpool/swap  sync                  always                local
  rpool/swap  refcompressratio      1.11x                 -
  rpool/swap  written               72K                   -
  rpool/swap  logicalused           40K                   -
  rpool/swap  logicalreferenced     40K                   -
  rpool/swap  volmode               default               default
  rpool/swap  snapshot_limit        none                  default
  rpool/swap  snapshot_count        none                  default
  rpool/swap  snapdev               hidden                default
  rpool/swap  context               none                  default
  rpool/swap  fscontext             none                  default
  rpool/swap  defcontext            none                  default
  rpool/swap  rootcontext           none                  default
  rpool/swap  redundant_metadata    all                   default
  rpool/swap  encryption            off                   default
  rpool/swap  keylocation           none                  default
  rpool/swap  keyformat             none

[Kernel-packages] [Bug 1847628] Re: When using swap in ZFS, system stops when you start using swap

2019-10-14 Thread Richard Laager
** Also affects: ubiquity (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1847628

Title:
  When using swap in ZFS, system stops when you start using swap

Status in ubiquity package in Ubuntu:
  Confirmed
Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  # Problem

  When using swap in ZFS, system stops when you start using swap.

  > stress --vm 100

  If you run swapoff first, only the OOM killer triggers and the system does not stop.

  # Environment

  jehos@MacBuntu:~$ lsb_release -a
  No LSB modules are available.
  Distributor ID: Ubuntu
  Description:Ubuntu Eoan Ermine (development branch)
  Release:19.10
  Codename:   eoan

  jehos@MacBuntu:~$ dpkg -l | grep zfs
  ii  libzfs2linux    0.8.1-1ubuntu13  amd64  OpenZFS filesystem library for Linux
  ii  zfs-initramfs   0.8.1-1ubuntu13  amd64  OpenZFS root filesystem capabilities for Linux - initramfs
  ii  zfs-zed         0.8.1-1ubuntu13  amd64  OpenZFS Event Daemon
  ii  zfsutils-linux  0.8.1-1ubuntu13  amd64  command-line tools to manage OpenZFS filesystems

  jehos@MacBuntu:~$ uname -a
  Linux MacBuntu 5.3.0-13-generic #14-Ubuntu SMP Tue Sep 24 02:46:08 UTC 2019 
x86_64 x86_64 x86_64 GNU/Linux

  jehos@MacBuntu:~$ zpool list
  NAME    SIZE   ALLOC  FREE   CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
  bpool   1.88G  66.1M  1.81G  -        -         -     3%   1.00x  ONLINE  -
  rpool   230G   124G   106G   -        -         9%    53%  1.00x  ONLINE  -

  jehos@MacBuntu:~$ zfs get all rpool/swap
  NAME        PROPERTY              VALUE                 SOURCE
  rpool/swap  type                  volume                -
  rpool/swap  creation              목 10월 10 15:56 2019   -
  rpool/swap  used                  2.13G                 -
  rpool/swap  available             98.9G                 -
  rpool/swap  referenced            72K                   -
  rpool/swap  compressratio         1.11x                 -
  rpool/swap  reservation           none                  default
  rpool/swap  volsize               2G                    local
  rpool/swap  volblocksize          4K                    -
  rpool/swap  checksum              on                    default
  rpool/swap  compression           zle                   local
  rpool/swap  readonly              off                   default
  rpool/swap  createtxg             34                    -
  rpool/swap  copies                1                     default
  rpool/swap  refreservation        2.13G                 local
  rpool/swap  guid                  18209330213704683244  -
  rpool/swap  primarycache          metadata              local
  rpool/swap  secondarycache        none                  local
  rpool/swap  usedbysnapshots       0B                    -
  rpool/swap  usedbydataset         72K                   -
  rpool/swap  usedbychildren        0B                    -
  rpool/swap  usedbyrefreservation  2.13G                 -
  rpool/swap  logbias               throughput            local
  rpool/swap  objsetid              393                   -
  rpool/swap  dedup                 off                   default
  rpool/swap  mlslabel              none                  default
  rpool/swap  sync                  always                local
  rpool/swap  refcompressratio      1.11x                 -
  rpool/swap  written               72K                   -
  rpool/swap  logicalused           40K                   -
  rpool/swap  logicalreferenced     40K                   -
  rpool/swap  volmode               default               default
  rpool/swap  snapshot_limit        none                  default
  rpool/swap  snapshot_count        none                  default
  rpool/swap  snapdev               hidden                default
  rpool/swap  context               none                  default
  rpool/swap  fscontext             none                  default
  rpool/swap  defcontext            none                  default
  rpool/swap  rootcontext           none                  default
  rpool/swap  redundant_metadata    all                   default
  rpool/swap  encryption            off                   default
  rpool/swap  keylocation           none                  default
  rpool/swap  keyformat             none                  default
  rpool/swap  pbkdf2iters           0                     default

To manage notifications about this bug go to:

[Kernel-packages] [Bug 1848102] Re: ZFS Installer create ZVOL for swap

2019-10-14 Thread Richard Laager
*** This bug is a duplicate of bug 1847628 ***
https://bugs.launchpad.net/bugs/1847628

** This bug has been marked a duplicate of bug 1847628
   When using swap in ZFS, system stops when you start using swap

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1848102

Title:
  ZFS Installer create ZVOL for swap

Status in ubiquity package in Ubuntu:
  New
Status in zfs-linux package in Ubuntu:
  New

Bug description:
  I've been experiencing problems with a system installed via the new
  Ubiquity ZFS installer.

  When the system gets low on memory, it starts doing heavy IO (to my
  SSD in this case), and the system freezes for several minutes.

  My system has 16 GiB of RAM; the ARC uses roughly 4 GiB of that when
  the system freezes. I just need to have PhpStorm, IntelliJ and Chrome
  open to put enough memory pressure on it to make it freeze.

  This bug is reported at: https://bugs.launchpad.net/ubuntu/+source
  /zfs-linux/+bug/1847628

  I'm reporting this bug here as I believe the Ubiquity ZFS Installer
  should create a swap partition instead of making it a ZVOL.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1848102/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1847927] Re: Upgrading of 20191010 installed on ZFS will lead to "device-mapper: reload ioctl on osprober-linux-sda6 failed: Device or resource busy" and then to auto-removal of

2019-10-13 Thread Richard Laager
The osprober part is a duplicate of #1847632.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1847927

Title:
  Upgrading of 20191010 installed on ZFS will lead to "device-mapper:
  reload ioctl on osprober-linux-sda6  failed: Device or resource busy"
  and then to auto-removal of all zfs packages

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  Steps to reproduce:
  1. Install Ubuntu 19.10 from 20191010 ISO
  2. Run `sudo apt-get update` and `sudo apt-get upgrade`
  3. Note the following lines in the APT output:

  ...
  Warning: Couldn't find any valid initrd for dataset rpool/ROOT/ubuntu_69xc2t.
  Found memtest86+ image: /BOOT/ubuntu_69xc2t@/memtest86+.elf
  Found memtest86+ image: /BOOT/ubuntu_69xc2t@/memtest86+.bin
  device-mapper: reload ioctl on osprober-linux-sda6  failed: Device or 
resource busy
  Command failed.
  done
  ...
  device-mapper: reload ioctl on osprober-linux-sda6  failed: Device or 
resource busy
  Command failed.
  done

  4. Run `sudo apt-get autoremove --purge`

  $ sudo apt-get autoremove --purge
  [sudo] password for zfs: 
  Reading package lists... Done
  Building dependency tree   
  Reading state information... Done
  The following packages will be REMOVED:
libnvpair1linux* libuutil1linux* libzfs2linux* libzpool2linux* 
zfs-initramfs* zfs-zed* zfsutils-linux*
  0 upgraded, 0 newly installed, 7 to remove and 0 not upgraded.
  After this operation, 5 243 kB disk space will be freed.
  Do you want to continue? [Y/n] n
  Abort.

  
  Expected results:
  * upgrading of packages runs without errors

  Actual results:
  * the system will not boot if the user proceeds with the package autoremoval

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: libzfs2linux 0.8.1-1ubuntu14
  ProcVersionSignature: Ubuntu 5.3.0-13.14-generic 5.3.0
  Uname: Linux 5.3.0-13-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu8
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Sun Oct 13 22:51:39 2019
  InstallationDate: Installed on 2019-10-13 (0 days ago)
  InstallationMedia: Ubuntu 19.10 "Eoan Ermine" - Beta amd64 (20191010)
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1847927/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1847632] Re: Error during update ZFS installation and during update-grub

2019-10-10 Thread Richard Laager
osprober complaining about ZFS is a known issue. I don’t know if I
bothered to file a bug report, so this will probably be the report for
that.

Side question: where did you find an installer image with ZFS support? I
tried the daily yesterday but I had no ZFS option.

** Changed in: zfs-linux (Ubuntu)
   Status: New => Confirmed

** Summary changed:

- Error during update ZFS installation and during update-grub
+ osprober prints errors about ZFS partitions

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1847632

Title:
  osprober prints errors about ZFS partitions

Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  It is probably a minor problem. I ran an upgrade of my official ZFS
installation in a VM.
  It produced an error:

  device-mapper: reload ioctl on osprober-linux-sdb6  failed: Device or 
resource busy
  Command failed.

  I have a dual boot, so afterwards I booted from ext4 and ran
  update-grub, and that reported the same error.

  See the attachment: the first lines are the update-grub output and the
  lines afterwards are the last part of the zfs apt upgrade.

  I could boot the systems afterwards, and Linux 5.3.0-17 was loaded.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1847632/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1847637] Re: Strange output zfs list -t snapshots

2019-10-10 Thread Richard Laager
This is not a bug as far as I can see. This looks like the snapshot has
no unique data so its USED is 0. Note that REFER is non-zero.
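
A way to see this yourself (the dataset name is a hypothetical example):

zfs list -t snapshot -o name,used,referenced rpool/ROOT/ubuntu
# USED counts only space unique to that snapshot, so a snapshot taken while
# nothing has changed since the previous one shows USED = 0; it grows as the
# live dataset diverges. REFER counts everything the snapshot references.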

** Changed in: zfs-linux (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1847637

Title:
  Strange output zfs list -t snapshots

Status in zfs-linux package in Ubuntu:
  Invalid

Bug description:
  Yesterday I did a manual snapshot.
  Today I upgraded the system, and part of that large upgrade was Linux 5.3.0-17.
  Afterwards I took another snapshot.

  I expect the columns "used" and "refer" to contain realistic
  values, not "0" or some standard value. See screenshot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1847637/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1846424] Re: 19.10 ZFS Update failed on 2019-10-02

2019-10-09 Thread Richard Laager
You had a setup with multiple root filesystems which each had
canmount=on and mountpoint=/. So they both tried to automatically mount
at /. (When booting in the root-on-ZFS config, one was already mounted
as your root filesystem.) ZFS, unlike other Linux filesystems, refuses
to mount over non-empty directories. Thus, mounting over / fails.  Thus
`zfs mount -a` would fail, which was the underlying command for zfs-
mount.service. As a result of the mount failing, you got into a state
where some datasets mounted, but not all of them. As a result of this,
you had empty directories for some mountpoints in the wrong filesystems.
As a result, those empty directories continued to break `zfs mount -a`.

In your case, it's likely that the relevant directory was /vms/vms. This
was preventing you from mounting the vms dataset at /vms. To be
absolutely clear, this was because /vms was non-empty, because it
contained /vms/vms.

You first fixed the underlying issue with the root filesystems by
setting canmount=noauto on both of them. That still left the second
problem. Once you `rmdir`ed the directory(ies) that were in the way,
mounting works correctly.

Separately from the issues above, it's best practice to NOT store anything in 
the root dataset on a pool (the dataset with the same name as the pool, in this 
case "vms"), because that dataset can never be renamed. If you're not actually 
using the "vms" dataset itself, I suggest the following:
zfs unmount vms/vms
rmdir /vms/vms # this rmdir not required, but I like to cleanup completely
zfs unmount vms
zfs set canmount=off vms
zfs mount vms/vms

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1846424

Title:
  19.10 ZFS Update failed on 2019-10-02

Status in zfs-linux package in Ubuntu:
  Invalid

Bug description:
  On all my systems the update of zfs-initramfs_0.8.1-1ubuntu12_amd64.deb
failed; the same is true for zfs-zed and zfsutils-linux.
  The system still runs on 0.8.1-1ubuntu11_amd64.
  The first error message was about a failing mount, and at the end it
announced that all 3 modules were not updated.
  I have the error on Xubuntu 19.10 and Ubuntu Mate 19.10 on my laptop
(i5-2520M) and in a VBox VM with Ubuntu 19.10 on a Ryzen 3 2200G.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1846424/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1847548] Re: Update of zfs-linux fails

2019-10-09 Thread Richard Laager
The error is again related to something trying to mount at /. That means
you have something setup wrong. If it was setup properly, nothing should
be trying to _automatically_ (i.e. canmount=on) mount at /. (In a root-
on-ZFS setup, the root filesystem is canmount=noauto and mounted by the
initramfs, as this allows for multiple root filesystems e.g. for
rollbacks / multiple OSes.) Look at your mountpoints, figure out what
has mountpoint=/, and fix that one way or another.
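
A minimal way to find the offender (the pool name is a hypothetical example):

zfs list -r -o name,mountpoint,canmount rpool | awk '$2 == "/"'
# any dataset listed here with canmount=on (other than one mounted by the
# initramfs) is a candidate for: zfs set canmount=noauto <dataset>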

Also, if zfs-mount.service is failing, you should be having that failure
on boot anyway. This isn't particularly related to the update process.

** Changed in: zfs-linux (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1847548

Title:
  Update of zfs-linux fails

Status in zfs-linux package in Ubuntu:
  Invalid

Bug description:
  I often have problems with zfsutils-linux updates. In a Virtualbox VM I have
a dual boot from two disks:
  - one from ext4 (sda), the one with the failing update, and
  - another one from ZFS (sdb).

  The whole update process is far from robust; any fly on the road causes a car
crash.
  Why do datapools that have nothing to do with the update itself influence
the update?

  Last week another update failed because there were apparently empty
directories somewhere.
  Please avoid the dependencies on the whole ZFS world if I want to update 2
or 3 packages in an ext4 system.

  The attachment shows a re-update try-out, forced through the apt remove of a
non-existing package. It contains:
  - the second failed update
  - zpool status, which shows the new rpool of your standard ZFS boot, installed
yesterday

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: zfsutils-linux 0.8.1-1ubuntu13
  ProcVersionSignature: Ubuntu 5.3.0-13.14-generic 5.3.0
  Uname: Linux 5.3.0-13-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu7
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Wed Oct  9 18:28:17 2019
  InstallationDate: Installed on 2019-04-26 (166 days ago)
  InstallationMedia: Ubuntu 19.10 "Eoan EANIMAL" - Alpha amd64 (20190425)
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.sudoers.d.zfs: [inaccessible: [Errno 13] Permission 
denied: '/etc/sudoers.d/zfs']

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1847548/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1847105] Re: very slow disk creation, snapshotting

2019-10-09 Thread Richard Laager
I've commented upstream (with ZFS) that we should fake the pre-
allocation (i.e. return success from fallocate() when mode == 0) because
with ZFS it's worthless at best and counterproductive at worst:

https://github.com/zfsonlinux/zfs/issues/326#issuecomment-540162402

Replies (agreeing or disagreeing) are welcome there. If you only want to
say "I agree", please use the emoji button (the + icon) on my comment to
show your support rather than spamming the issue tracker with "me too"
type comments.
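
As a quick illustration of the underlying behavior (the path is a hypothetical
example on a ZFS dataset; util-linux fallocate(1) issues the same mode-0
fallocate(2) call that qemu-img relies on):

fallocate -l 1G /tank/testfile
# on ZFS <= 0.8 this is expected to fail with "Operation not supported"
# (EOPNOTSUPP); callers such as glibc's posix_fallocate() then fall back to
# writing the whole range out, which is what makes the preallocation so slow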

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1847105

Title:
  very slow disk creation, snapshotting

Status in virt-manager:
  Confirmed
Status in Native ZFS for Linux:
  Unknown
Status in libvirt package in Ubuntu:
  Triaged
Status in virt-manager package in Ubuntu:
  Triaged
Status in zfs-linux package in Ubuntu:
  New
Status in libvirt source package in Bionic:
  Invalid
Status in virt-manager source package in Bionic:
  Invalid
Status in zfs-linux source package in Bionic:
  New
Status in libvirt source package in Disco:
  Triaged
Status in virt-manager source package in Disco:
  Triaged
Status in zfs-linux source package in Disco:
  New

Bug description:
  This is a regression in eoan for me. I use virt-manager to create vms,
  and I noticed that creating one now takes more than a minute.

  Looking at the process listing while the backing disk is being created, I see 
this qemu-img command line:
  15658 ?Ssl0:00 /usr/sbin/libvirtd
  23726 ?Sl 0:04  \_ /usr/bin/qemu-img create -f qcow2 -o 
preallocation=falloc,compat=1.1,lazy_refcounts 
/var/lib/libvirt/images/live-server.qcow2 41943040K

  If I run qemu-img with that preallocation parameter set, even on
  bionic, then it also takes a very long time.

  On eoan, for comparison:
  andreas@nsn7:~$ time qemu-img create -f qcow2 no-prealloc-image.qcow2 40G
  Formatting 'no-prealloc-image.qcow2', fmt=qcow2 size=42949672960 
cluster_size=65536 lazy_refcounts=off refcount_bits=16

  real  0m0,016s
  user  0m0,010s
  sys   0m0,006s
  andreas@nsn7:~$ qemu-img info no-prealloc-image.qcow2 
  image: no-prealloc-image.qcow2
  file format: qcow2
  virtual size: 40G (42949672960 bytes)
  disk size: 17K
  cluster_size: 65536
  Format specific information:
  compat: 1.1
  lazy refcounts: false
  refcount bits: 16
  corrupt: false
  andreas@nsn7:~$ du -hs no-prealloc-image.qcow2 
  17K   no-prealloc-image.qcow2
  andreas@nsn7:~$ 

  
  and now with preallocation=falloc:
  andreas@nsn7:~$ time qemu-img create -f qcow2 -o preallocation=falloc 
with-prealloc-image.qcow2 40G
  Formatting 'with-prealloc-image.qcow2', fmt=qcow2 size=42949672960 
cluster_size=65536 preallocation=falloc lazy_refcounts=off refcount_bits=16

  real  1m43,196s
  user  0m3,564s
  sys   1m26,720s
  andreas@nsn7:~$ qemu-img info with-prealloc-image.qcow2 
  image: with-prealloc-image.qcow2
  file format: qcow2
  virtual size: 40G (42949672960 bytes)
  disk size: 2.7M
  cluster_size: 65536
  Format specific information:
  compat: 1.1
  lazy refcounts: false
  refcount bits: 16
  corrupt: false
  andreas@nsn7:~$ du -hs with-prealloc-image.qcow2 
  2,8M  with-prealloc-image.qcow2
  andreas@nsn7:~$

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: libvirt-daemon 5.4.0-0ubuntu5
  ProcVersionSignature: Ubuntu 5.3.0-13.14-generic 5.3.0
  Uname: Linux 5.3.0-13-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu7
  Architecture: amd64
  Date: Mon Oct  7 11:36:03 2019
  InstallationDate: Installed on 2019-10-07 (0 days ago)
  InstallationMedia: Ubuntu 19.10 "Eoan Ermine" - Beta amd64 (20191006)
  SourcePackage: libvirt
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.libvirt.nwfilter.allow-arp.xml: [inaccessible: [Errno 
13] Permission denied: '/etc/libvirt/nwfilter/allow-arp.xml']
  modified.conffile..etc.libvirt.nwfilter.allow-dhcp-server.xml: [inaccessible: 
[Errno 13] Permission denied: '/etc/libvirt/nwfilter/allow-dhcp-server.xml']
  modified.conffile..etc.libvirt.nwfilter.allow-dhcp.xml: [inaccessible: [Errno 
13] Permission denied: '/etc/libvirt/nwfilter/allow-dhcp.xml']
  modified.conffile..etc.libvirt.nwfilter.allow-incoming-ipv4.xml: 
[inaccessible: [Errno 13] Permission denied: 
'/etc/libvirt/nwfilter/allow-incoming-ipv4.xml']
  modified.conffile..etc.libvirt.nwfilter.allow-ipv4.xml: [inaccessible: [Errno 
13] Permission denied: '/etc/libvirt/nwfilter/allow-ipv4.xml']
  modified.conffile..etc.libvirt.nwfilter.clean-traffic-gateway.xml: 
[inaccessible: [Errno 13] Permission denied: 
'/etc/libvirt/nwfilter/clean-traffic-gateway.xml']
  modified.conffile..etc.libvirt.nwfilter.clean-traffic.xml: [inaccessible: 
[Errno 13] Permission denied: '/etc/libvirt/nwfilter/clean-traffic.xml']
  

[Kernel-packages] [Bug 1847497] Re: ZFS boot takes long time

2019-10-09 Thread Richard Laager
What is the installer doing for swap? The upstream HOWTO uses a zvol and
then this is necessary: “The RESUME=none is necessary to disable
resuming from hibernation. This does not work, as the zvol is not
present (because the pool has not yet been imported) at the time the
resume script runs. If it is not disabled, the boot process hangs for 30
seconds waiting for the swap zvol to appear.”

The installer probably should set RESUME=none unless it creates a swap
setup (i.e. partition) that would be compatible with hibernation.
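
A sketch of the workaround from the HOWTO, using the file path mentioned in
this report (run as root, then rebuild the initramfs):

echo RESUME=none > /etc/initramfs-tools/conf.d/resume
update-initramfs -u -k all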

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1847497

Title:
  ZFS boot takes long time

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  After installing Ubuntu 19.10 on ZFS, the boot process is slow,
  because the following file is empty: /etc/initramfs-
  tools/conf.d/resume

  I have added RESUME=none

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1847497/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1847389] Re: Confusing zpool status in Ubuntu 19.10 installed onZFS

2019-10-08 Thread Richard Laager
Do NOT upgrade your bpool.

The dangerous warning is a known issue. There has been talk of an
upstream feature that would allow a nice fix for this, but nobody has
taken up implementing it yet. I wonder how hard it would be to
temporarily patch zpool status / zpool upgrade to not warn about /
upgrade a pool named bpool.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1847389

Title:
  Confusing zpool status in Ubuntu 19.10 installed onZFS

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  The bpool status is confusing. Should I upgrade the pool, or is it on
  purpose that the bpool is like this? I do not like to see this warning
  after installing the system on ZFS from scratch.

  See screenshot

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1847389/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1846424] Re: 19.10 ZFS Update failed on 2019-10-02

2019-10-08 Thread Richard Laager
That has the same error so you are using the same two pools. Please
follow the instructions I’ve given and fix this once so you are in a
fully working state. Once things are working, then you can retry
whatever upgrade steps you think break it.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1846424

Title:
  19.10 ZFS Update failed on 2019-10-02

Status in zfs-linux package in Ubuntu:
  Invalid

Bug description:
  On all my systems the update of zfs-initramfs_0.8.1-1ubuntu12_amd64.deb
failed; the same is true for zfs-zed and zfsutils-linux.
  The system still runs on 0.8.1-1ubuntu11_amd64.
  The first error message was about a failing mount, and at the end it
announced that all 3 modules were not updated.
  I have the error on Xubuntu 19.10 and Ubuntu Mate 19.10 on my laptop
(i5-2520M) and in a VBox VM with Ubuntu 19.10 on a Ryzen 3 2200G.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1846424/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1846424] Re: 19.10 ZFS Update failed on 2019-10-02

2019-10-08 Thread Richard Laager
The size of the pool is not particularly relevant. It sounds like you
think I'm asking you to backup and restore your pool, which I definitely
am not. A pool "import" is somewhat like "mounting" a pool (though it's
not literally mounting, because mounting is something that happens with
filesystems). An "export" is similarly like unmounting.

As I said, those instructions were intended to be a _safe_ way for you
to return your system to working order. It's likely not necessary to
boot into recovery mode, or export the pool to resolve this.

The key thing is that /vms and /hp-data are non-empty. Fix that and your
mounting issue will be resolved. I was suggesting you go into recovery
mode and export the data pool to avoid the potential for accidents. But
really, if you only use `rmdir` rather than `rm -rf`, you should be
safe.

Root-on-ZFS installs are not supported by Ubuntu or Canonical. Canonical
is working on providing experimental support for Root-on-ZFS in the
installer for 19.10, which should be out this month.

I do not work for Canonical. I provide the upstream ZFS-on-Linux Root-
on-ZFS HOWTOs for Debian and Ubuntu as a volunteer project. I have never
guaranteed any in-place upgrade path. And you likely didn't follow my
HOWTO anyway, as your dataset naming and mount properties don't match
those from the HOWTO.

I certainly agree that this is fragile. But it's an unofficial,
experimental setup. That comes with the territory. Important progress is
being made, and this will hopefully be more robust in the future.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1846424

Title:
  19.10 ZFS Update failed on 2019-10-02

Status in zfs-linux package in Ubuntu:
  Invalid

Bug description:
  On all my systems the update of zfs-initramfs_0.8.1-1ubuntu12_amd64.deb
failed; the same is true for zfs-zed and zfsutils-linux.
  The system still runs on 0.8.1-1ubuntu11_amd64.
  The first error message was about a failing mount, and at the end it
announced that all 3 modules were not updated.
  I have the error on Xubuntu 19.10 and Ubuntu Mate 19.10 on my laptop
(i5-2520M) and in a VBox VM with Ubuntu 19.10 on a Ryzen 3 2200G.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1846424/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1846424] Re: 19.10 ZFS Update failed on 2019-10-02

2019-10-08 Thread Richard Laager
As the error message indicates, /vms and /hp-data are not empty. ZFS, by
default, will not mount over non-empty directories.

There are many ways to fix this, but here's something that is probably
the safest:

Boot up in rescue mode. If it is imported, export the hp-data pool with
`zpool export hp-data`. See what is mounted `cat /proc/mounts`. Unmount
anything ZFS other than the root filesystem. At that point, you should
be able to rmdir everything under /vms and /hp-data, as all they should
have are empty directories. Then, try remounting the rest of the
datasets in the vms pool with `zfs mount -a`. It should work this time.
If so, you should be able to re-run your apt command(s) to get apt &
dpkg into a happy state. Assuming that works, you can re-import the hp-
data pool with `zpool import hp-data`. Then, if you didn't get any
errors, reboot.

In any event, this is not a bug.

** Changed in: zfs-linux (Ubuntu)
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1846424

Title:
  19.10 ZFS Update failed on 2019-10-02

Status in zfs-linux package in Ubuntu:
  Invalid

Bug description:
  On all my systems the update of zfs-initramfs_0.8.1-1ubuntu12_amd64.deb
failed; the same is true for zfs-zed and zfsutils-linux.
  The system still runs on 0.8.1-1ubuntu11_amd64.
  The first error message was about a failing mount, and at the end it
announced that all 3 modules were not updated.
  I have the error on Xubuntu 19.10 and Ubuntu Mate 19.10 on my laptop
(i5-2520M) and in a VBox VM with Ubuntu 19.10 on a Ryzen 3 2200G.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1846424/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1846424] Re: 19.10 ZFS Update failed on 2019-10-02

2019-10-07 Thread Richard Laager
You have two datasets with mountpoint=/ (and canmount=on) which is going
to cause problems like this.

vms/roots/mate-1804 mountpoint  / local
vms/roots/mate-1804 canmountondefault
vms/roots/xubuntu-1804  mountpoint  / local
vms/roots/xubuntu-1804  canmountondefault

The normal setup for root-on-ZFS is for the initramfs to mount the root
filesystem, so it is supposed to be canmount=noauto.

You should do something like:
zfs set canmount=noauto vms/roots/mate-1804
zfs set canmount=noauto vms/roots/xubuntu-1804

Also, this too, though it isn't actually a problem here:
zfs set canmount=off vms/roots
zfs set mountpoint=none vms/roots

** Changed in: zfs-linux (Ubuntu)
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1846424

Title:
  19.10 ZFS Update failed on 2019-10-02

Status in zfs-linux package in Ubuntu:
  Invalid

Bug description:
  On all my systems the update of zfs-initramfs_0.8.1-1ubuntu12_amd64.deb
failed; the same is true for zfs-zed and zfsutils-linux.
  The system still runs on 0.8.1-1ubuntu11_amd64.
  The first error message was about a failing mount, and at the end it
announced that all 3 modules were not updated.
  I have the error on Xubuntu 19.10 and Ubuntu Mate 19.10 on my laptop
(i5-2520M) and in a VBox VM with Ubuntu 19.10 on a Ryzen 3 2200G.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1846424/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1846424] Re: 19.10 ZFS Update failed on 2019-10-02

2019-10-07 Thread Richard Laager
Can you provide the following details on your datasets' mountpoints.
zfs get mountpoint,canmount -t filesystem

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1846424

Title:
  19.10 ZFS Update failed on 2019-10-02

Status in zfs-linux package in Ubuntu:
  In Progress

Bug description:
  On all my systems the update of zfs-initramfs_0.8.1-1ubuntu12_amd64.deb
failed; the same is true for zfs-zed and zfsutils-linux.
  The system still runs on 0.8.1-1ubuntu11_amd64.
  The first error message was about a failing mount, and at the end it
announced that all 3 modules were not updated.
  I have the error on Xubuntu 19.10 and Ubuntu Mate 19.10 on my laptop
(i5-2520M) and in a VBox VM with Ubuntu 19.10 on a Ryzen 3 2200G.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1846424/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1843296] Re: ZFS auto scrub after upgrade to Xubuntu 19.10

2019-09-10 Thread Richard Laager
I’m not aware of anything new starting scrubs. Scrubs are throttled and
usually the complaint is that they are throttled too much, not too
little. Having two pools on the same disk is likely the issue. That
should be avoided, with the exception of a small boot pool on the same
disk as the root pool for root on ZFS setups. Is that your multiple
pools situation, or are you doing something else?
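
For reference, the commands involved in what the reporter did (the pool name
is a hypothetical example):

zpool status            # shows whether a scrub is running and its progress
zpool scrub -s rpool    # stop an in-progress scrub on one pool
zpool scrub rpool       # start it again later, one pool at a time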

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1843296

Title:
  ZFS auto scrub after upgrade to Xubuntu 19.10

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  I have upgraded my dual boot laptop (Xubuntu 19.04 on ZFS) to 19.10; I
  only had a minor problem:

  On boot, the system started a scrub of all datapools automatically. That
was unexpected, and it took a couple of minutes before I realized why the
system was that effing slow. There is a small chance that the monthly
auto-scrub was due. If it is part of the upgrade, do not do it without at least
a notification and a confirmation from the user. Never start it during the boot
process; please wait a few minutes. The same remarks are valid for the monthly
default auto-scrub.
  Not even Microsoft is allowed to monopolize my system for a long time without
telling me and without my permission; that is partly why I moved to Linux :) :)

  I detected it relatively fast on my conky display, but otherwise?
  Of course I stopped the scrubs of the two datapools on the same SSHD and
restarted them one by one.
  I'm happier with the SSHD using ZFS and LZ4 compression.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1843296/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1843298] Re: Upgrade of datapool to ZFS 0.8 creates a problem if dual booting

2019-09-10 Thread Richard Laager
This is a known issue which will hopefully be improved by 20.04 or so.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1843298

Title:
  Upgrade of datapool to ZFS 0.8 creates a problem if dual booting

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  I have upgraded my dual boot laptop (Xubuntu 19.04 on ZFS) to 19.10; I
  only had a minor problem:

  The system started scrubbing the existing datapools.
  During the scrub I was notified that my datapool did not support all new
features and that I had to upgrade the datapool. Of course I followed that
advice enthusiastically, and now the second 18.04.3 OS does not recognize the
upgraded datapool anymore. I found out I can't disable the culprit features.

  I think there should be a more specific warning in "zpool upgrade"
  for stupids like me. Only upgrade the datapool if all OSes on the PC
  have been upgraded to ZFS 0.8.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1843298/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1838276] Re: zfs-module depedency selects random kernel package to install

2019-07-30 Thread Richard Laager
I closed this as requested, but I'm actually going to reopen it to see
what people think about the following...

Is there a "default" kernel in Ubuntu? I think there is, probably linux-
generic.

So perhaps this dependency should be changed:
OLD: zfs-modules | zfs-dkms
NEW: linux-generic | zfs-modules | zfs-dkms

That way, if you have something satisfying the zfs-modules dependency,
it is fine. If you don't, it will install the default kernel.

On the other hand, if you don't already have the default kernel, you're
clearly in some sort of special case, so I'm not sure what sane thing
can be done. So that might argue against this.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1838276

Title:
  zfs-module depedency selects random kernel package to install

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  In MAAS (ephemeral environment) or LXD, where no kernel package is
  currently installed, installing the zfsutils-linux package will pull
  in a kernel package via the zfs-modules dependency.

  
  1) # lsb_release -rd
  Description:  Ubuntu Eoan Ermine (development branch)
  Release:  19.10

  2) n/a

  3) zfsutils-linux installed without pulling in a random kernel

  4) # apt install --dry-run zfsutils-linux
  Reading package lists... Done
  Building dependency tree   
  Reading state information... Done
  The following additional packages will be installed:
grub-common grub-gfxpayload-lists grub-pc grub-pc-bin grub2-common 
libzfs2linux libzpool2linux
linux-image-unsigned-5.0.0-1010-oem-osp1 linux-modules-5.0.0-1010-oem-osp1 
os-prober zfs-zed
  Suggested packages:
multiboot-doc grub-emu xorriso desktop-base fdutils linux-oem-osp1-tools 
linux-headers-5.0.0-1010-oem-osp1
nfs-kernel-server samba-common-bin zfs-initramfs | zfs-dracut
  The following NEW packages will be installed:
grub-common grub-gfxpayload-lists grub-pc grub-pc-bin grub2-common 
libzfs2linux libzpool2linux
linux-image-unsigned-5.0.0-1010-oem-osp1 linux-modules-5.0.0-1010-oem-osp1 
os-prober zfs-zed zfsutils-linux
  0 upgraded, 12 newly installed, 0 to remove and 1 not upgraded.
  Inst grub-common (2.04-1ubuntu2 Ubuntu:19.10/eoan [amd64])
  Inst grub2-common (2.04-1ubuntu2 Ubuntu:19.10/eoan [amd64])
  Inst grub-pc-bin (2.04-1ubuntu2 Ubuntu:19.10/eoan [amd64])
  Inst grub-pc (2.04-1ubuntu2 Ubuntu:19.10/eoan [amd64]) []
  Inst grub-gfxpayload-lists (0.7 Ubuntu:19.10/eoan [amd64])
  Inst linux-modules-5.0.0-1010-oem-osp1 (5.0.0-1010.11 Ubuntu:19.10/eoan 
[amd64])
  Inst linux-image-unsigned-5.0.0-1010-oem-osp1 (5.0.0-1010.11 
Ubuntu:19.10/eoan [amd64])
  Inst os-prober (1.74ubuntu2 Ubuntu:19.10/eoan [amd64])
  Inst libzfs2linux (0.8.1-1ubuntu7 Ubuntu:19.10/eoan [amd64])
  Inst libzpool2linux (0.8.1-1ubuntu7 Ubuntu:19.10/eoan [amd64])
  Inst zfsutils-linux (0.8.1-1ubuntu7 Ubuntu:19.10/eoan [amd64])
  Inst zfs-zed (0.8.1-1ubuntu7 Ubuntu:19.10/eoan [amd64])
  Conf grub-common (2.04-1ubuntu2 Ubuntu:19.10/eoan [amd64])
  Conf grub2-common (2.04-1ubuntu2 Ubuntu:19.10/eoan [amd64])
  Conf grub-pc-bin (2.04-1ubuntu2 Ubuntu:19.10/eoan [amd64])
  Conf grub-pc (2.04-1ubuntu2 Ubuntu:19.10/eoan [amd64])
  Conf grub-gfxpayload-lists (0.7 Ubuntu:19.10/eoan [amd64])
  Conf linux-modules-5.0.0-1010-oem-osp1 (5.0.0-1010.11 Ubuntu:19.10/eoan 
[amd64])
  Conf linux-image-unsigned-5.0.0-1010-oem-osp1 (5.0.0-1010.11 
Ubuntu:19.10/eoan [amd64])
  Conf os-prober (1.74ubuntu2 Ubuntu:19.10/eoan [amd64])
  Conf libzfs2linux (0.8.1-1ubuntu7 Ubuntu:19.10/eoan [amd64])
  Conf libzpool2linux (0.8.1-1ubuntu7 Ubuntu:19.10/eoan [amd64])
  Conf zfsutils-linux (0.8.1-1ubuntu7 Ubuntu:19.10/eoan [amd64])
  Conf zfs-zed (0.8.1-1ubuntu7 Ubuntu:19.10/eoan [amd64])

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1838276/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1838276] Re: zfs-module depedency selects random kernel package to install

2019-07-29 Thread Richard Laager
What was the expected behavior from your perspective?

The ZFS utilities are useless without a ZFS kernel module. It seems to
me that this is working fine, and installing the ZFS utilities in this
environment doesn’t make sense.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1838276

Title:
  zfs-module depedency selects random kernel package to install

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  In MAAS (ephemeral environment) or LXD, where no kernel package is
  currently installed, installing the zfsutils-linux package will pull
  in a kernel package via the zfs-modules dependency.

  
  1) # lsb_release -rd
  Description:  Ubuntu Eoan Ermine (development branch)
  Release:  19.10

  2) n/a

  3) zfsutils-linux installed without pulling in a random kernel

  4) # apt install --dry-run zfsutils-linux
  Reading package lists... Done
  Building dependency tree   
  Reading state information... Done
  The following additional packages will be installed:
grub-common grub-gfxpayload-lists grub-pc grub-pc-bin grub2-common 
libzfs2linux libzpool2linux
linux-image-unsigned-5.0.0-1010-oem-osp1 linux-modules-5.0.0-1010-oem-osp1 
os-prober zfs-zed
  Suggested packages:
multiboot-doc grub-emu xorriso desktop-base fdutils linux-oem-osp1-tools 
linux-headers-5.0.0-1010-oem-osp1
nfs-kernel-server samba-common-bin zfs-initramfs | zfs-dracut
  The following NEW packages will be installed:
grub-common grub-gfxpayload-lists grub-pc grub-pc-bin grub2-common 
libzfs2linux libzpool2linux
linux-image-unsigned-5.0.0-1010-oem-osp1 linux-modules-5.0.0-1010-oem-osp1 
os-prober zfs-zed zfsutils-linux
  0 upgraded, 12 newly installed, 0 to remove and 1 not upgraded.
  Inst grub-common (2.04-1ubuntu2 Ubuntu:19.10/eoan [amd64])
  Inst grub2-common (2.04-1ubuntu2 Ubuntu:19.10/eoan [amd64])
  Inst grub-pc-bin (2.04-1ubuntu2 Ubuntu:19.10/eoan [amd64])
  Inst grub-pc (2.04-1ubuntu2 Ubuntu:19.10/eoan [amd64]) []
  Inst grub-gfxpayload-lists (0.7 Ubuntu:19.10/eoan [amd64])
  Inst linux-modules-5.0.0-1010-oem-osp1 (5.0.0-1010.11 Ubuntu:19.10/eoan 
[amd64])
  Inst linux-image-unsigned-5.0.0-1010-oem-osp1 (5.0.0-1010.11 
Ubuntu:19.10/eoan [amd64])
  Inst os-prober (1.74ubuntu2 Ubuntu:19.10/eoan [amd64])
  Inst libzfs2linux (0.8.1-1ubuntu7 Ubuntu:19.10/eoan [amd64])
  Inst libzpool2linux (0.8.1-1ubuntu7 Ubuntu:19.10/eoan [amd64])
  Inst zfsutils-linux (0.8.1-1ubuntu7 Ubuntu:19.10/eoan [amd64])
  Inst zfs-zed (0.8.1-1ubuntu7 Ubuntu:19.10/eoan [amd64])
  Conf grub-common (2.04-1ubuntu2 Ubuntu:19.10/eoan [amd64])
  Conf grub2-common (2.04-1ubuntu2 Ubuntu:19.10/eoan [amd64])
  Conf grub-pc-bin (2.04-1ubuntu2 Ubuntu:19.10/eoan [amd64])
  Conf grub-pc (2.04-1ubuntu2 Ubuntu:19.10/eoan [amd64])
  Conf grub-gfxpayload-lists (0.7 Ubuntu:19.10/eoan [amd64])
  Conf linux-modules-5.0.0-1010-oem-osp1 (5.0.0-1010.11 Ubuntu:19.10/eoan 
[amd64])
  Conf linux-image-unsigned-5.0.0-1010-oem-osp1 (5.0.0-1010.11 
Ubuntu:19.10/eoan [amd64])
  Conf os-prober (1.74ubuntu2 Ubuntu:19.10/eoan [amd64])
  Conf libzfs2linux (0.8.1-1ubuntu7 Ubuntu:19.10/eoan [amd64])
  Conf libzpool2linux (0.8.1-1ubuntu7 Ubuntu:19.10/eoan [amd64])
  Conf zfsutils-linux (0.8.1-1ubuntu7 Ubuntu:19.10/eoan [amd64])
  Conf zfs-zed (0.8.1-1ubuntu7 Ubuntu:19.10/eoan [amd64])

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1838276/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1772412] Re: zfs 0.7.9 fixes a bug (https://github.com/zfsonlinux/zfs/pull/7343) that hangs the system completely

2019-04-03 Thread Richard Laager
Your upgrade is done, but for the record, installing the HWE kernel
doesn't remove the old kernel. So you still have the option to go back
to that in the GRUB menu.

Also, once you're sure the HWE kernel is working, you'll probably want
to remove the linux-image-generic package so you're not continuously
upgrading two sets of kernels.
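
A minimal sketch of that sequence (package names taken from the comments in
this bug; only run the removal once you are happy that the HWE kernel boots
and the pool imports cleanly):

  # install the HWE kernel alongside the GA kernel
  sudo apt install linux-image-generic-hwe-18.04
  # ... reboot, check `uname -r` and the pool, then stop tracking the GA series
  sudo apt remove linux-image-generic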

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1772412

Title:
  zfs 0.7.9 fixes a bug (https://github.com/zfsonlinux/zfs/pull/7343)
  that hangs the system completely

Status in zfs-linux package in Ubuntu:
  Fix Released

Bug description:
  I have experienced the problems fixed by this commit
  https://github.com/zfsonlinux/zfs/pull/7343 a few times on my NAS. The
  system hangs completely when it occurs. It looks like 0.7.9 brings
  other interesting bug fixes that potentially freeze the system.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1772412/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1772412] Re: zfs 0.7.9 fixes a bug (https://github.com/zfsonlinux/zfs/pull/7343) that hangs the system completely

2019-04-02 Thread Richard Laager
ZFS 0.7.9 was released in Cosmic (18.10). You could update to Cosmic.
Alternatively, on 18.04, you can install the HWE kernel package: linux-
image-generic-hwe-18.04

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1772412

Title:
  zfs 0.7.9 fixes a bug (https://github.com/zfsonlinux/zfs/pull/7343)
  that hangs the system completely

Status in zfs-linux package in Ubuntu:
  Fix Released

Bug description:
  I have experienced the problems fixed by this commit
  https://github.com/zfsonlinux/zfs/pull/7343 a few times on my NAS. The
  system hangs completely when it occurs. It looks like 0.7.9 brings
  other interesting bug fixes that potentially freeze the system.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1772412/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1802481] Re: cryptsetup waits for zvol /dev/zvol/rpool/swap with no zpool imported during boot. Timing problem?

2018-12-13 Thread Richard Laager
I really don’t know what to suggest here. As you mentioned, this used to
work. If you are only using LUKS for swap, maybe you could just remove
it from crypttab and run the appropriate commands manually in rc.local
or a custom systemd unit.
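
For illustration only, such a unit might look roughly like the following
(unit name, zvol path, and the plain-mode cipher options are assumptions
based on this report, not a tested configuration):

  # /etc/systemd/system/cryptswap-zvol.service (hypothetical)
  [Unit]
  Description=Encrypted swap on a ZFS zvol
  Requires=zfs-mount.service
  After=zfs-mount.service

  [Service]
  Type=oneshot
  RemainAfterExit=yes
  # random-key plain mapping, then format and enable swap on the mapper device
  ExecStart=/sbin/cryptsetup open --type plain --cipher aes-xts-plain64 --key-size 256 --key-file /dev/urandom /dev/zvol/rpool/swap cryptswap1
  ExecStart=/sbin/mkswap /dev/mapper/cryptswap1
  ExecStart=/sbin/swapon /dev/mapper/cryptswap1
  ExecStop=/sbin/swapoff /dev/mapper/cryptswap1
  ExecStop=/sbin/cryptsetup close cryptswap1

  [Install]
  WantedBy=multi-user.target

The corresponding cryptswap line would then be dropped from /etc/crypttab
(and any matching swap entry from /etc/fstab), and the unit enabled with
`systemctl enable cryptswap-zvol.service`.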

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1802481

Title:
  cryptsetup waits for zvol /dev/zvol/rpool/swap with no zpool imported
  during boot. Timing problem?

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  Hi,
  first of all: I don't know if it is a bug in cryptsetup-initramfs or 
zfs-initramfs or another one.

  The problem is that since my upgrade from bionic to cosmic the
  cryptsetup tries too early to setup my cryptswap device which is on a
  zvol.

cryptsetup: Waiting for encrypted source device /dev/zvol/rpool/swap...
 ALERT! encrypted source device /dev/zvol/rpool/swap does not exist, 
can't unlock cryptswap1.
...

  After the timeout I find myself in an initramfs shell (initramfs).
  When I do zpool list, no zpool is imported.
  After

zpool import -N rpool
^D

  the cryptsetup succeeds.
  Greetings,

 Lars

  ProblemType: Bug
  DistroRelease: Ubuntu 18.10
  Package: cryptsetup-initramfs 2:2.0.4-2ubuntu2
  ProcVersionSignature: Ubuntu 4.18.0-11.12-generic 4.18.12
  Uname: Linux 4.18.0-11-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.10-0ubuntu13.1
  Architecture: amd64
  CurrentDesktop: GNOME
  Date: Fri Nov  9 10:01:19 2018
  EcryptfsInUse: Yes
  PackageArchitecture: all
  SourcePackage: cryptsetup
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.default.apport: [modified]
  modified.conffile..etc.logrotate.d.apport: [modified]
  mtime.conffile..etc.default.apport: 2015-03-15T20:01:19.851334
  mtime.conffile..etc.logrotate.d.apport: 2018-05-18T08:58:12.902005

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1802481/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1802481] Re: cryptsetup waits for zvol /dev/zvol/rpool/swap with no zpool imported during boot. Timing problem?

2018-12-10 Thread Richard Laager
If the pool is on top of LUKS (a relatively common configuration when
ZFS and cryptsetup are both being used), then you'd need cryptsetup
first. My advice is that you should either stop encrypting swap or start
encrypting the whole pool. Hopefully in another (Ubuntu) release or two,
we'll have native ZFS encryption and this whole thing Just Works.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1802481

Title:
  cryptsetup waits for zvol /dev/zvol/rpool/swap with no zpool imported
  during boot. Timing problem?

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  Hi,
  first of all: I don't know if it is a bug in cryptsetup-initramfs or 
zfs-initramfs or another one.

  The problem is that since my upgrade from bionic to cosmic the
  cryptsetup tries too early to setup my cryptswap device which is on a
  zvol.

cryptsetup: Waiting for encrypted source device /dev/zvol/rpool/swap...
 ALERT! encrypted source device /dev/zvol/rpool/swap does not exist, 
can't unlock cryptswap1.
...

  After the timeout I find myself in an initramfs shell (initramfs).
  When I do zpool list, no zpool is imported.
  After

zpool import -N rpool
^D

  the cryptsetup succeeds.
  Greetings,

 Lars

  ProblemType: Bug
  DistroRelease: Ubuntu 18.10
  Package: cryptsetup-initramfs 2:2.0.4-2ubuntu2
  ProcVersionSignature: Ubuntu 4.18.0-11.12-generic 4.18.12
  Uname: Linux 4.18.0-11-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.10-0ubuntu13.1
  Architecture: amd64
  CurrentDesktop: GNOME
  Date: Fri Nov  9 10:01:19 2018
  EcryptfsInUse: Yes
  PackageArchitecture: all
  SourcePackage: cryptsetup
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.default.apport: [modified]
  modified.conffile..etc.logrotate.d.apport: [modified]
  mtime.conffile..etc.default.apport: 2015-03-15T20:01:19.851334
  mtime.conffile..etc.logrotate.d.apport: 2018-05-18T08:58:12.902005

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1802481/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1802481] Re: cryptsetup waits for zvol /dev/zvol/rpool/swap with no zpool imported during boot. Timing problem?

2018-12-07 Thread Richard Laager
Try adding initramfs as an option in /etc/crypttab. That's the approach
I use when putting the whole pool on a LUKS device, and is necessary due
to: https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/1612906
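
For reference, a hedged example of what such a crypttab line can look like
for a whole-pool-on-LUKS setup (device path and mapping name are
placeholders):

  # /etc/crypttab
  luks1  /dev/disk/by-id/ata-DISK-part1  none  luks,discard,initramfs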

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1802481

Title:
  cryptsetup waits for zvol /dev/zvol/rpool/swap with no zpool imported
  during boot. Timing problem?

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  Hi,
  first of all: I don't know if it is a bug in cryptsetup-initramfs or 
zfs-initramfs or another one.

  The problem is that since my upgrade from bionic to cosmic the
  cryptsetup tries too early to setup my cryptswap device which is on a
  zvol.

cryptsetup: Waiting for encrypted source device /dev/zvol/rpool/swap...
 ALERT! encrypted source device /dev/zvol/rpool/swap does not exist, 
can't unlock cryptswap1.
...

  After the timeout I find myself in an initramfs shell (initramfs).
  When I do zpool list, no zpool is imported.
  After

zpool import -N rpool
^D

  the cryptsetup succeeds.
  Greetings,

 Lars

  ProblemType: Bug
  DistroRelease: Ubuntu 18.10
  Package: cryptsetup-initramfs 2:2.0.4-2ubuntu2
  ProcVersionSignature: Ubuntu 4.18.0-11.12-generic 4.18.12
  Uname: Linux 4.18.0-11-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.10-0ubuntu13.1
  Architecture: amd64
  CurrentDesktop: GNOME
  Date: Fri Nov  9 10:01:19 2018
  EcryptfsInUse: Yes
  PackageArchitecture: all
  SourcePackage: cryptsetup
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.default.apport: [modified]
  modified.conffile..etc.logrotate.d.apport: [modified]
  mtime.conffile..etc.default.apport: 2015-03-15T20:01:19.851334
  mtime.conffile..etc.logrotate.d.apport: 2018-05-18T08:58:12.902005

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1802481/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1791143] Re: Suggestion to make zfsutils-linux a snap

2018-09-06 Thread Richard Laager
Is there something inherent in snaps that makes this easier or better
than debs? For example, do snaps support multiple installable versions
of the same package name?

If snaps aren’t inherently better, the same thing could be done with
debs using the usual convention for having multiple versions in the
archive simultaneously: having zfsutils0.6 and zfsutils0.7 source
packages producing similarly versioned-in-the-name binary packages
(which in this case conflict as they are not co-installable). Each would
depend on an appropriate kernel package that has the matching module.
Then zfsutils-linux would be an empty package with: Depends: zfsutils-
linux0.7 | zfsutils-linux0.6.
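
A very rough sketch of that layout in debian/control terms (names and fields
are illustrative only, and each versioned package would additionally depend
on a kernel providing the matching module):

  Package: zfsutils-linux0.7
  Conflicts: zfsutils-linux0.6
  Depends: ${misc:Depends}, ${shlibs:Depends}
  Description: OpenZFS userland matching the 0.7 kernel module

  Package: zfsutils-linux
  Architecture: all
  Depends: zfsutils-linux0.7 | zfsutils-linux0.6
  Description: metapackage selecting a matching OpenZFS userland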

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1791143

Title:
  Suggestion to make zfsutils-linux a snap

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  This circumvents the need to keep it on the same major version
  throughout the LTS cycle. LXD is doing snaps, perhaps for zfs this is
  the best approach as well.

  Xenial still has zfsutils on generation 0.6, with the module on 0.7.
  Even when patches are applied as needed, that approach has its
  limitations. E.g. the Bionic cycle might possibly see two major zfs
  releases, who's to say.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1791143/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1602409] Re: Ubuntu 16.04 Should package zfs-auto-snapshot

2018-08-01 Thread Richard Laager
I don't have permissions to change this, but my recommendation would be
to set this as "Won't Fix". It's my understanding that zfs-auto-snapshot
is more-or-less unmaintained upstream. I know I've seen recommendations
to switch to something else (e.g. sanoid) on issues there.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1602409

Title:
  Ubuntu 16.04 Should package zfs-auto-snapshot

Status in zfs-linux package in Ubuntu:
  Fix Released

Bug description:
  ZFS on Linux provide a handy script for automatically creating hourly,
  weekly, and monthly snapshots of ZFS pools. I can't think of a good
  reason for it not being packaged with the official Ubuntu ZFS support.
  The ZoL PPA includes the script and crontabs, so why isn't in the
  official repositories too?

  https://github.com/zfsonlinux/zfs-auto-snapshot

  lsb_release
  -
  Description:  Ubuntu 16.04 LTS
  Release:  16.04

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1602409/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1738259] Re: need to ensure microcode updates are available to all bare-metal installs of Ubuntu

2018-06-12 Thread Richard Laager
@sdeziel, I agree 100%.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-meta in Ubuntu.
https://bugs.launchpad.net/bugs/1738259

Title:
  need to ensure microcode updates are available to all bare-metal
  installs of Ubuntu

Status in linux-meta package in Ubuntu:
  Triaged
Status in linux-meta-hwe package in Ubuntu:
  New
Status in linux-meta-hwe-edge package in Ubuntu:
  New
Status in linux-meta-lts-xenial package in Ubuntu:
  Fix Released
Status in linux-meta-oem package in Ubuntu:
  Fix Released
Status in linux-meta source package in Precise:
  New
Status in linux-meta source package in Trusty:
  Fix Released
Status in linux-meta source package in Xenial:
  Fix Released
Status in linux-meta-hwe source package in Xenial:
  Fix Released
Status in linux-meta-hwe-edge source package in Xenial:
  Fix Released
Status in linux-meta-lts-xenial source package in Xenial:
  Fix Committed
Status in linux-meta-oem source package in Xenial:
  Fix Released
Status in linux-meta source package in Zesty:
  Invalid
Status in linux-meta source package in Artful:
  Fix Released
Status in linux-meta source package in Bionic:
  Fix Released

Bug description:
  From time to time, CPU vendors release updates to microcode that can
  be loaded into the CPU from the OS.  For x86, we have these updates
  available in the archive as amd64-microcode and intel-microcode.

  Sometimes, these microcode updates have addressed security issues with
  the CPU.  They almost certainly will again in the future.

  We should ensure that all users of Ubuntu on baremetal x86 receive
  these security updates, and have them applied to the CPU in early boot
  where at all feasible.

  Because these are hardware-dependent packages which we don't want to
  install except on baremetal (so: not in VMs or containers), the
  logical place to pull them into the system is via the kernel, so that
  only the kernel baremetal flavors pull them in.  This is analogous to
  linux-firmware, which is already a dependency of the linux-
  image-{lowlatency,generic} metapackages, and whose contents are
  applied to the hardware by the kernel similar to microcode.

  So, please update the linux-image-{lowlatency,generic} metapackages to
  add a dependency on amd64-microcode [amd64], intel-microcode [amd64],
  and the corresponding hwe metapackages also.

  Please time this change to coincide with the next updates of the
  microcode packages in the archive.

  I believe we will also need to promote the *-microcode packages to
  main from restricted as part of this (again, by analogy with linux-
  firmware).

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-meta/+bug/1738259/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1738259] Re: need to ensure microcode updates are available to all bare-metal installs of Ubuntu

2018-06-06 Thread Richard Laager
This is particularly annoying for me too.

All of my virtual machines use linux-image-generic because I need linux-
image-extra to get the i6300esb watchdog driver for the KVM watchdog.
This change forces the amd64-microcode and intel-microcode packages to
be installed on all of my VMs.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-meta in Ubuntu.
https://bugs.launchpad.net/bugs/1738259

Title:
  need to ensure microcode updates are available to all bare-metal
  installs of Ubuntu

Status in linux-meta package in Ubuntu:
  Triaged
Status in linux-meta-hwe package in Ubuntu:
  New
Status in linux-meta-hwe-edge package in Ubuntu:
  New
Status in linux-meta-lts-xenial package in Ubuntu:
  Fix Released
Status in linux-meta-oem package in Ubuntu:
  Fix Released
Status in linux-meta source package in Precise:
  New
Status in linux-meta source package in Trusty:
  Fix Released
Status in linux-meta source package in Xenial:
  Fix Released
Status in linux-meta-hwe source package in Xenial:
  Fix Released
Status in linux-meta-hwe-edge source package in Xenial:
  Fix Released
Status in linux-meta-lts-xenial source package in Xenial:
  Fix Committed
Status in linux-meta-oem source package in Xenial:
  Fix Released
Status in linux-meta source package in Zesty:
  Invalid
Status in linux-meta source package in Artful:
  Fix Released
Status in linux-meta source package in Bionic:
  Fix Committed

Bug description:
  From time to time, CPU vendors release updates to microcode that can
  be loaded into the CPU from the OS.  For x86, we have these updates
  available in the archive as amd64-microcode and intel-microcode.

  Sometimes, these microcode updates have addressed security issues with
  the CPU.  They almost certainly will again in the future.

  We should ensure that all users of Ubuntu on baremetal x86 receive
  these security updates, and have them applied to the CPU in early boot
  where at all feasible.

  Because these are hardware-dependent packages which we don't want to
  install except on baremetal (so: not in VMs or containers), the
  logical place to pull them into the system is via the kernel, so that
  only the kernel baremetal flavors pull them in.  This is analogous to
  linux-firmware, which is already a dependency of the linux-
  image-{lowlatency,generic} metapackages, and whose contents are
  applied to the hardware by the kernel similar to microcode.

  So, please update the linux-image-{lowlatency,generic} metapackages to
  add a dependency on amd64-microcode [amd64], intel-microcode [amd64],
  and the corresponding hwe metapackages also.

  Please time this change to coincide with the next updates of the
  microcode packages in the archive.

  I believe we will also need to promote the *-microcode packages to
  main from restricted as part of this (again, by analogy with linux-
  firmware).

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-meta/+bug/1738259/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1688890] Re: initramfs-zfs should support misc /dev dirs

2018-04-16 Thread Richard Laager
I haven't had a chance to write and test the zpool.cache copying. I keep
meaning to get to it every day, but I keep pushing it back for lack of time.

The zfs-initramfs script in 16.04 (always) and in 18.04 (by default)
runs a plain `zpool import`.

ZoL 0.7.5 has a default search order for imports that prefers /dev/disk/by-id:
https://github.com/zfsonlinux/zfs/blob/zfs-0.7.5/lib/libzfs/libzfs_import.c#L1835

That said, so did ZoL on Xenial (0.6.5.6).

On my bionic test VM, non-root pools import using whatever name I
created them with. If I use /dev, they import later with /dev. If I use
/dev/disk/by-id, they import later with /dev/disk/by-id. This is true
immediately, and true across reboots.

Root pools seem to behave the same way. My root pool was created using
/dev/disk/by-id per the HOWTO (which I maintain) and is being imported
with /dev/disk/by-id. This is with no zpool.cache, either on the live
system or in the initramfs (which I unpacked to verify).

In other words, I cannot reproduce this on bionic. I'm pretty confident
this is an issue on Xenial, though I haven't re-tested just now to
absolutely confirm. But I will say that on my Xenial laptop at the
moment, my root pool is imported with a /dev name.

In short, I think this is fixed in Bionic, but I'm not 100% sure which
code changed to fix it.
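
For anyone hitting the Xenial behaviour, the usual workaround is to
re-import the pool from the by-id directory (pool name is illustrative; a
root pool would need this from a live environment):

  zpool export tank
  zpool import -d /dev/disk/by-id tank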

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1688890

Title:
  initramfs-zfs should support misc /dev dirs

Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  Right now 'zfs-initramfs', i.e. /usr/share/initramfs-tools/scripts/zfs
  does not support any other directory than /dev for "zpool import ...".
  Therefore, even if a pool gets created from a directory other than
  /dev, say /dev/disk/by-id or /dev/chassis/SYS, on the next reboot /dev will
  be used and thus zpool status will show /dev/sd* etc. after a
  successful import. Besides that, the user no longer sees the original
  names used in "zpool create ..."; instead the unstable names like "/dev/sd*"
  are shown, which is explicitly NOT recommended.

  The following patch introduces the "pseudo" kernel param named "zdirs"
  - a comma separated list of dev dirs to scan on import - which gets
  used by /usr/share/initramfs-tools/scripts/zfs to honor it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1688890/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1600060] Re: ZFS "partially filled holes lose birth time"

2018-03-17 Thread Richard Laager
** Changed in: zfs-linux (Ubuntu)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1600060

Title:
  ZFS "partially filled holes lose birth time"

Status in zfs-linux package in Ubuntu:
  Fix Released
Status in zfs-linux package in Debian:
  Fix Released

Bug description:
  Note: This is different from Launchpad bug #1557151. This is another,
  similar, bug.

  Bug description from Matt Ahrens at OpenZFS:
  "If a ZFS object contains a hole at level one, and then a data block is
   created at level 0 underneath that l1 block, l0 holes will be created.
   However, these l0 holes do not have the birth time property set; as a
   result, incremental sends will not send those holes.

   Fix is to modify the dbuf_read code to fill in birth time data."
  -- https://www.illumos.org/issues/6513

  From pcd on IRC in #zfsonlinux:
  "basically, what happens is this: if you zero out an entire l1 indirect
   block's worth of data (several megabytes), we save space by storing that
   entire indirect block as a single hole  in an l2 indirect block with
   birth time N.  If you then modify some of the data under that, but not
   all of it, when the l1 indirect block is filled back in with mostly
   holes and some data blocks, the holes will not have any"

  Fixed in ZoL here:
  
https://github.com/zfsonlinux/zfs/commit/bc77ba73fec82d37c0b57949ec29edd9aa207677

  This has *not* been merged into a ZoL release yet, nor the release
  branch.

  This is a very unfortunate bug because the fix only helps going forward. 
A separate bug has been opened to propose a fix for that:
  https://www.illumos.org/issues/7175

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1600060/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1685528] Re: ZFS initramfs mounts dataset explicitly set not to be mounted, causing boot process to fail

2018-03-17 Thread Richard Laager
I fixed this upstream, which was released in 0.7.4. Bionic has 0.7.5.

** Changed in: zfs-linux (Ubuntu)
   Status: Confirmed => Fix Committed

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1685528

Title:
  ZFS initramfs mounts dataset explicitly set not to be mounted, causing
  boot process to fail

Status in zfs-linux package in Ubuntu:
  Fix Committed

Bug description:
  Per https://github.com/zfsonlinux/pkg-zfs/issues/221: the initramfs
  zfs script might overrule canmount and mountpoint options for a
  dataset, causing other mount operations and with them the boot process
  to fail.

  Experienced this with Ubuntu Zesty. Xenial seems to ship with a
  different zfs script for the initrd.

  Work around when it happens: unmount the dataset that should not be
  mounted, and exit the initramfs rescue prompt to resume booting.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1685528/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1688890] Re: initramfs-zfs should support misc /dev dirs

2018-03-17 Thread Richard Laager
I need to do some testing, but we might want to consider using the cache
file. An approach (suggested to my by ryao, I think) was that we first
import the root pool read-only, copy the cache file out of it, export
the pool, and then import the pool read-write using the cache file.
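
A rough sketch of that sequence as it might run inside the initramfs (pool
name, root dataset, and mount points are assumptions; this is not the
eventual implementation):

  # hypothetical initramfs sketch, not the shipped script
  zpool import -N -o readonly=on rpool
  mount -o ro,zfsutil -t zfs rpool/ROOT/ubuntu /root   # temporarily mount the root dataset
  cp /root/etc/zfs/zpool.cache /etc/zfs/zpool.cache    # copy the cache file into the initramfs
  umount /root
  zpool export rpool
  zpool import -N -c /etc/zfs/zpool.cache rpool        # final import using the cache file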

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1688890

Title:
  initramfs-zfs should support misc /dev dirs

Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  Right now 'zfs-initramfs', i.e. /usr/share/initramfs-tools/scripts/zfs
  does not support any other directory than /dev for "zpool import ...".
  Therefore, even if a pool gets created from a directory other than
  /dev, say /dev/disk/by-id or /dev/chassis/SYS, on the next reboot /dev will
  be used and thus zpool status will show /dev/sd* etc. after a
  successful import. Besides that, the user no longer sees the original
  names used in "zpool create ..."; instead the unstable names like "/dev/sd*"
  are shown, which is explicitly NOT recommended.

  The following patch introduces the "pseudo" kernel param named "zdirs"
  - a comma separated list of dev dirs to scan on import - which gets
  used by /usr/share/initramfs-tools/scripts/zfs to honor it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1688890/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1741081] Re: zfs-import-cache.service fails on startup

2018-03-12 Thread Richard Laager
I updated to the version from -proposed and rebooted. I verified that no
units failed on startup.

** Tags added: verification-done-artful

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1741081

Title:
  zfs-import-cache.service fails on startup

Status in zfs-linux package in Ubuntu:
  Fix Released
Status in zfs-linux source package in Artful:
  Fix Committed
Status in zfs-linux source package in Bionic:
  Fix Released

Bug description:
  == SRU Request, Artful ==

  Enable ZFS module to be loaded without the broken ubuntu-load-zfs-
  unconditionally.patch.

  == Fix ==

  Add a new zfs-load-module.service script that modprobes the ZFS module
  and remove any hard coded module loading from zfs-import-cache.service
  & zfs-import-scan.service and make these latter scripts require the
  new zfs-load-module.service script.  Also remove the now defunct
  ubuntu-load-zfs-unconditionally.patch as this will no longer be
  required.

  == Testcase ==

  On a clean VM, install with the fixed package, zfs should load
  automatically.

  == Regression potential ==

  ZFS module may not load if the changes are broken. However, testing
  proves this not to be the case.

  

  I just noticed on my test VM of artful that zfs-import-cache.service
  does not have a ConditionPathExists=/etc/zfs/zpool.cache. Because of
  that, it fails on startup, since the cache file does not exist.

  This line is being deleted by 
debian/patches/ubuntu-load-zfs-unconditionally.patch. This patch seems to exist 
per:
  https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1672749

  This patch still exists in bionic, so I assume it will be similarly
  broken.

  If the goal of the patch is to load the module (and only that), I
  think it should create a third unit instead:

  zfs-load-module.service
   ^^ runs modprobe zfs

  zfs-import-cache.service & zfs-import-scan.service
    ^^ per upstream minus modprobe plus Requires=zfs-load-module.service

  I have tested this manually and it works. I can submit a package patch
  if this is the desired solution.

  Interestingly, before this change, zfs-import-scan.service wasn't
  starting. If started manually, it worked. I had to give it a
  `systemctl enable zfs-import-scan.service` to create the Wants
  symlinks. Looking at the zfsutils-linux.postinst, I see the correct
  boilerplate from dh_systemd, so I'm not sure why this wasn't already
  done. Can anyone confirm or deny whether zfs-import-scan.service is
  enabled out-of-the-box on their system?

  Is the zfs-import-scan.service not starting actually the cause of the
  original bug? The design is that *either* zfs-import-cache.service or
  zfs-import-scan.service starts. They both call modprobe zfs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1741081/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1741081] Re: zfs-import-cache.service fails on startup

2018-03-02 Thread Richard Laager
zfs-load-module.service seems to have a Requires on itself? That has to
be wrong.

Also, zfs-import-cache.service and zfs-import-scan.service need an
After=zfs-load-module.service. They're not getting one automatically because
of DefaultDependencies=no (which seems appropriate here, so leave that
alone). Scott, can you try this? I think it will fix your issue, since
zfs-mount.service already has After=zfs-import-cache.service and
After=zfs-import-scan.service.
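
Expressed as a drop-in, the missing ordering would look something like this
(illustrative; the real fix belongs in the packaged unit files rather than a
local override):

  # /etc/systemd/system/zfs-import-cache.service.d/override.conf
  # (and likewise for zfs-import-scan.service)
  [Unit]
  Requires=zfs-load-module.service
  After=zfs-load-module.service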

The proposed package, with the changes above should fix the issue I'm
reporting.

However, LP #1672749 says, "Since zfsutils-linux/0.6.5.9-4, zfs module
is not automatically loaded on systems that no zpool exists, this avoids
tainting everyone's kernel who has the package installed but is not
using zfs." Is this still an important goal? If so, wasn't that lost
with ubuntu-load-zfs-unconditionally.patch? Or perhaps something is
keeping zfs-import-scan.service from being enabled by default?

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1741081

Title:
  zfs-import-cache.service fails on startup

Status in zfs-linux package in Ubuntu:
  Fix Released
Status in zfs-linux source package in Artful:
  In Progress
Status in zfs-linux source package in Bionic:
  Fix Released

Bug description:
  == SRU Request, Artful ==

  Enable ZFS module to be loaded without the broken ubuntu-load-zfs-
  unconditionally.patch.

  == Fix ==

  Add a new zfs-load-module.service script that modprobes the ZFS module
  and remove any hard coded module loading from zfs-import-cache.service
  & zfs-import-scan.service and make these latter scripts require the
  new zfs-load-module.service script.  Also remove the now defunct
  ubuntu-load-zfs-unconditionally.patch as this will no longer be
  required.

  == Testcase ==

  On a clean VM, install with the fixed package, zfs should load
  automatically.

  == Regression potential ==

  ZFS module may not load if the changes are broken. However, testing
  proves this not to be the case.

  

  I just noticed on my test VM of artful that zfs-import-cache.service
  does not have a ConditionPathExists=/etc/zfs/zpool.cache. Because of
  that, it fails on startup, since the cache file does not exist.

  This line is being deleted by 
debian/patches/ubuntu-load-zfs-unconditionally.patch. This patch seems to exist 
per:
  https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1672749

  This patch still exists in bionic, so I assume it will be similarly
  broken.

  If the goal of the patch is to load the module (and only that), I
  think it should create a third unit instead:

  zfs-load-module.service
   ^^ runs modprobe zfs

  zfs-import-cache.service & zfs-import-scan.service
    ^^ per upstream minus modprobe plus Requires=zfs-load-module.service

  I have tested this manually and it works. I can submit a package patch
  if this is the desired solution.

  Interestingly, before this change, zfs-import-scan.service wasn't
  starting. If started manually, it worked. I had to give it a
  `systemctl enable zfs-import-scan.service` to create the Wants
  symlinks. Looking at the zfsutils-linux.postinst, I see the correct
  boilerplate from dh_systemd, so I'm not sure why this wasn't already
  done. Can anyone confirm or deny whether zfs-import-scan.service is
  enabled out-of-the-box on their system?

  Is the zfs-import-scan.service not starting actually the cause of the
  original bug? The design is that *either* zfs-import-cache.service or
  zfs-import-scan.service starts. They both call modprobe zfs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1741081/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1751671] Re: zfs in bionic (0.7.5) is missing encryption support

2018-02-25 Thread Richard Laager
Native encryption was merged to master but has not been released in a
tagged version. There are actually a couple of issues that will result
in on-disk format changes. It should be the major feature for the 0.8.0
release.

** Changed in: zfs-linux (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1751671

Title:
  zfs in bionic (0.7.5) is missing encryption support

Status in zfs-linux package in Ubuntu:
  Invalid

Bug description:
  ZFS native encryption support was merged in August of last year, which
  means it should be in every ZFSonLinux release since 0.7.2. However,
  the latest released ZFS packages in bionic (0.7.5) seem to be missing
  this functionality.

  # zpool set feature@encryption=enabled pool
  cannot set property for 'pool': invalid feature 'encryption'
  # dpkg -l | grep zfs
  ii  libzfs2linux   0.7.5-1ubuntu2 
   amd64OpenZFS filesystem library for Linux
  ii  zfs-zed0.7.5-1ubuntu2 
   amd64OpenZFS Event Daemon
  ii  zfsutils-linux 0.7.5-1ubuntu2 
   amd64command-line tools to manage OpenZFS filesystems
  # zfs get all | grep encryp
  [ no results ]

  Personally, I consider this pretty important functionality to make
  sure is present in 18.04, so I hope there's time to get it fixed
  before then.

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: zfsutils-linux 0.7.5-1ubuntu2
  ProcVersionSignature: Ubuntu 4.15.0-10.11-generic 4.15.3
  Uname: Linux 4.15.0-10-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.8-0ubuntu10
  Architecture: amd64
  CasperVersion: 1.388
  Date: Sun Feb 25 23:41:03 2018
  LiveMediaBuild: Ubuntu 18.04 LTS "Bionic Beaver" - Alpha amd64 (20180225)
  ProcEnviron:
   TERM=screen
   PATH=(custom, no user)
   LANG=C.UTF-8
   SHELL=/bin/bash
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1751671/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1734172] Re: Upgrade ZFS to 0.7.3

2018-01-29 Thread Richard Laager
16.04's HWE's updates will top out at the kernel version shipped in
18.04. I assume this is because you can then just use 18.04.

See:
https://wiki.ubuntu.com/Kernel/RollingLTSEnablementStack
as linked from:
https://wiki.ubuntu.com/Kernel/LTSEnablementStack

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1734172

Title:
  Upgrade ZFS to 0.7.3

Status in zfs-linux package in Ubuntu:
  In Progress

Bug description:
  ZFS v0.7 is out, current version is 0.7.3:

  https://github.com/zfsonlinux/zfs/releases/tag/zfs-0.7.3 .

  
  It is desired to have the latest stable version at least in LTS.

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: zfsutils-linux 0.6.5.11-1ubuntu5
  ProcVersionSignature: Ubuntu 4.13.0-16.19-generic 4.13.4
  Uname: Linux 4.13.0-16-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl zcommon znvpair
  ApportVersion: 2.20.8-0ubuntu2
  Architecture: amd64
  Date: Thu Nov 23 19:03:47 2017
  ProcEnviron:
   LANGUAGE=en_US:en
   TERM=xterm
   PATH=(custom, no user)
   LANG=en_US.UTF-8
   SHELL=/bin/zsh
  SourcePackage: zfs-linux
  UpgradeStatus: Upgraded to bionic on 2017-05-20 (187 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1734172/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1741081] [NEW] zfs-import-cache.service fails on startup

2018-01-03 Thread Richard Laager
Public bug reported:

I just noticed on my test VM of artful that zfs-import-cache.service
does not have a ConditionPathExists=/etc/zfs/zpool.cache. Because of
that, it fails on startup, since the cache file does not exist.

This line is being deleted by 
debian/patches/ubuntu-load-zfs-unconditionally.patch. This patch seems to exist 
per:
https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1672749

This patch still exists in bionic, so I assume it will be similarly
broken.

If the goal of the patch is to load the module (and only that), I think
it should create a third unit instead:

zfs-load-module.service
 ^^ runs modprobe zfs

zfs-import-cache.service & zfs-import-scan.service
  ^^ per upstream minus modprobe plus Requires=zfs-load-module.service

I have tested this manually and it works. I can submit a package patch
if this is the desired solution.
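
A minimal sketch of what that third unit could contain (contents are an
assumption on my part; the packaged version may differ):

  # zfs-load-module.service (illustrative)
  [Unit]
  Description=Load the ZFS kernel module
  DefaultDependencies=no
  Before=zfs-import-cache.service zfs-import-scan.service

  [Service]
  Type=oneshot
  RemainAfterExit=yes
  ExecStart=/sbin/modprobe zfs

No [Install] section is strictly needed if the import units pull it in via
Requires=.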

Interestingly, before this change, zfs-import-scan.service wasn't
starting. If started manually, it worked. I had to give it a `systemctl
enable zfs-import-scan.service` to create the Wants symlinks. Looking at
the zfsutils-linux.postinst, I see the correct boilerplate from
dh_systemd, so I'm not sure why this wasn't already done. Can anyone
confirm or deny whether zfs-import-scan.service is enabled out-of-the-
box on their system?

Is the zfs-import-scan.service not starting actually the cause of the
original bug? The design is that *either* zfs-import-cache.service or
zfs-import-scan.service starts. They both call modprobe zfs.

** Affects: zfs-linux (Ubuntu)
 Importance: Undecided
 Status: New

** Description changed:

  I just noticed on my test VM of artful that zfs-import-cache.service
  does not have a ConditionPathExists=/etc/zfs/zpool.cache. Because of
  that, it fails on startup, since the cache file does not exist.
  
  This line is being deleted by 
debian/patches/ubuntu-load-zfs-unconditionally.patch. This patch seems to exist 
per:
  https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1672749
  
  This patch still exists in bionic, so I assume it will be similarly
  broken.
  
  If the goal of the patch is to load the module (and only that), I think
  it should create a third unit instead:
  
  zfs-load-module.service
-  ^^ runs modprobe-zfs
+  ^^ runs modprobe zfs
  
  zfs-import-cache.service & zfs-import-scan.service
-   ^^ per upstream minus modprobe plus Requires=zfs-load-module.service
+   ^^ per upstream minus modprobe plus Requires=zfs-load-module.service
  
  I have tested this manually and it works. I can submit a package patch
  if this is the desired solution.
  
  Interestingly, zfs-import-scan.service doesn't seem to have run at all.
  If run manually, it works. I had to give it a `systemctl enable zfs-
  import-scan.service` to create the Wants symlinks. Looking at the
  zfsutils-linux.postinst, I see the correct boilerplate from dh_systemd,
  so I'm not sure why this wasn't already done. Can anyone confirm or deny
  whether zfs-import-scan.service is enabled out-of-the-box on their
  system?

** Description changed:

  I just noticed on my test VM of artful that zfs-import-cache.service
  does not have a ConditionPathExists=/etc/zfs/zpool.cache. Because of
  that, it fails on startup, since the cache file does not exist.
  
  This line is being deleted by 
debian/patches/ubuntu-load-zfs-unconditionally.patch. This patch seems to exist 
per:
  https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1672749
  
  This patch still exists in bionic, so I assume it will be similarly
  broken.
  
  If the goal of the patch is to load the module (and only that), I think
  it should create a third unit instead:
  
  zfs-load-module.service
   ^^ runs modprobe zfs
  
  zfs-import-cache.service & zfs-import-scan.service
    ^^ per upstream minus modprobe plus Requires=zfs-load-module.service
  
  I have tested this manually and it works. I can submit a package patch
  if this is the desired solution.
  
- Interestingly, zfs-import-scan.service doesn't seem to have run at all.
- If run manually, it works. I had to give it a `systemctl enable zfs-
- import-scan.service` to create the Wants symlinks. Looking at the
- zfsutils-linux.postinst, I see the correct boilerplate from dh_systemd,
- so I'm not sure why this wasn't already done. Can anyone confirm or deny
- whether zfs-import-scan.service is enabled out-of-the-box on their
- system?
+ Interestingly, before this change, zfs-import-scan.service wasn't
+ starting. If started manually, it worked. I had to give it a `systemctl
+ enable zfs-import-scan.service` to create the Wants symlinks. Looking at
+ the zfsutils-linux.postinst, I see the correct boilerplate from
+ dh_systemd, so I'm not sure why this wasn't already done. Can anyone
+ confirm or deny whether zfs-import-scan.service is enabled out-of-the-
+ box on their system?

** Description changed:

  I just noticed on my test VM of artful that zfs-import-cache.service
  does not have a ConditionPathExists=/etc/zfs/zpool.cache. 

[Kernel-packages] [Bug 1734172] Re: Upgrade ZFS to 0.7.3

2017-11-23 Thread Richard Laager
I have a related question... as far as I'm aware, the ZoL
kernel<->userspace interface is still not versioned:
https://github.com/zfsonlinux/zfs/issues/1290

Effectively, this means that the version of zfsutils-linux must always
match the version of the kernel modules. What is the plan to handle this
in Ubuntu's HWE kernels? For example, if Bionic gets ZFS 0.7.3, will
that show up in the Xenial HWE kernel? If so, that will create an
incompatibility, unless zfsutils-linux is updated in Xenial. But if
zfsutils-linux is updated in Xenial, then it will be incompatible with
the Xenial GA kernel.

** Bug watch added: Github Issue Tracker for ZFS #1290
   https://github.com/zfsonlinux/zfs/issues/1290

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1734172

Title:
  Upgrade ZFS to 0.7.3

Status in zfs-linux package in Ubuntu:
  In Progress

Bug description:
  ZFS v0.7 is out, current version is 0.7.3:

  https://github.com/zfsonlinux/zfs/releases/tag/zfs-0.7.3 .

  
  It is desired to have the latest stable version at least in LTS.

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: zfsutils-linux 0.6.5.11-1ubuntu5
  ProcVersionSignature: Ubuntu 4.13.0-16.19-generic 4.13.4
  Uname: Linux 4.13.0-16-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl zcommon znvpair
  ApportVersion: 2.20.8-0ubuntu2
  Architecture: amd64
  Date: Thu Nov 23 19:03:47 2017
  ProcEnviron:
   LANGUAGE=en_US:en
   TERM=xterm
   PATH=(custom, no user)
   LANG=en_US.UTF-8
   SHELL=/bin/zsh
  SourcePackage: zfs-linux
  UpgradeStatus: Upgraded to bionic on 2017-05-20 (187 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1734172/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1731735] Re: zfs scrub starts on all pools simultaneously

2017-11-12 Thread Richard Laager
ZFS already limits the amount of IO that a scrub can do. Putting
multiple pools on the same disk defeats ZFS's IO scheduler.* Scrubs are
just one example of the performance problems that will cause. I don't
think we should complicate the scrub script to accommodate this
scenario.

My suggestion is that you comment out the default scrub job in
/etc/cron.d/zfsutils-linux and replace it with something that meets your
needs. Don't change /usr/lib/zfs-linux/scrub, as that will get
overwritten on package upgrades.

For example, you might scrub the pools on different weeks with something like this (the day-of-month ranges select the first and second Sunday of the month):
24 0 1-7 * * root [ $(date +\%w) -eq 0 ] && zpool list -H -o health POOL1 2>/dev/null | grep -q ONLINE && zpool scrub POOL1
24 0 8-14 * * root [ $(date +\%w) -eq 0 ] && zpool list -H -o health POOL2 2>/dev/null | grep -q ONLINE && zpool scrub POOL2

I'm going to boldly mark this Invalid. Others can override me,
obviously. Or, if you want to make more of a case, go for it.

* As a side note, in the general case, such a configuration also implies
that one is using partitions. This means they have the Linux IO
scheduler also in the mix, unless they're doing root-on-ZFS, in which
case zfs-initramfs is setting the noop scheduler. I assume you're doing
root-on-ZFS, since you mentioned "One pool holds OS", so that's not an
issue for you personally.

** Changed in: zfs-linux (Ubuntu)
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1731735

Title:
  zfs scrub starts on all pools simultaneously

Status in zfs-linux package in Ubuntu:
  Invalid

Bug description:
  # Environment

  Description:Ubuntu 16.04.3 LTS
  Release:16.04

  Linux 4.10.0-38-generic-tuxonice #42~ppa1-Ubuntu SMP Mon Oct 30
  20:21:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

  zfsutils-linux  0.6.5.6-0ubuntu18 amd64

  
  # Current behaviour

  `/usr/lib/zfs-linux/scrub` starts `zfs scrub` on all pools at the same
  time. If pools are located on the same disk - scrub performance
  degrades badly.

  # Proposed behaviour

  * simplest one - start the scrub of one pool only after the previous
    scrub has finished (a rough sketch follows this list)
  * advanced - detect pools which are located on the same disk and start
    scrubs on them sequentially; if they are on different disks, it is fine
    to run them in parallel
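
  A minimal sketch of the "simplest one" option, assuming only the standard
  zpool tools ("zpool status" reports "scrub in progress" while a scrub is
  running):

  #!/bin/sh
  # Scrub each imported pool in turn, waiting for the current scrub to
  # finish before starting the next one.
  for pool in $(zpool list -H -o name); do
      zpool scrub "$pool"
      while zpool status "$pool" | grep -q "scrub in progress"; do
          sleep 60
      done
  done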

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1731735/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1731735] Re: zfs scrub starts on all pools simultaneously

2017-11-12 Thread Richard Laager
Why do you have multiple pools on the same disks? That's very much not a
best practice, or even a typical ZFS installation.

** Changed in: zfs-linux (Ubuntu)
   Status: Confirmed => Incomplete

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1731735

Title:
  zfs scrub starts on all pools simultaneously

Status in zfs-linux package in Ubuntu:
  Incomplete

Bug description:
  # Environment

  Description:Ubuntu 16.04.3 LTS
  Release:16.04

  Linux 4.10.0-38-generic-tuxonice #42~ppa1-Ubuntu SMP Mon Oct 30
  20:21:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

  zfsutils-linux  0.6.5.6-0ubuntu18 amd64

  
  # Current behaviour

  `/usr/lib/zfs-linux/scrub` starts `zpool scrub` on all pools at the same
  time. If the pools are located on the same disk, scrub performance
  degrades badly.

  # Proposed behaviour

  * simplest one - start the scrub of one pool only after the previous
    scrub has finished
  * advanced - detect pools which are located on the same disk and start
    scrubs on them sequentially; if they are on different disks, it is fine
    to run them in parallel

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1731735/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1550301] Re: ZFS: Set elevator=noop on disks in the root pool

2017-11-01 Thread Richard Laager
I submitted a fix upstream:
https://github.com/zfsonlinux/zfs/pull/6807/

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1550301

Title:
  ZFS: Set elevator=noop on disks in the root pool

Status in zfs-linux package in Ubuntu:
  Won't Fix
Status in zfs-linux source package in Xenial:
  Fix Released
Status in zfs-linux source package in Zesty:
  Won't Fix

Bug description:
  ZFS-on-Linux has its own I/O scheduler, so it sets the "noop" elevator
  on whole disks used in a pool.
  https://github.com/zfsonlinux/zfs/issues/90

  It does not set the scheduler for a disk if a partition is used in a
  pool out of respect for the possibility that there are non-ZFS
  partitions on the same disk.
  https://github.com/zfsonlinux/zfs/issues/152

  For regular pools, the recommendation is to use whole disks. For root
  pools, it's just the opposite: the typical case is that partitions are
  used. And, for root pools, it is unlikely that the same disks have
  non-ZFS filesystems.
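
  For root-on-ZFS users who want to check or set this by hand, the elevator
  is exposed through sysfs; the device name below is only an example:

  # The active scheduler is shown in brackets.
  cat /sys/block/sda/queue/scheduler
  # Select the noop elevator for that disk (as root).
  echo noop > /sys/block/sda/queue/scheduler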

  The debdiff in comment #5 applies cleanly to the latest package and
  functions correctly. This is an important change for root-on-ZFS
  users. It has no effect on non-root-on-ZFS installs, because the code
  is only in the zfs-initramfs package.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1550301/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1685528] Re: ZFS initramfs mounts dataset explicitly set not to be mounted, causing boot process to fail

2017-11-01 Thread Richard Laager
samvde, can you provide your `zfs list` output? The script seems
designed to only import filesystems *below* the filesystem that is the
root filesystem. In the typical case, the root filesystem is something
like rpool/ROOT/ubuntu. There typically shouldn't be children of
rpool/ROOT/ubuntu.
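
Something along these lines would show enough to tell how the datasets are
laid out relative to the root filesystem (the pool name is just the usual
example):

zfs list -o name,canmount,mountpoint -r rpool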

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1685528

Title:
  ZFS initramfs mounts dataset explicitly set not to be mounted, causing
  boot process to fail

Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  Per https://github.com/zfsonlinux/pkg-zfs/issues/221: the initramfs
  zfs script might overrule the canmount and mountpoint options for a
  dataset, causing other mount operations, and with them the boot
  process, to fail.

  Experienced this with Ubuntu Zesty. Xenial seems to ship with a
  different zfs script for the initrd.

  Work around when it happens: unmount the dataset that should not be
  mounted, and exit the initramfs rescue prompt to resume booting.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1685528/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1685528] Re: ZFS initramfs mounts dataset explicitly set not to be mounted, causing boot process to fail

2017-07-06 Thread Richard Laager
Copying the script is probably fine for now. I still intend to look at
this, hopefully in the next month or so. It's been relatively low on my
list, since LTS releases are my main priority.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1685528

Title:
  ZFS initramfs mounts dataset explicitly set not to be mounted, causing
  boot process to fail

Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  Per https://github.com/zfsonlinux/pkg-zfs/issues/221: the initramfs
  zfs script might overrule the canmount and mountpoint options for a
  dataset, causing other mount operations, and with them the boot
  process, to fail.

  Experienced this with Ubuntu Zesty. Xenial seems to ship with a
  different zfs script for the initrd.

  Work around when it happens: unmount the dataset that should not be
  mounted, and exit the initramfs rescue prompt to resume booting.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1685528/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1694090] Re: ZoL: wrong import order prevents boot

2017-05-29 Thread Richard Laager
The output only shows the results, not the order of mount attempts. It
may be the case that there is an ordering bug here. But we need to rule
out the other case first. It could easily be the case that the directory
is non-empty, so the /var/share mount failed even though it was properly
attempted first.

Try manually unmounting everything under /var/share. Then rmdir the
empty lxc directory. Once you are certain that /var/share is empty, re-
run zfs mount -a.
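
Using the dataset names from the bug description, that amounts to roughly
the following (a sketch, not a tested recipe):

# Unmount the children, deepest first.
zfs unmount rpool/VARSHARE/lxc/xenial/rootfs-amd64
zfs unmount rpool/VARSHARE/lxc/xenial/pkg
zfs unmount rpool/VARSHARE/lxc/xenial
zfs unmount rpool/VARSHARE/lxc
# rmdir only succeeds if the leftover directory is really empty.
rmdir /var/share/lxc
# Confirm nothing else is left under the mountpoint, then retry.
ls -A /var/share
zfs mount -a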

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1694090

Title:
  ZoL: wrong import order prevents boot

Status in zfs-linux package in Ubuntu:
  Incomplete

Bug description:
  I have the following ZFS datasets:

  # zfs list -r rpool/VARSHARE
  NAME                                     USED  AVAIL  REFER  MOUNTPOINT
  rpool/VARSHARE                           114K   165G    30K  /var/share
  rpool/VARSHARE/lxc                        84K   165G    19K  /var/share/lxc
  rpool/VARSHARE/lxc/xenial                 65K   165G    19K  /var/share/lxc/xenial
  rpool/VARSHARE/lxc/xenial/pkg             19K   165G    19K  /var/share/lxc/xenial/pkg
  rpool/VARSHARE/lxc/xenial/rootfs-amd64    27K   165G    27K  /var/share/lxc/xenial/rootfs-amd64

  On boot, we see

   Starting Mount ZFS filesystems...
  [FAILED] Failed to start Mount ZFS filesystems.
  See 'systemctl status zfs-mount.service' for details.
  Welcome to emergency mode! [...] Press Enter for maintenance
  (or press Control-D to continue): 

  
  # df -h /var/share
  rpool/VARSHARE/lxc                      165G     0  165G   0% /var/share/lxc
  rpool/VARSHARE/lxc/xenial               165G     0  165G   0% /var/share/lxc/xenial
  rpool/VARSHARE/lxc/xenial/pkg           165G     0  165G   0% /var/share/lxc/xenial/pkg
  rpool/VARSHARE/lxc/xenial/rootfs-amd64  165G     0  165G   0% /var/share/lxc/xenial/rootfs-amd64

  
  Obviously rpool/VARSHARE - the parent of rpool/VARSHARE/lxc - was not
  mounted, even though the canmount property is set to on for all of them,
  rpool/VARSHARE's mountpoint is /var/share, and the children of
  rpool/VARSHARE/lxc inherit their mountpoint.

  
  # systemctl status zfs-mount.service
  ● zfs-mount.service - Mount ZFS filesystems
     Loaded: loaded (/lib/systemd/system/zfs-mount.service; static; vendor preset: enabled)
     Active: failed (Result: exit-code) since Sun 2017-05-28 04:51:46 CEST; 13min ago
    Process: 6935 ExecStart=/sbin/zfs mount -a (code=exited, status=1/FAILURE)
   Main PID: 6935 (code=exited, status=1/FAILURE)

  May 28 04:51:45 ares systemd[1]: Starting Mount ZFS filesystems...
  May 28 04:51:45 ares zfs[6935]: cannot mount '/var/share': directory is not empty
  May 28 04:51:46 ares systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
  May 28 04:51:46 ares systemd[1]: Failed to start Mount ZFS filesystems.
  May 28 04:51:46 ares systemd[1]: zfs-mount.service: Unit entered failed state.
  May 28 04:51:46 ares systemd[1]: zfs-mount.service: Failed with result 'exit-code'.

  So 'zfs mount ...' seems to be severely buggy.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1694090/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1694090] Re: ZoL: wrong import order prevents boot

2017-05-27 Thread Richard Laager
This is the problem:
'/var/share': directory is not empty

Figure out what is in there and deal with this appropriately.

I don't personally love that ZFS refuses to mount on non-empty
directories, but most of the time, the fact that the directory is non-
empty is the real problem.
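
For example, since /var/share itself is the mount that failed, listing the
bare directory shows exactly what is blocking it:

ls -la /var/share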

** Changed in: zfs-linux (Ubuntu)
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1694090

Title:
  ZoL: wrong import order prevents boot

Status in zfs-linux package in Ubuntu:
  Incomplete

Bug description:
  I have the following ZFS datasets:

  # zfs list -r rpool/VARSHARE
  NAME                                     USED  AVAIL  REFER  MOUNTPOINT
  rpool/VARSHARE                           114K   165G    30K  /var/share
  rpool/VARSHARE/lxc                        84K   165G    19K  /var/share/lxc
  rpool/VARSHARE/lxc/xenial                 65K   165G    19K  /var/share/lxc/xenial
  rpool/VARSHARE/lxc/xenial/pkg             19K   165G    19K  /var/share/lxc/xenial/pkg
  rpool/VARSHARE/lxc/xenial/rootfs-amd64    27K   165G    27K  /var/share/lxc/xenial/rootfs-amd64

  On boot, we see

   Starting Mount ZFS filesystems...
  [FAILED] Failed to start Mount ZFS filesystems.
  See 'systemctl status zfs-mount.service' for details.
  Welcome to emergency mode! [...] Press Enter for maintenance
  (or press Control-D to continue): 

  
  # df -h /var/share
  rpool/VARSHARE/lxc                      165G     0  165G   0% /var/share/lxc
  rpool/VARSHARE/lxc/xenial               165G     0  165G   0% /var/share/lxc/xenial
  rpool/VARSHARE/lxc/xenial/pkg           165G     0  165G   0% /var/share/lxc/xenial/pkg
  rpool/VARSHARE/lxc/xenial/rootfs-amd64  165G     0  165G   0% /var/share/lxc/xenial/rootfs-amd64

  
  Obviously rpool/VARSHARE - the parent of rpool/VARSHARE/lxc - was not
  mounted, even though the canmount property is set to on for all of them,
  rpool/VARSHARE's mountpoint is /var/share, and the children of
  rpool/VARSHARE/lxc inherit their mountpoint.

  
  # systemctl status zfs-mount.service
  ● zfs-mount.service - Mount ZFS filesystems
     Loaded: loaded (/lib/systemd/system/zfs-mount.service; static; vendor preset: enabled)
     Active: failed (Result: exit-code) since Sun 2017-05-28 04:51:46 CEST; 13min ago
    Process: 6935 ExecStart=/sbin/zfs mount -a (code=exited, status=1/FAILURE)
   Main PID: 6935 (code=exited, status=1/FAILURE)

  May 28 04:51:45 ares systemd[1]: Starting Mount ZFS filesystems...
  May 28 04:51:45 ares zfs[6935]: cannot mount '/var/share': directory is not empty
  May 28 04:51:46 ares systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
  May 28 04:51:46 ares systemd[1]: Failed to start Mount ZFS filesystems.
  May 28 04:51:46 ares systemd[1]: zfs-mount.service: Unit entered failed state.
  May 28 04:51:46 ares systemd[1]: zfs-mount.service: Failed with result 'exit-code'.

  So 'zfs mount ...' seems to be severely buggy.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1694090/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp

