[Bug 1860228] Re: addition of zfsutils-linux scrub every 2nd Sunday

2024-07-24 Thread Richard Laager
[This comment was seemingly hidden?]

> The real question is, why is "[ $(date +\%w) -eq 0 ]" in there, when cron can do day-of-week like:
>
> 24 0 8-14 * 0 root [ -x /usr/lib/zfs-linux/scrub ] && /usr/lib/zfs-linux/scrub

This is because if you specify the "day of month" and the "day of week"
fields, they are ORed, not ANDed. From crontab(5), "If both fields are
restricted (i.e., aren't *), the command will be run when either field
matches the current time."
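
To make the difference concrete, here is roughly what the shipped cron job does versus the day-of-week-field variant (a sketch; the exact zfsutils-linux entry may differ in detail):

# AND semantics: runs at 00:24 on days 8-14, and the date test inside the
# command restricts it to Sunday, i.e. the second Sunday of the month:
24 0 8-14 * * root [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ] && /usr/lib/zfs-linux/scrub

# OR semantics: with both fields restricted, this runs on days 8-14 of
# every month AND ALSO on every Sunday:
24 0 8-14 * 0 root [ -x /usr/lib/zfs-linux/scrub ] && /usr/lib/zfs-linux/scrub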

[Bug 2055114] Re: fail2ban is broken in 24.04 Noble

2024-06-25 Thread Richard Laager
Note that fail2ban is in universe, not main. This was surprising to me,
and something I only realized because of this bug. I too think of
fail2ban as a core security component. I wish Ubuntu would promote it to
main, but that's a different conversation.

Traditionally, being in universe has meant that support is "best
effort". In my opinion, that was generally security/CVE support at most.
Canonical has recently announced expanded support of packages in
universe, which is great. However, I share your concern that they may
not be able to keep up with all of the packages in universe. Time will
tell.

[Bug 2055114] Re: fail2ban is broken in 24.04 Noble

2024-06-03 Thread Richard Laager
I tested (rebuilt in a PPA) the version from:
https://launchpadlibrarian.net/731722634/fail2ban_1.0.2-3_1.0.2-3ubuntu1.24.04.1.diff.gz

It works for me. I can't mark this verification-done, as I didn't use
the actual version from -proposed (since it isn't available there yet).

[Bug 2055114] Re: fail2ban is broken in 24.04 Noble

2024-05-29 Thread Richard Laager
@ghadi-rahme:

The version in the changelog is wrong. You have "1.0.2-ubuntu1", which
should presumably be "1.0.2-3ubuntu1". You are missing the "3" after the
dash.

Also, configure-setup-to-install-fail2ban.compat.patch does not apply
cleanly. Your version has spaces throughout the whole patch (both the
context lines and the line you are adding), where the code in the
package uses tabs.

[Bug 1771740] Re: Expose link offload options

2022-02-22 Thread Richard Laager
Upon further investigation, I see that the systemd networkd settings
have similar documentation only listing true and unset. But the systemd
NEWS file explicitly talks about disabling and the settings are parsed
in networkd using config_parse_tristate, so I think networkd properly
handles =0 on these options. (I'm still trying to test it.)
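
For anyone else testing, here is a minimal .link sketch that exercises the =0 path (TCPSegmentationOffload= and eth0 are illustrative choices, not taken from this bug):

# /etc/systemd/network/10-offload-test.link
[Match]
OriginalName=eth0

[Link]
# Tristate: unset keeps the driver default, 1 enables, 0 should disable.
TCPSegmentationOffload=0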

[Bug 1771740] Re: Expose link offload options

2022-02-22 Thread Richard Laager
This change does NOT fix the issue from the [Impact] statement. The
[Impact] talks about disabling offload, but the test case talks about
enabling offload. The patch only implements enabling offload, not
disabling it.

** Changed in: netplan
   Status: Fix Committed => Confirmed

[Bug 1940916] Re: Incorrectly excludes tmpfs filesystems

2022-02-09 Thread Richard Laager
** Tags removed: verification-needed-focal
** Tags added: verification-done-focal

[Bug 1796047] Re: update-ieee-data throws error because of wrong url

2022-01-31 Thread Richard Laager
** Bug watch added: Debian Bug tracker #1004709
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1004709

** Also affects: ieee-data (Debian) via
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1004709
   Importance: Unknown
   Status: Unknown

[Bug 1958481] Re: check_disk forcibly ignores tmpfs

2022-01-19 Thread Richard Laager
On the stock version, tmpfs filesystems do not show up, even if I
specify -X:

$ /usr/lib/nagios/plugins/check_disk -w 10 -c 10
DISK OK - free space: /dev 5944 MB (100% inode=99%); / 2357 MB (28% inode=75%); /srv 17618 MB (94% inode=99%); /boot/efi 498 MB (98% inode=-);| /dev=0MB;5934;5934;0;5944 /=5993MB;8807;8807;0;8817 /srv=1009MB;18634;18634;0;18644 /boot/efi=5MB;494;494;0;504

$ /usr/lib/nagios/plugins/check_disk -w 10 -c 10 -X devpts
DISK OK - free space: /dev 5944 MB (100% inode=99%); / 2357 MB (28% inode=75%); /srv 17618 MB (94% inode=99%); /boot/efi 498 MB (98% inode=-);| /dev=0MB;5934;5934;0;5944 /=5993MB;8807;8807;0;8817 /srv=1009MB;18634;18634;0;18644 /boot/efi=5MB;494;494;0;504

With this version using my alternative patch, they still don't show up by default, but do if I specify something with -X:

$ ./check_disk -w 10 -c 10
DISK OK - free space: /dev 5944 MB (100% inode=99%); / 2357 MB (28% inode=75%); /srv 17618 MB (94% inode=99%); /boot/efi 498 MB (98% inode=-);| /dev=0MB;5934;5934;0;5944 /=5992MB;8807;8807;0;8817 /srv=1009MB;18634;18634;0;18644 /boot/efi=5MB;494;494;0;504

$ ./check_disk -w 10 -c 10 -X devpts
DISK CRITICAL - free space: /dev 5944 MB (100% inode=99%); /run 1076 MB (89% inode=99%); / 2357 MB (28% inode=75%); /dev/shm 5987 MB (100% inode=99%); /run/lock 5 MB (100% inode=99%); /sys/fs/cgroup 5987 MB (100% inode=99%); /tmp 5986 MB (99% inode=99%); /srv 17618 MB (94% inode=99%); /boot/efi 498 MB (98% inode=-); /run/user/25045 1197 MB (100% inode=99%);| /dev=0MB;5934;5934;0;5944 /run=120MB;1187;1187;0;1197 /=5992MB;8807;8807;0;8817 /dev/shm=0MB;5977;5977;0;5987 /run/lock=0MB;-5;-5;0;5 /sys/fs/cgroup=0MB;5977;5977;0;5987 /tmp=0MB;5977;5977;0;5987 /srv=1009MB;18634;18634;0;18644 /boot/efi=5MB;494;494;0;504 /run/user/25045=0MB;1187;1187;0;1197

[Bug 1958481] Re: check_disk forcibly ignores tmpfs

2022-01-19 Thread Richard Laager
** Patch added: "An updated version of the patch with my alternative solution"
   https://bugs.launchpad.net/ubuntu/+source/monitoring-plugins/+bug/1958481/+attachment/736/+files/exclude-tmpfs-squashfs-tracefs.patch

[Bug 1958481] [NEW] check_disk forcibly ignores tmpfs

2022-01-19 Thread Richard Laager
Public bug reported:

check_disk ignores tmpfs filesystems due to an Ubuntu patch
(debian/patches/exclude-tmpfs-squashfs-tracefs.patch) added for LP
#1827159. This is a bad idea.

On my servers, I have a tmpfs mounted at /tmp. Last night, /tmp filled
up on one of them, resulting in significant breakage. My Icinga disk
alerting did not notify me, because it's ignoring all tmpfs filesystems.
I looked to see why and found that ignoring tmpfs is the Icinga default,
so I adjusted that in my configuration. Unfortunately, that still didn't
work. I eventually tracked that down to this patch. Because this tmpfs
exclusion has been hardcoded in, it is impossible for me to override
this decision.

While I think you should simply remove tmpfs from that patch, an
alternative approach would be to move the code lower, after the
arguments have been parsed, and conditionalize it. Something like this:

if (!fs_exclude_list) {
  np_add_name(&fs_exclude_list, "squashfs");
  np_add_name(&fs_exclude_list, "tmpfs");
  np_add_name(&fs_exclude_list, "tracefs");
}
np_add_name(&fs_exclude_list, "iso9660");

Note that iso9660 is outside the conditional and thus always added, matching upstream behavior. The default behavior would then be the same as with the current patch, but if the user customizes the exclude list with -X (e.g. in Icinga, or just via the command line), that customization would be honored.

** Affects: monitoring-plugins (Ubuntu)
 Importance: Undecided
 Status: New

[Bug 1892108] Re: ping prints ip address octets backwards on host redirect

2021-08-04 Thread Richard Laager
I was able to verify this is fixed in iputils-ping 20210202-1. That is,
I saw this same problem, grabbed those sources from Debian, built them,
and tested again. Accordingly, this should already be fixed in Ubuntu
impish.

[Bug 1909950] Re: named: TCP connections sometimes never close due to race in socket teardown

2021-02-25 Thread Richard Laager
> I will also write back in a few days time with feedback from a user,
> who is testing this fixed package in production.

That user is me. I've been running 1:9.16.1-0ubuntu2.7 on an ISP production recursive server "since Fri 2021-02-19 17:44:17 CST; 5 days ago" (per systemd). The system remains stable. The TCP numbers in "rndc status" look good, compared to when we were experiencing the problem of named hitting the TCP client limit due to sockets not closing.

[Bug 1913342] Re: zfs.8 man page snapshot listing instructions are confusing

2021-01-26 Thread Richard Laager
listsnaps is an alias of listsnapshots, but you're right that it's on
the pool.

Can you file this upstream:
https://github.com/openzfs/zfs/issues/new/choose

If you want, you could take a stab at submitting a pull request. It's a pretty simple-sounding change. The repo is here: https://github.com/openzfs/zfs and the man pages are in the "man" subdirectory.

For your "Extra Credit" piece, "zfs list -t filesystem,snapshot" shows
both filesystems and snapshots for everything. "zfs list -t snapshot
dataset" shows snapshots for the specified dataset. But if you combine
those together as "zfs list -t filesystem,snapshot dataset", you do not
get snapshots. However, "zfs list -t filesystem,snapshot -r dataset"
does show the snapshots. Whether that's a bug or not, I can't say. But
that's a more detailed explanation of that problem that will be helpful
if you file a bug report on that.
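
As a concrete sketch of those four cases (tank/data is a placeholder dataset):

zfs list -t filesystem,snapshot               # filesystems and snapshots, pool-wide
zfs list -t snapshot tank/data                # snapshots of tank/data
zfs list -t filesystem,snapshot tank/data     # surprisingly, no snapshots shown
zfs list -t filesystem,snapshot -r tank/data  # snapshots shown again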

[Bug 1908473] Re: rsyslog-relp: imrelp module leaves sockets in CLOSE_WAIT state which leads to file descriptor leak

2021-01-20 Thread Richard Laager
** Tags removed: verification-needed
** Tags added: verification-done

[Bug 1908473] Re: rsyslog-relp: imrelp module leaves sockets in CLOSE_WAIT state which leads to file descriptor leak

2021-01-20 Thread Richard Laager
I tested this on Focal. I installed librelp0 and restarted rsyslog. Prior to the change, sockets were stacking up in CLOSE-WAIT (both from normal use and from the netcat test). After the change, sockets are being closed correctly.

** Tags removed: verification-needed-focal
** Tags added: verification-done-focal

[Bug 1909950] Re: TCP connections never close

2021-01-20 Thread Richard Laager
** Changed in: bind9 (Ubuntu)
   Status: Confirmed => New

[Bug 1908473] Re: rsyslog-relp: imrelp module leaves sockets in CLOSE_WAIT state which leads to file descriptor leak

2021-01-06 Thread Richard Laager
The test package fixes the issue for me.

[Bug 1854480] Re: zfs_arc_max not working anymore in zfs 0.8.1

2020-12-16 Thread Richard Laager
The limit in the code does seem to be 64 MiB. I'm not sure why this
isn't working. I am not even close to an expert on this part of OpenZFS,
so all I can suggest is to file a bug report upstream:
https://github.com/openzfs/zfs/issues/new

[Bug 1901784] Re: configuration file without pid causes "Bad magic in options.c, line 1059" crash on SIGHUP

2020-12-08 Thread Richard Laager
I hit this bug. The analysis here appears correct to me. PIDFILE is a
static string (via a preprocessor define). The suggested fix of calling
str_dup() sounds correct.

Adding this to the top of a stunnel config file is a work-around:
pid = /var/run/stunnel4.pid

[Bug 1906542] Re: echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds says permission error, unable to repair lost zfs pool data

2020-12-03 Thread Richard Laager
device_removal only works if you can import the pool normally. That is
what you should have used after you accidentally added the second disk
as another top-level vdev. Whatever you have done in the interim,
though, has resulted in the second device showing as FAULTED. Unless you
can fix that, device_removal is not an option. I had hoped that you just
had the second drive unplugged or something. But since the import is
showing "corrupted data" for the second drive, that's probably not what
happened.

This works for me on Ubuntu 20.04:
echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds

That setting does not exist on Ubuntu 18.04 (which you are running), so
I get the same "Permission denied" error (because bash is trying to
create that file, which you cannot do).

I now see this is an rpool. Is your plan to reinstall? With 18.04 or
20.04?

If 18.04, then:
1. Download the 20.04.1 live image. Write it to a USB disk and boot into that.
2. In the live environment, install the ZFS tools: sudo apt install zfsutils-linux
3. echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds
4. mkdir /old
5. Import the old pool, renaming it to rpool-old, and mount filesystems:
   zpool import -o readonly=on -N -R /old rpool rpool-old
   zfs mount rpool-old/ROOT/ubuntu
   zfs mount -a
6. Confirm you can access your data. Take another backup, if desired. If you don't have space to back it up besides the new/second disk, then read on...
7. Follow the 18.04 Root-on-ZFS HOWTO using (only) the second disk. Be very careful not to partition or zpool create the disk with your data!!! For example, partition the second disk for the mirror scenario. But obviously you can't do zpool create with "mirror" because you have only one disk.
8. Once the new system is installed (i.e. after step 6.2), but before rebooting, copy data from /old to /mnt as needed.
9. Shut down. Disconnect the old disk. Boot up again.
10. Continue the install as normal.
11. When you are certain that everything is good and that the new disk is working properly (maybe do a scrub) and you have all your data, then you can connect the old disk and do the zpool attach (ATTACH, not add) to attach the old disk to the new pool as a mirror.

If 20.04, then I'd do this instead:
1. Unplug the disk with your data.
2. Follow the 20.04 Root-on-ZFS HOWTO using only the second disk. Follow the steps as if you were mirroring (since that is the ultimate goal) where possible. For example, partition the second disk for the mirror scenario. But obviously you can't do zpool create with "mirror" because you have only one disk.
3. Once the new, 20.04 system is working on the second disk and booting normally, connect the other, old drive. (This assumes you can connect it while the system is running.)
4. echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds
5. Import the old pool using its GUID, renaming it to rpool-old, and mount filesystems:
   zpool import -o readonly=on -N -R /mnt 5077426391014001687 rpool-old
   zfs mount rpool-old/ROOT/ubuntu
   zfs mount -a
6. Copy over data.
7. zpool export rpool-old
8. When you are certain that everything is good and that the new disk is working properly (maybe do a scrub) and you have all your data, then you can do the zpool attach (ATTACH, not add) to attach the old disk to the new pool as a mirror.

[Bug 1906542] Re: echo 1 >> /sys/module/zfs/parameters/zfs_max_missing_tvds says permission error, unable to repair lost zfs pool data

2020-12-03 Thread Richard Laager
Why is the second disk missing? If you accidentally added it and ended
up with a striped pool, as long as both disks are connected, you can
import the pool normally. Then use the new device_removal feature to
remove the new disk from the pool.

If you've done something crazy like pulled the disk and wiped it, then yeah, you're going to need to figure out how to import the pool read-only. I don't have any advice on that piece.

[Bug 1899249] Re: OpenZFS writing stalls, under load

2020-10-12 Thread Richard Laager
You could shrink the DDT by making a copy of the files in place (with
dedup off) and deleting the old file. That only requires enough extra
space for a single file at a time. This assumes no snapshots.

If you need to preserve snapshots, another option would be to send|recv
a dataset at a time. If you have enough free space for a copy of the
largest dataset, this would work.
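
For the first option, a minimal sketch (assuming a dataset tank/data mounted at /tank/data with no snapshots; all names here are placeholders):

zfs set dedup=off tank/data
# Rewrite each file in place: the copy is written without dedup, and
# replacing the original frees its DDT entries.
find /tank/data -type f -exec sh -c 'cp -p "$1" "$1.tmp" && mv "$1.tmp" "$1"' _ {} \;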

[Bug 1899249] Re: OpenZFS writing stalls, under load

2020-10-11 Thread Richard Laager
Did you destroy and recreate the pool after disabling dedup? Otherwise
you still have the same dedup table and haven’t really accomplished
much.

[Bug 1852747] Re: mdcheck_start.service trying to start nonexistent file

2020-10-04 Thread Richard Laager
The "natural start" succeeded on all 4 of my systems. The start times
were 01:41, 10:50, 18:11, and 21:43.

** Tags removed: verification-needed verification-needed-focal
** Tags added: verification-done verification-done-focal

[Bug 1852747] Re: mdcheck_start.service trying to start nonexistent file

2020-09-28 Thread Richard Laager
I repeated my same test procedure. Everything worked as expected.

[Bug 1852747] Re: mdcheck_start.service trying to start nonexistent file

2020-09-28 Thread Richard Laager
It might be mad about the extra space after the equals. Note that it is complaining about the empty string. If it is splitting by spaces, that would explain it.

[Bug 1852747] Re: mdcheck_start.service trying to start nonexistent file

2020-09-24 Thread Richard Laager
Yeah, I can confirm that's broken too. Here is the fix:
https://git.kernel.org/pub/scm/utils/mdadm/mdadm.git/commit/?id=6636788aaf4ec0cacaefb6e77592e4a68e70a957

[Bug 1852747] Re: mdcheck_start.service trying to start nonexistent file

2020-09-24 Thread Richard Laager
It was trivial, so I sent in the patches. I didn't change `...` to
$(...) as I don't care to argue with them about that. We'll see what
upstream says.

[Bug 1852747] Re: mdcheck_start.service trying to start nonexistent file

2020-09-24 Thread Richard Laager
I installed the update on 4 basically identical systems (note to self:
hostnames starting with g, k, r, w):

I enabled -proposed and installed the package:

sudo vi /etc/apt/sources.list.d/ubuntu-proposed.list
sudo apt update
sudo apt install mdadm=4.1-5ubuntu1.1


I tested the scrub on one system (hostname starts with k):

# In another terminal:
watch cat /proc/mdstat

sudo systemctl start mdcheck_start.service &
# This started the scrub.

In ~20 minutes, the scrub completed and the service stopped ~1 min
thereafter. (This makes sense given the "sleep 120" that the script
uses.)

Logs:
2020-09-24T12:28:56.517769-05:00 k... root: mdcheck start checking /dev/md0
2020-09-24T12:50:56.665042-05:00 k... root: mdcheck finished checking /dev/md0


I tested the continue script on one other system (hostname starts with r):

# I changed the time from 6 hours to 3 minutes with a drop-in unit file.
sudo mkdir /etc/systemd/system/mdcheck_start.service.d
sudo vi /etc/systemd/system/mdcheck_start.service.d/time.conf
sudo systemctl daemon-reload
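
A drop-in along these lines does it (a sketch, assuming the unit's stock ExecStart is /usr/share/mdadm/mdcheck --duration "6 hours" as in upstream mdadm; the empty ExecStart= clears the original before re-setting it):

# /etc/systemd/system/mdcheck_start.service.d/time.conf
[Service]
ExecStart=
ExecStart=/usr/share/mdadm/mdcheck --duration "3 minutes"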

# In another terminal:
watch cat /proc/mdstat

sudo systemctl start mdcheck_start.service &
# This started the scrub.

watch systemctl status mdcheck_start.service
# Again, the script uses a "sleep 120" (two minutes), so at the 4 minute mark
# the service stopped, as did the scrub.

sudo systemctl start mdcheck_continue.service &
watch systemctl status mdcheck_start.service
# The scrub started where it left off.
# The time on this was still the default of 6 hours.
# After another ~18 minutes, the scrub completed.

sudo rm /etc/systemd/system/mdcheck_start.service.d/time.conf
sudo rmdir /etc/systemd/system/mdcheck_start.service.d
sudo systemctl daemon-reload

sudo systemctl start mdcheck_start.service &
# This started the scrub.

watch systemctl status mdcheck_start.service
# After ~20 minutes, the scrub completed and the service stopped.

Logs:
2020-09-24T12:14:56.204254-05:00 r... root: mdcheck start checking /dev/md0
2020-09-24T12:17:37.912431-05:00 r... root: mdcheck start checking /dev/md0
2020-09-24T12:21:38.282462-05:00 r... root: pause checking /dev/md0 at 95207168
2020-09-24T12:21:50.636301-05:00 r... root: mdcheck continue checking /dev/md0 
from 95207168
2020-09-24T12:39:50.737671-05:00 r... root: mdcheck finished checking /dev/md0
2020-09-24T12:41:03.127050-05:00 r... root: mdcheck start checking /dev/md0
2020-09-24T13:03:03.243179-05:00 r... root: mdcheck finished checking /dev/md0


I have NOT marked this verification-done, as I believe you wanted to see
a "natural" start of the service on October 4 before calling the testing
complete. I will leave these systems as is. That will give us 4 examples
of it, two of which were not touched in any special way at all (other
than installing the update from -proposed, of course). However, if you
want to mark this verification-done now, or ask me to do it, I am not
opposed to that either.

[Bug 1852747] Re: mdcheck_start.service trying to start nonexistent file

2020-09-24 Thread Richard Laager
[The following is probably outside the scope of this SRU, but since this
will be the first time that people see this logging, maybe you do want
to improve it now.]

The existing log statements are:

logger -p daemon.info mdcheck start checking $dev
logger -p daemon.info mdcheck continue checking $dev from $start
logger -p daemon.info mdcheck finished checking $dev
logger -p daemon.info pause checking $dev at `cat $fl`

Some issues:
1. The last one does not contain "mdcheck", which is inconsistent and hampers 
grepping.
2. These do not set a "tag", so we get "root" as the tag. The typical syslog 
convention is that the tag is the daemon/script name. I propose "-t mdcheck". 
That can just be the "mdcheck" that starts the log messages now; there is no 
need for two "mdcheck"s.
3. nit: I'd use $() instead of ``.

That is, I would change them to the following:

logger -p daemon.info -t mdcheck start checking $dev
logger -p daemon.info -t mdcheck continue checking $dev from $start
logger -p daemon.info -t mdcheck finished checking $dev
logger -p daemon.info -t mdcheck pause checking $dev at $(cat $fl)

[Bug 1852747] Re: mdcheck_start.service trying to start nonexistent file

2020-09-23 Thread Richard Laager
I have tested the fix on Focal and confirmed it works. Here is a link to the 
diff in our PPA:
https://launchpadlibrarian.net/498490932/mdadm_4.1-5ubuntu1_4.1-5ubuntu1.1~wiktel1.20.04.1.diff.gz

[Bug 1852747] Re: mdcheck_start.service trying to start nonexistent file

2020-09-14 Thread Richard Laager
Unfortunately, we are past the DebianImportFreeze for groovy. Can you
apply the one-line bug fix to Groovy so that it can then SRU into Focal?

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=960132#15

[Bug 1893900] Re: ModuleNotFoundError: No module named 'distutils.sysconfig'

2020-09-02 Thread Richard Laager
That sounds like a missing dependency on python3-distutils.

But unless you're running a custom kernel, Ubuntu is shipping the ZFS module 
now:
https://bugs.launchpad.net/ubuntu/+source/linux-raspi/+bug/1884110

[Bug 1872118] Re: [SRU] DHCP Cluster crashes after a few hours

2020-08-17 Thread Richard Laager
Likewise, it's been stable for 24 hours here.

[Bug 1872118] Re: [SRU] DHCP Cluster crashes after a few hours

2020-08-16 Thread Richard Laager
First I reverted isc-dhcp-server back to the original focal version, since I had an updated version from the PPA:

$ sudo apt install isc-dhcp-server=4.4.1-2.1ubuntu5 isc-dhcp-common=4.4.1-2.1ubuntu5

Then I installed the updated packages:

$ sudo apt update
$ sudo apt install libdns-export1109/focal-proposed libirs-export161/focal-proposed libisc-export1105/focal-proposed
$ dpkg --status libdns-export1109 libirs-export161 libisc-export1105 | grep Version
Version: 1:9.11.16+dfsg-3~ubuntu1
Version: 1:9.11.16+dfsg-3~ubuntu1
Version: 1:9.11.16+dfsg-3~ubuntu1

Then I restarted dhcpd:

$ sudo systemctl restart isc-dhcp-server

It has been running for four hours on both systems.

** Tags removed: verification-needed verification-needed-focal
** Tags added: verification-done verification-done-focal

[Bug 1872118] Re: [SRU] DHCP Cluster crashes after a few hours

2020-08-15 Thread Richard Laager
Andrew, 1:9.11.16+dfsg-3~build1 is wrong. The correct version is
1:9.11.16+dfsg-3~ubuntu1 (~ubuntu1 instead of ~build1).

[Bug 1872118] Re: [SRU] DHCP Cluster crashes after a few hours

2020-08-12 Thread Richard Laager
Excellent. I'm available to test the -proposed update for focal whenever
it is ready.

[Bug 1872118] Re: [SRU] DHCP Cluster crashes after a few hours

2020-08-11 Thread Richard Laager
Jorge, I agree with Gianfranco Costamagna that a rebuild of isc-dhcp is
NOT required. Why do you think it is?

Presumably BIND also uses these libraries? If so, it seems like the Test
Case should involve making sure BIND still seems to work, and that BIND
should be mentioned in the Regression Potential. My DHCP servers also
run BIND for recursive DNS and that has been fine with the patch
applied.

[Bug 1872118] Re: DHCP Cluster crashes after a few hours

2020-08-06 Thread Richard Laager
No crashes to report.

[Bug 1872118] Re: DHCP Cluster crashes after a few hours

2020-08-06 Thread Richard Laager
Jorge, it sounds like ISC might think there is a more fundamental issue here:
https://gitlab.isc.org/isc-projects/dhcp/-/issues/121#note_152804

** Bug watch added: gitlab.isc.org/isc-projects/dhcp/-/issues #121
   https://gitlab.isc.org/isc-projects/dhcp/-/issues/121

[Bug 1872118] Re: DHCP Cluster crashes after a few hours

2020-08-05 Thread Richard Laager
Jorge, I have been running for 25 hours on the patched version with no
crashes on either server.

[Bug 1872118] Re: DHCP Cluster crashes after a few hours

2020-08-04 Thread Richard Laager
I ran:
sudo apt install \
isc-dhcp-server=4.4.1-2.1ubuntu6~ppa1 \
libdns-export1109=1:9.11.16+dfsg-3~ppa1 \
libirs-export161=1:9.11.16+dfsg-3~ppa1 \
libisc-export1105=1:9.11.16+dfsg-3~ppa1 && \
sudo systemctl restart isc-dhcp-server

The restart at the end was just for extra good measure, to make sure I
was running on the new libraries.

I'm coming up on 3 hours running, which is a good sign.

[Bug 1872118] Re: DHCP Cluster crashes after a few hours

2020-08-01 Thread Richard Laager
** Bug watch added: gitlab.isc.org/isc-projects/dhcp/issues #128
   https://gitlab.isc.org/isc-projects/dhcp/issues/128

** Also affects: dhcp via
   https://gitlab.isc.org/isc-projects/dhcp/issues/128
   Importance: Unknown
   Status: Unknown

[Bug 1872118] Re: DHCP Cluster crashes after a few hours

2020-08-01 Thread Richard Laager
I was able to reproduce this with 4.4.2 plus the Ubuntu packaging. I did
not try with stock 4.4.2 from source.

[Bug 1888405] Re: zfsutils-linux: zfs-volume-wait.service fails with locked encrypted zvols

2020-08-01 Thread Richard Laager
I've posted this upstream (as a draft PR, pending testing) at:
https://github.com/openzfs/zfs/pull/10662

[Bug 1888405] Re: zfsutils-linux: zfs-volume-wait.service fails with locked encrypted zvols

2020-08-01 Thread Richard Laager
Here is a completely untested patch that takes a different approach to
the same issue. If this works, it seems more suitable for upstreaming,
as the existing list_zvols seems to be the place where properties are
checked. Can either of you test this? If this looks good, I'll submit it
upstream.

** Patch added: "0001-zvol_wait-Ignore-locked-zvols.patch"
   https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1888405/+attachment/5397735/+files/0001-zvol_wait-Ignore-locked-zvols.patch

[Bug 1888926] [NEW] tls.tlscfgcmd not recognized; rebuild rsyslog against librelp 1.5.0

2020-07-25 Thread Richard Laager
Public bug reported:

rsyslogd: error during parsing file /etc/rsyslog.d/FILENAME.conf, on or before line 22: imrelp: librelp does not support input parameter 'tls.tlscfgcmd'; it probably is too old (1.5.0 or higher should be fine); ignoring setting now. [v8.2001.0 try https://www.rsyslog.com/e/2207 ]

Here is the config:

module(load="imrelp" tls.tlslib="openssl")

input(
    type="imrelp" port="2515"
    tls="on"
    # This should work in rsyslog 8.2006.0:
    #tls.mycert="/etc/rsyslog.tls/fullchain.pem"
    # for now we use the work-around discussed in:
    # https://github.com/rsyslog/rsyslog/issues/4360
    tls.cacert="/etc/rsyslog.tls/chain.pem"
    tls.mycert="/etc/rsyslog.tls/cert.pem"
    tls.myprivkey="/etc/rsyslog.tls/privkey.pem"
    tls.tlscfgcmd="ServerPreference CipherString=ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384 Ciphersuites=TLS_AES_128_GCM_SHA256:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_256_GCM_SHA384 MinProtocol=TLSv1.2"
)

This error comes from this code in plugins/imrelp/imrelp.c:

#if defined(HAVE_RELPENGINESETTLSCFGCMD)
        inst->tlscfgcmd = (uchar*)es_str2cstr(pvals[i].val.d.estr, NULL);
#else
        parser_errmsg("imrelp: librelp does not support input parameter 'tls.tlscfgcmd'; "
                "it probably is too old (1.5.0 or higher should be fine); ignoring setting now.");
#endif


The build log for focal:
https://launchpadlibrarian.net/464665610/buildlog_ubuntu-focal-arm64.rsyslog_8.2001.0-1ubuntu1_BUILDING.txt.gz
says:
checking for relpSrvSetTlsConfigCmd... no
checking for relpSrvSetTlsConfigCmd... (cached) no


The build log for groovy:
https://launchpadlibrarian.net/486409321/buildlog_ubuntu-groovy-arm64.rsyslog_8.2006.0-2ubuntu1_BUILDING.txt.gz
says:
checking for relpSrvSetTlsConfigCmd... yes
checking for relpSrvSetTlsConfigCmd... (cached) yes

If I rebuild the rsyslog package, I get:
checking for relpSrvSetTlsConfigCmd... yes
checking for relpSrvSetTlsConfigCmd... (cached) yes

I suspect that the rsyslog package was built against an older librelp version. A simple rebuild of rsyslog should fix this, though a more complete fix would be to raise the Build-Depends from librelp-dev (>= 1.4.0) to librelp-dev (>= 1.5.0).

** Affects: rsyslog (Ubuntu)
 Importance: Undecided
 Status: New

[Bug 1718761] Re: It's not possible to use OverlayFS (mount -t overlay) to stack directories on a ZFS volume

2020-07-05 Thread Richard Laager
See also this upstream PR: https://github.com/openzfs/zfs/pull/9414
and the one before it: https://github.com/openzfs/zfs/pull/8667

[Bug 1875577] Re: Encrypted swap won't load on 20.04 with zfs root

2020-05-30 Thread Richard Laager
I have submitted this upstream:
https://github.com/openzfs/zfs/pull/10388

[Bug 1881442] [NEW] grub-initrd-fallback.service should RequiresMountsFor=/boot/grub

2020-05-30 Thread Richard Laager
Public bug reported:

grub-initrd-fallback.service should have:

[Unit]
RequiresMountsFor=/boot/grub

If /boot/grub is on a separate filesystem, this can run before that
filesystem is mounted and cause problems.
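
Until the packaged unit gains that, a local override drop-in works around it (a sketch; the file name is arbitrary):

# /etc/systemd/system/grub-initrd-fallback.service.d/mounts.conf
[Unit]
RequiresMountsFor=/boot/grub

# then: sudo systemctl daemon-reload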

** Affects: grub2 (Ubuntu)
 Importance: Undecided
 Status: New

[Bug 1779736] Re: umask ignored on NFSv4.2 mounts

2020-05-29 Thread Richard Laager
seth-arnold, the ZFS default is acltype=off, which means that ACLs are disabled. (I don't think the NFSv4 ACL support in ZFS is wired up on Linux.) It's not clear to me why this is breaking with ACLs off.

Re: [Bug 1881107] Re: zfs: backport AES-GCM performance accelleration

2020-05-28 Thread Richard Laager
There is another AES-GCM performance acceleration commit for systems
without MOVBE.

[Bug 1872863] Re: QEMU/KVM display is garbled when booting from kernel EFI stub due to missing bochs-drm module

2020-05-11 Thread Richard Laager
I have confirmed that the fix in -proposed fixes the issue for me.

[Bug 1875577] Re: Encrypted swap won't load on 20.04 with zfs root

2020-05-05 Thread Richard Laager
Can you share a bit more detail about how you have yours set up? What does your partition table look like, what does the MD config look like, what do you have in /etc/fstab for swap, etc.? I'm running into weird issues with this configuration, separate from this bug.

@didrocks: I'll try to get this proposed upstream soon. If you beat me
to it, I won't complain. :)

[Bug 1874519] Re: ZFS installation on Raspberry Pi is problematic

2020-05-05 Thread Richard Laager
I think it used to be the case that zfsutils-linux depended on zfs-dkms
which was then provided by the kernel packages. That seems like a way to
solve this. Given that dkms is for dynamic kernel modules, it was always
a bit weird to see the kernel providing that. It should probably be that
zfsutils-linux depends on zfs-dkms | zfs-module, and then the kernel
provides zfs-module.
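
In debian/control terms, a sketch of that relationship (everything
here other than the zfs-dkms | zfs-module alternation is
illustrative):

Package: zfsutils-linux
Depends: zfs-dkms | zfs-module, ${misc:Depends}

...and, in each kernel module package:

Provides: zfs-module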

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1874519

Title:
  ZFS installation on Raspberry Pi is problematic

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1874519/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1875577] Re: Encrypted swap won't load on 20.04 with zfs root

2020-05-05 Thread Richard Laager
I didn't get a chance to test the patch. I'm running into unrelated
issues.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1875577

Title:
  Encrypted swap won't load on 20.04 with zfs root

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1875577/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1875577] Re: Encrypted swap won't load on 20.04 with zfs root

2020-05-04 Thread Richard Laager
John Gray: Everything else aside, you should mirror your swap instead of
striping it (which I think is what you're doing). With your current
setup, if a disk dies, your system will crash.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1875577

Title:
  Encrypted swap won't load on 20.04 with zfs root

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1875577/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1875577] Re: Encrypted swap won't load on 20.04 with zfs root

2020-05-04 Thread Richard Laager
This is a tricky one because all of the dependencies make sense in
isolation. Even if we remove the dependency added by that upstream
OpenZFS commit, given that modern systems use zfs-mount-generator,
systemd-random-seed.service is going to Require= and After= var-
lib.mount because of its RequiresMountsFor=/var/lib/systemd/random-seed.
The generated var-lib.mount will be After=zfs-import.target because you
can't mount a filesystem without importing the pool. And zfs-
import.target is After= the two zfs-import-* services. Those are after
cryptsetup.target, as you might be running your pool on top of LUKS.
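
Those ordering edges can be inspected on a running system, e.g. (a
diagnostic sketch, not part of any fix):

systemctl show -p After,Requires systemd-random-seed.service
systemctl show -p After,Wants var-lib.mount zfs-import.target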

Mostly side note: it does seem weird and unnecessary that zfs-load-
module.service has After=cryptsetup.target. We should probably remove
that. That is coming from debian/patches/2100-zfs-load-module.patch
(which is what provides zfs-load-module.service in its entirety).

One idea here would be to eliminate the After=cryptsetup.target from
zfs-import-{cache,scan}.service and require that someone add them via a
drop-in if they are running on LUKS. However, in that case, they'll run
into the same problem anyway. So that's not really a fix.

Another option might be to remove the zfs-mount.service Before=systemd-
random-seed.service and effectively require the use of the mount
generator for Root-on-ZFS setups. That is what the Ubuntu installer does
and what the Root-on-ZFS HOWTO will use for 20.04 anyway. (I'm working
on it actively right now.) Then, modify zfs-mount-generator to NOT add
After=zfs-import.target (and likewise for Wants=) if the relevant pool is
already imported (and likewise for the zfs-load-key- services). Since
the rpool will already be imported by the time zfs-mount-generator runs,
that would be omitted.

I've attached an *untested* patch to that effect. I hope to test this
yet tonight as I test more Root-on-ZFS scenarios, but no promises.

** Patch added: "2150-fix-systemd-dependency-loops.patch"
   
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1875577/+attachment/5366544/+files/2150-fix-systemd-dependency-loops.patch

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1875577

Title:
  Encrypted swap won't load on 20.04 with zfs root

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1875577/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1848496] Re: [zfs-root] "device-mapper: reload ioctl on osprober-linux-sdaX failed: Device or resource busy" against devices owned by ZFS

2020-05-04 Thread Richard Laager
brian-willoughby (and pranav.bhattarai):

The original report text confirms that "The exit code is 0, so update-
grub does not fail as a result." That matches my understanding (as
someone who has done a lot of ZFS installs maintaining the upstream
Root-on-ZFS HOWTO) that this is purely cosmetic.

If you're not actually running other operating systems, you can simply
remove the os-prober package to make the errors go away.
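
For example (then regenerate the menu so the entries disappear):

sudo apt purge os-prober
sudo update-grub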

I'm not saying it shouldn't be fixed. But it's not actually breaking
your systems, right?

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1848496

Title:
  [zfs-root] "device-mapper: reload ioctl on osprober-linux-sdaX
  failed: Device or resource busy" against devices owned by ZFS

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/os-prober/+bug/1848496/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1857398] Re: ubiquity should support encryption by default with zfsroot, with users able to opt in to running change-key after install

2020-02-10 Thread Richard Laager
The AES-GCM performance improvements patch has been merged to master. This also 
included the changes to make encryption=on mean aes-256-gcm:
https://github.com/zfsonlinux/zfs/commit/31b160f0a6c673c8f926233af2ed6d5354808393

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1857398

Title:
  ubiquity should support encryption by default with zfsroot, with users
  able to opt in to running change-key after install

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1857398/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1862661] Re: zfs-mount.service and others fail inside unpriv containers

2020-02-10 Thread Richard Laager
** Changed in: zfs-linux (Ubuntu)
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1862661

Title:
  zfs-mount.service and others fail inside unpriv containers

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1862661/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1862661] Re: zfs-mount.service and others fail inside unpriv containers

2020-02-10 Thread Richard Laager
What was the expected result? Are you expecting to be able to just
install ZFS in a container (but not use it)? Or are you expecting it to
actually work? The user space tools can’t do much of anything without
talking to the kernel.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1862661

Title:
  zfs-mount.service and others fail inside unpriv containers

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1862661/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1862165] Re: /usr/local leak in /etc/default/zfs

2020-02-06 Thread Richard Laager
** Bug watch added: Github Issue Tracker for ZFS #9443
   https://github.com/zfsonlinux/zfs/issues/9443

** Also affects: zfs via
   https://github.com/zfsonlinux/zfs/issues/9443
   Importance: Unknown
   Status: Unknown

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1862165

Title:
  /usr/local leak in /etc/default/zfs

To manage notifications about this bug go to:
https://bugs.launchpad.net/zfs/+bug/1862165/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1854982] Re: Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

2020-01-30 Thread Richard Laager
There does seem to be a real bug here. The problem is that we don’t know
if it is on the ZoL side or the FreeBSD side. The immediate failure is
that “zfs recv” on the FreeBSD side is failing to receive the stream. So
that is the best place to start figuring out why. If it turns out that
ZoL is generating an invalid stream, then we can take this to ZoL.
Accordingly, my main goal here is to help you produce the best possible
bug report for FreeBSD to help them troubleshoot. I don’t run FreeBSD,
so I can’t test this myself to produce a test case. If you can produce a
test case, with an example send stream that FreeBSD can’t receive, that
gives them the best chance of finding the root cause.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1854982

Title:
  Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1854982/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1854982] Re: Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

2020-01-30 Thread Richard Laager
The FreeBSD bug report:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=243730

Like I said, boiling this down to a test case would likely help a lot.
Refusing to do so and blaming the people giving you free software and
free support isn’t helpful.

** Bug watch added: bugs.freebsd.org/bugzilla/ #243730
   https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=243730

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1854982

Title:
  Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1854982/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1854982] Re: Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

2020-01-30 Thread Richard Laager
** Changed in: zfs-linux (Ubuntu)
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1854982

Title:
  Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1854982/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1854982] Re: Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

2020-01-29 Thread Richard Laager
In terms of a compact reproducer, does this work:

# Create a temp pool with large_dnode enabled:
truncate -s 1G lp1854982.img
sudo zpool create -d -o feature@large_dnode=enabled lp1854982 \
    $(pwd)/lp1854982.img

# Create a dataset with dnodesize=auto
sudo zfs create -o dnodesize=auto lp1854982/ldn

# Create a send stream
sudo zfs snapshot lp1854982/ldn@snap
sudo zfs send lp1854982/ldn@snap > lp1854982-ldn.zfs

sudo zpool export lp1854982

cat lp1854982-ldn.zfs | ssh 192.168.1.100 zfs receive zroot/ldn

If that doesn't reproduce the problem, adjust it until it does. You were
using `zfs send -c`, so maybe that's it. You may need to enable more
pool features, etc.
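
For example, to test whether the -c flag matters, the send step above
becomes:

sudo zfs send -c lp1854982/ldn@snap > lp1854982-ldn.zfs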

But if this can be reproduced with an empty dataset on an empty pool,
the send stream file is 8.5K (and far less compressed). Attach the
script for reference and the send stream to a FreeBSD bug.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1854982

Title:
  Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1854982/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1854982] Re: Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

2020-01-28 Thread Richard Laager
So, one of two things is true:
A) ZFS on Linux is generating the stream incorrectly.
B) FreeBSD is receiving the stream incorrectly.

I don't have a good answer as to how we might differentiate those two.
Filing a bug report with FreeBSD might be a good next step. But like I
said, a compact reproducer would go a long way.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1854982

Title:
  Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1854982/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1854982] Re: Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

2020-01-28 Thread Richard Laager
The last we heard on this, FreeBSD was apparently not receiving the send
stream, even though it supports large_dnode:

https://zfsonlinux.topicbox.com/groups/zfs-discuss/T187d60c7257e2eb6-M14bb2d52d4d5c230320a4f56/feature-incompatibility-between-ubuntu-19-10-and-freebsd-12-0

That's really bizarre. If it supports large_dnode, it should be able to
receive that stream. Ideally, this needs more troubleshooting,
particularly on the receive side. "It said (dataset does not exist)
after a long transfer." is not particularly clear. I'd like to see a
copy-and-paste of the actual `zfs recv` output, at a minimum.

@BertN45, if you want to keep troubleshooting, a good next step would be
to boil this down to a reproducible test case. That is, create a list
of specific commands that creates a dataset and sends it,
demonstrating the problem. That would help. We may need to flesh out the reproducer a bit
more, e.g. by creating a pool on sparse files with particular feature
flags.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1854982

Title:
  Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1854982/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1850130] Re: zpools fail to import after reboot on fresh install of eoan

2020-01-22 Thread Richard Laager
I think there are multiple issues here. If it's just multipath, that
issue should be resolved by adding After=multipathd.service to zfs-
import-{cache,scan}.service.

For other issues, I wonder if this is cache file related. I'd suggest
checking that the cache file exists (I expect it would), and then
looking at the cache file (e.g. strings /etc/zfs/zpool.cache | less). I
suspect the issue is that the cache file has only the rpool. I'm not
entirely sure why that is happening.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1850130

Title:
  zpools fail to import after reboot on fresh install of eoan

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1850130/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1850130] Re: zpools fail to import after reboot on fresh install of eoan

2020-01-22 Thread Richard Laager
@gustypants: Sorry, the other one is scan, not pool. Are you using a
multipath setup? Does the pool import fine if you do it manually once
booted?

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1850130

Title:
  zpools fail to import after reboot on fresh install of eoan

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1850130/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1860228] Re: addition of zfsutils-linux scrib every 2nd sunday

2020-01-19 Thread Richard Laager
zfs-linux (0.6.5.6-2) unstable; urgency=medium
...
  * Scrub all healthy pools monthly from Richard Laager

So Debian stretch, but not Ubuntu 16.04.

Deleting the file should be safe, as dpkg should retain that. It sounds
like you never deleted it, as you didn’t have it before this upgrade. So
it wasn’t an issue of the file coming back, just appearing for the first
time. Deleting and editing conffiles is a standard thing in Debian
systems.

These days, we may want to convert this to a systemd timer/service pair
instead, which you could then disable/mask if you don’t want. Of course,
the initial conversion will cause the same complaint you have here:
something changed on upgrade and enabled a job you don’t want.
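
A minimal sketch of what that pair could look like (the unit names,
schedule, and paths here are assumptions, not actual packaging):

sudo tee /etc/systemd/system/zfs-scrub@.service <<'EOF'
[Unit]
Description=Scrub ZFS pool %i

[Service]
Type=oneshot
ExecStart=/sbin/zpool scrub %i
EOF

sudo tee /etc/systemd/system/zfs-scrub@.timer <<'EOF'
[Unit]
Description=Monthly scrub of ZFS pool %i

[Timer]
OnCalendar=Sun *-*-8..14 00:24:00
Persistent=true

[Install]
WantedBy=timers.target
EOF

sudo systemctl enable --now zfs-scrub@rpool.timer

Here "Sun *-*-8..14" expresses the second Sunday of the month
directly, since OnCalendar= requires the weekday and the date to match
simultaneously.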

** Changed in: zfs-linux (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1860228

Title:
  addition of zfsutils-linux scrib every 2nd sunday

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1860228/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1860228] Re: addition of zfsutils-linux scrib every 2nd sunday

2020-01-18 Thread Richard Laager
This was added a LONG time ago. The interesting question here is: if you
previously deleted it, why did it come back? Had you deleted it though?
It sounds like you weren’t aware of this file.

You might want to edit it in place, even just to comment out the job.
That would force dpkg to give you a conffile merge prompt instead of
being able to silently put it back.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1860228

Title:
  addition of zfsutils-linux scrib every 2nd sunday

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1860228/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1860182] Re: zpool scrub malfunction after kernel upgrade

2020-01-17 Thread Richard Laager
Your original scrub took just under 4.5 hours. Have you let the second
scrub run anywhere near that long? If not, start there.

The new scrub code uses a two-phase approach. First it works through
metadata determining what (on-disk) blocks to scrub. Second, it does the
actual scrub. This allows ZFS to coalesce the blocks and do large,
sequential reads in the second phase. This dramatically speeds up the
total scrub time. In contrast, the original scrub code is doing a lot of
small, random reads.

You might just be seeing the first phase completing in 5 minutes, but
the second phase still needs to occur. Or, maybe it did part of the
first phase but hit the RAM limit and needed to start the second phase.

If you've let it run for 4.5 hours and it's still showing that status,
then I'd say something is wrong.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1860182

Title:
  zpool scrub malfunction after kernel upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1860182/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1857398] Re: ubiquity should support encryption by default with zfsroot, with users able to opt in to running change-key after install

2020-01-17 Thread Richard Laager
We discussed this at the January 7th OpenZFS Leadership meeting. The
notes and video recording are now available.

The meeting notes are in the running document here (see page 2 right now, or 
search for this Launchpad bug number):
https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoLIFZAnWHhV-BM/edit

The video recording is here; the link starts you at 15:45 when we start 
discussing this:
https://youtu.be/x9-wua_mzt0?t=945

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1857398

Title:
  ubiquity should support encryption by default with zfsroot, with users
  able to opt in to running change-key after install

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1857398/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1857398] Re: ubiquity should support encryption by default with zfsroot, with users able to opt in to running change-key after install

2020-01-10 Thread Richard Laager
> It is not appropriate to require the user to type a password on every
> boot by default; this must be opt-in.

Agreed.

The installer should prompt (with a checkbox) for whether the user wants
encryption. It should default to off. If the user selects the checkbox,
prompt them for a passphrase. Set up encryption using that passphrase.
This is exactly how the installer behaves today for non-ZFS (e.g. using
LUKS). I'm proposing to extend that existing behavior to ZFS. This should
be trivial to implement; I'm not sure if we still have time for 20.04,
but I'd really love to see at least this much implemented now.

What should happen if the user leaves the encryption box unchecked?
Currently, they get no encryption, and that's what I'm proposing
initially. You'd like to improve that so that the user can later set a
passphrase without having to reformat their disk. I agree that's a
reasonable goal.

I think the blockers / potential blockers are:

1) `zfs change-key` does not overwrite the old wrapped master key on
disk, so it is accessible to forensic analysis. Given that the old
wrapping key is a known passphrase ("ubuntuzfs"), another way of looking
at this is that the master key is still on disk in what is, security-
wise, effectively plaintext. I (and other upstream ZFS developers) are
concerned about giving the user a false sense of security in this
situation. ZFS could overwrite the key on disk when changed. If/when
someone adds that enhancement to `zfs change-key`, then I think this
objection goes away. I don't see this being implemented in time for
20.04.

2) Is the performance acceptable? On older systems without AES-NI, there
is a noticeable impact, which I've seen myself. I recommended using AES-
NI support as the deciding factor here... if they have AES-NI, then
encrypt (with a known passphrase) even if the user didn't opt-in; if
they don't have AES-NI, then not opting-in means encryption is really
off. If that inconsistency is a problem, then ultimately Ubuntu just has
to decide one way or the other. Personally, I'm a big fan of encryption,
so I'm not going to be upset if the decision is that the performance
impact on older hardware is just something to accept.

> > I would recommend setting encryption=aes256-gcm instead of
> > encryption=on (which is aes256-ccm).
> 
> I think the right way to handle this is to change the behavior of
> zfs-linux so that encryption=on defaults to the recommended algorithm -

Agreed. I proposed this at the last OpenZFS Leadership meeting and there
is general agreement to do so. It does need a bit more discussion and
then implementation (which should be trivial).

> rather than hard-coding the algorithm selection in ubiquity, which is
> generally speaking a good recipe for bit rot.

Given that I'd like to see encryption land in 20.04, I think it would be
reasonable to set -o encryption=aes-256-gcm today and then change it
(e.g. for 20.10) to "on" once the default changes in OpenZFS.
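
A minimal sketch of that, on a throwaway file-backed pool (the pool
and image names are arbitrary):

truncate -s 1G tank.img
sudo zpool create \
    -O encryption=aes-256-gcm \
    -O keyformat=passphrase -O keylocation=prompt \
    tank $(pwd)/tank.img

zpool create will then prompt for the passphrase interactively, much
as the LUKS path prompts today.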

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1857398

Title:
  ubiquity should support encryption by default with zfsroot, with users
  able to opt in to running change-key after install

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1857398/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1769890] Re: Icingaweb2 does not work with PHP 7.2

2020-01-08 Thread Richard Laager
New debdiff attached.

** Patch added: "icingaweb2_2.4.1-1ubuntu0.1.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/icingaweb2/+bug/1769890/+attachment/5318697/+files/icingaweb2_2.4.1-1ubuntu0.1.debdiff

** Description changed:

  [Impact]
  icingaweb2 does not work on PHP 7.2 or higher, e.g. as shipped in Ubuntu 
18.04.
  
  [Test Case]
  
  Steps to reproduce:
  $ sudo apt install mariadb-server
  # mysql_secure_installation
  $ sudo apt install icinga2 icinga2-ido-mysql
  Yes to both questions relating to automatically set up database.
  $ sudo apt install icingaweb2
  Point a browser at http://localhost/icingaweb2/setup
  
  Expected results:
  The setup wizard loads.
  
  Actual results:
  
  Fatal error: Uncaught ErrorException: session_name(): Cannot change
  session name when session is active in
  /usr/share/php/Icinga/Web/Session/PhpSession.php:97 Stack trace: #0
  [internal function]:
  Icinga\Application\ApplicationBootstrap->Icinga\Application\{closure}(2,
  'session_name():...', '/usr/share/php/...', 97, Array) #1
  /usr/share/php/Icinga/Web/Session/PhpSession.php(97):
  session_name('Icingaweb2') #2
  /usr/share/php/Icinga/Web/Session/PhpSession.php(152):
  Icinga\Web\Session\PhpSession->open() #3
  /usr/share/php/Icinga/Web/Controller/ActionController.php(544):
  Icinga\Web\Session\PhpSession->write() #4
  /usr/share/php/Icinga/Web/Controller/ActionController.php(489):
  Icinga\Web\Controller\ActionController->shutdownSession() #5
  /usr/share/icingaweb2/library/vendor/Zend/Controller/Action.php(512):
  Icinga\Web\Controller\ActionController->postDispatch() #6
  /usr/share/php/Icinga/Web/Controller/Dispatcher.php(76):
  Zend_Controller_Action->dispatch('errorAction') #7
  /usr/share/icingaweb2/library/vendor/Zend/Controller/Front.php( in
  /usr/share/icingaweb2/library/vendor/Zend/Controller/Plugin/Broker.php
  on line 332
  
  [Regression Potential]
  The package is completely broken for setup now, and IIRC, at least somewhat 
broken if you get past that somehow, so the regression potential is very low.
  
  [Other Info]
  
- Upstream fix:
+ Upstream fixes:
  
https://github.com/Icinga/icingaweb2/pull/3315/commits/dadd2c80f6819111f25e3799c072ec39c991897e
+ 
https://github.com/Icinga/icingaweb2/commit/72ec132f25c868d9510e6d36a2d5c92fc8dd59d1
  
  Backported to Debian package here:
  
https://salsa.debian.org/nagios-team/pkg-icingaweb2/commit/5804954da6cf08a74eeeb689d8d094eefa6ba9bc

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1769890

Title:
  Icingaweb2 does not work with PHP 7.2

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/icingaweb2/+bug/1769890/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1769890] Re: Icingaweb2 does not work with PHP 7.2

2020-01-08 Thread Richard Laager
To get this into a stable release, you need to follow the SRU process:
https://wiki.ubuntu.com/StableReleaseUpdates

I've prepared an SRU debdiff. The procedure says that you need to make
sure it's fixed in the development release (which it is) and that the
bug status is Fix Released. So I changed it to that. But then later, it
says to use In Progress. Apparently I could change it to Fix Released,
but now can't change it back to In Progress. I also can't add distro-
version-specific tasks. Can someone set this to In Progress and/or add a
Bionic task that's In Progress?

** Changed in: icingaweb2 (Ubuntu Bionic)
   Status: New => In Progress

** Changed in: icingaweb2 (Ubuntu Bionic)
     Assignee: (unassigned) => Richard Laager (rlaager)

** Description changed:

  [Impact]
  icingaweb2 does not work on PHP 7.2 or higher, e.g. as shipped in Ubuntu 
18.04.
  
  [Test Case]
  
  Steps to reproduce:
- # apt install mariadb
+ $ sudo apt install mariadb-server
  # mysql_secure_installation
- # apt install icinga2 icinga2-ido-mysql
- Yes to both questions relating to automatically set up database
- # apt install icingaweb2
- Point a browser at http://host.example.co.uk/icingaweb2 to run the setup 
wizard
+ $ sudo apt install icinga2 icinga2-ido-mysql
+ Yes to both questions relating to automatically set up database.
+ $ sudo apt install icingaweb2
+ Point a browser at http://localhost/icingaweb2/setup
  
  Expected results:
  The setup wizard loads.
  
  Actual results:
  
  Fatal error: Uncaught ErrorException: session_name(): Cannot change
  session name when session is active in
  /usr/share/php/Icinga/Web/Session/PhpSession.php:97 Stack trace: #0
  [internal function]:
  Icinga\Application\ApplicationBootstrap->Icinga\Application\{closure}(2,
  'session_name():...', '/usr/share/php/...', 97, Array) #1
  /usr/share/php/Icinga/Web/Session/PhpSession.php(97):
  session_name('Icingaweb2') #2
  /usr/share/php/Icinga/Web/Session/PhpSession.php(152):
  Icinga\Web\Session\PhpSession->open() #3
  /usr/share/php/Icinga/Web/Controller/ActionController.php(544):
  Icinga\Web\Session\PhpSession->write() #4
  /usr/share/php/Icinga/Web/Controller/ActionController.php(489):
  Icinga\Web\Controller\ActionController->shutdownSession() #5
  /usr/share/icingaweb2/library/vendor/Zend/Controller/Action.php(512):
  Icinga\Web\Controller\ActionController->postDispatch() #6
  /usr/share/php/Icinga/Web/Controller/Dispatcher.php(76):
  Zend_Controller_Action->dispatch('errorAction') #7
  /usr/share/icingaweb2/library/vendor/Zend/Controller/Front.php( in
  /usr/share/icingaweb2/library/vendor/Zend/Controller/Plugin/Broker.php
  on line 332
  
  [Regression Potential]
- The package is completely broken on startup now, so the regression potential 
is effectively nil.
+ The package is completely broken for setup now, and IIRC, at least somewhat 
broken if you get past that somehow, so the regression potential is very low.
  
  [Other Info]
  
  Upstream fix:
  
https://github.com/Icinga/icingaweb2/pull/3315/commits/dadd2c80f6819111f25e3799c072ec39c991897e
  
  Backported to Debian package here:
  
https://salsa.debian.org/nagios-team/pkg-icingaweb2/commit/5804954da6cf08a74eeeb689d8d094eefa6ba9bc

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1769890

Title:
  Icingaweb2 does not work with PHP 7.2

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/icingaweb2/+bug/1769890/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1769890] Re: Icingaweb2 does not work with PHP 7.2

2020-01-08 Thread Richard Laager
The proposed fix seems to be incomplete?

I'm still getting this:
Fatal error: Declaration of Icinga\Web\Form\Element\Note::isValid($value) must 
be compatible with Zend_Form_Element::isValid($value, $context = NULL) in 
/usr/share/php/Icinga/Web/Form/Element/Note.php on line 0

I'm unsubscribing ubuntu-sponsors until we get that sorted out.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1769890

Title:
  Icingaweb2 does not work with PHP 7.2

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/icingaweb2/+bug/1769890/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1769890] Re: Icingaweb2 does not work with PHP 7.2

2020-01-08 Thread Richard Laager
Attached is a debdiff that backports the fix from the Debian package.

** Description changed:

  [Impact]
  icingaweb2 does not work on PHP 7.2 or higher, e.g. as shipped in Ubuntu 
18.04.
  
  [Test Case]
  
  Steps to reproduce:
  # apt install mariadb
  # mysql_secure_installation
  # apt install icinga2 icinga2-ido-mysql
  Yes to both questions relating to automatically set up database
  # apt install icingaweb2
  Point a browser at http://host.example.co.uk/icingaweb2 to run the setup 
wizard
  
  Expected results:
  The setup wizard loads.
  
  Actual results:
  
  Fatal error: Uncaught ErrorException: session_name(): Cannot change
  session name when session is active in
  /usr/share/php/Icinga/Web/Session/PhpSession.php:97 Stack trace: #0
  [internal function]:
  Icinga\Application\ApplicationBootstrap->Icinga\Application\{closure}(2,
  'session_name():...', '/usr/share/php/...', 97, Array) #1
  /usr/share/php/Icinga/Web/Session/PhpSession.php(97):
  session_name('Icingaweb2') #2
  /usr/share/php/Icinga/Web/Session/PhpSession.php(152):
  Icinga\Web\Session\PhpSession->open() #3
  /usr/share/php/Icinga/Web/Controller/ActionController.php(544):
  Icinga\Web\Session\PhpSession->write() #4
  /usr/share/php/Icinga/Web/Controller/ActionController.php(489):
  Icinga\Web\Controller\ActionController->shutdownSession() #5
  /usr/share/icingaweb2/library/vendor/Zend/Controller/Action.php(512):
  Icinga\Web\Controller\ActionController->postDispatch() #6
  /usr/share/php/Icinga/Web/Controller/Dispatcher.php(76):
  Zend_Controller_Action->dispatch('errorAction') #7
  /usr/share/icingaweb2/library/vendor/Zend/Controller/Front.php( in
  /usr/share/icingaweb2/library/vendor/Zend/Controller/Plugin/Broker.php
  on line 332
  
  [Regression Potential]
  The package is completely broken on startup now, so the regression potential 
is effectively nil.
  
- -
+ [Other Info]
  
  Upstream:
  https://github.com/Icinga/icingaweb2/pull/3186

** Description changed:

  [Impact]
  icingaweb2 does not work on PHP 7.2 or higher, e.g. as shipped in Ubuntu 
18.04.
  
  [Test Case]
  
  Steps to reproduce:
  # apt install mariadb
  # mysql_secure_installation
  # apt install icinga2 icinga2-ido-mysql
  Yes to both questions relating to automatically set up database
  # apt install icingaweb2
  Point a browser at http://host.example.co.uk/icingaweb2 to run the setup 
wizard
  
  Expected results:
  The setup wizard loads.
  
  Actual results:
  
  Fatal error: Uncaught ErrorException: session_name(): Cannot change
  session name when session is active in
  /usr/share/php/Icinga/Web/Session/PhpSession.php:97 Stack trace: #0
  [internal function]:
  Icinga\Application\ApplicationBootstrap->Icinga\Application\{closure}(2,
  'session_name():...', '/usr/share/php/...', 97, Array) #1
  /usr/share/php/Icinga/Web/Session/PhpSession.php(97):
  session_name('Icingaweb2') #2
  /usr/share/php/Icinga/Web/Session/PhpSession.php(152):
  Icinga\Web\Session\PhpSession->open() #3
  /usr/share/php/Icinga/Web/Controller/ActionController.php(544):
  Icinga\Web\Session\PhpSession->write() #4
  /usr/share/php/Icinga/Web/Controller/ActionController.php(489):
  Icinga\Web\Controller\ActionController->shutdownSession() #5
  /usr/share/icingaweb2/library/vendor/Zend/Controller/Action.php(512):
  Icinga\Web\Controller\ActionController->postDispatch() #6
  /usr/share/php/Icinga/Web/Controller/Dispatcher.php(76):
  Zend_Controller_Action->dispatch('errorAction') #7
  /usr/share/icingaweb2/library/vendor/Zend/Controller/Front.php( in
  /usr/share/icingaweb2/library/vendor/Zend/Controller/Plugin/Broker.php
  on line 332
  
  [Regression Potential]
  The package is completely broken on startup now, so the regression potential 
is effectively nil.
  
  [Other Info]
  
- Upstream:
- https://github.com/Icinga/icingaweb2/pull/3186
+ Upstream fix:
+ 
https://github.com/Icinga/icingaweb2/pull/3315/commits/dadd2c80f6819111f25e3799c072ec39c991897e
+ 
+ Backported to Debian package here:
+ 
https://salsa.debian.org/nagios-team/pkg-icingaweb2/commit/5804954da6cf08a74eeeb689d8d094eefa6ba9bc

** Patch added: "icingaweb2_2.4.1-1ubuntu0.1.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/icingaweb2/+bug/1769890/+attachment/5318696/+files/icingaweb2_2.4.1-1ubuntu0.1.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1769890

Title:
  Icingaweb2 does not work with PHP 7.2

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/icingaweb2/+bug/1769890/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1769890] Re: Icingaweb2 does not work with PHP 7.2

2020-01-08 Thread Richard Laager
** Changed in: icingaweb2 (Ubuntu)
   Status: Confirmed => Fix Released

** Description changed:

- Release: 18.04 - Bionic
+ [Impact]
+ icingaweb2 does not work on PHP 7.2 or higher, e.g. as shipped in Ubuntu 
18.04.
  
- # apt-cache policy icingaweb2
- icingaweb2:
-   Installed: 2.4.1-1
+ [Test Case]
  
- Expected:  A setup wizard in my browser post initial install
+ Steps to reproduce:
+ # apt install mariadb
+ # mysql_secure_installation
+ # apt install icinga2 icinga2-ido-mysql
+ Yes to both questions relating to automatically set up database
+ # apt install icingaweb2
+ Point a browser at http://host.example.co.uk/icingaweb2 to run the setup 
wizard
  
- Attempting to run the setup wizard straight after install, results in
- this:
+ Expected results:
+ The setup wizard loads.
+ 
+ Actual results:
  
  Fatal error: Uncaught ErrorException: session_name(): Cannot change
  session name when session is active in
  /usr/share/php/Icinga/Web/Session/PhpSession.php:97 Stack trace: #0
  [internal function]:
  Icinga\Application\ApplicationBootstrap->Icinga\Application\{closure}(2,
  'session_name():...', '/usr/share/php/...', 97, Array) #1
  /usr/share/php/Icinga/Web/Session/PhpSession.php(97):
  session_name('Icingaweb2') #2
  /usr/share/php/Icinga/Web/Session/PhpSession.php(152):
  Icinga\Web\Session\PhpSession->open() #3
  /usr/share/php/Icinga/Web/Controller/ActionController.php(544):
  Icinga\Web\Session\PhpSession->write() #4
  /usr/share/php/Icinga/Web/Controller/ActionController.php(489):
  Icinga\Web\Controller\ActionController->shutdownSession() #5
  /usr/share/icingaweb2/library/vendor/Zend/Controller/Action.php(512):
  Icinga\Web\Controller\ActionController->postDispatch() #6
  /usr/share/php/Icinga/Web/Controller/Dispatcher.php(76):
  Zend_Controller_Action->dispatch('errorAction') #7
  /usr/share/icingaweb2/library/vendor/Zend/Controller/Front.php( in
  /usr/share/icingaweb2/library/vendor/Zend/Controller/Plugin/Broker.php
  on line 332
  
+ [Regression Potential]
+ The package is completely broken on startup now, so the regression potential 
is effectively nil.
+ 
+ -
  
  Upstream:
  https://github.com/Icinga/icingaweb2/pull/3186
- 
- Steps to reproduce:
- # apt install mariadb
- # mysql_secure_installation
- # apt install icinga2 icinga2-ido-mysql
- Yes to both questions relating to automatically set up database
- # apt install icingaweb2
- Point a browser at http://host.example.co.uk/icingaweb2 to run the setup 
wizard

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1769890

Title:
  Icingaweb2 does not work with PHP 7.2

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/icingaweb2/+bug/1769890/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1857398] Re: ubiquity should support encryption by default with zfsroot, with users able to opt in to running change-key after install

2020-01-06 Thread Richard Laager
I've given this a lot of thought. For what it's worth, if it were my
decision, I would first put your time into making a small change to the
installer to get the "encryption on" case perfect, rather than the
proposal in this bug.

The installer currently has:

 O Erase disk and install Ubuntu
   Warning: ...
   [] Encrypt the new Ubuntu installation for security
  You will choose a security key in the next step.
   [] Use LVM with the new Ubuntu installation
  This will set up Logical Volume Management. It allows taking
  snapshots and easier partition resizing.
 O EXPERIMENTAL: Erase disk and use ZFS
   Warning: This will delete all your files on all operating systems.
   This is experimental and may cause data loss. Do not use on
   production systems.
 O Something else
   ...

I would move the ZFS option to be a peer of / alternative to LVM
instead:

 O Erase disk and install Ubuntu
   Warning: ...
   [] Encrypt the new Ubuntu installation for security
  You will choose a security key in the next step.
   Volume Management:
 O  None (Fixed Partitions)
 O  Logical Volume Manager (LVM)
LVM allows taking snapshots and easier partition resizing.
 O  EXPERIMENTAL: ZFS
ZFS allows taking snapshots and dynamically shares space between
filesystems.
Warning: This is experimental and may cause data loss. Do not use
on production systems.
 O Something else
   ...

This is a very straightforward UI change. The only new combination
introduced with this UI is encryption on + ZFS, which is what we want.
In that scenario, run the same passphrase prompting screen that is used
now for LUKS. Then pass the passphrase to `zpool create` (and use
encryption=aes-256-gcm for the reasons already discussed).

If the "always enable encryption" feature is to future-proof for people
who would otherwise choose "no encryption", that's worth considering,
but if it's an alternative to prompting them in the installer, I'm
personally opposed.

However, we do need to consider why they're turning off encryption. Are
they saying, "I don't want encryption ever (e.g. because of the
performance penalty)." or "I don't care about encryption right now." If
you always enable encryption, you are forcing encryption on them, which
has real performance impacts on older hardware. For example, I just
yesterday upgraded my personal server to use ZFS encryption, but made a
media dataset that is unencrypted. Sustained writes to the media dataset
are _at least_ twice as fast. With encryption, I was CPU bound. With it
off, I was not, so I suspect I could have written even faster. This
system is older and does not have AES-NI.

You mentioned spinning disks. Perhaps I misunderstood, but I don't know
why you'd be asking about spinning disks in particular. They are slower
than SSDs, so encryption is less likely to be a concern there, not more.
My server scenario involved spinning disks.

If the old wrapped master key were overwritten when changed _and_ the
system has AES-NI instructions, then I think it would be reasonable to
make "no encryption" turn on encryption anyway with a fixed passphrase.
This would achieve the goal of allowing encryption to be enabled later.
But I think that is second in priority to handling the "encryption on"
case.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1857398

Title:
  ubiquity should support encryption by default with zfsroot, with users
  able to opt in to running change-key after install

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1857398/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1850130] Re: zpools fail to import after reboot on fresh install of eoan

2020-01-03 Thread Richard Laager
Try adding "After=multipathd.service" to zfs-import-cache.service and
zfs-import-pool.service. If that fixes it, then we should probably add
that upstream.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1850130

Title:
  zpools fail to import after reboot on fresh install of eoan

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1850130/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1857398] Re: ubiquity should support encryption by default with zfsroot, with users able to opt in to running change-key after install

2019-12-27 Thread Richard Laager
I put these questions to Tom Caputi, who wrote the ZFS encryption. The
quoted text below is what I asked him, and the unquoted text is his
response:

> 1. Does ZFS rewrite the wrapped/encrypted master key in place? If
>not, the old master key could be retrieved off disk, decrypted
>with the known passphrase, and used to decrypt at least
>_existing_ data.

1) No. This is definitely an attack vector (although a very minor
   one). At the time we had said that we would revisit the idea of
   overwriting old keys when TRIM was added. That was several years ago
   and TRIM is now in. I will talk to Brian about it after I am back
   from the holiday.

> 2. Does a "zfs change-key" create a new master key? If not, the old
>master key could be used to decrypt _new_ data as well, at least
>until the master key is rotated.

2) zfs change-key does not create a new master key. It simply re-wraps
   the existing master key. The master keys are never rotated. The key
   rotation is done by using the master keys to generate new keys.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1857398

Title:
  ubiquity should support encryption by default with zfsroot, with users
  able to opt in to running change-key after install

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1857398/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1857398] Re: ubiquity should support encryption by default with zfsroot, with users able to opt in to running change-key after install

2019-12-26 Thread Richard Laager
I have come up with a potential security flaw with this design:

The user installs Ubuntu with this fixed passphrase. This is used to
derive the "user key", which is used to encrypt the "master key", which
is used to encrypt their data. The encrypted version of the master key
is obviously written to disk.

Later, the user changes their passphrase. This rewraps the master key
with a new user key (derived from the new/real passphrase). It writes
that to disk. But, I presume that does NOT overwrite the old wrapped key
in place on disk. I don't actually know this, but I am assuming so based
on the general design of ZFS being copy-on-write. As far as I know, only
uberblocks are rewritten in place.

Therefore, it is possible for some indeterminate amount of time to read
the old wrapped master key off the disk, which can be decrypted using
the known passphrase. This gives the master key, which can then be used
to decrypt the _existing_ data.

If the master key is not rotated when using zfs change-key, then _new_
data can also be read for some indefinite period of time. I'm not 100%
sure whether change-key changes the master key or only the user key.
From the man page, it sounds like it does change the master key. It
says, "...use zfs change-key to break an existing relationship, creating
a new encryption root..."

I'll try to get a more clueful answer on these points.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1857398

Title:
  ubiquity should support encryption by default with zfsroot, with users
  able to opt in to running change-key after install

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1857398/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1857398] Re: ubiquity should support encryption by default with zfsroot, with users able to opt in to running change-key after install

2019-12-26 Thread Richard Laager
Here are some quick performance comparisons:
https://github.com/zfsonlinux/zfs/pull/9749#issuecomment-569132997

In summary, "the GCM run is approximately 1.15 times faster than the CCM
run. Please also note that this PR doesn't improve AES-CCM performance,
so if this gets merged, the speed difference will be much larger."

I would recommend setting encryption=aes256-gcm instead of encryption=on
(which is aes256-ccm).

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1857398

Title:
  ubiquity should support encryption by default with zfsroot, with users
  able to opt in to running change-key after install

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1857398/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1857398] Re: ubiquity should support encryption by default with zfsroot, with users able to opt in to running change-key after install

2019-12-23 Thread Richard Laager
This is an interesting approach. I figured the installer should prompt
for encryption, and it probably still should, but if the performance
impact is minimal, this does have the nice property of allowing for
enabling encryption post-install.

It might be worthwhile (after merging the SIMD fixes) to benchmark
aes256-ccm (the default) vs encryption=aes-256-gcm. I think GCM seems to
be preferred, security wise, in various places (though I don't
immediately have references) and may be faster. There's also an upstream
PR in progress that significantly improves AES-GCM:
https://github.com/zfsonlinux/zfs/pull/9749

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1857398

Title:
  ubiquity should support encryption by default with zfsroot, with users
  able to opt in to running change-key after install

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1857398/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1856408] Re: zfs-initramfs needs to set FRAMEBUFFER=y

2019-12-14 Thread Richard Laager
Should it set KEYMAP=y too, like cryptsetup does?

I've created a PR upstream and done some light testing:
https://github.com/zfsonlinux/zfs/pull/9723

Are you able to confirm that this fixes the issue wherever you were
seeing it?
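
For context, the cryptsetup approach is an initramfs-tools conf-hook;
here is a sketch of the zfs-initramfs equivalent (the exact path and
contents are assumptions, not the merged PR):

sudo tee /usr/share/initramfs-tools/conf-hooks.d/zfs <<'EOF'
# Ensure the framebuffer (and console keymap) are set up in the
# initramfs so the passphrase prompt is visible and usable:
FRAMEBUFFER=y
KEYMAP=y
EOF
sudo update-initramfs -u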

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1856408

Title:
  zfs-initramfs needs to set FRAMEBUFFER=y

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1856408/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1854982] Re: Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

2019-12-04 Thread Richard Laager
I received the email of your latest comment, but oddly I’m not seeing it
here.

Before you go to all the work to rebuild the system, I think you should
do some testing to determine exactly what thing is breaking the send
stream compatibility. From your comment about your laptop, it sounds
like you think it is large_dnode. It really shouldn’t be large_dnode
because you said you have that feature on the receive side.

I would suggest creating some file-backed pools with different features.
You can do that with something like:

truncate -s 1G test1.img
zpool create test1 $(pwd)/test1.img

To adjust the features, add -d to disable all features and then add
various -o feature@something=enabled.

To actually use large dnodes, I believe you also have to set
dnodesize=auto on a filesystem, with either “zfs create -o” or, for
the root dataset, “zpool create -O” at the time of creation.
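
Putting those pieces together (the feature set here is just an
example):

truncate -s 1G test1.img
sudo zpool create -d \
    -o feature@large_dnode=enabled \
    -O dnodesize=auto \
    test1 $(pwd)/test1.img
zpool get all test1 | grep feature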

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1854982

Title:
  Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1854982/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1854982] Re: Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

2019-12-03 Thread Richard Laager
I'm not sure if userobj_accounting and/or project_quota have
implications for send stream compatibility, but my hunch is that they do
not. large_dnode is documented as being an issue, but since your
receiver supports that, that's not it.

I'm not sure what the issue is, nor what a good next step would be. You
might ask on IRC (#zfsonlinux on FreeNode) or the zfs-discuss mailing
list. See: https://github.com/zfsonlinux/zfs/wiki/Mailing-Lists

Not that it helps now, but this will get somewhat better in the future,
as FreeBSD is switching to the current ZFS-on-Linux codebase (soon to
be renamed OpenZFS) as its upstream. So Linux and FreeBSD will
have feature parity, outside of the usual time lag of release cycles.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1854982

Title:
  Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1854982/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1854982] Re: Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

2019-12-03 Thread Richard Laager
This is probably an issue of incompatible pool features.  Check what you
have active on the Ubuntu side:

zpool get all | grep feature | grep active

Then compare that to the chart here:
http://open-zfs.org/wiki/Feature_Flags

There is an as-yet-unimplemented proposal upstream to create a features
“mask” to limit the features to those with broad cross-platform support.

If it’s not a features issue, I think there was some unintentional send
compatibility break. I don’t have the specifics or a bug number, but a
friend ran into a similar issue with 18.04 sending to 16.04.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1854982

Title:
  Lost compatibilty for backup between Ubuntu 19.10 and FreeBSD 12.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1854982/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1847389] Re: Prevent bpool (or pools with /BOOT/) to be upgraded

2019-11-27 Thread Richard Laager
If the pool has an _active_ (and not "read-only compatible") feature
that GRUB does not understand, then GRUB will (correctly) refuse to load
the pool. Accordingly, you will be unable to boot.

Some features go active immediately, and others need you to enable some
filesystem-level feature or take some other action to go from enabled to
active. The features that are left disabled in the upstream Root-on-ZFS
HOWTO (that I manage) are disabled because GRUB does not support them.
At best, you never use them and it's fine. At worst, you make one active
and then you can't boot. Since you can't use them without breaking
booting, there is no point in having them enabled.
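
For reference, the HOWTO creates the boot pool with all features
disabled and only GRUB-compatible ones re-enabled, along these lines
(an abridged sketch with the disk path elided; the HOWTO itself has the
full, authoritative list):

zpool create -d \
    -o feature@async_destroy=enabled \
    -o feature@bookmarks=enabled \
    -o feature@embedded_data=enabled \
    -o feature@empty_bpobj=enabled \
    -o feature@enabled_txg=enabled \
    -o feature@extensible_dataset=enabled \
    -o feature@hole_birth=enabled \
    -o feature@lz4_compress=enabled \
    -o feature@spacemap_histogram=enabled \
    bpool /dev/disk/by-id/...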

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1847389

Title:
  Prevent bpool (or pools with /BOOT/) to be upgraded

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1847389/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1852854] Re: Update of zfs-linux fails

2019-11-18 Thread Richard Laager
** Changed in: zfs-linux (Ubuntu)
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1852854

Title:
  Update of zfs-linux fails

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1852854/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1852854] Re: Update of zfs-linux fails

2019-11-17 Thread Richard Laager
Which specific filesystems are failing to mount?

Typically, this situation occurs because something is misconfigured:
the mount fails, and files end up inside what should otherwise be empty
mountpoint directories. Then, even once the original problem is fixed,
the non-empty directories prevent ZFS from mounting on them. We already
know you had such an underlying issue, so there is a high likelihood
that this is what is happening here.

I’m on mobile now, but try something like:
zfs get -r canmount,mountpoint,mounted POOLNAME
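
If a filesystem there shows mounted=no, check whether its mountpoint
directory already contains files (hypothetical dataset and path):

ls -A /home                      # any output means the directory is not empty
mv /home/stray /root/stray.bak   # move the offending entry aside (hypothetical name)
zfs mount rpool/home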

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1852854

Title:
  Update of zfs-linux fails

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1852854/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1852793] Re: Various problems related to "zfs mount -a

2019-11-15 Thread Richard Laager
> I think "zfs mount -a" should NOT try to mount datasets with
> mountpoint "/"

There is no need for this to be (confusingly, IMHO) special-cased in
“zfs mount”.

You should set canmount=noauto on your root filesystems (the ones with
mountpoint=/). The initramfs handles mounting the selected root
filesystem.
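
For example, assuming a root dataset named rpool/ROOT/ubuntu:

zfs set canmount=noauto rpool/ROOT/ubuntu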

** Changed in: zfs-linux (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1852793

Title:
  Various problems related to "zfs mount -a

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1852793/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1852406] Re: Double-escape in initramfs DECRYPT_CMD

2019-11-13 Thread Richard Laager
The fix here seems fine, given that you're going for minimal impact in
an SRU. I agree that the character restrictions are such that the pool
names shouldn't actually need to be escaped. That's not to say that I
would remove the _proper_ quoting of variables that currently exists
upstream, as it's good shell programming practice to always quote
variables.
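
As a generic illustration of that practice (not the actual initramfs
code):

dataset="rpool/ROOT/ubuntu"   # hypothetical
zfs load-key "${dataset}"     # quoted, so it stays safe even if the value
                              # ever contains spaces or glob characters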

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1852406

Title:
  Double-escape in initramfs DECRYPT_CMD

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1852406/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1847628] Re: When using swap in ZFS, system stops when you start using swap

2019-10-15 Thread Richard Laager
> "com.sun:auto-snapshot=false" do we need to add that or does our zfs
not support it?

You do not need that. That is used by some snapshot tools, but Ubuntu is
doing its own zsys thing.
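
For the tools that do honor it, it is just a ZFS user property, set
with something like this (hypothetical dataset name):

zfs set com.sun:auto-snapshot=false rpool/swap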

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1847628

Title:
  When using swap in ZFS, system stops when you start using swap

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu-release-notes/+bug/1847628/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1847628] Re: When using swap in ZFS, system stops when you start using swap

2019-10-14 Thread Richard Laager
** Also affects: ubiquity (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1847628

Title:
  When using swap in ZFS, system stops when you start using swap

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1847628/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
