Your message dated Fri, 5 Jan 2024 18:53:29 +0000
with message-id <609a1f15-c012-4b17-9d36-c4c3223ec...@outlook.com>
and subject line Closing bug 1003528
has caused the Debian Bug report #1003528,
regarding zfsutils-linux: datasets with mountpoint=legacy defined in fstab 
prevent the system from booting
to be marked as done.

This means that you claim that the problem has been dealt with.
If this is not the case it is now your responsibility to reopen the
Bug report if necessary, and/or fix the problem forthwith.

(NB: If you are a system administrator and have no idea what this
message is talking about, this may indicate a serious mail system
misconfiguration somewhere. Please contact ow...@bugs.debian.org
immediately.)


-- 
1003528: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1003528
Debian Bug Tracking System
Contact ow...@bugs.debian.org with problems
--- Begin Message ---
Package: zfsutils-linux
Version: 2.0.3-9
Severity: important

Hi,
Currently, creating a ZFS dataset with mountpoint=legacy and adding it to 
/etc/fstab as auto causes the system to hang at boot because the mount is 
attempted before the pool has been imported or the ZFS module loaded.
How to reproduce:
apt install zfsutils-linux                        # install the ZFS userland tools
modprobe zfs                                      # load the kernel module for this session
zpool create tank sdb                             # create a test pool on a spare disk
zfs set mountpoint=legacy tank                    # let fstab manage mounting instead of ZFS
echo "tank /mnt zfs defaults 0 0" >> /etc/fstab   # add an ordinary fstab entry
reboot

The boot process hangs because /mnt fails to mount. "journalctl -b -u
mnt.mount" shows the following (I don't know why the dates aren't sorted):
Jan 11 11:41:16 localhost mount[702]: The ZFS modules are not loaded.
Jan 11 11:41:16 localhost mount[702]: Try running '/sbin/modprobe zfs' as root to load them.
Jan 11 11:41:13 localhost systemd[1]: Mounting /mnt...
Jan 11 11:41:14 localhost systemd[1]: mnt.mount: Mount process exited, code=exited, status=2/INVALIDARGUMENT
Jan 11 11:41:14 localhost systemd[1]: mnt.mount: Failed with result 'exit-code'.
Jan 11 11:41:14 localhost systemd[1]: Failed to mount /mnt.

"journalctl -b -u zfs-load-module.service" shows that the module was loaded 
afterwards:
Jan 11 11:41:19 localhost systemd[1]: Starting Install ZFS kernel module...
Jan 11 11:41:19 localhost systemd[1]: Finished Install ZFS kernel module.

The same goes for "journalctl -b -u zfs-import-cache.service":
Jan 11 11:41:19 localhost systemd[1]: Starting Import ZFS pools by cache file...
Jan 11 11:41:19 localhost systemd[1]: Finished Import ZFS pools by cache file.
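
The missing ordering can also be seen without rebooting, since "systemctl
list-dependencies --after" lists the units a given unit is ordered after:

systemctl list-dependencies --after mnt.mount

On an affected system, zfs-import.target should not appear anywhere in that
list.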


I worked around the problem by making sure the zfs-import.target (used by 
zfs-import-{cache,scan}.service) is active before mounts are attempted.
Contents of /etc/systemd/system/zfs-import.target.d/override.conf:
[Unit]
Before=local-fs-pre.target
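
Applying the drop-in is plain systemd mechanics ("systemctl edit
zfs-import.target" creates the same file):

mkdir -p /etc/systemd/system/zfs-import.target.d
# write the two lines above into override.conf, then:
systemctl daemon-reload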

Should Debian edit /lib/systemd/system/zfs-import.target to include this? 
Should I report this bug upstream? Are there any dependency loop risks I might 
have overlooked?

Kind regards,

Louis



--- End Message ---
--- Begin Message ---
Control: tags -1 + wontfix

Hi Louis,

> Should Debian edit /lib/systemd/system/zfs-import.target to include this?

Adding this dependency would solve your problem, but it would forcibly
serialize two tasks that could otherwise run in parallel.

> I worked around the problem by making sure the zfs-import.target (used by 
> zfs-import-{cache,scan}.service) is active before mounts are attempted.

According to systemd.mount [1], you can just write something like:

tank /mnt zfs defaults,x-systemd.requires=zfs-load-module.service 0 0
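
If the pool also needs to be imported before the mount (not just the module
loaded), the same option should work with the import target you already found:

tank /mnt zfs defaults,x-systemd.requires=zfs-import.target 0 0

Per systemd.mount, x-systemd.requires= adds both a Requires= and an After=
dependency on the named unit, so only this one mount waits for it.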

I'm closing this report, since this is not actually a bug.

[1]: https://www.freedesktop.org/software/systemd/man/latest/systemd.mount.html

Thanks,
Shengqi Chen

--- End Message ---
