Alternatively, you can try installing the machine in UEFI mode. In UEFI mode 
with root on ZFS, the installer sets up systemd-boot instead of GRUB, which 
avoids the problem of GRUB's limited ZFS compatibility.

See the docs for more information: 
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysboot
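If you are unsure whether a system actually came up in UEFI mode, a quick generic check from a running Linux shell (nothing Proxmox-specific, just a sketch):

```shell
# If the efi directory exists under sysfs, the kernel was booted
# via UEFI; otherwise the machine came up through legacy BIOS/CSM.
if [ -d /sys/firmware/efi ]; then
    echo "Booted in UEFI mode"
else
    echo "Booted in legacy BIOS mode"
fi
```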

HTH,
Aaron

On 9/14/20 12:16 AM, Gianni Milo wrote:
GRUB does not support all ZFS features, so it's quite common for it to fail
to recognise the rpool during boot if the pool has a feature which is
incompatible with it. In your case, I believe that is the "encryption"
feature.
Do you recall the issue starting after enabling encryption on rpool/data
dataset? If so, you may have to rebuild the pool and leave rpool/data
unencrypted.
Note that even though you enabled encryption only on the rpool/data
dataset, the feature takes effect at the pool level, hence the GRUB issue
you are facing.
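You can confirm this from a rescue environment once the pool is imported. A sketch (assuming the pool is named rpool, as the installer names it):

```shell
# Pool-wide feature state: "active" means at least one dataset
# uses encryption, which GRUB's ZFS reader does not understand.
zpool get feature@encryption rpool

# Show which datasets are actually encrypted.
zfs get -r encryption rpool
```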
Because of this issue, people have started using a separate boot pool
(bpool) with only a limited set of ZFS features enabled, and a different
pool (rpool) for the OS and the data. I believe the PVE installer should be
modified to follow this approach (if it hasn't already) to overcome
similar issues.
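For reference, the separate boot pool approach looks roughly like this. This is a sketch based on the OpenZFS root-on-ZFS guides; the device path and exact feature list are illustrative, not a tested recipe:

```shell
# Create a small boot pool with every feature disabled (-d), then
# enable only features GRUB is known to be able to read. The main
# rpool keeps its full feature set, including encryption.
zpool create -d \
    -o feature@async_destroy=enabled \
    -o feature@bookmarks=enabled \
    -o feature@embedded_data=enabled \
    -o feature@empty_bpobj=enabled \
    -o feature@enabled_txg=enabled \
    -o feature@extensible_dataset=enabled \
    -o feature@hole_birth=enabled \
    -o feature@large_blocks=enabled \
    -o feature@lz4_compress=enabled \
    -o feature@spacemap_histogram=enabled \
    -O mountpoint=/boot -O compression=lz4 \
    bpool /dev/disk/by-id/your-boot-disk-part3
```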

Gianni


On Sun, 13 Sep 2020 at 22:29, Stephan Leemburg <[email protected]>
wrote:

Hi All,

I have a proxmox system at netcup that was clean installed about two
weeks ago.

It is full ZFS, so root on ZFS. I am migrating from one netcup system
with less storage to this system.

This system is using the pve-no-subscription repository, but after
migration I will move the subscription from the 'old' system to this
system.

The rpool/data dataset is ZFS encrypted.

Today I did zfs send/recv from the 'old' system to this system, into the
rpool/data dataset.

After that I did an apt update and noticed there were updates available.

After the upgrade and the mandatory reboot, the system does not come up
anymore. It is stuck in grub rescue. Grub mentions that it has an
'unknown filesystem'.

Has anyone else experienced this same situation? If so and you could
recover, what was the reason and fix?

I am still researching what is causing this. If I boot a .iso then I can
import the pool and see all datasets and subvolumes. So, it seems that
the zpool itself and the datasets are ok; it is just that grub is unable
to recognize them for some reason.

I have read about other situations like this where large_dnode seemed to
be the cause. I noticed that large_dnode is enabled on the zpool. It is
a creation-only setting and cannot be changed to disabled afterwards.

This must have been done by the Proxmox installer. As booting is about
the root filesystem, I guess the zfs send/recv to the rpool/data
dataset would have nothing to do with it, but I could be wrong.
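In case it helps anyone following along, the state of that feature can be inspected from a rescue boot once the pool is imported. A sketch, assuming the pool is named rpool:

```shell
# "enabled" means the feature is merely available; "active" means
# some dataset actually uses large dnodes, which older GRUB ZFS
# code cannot read.
zpool get feature@large_dnode rpool

# Datasets with dnodesize=legacy do not use large dnodes even if
# the feature is enabled on the pool.
zfs get -r dnodesize rpool
```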



I will continue my research tomorrow evening after some other
obligations, but if anyone has an idea, please share it, even if it is
just how to get more debugging info out of the zfs module in grub.
Because 'unknown filesystem', with the zfs module loaded, is kind of not
helping enough.
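On the debugging question: GRUB has a `debug` environment variable that also works from the rescue shell, so something along these lines may coax more detail out of its zfs code. This is a sketch; the exact messages vary by GRUB version, and the partition name here is illustrative:

```
grub rescue> set debug=all
grub rescue> insmod zfs
grub rescue> ls (hd0,gpt3)
```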



Best regards,

Stephan





_______________________________________________
pve-user mailing list
[email protected]
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user

