Hi Christof,

On Mon, 13 May 2019 at 20:48:41 +0200, Christof Baumann wrote:
> In order to get rid of this I changed the script to only attempt
> activation of lvm volume groups after all the disks in /etc/crypttab
> have been unlocked.
Thanks for the patch!

> The check for dm-crypt devices needs to stay in the first pass as this
> is part of the unlocking procedure but the lvm volume group activation
> can be moved to a second step.
> Like this the above error messages are gone and I couldn't think
> of anything that would now go wrong because of that.

It's not a regression per se, but our boot script supports arbitrary
block device stacks, and moving the lvm logic to the end adds a bit of
complexity while only solving one step on the way.  The warning would
likely still appear with a more complex stack, for instance something
involving RAID between dm-crypt and lvm.

I think a better fix would be to remove the lvm logic from our boot
script altogether and interact better with the one from the lvm2
package (which handles dm-crypt-over-lvm setups).  Maybe run the lvm2
script again after cryptroot and loop until a stable state is reached.

Unfortunately lvm is chatty and is responsible for the infamous

    Loading initial ramdisk ...
    Volume group "$foo-vg" not found
    Cannot process volume group $foo-vg

warning at startup time.  It's also the `lvm pvs` call that yields the
"Couldn't find device with uuid ..." warning.  In principle one could
skip the call to vgchange if the VG is missing or some of its PVs are
missing, but `pvs -o vg_missing_pv_count` spits out the warning too
along with the missing count, and I was not able to silence it (short
of throwing away the whole error output, which is not really an
option).

Cheers,
--
Guilhem.
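PS: to make the "loop until a stable state is reached" idea concrete,
here is a minimal POSIX-sh sketch.  It is hypothetical and untested in
an initramfs; the helper name `loop_until_stable` and the example lvm
invocations in the comment are my own, not existing code from
cryptsetup or lvm2.

```shell
# loop_until_stable STEP STATE
#   STEP:  command performing one activation pass (failures tolerated)
#   STATE: command printing the currently observed state
# Re-runs STEP until two consecutive passes observe the same state,
# i.e. until another pass would make no further progress.
loop_until_stable() {
    prev=
    while :; do
        eval "$1" || true       # one activation pass
        cur=$(eval "$2")        # snapshot the resulting state
        [ "$cur" = "$prev" ] && return 0
        prev=$cur
    done
}

# Hypothetical use after unlocking crypttab entries: keep activating
# until the set of active LVs no longer changes between passes, e.g.
#   loop_until_stable \
#       'lvm vgchange -aay --sysinit >/dev/null 2>&1' \
#       'lvm lvs --noheadings -o vg_name,lv_name 2>/dev/null'
```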