Hello,

just a data point: I ran into this issue three times in the last few months, and every time the setup was a degraded software RAID-1 array with two active disks, with LVM on top of LUKS on top of the MD RAID-1.

Now I (accidentally) rebooted a server with the very same setup and a degraded RAID-1 array, and didn't run into the issue. The one major difference: this degraded RAID-1 array was set up with *three* active disks, so even after one dropped out, *two* active disks remained. Even though mdadm reports the array as degraded, the server still booted.

Just wanted to share this as a data point for whoever (maybe myself) debugs this issue further ;)

Kind regards
 jonas

On 26.07.19 at 11:44, Magnus Sandberg wrote:
Package: cryptsetup-initramfs
Version: 2:2.1.0-5

First, for the cryptsetup developers; a comment for the mdadm developers follows below.

I'm setting up a new Debian Buster computer with LVM on top of LUKS on top of MD
RAID-1, with UEFI and GPT.

It took some manual disk setup to get even /boot/efi onto RAID (with metadata
1.0), etc. The short version: md0 is /boot/efi, md1 is /boot, and md2 holds
a LUKS container. Inside the LUKS container I use LVM for all my other
partitions, including swap, root (/), /home, etc.
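
To make the stack easier to picture, the layout is roughly the following (the
partition numbers are just an example of how the pieces could be arranged on
two disks, not the exact layout):

  sda1 + sdb1 -> md0 (metadata 1.0)    -> /boot/efi
  sda2 + sdb2 -> md1                   -> /boot
  sda3 + sdb3 -> md2 -> LUKS container -> LVM (root, swap, /home, ...)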

Everything works as expected until I disconnect one of the disks to verify
that the system still works in degraded mode. Without the patch below I had to
wait for the initramfs timeout, and while in busybox I could do "mdadm --run
md0", etc., followed by exit, and then enter the passphrase for the LUKS
container.
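
For reference, the manual recovery from the busybox prompt looks roughly like
this (md0/md1/md2 are the array names from my layout; the exact device paths
may differ on other systems):

  # force-start the arrays that refused to assemble while degraded
  mdadm --run /dev/md0
  mdadm --run /dev/md1
  mdadm --run /dev/md2
  # leave the shell; boot continues and asks for the LUKS passphrase
  exit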
