Hello everyone,
I'm facing a persistent issue that I've been trying to solve for quite
some time now:
If the clients' disks have been incorrectly or insufficiently wiped, a
subsequent install might fail after creating the initial disk layout:
at that moment, leftover mdraid/LVM metadata gets autodetected by either
the kernel or the initrd (I'm not sure which is responsible) and
activated, instantiating md/LVM devices on top of the new partitions.
If the setup-storage routine then attempts to create a filesystem on
their underlying block devices, it fails with a "Device or resource
busy" error.
Example layout:
disk_config sda disklabel:gpt-bios
primary - 1024 - -
primary - 10240 - -
primary - 0- - -
disk_config sdb sameas:sda
disk_config raid
raid1 swap sda1,sdb1 swap sw
raid1 / sda2,sdb2 ext4 noatime,errors=remount-ro mdcreateopts="--metadata=0.90"
raid1 - sda3,sdb3 - -
disk_config lvm
vg disk_tmp md2
disk_tmp-var /var 20G ext4 noatime,errors=remount-ro
disk_tmp-home /home 50G ext4 noatime,errors=remount-ro
disk_config tmpfs
tmpfs /tmp RAM:10% defaults
After a server gets installed with this config, if I then wipe its
disks only by running wipefs -af /dev/sda /dev/sdb, the RAID signatures
inside the former partitions are left at the same offsets, and they get
automatically activated once the same partition layout is re-applied,
preventing further manipulation of the underlying devices.
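For completeness: as long as the old partitions still exist, wiping
each partition before the whole disk does remove the mid-disk
signatures as well. A rough sketch of what I mean (the function name
is mine, device names are examples):

```shell
#!/bin/sh
# Wipe signatures inside every partition first, then the partition
# table on the disk itself. Running wipefs only on the bare disk
# removes the table but leaves mdraid/LVM metadata sitting at the
# old partition offsets.
thorough_wipe() {
    for disk in "$@"; do
        for part in "$disk"?*; do
            # Skip the unexpanded glob if no partitions exist.
            [ -b "$part" ] && wipefs -af "$part"
        done
        wipefs -af "$disk"
    done
}

# thorough_wipe /dev/sda /dev/sdb
```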
After reading the dracut.cmdline(7) manual page, I tried appending
rd.md=0 rd.dm=0 rd.lvm=0 to the kernel command line, but to no avail.
Full fai log can be found at: https://pastebin.com/7LNgZ2av
Note: The three errors following the partition.DEFAULT hook are related
to my (feeble) attempt at solving this by activating all detected
mdraids and wiping all /dev/sd*, /dev/vd* and /dev/md* devices before
partitioning. But since the disks don't yet have their initial
partition layout at that point, no md devices are found, and so none
can be wiped.
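One idea I haven't fully fleshed out: since the partition start/size
values are known from the config above, the hook could zero the old
0.90 superblocks directly on the raw disks, without needing any
partitions to exist. A 0.90 superblock sits 64 KiB before the end of
its member device, rounded down to a 64 KiB boundary, so its absolute
position is computable (a sketch; the function name is mine and the dd
example uses placeholder geometry):

```shell
#!/bin/sh
# Absolute on-disk position (in KiB) of the mdraid 0.90 superblock
# of a member partition that starts at start_kib and is size_kib
# long: 64 KiB before the partition's end, rounded down to a
# 64 KiB boundary.
sb_disk_offset_kib() {
    start_kib=$1
    size_kib=$2
    echo $(( start_kib + (size_kib & ~63) - 64 ))
}

# Placeholder example: zero the 64 KiB superblock of a partition
# starting 1 MiB into the disk and spanning 1 GiB.
# dd if=/dev/zero of=/dev/sda bs=1024 count=64 \
#    seek="$(sb_disk_offset_kib 1024 1048576)"
```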
Thank you for any advice you'll be able to provide!
--L. Pavljuk