On 12/07/2014 04:32 PM, Konstantin wrote:
I know this, and I'm using 0.9 on purpose. I need to boot from these disks, so I can't use the 1.2 format, as the BIOS wouldn't recognize the partitions. Having an additional non-RAID disk just for booting introduces a single point of failure, which is contrary to the idea of RAID levels above 0.
GRUB2 has md metadata 1.1 and 1.2 support via the mdraid1x module. LVM is also supported. I don't know whether a stack of both is supported.
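For what it's worth, a minimal grub.cfg fragment for booting off a 1.x-metadata array could look something like this (the array name `md/0`, the kernel path, and the root device are assumptions for illustration, not taken from the thread):

```
# Load md 1.x-superblock support, then point GRUB at the array.
insmod mdraid1x
set root=(md/0)
linux /vmlinuz root=/dev/md0 ro
initrd /initrd.img
```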
There is, BTW, no such thing as a (commodity) computer without a single point of failure in it somewhere. I've watched government contracts chase this demon for decades. Be it disk, controller, network card, bus chip, CPU, or stick of RAM, you've got a single point of failure somewhere. Actually, you likely have several such points of potential failure.
For instance, are you _sure_ your BIOS is going to check the second drive if it gets a read failure after starting in on your first drive? Chances are it won't, because that 446-bytes-or-so of boot code on that first disk has no way to branch back into the BIOS.
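That size comes from the classic MBR layout: 446 bytes of boot code, four 16-byte partition entries, and a 2-byte 0x55AA signature in a 512-byte sector. A small sketch (parsing a fabricated, empty sector image, not a real disk):

```python
# Classic MBR layout constants: 446 + 4*16 + 2 = 512 bytes.
BOOT_CODE = 446
PART_ENTRY = 16
NUM_PARTS = 4

def parse_mbr(sector: bytes):
    """Split a 512-byte MBR sector into boot code, partition table, signature."""
    assert len(sector) == 512
    code = sector[:BOOT_CODE]
    table = sector[BOOT_CODE:BOOT_CODE + PART_ENTRY * NUM_PARTS]
    sig = sector[-2:]
    return code, table, sig

# A fabricated, all-zero MBR with just the boot signature set:
sector = bytes(510) + b"\x55\xaa"
code, table, sig = parse_mbr(sector)
print(len(code), len(table), sig.hex())  # -> 446 64 55aa
```

The point being: everything the BIOS hands control to lives in those first 446 bytes, with no spare room for fancy failover logic.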
You can waste a lot of your life chasing that ghost and you'll still discover you've missed it and have to whip out your backup boot media.
It may well be worth having a second copy of /boot around, but make sure you stay out of bandersnatch territory when designing your system. "The more you over-think the plumbing, the easier it is to stop up the pipes."