On 19/01/24 at 09:03, Anssi Saari wrote:
One case against using partitions on mdraid: if your array gets messed
up, you get to recreate those partition tables yourself and that's just
hilarious if you don't have a backup. Happened to a friend of mine,
reason was a UPS brownout.

How can I get a backup of an mdadm RAID partition? And which tool would back up the whole disks of an array? The only tool that comes to mind is "dd", which isn't a viable solution for me. I think it's pointless to back up the raw data stored in a partition or in the whole disk; I back up the files and directories stored in the filesystem, not raw data. If an error occurs in the RAID, mdadm takes care of warning me via email... I hope!
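
For the partition tables themselves, the only thing I can imagine is a plain text dump, something like this (sda is just an example device name, not my real setup):

    sfdisk -d /dev/sda > sda-partitions.txt
    # later, to restore the layout:
    sfdisk /dev/sda < sda-partitions.txt

But that would only cover the partition layout, not the RAID metadata and not the data itself.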

I think he scanned his disks for copies of
the superblock but didn't find any and then somehow with a lot of hassle
eventually figured out what the partition tables were.

So in a catastrophe, partition tables are one more obstacle to cross
before you can start actually recovering your data.

I ran into a catastrophe scenario too: I lost /dev/md0, and the reason was using hibernate (suspend to disk) to a logical volume placed inside the RAID. I think the RAID metadata got damaged. I got out of it using the Debian installer: I thought I had lost everything and was preparing to reinstall, but when the Debian installer asked me to create the new RAID I specified all four partitions, saved, and magically the logical device and all my logical volumes embedded in the old RAID reappeared. Partitioning was not a problem in those circumstances.
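
Before reaching for the installer, I suppose one could first check whether the member superblocks are still readable, something like (sda1 is only an example name for one of my four member partitions):

    mdadm --examine /dev/sda1
    # repeat for the other members, then try to reassemble:
    mdadm --assemble --scan

That's essentially what Anssi's friend did by hand, I guess.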


My only mdraid was on raw partitions but that never had any issues. I
think zfs effectively does the same, no partitions.

Which raw partitions? Did you perhaps mean without partitions? I've never used zfs; it's full-featured, but I prefer to keep things simple: RAID -> LVM -> ext4
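
Roughly, that stack gets built in this order; a minimal sketch, where device names, sizes and the RAID level are only placeholders:

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
    pvcreate /dev/md0                 # make the array an LVM physical volume
    vgcreate vg0 /dev/md0             # volume group on top of the array
    lvcreate -L 100G -n data vg0      # one logical volume
    mkfs.ext4 /dev/vg0/data           # ext4 on the logical volume

Each layer only sees the one below it, so ext4 doesn't care whether md0 sits on whole disks or on partitions.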

Cheers,
--
Franco Martelli
