On 15/05/2020 12:30, Rich Freeman wrote:
On Fri, May 15, 2020 at 7:16 AM antlists <antli...@youngman.org.uk> wrote:

On 15/05/2020 11:20, Neil Bothwick wrote:

Or you can create a custom module, they are just shell scripts. I recall
reading a blog post by Rich on how to do this a few years ago.

My custom module calls a shell script, so it shouldn't be that hard from
what you say. I then need to make sure the program it invokes
(integritysetup) is in the initramfs?

The actual problem that this module solves is no-doubt long solved
upstream, but here is the blog post on dracut modules (which is fairly
well-documented in the official docs as well):
https://rich0gentoo.wordpress.com/2012/01/21/a-quick-dracut-module/

I don't think it is ... certainly I'm not aware of anything other than LUKS that uses dm-integrity, and LUKS sets it up itself.

Basically you have a shell script that tells dracut, when it builds the
initramfs, what to include in it.  Then you have the phase
hooks that actually run whatever you need to run at the appropriate
time during boot (presumably before the mdadm stuff runs).

My example doesn't install any external programs, but there is a
simple syntax for that.
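As a rough sketch (module name, hook script name, and hook phase are my guesses, not anything from the thread), a dracut module that pulls integritysetup into the initramfs might look something like this:

```shell
#!/bin/bash
# 90integrity/module-setup.sh  (hypothetical module name)

# check() tells dracut whether this module can/should be included.
check() {
    require_binaries integritysetup || return 1
    return 0
}

# depends() names other dracut modules we rely on.
depends() {
    echo dm
    return 0
}

# install() copies binaries and hooks into the initramfs image.
install() {
    inst_multiple integritysetup
    # Run our script early in boot, before udev triggers mdadm assembly.
    inst_hook pre-trigger 30 "$moddir/open-integrity.sh"
}
```

The `inst_multiple` call is the "simple syntax" for installing external programs; `open-integrity.sh` would be the script that actually opens the integrity devices, and the right hook phase depends on when mdadm assembly happens on your system.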

If your module is reasonably generic you could probably get upstream
to merge it as well.

No. Like LUKS, I intend to merge the code into mdadm and let the raid side handle it. If mdadm detects a dm-integrity/raid setup, it'll set up dm-integrity and then recurse to set up raid.

Good luck with it, and I'm curious as to how you like this setup vs
something more "conventional" like zfs/btrfs.  I'm using single-volume
zfs for integrity for my lizardfs chunkservers and it strikes me that
maybe dm-integrity could accomplish the same goal with perhaps better
performance (and less kernel fuss).  I'm not sure I'd want to replace
more general-purpose zfs with this, though the flexibility of
lvm+mdadm is certainly attractive.

openSUSE is my only experience of btrfs, and it hasn't been nice. When it goes wrong it's nasty. Plus only raid 1 really works - I've heard that raid 5 and 6 have design flaws which mean it will be very hard to get them to work properly. I've never met zfs.

As the Linux raid wiki says (I wrote it :-), do you want the complexity of a "do it all" filesystem, or the abstraction of dedicated layers?

The big problem with md-raid is that it has no way of detecting or dealing with corruption in the layer underneath. Hence my wanting to put dm-integrity underneath, because that layer is dedicated to detecting corruption: if something goes wrong, the raid gets a read error and sorts it out from the other mirror.
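Setting that stack up by hand looks roughly like this (device names are placeholders; `format` wipes the integrity metadata, so this is for fresh devices only):

```shell
# Give each partition a standalone dm-integrity superblock
# (crc32c checksums by default).
integritysetup format /dev/sdX1
integritysetup format /dev/sdY1

# Open them as device-mapper targets.  A read whose checksum fails
# comes back as an I/O error, which md treats like a bad sector and
# repairs from the other half of the mirror.
integritysetup open /dev/sdX1 int-sdX1
integritysetup open /dev/sdY1 int-sdY1

# Build the raid1 on top of the integrity devices.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/mapper/int-sdX1 /dev/mapper/int-sdY1
```

The mdadm change discussed above would effectively automate the `integritysetup open` step at assembly time.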

Then lvm provides the snapshotting and sort-of-backups etc.
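For the lvm layer, the snapshot/rollback workflow is along these lines (volume group and LV names are made up for the example):

```shell
# Put LVM on top of the raid device.
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -n root -L 100G vg0

# Take a snapshot before doing anything risky; the snapshot only needs
# enough space to hold the blocks that change afterwards.
lvcreate -s -n root-pre-upgrade -L 10G /dev/vg0/root

# If things go wrong, merge the snapshot back to roll the LV back.
lvconvert --merge /dev/vg0/root-pre-upgrade
```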

But like all these things, it's the learning that's the big problem. With my main system, I don't want to experiment. My first gentoo system was an Athlon K7 Thunderbird on ext. The next one is my current Athlon X III mirrored across two 3TB drives. Now I'm throwing dm-integrity and lvm into the mix with two 4TB drives. So I'm going to try and learn KVM ... :-)

Cheers,
Wol
